This is an action detector for the Smart Classroom scenario. It is based on the RMNet backbone, which uses depth-wise convolutions to reduce the amount of computation in the 3x3 convolution blocks. The first SSD head operates on the 1/16-scale feature map with four clustered prior boxes and detects persons (a two-class detector). The second SSD-based head predicts the action of each detected person. Possible actions: sitting, standing, raising hand.
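As a quick orientation, the sketch below loads the network with the OpenVINO Python API and prints its input and output tensors. It assumes the Caffe model has already been converted to OpenVINO IR; the file path and the "CPU" device string are placeholder assumptions, not part of this document.

```python
# Minimal sketch: load the converted IR and inspect its I/O tensors.
# Assumes an OpenVINO >= 2022 Python API; the path is a placeholder.
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection-action-recognition.xml")  # hypothetical path
compiled = core.compile_model(model, "CPU")  # target device is an assumption

for inp in compiled.inputs:
    print("input :", inp.any_name, list(inp.shape))
for out in compiled.outputs:
    print("output:", out.any_name, list(out.shape))
```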
| Metric | Value |
|---|---|
| Detector AP (internal test set 2) | 82.79% |
| Accuracy (internal test set 2) | 93.55% |
| Pose coverage | Sitting, standing, raising hand |
| Support of occluded pedestrians | YES |
| Occlusion coverage | <50% |
| Min pedestrian height | 80 pixels (on 1080p) |
| GFlops | 7.140 |
| MParams | 1.951 |
| Source framework | Caffe* |
Average Precision (AP) is defined as the area under the precision/recall curve.
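Written as a formula, with precision expressed as a function p(r) of recall r, this is the standard definition:

```latex
\mathrm{AP} = \int_{0}^{1} p(r)\,\mathrm{d}r
```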
name: "input" , shape: [1x3x400x680] - An input image in the format [BxCxHxW], where:
Expected color order is BGR.
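The sketch below shows one way to prepare a frame for this input. OpenCV already loads images in BGR order, so only resizing and a layout change are needed; the file name is a placeholder assumption.

```python
# Minimal preprocessing sketch for the 1x3x400x680 BGR input.
import cv2
import numpy as np

frame = cv2.imread("classroom.jpg")      # hypothetical file; cv2 loads BGR
resized = cv2.resize(frame, (680, 400))  # dsize is (width, height)
chw = resized.transpose(2, 0, 1)         # HWC -> CHW
blob = np.expand_dims(chw, 0).astype(np.float32)  # -> [1, 3, 400, 680]
```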
The net outputs seven blobs:

1. name: mbox_loc1/out/conv/flat, shape: [b, num_priors*4] - Box coordinates in SSD format
2. name: mbox_main_conf/out/conv/flat/softmax/flat, shape: [b, num_priors*2] - Detection confidences
3. name: mbox/priorbox, shape: [1, 2, num_priors*4] - Prior boxes in SSD format
4. name: out/anchor1, shape: [b, 3, h, w] - Action confidences
5. name: out/anchor2, shape: [b, 3, h, w] - Action confidences
6. name: out/anchor3, shape: [b, 3, h, w] - Action confidences
7. name: out/anchor4, shape: [b, 3, h, w] - Action confidences

Where:

- b - batch size
- num_priors - number of prior boxes in SSD format
- h, w - height and width of the output feature map
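A post-processing sketch under stated assumptions follows. The prior-box layout (row 0 holds corner coordinates, row 1 holds variances), the center-size delta encoding, and the mapping from prior index to anchor branch and feature-map cell (four anchors per cell, cell-major ordering) are standard Caffe-SSD conventions assumed here, not confirmed by this document; `postprocess` and its `outputs` argument are illustrative names, and the action class order is assumed to match the listing above.

```python
# Sketch of SSD box decoding plus per-detection action lookup.
# All layout details marked "assumed" are standard-SSD conventions,
# not guaranteed by this model's documentation.
import numpy as np

ACTIONS = ("sitting", "standing", "raising hand")  # class order assumed

def decode_box(deltas, prior, var):
    """Decode one box from SSD center-size deltas; all args are length-4."""
    pw, ph = prior[2] - prior[0], prior[3] - prior[1]
    pcx, pcy = prior[0] + 0.5 * pw, prior[1] + 0.5 * ph
    cx = pcx + deltas[0] * var[0] * pw
    cy = pcy + deltas[1] * var[1] * ph
    w, h = pw * np.exp(deltas[2] * var[2]), ph * np.exp(deltas[3] * var[3])
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

def postprocess(outputs, conf_threshold=0.5):
    """`outputs` maps output names (as listed above) to numpy arrays (batch 1)."""
    loc = outputs["mbox_loc1/out/conv/flat"].reshape(-1, 4)
    conf = outputs["mbox_main_conf/out/conv/flat/softmax/flat"].reshape(-1, 2)
    priors = outputs["mbox/priorbox"].reshape(2, -1, 4)  # row 0: boxes, row 1: variances (assumed)
    anchors = [outputs["out/anchor%d" % i][0] for i in range(1, 5)]  # each [3, h, w]
    _, fh, fw = anchors[0].shape

    detections = []
    for p in range(loc.shape[0]):
        if conf[p, 1] < conf_threshold:   # class 1 = person
            continue
        box = decode_box(loc[p], priors[0, p], priors[1, p])
        cell, anchor = p // 4, p % 4      # assumed prior ordering: 4 anchors per cell
        y, x = divmod(cell, fw)
        action = ACTIONS[int(np.argmax(anchors[anchor][:, y, x]))]
        detections.append((box, action, float(conf[p, 1])))
    return detections
```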
[*] Other names and brands may be claimed as the property of others.