This is an action detector for the Smart Classroom scenario. It is based on the RMNet backbone, which uses depth-wise convolutions to reduce the computational cost of the 3x3 convolution blocks. The first SSD head, attached to the 1/8 and 1/16 scale feature maps, has four clustered prior boxes and outputs detected persons (a two-class detector). The second SSD-based head predicts actions of the detected persons. Possible actions: sitting, writing, raising hand, standing, turned around, lie on the desk.
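As a rough illustration of why the depth-wise factorization saves compute, the sketch below compares the multiply-accumulate count of a dense 3x3 convolution against a depth-wise 3x3 followed by a 1x1 point-wise projection. The feature-map and channel sizes are hypothetical, and the exact RMNet block may differ in details such as activations and residual connections.

```python
def conv3x3_macs(h, w, c_in, c_out):
    # Multiply-accumulates of a dense 3x3 convolution over an HxW feature map.
    return h * w * c_in * c_out * 9

def depthwise_separable_macs(h, w, c_in, c_out):
    # 3x3 depth-wise pass (one filter per channel) plus a 1x1 point-wise projection.
    return h * w * c_in * 9 + h * w * c_in * c_out

# Hypothetical sizes: a 50x86 feature map with 64 input and 64 output channels.
dense = conv3x3_macs(50, 86, 64, 64)
separable = depthwise_separable_macs(50, 86, 64, 64)
print(f"{dense / separable:.1f}x fewer multiply-accumulates")  # ~7.9x
```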
| Metric | Value |
|---|---|
| Detector AP (internal test set 2) | 90.70% |
| Accuracy (internal test set 2) | 80.74% |
| Pose coverage | sitting, writing, raising_hand, standing, turned around, lie on the desk |
| Support of occluded pedestrians | YES |
| Occlusion coverage | <50% |
| Min pedestrian height | 80 pixels (on 1080p) |
| GFlops | 8.225 |
| MParams | 2.001 |
| Source framework | TensorFlow* |
Average Precision (AP) is defined as the area under the precision/recall curve.
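A minimal sketch of that computation, assuming the precision/recall operating points are already available as parallel lists; real evaluation pipelines typically also make the precision envelope monotonically non-increasing before integrating.

```python
import numpy as np

def average_precision(recall, precision):
    # Area under the precision/recall curve via the trapezoidal rule.
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    order = np.argsort(recall)  # integrate along increasing recall
    return float(np.trapz(precision[order], recall[order]))

# Toy curve with three operating points.
print(average_precision([0.2, 0.5, 0.9], [1.0, 0.8, 0.6]))  # 0.55
```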
name: "input", shape: [1x400x680x3] - An input image in the format [BxHxWxC], where:

- B - batch size
- H - image height
- W - image width
- C - number of channels
Expected color order is BGR.
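A minimal preprocessing sketch under these assumptions (the file name is hypothetical; any BGR frame source works):

```python
import cv2
import numpy as np

frame = cv2.imread("classroom.jpg")      # OpenCV loads images in BGR order
resized = cv2.resize(frame, (680, 400))  # cv2.resize takes (width, height) -> 400x680x3
blob = resized.astype(np.float32)[np.newaxis, ...]  # [1, 400, 680, 3], i.e. BxHxWxC
```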
The net outputs seven branches:
1. ActionNet/out_detection_loc, shape: [b, num_priors*4] - Box coordinates in SSD format
2. ActionNet/out_detection_conf, shape: [b, num_priors*2] - Detection confidences
3. ActionNet/action_heads/out_head_1_anchor_1, shape: [b, 6, 50, 86] - Action confidences
4. ActionNet/action_heads/out_head_2_anchor_1, shape: [b, 6, 25, 43] - Action confidences
5. ActionNet/action_heads/out_head_2_anchor_2, shape: [b, 6, 25, 43] - Action confidences
6. ActionNet/action_heads/out_head_2_anchor_3, shape: [b, 6, 25, 43] - Action confidences
7. ActionNet/action_heads/out_head_2_anchor_4, shape: [b, 6, 25, 43] - Action confidences

Where:

- b - batch size
- num_priors - total number of prior boxes in SSD format
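Turning these branches into final detections requires the SSD prior-box layout, which this description does not specify, so the sketch below only illustrates the shape handling: reshaping the detection confidences to one row per prior (assuming raw logits in [background, person] order, hence the softmax) and taking a per-cell action argmax for one head. The dummy `outputs` dict, the 0.5 threshold, and the action channel order are all assumptions, not confirmed by the model description.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Dummy tensors standing in for the real branches, just to keep the sketch runnable.
outputs = {
    "ActionNet/out_detection_conf": np.random.randn(1, 8 * 2),
    "ActionNet/action_heads/out_head_1_anchor_1": np.random.randn(1, 6, 50, 86),
}

conf = outputs["ActionNet/out_detection_conf"].reshape(-1, 2)  # one row per prior
person_scores = softmax(conf)[:, 1]  # assuming column 1 is the person class
keep = person_scores > 0.5           # hypothetical confidence threshold

# Per-cell action label for the first head; the channel order is assumed
# to follow the action list above and may not match the trained model.
ACTIONS = ["sitting", "writing", "raising_hand",
           "standing", "turned_around", "lie_on_the_desk"]
head = outputs["ActionNet/action_heads/out_head_1_anchor_1"][0]  # [6, 50, 86]
action_ids = head.argmax(axis=0)     # [50, 86] action index per spatial cell
print(ACTIONS[action_ids[0, 0]])     # action label at the top-left cell
```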
[*] Other names and brands may be claimed as the property of others.