# Commit a0e535e

Updated README.md and main.py

Signed-off-by: Mpho Mphego <[email protected]>
Parent: bb5f13c

File tree: 2 files changed (+61, −25 lines)

## README.md

54 additions, 21 deletions
````diff
@@ -22,31 +22,39 @@ The gaze estimation model used requires three inputs:
 To get these inputs, use the three other OpenVINO models below:
 
 - [Face Detection](https://docs.openvinotoolkit.org/latest/_models_intel_face_detection_adas_binary_0001_description_face_detection_adas_binary_0001.html)
-- [Head Pose Estimation](https://docs.openvinotoolkit.org/latest/_models_intel_head_pose_estimation_adas_0001_description_head_pose_estimation_adas_0001.html)
+
+Implementation: https://github.com/mmphego/computer-pointer-controller/blob/bb5f13c6d2567c0856407db6c35b3fa6345f97c2/src/model.py#L156
+
+![face_Detection](https://user-images.githubusercontent.com/7910856/87830444-4a3bf080-c881-11ea-993a-7f76c979449f.gif)
+
 - [Facial Landmarks Detection](https://docs.openvinotoolkit.org/latest/_models_intel_landmarks_regression_retail_0009_description_landmarks_regression_retail_0009.html).
+Implementation: https://github.com/mmphego/computer-pointer-controller/blob/bb5f13c6d2567c0856407db6c35b3fa6345f97c2/src/model.py#L239
 
-### Project Pipeline
-Coordinate the flow of data from the input, and then amongst the different models and finally to the mouse controller. The flow of data looks like this:
+![facial_landmarks](https://user-images.githubusercontent.com/7910856/87830446-4c05b400-c881-11ea-90a5-d1b80d984f01.gif)
 
-![image](https://user-images.githubusercontent.com/7910856/87787550-1db1b580-c83c-11ea-9f21-5048c803bf5c.png)
+- [Head Pose Estimation](https://docs.openvinotoolkit.org/latest/_models_intel_head_pose_estimation_adas_0001_description_head_pose_estimation_adas_0001.html)
 
-## Demo
+Implementation: https://github.com/mmphego/computer-pointer-controller/blob/bb5f13c6d2567c0856407db6c35b3fa6345f97c2/src/model.py#L305
 
-![vide-demo](https://user-images.githubusercontent.com/7910856/87830451-50ca6800-c881-11ea-87cf-3943795a76e8.gif)
+![head_pose](https://user-images.githubusercontent.com/7910856/87830450-4f00a480-c881-11ea-9d0b-4b43316456a2.gif)
 
+- [Gaze Estimation](https://docs.openvinotoolkit.org/latest/_models_intel_gaze_estimation_adas_0002_description_gaze_estimation_adas_0002.html)
 
-### Gaze Estimates
+Using the above outputs as inputs.
+Implementation: https://github.com/mmphego/computer-pointer-controller/blob/bb5f13c6d2567c0856407db6c35b3fa6345f97c2/src/model.py#L422
 
 ![all](https://user-images.githubusercontent.com/7910856/87830436-47d99680-c881-11ea-8c22-6a0a7e17c78d.gif)
 
-### Face Detection
-![face_Detection](https://user-images.githubusercontent.com/7910856/87830444-4a3bf080-c881-11ea-993a-7f76c979449f.gif)
+### Project Pipeline
+Coordinate the flow of data from the input, and then amongst the different models and finally to the mouse controller. The flow of data looks like this:
+
+![image](https://user-images.githubusercontent.com/7910856/87787550-1db1b580-c83c-11ea-9f21-5048c803bf5c.png)
+
+## Demo
+
+![vide-demo](https://user-images.githubusercontent.com/7910856/87830451-50ca6800-c881-11ea-87cf-3943795a76e8.gif)
 
-### Facial Landmark Estimates
-![facial_landmarks](https://user-images.githubusercontent.com/7910856/87830446-4c05b400-c881-11ea-90a5-d1b80d984f01.gif)
 
-### Head Pose Estimates
-![head_pose](https://user-images.githubusercontent.com/7910856/87830450-4f00a480-c881-11ea-9d0b-4b43316456a2.gif)
 
 ## Project Set Up and Installation
 
````
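The hunk above rearranges the README into pipeline order: face detection feeds both facial landmarks and head pose estimation, whose outputs feed gaze estimation, which drives the mouse controller. That flow can be sketched with hypothetical stub classes; none of the class or method names below are the repository's actual API:

```python
# Illustrative sketch of the pipeline the README describes.
# All classes and return values are stand-ins, not the project's real code.

class FaceDetector:
    def predict(self, frame):
        # Would run face-detection-adas-binary-0001; here the whole
        # frame stands in for the detected face crop.
        return frame

class LandmarksDetector:
    def predict(self, face):
        # Would return left/right eye crops extracted from the face.
        return {"left_eye": face, "right_eye": face}

class HeadPoseEstimator:
    def predict(self, face):
        # Would return (yaw, pitch, roll) angles in degrees.
        return (0.0, 0.0, 0.0)

class GazeEstimator:
    def predict(self, eyes, head_pose):
        # Would combine eye crops and head pose into a gaze vector.
        return (0.1, -0.2)

def run_pipeline(frame):
    face = FaceDetector().predict(frame)
    eyes = LandmarksDetector().predict(face)
    pose = HeadPoseEstimator().predict(face)
    gaze = GazeEstimator().predict(eyes, pose)
    return gaze  # (x, y) fed to the mouse controller

print(run_pipeline("frame"))
```

The point is the data dependencies, not the models themselves: gaze estimation cannot run until both the landmarks and head-pose branches have consumed the face crop.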
````diff
@@ -119,8 +127,8 @@ $ python main.py -h
 usage: main.py [-h] -fm FACE_MODEL -hp HEAD_POSE_MODEL -fl
                FACIAL_LANDMARKS_MODEL -gm GAZE_MODEL [-d DEVICE]
                [-pt PROB_THRESHOLD] -i INPUT [--out] [-mp [{high,low,medium}]]
-               [-ms [{fast,slow,medium}]] [--enable-mouse] [--debug]
-               [--show-bbox]
+               [-ms [{fast,slow,medium}]] [--enable-mouse] [--show-bbox]
+               [--debug] [--stats]
 
 optional arguments:
   -h, --help            show this help message and exit
````
````diff
@@ -136,12 +144,12 @@ optional arguments:
   -d DEVICE, --device DEVICE
                         Specify the target device to infer on: CPU, GPU, FPGA
                         or MYRIAD is acceptable. Sample will look for a
-                        suitable plugin for device specified (CPU by default)
+                        suitable plugin for device specified (Default: CPU)
   -pt PROB_THRESHOLD, --prob_threshold PROB_THRESHOLD
-                        Probability threshold for detections filtering(0.8 by
-                        default)
+                        Probability threshold for detections
+                        filtering(Default: 0.8)
   -i INPUT, --input INPUT
-                        Path to image or video file or 'cam' for Webcam.
+                        Path to image, video file or 'cam' for Webcam.
   --out                 Write video to file.
   -mp [{high,low,medium}], --mouse-precision [{high,low,medium}]
                         The precision for mouse movement (how much the mouse
````
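The help text in these hunks is standard argparse output. A minimal sketch reproducing a subset of the flags; the defaults are inferred from the help text, and treating the switches as `store_true` flags is an assumption about how the repository defines them:

```python
import argparse

# Minimal sketch of the CLI shown in the diff above (subset of flags only).
parser = argparse.ArgumentParser(prog="main.py")
parser.add_argument("-d", "--device", default="CPU",
                    help="Target device to infer on (Default: CPU)")
parser.add_argument("-pt", "--prob_threshold", type=float, default=0.8,
                    help="Probability threshold for detections filtering (Default: 0.8)")
parser.add_argument("-i", "--input", required=True,
                    help="Path to image, video file or 'cam' for Webcam.")
parser.add_argument("--enable-mouse", action="store_true")
parser.add_argument("--show-bbox", action="store_true")
parser.add_argument("--debug", action="store_true")
parser.add_argument("--stats", action="store_true")

# Parse a sample command line rather than sys.argv, so this runs standalone.
args = parser.parse_args(["-i", "cam", "--stats"])
print(args.input, args.stats, args.prob_threshold)
```

Note that argparse also accepts unambiguous prefixes, which is why the old example's `--stat` still resolved to `--stats`.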
````diff
@@ -150,14 +158,16 @@ optional arguments:
                         The speed (how fast it moves) by changing [Default:
                         fast]
   --enable-mouse        Enable Mouse Movement
-  --debug               Show output on screen [debugging].
   --show-bbox           Show bounding box and stats on screen [debugging].
+  --debug               Show output on screen [debugging].
+  --stats               Verbose OpenVINO layer performance stats [debugging].
 ```
 
 
 ### Example
 ```shell
-xvfb-run docker run --rm -ti \
+xhost +;
+docker run --rm -ti \
     --volume "$PWD":/app \
     --env DISPLAY=$DISPLAY \
     --volume=$HOME/.Xauthority:/root/.Xauthority \
````
````diff
@@ -198,6 +208,29 @@ mmphego/intel-openvino bash -c "\
 
 ```
 
+## OpenVINO API for Layer Analysis
+Query per-layer performance measures to identify the most time-consuming layers: [Read the docs.](https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a2194bc8c557868822bbfd260e8ef1a08)
+
+```shell
+xhost +;
+docker run --rm -ti \
+    --volume "$PWD":/app \
+    --env DISPLAY=$DISPLAY \
+    --volume=$HOME/.Xauthority:/root/.Xauthority \
+    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
+    --device /dev/video0 \
+    mmphego/intel-openvino \
+    bash -c "\
+    source /opt/intel/openvino/bin/setupvars.sh && \
+    python main.py \
+    --face-model models/face-detection-adas-binary-0001 \
+    --head-pose-model models/head-pose-estimation-adas-0001 \
+    --facial-landmarks-model models/landmarks-regression-retail-0009 \
+    --gaze-model models/gaze-estimation-adas-0002 \
+    --input resources/demo.mp4 \
+    --stats"
+```
+
 ## Edge Cases
 - Multiple People Scenario: if multiple people appear in the video frame, it will always use and report results for one face only.
 - No Head Detection: it will skip the frame and inform the user.
````
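The layer-analysis section added in this commit prints OpenVINO's per-layer performance counters. A small sketch of how such counters can be ranked to find the slowest layers; the dictionary shape loosely follows what `InferRequest.get_perf_counts()` returns (layer name mapped to status and times in microseconds), but the layer names and timings below are invented:

```python
# Rank layers by wall-clock time from perf-counter data.
# The dict shape mimics OpenVINO's get_perf_counts() output;
# all names and numbers here are made up for illustration.
perf_counts = {
    "conv1": {"status": "EXECUTED", "layer_type": "Convolution", "real_time": 820},
    "pool1": {"status": "EXECUTED", "layer_type": "Pooling", "real_time": 40},
    "fc1": {"status": "EXECUTED", "layer_type": "FullyConnected", "real_time": 310},
    "relu1": {"status": "NOT_RUN", "layer_type": "ReLU", "real_time": 0},
}

# Keep only executed layers and sort slowest-first.
slowest = sorted(
    (item for item in perf_counts.items() if item[1]["status"] == "EXECUTED"),
    key=lambda item: item[1]["real_time"],
    reverse=True,
)

for name, stats in slowest:
    print(f"{name:8s} {stats['layer_type']:16s} {stats['real_time']:>6d} us")
```

Sorting like this makes the "most time consuming layer" question the docs link answers a one-liner over the counter dict.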

## main.py

7 additions, 4 deletions
````diff
@@ -215,10 +215,13 @@ def main(args):
         video_feed.show(video_feed.resize(frame))
 
         if args.stats:
-            pprint(face_detection.perf_stats)
-            pprint(facial_landmarks.perf_stats)
-            pprint(head_pose_estimation.perf_stats)
-            pprint(gaze_estimation.perf_stats)
+            stats = {
+                "face_detection": face_detection.perf_stats,
+                "facial_landmarks": facial_landmarks.perf_stats,
+                "head_pose_estimation": head_pose_estimation.perf_stats,
+                "gaze_estimation": gaze_estimation.perf_stats,
+            }
+            pprint(stats)
 
     video_feed.close()
 
````
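The main.py change collects the four models' `perf_stats` into one dict before a single `pprint` call, so each model's numbers print under a labeled key instead of as four unlabeled dumps. The same pattern in isolation, with invented placeholder stats standing in for the models' `perf_stats` attributes:

```python
from pprint import pprint

# Placeholder per-model stats; in the project these come from each
# model wrapper's perf_stats attribute.
face_detection_stats = {"inference_ms": 12.4}
gaze_estimation_stats = {"inference_ms": 3.1}

# Aggregating under labeled keys (as the commit does) keeps the
# printed output attributable to the model that produced it.
stats = {
    "face_detection": face_detection_stats,
    "gaze_estimation": gaze_estimation_stats,
}
pprint(stats)
```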