To run, use the command `python proj.py`

Options:
- `-i` or `--input` [Required]: followed by the path of the input file
- `-o` or `--output`: followed by the path of the desired output file, without extension. If it is not defined, the result will be shown in a window.
- `-v` or `--verbose`: if defined, the project will run in verbose mode
- `-s` or `--skip`: followed by a number, defines the number of frames to skip minus 1. For example, if the value is 3 it will analyze 1 frame every 3.
- `-d` or `--debug`: enables debug mode
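The options above could be parsed with `argparse` roughly as follows. This is a hypothetical sketch, not the actual parsing code in `proj.py` (the description text and the example arguments are assumptions):

```python
import argparse

# Hypothetical argument parser mirroring the options documented above;
# the real proj.py may handle options differently.
parser = argparse.ArgumentParser(description="Painting and people analysis")
parser.add_argument("-i", "--input", required=True,
                    help="path of the input file")
parser.add_argument("-o", "--output", default=None,
                    help="output file path without extension; "
                         "if omitted, the result is shown in a window")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="run in verbose mode")
parser.add_argument("-s", "--skip", type=int, default=1,
                    help="analyze 1 frame every SKIP frames")
parser.add_argument("-d", "--debug", action="store_true",
                    help="enable debug mode")

# Example invocation (the file name is illustrative):
args = parser.parse_args(["-i", "video.mp4", "-s", "3", "-v"])
```

An equivalent command line would be `python proj.py -i video.mp4 -s 3 -v`.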
The code is found under the `code` folder. The required Python modules are listed in `code/requirements.txt` and can therefore be installed with `pip install -r requirements.txt`.
The only file that we were not able to re-create or retrieve at runtime is the YOLO network weights file, which must be downloaded here (our drive) or here (developer site) and placed into `code/detection/people/yolo-coco/`.
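Once downloaded, the weights could be loaded with OpenCV's `dnn` module as sketched below. This is an assumption about how the project consumes the weights, and the `.cfg`/`.weights` file names are hypothetical (the actual loading code lives in `code/detection/people`):

```python
import os

# Assumed directory layout, matching the path stated above.
YOLO_DIR = os.path.join("code", "detection", "people", "yolo-coco")
cfg_path = os.path.join(YOLO_DIR, "yolov3.cfg")          # hypothetical file name
weights_path = os.path.join(YOLO_DIR, "yolov3.weights")  # the downloaded weights

if os.path.exists(cfg_path) and os.path.exists(weights_path):
    import cv2  # opencv-python, deferred so the check runs without it
    net = cv2.dnn.readNetFromDarknet(cfg_path, weights_path)
else:
    print("YOLO weights not found -- download them into", YOLO_DIR)
```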
The output of the project consists of:
- An output video in which we annotate:
  - For each painting bounding box, the ROI containing the rectified version of the original painting and the indication of the first retrieved DB painting
  - For each person, the ROI with the indication of the corresponding room
- A terminal output in which we print:
  - The description of what the program is processing (ONLY IF VERBOSE MODE IS ACTIVATED)
  - For each frame, and for each painting detected, the corresponding bounding box and the entire ranked list retrieved (ALWAYS)
  - For each frame, and for each person detected, the corresponding bounding box (ALWAYS)
  - For each frame, the detected room (ALWAYS)
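The interaction between the `--skip` and `--verbose` options in the per-frame loop can be sketched as follows. This is an illustrative outline, not the project's actual code; the function and variable names are placeholders:

```python
def process(frames, skip=1, verbose=False):
    """Placeholder per-frame loop: with skip=3, analyze 1 frame every 3."""
    analyzed = []
    for idx, frame in enumerate(frames):
        if idx % skip != 0:
            continue  # skipped frames are neither analyzed nor printed
        if verbose:
            # Verbose mode describes what is being processed.
            print(f"processing frame {idx}")
        # ... painting/person detection and room identification would go here ...
        analyzed.append(idx)
    return analyzed

# With 9 frames and --skip 3, frames 0, 3 and 6 are analyzed.
result = process(list(range(9)), skip=3)
```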
Details of the pipeline can be found in the paper document.
These links will open YouTube videos.