Video Object Detection: findface-video-manager and findface-video-worker
Note
The findface-video-worker service is delivered in CPU-accelerated (findface-video-worker-cpu) and GPU-accelerated (findface-video-worker-gpu) packages.
In this section:
- Functions of findface-video-manager
- Functions of findface-video-worker
- Configure Video Object Detection
- Jobs
- Time Settings
Functions of findface-video-manager
The findface-video-manager service is the part of the video object detection module that manages the video object detection functionality.
The findface-video-manager service interfaces with findface-video-worker as follows:
- It supplies findface-video-worker with settings and the list of to-be-processed video streams. To do so, it issues a so-called job, a video processing task that contains configuration settings and stream data.
- In a distributed system, it distributes video streams (jobs) across vacant findface-video-worker instances.
Note
The configuration settings passed via jobs take priority over those in the findface-video-manager.yaml configuration file.
The findface-video-manager service requires etcd, third-party software that implements a distributed key-value store. In the FindFace core, etcd is used as a coordination service, providing the video object detector with fault tolerance.
findface-video-manager functionality:
- allows for configuring the video object detection parameters;
- allows for managing the list of to-be-processed video streams.
Functions of findface-video-worker
The findface-video-worker service (on CPU/GPU) is the part of the video object detection module that recognizes objects in video. It works with both live streams and files and supports most video formats and codecs that FFmpeg can decode.
The findface-video-worker service interfaces with the findface-video-manager and findface-facerouter services as follows:
- By request, findface-video-worker gets a job with settings and the list of to-be-processed video streams from findface-video-manager.
- findface-video-worker posts extracted normalized object images, along with the full frames and metadata (such as bbox, camera ID, and detection time), to the findface-facerouter service for further processing.
Note
In FindFace Multi, the findface-facerouter functions are performed by findface-multi-legacy.
findface-video-worker functionality:
- detects objects in the video;
- normalizes object images;
- tracks objects in real time and posts the best object snapshot.
When processing a video, findface-video-worker applies the following algorithms in sequence:

- Motion detection. Used to reduce resource consumption. The object tracker is triggered only when the motion detector recognizes motion of a certain intensity.
- Object tracking. The object tracker traces, detects, and captures objects in the video. It can work with several objects simultaneously. It also searches for the best object snapshot using the embedded neural network. Once the best object snapshot is found, it is posted to findface-facerouter.
The best object snapshot can be found in one of the following modes:
- Real-time
- Offline
Real-Time Mode
In the real-time mode, findface-video-worker posts an object on the fly after it appears in the camera field of view. The following posting options are available:
- If realtime_post_every_interval: true, the object tracker searches for the best object snapshot within each time period equal to realtime_post_interval and posts it to findface-facerouter.
- If realtime_post_every_interval: false, the object tracker searches for the best object snapshot dynamically:
  - First, the object tracker estimates whether the quality of an object snapshot exceeds a pre-defined internal threshold. If so, the snapshot is posted to findface-facerouter.
  - The threshold value increases after each post. Each time the object tracker gets a higher-quality snapshot of the same object, that snapshot is posted.
  - When the object disappears from the camera field of view, the threshold value resets to the default.
- If realtime_post_first_immediately: true, the object tracker does not wait for the first realtime_post_interval to complete and posts the first object from a track immediately after it passes the quality, size, and ROI filters. How subsequent snapshots are posted depends on the realtime_post_every_interval value. If realtime_post_first_immediately: false, the object tracker posts the first object only after the first realtime_post_interval completes.
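To illustrate how these options combine, here is a hedged stream_settings sketch for real-time posting. The detectors → face nesting follows the per-detector parameter table later in this section, and the values are sample choices, not recommendations:

```yaml
stream_settings:
  detectors:
    face:
      overall_only: false                    # real-time mode, not offline
      realtime_post_first_immediately: true  # post the first suitable snapshot at once
      realtime_post_every_interval: true     # then post the best snapshot per interval
      realtime_post_interval: 2              # interval length, in seconds
```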
Offline Mode
The offline mode is less storage-intensive than the real-time one: findface-video-worker posts only one snapshot per track, but of the highest quality. In this mode, the object tracker buffers the video stream with an object until the object disappears from the camera field of view. The object tracker then picks the best object snapshot from the buffered video and posts it to findface-facerouter.
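A matching hedged sketch for the offline mode (same assumed nesting as in the real-time example above):

```yaml
stream_settings:
  detectors:
    face:
      overall_only: true   # buffer the track, post one best snapshot per track
```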
Configure Video Object Detection
The video object detector is configured through the following configuration files:
1. The findface-video-manager configuration file findface-video-manager.yaml. When configuring findface-video-manager, refer to the following parameters:

- etcd → endpoints: IP address and port of the etcd service. Default value: 127.0.0.1:2379.
- ntls → enabled: If true, findface-video-manager sends a job to findface-video-worker only if the total number of processed cameras does not exceed the number of cameras allowed by the license. Default value: false.
- ntls → url: IP address and port of the findface-ntls host. Default value: http://127.0.0.1:3185/.
- router_url: IP address and port of the findface-facerouter host that receives detected faces from findface-video-worker. In FindFace Multi, the findface-facerouter functions are performed by findface-multi-legacy. Default value: http://127.0.0.1:18820/v0/frame.
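For orientation, a minimal sketch of these top-level settings in findface-video-manager.yaml could look as follows; the values are the defaults from the list above, and the real file contains more options:

```yaml
# findface-video-manager.yaml (fragment; defaults from the list above)
etcd:
  endpoints: 127.0.0.1:2379
ntls:
  enabled: false
  url: http://127.0.0.1:3185/
router_url: http://127.0.0.1:18820/v0/frame
```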
The following parameters are available for stream_settings configuration:

- play_speed: If less than zero, the speed is not limited. Otherwise, the stream is read at the given play_speed. Not applicable to live streams.
- disable_drops: Enables posting all appropriate objects without drops. By default, if findface-video-worker does not have enough resources to process all frames with objects, it drops some of them. If this option is active, findface-video-worker puts such frames on a waiting list to process them later. Default value: false.
- imotion_threshold: Minimum motion intensity to be detected by the motion detector. The threshold value is to be fitted empirically. Empirical units: zero and positive rational numbers. Milestones: 0 = detector disabled, 0.002 = default value, 0.05 = minimum intensity is too high to detect motion.
- router_timeout_ms: Timeout, in milliseconds, for a findface-facerouter (or findface-multi-legacy in the standard FindFace Multi configuration) response to a findface-video-worker API request. If the timeout expires, the system logs an error. Default value: 15000.
- router_verify_ssl: Enables HTTPS certificate verification when findface-video-worker and findface-facerouter (or findface-multi-legacy in the standard FindFace Multi configuration) interact over HTTPS. Default value: true. If false, a self-signed certificate can be accepted.
- router_headers: Additional header fields in a request when posting an object: ["key = value"]. Default value: headers not specified.
- router_body: Additional body fields in a request body when posting an object: ["key = value"]. Default value: body fields not specified.
- ffmpeg_params: List of FFmpeg options for a video stream, with their values, as a key=value array: ["rtsp_transport=tcp", ..., "ss=00:20:00"]. See the FFmpeg web site for the full list of options. Default value: options not specified.
- ffmpeg_format: Pass the FFmpeg format (mxg, flv, etc.) if it cannot be detected automatically.
- use_stream_timestamp: If true, retrieve and post timestamps from the video stream. If false, post the actual date and time.
- start_stream_timestamp: Add the specified number of seconds to timestamps from the stream.
- rot: Enables detecting and tracking objects only inside a clipping rectangle WxH+X+Y. You can use this option to reduce the findface-video-worker load. Default value: rectangle not specified.
- stream_data_filter: POSIX extended regex; if the content of the data stream matches the filter, it is sent to router_url. Default value: not specified.
- video_transform: Changes the video frame orientation right after decoding. Values (case-insensitive, JPEG Exif Orientation Tag in brackets): None (1), FlipHorizontal (2), Rotate180 (3), FlipVertical (4), Transpose (5), Rotate90 (6), Transverse (7), Rotate270 (8). Default value: not specified.
- enable_recorder: Enables video recording for Video Recorder (must be installed).
- enable_liveness: Enables liveness detection (must be installed). Default value: false.
- record_audio: Enables audio recording. Default value: false.
- use_rtsp_time: If use_stream_timestamp: true, add the start stream timestamp of the RTSP source. Default value: true.
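As an illustration, a stream_settings fragment combining several of these options might look like this; it is a sketch rather than a recommended configuration, and the exact layout should be checked against your default findface-video-manager.yaml:

```yaml
stream_settings:
  play_speed: -1              # negative value: reading speed is not limited
  disable_drops: false        # allow frame drops under load
  imotion_threshold: 0.002    # default motion detector sensitivity
  router_timeout_ms: 15000
  router_verify_ssl: true
  ffmpeg_params:
    - rtsp_transport=tcp      # read the RTSP stream over TCP
  use_stream_timestamp: false # post wall-clock time instead of stream time
```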
The following parameters are available for configuration for each detector type (face, body, car):

- filter_min_quality: Minimum threshold value for the object image quality. Default value: subject to the object type. Do not change the default value without consulting our technical experts (support@ntechlab.com).
- filter_min_size: Minimum size of an object in pixels, calculated as the square root of the relevant bbox area. Undersized objects are not posted. Default value: 1.
- filter_max_size: Maximum size of an object in pixels, calculated as the square root of the relevant bbox area. Oversized objects are not posted. Default value: 8192.
- roi: Enables posting only those objects detected inside a region of interest WxH+X+Y. Default value: region not specified.
- fullframe_crop_rot: Crop posted full frames to the ROT rectangle. Default value: false.
- fullframe_use_png: Send full frames in PNG instead of the default JPEG. Do not enable this parameter without supervision from our team, as it can affect the functioning of the entire system. Default value: false (send in JPEG).
- jpeg_quality: JPEG compression quality of the original frame, in percent. Default value: 95%.
- overall_only: Enables the offline mode for the best object search. Default value: true (CPU), false (GPU).
- realtime_post_first_immediately: Enables posting an object image right after it appears in the camera field of view (real-time mode). Default value: false.
- realtime_post_interval: Only for the real-time mode. Defines the time period in seconds within which the object tracker picks up the best snapshot and posts it to findface-facerouter. Default value: 1.
- realtime_post_every_interval: Only for the real-time mode. Post the best snapshots obtained within each realtime_post_interval time period. If false, search for the best snapshot dynamically and send snapshots in order of increasing quality. Default value: false.
- track_interpolate_bboxes: Interpolate missed bboxes of objects in a track. For example, if frames #1 and #4 have bboxes and #2 and #3 do not, the system reconstructs the absent bboxes #2 and #3 from the #1 and #4 data. Enabling this option increases the detection quality at the cost of performance. Default value: true.
- track_miss_interval: The system closes a track if no new object has appeared in the track within the specified time (in seconds). Default value: 1.
- track_overlap_threshold: Tracker IoU overlap threshold. Default value: 0.25.
- track_max_duration_frames: The maximum approximate number of frames in a track, after which the track is forcefully completed. Enable it to forcefully complete "eternal tracks," for example, tracks with objects from advertisement media. Default value: 0 (option disabled).
- track_send_history: Send the track history. Default value: false.
- post_best_track_frame: Send full frames of detected objects. Default value: true.
- post_best_track_normalize: Send normalized images of detected objects. Default value: true.
- post_first_track_frame: Post the first frame of a track. Default value: false.
- post_last_track_frame: Post the last frame of a track. Default value: false.
- tracker_type: Tracker type (simple_iou or deep_sort). Default value: simple_iou.
- track_deep_sort_matching_threshold: Track feature matching threshold (confidence) for the deep_sort tracker. Default value: 0.65.
- track_deep_sort_filter_unconfirmed_tracks: Filter out unconfirmed (too short) tracks in the deep_sort tracker. Default value: true.
- track_object_is_principal: Track by this object in an N-in-1 detector/tracker. Default value: false.
- track_history_active_track_miss_interval: Do not count a track as active if N seconds have passed; applies only if track_send_history: true. Default value: 0.
- filter_track_min_duration_frames: Post only if the object track is at least N frames long. Default value: 1.
- extractors_track_triggers: Tracker events that trigger the extractor.
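Putting several per-detector options together, a hedged face-detector fragment might look like this; the filter_min_size value is an arbitrary sample, while the rest repeat the defaults listed above:

```yaml
stream_settings:
  detectors:
    face:
      filter_min_size: 60            # sample value: skip faces under 60 px
      filter_max_size: 8192          # default
      jpeg_quality: 95               # default full-frame JPEG quality
      tracker_type: simple_iou       # default tracker
      track_overlap_threshold: 0.25  # default IoU threshold
      track_miss_interval: 1         # close a track after 1 s without the object
```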
2. The findface-video-worker configuration file, findface-video-worker-cpu.yaml or findface-video-worker-gpu.yaml, subject to the acceleration type in use. When configuring findface-video-worker (on CPU/GPU), refer to the following parameters:

- batch_size: Post faces in batches of the given size.
- capacity: Maximum number of video streams to be processed by findface-video-worker.
- video_decoder → cpu (GPU package only): If necessary, decode video on CPU.
- device_number (GPU package only): GPU device number to use.
- exit_on_first_finished: (Only if input is specified) Exit on the first finished job.
- input: Process streams from a file, ignoring stream data from findface-video-manager.
- labels: Labels used to allocate a video object detector instance to a certain group of cameras. See Allocate findface-video-worker to Camera Group.
- mgr → cmd: (Optional, instead of the mgr → static parameter) A command to obtain the IP address of the findface-video-manager host.
- mgr → static: IP address of the findface-video-manager host that provides findface-video-worker with settings and the list of to-be-processed streams.
- metrics_port: HTTP server port for sending metrics. If 0, metrics are not sent.
- min_size: Minimum object size to be detected.
- ntls_addr: IP address and port of the findface-ntls host.
- resize_scale: Rescale video frames by the given coefficient.
- resolutions: Preinitialize findface-video-worker to work with the specified resolutions. Example: "640x480;1920x1080".
- save_dir: (For debug) Save detected objects to the given directory.
- streamer → port, url: IP address and port to access the video wall.
- use_time_from_sei: (For MPEG-2) Use SEI (supplemental enhancement information) timestamps.
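A hedged sketch of a GPU worker configuration, using only keys from the list above; the addresses and the manager port are illustrative, not guaranteed defaults:

```yaml
# findface-video-worker-gpu.yaml (fragment; addresses are illustrative)
batch_size: 1
capacity: 10                 # process up to 10 streams
device_number: 0             # first GPU
mgr:
  static: 127.0.0.1:18810    # findface-video-manager address (sample port)
ntls_addr: 127.0.0.1:3185
metrics_port: 0              # 0: metrics are not sent
```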
If necessary, you can also enable neural network models and normalizers to detect bodies, cars, and liveness. You can find detailed step-by-step instructions in the dedicated sections of this documentation.
Jobs
The findface-video-manager service provides findface-video-worker with a so-called job, a video processing task that contains configuration settings and stream data.
Each job has the following parameters:
- id: job ID.
- enabled: active status.
- stream_url: URL/address of the video stream/file to process.
- labels: key-value labels used by the router component (findface-multi-legacy in the standard FindFace Multi configuration) to find processing directives for objects detected in this stream.
- router_url: URL/address of the router component (findface-facerouter, findface-multi-legacy) that receives detected objects from the findface-video-worker component for processing.
- router_events_url: URL/address of the router component (findface-facerouter, findface-multi-legacy) that uses events extraction.
- single_pass: if true, disable restarting video processing upon error. Default value: false.
- stream_settings: video stream settings that duplicate those in the findface-video-manager.yaml configuration file (while taking priority over them).
- stream_settings_gpu: deprecated video stream settings. Not recommended for use; kept for compatibility only.
- status: job status.
- status_msg: additional job status info.
- statistic: job progress statistics (processing duration, number of posted and not-posted objects, processing fps, number of processed and dropped frames, job start time, etc.).
- restream_url: websocket URL where the processed stream with detected objects is streamed live.
- restream_direct_url: websocket URL where the original stream is streamed live at input quality.
- shots_url: HTTP URL where the current stream screenshot can be downloaded.
- worker_id: unique ID of the findface-video-worker instance processing the job.
- version: job version.
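For orientation, a job might look roughly like the following sketch; the field values are illustrative, and the authoritative job format should be taken from your findface-video-manager API:

```yaml
id: "1"
enabled: true
stream_url: rtsp://192.168.1.10:554/live   # illustrative source
labels:
  district: main-entrance                  # illustrative label
router_url: http://127.0.0.1:18820/v0/frame
single_pass: false
stream_settings:
  detectors:
    face:
      realtime_post_interval: 1
worker_id: video-worker-01                 # illustrative instance ID
version: "1"
```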
Time Settings
When you create a job, you can specify time parameters. These parameters determine how event timestamps are generated on posting or upon recording into the VMS, which is important for calculating the final timestamp of an event. The default time parameter in the findface-video-worker configuration file is use_time_from_sei: false. The default time parameters in findface-video-manager.yaml are use_stream_timestamp: false and use_rtsp_time: true.
Let’s consider various configurations:
- use_stream_timestamp: false, use_time_from_sei: either true or false, use_rtsp_time: either true or false. The current server time (wall-clock time) will be used.
- use_stream_timestamp: true, use_time_from_sei: true, use_rtsp_time: either true or false. SEI timestamps (if any) or stream timestamps (pts) will be used unchanged.
- use_stream_timestamp: true, use_time_from_sei: false, use_rtsp_time: true. The final timestamp is calculated by the formula final_ts = pts - start_pts + start_stream_timestamp + rtsp_start_time, where:
  - pts: stream timestamps;
  - start_pts: the minimum observed pts of the stream, subtracted to make the first frame time equal to 0;
  - start_stream_timestamp: a job setting within stream_settings;
  - rtsp_start_time: the start time of the stream in real-world time, as reported by certain RTSP servers.
- use_stream_timestamp: true, use_time_from_sei: false, use_rtsp_time: false. The final timestamp is calculated by the formula final_ts = pts - start_pts + start_stream_timestamp.
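A worked example with illustrative numbers: with use_stream_timestamp: true, use_time_from_sei: false, and use_rtsp_time: false, a frame with pts = 105.0 in a stream whose minimum observed pts is 100.0, processed by a job with start_stream_timestamp = 1700000000 (Unix seconds), gets final_ts = 105.0 - 100.0 + 1700000000 = 1700000005. With use_rtsp_time: true, the rtsp_start_time reported by the RTSP server would be added on top of this value.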