Understand object tracking in Shogun Live
With Shogun 1.6 and later, you can use one of two pipelines to track objects and subjects. Choose from either the standard pipeline or the ObjectTracker pipeline.
The standard pipeline is the conventional approach to tracking, which requires computing reconstructions on every frame, labeling them, and then fitting the subject/object model to those labeled reconstructions.
The ObjectTracker pipeline is a low-latency approach to tracking that enables a rigid object's pose to be updated using camera centroids. This means the reconstruction step can be bypassed during tracking, which reduces the latency.
When to use the standard or ObjectTracker pipeline
The pipeline to use depends on what you’re tracking. For rigid body objects, you can choose to use either the standard or ObjectTracker pipeline (the benefits of both approaches are listed below). For all other objects or subjects, use the standard pipeline.
When to use the standard pipeline
Use the standard pipeline, also known as the "Full-body" reconstruction-based approach, if you are tracking:
- Subjects (for example, performers or actors)
- Props, including:
  - Multi-segment props
  - Semi-rigid props
  - Rigid bodies
Props held by a subject (for example, an actor holding a sword) are better tracked using this pipeline as it is easier for the system to handle any ambiguities between the prop and actor markers. This reduces the likelihood of mislabeling between prop markers and subject markers.
When to use the ObjectTracker pipeline
Use the ObjectTracker pipeline if you are tracking only rigid bodies.
While you can use the standard pipeline to track rigid bodies, if you require the lowest possible latency (for example, for camera tracking in ICVFX), use the ObjectTracker pipeline.
In general, the ObjectTracker keeps better track of rigid-body props in situations with low camera coverage. This pipeline also offers additional functionality, including jitter reduction, smoothing, and support for motion models.
Additional pipeline information
ObjectTracker is an alternative set of algorithms based on computing the 6DoF pose of the object directly, using 2D centroid data. This approach offers the following advantages:
- Shogun can track the object into parts of the volume where the standard pipeline may not be able to form enough reconstructions.
- Latency is minimized as there is no need to wait for reconstruction to finish before an object pose is obtained for a given frame.
The reconstructions are used by the ObjectTracker to boot (initialize) an object pose.
However, these reconstructions are not needed for frame-to-frame tracking, enabling the ObjectTracker to generate results even when the reconstructor drops frames.
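The boot-then-track behavior described above can be sketched as follows. This is an illustrative sketch only: the function names (`boot_from_reconstruction`, `update_from_centroids`) and the frame layout are assumptions made for illustration, not Shogun's API. It shows why dropped reconstruction frames do not interrupt ObjectTracker output: reconstructions are needed only to boot the pose.

```python
# Illustrative sketch, not the Shogun API: reconstructions are needed only
# to initialize (boot) the object's pose; afterwards the pose is updated
# directly from 2D centroid data, so a dropped reconstruction frame does
# not interrupt tracking.

def track_object(frames, boot_from_reconstruction, update_from_centroids):
    pose = None
    poses = []
    for frame in frames:
        if pose is None:
            # Booting requires labeled reconstructions for this frame.
            if frame["reconstructions"] is not None:
                pose = boot_from_reconstruction(frame["reconstructions"])
        else:
            # Frame-to-frame tracking uses only camera centroids, so a
            # pose is still produced when the reconstructor drops a frame.
            pose = update_from_centroids(pose, frame["centroids"])
        poses.append(pose)
    return poses
```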
Note
The Shogun interface always shows the standard output. As a result, when the reconstructor (or labeler/solver) is forced to drop frames, this is indicated in the time bar beneath your workspace. However, the ObjectTracker does not (usually) drop these frames as it is less computationally expensive.
If you are streaming the outputs, use the following ports:
- Use Port 801 for standard pipeline data.
- Use Port 804 for ObjectTracker pipeline data.
For more information on streaming data, see Streaming via Vicon DataStream SDK.
Use the ObjectTracker pipeline
To use the ObjectTracker pipeline:
- Create the object in the usual way:
  - In the 3D View, select the relevant markers.
  - In the Tracking panel, in the Prop field, enter a name for the object and click Create.
- In the Tracking panel, ensure the new prop is selected and in its Properties, ensure Track with ObjectTracker is selected.

  At the top of the Tracking panel, an icon is displayed to the right of the object's name to indicate that object tracking is now being used for this object.
Specify settings for optimal object tracking
If you can't see these parameters, at the top of the relevant panel, click Advanced Parameters.
System panel settings
Towards the top of the System tab, the following Advanced global parameters that relate to object tracking are displayed:
Setting | Description
---|---
Force Lowest Latency | Some models of Vicon cameras can be forced into a mode that provides the lowest possible latency at the cost of reducing grayscale and centroid throughput. Generally, select this setting only when you are using the ObjectTracker pipeline (that is, low-latency object tracking). However, note that for some large systems with a high number of cameras, this may cause network problems that manifest as cameras appearing to drop out. For more information, see Use the Force Lowest Latency option. Default: Cleared
Object Tracking Level | Sets the camera and processing parameters to the required level of object tracking. Default: Standard. See also Handling jitter.
Use the Force Lowest Latency option
This option applies to the ObjectTracker pipeline. To achieve the lowest possible latency for object tracking, so that external devices driven by object pose (such as an LED wall virtual camera driven by a Vicon-tracked hero camera) are as close to real time as possible, you can control whether any attached Vicon cameras have their Digital Signal Processors (DSPs) enabled. Do this by selecting or clearing the Force Lowest Latency option.
Selecting this option provides the lowest possible latency at the cost of reducing grayscale and centroid throughput. However, it is normally only useful if you are using low-latency object tracking (see Use the object tracker).
Note that this feature is not available for Vicon cameras that do not have a DSP.
- If you are tracking only a single object or a few rigid objects, you may want to reduce the camera delivery latency and instead allow Shogun, rather than the camera, to fit grayscale blobs. In this case, you can select Force Lowest Latency.
- However, we advise you not to select Force Lowest Latency if the system contains a significant number of Vantage cameras and/or a lot of markers, as this can overload the network with grayscale data.
To use the Force Lowest Latency option:
- Connect Shogun Live to a system containing Vicon optical cameras that have a DSP, such as Vantage and/or ViperX, with a number of markers scattered throughout the volume.
- On the System panel, ensure the Advanced parameters are displayed.
- Select Force Lowest Latency.

  This ensures that any attached cameras with DSPs do not use them.
If you select Force Lowest Latency in a system with a large number of cameras and/or markers, the cameras may overload the network with grayscale data, which can result in HAL error messages in the Log, and cameras appearing to drop out or show as not transmitting data (that is, in the System panel, their icons are gray or flicker to gray). To minimize this effect:
- If possible, use 10 GB networking, rather than 1 GB networking.
- On the System panel, check the Object Tracking Level. If possible, set it to Standard, rather than Use Grayscale or Object Tracking Only, which force the cameras to send grayscale data.
- For all cameras, check that their Grayscale Mode (found in the Optical Setup section of the System panel) is NOT set to Only.
Note that the Force Lowest Latency setting is not available if no cameras with DSPs are attached to the system.
Handling jitter
Jitter is mainly caused by these issues:
- Changing sets of centroid-to-label correspondences (rays) contributing to the object interact with small inaccuracies in the camera calibration, so that flickering contributions cause the optimum pose to jump about.
- Noise on the centroids causes the optimum pose to change.
If you experience large levels of jitter and/or you know that it's caused by the first issue, first try increasing the Object Tracking Level (the lowest level is Standard, the highest is Object Tracking Only).
If it's caused by the second issue, try selecting Enable Motion Model in the Object Tracking section.
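To illustrate what a motion model buys you for the second issue, here is a minimal stand-in, not Shogun's actual motion model: a deadband filter that holds the last reported pose while frame-to-frame movement stays below a noise threshold. Stationary objects then show zero jitter, but genuinely slow motion below the threshold is also suppressed, which mirrors the caveat that Enable Motion Model may cause issues with very slow-moving cameras.

```python
# Deadband filter: an illustrative stand-in for jitter suppression on a
# stationary object. Poses are simplified to scalars for clarity.

def deadband_filter(poses, threshold):
    filtered = []
    held = None
    for p in poses:
        if held is None or abs(p - held) > threshold:
            held = p           # real motion: follow the new measurement
        filtered.append(held)  # below threshold: keep reporting held pose
    return filtered
```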
Processing panel settings
On the Processing tab, in the Object Tracking section, the following Advanced parameters that relate to object tracking are displayed:
Setting | Description
---|---
Thread Count | Number of threads to use. If set to zero (the default setting), the thread count is calculated automatically. Default: 0
Reprojection Threshold | Maximum allowable distance (in pixels) between a centroid and the projection of a marker into the same camera. Applies only to markers that are tracked using the ObjectTracker. If you need to increase the Environmental Drift Tolerance, also increase this value.
Entrance Threshold | Minimum proportion of an object's markers that must be visible before the object is booted. If the proportion of visible markers is less than this value, the object is not booted. You can override this value for selected objects by using object presets. Default: 1.00. See also Create and apply object presets.
Minimum Object Marker Separation | Minimum allowable separation distance between objects (in mm) to enable the objects to be tracked separately, based on the smallest distance between a marker on each object. Default: 10
Enable Motion Model | When selected, ensures that stationary objects don't have pose jitter arising from image noise. Select this setting to eliminate jitter that occurs when a scene is viewed through a stationary tracked camera. This setting may cause issues with very slow-moving cameras. Default: Cleared. See also Handling jitter.
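The geometric checks behind three of these parameters can be sketched as below. The function names and exact semantics are illustrative assumptions, not Shogun's implementation; they only show what each threshold compares.

```python
import math

def passes_entrance_threshold(visible_markers, total_markers, threshold=1.0):
    """Boot the object only if a large enough proportion of its markers
    is currently visible (Entrance Threshold, default 1.00 = all)."""
    return visible_markers / total_markers >= threshold

def within_reprojection_threshold(centroid, projected_marker, max_pixels):
    """Accept a centroid-to-marker match only if the 2D distance between
    the centroid and the marker's projection is small (Reprojection
    Threshold, in pixels)."""
    dx = centroid[0] - projected_marker[0]
    dy = centroid[1] - projected_marker[1]
    return math.hypot(dx, dy) <= max_pixels

def min_marker_separation(obj_a_markers, obj_b_markers):
    """Smallest 3D distance (mm) between any marker on object A and any
    marker on object B (compared against Minimum Object Marker Separation)."""
    return min(math.dist(a, b) for a in obj_a_markers for b in obj_b_markers)
```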
Create and apply object presets
Object presets enable you to specify the object tracking and smoothing (filtering) properties for selected objects. (Note that these properties override the overall object tracking properties that you can set in the Processing panel.)
By experimenting with different values for the filtering properties and applying them to an object using a saved object preset, you can evaluate filter performance to ensure the smoothest possible tracking for selected objects.
To create an object preset:
- In the Tracking panel, ensure the object whose tracking and/or filtering properties you want to refine is selected.
- In the object's Properties below, ensure the Advanced properties are displayed, and on the right of the Object Preset field, click Manage.
- In the Object Presets dialog box, enter a name for the new preset, click Add, and ensure the new preset is selected.
- Specify the required settings and close the dialog box.
Object presets are saved in the Subjects.mcp file (found in C:\Users\Public\Documents\Vicon\ShogunLive#.#\LastRun\UserName), or in any exported tracking configuration.
To apply an object preset:
- In the Tracking panel, ensure that the object to which you want to apply the preset is selected.
- On the Properties tab below, ensure the Advanced properties are displayed, and in the General section, from the Object Preset list, select the required preset.

  The tracking and/or filtering properties specified in the preset are applied to the object.
Tracking panel parameters
On the Tracking tab, with the required object (prop) selected, in the Properties pane, the following Advanced setting is displayed:

Setting | Description
---|---
Track With ObjectTracker | When selected, objects are tracked using the ObjectTracker. Default: Cleared