XR Image Segmentation Sample

This Unity project demonstrates how to use the Unity Inference Engine (previously Sentis) and the YOLO11 model to detect and label object bounding boxes and segmentation masks in images from the Meta Quest Passthrough Camera.

Much of this code originates from the Unity-PassthroughCameraApiSamples: https://github.com/oculus-samples/Unity-PassthroughCameraApiSamples/tree/main.
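As a rough orientation to the detection and segmentation step described above, the sketch below shows how a YOLO11 ONNX model can be loaded and run on a camera frame with the Sentis-era API (ModelLoader, WorkerFactory, TextureConverter). The class name, field names, and output names are illustrative assumptions rather than this project's actual scripts, and newer Inference Engine releases rename parts of this API; see the repository sources for the real pipeline.

```csharp
// Illustrative sketch only: loads a YOLO11 segmentation model and runs it on a
// texture (e.g. a frame from the Passthrough Camera). Names such as
// Yolo11Segmenter and the output names "output0"/"output1" are assumptions,
// not taken from this repository.
using UnityEngine;
using Unity.Sentis;

public class Yolo11Segmenter : MonoBehaviour
{
    [SerializeField] ModelAsset yoloModel; // YOLO11 .onnx imported as a ModelAsset
    IWorker worker;

    void Start()
    {
        // Load the serialized model and create a GPU compute worker.
        Model model = ModelLoader.Load(yoloModel);
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    // Run inference on one camera frame.
    public void Segment(Texture frame)
    {
        // YOLO11 models are typically exported with a 640x640 RGB input.
        using Tensor input = TextureConverter.ToTensor(frame, 640, 640, 3);
        worker.Execute(input);

        // A YOLO11 segmentation export usually has two outputs: detections
        // (boxes, class scores, mask coefficients) and mask prototypes.
        // Decoding these into labelled boxes and per-object masks is the
        // post-processing step handled by the sample's own scripts.
        Tensor detections = worker.PeekOutput("output0");
        Tensor maskProtos = worker.PeekOutput("output1");
        Debug.Log($"detections {detections.shape}, mask prototypes {maskProtos.shape}");
    }

    void OnDestroy() => worker?.Dispose();
}
```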

Test Scene

The test scene can run without a headset and uses a selection of images from the Common Objects in Context (COCO) 2017 dataset with YOLOv8 annotations:

https://www.kaggle.com/datasets/sarkisshilgevorkyan/coco-dataset-for-yolo


XR Scene

The XR scene must be built and deployed to a headset.

License

This project is licensed under the terms of the GNU Affero General Public License v3.0 (AGPL‑3.0).
See the LICENSE file for details.
