Using demo.py, I can get detection boxes and masks, but I am trying to get keypoints for persons and hands. How can I get keypoint results?
Hello, we will be refining our documentation soon and adding a keypoints demo.
Hello, I would like to get keypoints for persons and hands too. It has been two months since your last reply; when will this be released?
Hello, sorry for the late reply. You can get the keypoint annotations by updating your API usage as follows:
```python
task = DinoxTask(
    image_url=image_url,
    prompts=[TextPrompt(text=TEXT_PROMPT)],
    bbox_threshold=0.25,
    targets=[DetectionTarget.BBox, DetectionTarget.Mask, DetectionTarget.Pose]  # add Pose target
)
```
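For reference, here is a fuller end-to-end sketch of how that task might be constructed and run with the dds-cloudapi-sdk client flow used in demo.py. The token, image path, and text prompt are placeholders, and the import paths reflect my reading of the SDK and may differ across versions:

```python
from dds_cloudapi_sdk import Config, Client, TextPrompt
from dds_cloudapi_sdk.tasks.dinox import DinoxTask
from dds_cloudapi_sdk.tasks.types import DetectionTarget

# Placeholder token and image path; replace with your own values.
config = Config("<your_api_token>")
client = Client(config)
image_url = client.upload_file("assets/demo.jpg")

TEXT_PROMPT = "person . hand"  # example prompt covering persons and hands

task = DinoxTask(
    image_url=image_url,
    prompts=[TextPrompt(text=TEXT_PROMPT)],
    bbox_threshold=0.25,
    # Adding DetectionTarget.Pose requests keypoints alongside boxes and masks.
    targets=[DetectionTarget.BBox, DetectionTarget.Mask, DetectionTarget.Pose],
)
client.run_task(task)
result = task.result
```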
The prediction results will then contain the keypoint predictions in the format (x, y, visibility, _).
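Continuing from the task above, here is a minimal sketch of reading those keypoints back out. It assumes each returned object exposes the keypoints under a pose field; the exact attribute name and nesting may differ, so check the SDK's response schema:

```python
# Minimal sketch: iterate the detected objects returned by the task above.
for obj in result.objects:
    print(obj.category, obj.bbox)

    # "pose" is an assumed attribute name for the keypoint predictions when
    # DetectionTarget.Pose is requested; check the SDK's response schema.
    keypoints = getattr(obj, "pose", None)
    if not keypoints:
        continue

    # Assumes the keypoints arrive as a flat list where every four values form
    # one point in the stated (x, y, visibility, _) format; if the SDK instead
    # returns a list of per-point tuples, iterate over it directly.
    for i in range(0, len(keypoints), 4):
        x, y, visibility, _ = keypoints[i:i + 4]
        if visibility > 0:
            print(f"  keypoint {i // 4}: ({x:.1f}, {y:.1f})")
```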
(Example category labels: person, teacher, man)
We will update the visualization in two days.