In the PDF included in the repository, Section 7.1 Real-World Deployment states:
“Our experiments show that DeepInfant V2 can operate in near-real-time on mobile devices when deployed via optimized ONNX or CoreML formats.”
The repo already ships the Core ML (*.mlmodel) artifacts, but an ONNX version (or the PyTorch weights needed to export one) is not present. This prevents:
- Running the model on Android or in a browser (TF.js / ONNX.js).
- Using the included predict.py, which expects an ONNX model.
- Reproducing the “near-real-time” results on non-Apple hardware.
Could you please:
1. Publish the optimized ONNX model for DeepInfant V2 (and, if possible, VGGish & AFP).
2. Alternatively, provide the PyTorch checkpoints (*.pth) so the community can export to ONNX with torch.onnx.export.
3. (Optional) Add a minimal export script, or update the README to clarify the deployment workflow.
This would let researchers and parents without macOS/iOS evaluate DeepInfant exactly as described in the paper.
esteves023 changed the title from “Request for Portable Model Formats (ONNX / TFLite) and Pre-built Demo App” to “Please publish the ‘optimized ONNX’ model referenced in the paper (Section 7.1) and/or pytorch” on May 15, 2025.