
Please publish the "optimized ONNX" model referenced in the paper (Section 7.1) and/or pytorch #4


Open
esteves023 opened this issue May 15, 2025 · 0 comments


esteves023 commented May 15, 2025

In the PDF included in the repository, Section 7.1 Real-World Deployment states:

“Our experiments show that DeepInfant V2 can operate in near-real-time on mobile devices when deployed via optimized ONNX or CoreML formats.”

The repo already ships the Core ML (*.mlmodel) artifacts, but an ONNX version (or the PyTorch weights needed to export one) is not present. This prevents:

Running the model on Android or in a browser (TF.js / ONNX.js).

Using the included predict.py, which expects an ONNX model (see the inference sketch after this list).

Reproducing the “near-real-time” results on non-Apple hardware.
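For context, this is roughly the kind of cross-platform inference an ONNX artifact would unlock. The file name, input name, and feature shape below are placeholders I made up, not values taken from predict.py or the paper:

```python
# Rough sketch of ONNX Runtime inference on non-Apple hardware.
# The model filename, input name, and feature shape are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "deepinfant_v2.onnx",  # hypothetical filename
    providers=["CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

# Placeholder log-mel features; real preprocessing would follow the paper.
features = np.random.randn(1, 1, 64, 128).astype(np.float32)
logits = session.run(None, {input_name: features})[0]
print(logits.shape)
```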

Could you please:

  1. Publish the optimized ONNX model for DeepInfant V2 (and, if possible, VGGish & AFP).

  2. Alternatively, provide the PyTorch checkpoints (*.pth) so the community can export to ONNX with torch.onnx.export.

  3. (Optional) Add a minimal export script or update the README to clarify the deployment workflow.

This would let researchers and parents without macOS/iOS evaluate DeepInfant exactly as described in the paper.
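For reference, exporting from a released checkpoint should only take a few lines. The class name, checkpoint file, and input shape below are guesses and would need to match whatever you publish:

```python
# Hypothetical export sketch: the model class, checkpoint name, and input
# shape are assumptions and must be adjusted to the released weights.
import torch

from deepinfant import DeepInfantV2  # hypothetical module / class name

model = DeepInfantV2()
model.load_state_dict(torch.load("deepinfant_v2.pth", map_location="cpu"))
model.eval()

# Dummy input: one log-mel spectrogram; the real shape depends on the
# model's preprocessing (assumed here).
dummy = torch.randn(1, 1, 64, 128)

torch.onnx.export(
    model,
    dummy,
    "deepinfant_v2.onnx",
    input_names=["mel_spectrogram"],
    output_names=["logits"],
    dynamic_axes={"mel_spectrogram": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)
```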

esteves023 changed the title from “Request for Portable Model Formats (ONNX / TFLite) and Pre-built Demo App” to “Please publish the "optimized ONNX" model referenced in the paper (Section 7.1) and/or pytorch” on May 15, 2025