
ONNX Import: switch to rank inferencing, rename shape to static_shape, decouple tensor shape info #3037


Merged
21 commits merged into tracel-ai:main on Apr 25, 2025

Conversation

Collaborator

@antimora antimora commented Apr 17, 2025

Pull Request Template

Checklist

  • Confirmed that the run-checks all script has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Fixes #2478

Changes

  1. Removed shape inferencing in favor of rank inferencing.
  2. Renamed the shape field to static_shape to signal that this shape information is captured from the ONNX file at build time, while Burn itself relies on dynamic shape information. The static_shape field was kept (rather than removed entirely) because there are rare edge cases where its dimension information is useful.
  3. Decoupled shape information from tensor values. Previously the value field contained only raw data (an array of numbers); now it contains TensorData, which carries its own shape information.
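The change described above can be sketched roughly as follows. This is a simplified, hypothetical illustration of the IR shapes involved (the real definitions live in crates/onnx-ir/src/ir.rs and store typed element data, not just f32):

```rust
/// Sketch of a tensor value that carries its own shape, so a constant's
/// data no longer depends on a type-level shape field.
/// (Illustrative names; not the actual onnx-ir definitions.)
#[derive(Debug, Clone, PartialEq)]
pub struct TensorData {
    pub shape: Vec<usize>,
    pub data: Vec<f32>, // the real IR stores typed element data
}

/// Sketch of the tensor type after the rename: only `rank` is
/// authoritative; `static_shape` is best-effort info from the ONNX file,
/// kept only for rare edge cases.
#[derive(Debug, Clone, PartialEq)]
pub struct TensorType {
    pub rank: usize,
    pub static_shape: Option<Vec<usize>>,
}

fn main() {
    // A constant value knows its own shape...
    let value = TensorData { shape: vec![2, 3], data: vec![0.0; 6] };
    // ...while the type only commits to a rank.
    let ty = TensorType { rank: value.shape.len(), static_shape: None };
    assert_eq!(ty.rank, 2);
    assert!(ty.static_shape.is_none());
}
```

The point of the split is that graph-level type checking can proceed on ranks alone, while concrete dimensions travel with the data that actually has them.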

Testing

  1. Made sure all existing tests pass (except unsqueeze, which relies on dynamic rank support that Burn does not provide).
  2. Tested with vgg16-12.onnx, which was used in #2940 (Adding flatten rank for auto generate from onnx code to conform with onnx spec).

antimora and others added 8 commits March 4, 2025 15:39
- Update struct TensorData field access patterns
- Add support for non-optional data fields
- Fix issues with tensor data handling
The PR resolves several compilation errors caused by changes to the TensorType structure:

1. Modified code to adapt to the removal of the `shape` field from TensorType
2. Fixed pattern matching issues in rank_inference.rs to properly match against TensorData.data
3. Updated from_onnx.rs's remap_unsqueeze_to_reshape function to work with the new API
4. Fixed unused imports across multiple files
5. Fixed function calls that were using Option.len() incorrectly
Enhance tensor type system to support both static shapes and dynamic ranks across multiple ONNX operations including Expand, RandomNormal, Constant, and related nodes. Ensure proper shape validation and improve type safety throughout the conversion process.
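To illustrate why rank inferencing suffices for many of these ops: unlike full shape inference, the output rank can often be computed from input ranks alone, without knowing any concrete dimension. A minimal sketch (function names are illustrative, not the onnx-ir API):

```rust
// Rank inference for Unsqueeze: each inserted axis raises the rank by one,
// regardless of the actual dimension sizes involved.
fn unsqueeze_output_rank(input_rank: usize, num_inserted_axes: usize) -> usize {
    input_rank + num_inserted_axes
}

// Rank inference for Expand: the output rank is the length of the target
// shape input, which is known statically even when its values are not.
fn expand_output_rank(target_shape_len: usize) -> usize {
    target_shape_len
}

fn main() {
    // A rank-2 tensor unsqueezed along one new axis becomes rank 3.
    assert_eq!(unsqueeze_output_rank(2, 1), 3);
    // Expanding toward a 4-element shape input yields a rank-4 tensor.
    assert_eq!(expand_output_rank(4), 4);
}
```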
@antimora antimora requested a review from Copilot April 17, 2025 04:04

@Copilot Copilot AI left a comment


Pull Request Overview

This PR modernizes the ONNX IR processing by switching from shape inferencing to rank inferencing, renaming the "shape" field to "static_shape", and decoupling tensor shape information by encapsulating it in a dedicated TensorData structure.

  • Replaces shape inferencing with rank inferencing
  • Renames the field "shape" to "static_shape" to clarify its purpose
  • Refactors tensor data conversion to include explicit shape details via TensorData

Reviewed Changes

Copilot reviewed 25 out of 25 changed files in this pull request and generated no comments.

Summary per file:
  • crates/onnx-ir/src/from_onnx.rs: Refactors unsqueeze node remapping to use TensorData and replaces direct shape extraction with a decoupled approach.
  • crates/onnx-ir/src/coalesce.rs: Adjusts tensor weight conversion logic to extract shape information from TensorData rather than a raw option.
  • crates/burn-import/*: Updates multiple modules to remove direct use of shape in TensorType constructors and to rely on TensorData, reflecting the new rank inferencing model.
  • crates/burn-ndarray/*: Minor changes, including added allow attributes for unused imports.
  • crates/burn-import/onnx-tests/*: Temporarily disables unsqueeze tests due to dynamic rank support changes in Burn.
Comments suppressed due to low confidence (3)

crates/onnx-ir/src/onnx/op_configuration.rs:1891

  • Using 'static_shape' with expect() assumes it is always available; verify that all tensors processed in this context have their static_shape set to avoid potential runtime panics.
let dim_size = tensor.static_shape.as_ref().expect("Split: Static shape must be known to calculate split size")[axis as usize];
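One way to address the reviewer's concern is to surface the missing-shape case as a recoverable error rather than panicking through expect(). A hedged sketch, with an illustrative helper that is not part of the crate's API:

```rust
// Look up the dimension along `axis`, returning an error instead of
// panicking when `static_shape` is absent or the axis is out of range.
fn split_dim_size(static_shape: Option<&Vec<usize>>, axis: usize) -> Result<usize, String> {
    static_shape
        .and_then(|shape| shape.get(axis).copied())
        .ok_or_else(|| format!("Split: static shape for axis {axis} must be known"))
}

fn main() {
    // Shape is known: the lookup succeeds.
    assert_eq!(split_dim_size(Some(&vec![4, 6]), 1), Ok(6));
    // Shape is unknown: the caller gets an error it can report.
    assert!(split_dim_size(None, 0).is_err());
}
```

Whether a panic or an error is appropriate depends on whether the import pipeline guarantees static_shape is populated at this point; the sketch only shows the non-panicking alternative.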

crates/onnx-ir/src/from_onnx.rs:404

  • The new field 'static_shape' is assigned None here; please confirm this aligns with the intended decoupling of tensor shape information in favor of runtime rank inferencing.
static_shape: None,

crates/burn-import/src/onnx/to_burn.rs:438

  • The TensorType constructor has been updated to omit the explicit shape parameter; ensure that downstream logic correctly derives shape information from the TensorData structure.
ConstantValue::Tensor(TensorType::new(name, rank, kind), tensor_data)

@antimora antimora requested a review from laggui April 17, 2025 04:09

codecov bot commented Apr 17, 2025

Codecov Report

Attention: Patch coverage is 80.42269% with 176 lines in your changes missing coverage. Please review.

Project coverage is 81.17%. Comparing base (e096b0c) to head (285e361).
Report is 3 commits behind head on main.

Files with missing lines Patch % Lines
crates/onnx-ir/src/rank_inference.rs 89.37% 46 Missing ⚠️
crates/burn-import/src/onnx/op_configuration.rs 74.55% 43 Missing ⚠️
crates/onnx-ir/src/ir.rs 50.00% 40 Missing ⚠️
crates/onnx-ir/src/coalesce.rs 46.66% 16 Missing ⚠️
crates/burn-import/src/onnx/to_burn.rs 83.92% 9 Missing ⚠️
crates/burn-import/src/burn/node/unsqueeze.rs 74.19% 8 Missing ⚠️
crates/onnx-ir/src/from_onnx.rs 11.11% 8 Missing ⚠️
crates/onnx-ir/src/proto_conversion.rs 88.46% 3 Missing ⚠️
crates/burn-import/src/burn/node/constant.rs 93.75% 1 Missing ⚠️
crates/burn-import/src/burn/ty.rs 92.85% 1 Missing ⚠️
... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3037      +/-   ##
==========================================
+ Coverage   81.16%   81.17%   +0.01%     
==========================================
  Files         815      815              
  Lines      117201   117294      +93     
==========================================
+ Hits        95121    95213      +92     
- Misses      22080    22081       +1     


@antimora antimora changed the title Switch to Rank Inferencing, Rename shape to static_shape, Decouple Tensor Shape Info ONNX Import: switch to rank inferencing, rename shape to static_shape, decouple tensor shape info Apr 19, 2025
Member

@laggui laggui left a comment


Thank you for tackling the ONNX import issues 🙏

See my comments below.

@antimora antimora requested a review from laggui April 24, 2025 04:48
Member

@laggui laggui left a comment


A couple of minor comments, otherwise should be good to go after! 🙂

@antimora antimora requested a review from laggui April 25, 2025 17:28
@antimora antimora merged commit d6533da into tracel-ai:main Apr 25, 2025
11 checks passed
Successfully merging this pull request may close these issues.

Refactor onnx-ir to remove static shape inference and rely on rank inference only