Conversation


@abhi-mal abhi-mal commented Aug 14, 2025

Description

Fixes #1364

📝 Summary of the change

  • Added a new optimizer pass that modifies the model graph, connecting the reshape node's input node directly to its output node (bypassing the reshape).
  • This fixes the assertion error that occurs when a reshape follows the input layer in High Granularity Quantization (HGQ) models.
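The rewiring described above can be illustrated with a minimal standalone sketch. This is not hls4ml's actual `OptimizerPass` API — the `Node` class and function below are hypothetical stand-ins — but it shows the graph surgery the pass performs: drop a Reshape that directly follows an Input and reconnect its consumers to the Input node.

```python
# Minimal sketch (hypothetical graph representation, not hls4ml's API) of an
# optimizer pass that bypasses a Reshape node sitting directly after the
# model input, rewiring the Reshape's consumers to read from the Input.

class Node:
    def __init__(self, name, op, inputs):
        self.name = name      # unique node name
        self.op = op          # operation type, e.g. 'Input', 'Reshape', 'Dense'
        self.inputs = inputs  # names of upstream nodes this node reads from

def bypass_reshape_after_input(graph):
    """graph: topologically ordered list of Node objects.
    Returns the graph with any Input->Reshape pair collapsed."""
    input_names = {n.name for n in graph if n.op == 'Input'}
    # Map each Reshape fed directly by an Input to that Input's name
    bypassed = {n.name: n.inputs[0] for n in graph
                if n.op == 'Reshape' and n.inputs and n.inputs[0] in input_names}
    rewired = []
    for n in graph:
        if n.name in bypassed:
            continue  # drop the Reshape node itself
        # Consumers of a dropped Reshape now read from the original Input
        n.inputs = [bypassed.get(i, i) for i in n.inputs]
        rewired.append(n)
    return rewired
```

For example, a graph `Input('x') -> Reshape('r') -> Dense('d')` becomes `Input('x') -> Dense('d')`, with `d` reading directly from `x`. The real pass additionally has to keep the output shape information consistent, which this sketch omits.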

Type of change

  • Bug fix (non-breaking change that fixes an issue)

Tests

📝 Details to reproduce the error are given in the linked issue: https://github.com/fastmachinelearning/hls4ml/issues/1364

Test Configuration:

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

Tagging: @ligerlac

Contributor

vloncar commented Aug 14, 2025

In what cases do you need a reshape layer as the first layer and not just create a model with that input shape?

@abhi-mal
Author

Indeed, if we are training models from scratch, we might not need the reshape layer at the front, and that is how we circumvented this issue earlier. But when an existing FPGA wrapper implementation provides inputs to the model in a fixed shape, the reshape may be required. Also, this pattern did not produce an error in QKeras, so supporting it should make the transition from QKeras to HGQ smoother.


Successfully merging this pull request may close these issues.

Assertion error when reshape follows Input layer in HGQ models