Description
Prerequisites
Please make sure to check off these prerequisites before submitting a bug report.
- Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
- Check that the issue hasn't already been reported, by checking the currently open issues.
- If there are steps to reproduce the problem, make sure to write them down below.
- If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.
Quick summary
The `convert_from_keras_model` function fails with an AssertionError when converting a Highly Granular Quantization (HGQ) Keras model that has a `Reshape` layer immediately following the `Input` layer.
Details
The bug is triggered by a specific post-training model state, not just by the architecture itself. While this state can be reached intermittently when training from scratch, the attached saved model provides a 100% reliable way to reproduce the failure.
The full error message is:
```
hls_model = hls4ml.converters.convert_from_keras_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./hls4ml_pr/hls4ml/hls4ml/utils/dependency.py", line 46, in inner
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "./hls4ml_pr/hls4ml/hls4ml/converters/__init__.py", line 223, in convert_from_keras_model
return keras_v3_to_hls(config)
^^^^^^^^^^^^^^^^^^^^^^^
File "./hls4ml_pr/hls4ml/hls4ml/converters/keras_v3_to_hls.py", line 295, in keras_v3_to_hls
return ModelGraph.from_layer_list(config, layer_list, input_layers, output_layers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./hls4ml_pr/hls4ml/hls4ml/model/graph.py", line 451, in from_layer_list
model.apply_flow(flow)
File "./hls4ml_pr/hls4ml/hls4ml/model/graph.py", line 519, in apply_flow
self._apply_sub_flow(flow, applied_flows)
File "./hls4ml_pr/hls4ml/hls4ml/model/graph.py", line 528, in _apply_sub_flow
self._apply_sub_flow(sub_flow, applied_flows)
File "./hls4ml_pr/hls4ml/hls4ml/model/graph.py", line 528, in _apply_sub_flow
self._apply_sub_flow(sub_flow, applied_flows)
File "./hls4ml_pr/hls4ml/hls4ml/model/graph.py", line 531, in _apply_sub_flow
applied_passes = optimize_model(self, flow.optimizers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./hls4ml_pr/hls4ml/hls4ml/model/optimizer/optimizer.py", line 319, in optimize_model
res = opt.transform(model, node)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./hls4ml_pr/hls4ml/hls4ml/model/optimizer/passes/bit_exact.py", line 838, in transform
assert isinstance(
^^^^^^^^^^^
AssertionError: Input inputs connected to non-quantizer reshape_layer with non-trivial configuration
```
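For context, the assertion fires in the bit-exactness optimizer pass, which expects any layer directly consuming a model input to be a quantizer when the input carries a non-trivial precision configuration. The following self-contained sketch mimics that check with hypothetical class and function names (it is not the actual hls4ml code, only an illustration of why a `Reshape` straight after the `Input` trips the assertion):

```python
# Simplified model of the check in bit_exact.py. All names here
# (Node, Quantizer, Reshape, check_input_consumers) are illustrative
# and do not match the real hls4ml implementation.

class Node:
    def __init__(self, name):
        self.name = name

class Quantizer(Node):
    pass

class Reshape(Node):
    pass

def check_input_consumers(input_name, consumers, has_nontrivial_config):
    """Inputs with a non-trivial precision configuration must feed
    quantizer nodes; any other consumer triggers the assertion."""
    for node in consumers:
        assert isinstance(node, Quantizer) or not has_nontrivial_config, (
            f'Input {input_name} connected to non-quantizer '
            f'{node.name} with non-trivial configuration'
        )

# A Reshape directly after the Input reproduces the reported failure:
try:
    check_input_consumers('inputs', [Reshape('reshape_layer')], True)
except AssertionError as e:
    print(f'AssertionError: {e}')
```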
Steps to Reproduce
Add what needs to be done to reproduce the bug. Add commented code examples and make sure to include the original model files / code, and the commit hash you are working on.
- Set up the environment:
The following commands create a conda environment, install the necessary dependencies, and clone the hls4ml repository at the specific commit on the main branch where the bug is observed.
```
mkdir -p ./hls4ml_pr
conda create --prefix ./hls4ml_pr/hls4ml_pr_env python=3.11
conda activate ./hls4ml_pr/hls4ml_pr_env
cd ./hls4ml_pr/
pip install HGQ2
git clone https://github.com/fastmachinelearning/hls4ml.git
cd hls4ml
git checkout 8a4f268
pip install -e .
python3 -m pip install 'tensorflow[and-cuda]'
pip install matplotlib scipy scikit-learn scikit-image pandas awkward uproot mplhep huggingface_hub calmjs tabulate pydot graphviz sympy
cd ..
```
- Download and unzip the attached model file: test.zip
- cd into the test folder and run:
```
python minimal_reshape.py
```
Expected behavior
The script should print "Got the hls model successfully".
Actual behavior
The error message pasted in the Details section above appears.
Optional
Possible fix
I have a fix prepared in a branch named `fix_reshape_issue` in my fork and will open a pull request to address this issue shortly.
Additional context
We found this bug while working on a GSoC project. Mentor: Lino Gerlach @ligerlac