Number of filters limitation #675
Replies: 22 comments
-
Depends on your config. Assuming you use …
-
I have used the …
-
Hi @wilfredkisku, can you share your model? You may look into the tracing / profiling functionality. You can make 1D plots of the expected output vs the hls4ml output like so: https://github.com/hls4ml-finn-mlperftiny/CIFAR10/blob/main/hls4ml/convert.py#L222-L233 This can help you pinpoint which layers are causing a mismatch. Then you can increase the precision of those layers. Usually it's required to either increase the precision of the outputs or the accumulators (or both).
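For reference, a minimal sketch of that tracing workflow, assuming an hls4ml version with the tracing API and that `model` and `X_test` are already defined (the output directory and the number of samples are illustrative):

```python
import numpy as np
import hls4ml
from hls4ml.model.profiling import get_ymodel_keras

# Name-granularity config with tracing enabled on every layer
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
for layer in config['LayerName']:
    config['LayerName'][layer]['Trace'] = True

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls_trace_prj')
hls_model.compile()

# Per-layer outputs from the HLS model and the original Keras model
hls_pred, hls_trace = hls_model.trace(np.ascontiguousarray(X_test[:100]))
keras_trace = get_ymodel_keras(model, X_test[:100])

# Compare layer by layer to find where the outputs start to diverge
for name in hls_trace:
    if name in keras_trace:
        diff = np.max(np.abs(keras_trace[name] - hls_trace[name]))
        print(f'{name}: max abs difference = {diff:.4f}')
```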
-
Thanks @jmduarte for the help. I am including the model to give a better picture of what I am trying to synthesize using hls4ml. I have been testing with an IFM bit precision of 4 and a weight precision of 12, and also with 16 and 16, but the accuracy of the Keras model and the hls4ml model still differs a lot.
I am including the configuration for hls4ml that I have used.
A few other details I want to include are:
…
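(The poster's actual configuration and details are not reproduced above. As a general reference, a name-granularity precision setup in hls4ml typically looks like the sketch below; the layer name `conv2d` and the bit widths are illustrative assumptions, not the settings used in this thread.)

```python
import hls4ml

# Name-granularity config so individual layers can be overridden
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Model']['Precision'] = 'ap_fixed<16,6>'

# Example per-layer overrides: widen weights, results and accumulators of a suspect conv layer
config['LayerName']['conv2d']['Precision']['weight'] = 'ap_fixed<12,4>'
config['LayerName']['conv2d']['Precision']['result'] = 'ap_fixed<16,6>'
config['LayerName']['conv2d']['Precision']['accum'] = 'ap_fixed<24,12>'
```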
-
Hi, have you solved your problem? I also ran into the accuracy problem when I tested on a ResNet.
-
Hi, have you solved your problem? Maybe you can compare the output of the "output_softmax" layer between the hls_model and the Keras model. This is my problem: #590
-
@liuhao-97 thank you for the reply. No, I could not get it corrected; it still has the accuracy drop. Is there anything else to rectify the issue that you have pointed out?
-
Hi, have you tried with a full-precision model (ap_fixed<32,16>)? I mean, don't quantize the model and set the hls4ml config to ap_fixed<32,16>. Maybe you can check the output of the last softmax layer between the Keras model and the hls4ml model.
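A sketch of that check, assuming `model` and `X_test` are already defined (the output directory and sample count are arbitrary):

```python
import numpy as np
import hls4ml

# Convert the unquantized Keras model with a generous fixed-point type
config = hls4ml.utils.config_from_keras_model(model, granularity='model')
config['Model']['Precision'] = 'ap_fixed<32,16>'

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls_fullprec_check')
hls_model.compile()

# Compare the final (softmax) outputs of both models on the same inputs
y_keras = model.predict(X_test[:100])
y_hls = hls_model.predict(np.ascontiguousarray(X_test[:100]))
print('max abs difference:', np.max(np.abs(y_keras - y_hls)))
```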
-
In my case I found there might be some problem with the softmax layer. I printed the output of the dense layer and it works fine, but for the softmax layer the output is totally different. If you check this link https://github.com/hls4ml-finn-mlperftiny/CIFAR10/blob/main/hls4ml/convert.py you will find it removes the softmax layer, so I assume there might be some problem with the softmax layer.
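One common workaround, in the spirit of the linked convert.py (the details below are an illustrative sketch, and the layer name `output_softmax` is an assumption), is to cut the softmax off the Keras model before conversion and compare class predictions on the logits, since argmax is unchanged by softmax:

```python
import numpy as np
from tensorflow.keras.models import Model

# Build a copy of the network that stops right before the softmax layer
model_no_softmax = Model(model.inputs, model.get_layer('output_softmax').input)

# ... convert model_no_softmax with hls4ml as usual ...

# Class predictions are identical because argmax(softmax(x)) == argmax(x)
y_logits = model_no_softmax.predict(X_test)
accuracy = np.mean(np.argmax(y_logits, axis=1) == np.argmax(y_test, axis=1))
print('accuracy on logits:', accuracy)
```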
-
@liuhao-97 I tried to profile the layer but came up with a "graph disconnected" error.
-
The full-precision model works fine; for me the accuracy drops only for the quantized hls4ml model.
-
Can you print your output? Does it consist of some repeated number and some zeros, like [0.25, 0.25, 0.25, 0, 0, 0]?
-
I am not able to print the output yet.
-
Did you prune the model? I think a quantized pruned model can't work well with hls4ml. Maybe you can try a quantized model but don't prune it.
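If the model was trained with the TensorFlow Model Optimization pruning wrappers, one related thing worth trying (a sketch, not a confirmed fix for this thread) is stripping the wrappers before conversion, which keeps the sparse weights but removes the wrapper layers:

```python
import tensorflow_model_optimization as tfmot

# Remove the PruneLowMagnitude wrapper layers; the zeroed weights are kept
stripped_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
stripped_model.save('model_stripped.h5')  # convert this model with hls4ml
```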
-
Besides, which hls4ml are you using? hls4ml 0.6.0 or the newest branch?
-
I am using hls4ml 0.6.0. Has this issue been resolved in the new branch?
-
Not sure. You can have a try with the new branch. Besides, have you tried with io_type='io_parallel'? Maybe it can solve the problem.
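For reference, io_type is passed at conversion time; a minimal sketch (the output directory is arbitrary):

```python
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type='io_parallel',  # the other option is 'io_stream', often used for larger CNNs
    output_dir='hls_io_parallel')
hls_model.compile()
```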
-
Maybe you can check this: #448
-
@liuhao-97 I tried, but it still did not work for me. Did you find a workaround to make sure that the accuracy does not drop?
-
I think it is because you pruned the model. When you prune the model, somehow the original layer-by-layer connections of the model go wrong, which can be seen in your error.
-
@liuhao-97 yes, the error went away after I removed pruning. Thank you.
-
@jmduarte models that have concatenate or add layers show a considerable accuracy drop. Might be a bug.
-
Is there a limitation on the number of filters in a CNN? A layer with 32 filters tends to be the bottleneck during synthesis: the synthesis is unable to complete and gets stuck at the conv2d layer with 32 filters.
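As the first reply notes, this depends on the configuration: a fully unrolled conv layer with many filters can make HLS synthesis very slow or intractable. A sketch of settings that often help larger conv layers through synthesis (the values are illustrative, not a confirmed fix for this model):

```python
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Model']['Strategy'] = 'Resource'  # share multipliers instead of fully unrolling
config['Model']['ReuseFactor'] = 32       # trade latency for fewer DSPs

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, io_type='io_stream', output_dir='hls_resource')
hls_model.compile()
```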