Accuracy#
Possible Causes of Accuracy Issues#
Accuracy degradation can be caused by multiple issues, including:
- A bug
- Weights precision
- Output channels precision
- Other issues:
  - Using a quantized model
  - Ignoring the "approximated operator" warning
  - etc.
Troubleshooting Steps#
We suggest following these troubleshooting steps:
1. Check the compiler warnings
To identify operator approximation issues
2. Use the correct input and output data shapes
MemryX accelerators use the channel-last (e.g., NHWC) data format
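For example, a minimal sketch (assuming your preprocessed input is a NumPy array in channel-first NCHW layout, as produced by many CPU frameworks) of converting it to channel-last NHWC before feeding the accelerator:

```python
import numpy as np

# Hypothetical preprocessed frame in channel-first (NCHW) layout
x_nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Convert to channel-last (NHWC) before feeding the accelerator
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))
print(x_nhwc.shape)  # (1, 224, 224, 3)
```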
3. Compile using unquantized models
To avoid double-quantization errors
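As a quick sanity check, you can inspect the weight data types of an ONNX model to spot quantized (INT8/UINT8) weights before compiling. This is a generic sketch, and the model.onnx path is illustrative:

```python
import onnx

model = onnx.load("model.onnx")  # hypothetical model path

# Collect the data types of all weight initializers in the graph
dtypes = {onnx.TensorProto.DataType.Name(init.data_type) for init in model.graph.initializer}
print(dtypes)  # An unquantized model typically reports {'FLOAT'}; INT8/UINT8 suggests quantized weights
```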
4. Check if your operators are supported
See the supported operators list: Supported Operators
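To compare your model against that list, you can enumerate the operator types the model actually uses. A minimal sketch for an ONNX model (the model.onnx path is illustrative):

```python
import onnx

model = onnx.load("model.onnx")  # hypothetical model path

# Unique operator types used by the graph; compare these against the supported operators list
op_types = sorted({node.op_type for node in model.graph.node})
print(op_types)
```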
5. Use an identical testing flow and post-processing (CPU vs. accelerator)
Apply the same pre- and post-processing in both flows
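One way to keep the two flows comparable is to feed the exact same preprocessed inputs to the CPU reference and to the accelerator, then compare the raw outputs before any post-processing. A sketch assuming you already have two same-shaped output arrays (cpu_out from the CPU framework run, accl_out from the accelerator run); the placeholder data below is only for illustration:

```python
import numpy as np

def compare_outputs(cpu_out: np.ndarray, accl_out: np.ndarray) -> None:
    """Report how far the accelerator output deviates from the CPU reference."""
    diff = np.abs(cpu_out.astype(np.float32) - accl_out.astype(np.float32))
    print(f"max abs diff : {diff.max():.6f}")
    print(f"mean abs diff: {diff.mean():.6f}")

# Placeholder data; replace with real CPU and accelerator outputs
cpu_out = np.random.rand(1, 1000).astype(np.float32)
accl_out = cpu_out + np.random.normal(0, 1e-3, cpu_out.shape).astype(np.float32)
compare_outputs(cpu_out, accl_out)
```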
6. Use an accuracy metric rather than visual inspection whenever possible
If you trained the model yourself, use the accuracy metric you used during training.
If you use an off-the-shelf model, use standard accuracy metrics.
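For example, a minimal top-1 classification accuracy sketch (assuming you have per-image logits and ground-truth labels as NumPy arrays); substitute the metric appropriate to your task, such as mAP for detection or mIoU for segmentation:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    return float((np.argmax(logits, axis=1) == labels).mean())

# Placeholder data; replace with the outputs and labels of your validation set
logits = np.random.rand(100, 1000).astype(np.float32)
labels = np.random.randint(0, 1000, size=100)
print(f"top-1 accuracy: {top1_accuracy(logits, labels):.3f}")
```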
7. Run one model at a time on the accelerator (in multi-model deployments)
To identify which model is introducing the error
Run the remaining models on the host
8. Try double-precision weights
To rule out a weights-precision issue
Should be applied to unquantized models
Please check: Mixed-Precision Weights
If double precision helps, you can use our experimental auto-double-precision feature by compiling your model with the following flag:
--exp_auto_dp
9. Try High Precision Output Channels (HPOC)
Can be used to enhance the precision of critical output channels (e.g., bounding box coordinates)
Helps with output-channel precision-related issues
Should be released soon
10. Crop your model
Helps to check whether a given section of the model is causing the accuracy degradation
You can use the neural compiler's manual cropping feature: Model Cropping
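As a generic illustration of the idea (not the MemryX cropping feature itself), you can also extract a sub-graph of an ONNX model between chosen tensors with onnx.utils.extract_model and evaluate that section in isolation. The tensor names and file paths below are hypothetical; inspect your model (e.g., with Netron) to pick real boundary tensors:

```python
import onnx.utils

# Extract only the portion of the graph between the chosen boundary tensors.
# "input" and "backbone_out" are hypothetical tensor names.
onnx.utils.extract_model(
    input_path="model.onnx",
    output_path="model_backbone.onnx",
    input_names=["input"],
    output_names=["backbone_out"],
)
```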