All Questions
Tagged with half-precision-float deep-learning
4 questions
0 votes · 0 answers · 25 views
Deviation caused by half() in PyTorch
I have a tensor whose value is 6.3982e-2 in float32. After I converted it to float16 using the half() function, it became 6.3965e-2. Is there a method to convert the tensor without ...
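The deviation described in this question can be reproduced without PyTorch: the rounding is a property of the IEEE 754 half-precision format itself, not of `half()`. A minimal sketch using NumPy's `float16` (the question uses PyTorch, but the cast rounds identically):

```python
import numpy as np

x = np.float32(6.3982e-2)   # original float32 value from the question
y = np.float16(x)           # cast to half precision (what torch's .half() does)

# float16 keeps only 10 fraction bits, so x is rounded to the nearest
# representable value, which is 1048 / 2**14
print(float(y))   # 0.06396484375, i.e. ~6.3965e-2
```

Since float16 simply cannot represent 6.3982e-2 exactly, no lossless 16-bit conversion of this value exists; keeping the tensor in float32, or accepting the rounding, are the realistic options.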
17 votes · 1 answer · 28k views
How to select half precision (BFLOAT16 vs FLOAT16) for your trained model?
How do you decide which precision works best for your inference model? Both BF16 and FP16 take two bytes, but they use different numbers of bits for the fraction and the exponent.
The range will be different, but I ...
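The trade-off behind this question comes down to how the 16 bits are split. A small sketch computing the resulting range and precision from the published bit layouts (float16: 1 sign / 5 exponent / 10 fraction bits; bfloat16: 1 / 8 / 7):

```python
def format_stats(exp_bits, frac_bits):
    """Largest finite value and machine epsilon for a binary float format."""
    bias = 2 ** (exp_bits - 1) - 1          # exponent bias
    max_finite = (2 - 2 ** -frac_bits) * 2 ** bias
    epsilon = 2.0 ** -frac_bits             # spacing at 1.0
    return max_finite, epsilon

fp16_max, fp16_eps = format_stats(exp_bits=5, frac_bits=10)
bf16_max, bf16_eps = format_stats(exp_bits=8, frac_bits=7)

print(fp16_max, fp16_eps)   # 65504.0  0.0009765625  -> finer precision, narrow range
print(bf16_max, bf16_eps)   # ~3.39e38 0.0078125     -> coarser precision, float32-like range
```

Roughly: BF16 keeps the float32 exponent range (activations and gradients that overflow FP16's 65504 ceiling stay finite), while FP16 keeps three extra fraction bits (less rounding error when the values do fit). The usual advice is to benchmark both on your own model, since which limit bites first is model-dependent.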
1 vote · 0 answers · 120 views
Is there an implementation of the Keras Adam Optimizer that supports Float16?
I am currently working on deploying tiny-yolov3 with the OpenVINO toolkit, and for that I need to convert my model to float16. But for that I need an optimizer that supports FP16. I tried modifying SGD to ...
1 vote · 0 answers · 618 views
Failing to use Tensor Cores with the TensorFlow Mixed Precision Tutorial
I have followed the mixed-precision tutorial from TensorFlow (https://www.tensorflow.org/guide/keras/mixed_precision), but apparently I fail to use Tensor Cores.
My setup:
Windows 10
Nvidia driver: 441....