All Questions
Tagged with half-precision-float tensorflow
10 questions
2 votes · 1 answer · 6k views
How to Enable Mixed precision training
I'm trying to train a deep learning model in VS Code, and I would like to use the GPU for that. I have CUDA 11.6, an NVIDIA GeForce GTX 1650, tensorflow-gpu==2.5.0, and pip version 21.2.3 on Windows 10. ...
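The usual answer pattern for this question: on TF ≥ 2.4 a single global policy switch enables mixed precision. A minimal sketch (the layer sizes are illustrative, not from the question):

```python
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16, keep variables (weights) in float32.
mixed_precision.set_global_policy("mixed_float16")

# Illustrative layers -- the sizes are made up.
hidden = layers.Dense(64, activation="relu")
# Keep the output layer in float32 for numerical stability.
output = layers.Dense(10, dtype="float32")

print(hidden.compute_dtype, hidden.variable_dtype)  # float16 float32
print(output.compute_dtype)                         # float32
```

With `Model.fit`, Keras applies loss scaling automatically under this policy; a custom training loop needs `tf.keras.mixed_precision.LossScaleOptimizer` to avoid float16 gradient underflow.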
17 votes · 1 answer · 28k views
How to select half precision (BFLOAT16 vs FLOAT16) for your trained model?
How will you decide which precision works best for your inference model? Both BF16 and FP16 take two bytes, but they use a different number of bits for the fraction and exponent.
The range will be different, but I ...
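The trade-off in the question can be seen numerically. A sketch using NumPy, which implements IEEE float16; bfloat16 is not in stock NumPy, so its layout is stated in comments:

```python
import numpy as np

# IEEE float16: 1 sign bit, 5 exponent bits, 10 fraction bits.
fp16 = np.finfo(np.float16)
print(fp16.max)    # 65504.0 -- largest finite float16
print(fp16.nmant)  # 10 stored fraction bits

# bfloat16 (not in stock NumPy): 1 sign, 8 exponent, 7 fraction bits --
# the same exponent range as float32 (max ~3.39e38) but much less
# precision. So BF16 wins on range, FP16 wins on precision.

# Consequence: float16 overflows where bfloat16 would not.
print(np.float16(1e5))  # inf (1e5 > 65504)
```

Roughly: pick BF16 when activations or gradients can be large (it rarely needs loss scaling), FP16 when you need the extra mantissa bits and can tolerate the narrow range.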
8 votes · 1 answer · 9k views
tensorflow - how to use 16 bit precision float
Question
float16 can be used in NumPy, but in TensorFlow 2.4.1 it causes an error.
Is float16 available only when running on an instance with a GPU that has 16-bit support?
Mixed precision
Today, most ...
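A short sketch of the point at issue: float16 tensors do work on CPU for many ops in recent TF releases, though full kernel coverage (and any speedup) needs a GPU with fp16 support. The usual fix for a dtype error is an explicit cast so both operands match:

```python
import tensorflow as tf

# Mixing float32 and float16 operands raises an error; cast explicitly.
a = tf.constant([1.0, 2.0], dtype=tf.float16)
b = tf.cast(tf.constant([3.0, 4.0]), tf.float16)  # float32 -> float16
c = a + b
print(c.dtype)  # <dtype: 'float16'>
```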
2 votes · 0 answers · 286 views
TensorFlow mixed precision training: Conv2DBackpropFilter not using TensorCore
I am using the Keras mixed precision API in order to fit my networks into GPU memory.
Typically in my code this looks like the following.
A MWE would be:
from tensorflow.keras.mixed_precision import experimental as ...
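The truncated import is the pre-TF-2.4 experimental namespace, which was later merged into `tf.keras.mixed_precision`. A sketch with the merged API, plus the shape constraint that most often explains a conv op skipping TensorCores (general cuDNN guidance, not something stated in the question):

```python
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# In TF >= 2.4, Policy(...) + experimental.set_policy(...) became:
mixed_precision.set_global_policy("mixed_float16")

# TensorCores are generally only used when channel/filter counts are
# multiples of 8; a first conv over 3 input channels (RGB) runs on
# ordinary kernels, which is one common reason Conv2DBackpropFilter
# shows up without TensorCore usage in the profiler.
conv = layers.Conv2D(filters=64, kernel_size=3, padding="same")
print(conv.compute_dtype)  # float16
```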
1 vote · 0 answers · 618 views
Failing to use TensorCore from TensorFlow Mixed Precision Tutorial
I have followed the mixed precision tutorial from TensorFlow: https://www.tensorflow.org/guide/keras/mixed_precision but apparently I fail to use TensorCores.
My setup:
Windows 10
Nvidia driver: 441....
2 votes · 0 answers · 264 views
fp16 support in the Object Detection API (TensorFlow)
fp16 support in the Object Detection API [Feature request] · Issue #3706 · tensorflow/models
https://github.com/tensorflow/models/issues/3706
I asked on the GitHub issue: fp16 support in the Object ...
0 votes · 1 answer · 2k views
Training with Keras/TensorFlow in fp16 / half-precision for RTX cards
I just got an RTX 2070 Super and I'd like to try out half-precision training using Keras with the TensorFlow back end.
So far I have found articles like this one that suggest using these settings:
import ...
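The settings such articles typically describe are the "pure fp16" route: make float16 the default dtype for every tensor and weight via the Keras backend. A hedged sketch (on RTX cards, the `mixed_float16` policy is usually the safer choice, since float32 weights avoid underflow during updates):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# Make float16 the default dtype for all tensors and weights.
K.set_floatx("float16")
print(K.floatx())  # float16
```

The same articles usually also raise Keras's fuzz factor (epsilon), because the default 1e-7 loses almost all precision in float16.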
2 votes · 2 answers · 2k views
how to do convolution with fp16(Eigen::half) on tensorflow
How can I use TensorFlow to do convolution using fp16 on the GPU (the Python API, using __half or Eigen::half)?
I want to test a model with fp16 on TensorFlow, but I got stuck. Actually, I found that ...
4 votes · 0 answers · 1k views
Tensorflow automatic mixed precision fp16 slower than fp32 on official resnet
I am trying to use the official ResNet model benchmarks from https://github.com/tensorflow/models/blob/master/official/resnet/estimator_benchmark.py#L191 to experiment with the AMP support included in ...
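The AMP graph rewrite that benchmark exercises is switched on with an environment variable, set before TensorFlow starts. A sketch, with the usual caveat that explains the observed slowdown:

```python
import os

# Enable TF's automatic mixed precision graph rewrite (TF 1.14+).
# It only pays off on Volta+ GPUs with TensorCores; with small batches
# or layer sizes that are not multiples of 8, the inserted casts can
# make fp16 run slower than fp32, which matches what this question reports.
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"
print(os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"])  # 1
```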
2 votes · 1 answer · 3k views
Mixed precision training report RET_CHECK failure, ShapeUtil::Equal(first_reduce->shape(), inst->shape())
New setup:
2x2080ti
Nvidia driver: 430
Cuda 10.0
Cudnn 7.6
Tensorflow 1.13.1
Old setup:
2x1080ti
Nvidia driver: 410
Cuda 9.0
Tensorflow 1.10
I implemented a model for segmentation; it can be trained ...