0 votes · 1 answer · 124 views

I want to apply a quantization function to a deep CNN. This CNN is used for an image classification task (4 classes), and my data consists of 224×224 images. When I run this code, I get an error. ...
jasmine
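A quantization function of the kind this question applies during training is typically a uniform affine quantize–dequantize round trip. A minimal pure-Python sketch (illustrative names and defaults, not the asker's code):

```python
def fake_quantize(x, scale, zero_point, quant_min=0, quant_max=255):
    """Quantize-dequantize round trip (uniform affine scheme).

    During QAT this is applied in the forward pass so the network
    learns weights that survive integer rounding and clamping.
    """
    q = round(x / scale) + zero_point
    q = max(quant_min, min(quant_max, q))  # clamp to the integer grid
    return (q - zero_point) * scale
```

Anything outside the representable range collapses to the clamp boundary, which is why the scale and zero-point must be chosen from observed activation statistics.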
0 votes · 0 answers · 58 views

I’m applying QAT to a YOLOv8n model with the following configuration: QConfig(activation=FakeQuantize.with_args(observer=MovingAverageMinMaxObserver, quant_min=0, quant_max=...
Matteo
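For context on the entry above: a MovingAverageMinMaxObserver tracks a running min/max of the activations and derives scale and zero-point from it. A schematic pure-Python version (simplified, not the actual torch implementation):

```python
def qparams_from_minmax(min_val, max_val, quant_min=0, quant_max=255):
    """Affine (asymmetric) quantization parameters from an observed range."""
    min_val = min(min_val, 0.0)  # representable range must include zero
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (quant_max - quant_min)
    zero_point = round(quant_min - min_val / scale)
    return max(quant_min, min(quant_max, zero_point)), scale

def update_running_range(old, observed, momentum=0.01):
    """Moving-average update of the tracked (min, max) pair."""
    return tuple(o + momentum * (n - o) for o, n in zip(old, observed))
```

The momentum default mirrors the common averaging-constant choice; the real observer also handles per-channel and symmetric variants.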
1 vote · 0 answers · 42 views

I am trying to quantize a model in TensorFlow using tfmot. This is a sample model: inputs = keras.layers.Input(shape=(512, 512, 1)) x = keras.layers.Conv2D(3, kernel_size=1, padding='same')(inputs) x =...
Sai
1 vote · 0 answers · 109 views

I am quantizing a neural network using QAT and I want to convert it into tflite. Quantization nodes get added to the skeleton graph and we get a new graph. I am able to load the trained QAT ...
Prateek Sharma
0 votes · 1 answer · 654 views

I am trying to implement a Quantization Aware Training (QAT) resnet18 model. While inferring, I get this error: NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' ...
Pavan Varyani
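The 'aten::add.out' failure in the entry above typically comes from a bare `+` between quantized tensors in an eager-mode model; the usual remedy is to route the add through `FloatFunctional` so the convert step can substitute a quantized kernel. A minimal sketch (hypothetical module, not the asker's resnet18):

```python
import torch
import torch.nn as nn

class SkipAdd(nn.Module):
    """Residual add written so eager-mode quantization can lower it.

    A plain `x + y` dispatches to aten::add.out, which has no
    QuantizedCPU kernel; FloatFunctional.add is observed during QAT
    and replaced by a quantized add at convert time.
    """
    def __init__(self):
        super().__init__()
        self.skip_add = torch.ao.nn.quantized.FloatFunctional()

    def forward(self, x, y):
        return self.skip_add.add(x, y)  # instead of x + y
```

In float/QAT mode this behaves like an ordinary add; the difference only matters after `torch.ao.quantization.convert`.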
0 votes · 1 answer · 86 views

So I am training this small CNN model which has a few Conv2D layers and some MaxPool2D, Activation, and Dense layers, basically the basic layers that TensorFlow provides. I want it to run on an embedded system ...
Jhon Margalit
0 votes · 1 answer · 261 views

I'm trying to test Quantization Aware Training from TensorFlow Lite. The following source code creates an AI model (variable: model) trained on the MNIST dataset (just 1 epoch for testing purposes). ...
eddy33
0 votes · 1 answer · 941 views

I'm currently learning TinyML with TensorFlow Lite and TensorFlow Lite for Micro. I'm working with the book "Hands-on TinyML" by R. Banerjee. I'm trying to quantize a model but it ...
eddy33
1 vote · 0 answers · 70 views

I want to do Quantization Aware Training. Here's my model architecture: Model: "sequential_4" Layer (type) ...
Vina
0 votes · 0 answers · 185 views

I am using the TensorFlow Lite framework in order to create a quantized model for an experiment. I want to deploy this model on my Raspberry Pi, but it seems that using a pretrained model for quantizing ...
Ayush Dave
0 votes · 0 answers · 116 views

I have been trying to use a pretrained model from the tensorflow.keras library (MobileNet). If I try to quantize it using tfmot.quantization.keras.quantize_model(base_model), it gives me an error ...
Ayush Dave
1 vote · 0 answers · 279 views

I am trying to learn about quantization, so I was playing with a GitHub repo, trying to quantize it into int8 format. I have used the following code to quantize the model: modelClass = DTLN_model() ...
Niaz Palak
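As background for the int8 conversion attempted above: post-training weight quantization is often just symmetric per-tensor scaling. A minimal NumPy sketch (illustrative only, unrelated to the DTLN repo's own code):

```python
import numpy as np

def quantize_weights_int8(w):
    """Symmetric per-tensor int8 quantization of a float weight array."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale
```

Per-channel scales (one per output filter) usually recover more accuracy than the single per-tensor scale shown here.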
0 votes · 0 answers · 205 views

I was fine-tuning a Llama-architecture model that supports multiple languages: English, Hindi, as well as Roman Hindi. So I loaded the model in quantized form using bitsandbytes in nf4 form along with ...
Killua
1 vote · 0 answers · 377 views

I am seeking assistance regarding the conversion of the MediaPipe FaceMeshV2 model for use with the Coral EdgeTPU Accelerator. As per the Coral documentation, a model must undergo full integer ...
Miass500
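The full-integer quantization the Coral entry above refers to is driven by a representative dataset during TFLite conversion. A sketch of such a generator (NumPy only; the 192×192×3 input shape and random data are placeholder assumptions, real calibration should use actual preprocessed frames):

```python
import numpy as np

def representative_dataset(num_samples=100, shape=(1, 192, 192, 3)):
    """Yield calibration batches for full-int8 TFLite conversion.

    The converter runs these samples through the float model to pick
    quantization ranges; random data is a placeholder only (assumed
    shape), real images give far better ranges.
    """
    rng = np.random.default_rng(0)
    for _ in range(num_samples):
        yield [rng.random(shape, dtype=np.float32)]

# Wiring it up (sketch): converter.representative_dataset = representative_dataset,
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8],
# and int8 inference_input_type / inference_output_type for the EdgeTPU compiler.
```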
1 vote · 1 answer · 392 views

I want to do 1D-CNN quantization aware training, but it gives the error: keras.src.layers.convolutional.conv1d.Conv1D'> is not supported. You can quantize this layer by passing a `tfmot.quantization....
Kia
