
I followed the steps outlined in the following documentation to train my custom object detection model: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html. For reference, I'm using TF 2.10. However, after converting it to a tflite model and using it in an Android application written in Java, I get the following error:

EXCEPTION: Failed on interpreter inference -> Cannot copy from a TensorFlowLite tensor (StatefulPartitionedCall:1) with shape [1,10] to a Java object with shape [1,10,4].

Prior to TensorFlow 2.6, the output tensor order was boxes, classes, scores, number of detections. Now, it seems to have changed to scores, boxes, number of detections, classes. That matches the error: the Java buffer allocated for the boxes output (shape [1,10,4]) now receives the first output, which is the scores tensor (shape [1,10]).

I have tried two things: 1) downgrading to TF 2.5, which solves the problem but raises incompatibility issues with other libraries, so I would prefer to avoid it; 2) declaring the sequence of outputs explicitly using the metadata writer, based on one of the suggestions here (sketched below); however, this still raises the same exception as stated above. After loading the model (after the metadata writer step) and inspecting the output details, I see the following:

[{'name': 'StatefulPartitionedCall:1', 'index': 249, 'shape': array([ 1, 10]), 'shape_signature': array([ 1, 10]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3', 'index': 247, 'shape': array([ 1, 10,  4]), 'shape_signature': array([ 1, 10,  4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:0', 'index': 250, 'shape': array([1]), 'shape_signature': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:2', 'index': 248, 'shape': array([ 1, 10]), 'shape_signature': array([ 1, 10]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

The order of the shapes displayed still does not match the expected order of boxes, classes, scores, number of detections. Without having to modify the Android app code, is there anything else that can be done to avoid the reordering of the outputs during the tflite conversion?
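
For reference, the metadata writer step mentioned above was roughly the following (a minimal sketch assuming the tflite_support metadata writers; the file paths, label file, and normalization values are placeholders, not the exact values I used):

from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils

MODEL_PATH = "model.tflite"                 # converted model (placeholder path)
LABEL_FILE = "labelmap.txt"                 # one class name per line (placeholder)
EXPORT_PATH = "model_with_metadata.tflite"  # where the annotated model is written

# create_for_inference takes the model buffer, the input normalization
# mean/std, and the label file(s), and attaches object detection metadata
# (locations, classes, scores, number of detections) to the model.
writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file(MODEL_PATH),
    [127.5],        # input_norm_mean (placeholder preprocessing values)
    [127.5],        # input_norm_std
    [LABEL_FILE])
writer_utils.save_file(writer.populate(), EXPORT_PATH)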

If required, here is the simple conversion script I'm using to convert the tflite-friendly saved_model to a tflite model:

import tensorflow as tf
import argparse

parser = argparse.ArgumentParser(
    description="tfLite Converter")

parser.add_argument("--saved_model_path",
                    help="Path to the tflite-friendly saved_model directory", type=str)
parser.add_argument("--tflite_model_path",
                    help="Path to write the converted .tflite model", type=str)

args = parser.parse_args()

# Convert the saved_model to a tflite flatbuffer with default settings
converter = tf.lite.TFLiteConverter.from_saved_model(args.saved_model_path)
tflite_model = converter.convert()

with open(args.tflite_model_path, 'wb') as f:
    f.write(tflite_model)

1 Answer


You can try making the converter settings explicit in the conversion script:

import tensorflow as tf
import argparse

parser = argparse.ArgumentParser(description="tfLite Converter")
parser.add_argument("--saved_model_path", help="Path to the saved model", type=str)
parser.add_argument("--tflite_model_path", help="Path to save the tflite model", type=str)
args = parser.parse_args()

converter = tf.lite.TFLiteConverter.from_saved_model(args.saved_model_path)

# Use the new MLIR-based converter (already the default in recent TF 2.x releases)
converter.experimental_new_converter = True

# Restrict the converted model to TFLite builtin ops
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()

with open(args.tflite_model_path, 'wb') as f:
    f.write(tflite_model)


After converting the model, verify the output details to check whether the order matches what the Android application expects:


import tensorflow as tf

# Load the converted model (adjust the path to your converted file)
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Print the name, shape, and index of each output tensor
output_details = interpreter.get_output_details()
for detail in output_details:
    print(detail)

If the output order still does not match, you may need to adjust the Android code to read the outputs in the order the model actually produces them.
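
A minimal sketch of that adjustment, assuming the index-to-tensor mapping shown in the question's output dump (scores, boxes, number of detections, classes, with at most 10 detections); modelBuffer and inputImage are placeholders for the app's own model loading and image preprocessing:

import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.util.HashMap;
import java.util.Map;
import org.tensorflow.lite.Interpreter;

public class DetectorOutputs {
    // Reads the four detection outputs by output index instead of assuming
    // the pre-2.6 order; the index -> meaning mapping below follows the
    // output_details dump in the question.
    public static void runInference(MappedByteBuffer modelBuffer, ByteBuffer inputImage) {
        Interpreter tflite = new Interpreter(modelBuffer);

        float[][] scores = new float[1][10];       // StatefulPartitionedCall:1, shape [1, 10]
        float[][][] boxes = new float[1][10][4];   // StatefulPartitionedCall:3, shape [1, 10, 4]
        float[] numDetections = new float[1];      // StatefulPartitionedCall:0, shape [1]
        float[][] classes = new float[1][10];      // StatefulPartitionedCall:2, shape [1, 10]

        Map<Integer, Object> outputs = new HashMap<>();
        outputs.put(0, scores);
        outputs.put(1, boxes);
        outputs.put(2, numDetections);
        outputs.put(3, classes);

        tflite.runForMultipleInputsOutputs(new Object[] {inputImage}, outputs);
        tflite.close();
    }
}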
