I'm trying to do the TensorFlow equivalent of torchvision.transforms.Resize(TRAIN_IMAGE_SIZE), which resizes an image so that its smallest dimension becomes TRAIN_IMAGE_SIZE while preserving the aspect ratio. Something like this:
import tensorflow as tf

def transforms(filename):
    # the label is the name of the parent directory
    parts = tf.strings.split(filename, '/')
    label = parts[-2]
    image = tf.io.read_file(filename)
    image = tf.image.decode_jpeg(image)
    image = tf.image.convert_image_dtype(image, tf.float32)
    # this doesn't work inside Dataset.map(), because there image.shape=(None, None, 3)
    image = largest_sq_crop(image)
    image = tf.image.resize(image, (256, 256))
    return image, label

list_ds = tf.data.Dataset.list_files('{}/*/*'.format(DATASET_PATH))
images_ds = list_ds.map(transforms).batch(4)
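For reference, this is the PyTorch preprocessing I'm trying to reproduce (an int argument to Resize rescales the smaller edge to that size and keeps the aspect ratio); the value 256 below is just illustrative:

from torchvision import transforms

TRAIN_IMAGE_SIZE = 256  # illustrative value
resize = transforms.Resize(TRAIN_IMAGE_SIZE)  # smaller edge -> TRAIN_IMAGE_SIZE, aspect ratio kept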
The simple answer is here: Tensorflow: Crop largest central square region of image
But when I use that method with tf.data.Dataset.map(transforms), I get shape=(None, None, 3) inside largest_sq_crop(image). The method works fine when I call it directly on an eager tensor.
It seems EagerTensors are not available within Dataset.map(), so the shape is unknown. Is there a workaround to make largest_sq_crop work inside Dataset.map()?
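For context, here is roughly what a largest_sq_crop along the lines of the linked answer looks like (my own sketch, names are illustrative); it reads the static image.shape, which is exactly what is unavailable inside Dataset.map():

def largest_sq_crop(image):
    # static shape: both spatial entries are None inside Dataset.map()
    h, w = image.shape[0], image.shape[1]
    side = min(h, w)               # fails when h and w are None
    offset_h = (h - side) // 2
    offset_w = (w - side) // 2
    return tf.image.crop_to_bounding_box(image, offset_h, offset_w, side, side)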