My network takes images of size 100 x 100 pixels. Therefore I have to resize the images of my dataset, which are of varying sizes. I want to extract the largest central square region from a given image and then resize it to 100 x 100.
To be more precise, suppose an image has a width of 200 pixels and a height of 50 pixels. Then I want to extract the largest central square region, which in this example is 50 x 50, and then resize the image to 100 x 100 pixels.
What is the right way to do that in TensorFlow? Right now I am using tf.image.resize_images(), which distorts the image, and I want to get rid of that.
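For reference, here is a minimal sketch of the transform I am describing, assuming an image tensor of shape (height, width, channels) and using tf.image.crop_to_bounding_box followed by tf.image.resize (tf.image.resize_images in older TF 1.x); the function name is my own:

```python
import tensorflow as tf

def central_square_resize(image, target_size=100):
    """Crop the largest centered square from `image` (H, W, C),
    then resize it to (target_size, target_size) without distortion."""
    shape = tf.shape(image)
    h, w = shape[0], shape[1]
    side = tf.minimum(h, w)        # side length of the largest square that fits
    top = (h - side) // 2          # offsets that center the crop
    left = (w - side) // 2
    square = tf.image.crop_to_bounding_box(image, top, left, side, side)
    return tf.image.resize(square, [target_size, target_size])
```

For the 200 x 50 example above, this would crop the central 50 x 50 region and then scale it up to 100 x 100.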


Edit: I was asked whether my network takes input of shape (batch_size, height, width, channels) or something else. Each image in my dataset has shape (width, height, channels). I cannot present my dataset as (batch_size, height, width, channels), since the height and width of each image can vary.
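One way to sidestep the varying-shape problem, assuming a tf.data pipeline (the helper and dataset here are hypothetical), is to map every image to the fixed 100 x 100 size first, so that batching into (batch_size, 100, 100, channels) becomes possible:

```python
import tensorflow as tf

def to_fixed_size(image):
    """Crop the largest centered square, then resize to 100 x 100."""
    shape = tf.shape(image)
    side = tf.minimum(shape[0], shape[1])
    square = tf.image.crop_to_bounding_box(
        image, (shape[0] - side) // 2, (shape[1] - side) // 2, side, side)
    return tf.image.resize(square, [100, 100])

# Two dummy images of different sizes stand in for the real dataset.
images = [tf.zeros([50, 200, 3]), tf.zeros([120, 80, 3])]
ds = tf.data.Dataset.from_generator(
    lambda: iter(images),
    output_signature=tf.TensorSpec([None, None, 3]))
ds = ds.map(to_fixed_size).batch(2)  # now every element is 100 x 100 x 3
```

After the map step all elements share one shape, so batch() no longer fails on mismatched dimensions.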