
Hi, I am trying out a simple autoencoder in Python 3.5 using the Keras library. The issue I face is: ValueError: Error when checking input: expected input_40 to have 2 dimensions, but got array with shape (32, 256, 256, 3). My dataset is very small (60 RGB images of dimension 256*256, plus one image of the same type for validation). I am a bit new to Python. Please help.

import matplotlib.pyplot as plt
from keras.layers import Input, Dense
from keras.models import Model

#Declaring the model
encoding_dim = 32
input_img = Input(shape=(65536,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(65536, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')


#Constructing a data generator iterator
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory(
    'C:\\Users\\vlsi\\Desktop\\train',
    batch_size=32,
    class_mode='binary')
test_set = test_datagen.flow_from_directory(
    'C:\\Users\\vlsi\\Desktop\\validation',
    batch_size=32,
    class_mode='binary')


#Fitting the data
autoencoder.fit_generator(training_set,
                          steps_per_epoch=80,
                          epochs=25,
                          validation_data=test_set,
                          validation_steps=20)

import numpy as np
from keras.preprocessing import image

test_image = image.load_img('C:\\Users\\vlsi\\Desktop\\validation\\validate\\apple1.jpg')
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)

#Displaying output
encoded_imgs = encoder.predict(test_image)
decoded_imgs = decoder.predict(encoded_imgs)
plt.imshow(decoded_imgs)

1 Answer


The problem is here:

input_img = Input(shape=(65536,))

You told Keras the input to the network would have 65K dimensions, i.e. a tensor of shape (samples, 65536), but your actual inputs have shape (samples, 256, 256, 3). An easy solution is to declare the real input shape and let the network perform the necessary reshaping:

from keras.layers import Input, Dense, Flatten, Reshape

input_img = Input(shape=(256, 256, 3))
flattened = Flatten()(input_img)
encoded = Dense(encoding_dim, activation='relu')(flattened)
decoded = Dense(256 * 256 * 3, activation='sigmoid')(encoded)
decoded = Reshape((256, 256, 3))(decoded)
autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
# Chain the last Dense and the Reshape layers, so the standalone decoder
# maps (encoding_dim,) -> (256, 256, 3). Taking only layers[-1] (the Reshape)
# would fail, since it cannot be applied to a 32-dimensional input.
decoder = Model(encoded_input,
                autoencoder.layers[-1](autoencoder.layers[-2](encoded_input)))

Note that I added a Flatten and a Reshape layer first to flatten the image, and then to take the flattened image back to the shape (256, 256, 3).
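A quick way to confirm the fix is to push a random batch through the rebuilt graph and check that the output shape matches the input. A minimal sketch, assuming tf.keras and reusing the layer names from the answer (the random batch stands in for your real images):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, Flatten, Reshape
from tensorflow.keras.models import Model

encoding_dim = 32
input_img = Input(shape=(256, 256, 3))
flattened = Flatten()(input_img)                             # -> (None, 196608)
encoded = Dense(encoding_dim, activation='relu')(flattened)  # -> (None, 32)
decoded = Dense(256 * 256 * 3, activation='sigmoid')(encoded)
decoded = Reshape((256, 256, 3))(decoded)                    # back to image shape
autoencoder = Model(input_img, decoded)

# A fake batch of two RGB images, same shape the generator would yield
batch = np.random.rand(2, 256, 256, 3).astype('float32')
out = autoencoder.predict(batch)
print(out.shape)  # (2, 256, 256, 3)
```

If the shapes did not line up, predict would raise the same kind of ValueError as in the question, so this is a cheap sanity check before training.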


5 Comments

Thank you. The dimensionality error is gone. I got confused by the 256*256 = 65,536 arithmetic and thought it would be a one-dimensional input. It is throwing another error, but I think I can solve it now.
Hi, I think the new issue is in the line decoder = Model(encoded_input, decoder_layer(encoded_input)).
@mrin9san You have to be specific with error messages; just saying an issue is in this line doesn't tell me what the problem or the error message is.
"ValueError: total size of new array must be unchanged" is the error that comes up. I think the arguments passed to the decoder model are creating the problem.
Hi, I figured out the error source. It turned out the error was coming from the ImageDataGenerator construction: class_mode should be 'input' instead of 'binary', which is what gave the ValueError.
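For context on that last comment: with class_mode='input', flow_from_directory yields (x, x) batches, i.e. the targets are the images themselves, which is exactly what an autoencoder trains against. A hedged sketch, assuming tf.keras and Pillow are available; the temporary directory here is a stand-in for the real dataset path:

```python
import os
import tempfile
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Build a throwaway directory with one subfolder and one random image,
# mimicking the train/validation folder layout from the question.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'class0'))
Image.fromarray(np.uint8(np.random.rand(256, 256, 3) * 255)).save(
    os.path.join(root, 'class0', 'img.png'))

gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    root, target_size=(256, 256), batch_size=1, class_mode='input')

x, y = next(gen)
print(x.shape)           # (1, 256, 256, 3)
print(np.allclose(x, y))  # True: targets equal inputs
```

With class_mode='binary' the generator instead yields class labels as targets, which cannot match the (256, 256, 3) output of the autoencoder, hence the ValueError.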
