
The code is a little long, so please check it on this Google Colab link.

I am building an autoencoder. It worked fine at first, but after adding one more CNN layer (i.e. after changing layer_filters = [32, 64] to layer_filters = [32, 64, 128]) I get a dimension error. This one:

ValueError: Dimensions must be equal, but are 32 and 28 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](autoencoders/decoder/decoder_output/Sigmoid, IteratorGetNext:1)' with input shapes: [32,32,32,1], [32,28,28,1].

I think the encoder's and decoder's dimensions differ because of the added layer, but I don't know how to make them the same. Can anyone help?

EDIT - @Kaveh has answered this question below; I did what he said and it worked. So, if anyone is visiting this question now, please note that the notebook I mentioned earlier has been updated and no longer produces the traceback.

  • Please update the question with the full error trace. Commented Aug 15, 2021 at 18:42
  • The encoder input dimension is (None, 28, 28, 1) and the decoder output dimension is (None, 32, 32, 1). You are passing X_train as both the input and the target in autoencoder.fit() (and likewise X_test), so the (None, 28, 28, 1) targets are compared against the (None, 32, 32, 1) output, which is incompatible. TL;DR: since the encoder input and the decoder output both receive the same data, their dimensions should be the same. Commented Aug 15, 2021 at 19:24

1 Answer


Reason:

Your labels' shape (28, 28, 1) is incompatible with the model's output shape (32, 32, 1), and this is because of the repeated halving in your encoder and doubling in your decoder.


Source:

The input shape is (28, 28), and with layer_filters = [32, 64] the spatial size changes like this:

  • encoder: 28 -> 14 -> 7
  • decoder: 7 -> 14 -> 28

So, the input and output shapes are the same (28) and it works fine. But when you add another layer with 128 filters (layer_filters = [32, 64, 128]), the shape changes like this:

  • encoder: 28 -> 14 -> 7 -> 4
  • decoder: 4 -> 8 -> 16 -> 32

Now 32 and 28 are incompatible, and you get the error.
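The shape changes above can be checked with a quick pure-Python sketch (not the model code itself; it just applies the standard Keras rules: a stride-2, padding='same' Conv2D gives ceil(in / 2), and a stride-2 Conv2DTranspose with padding='same' gives in * 2):

```python
import math

def encoder_shapes(size, n_layers):
    """Spatial size after each stride-2, padding='same' Conv2D: out = ceil(in / 2)."""
    shapes = [size]
    for _ in range(n_layers):
        size = math.ceil(size / 2)
        shapes.append(size)
    return shapes

def decoder_shapes(size, n_layers):
    """Spatial size after each stride-2, padding='same' Conv2DTranspose: out = in * 2."""
    shapes = [size]
    for _ in range(n_layers):
        size *= 2
        shapes.append(size)
    return shapes

print(encoder_shapes(28, 2))  # [28, 14, 7]
print(decoder_shapes(7, 2))   # [7, 14, 28]  -> matches the 28x28 input

print(encoder_shapes(28, 3))  # [28, 14, 7, 4]  (7 is odd, so it rounds up to 4)
print(decoder_shapes(4, 3))   # [4, 8, 16, 32]  -> 32 != 28, hence the error
```

With two layers the round trip is lossless, but with three, 7 rounds up to 4 and the decoder then doubles past 28 to 32.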


Solution:

Change your layer configuration in such a way that the input and output get the same shape. For example:

  • You can remove strides = 2, padding = 'same' in both the encoder and decoder for loops:
    • encoder: 28 -> 26 -> 24 -> 22
    • decoder: 22 -> 24 -> 26 -> 28

or

  • Do not add more than 2 Conv2D layers, since at the 3rd layer an odd size (7) gets halved and rounded up to 4, and doubling cannot recover the original size on the way back.
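The first option can also be sketched in pure Python (a sketch, not the notebook's code; it assumes the default kernel_size=3, so with strides=1 and padding='valid' a Conv2D shrinks the size by kernel - 1 = 2, and a Conv2DTranspose grows it by 2):

```python
def conv_out(size, kernel=3):
    # Conv2D, strides=1, padding='valid': out = in - (kernel - 1)
    return size - (kernel - 1)

def deconv_out(size, kernel=3):
    # Conv2DTranspose, strides=1, padding='valid': out = in + (kernel - 1)
    return size + (kernel - 1)

layer_filters = [32, 64, 128]

size = 28
enc = [size]
for _ in layer_filters:
    size = conv_out(size)
    enc.append(size)

dec = [size]
for _ in layer_filters:
    size = deconv_out(size)
    dec.append(size)

print(enc)  # [28, 26, 24, 22]
print(dec)  # [22, 24, 26, 28]  -> the output matches the 28x28 input again
```

Because subtraction by 2 is exactly undone by addition of 2, this works for any number of layers, unlike halving, which loses information on odd sizes.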

2 Comments

Can you tell me how you knew that removing strides and padding would solve this problem?
@Hobo I took a look at the full code (link) he provided, and as I said, with this config removed from the layer definitions the spatial size is just reduced by 2 at each layer (no padding, strides=1) instead of being halved.
