
I am using Keras version 2.3.1 and TensorFlow 2.0.0.

I induce the titular error on my instantiation of the first convolutional layer in my network:

model = Sequential([
    Conv2D(16, 3, input_shape=(1, 10000, 80)),
    LeakyReLU(alpha=0.01),
    MaxPooling2D(pool_size=3),
    Conv2D(16, 3),
    LeakyReLU(alpha=0.01),
    MaxPooling2D(pool_size=3),
    Conv2D(16, 3),
    LeakyReLU(alpha=0.01),
    MaxPooling2D(pool_size=3),
    Conv2D(16, 3),
    LeakyReLU(alpha=0.01),
    MaxPooling2D(pool_size=3),
    Dense(256),
    LeakyReLU(alpha=0.01),
    Dense(32),
    LeakyReLU(alpha=0.01),
    Dense(1, activation='sigmoid')])

As far as I am aware, with the TensorFlow backend the dimension ordering should be (samples, rows, columns). My input is an array of shape (1000, 80).

I have tried all of the fixes I have found online, including:

K.common.set_image_dim_ordering('tf')
K.set_image_data_format('channels_last')
K.tensorflow_backend.set_image_dim_ordering('tf')
K.set_image_dim_ordering('tf')

However, all of these either do not change anything (as in the case of the first two) or fail at those lines (the latter two).

1 Answer

None of these fixes will work if the input_shape is wrong. The input_shape for a Conv2D layer should be (rows, cols, channels); the samples dimension is not included, as it is implicitly inserted by Keras.

The input_shape you gave would be interpreted as having a single row, which is the problem. You need to reorder your input_shape correctly and also add the channels dimension.
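To see why the ordering matters, the spatial dimensions can be traced through the conv/pool stack by hand. This is a sketch, assuming the Keras defaults of 'valid' padding and pool strides equal to pool_size; the helper names are mine, not Keras API.

```python
def conv_out(n, kernel=3):
    # 'valid' convolution: output = input - kernel + 1
    return n - kernel + 1

def pool_out(n, pool=3):
    # 'valid' max pooling with stride == pool_size
    return (n - pool) // pool + 1

# Correct input_shape for a (1000, 80) single-channel input is
# (rows, cols, channels) = (1000, 80, 1); the samples axis is
# added by Keras automatically.
shape = (1000, 80)

for i in range(4):  # four Conv2D + MaxPooling2D stages
    shape = tuple(conv_out(n) for n in shape)
    print(f"after conv {i + 1}: {shape}")
    shape = tuple(pool_out(n) for n in shape)
    print(f"after pool {i + 1}: {shape}")
```

Note that, with these particular kernel and pool sizes, the 80-column axis shrinks to zero by the fourth Conv2D, so the pool sizes may also need adjusting independently of the input_shape fix.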


2 Comments

I see on the Keras documentation (keras.io/layers/convolutional/) that Conv2D layers expect (batch, rows, cols, channels) when the data format is channels_last. I figured that with input of shape 1000, 80, I would need input_shape=(1000, 80, 1), for 1000 rows, 80 columns, and a single channel. Can you explain how I am misinterpreting this and/or what the correct input shape would be?
@BrownPhilip As I said, the samples dimension is not part of the input_shape, so it should be just (1000, 80, 1), assuming it's a one-channel image.
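As a concrete illustration of the comment above, a batch of (1000, 80) single-channel arrays can be given the trailing channels axis with NumPy before being fed to the model; the array here is random placeholder data.

```python
import numpy as np

# 4 placeholder samples, each a (1000, 80) single-channel "image"
x = np.random.rand(4, 1000, 80)

# Add the trailing channels axis: (samples, rows, cols, channels)
x = x[..., np.newaxis]
print(x.shape)  # -> (4, 1000, 80, 1)
```

The matching first layer would then use input_shape=(1000, 80, 1), since Keras supplies the samples axis itself.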
