
Inference with TimeDistributed(Dense(1, activation="sigmoid")) in Keras 3 is much slower than in Keras 2 (tf.keras). Profiling shows that TimeDistributed is the bottleneck.

Model: Conv1D → LSTM → Conv1DTranspose → TimeDistributed(Dense(1)).

Input shape: (None, None, 1)

Tried: run_eagerly=False, wrapping inference in tf.function, and switching between the TensorFlow, JAX, and PyTorch backends. None gave a significant improvement.
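For reference, a minimal sketch of the architecture described above (the filter counts, kernel sizes, and LSTM units are assumptions; the question does not give them):

```python
import numpy as np
import keras
from keras import layers

# Sketch of the described model: Conv1D -> LSTM -> Conv1DTranspose ->
# TimeDistributed(Dense(1)). Layer sizes are illustrative assumptions.
model = keras.Sequential([
    layers.Input(shape=(None, 1)),  # (batch, time, channels), variable length
    layers.Conv1D(16, 3, padding="same", activation="relu"),
    layers.LSTM(16, return_sequences=True),
    layers.Conv1DTranspose(16, 3, padding="same", activation="relu"),
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
])

x = np.random.rand(2, 32, 1).astype("float32")
y = model.predict(x, verbose=0)
print(y.shape)  # (2, 32, 1)
```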

Question

Is there a solution to fix the TimeDistributed slowdown in Keras 3 without changing the model architecture? Currently using tf_keras as a workaround.

Details

Keras Version: 3.9.0

1 Answer

While this does not directly explain or address the slowdown, there is actually no need to wrap Dense layers in TimeDistributed: Dense is implemented so that it is always applied to the last axis only, regardless of the input's rank.

So you can apply Dense directly to the Conv1DTranspose outputs and it will work the same way (assuming you are using channels_last format for convolutions). Unfortunately, this does imply "changing the model architecture" as you say.
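A quick way to convince yourself of the equivalence is to wrap one Dense layer in TimeDistributed and compare both outputs on the same 3D input (the shapes here are arbitrary illustration values):

```python
import numpy as np
import keras
from keras import layers

# Dense applied to a 3D tensor acts on the last axis only, so wrapping it
# in TimeDistributed produces the same result with the same weights.
dense = layers.Dense(1, activation="sigmoid")
td = layers.TimeDistributed(dense)  # wraps the very same Dense layer

x = np.random.rand(2, 32, 8).astype("float32")
y_dense = np.asarray(dense(x))  # (2, 32, 1)
y_td = np.asarray(td(x))        # (2, 32, 1)

print(np.allclose(y_dense, y_td))  # True
```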

  • Thanks a lot for your answer, I'll investigate and test your suggestion. Thanks again.
    – Mouad blrs
    Commented Apr 18 at 15:07
