I have a Conv1D autoencoder network and an unreliable data source: a data stream that delivers different event patterns at a fixed interval of e.g. 2 seconds. Sometimes I receive 4 different event types, sometimes only 2.
In preprocessing I pivot the stream from (timestamp, event_type, value) to (timestamp, event_type1, ..., event_typeN), forward-fill missing values, and drop the leading rows if any value is still NaN.
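The preprocessing step above could look roughly like this (a sketch with made-up sample data and column names):

```python
import pandas as pd

# Hypothetical sample stream: (timestamp, event_type, value) rows,
# where not every event type appears at every timestamp.
stream = pd.DataFrame({
    "timestamp": [0, 0, 2, 2, 4],
    "event_type": ["a", "b", "a", "c", "b"],
    "value": [1.0, 2.0, 1.5, 3.0, 2.5],
})

# Pivot to one column per event type, forward-fill the gaps,
# then drop leading rows that are still NaN.
wide = (stream.pivot(index="timestamp", columns="event_type", values="value")
              .ffill()
              .dropna())
print(wide)
```

Here the first timestamp is dropped because event type "c" has not been seen yet, and later gaps are forward-filled.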
However, after training there may be a time span where only 2 types show up, so after the pivot 2 features are missing for prediction. Is there a way to mask individual features instead of whole timesteps? The tf.keras.layers.Masking layer only masks an entire timestep if every feature equals the mask value. In my case a timestep may have just one or two imputed features, and only those should be ignored during prediction / MSE calculation.
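One common approach is a custom loss that weights out the imputed cells: keep a boolean mask of which (timestep, feature) cells were actually observed, and average the squared error only over those. Here is the arithmetic sketched in NumPy (in Keras you would implement the same formula as a custom loss with tf ops, feeding the mask in as an extra input or via sample weights):

```python
import numpy as np

def masked_mse(y_true, y_pred, observed_mask):
    """MSE over observed (timestep, feature) cells only.

    observed_mask: 1.0 where the value was really measured,
                   0.0 where it was forward-filled / imputed.
    """
    sq_err = (y_true - y_pred) ** 2 * observed_mask
    # Average over observed cells only; guard against an all-masked batch.
    return sq_err.sum() / np.maximum(observed_mask.sum(), 1.0)

y_true = np.array([[1.0, 2.0, 3.0],
                   [1.5, 2.0, 3.0]])  # row 2: last two values were imputed
y_pred = np.array([[1.0, 2.0, 3.0],
                   [1.5, 9.0, 9.0]])  # bad predictions only on imputed cells
mask   = np.array([[1.0, 1.0, 1.0],
                   [1.0, 0.0, 0.0]])

print(masked_mse(y_true, y_pred, mask))  # → 0.0, imputed errors are ignored
```

The same masking works for the anomaly score at inference time: reconstruction error on imputed features simply does not count.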
This seems like a standard case; is there an established method for this kind of masking? During training every feature is present, but afterwards the data stream may not contain all event types, and such input cannot be fed into the autoencoder without errors.
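To avoid the shape errors at serving time, one option is to fix the feature schema when training and reindex every incoming batch against it, deriving the mask from what was actually present. A sketch, assuming the training-time columns were ["a", "b", "c", "d"] (hypothetical names):

```python
import pandas as pd

TRAIN_COLUMNS = ["a", "b", "c", "d"]  # schema fixed at training time (assumed)

def align_to_schema(wide: pd.DataFrame):
    """Reindex a pivoted batch to the training columns.

    Returns the aligned frame (missing event types filled with 0.0)
    and a mask marking which cells were really observed.
    """
    aligned = wide.reindex(columns=TRAIN_COLUMNS)
    observed = aligned.notna().astype(float)  # 1.0 = real value, 0.0 = missing
    return aligned.fillna(0.0), observed

# Live batch where event types "c" and "d" never showed up:
batch = pd.DataFrame({"a": [1.0, 1.5], "b": [2.0, 2.5]}, index=[0, 2])
x, mask = align_to_schema(batch)
print(x.shape)  # now matches the autoencoder's input width
```

The 0.0 fill value is arbitrary; since the mask excludes those cells from the loss, the network's error there never contributes.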
I'd like to know the best practices here.