In code for deep learning models in TensorFlow/Keras, I see that reshaping the data (a NumPy ndarray/tensor) is a very frequent operation.
It is used to fit different components of a model together.
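For example (a minimal sketch, with made-up array shapes), flattening a batch of image-like arrays before a dense layer is a typical case, and `reshape` only reinterprets the same elements under a new indexing:

```python
import numpy as np

# A toy batch of 2 "images", each 3x4. Flattening each to a
# 12-element vector is the usual step before a Dense layer.
x = np.arange(24).reshape(2, 3, 4)
flat = x.reshape(2, -1)  # -1 lets NumPy infer the remaining dimension

print(flat.shape)  # (2, 12)
# The values and their memory order are unchanged; only the
# indexing (the shape) differs.
print(np.array_equal(flat.ravel(), x.ravel()))  # True
```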
Why can I reshape the data and assume everything will be fine, i.e., that stochastic gradient descent and backpropagation will still get the job done?
Does this mean that, while training a model, I can make whatever kind of "improvisations" I like and just be happy as long as the end result comes out right?