I am trying to predict the Google stock price using an LSTM model in PyTorch.
However, after training the model and plotting the predicted values against the real values, I see periodic sharp downward spikes.
Here is the notebook I wrote on Kaggle: Notebook in kaggle
To summarize the steps I followed for training:
I predict only one feature of the Google stock price, namely the Open column.
I split the data into a training set and a test set (80% for training, 20% for testing).
I used MinMaxScaler from sklearn, fitted the scaler on the training data only, and then transformed (scaled) both the training and the test data.
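In code, this step looks roughly like the following (data stands for the DataFrame holding the stock prices; the variable names are placeholders, not necessarily the ones in the notebook):

    from sklearn.preprocessing import MinMaxScaler

    # Open prices as a (N, 1) array; "data" is the stock price DataFrame
    prices = data[["Open"]].values

    # 80% / 20% split, kept in time order (no shuffling)
    split = int(len(prices) * 0.8)
    train_raw, test_raw = prices[:split], prices[split:]

    # fit the scaler on the training data only, then scale both sets with it
    scaler = MinMaxScaler()
    train_scaled = scaler.fit_transform(train_raw)
    test_scaled = scaler.transform(test_raw)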
I used a sequence length (window size) of 10.
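The sequences are built with a simple sliding window, roughly like this (create_sequences is just an illustrative name):

    import numpy as np

    def create_sequences(series, seq_length=10):
        # each window of seq_length values is an input,
        # and the value right after the window is the target
        xs, ys = [], []
        for i in range(len(series) - seq_length):
            xs.append(series[i:i + seq_length])
            ys.append(series[i + seq_length])
        return np.array(xs), np.array(ys)

    X_train, y_train = create_sequences(train_scaled, seq_length=10)
    X_test, y_test = create_sequences(test_scaled, seq_length=10)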
For the LSTM model I used input_size = 1, hidden_size = 64, and num_layers = 2:

    self.lstm = nn.LSTM(1, hidden_size, num_layers)
    self.fc = nn.Linear(hidden_size, 1)

I used batch_size = 64 and did not shuffle the data (because for training on time series data, I learned we should not shuffle).
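Putting it together, the model class is essentially the following sketch (here I assume the output of the last time step is fed to the linear layer, with batch_first left at its default of False):

    import torch.nn as nn

    class StockLSTM(nn.Module):
        def __init__(self, input_size=1, hidden_size=64, num_layers=2):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
            self.fc = nn.Linear(hidden_size, 1)

        def forward(self, x):
            # x: (seq_len, batch, input_size) because batch_first=False by default
            out, _ = self.lstm(x)
            # map the output of the last time step to a single predicted value
            return self.fc(out[-1])

    model = StockLSTM()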
I used learning_rate = 0.001 and epochs = 3, with torch.optim.Adam as the optimizer and nn.MSELoss() as the loss function.
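The training loop is the standard one, roughly like this (train_loader stands for a DataLoader built from the training sequences with batch_size=64 and shuffle=False):

    import torch
    import torch.nn as nn

    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    for epoch in range(3):
        for x_batch, y_batch in train_loader:
            optimizer.zero_grad()
            y_pred = model(x_batch)
            loss = criterion(y_pred, y_batch)
            loss.backward()
            optimizer.step()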
After training, when I plot the real values of the test data (blue) against the predicted values (purple), I see periodic sharp downward spikes.
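The plot itself is produced roughly like this, after inverting the scaling (X_test_seq stands for the scaled test sequences as a tensor; the exact variable names differ in the notebook):

    import torch
    import matplotlib.pyplot as plt

    model.eval()
    with torch.no_grad():
        test_pred = model(X_test_seq)

    # undo the MinMax scaling so both curves are in the original price range
    pred_prices = scaler.inverse_transform(test_pred.numpy())
    real_prices = scaler.inverse_transform(y_test)

    plt.plot(real_prices, color="blue", label="Real Open price")
    plt.plot(pred_prices, color="purple", label="Predicted Open price")
    plt.legend()
    plt.show()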

I don't know why this happens. I tried setting batch_size = 1 and also tried training with sequence lengths (window sizes) other than 10, but I still see the periodic sharp downward spikes.
I am new to deep learning and only recently learned about LSTMs, so I am not sure why this happens or how to fix it.
