  • I did use model.get_weights() after training the model, but the weights did not seem to be properly tied. The weights of the decoder did not seem to be the transpose of the encoder's weights. I have not tried model.summary() yet, but that is a good call. I will update you when I test this. Thank you for the answer. Commented Dec 13, 2018 at 16:41
  • I used model.get_weights() and model.summary(), but neither showed any indication that the weights were tied. Commented Dec 13, 2018 at 21:01
  • Could you try removing self._trainable_weights.append(self.kernel)? These are not trainable weights of this layer but of the other one. I think what happens is that they get updated at two places in the graph, and that is why they end up different. Commented Dec 13, 2018 at 22:06
  • I did change it to self._non_trainable_weights.append(self.kernel), but the weights still seem to be different. If the kernel is not added to either of these lists, it will not appear in the output of model.get_weights(). Commented Dec 13, 2018 at 22:27
  • Could you show your model.get_weights() and model.summary() output when you add the kernel to the non-trainable weights? Also, could you share the call method? Commented Dec 13, 2018 at 23:46
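For reference, what "tied weights" should look like numerically can be sketched without Keras at all. The following is a minimal NumPy illustration (shapes and activation are assumptions, not taken from the original code): the decoder never stores its own kernel; it reuses the transpose of the encoder kernel, so there is exactly one array of weights to update. Appending that shared kernel to the decoder layer's trainable weights, as discussed above, would register it twice with the optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 8 input features, 3 latent units.
W = rng.standard_normal((8, 3))   # encoder kernel -- the ONLY stored kernel
b_enc = np.zeros(3)               # encoder bias
b_dec = np.zeros(8)               # decoder bias (biases are not tied)

x = rng.standard_normal((5, 8))   # a batch of 5 samples

h = np.tanh(x @ W + b_enc)        # encoder forward pass
x_hat = h @ W.T + b_dec           # decoder reuses the transpose of W

# "Tied" means the decoder kernel is a view of W, not a copy:
# any update to W is immediately reflected in W.T.
assert np.shares_memory(W, W.T)
```

If model.get_weights() on a correctly tied model is inspected, the decoder layer should therefore contribute only its bias, and no second kernel whose values could drift away from the encoder's.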