  • 9
    Have you considered trying to add a convolutional layer or two to the network? I think it would be better at capturing the noisier aesthetic. Alternatively, you could try additional hand-crafted features like [x^2, y^2] or [x%5, y%5] to get those lined patterns. Commented Dec 4, 2017 at 5:51
  • 5
    \$\begingroup\$ @StevenH. Maybe? I haven’t developed much intuition for convolutional networks, but my assumption is that, although something along those lines could probably improve the result visually, to improve the pixel-difference error metric used in this challenge you’d need to encode a lot more information to line up the noisy patterns fairly precisely with the original. \$\endgroup\$ Commented Dec 4, 2017 at 7:15
  • 14
    \$\begingroup\$ Oh my god this is amazing! I was hoping someone would use a deep neural network approach, and you really went the extra mile to do it. This is also, without a doubt, the coolest looking of all the answers. I'm giving you the green check mark, at least temporarily, to draw attention to your answer. \$\endgroup\$ Commented Dec 4, 2017 at 10:02
  • 2
    \$\begingroup\$ @Nathaniel For now I have been training the same unquantized network (which currently scores about 4350), but working on tweaking the quantization strategy. The bulk of the improvement in this update came from allocating two digits instead of one to the constant term of each neuron. \$\endgroup\$ Commented Dec 13, 2017 at 1:19
  • 3
    \$\begingroup\$ How long did the training take for this (disregarding quantization)? \$\endgroup\$ Commented Jun 25, 2018 at 14:29
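The first comment's feature suggestion can be sketched concretely. This is a minimal, hypothetical illustration (not code from the answer itself): assuming the network takes raw (x, y) pixel coordinates as input, the idea is to append squared terms for smooth curvature and mod-5 terms for repeating line patterns before feeding them to the network.

```python
import numpy as np

def augment_coords(xy):
    """Expand raw (x, y) pixel coordinates with hand-crafted features:
    [x, y, x^2, y^2, x mod 5, y mod 5]. The squared terms help with
    smooth gradients; the mod terms give a periodic signal that a small
    network could use to reproduce striped patterns."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([x, y, x**2, y**2, x % 5, y % 5], axis=1)

# Example: two pixel coordinates -> six features each
coords = np.array([[3.0, 7.0], [10.0, 4.0]])
print(augment_coords(coords))
```

Whether this actually lowers the pixel-difference score is an open question, as the second comment notes: periodic features only pay off if the network can phase-align them with the original image's noise.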