  • A cool piece of news. If this works universally across Intel, AMD, ARM, and other architectures, then the code redesign was indeed a brilliant move. But if the trick was simply to exploit new hardware capabilities (extended registers and vectorized instructions) not present on other processor architectures, then the ARM port, and perhaps the AMD port as well, will not enjoy the performance you have observed. Either way, enjoy the new power available for further expanding your valued research. Commented Dec 15, 2017 at 11:51
  • Thank you for pointing me to this! I have forwarded a link to the Numba team for their encouragement. Commented Dec 16, 2017 at 19:42
  • @MichaelGrant I have a question for you, if you don't mind. Do you know if Numba provides a way to specify the "chunk size" when using prange() to parallelize a for-loop? Commented Dec 18, 2017 at 12:44
  • Thinking about it more, it makes sense that A * x would be slower in MATLAB than A' * x. With CSC storage, A' * x is much easier to parallelize, because each row of A' (a column of A) can get its own thread. Commented Dec 18, 2017 at 16:34
  • @GeoffreyNegiar I was hesitant to accept my own answer and to undo the acceptance of a different answer, but you are right. I have now made this the accepted answer. Commented Jun 18, 2020 at 19:08