Timeline for FizzBuzz in Javascript
Current License: CC BY-SA 4.0
17 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Nov 20, 2019 at 3:51 | comment added | Blindman67 | | @RolandIllig Machine instruction; I am old school. A "CPU cycle" is the time to cycle an instruction from fetch (instruction & data) to write (data), which is distinct from a clock tick as provided by the CLK pin. Considering modern CPUs like the 10nm Penryn with Fast Radix DIV, the fizzBuzz divide is at most 2 ticks (ignoring fetch and write). Maybe 8-7nm could double the space for DIV logic and provide DIV at 8 bits per tick, and we edge closer to a single-tick divide (if the machine's word size does not grow). |
| Nov 20, 2019 at 1:40 | comment added | Roland Illig | | @Blindman67 Did you confuse "1 CPU cycle" with "1 machine instruction"? If not, please provide an example of a CPU that actually divides in a single clock cycle. And by division I do not mean division by an integer constant, but division by an integer variable. |
| Jan 11, 2019 at 20:50 | comment added | crasic | | Put another way: if you are designing the CPU, a 64-bit ALU divider will generally take 4 times longer than a 32-bit ALU divider, and this impacts your timing and signal propagation (pipelining), but to the programmer this is irrelevant because the operation is constant time for a constant-size input. 1 tick vs. 2 for addition/division is nothing compared to 100 for a cache miss due to memory access (even though this is "O(1)"). Complexity is only useful to compare the relative performance of different algorithms vs. input size; once everything has been fixed in place, only absolute time matters. |
| Jan 11, 2019 at 20:48 | comment added | crasic | | @Hemanshu The underlying hardware algorithm does scale based on your estimate of complexity. However, hardware dividers have fixed input lengths and guaranteed times rounded up to the next clock cycle. As such, the amount of time to divide 4/2 and 400000/200000 is the same, because both are fed in as hardware words. Effectively, for all word-sized inputs the time is constant for all ALU operations. If you are implementing an arbitrary-length division algorithm, then you must apply the theoretical computational complexity. For hardware division, the time is fixed for all operations. |
| Jan 10, 2019 at 20:12 | history edited | Sᴀᴍ Onᴇᴌᴀ♦ | CC BY-SA 4.0 | remove excess wording; update grammar, capitalization |
| Jan 10, 2019 at 15:49 | comment added | Blindman67 | | CPUs do these ops using a combination of hardware and internal instruction sets (or pure hardware in simpler CPUs). There are many tricks that can be used, so the logic steps to perform an op will depend on the CPU. The operations are separated from the CPU and performed on dedicated hardware, called the Arithmetic Logic Unit (ALU) and the Floating Point Logic Unit (FLU or FPLU). Because these units (CPU, ALU, FLU) are independent, they can often perform operations in parallel, making it very hard to know what is most performant in high-level languages such as JavaScript. Google "CPU ALU" for more. |
| Jan 10, 2019 at 15:03 | comment added | Hemanshu | | @Blindman67 Could you point me in the direction of the algorithm used for divisions? |
| Jan 10, 2019 at 15:02 | comment added | Hemanshu | | @Sulthan I don't want to get into an argument here, but you are completely ignoring the point about division. |
| Jan 10, 2019 at 14:51 | history undeleted | Hemanshu | | |
| Jan 10, 2019 at 14:50 | history deleted | Hemanshu | | via Vote |
| Jan 10, 2019 at 14:12 | comment added | Sulthan | | @Hemanshu I think you misunderstand what time complexity is. In time complexity the important metric is the amount of data, in this case the length of the for loop. Everything inside is a constant, whether with 2 or 3 divisions. Still a constant. The resulting time complexity is still O(n). |
| Jan 10, 2019 at 14:08 | comment added | Blindman67 | | On a modern CPU integer division takes 1 CPU cycle; the JS version of % takes a little longer. JS will use 32-bit signed ints when it can, so the operation is insignificant in comparison to just calling a function. BTW, the linked computational complexity page has nothing to do with how a CPU's ALU and FLU process the basic math operations. |
| Jan 10, 2019 at 13:58 | comment added | Hemanshu | | @Sulthan Computational complexity doesn't take into account how many operations a machine can perform; it takes into account how many operations/steps are required to solve a problem. While the complexity of addition is linear, division is quadratic. Dividing only by 3 and 5 would change the complexity, and the results can be saved in bool variables, which can be evaluated in constant time. |
| Jan 10, 2019 at 3:05 | comment added | crasic | | The theoretical computational complexity of the underlying division algorithm is practically irrelevant. |
| Jan 9, 2019 at 23:37 | comment added | Sulthan | | This advice screams "premature optimization". Division is not an expensive operation. It's a bit more expensive than addition, but it's not really expensive on today's computers. Your mobile phone can do millions of divisions in a fraction of a second. String concatenation is much more expensive, by far. The most important metric in code quality is readability. Also, if you are talking about computational complexity, you should note that your change won't change the complexity at all. |
| Jan 9, 2019 at 20:10 | comment added | nostalgk | | While I think it's a bit early for a beginning programmer to be learning about things such as memory management (assuming this is one of their FIRST actual programs), I think this is a good answer! |
| Jan 9, 2019 at 17:07 | history answered | Hemanshu | CC BY-SA 4.0 | |
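
The comments above debate whether dividing only by 3 and 5 and caching the results in booleans changes anything asymptotically. The original answer's code is not shown on this timeline page, so the following is only a minimal JavaScript sketch of the two approaches under discussion; the function names are illustrative and not taken from the answer.

```javascript
// Approach 1: repeat the % tests inside the conditions
// (up to four remainder operations per iteration).
function fizzBuzzRepeatedModulo(n) {
  const out = [];
  for (let i = 1; i <= n; i++) {
    if (i % 3 === 0 && i % 5 === 0) out.push("FizzBuzz");
    else if (i % 3 === 0) out.push("Fizz");
    else if (i % 5 === 0) out.push("Buzz");
    else out.push(String(i));
  }
  return out;
}

// Approach 2: divide only by 3 and 5 once per iteration and cache the
// results in booleans, along the lines suggested in the comments.
function fizzBuzzCachedBooleans(n) {
  const out = [];
  for (let i = 1; i <= n; i++) {
    const byThree = i % 3 === 0; // one remainder operation
    const byFive = i % 5 === 0;  // one remainder operation
    if (byThree && byFive) out.push("FizzBuzz");
    else if (byThree) out.push("Fizz");
    else if (byFive) out.push("Buzz");
    else out.push(String(i));
  }
  return out;
}

// Both versions do a constant amount of work per iteration, so both are
// O(n) in the loop length; only the constant factor differs.
console.log(fizzBuzzRepeatedModulo(15).join(" "));
console.log(fizzBuzzCachedBooleans(15).join(" "));
```

As Sulthan's comment notes, the cached-boolean version only trims the constant factor (fewer % operations per iteration); the overall time complexity of the loop remains O(n) either way.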