I'm porting some of my OpenGL code to WebGL, and the fact that JavaScript doesn't have genuine arrays is sad. I can use Float32Array (and the other ArrayBuffer view types), but that doesn't seem to help performance.
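For context, this is the kind of place the typed arrays end up; a minimal sketch of feeding vertex data to WebGL, assuming gl is an already-created WebGLRenderingContext and the positions data is made up for illustration:

    // Hypothetical vertex data, just for illustration.
    var positions = [0.0, 0.5, -0.5, -0.5, 0.5, -0.5];

    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    // bufferData wants a typed-array view over an ArrayBuffer,
    // not a plain JS Array, so the conversion is unavoidable here.
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);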
As an experiment comparing Array vs Float32Array vs Float64Array performance, I timed bubble sort on 100000 floats to see if there was any difference:
function bubbleSort(array) {
    var N = array.length;
    for (var i = 0; i < N; i++)
        // each pass bubbles the largest remaining element to the end,
        // so the last i elements are already in place
        for (var j = 0; j < N - 1 - i; j++)
            if (array[j] > array[j + 1]) {
                var tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
}
// var nums = new Array(100000);        // regular 'JS' array
// var nums = new Float32Array(100000); // actual buffer of 32-bit floats
var nums = new Float64Array(100000);    // actual buffer of 64-bit floats

for (var i = 0; i < nums.length; i++)
    nums[i] = Math.random() * 1000;

bubbleSort(nums);

for (var i = 0; i < nums.length; i++)
    console.log(nums[i]);
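For reference, a rough way to time just the sort and keep the Math.random() setup out of the measurement, assuming an environment that provides performance.now():

    var t0 = performance.now();
    bubbleSort(nums);                 // measure only the sort, not the setup loop
    var t1 = performance.now();
    console.log('bubbleSort: ' + (t1 - t0).toFixed(1) + ' ms');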
Not much difference between the three. Realistically, the JIT compiler would need some static type information about the array argument to bubbleSort to generate genuinely fast code. Are we just stuck with bad array performance in JS? Any way around this? Short of using ASM.js, that is...
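One workaround I can think of (an assumption about how current JIT engines specialize, not anything documented) is to keep each call site monomorphic, i.e. give every array type its own copy of the sort so each function body only ever sees one element type:

    // Hypothetical specialization: one closure per array kind, so the
    // engine's type feedback for each copy never becomes polymorphic.
    function makeBubbleSort() {
        return function (array) {
            var N = array.length;
            for (var i = 0; i < N; i++)
                for (var j = 0; j < N - 1 - i; j++)
                    if (array[j] > array[j + 1]) {
                        var tmp = array[j];
                        array[j] = array[j + 1];
                        array[j + 1] = tmp;
                    }
        };
    }

    var sortPlain   = makeBubbleSort(); // only ever called with Array
    var sortFloat32 = makeBubbleSort(); // only ever called with Float32Array
    var sortFloat64 = makeBubbleSort(); // only ever called with Float64Array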
Comment: Are you sure it isn't Math.random() being called 100k times that's causing your performance issues?
Comment: Comment out the call to bubbleSort and it finishes immediately, so the random setup isn't the slow part.