20

For the past month I've been messing with WebGL, and I've found that creating and drawing a large vertex buffer causes low FPS. Does anyone know if it would be the same if I used OpenGL with C++?

Is that a bottleneck with the language used (JavaScript in the case of WebGL) or the GPU?

WebGL examples like this one show that you can draw 150,000 cubes from a single buffer with good performance, but with anything more than that I get FPS drops. Would it be the same with OpenGL, or would OpenGL be able to handle a larger buffer?

Basically, I have to decide whether to keep using WebGL and try to optimise my code, or, if OpenGL performs better and the bottleneck really is language speed, switch to C++ and use OpenGL.

1
  • Facts here may have evolved. Just to add color to the main answer: floating-point operations in JavaScript are 4-10x slower than in C++. However, once you have loaded your data onto the video card, WebGL and OpenGL should perform similarly. Chrome seems to bear that out; other browsers are slower. Commented Mar 5, 2018 at 14:18
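To get a rough feel for that claim on your own machine, a throwaway micro-benchmark in browser JavaScript is easy to write (a sketch only; the loop body and iteration count are arbitrary, and absolute numbers are machine- and engine-dependent):

    // Illustrative micro-benchmark; treat the output as a rough signal only.
    function benchFloatOps(iterations) {
      const start = performance.now();
      let acc = 0.0;
      for (let i = 0; i < iterations; i++) {
        acc += Math.sqrt(i) * 0.5 - acc * 1e-9; // arbitrary float math
      }
      const ms = performance.now() - start;
      console.log(iterations + " iterations in " + ms.toFixed(1) + " ms");
      return acc; // returned so the JIT cannot dead-code-eliminate the loop
    }

    benchFloatOps(10000000);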

4 Answers

18

If you only have a single drawArrays call, there should not be much of a difference between OpenGL and WebGL for the call itself. However, setting up the data in JavaScript might be a lot slower. If the bulk of your data is static (landscape, rooms), WebGL might work well for you; otherwise, preparing the data in JS might be too slow for your purpose. It really depends on your problem.
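A rough sketch of the static-data case in WebGL (assuming gl is an initialized WebGLRenderingContext, positions is a Float32Array of xyz triples, and a compiled shader program with its position attribute at location 0 is already in use):

    // Sketch: pay the JavaScript cost once at load time, then issue a
    // single cheap draw call per frame.
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW); // one-time upload

    function drawFrame() {
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0); // location 0 assumed
      gl.enableVertexAttribArray(0);
      gl.drawArrays(gl.TRIANGLES, 0, positions.length / 3); // the single call
      requestAnimationFrame(drawFrame);
    }
    requestAnimationFrame(drawFrame);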

p.s. If you include more details of what you are trying to do, you'll probably get more detailed / specific answers.


3

Anecdotally, I wrote a tile-based game in the early 2000s using the old glVertex() style API, and it ran perfectly smoothly. I recently started porting it to WebGL and glDrawArrays(), and now, on my modern PC that is at least 10 times faster, it gets terrible performance.

The reason seems to be that I was faking each glBegin(GL_QUADS); glVertex()*4; glEnd(); sequence with a glDrawArrays() call. Using glDrawArrays() to draw one polygon is much, much slower in WebGL than doing the same with glVertex() was in C++.
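In WebGL terms, the slow pattern looks roughly like this (a sketch; quads, quadBuffer, and the two-float vertex layout are made-up assumptions), and the fixed per-call overhead is what dominates:

    // One draw call per quad (two triangles, since WebGL has no GL_QUADS).
    gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
    gl.enableVertexAttribArray(0);
    for (const quad of quads) {
      gl.bufferData(gl.ARRAY_BUFFER, quad.vertices, gl.DYNAMIC_DRAW); // re-upload
      gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // fixed per-call cost dominates
    }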

I don't know why this is. Maybe it is because JavaScript is dog slow, or maybe it is some context-switching issue in JavaScript. Either way, I can only do around 500 one-polygon glDrawArrays() calls per frame while still getting 60 FPS.

Everybody seems to work around this by doing as much on the GPU as possible and issuing as few glDrawArrays() calls per frame as possible. Whether you can do this depends on what you are trying to draw. In the cubes example you linked, they can do everything on the GPU, including moving the cubes, which is why it is fast. Essentially they cheated; typical WebGL apps won't be like that.
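A sketch of that batching workaround, under the same made-up names (batchBuffer and q.asTwoTriangles() are assumptions): pack every quad into one array each frame and issue a single call:

    // Pack all quads into one Float32Array and draw everything at once.
    const floatsPerQuad = 12; // 6 xy vertices per quad (two triangles)
    const batch = new Float32Array(quads.length * floatsPerQuad);
    quads.forEach((q, i) => batch.set(q.asTwoTriangles(), i * floatsPerQuad));

    gl.bindBuffer(gl.ARRAY_BUFFER, batchBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, batch, gl.DYNAMIC_DRAW);
    gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, quads.length * 6); // one call for the whole scene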

Google gave a talk where they explained this technique (they also, somewhat unrealistically, calculate the object motion on the GPU): https://www.youtube.com/watch?v=rfQ8rKGTVlg
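The GPU-side motion trick looks roughly like this (a sketch; shader compilation/linking is omitted and timeLocation is an assumed uniform location): the geometry stays static in the buffer while one per-frame uniform drives the animation in the vertex shader:

    const vertexShaderSource = `
      attribute vec3 a_position;
      uniform float u_time;
      void main() {
        vec3 p = a_position;
        p.y += sin(u_time + p.x) * 0.5; // animation computed on the GPU
        gl_Position = vec4(p, 1.0);
      }`;

    // Per frame, JavaScript only touches one uniform:
    gl.uniform1f(timeLocation, performance.now() / 1000);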

2 Comments

Hi, the reason for what you see is that glDrawArrays is MUCH more complicated than glBegin. You pay a fixed cost every time you call it, and it is quite high. What you should do is buffer all the polygons into one VAO/VBO so you only need a single glDrawArrays call. The more polygons you have, the larger the gap between glDrawArrays and old immediate-mode GL becomes, and in a typical scene made of millions of polygons you will see huge gains in speed.
Yeah, that's what I've done. Unfortunately, streaming to a VBO is also quite slow; at least it seems slower than I remember glBegin() being. If you can avoid streaming then it is very fast, but that depends on the application.
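If streaming is unavoidable, one common pattern (sketched here with arbitrary names and sizes; streamBuffer is assumed already created) is to allocate the GPU buffer once and overwrite it with gl.bufferSubData from a single preallocated Float32Array, rather than reallocating every frame:

    const MAX_FLOATS = 65536;                     // arbitrary capacity
    const scratch = new Float32Array(MAX_FLOATS); // reused, never reallocated
    gl.bindBuffer(gl.ARRAY_BUFFER, streamBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, scratch.byteLength, gl.DYNAMIC_DRAW);

    function streamFrame(vertexCount, fillVertices) {
      fillVertices(scratch); // write this frame's xy vertices into the scratch array
      gl.bindBuffer(gl.ARRAY_BUFFER, streamBuffer);
      gl.bufferSubData(gl.ARRAY_BUFFER, 0, scratch.subarray(0, vertexCount * 2));
      gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
    }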
3

WebGL is much slower on the same hardware compared to equivalent OpenGL, because of the high overhead of each WebGL call.

On desktop OpenGL, this overhead is at least limited, even if calls are still relatively expensive.

But in browsers like Chrome, WebGL not only has to cross the FFI barrier to reach the native OpenGL calls (which still incur the same overhead); it also pays for security checks that prevent the GPU from being hijacked for computation.

For something like glDraw* calls, which are issued every frame, this means you can afford perhaps an order of magnitude fewer calls than in native OpenGL. All the more reason to opt for something like instancing, where the number of calls is drastically reduced.
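For illustration, instancing in WebGL2 looks roughly like this (a sketch; offsetBuffer, offsetLoc, and instanceCount are assumed to be set up, and WebGL1 would need the ANGLE_instanced_arrays extension instead):

    // Draw instanceCount copies of a 36-vertex cube with one call;
    // per-instance offsets live in their own buffer.
    gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuffer);
    gl.vertexAttribPointer(offsetLoc, 3, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(offsetLoc);
    gl.vertexAttribDivisor(offsetLoc, 1); // advance once per instance, not per vertex

    gl.drawArraysInstanced(gl.TRIANGLES, 0, 36, instanceCount); // one call, many cubes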


1

OpenGL is more flexible and better optimized because newer versions of the API are available to it. It is true that OpenGL is faster and more capable, but it also depends on your needs.

If you need one textured cube mesh, WebGL is sufficient. However, if you intend to build large-scale projects with lots of vertices, post-processing effects and different rendering techniques (some kind of displacement or parallax mapping, per-vertex effects, or maybe tessellation), then OpenGL might actually be the better and wiser choice.

Batching buffers into a single call and optimizing how they are updated can be done, but it has its limits, of course, and yes, OpenGL would most likely perform better anyway.

To answer the question: it is not a language bottleneck, but one of which API version is used. WebGL is based upon OpenGL ES, which has some pros but also runs a bit slower and has more abstraction layers than pure OpenGL, and that lowers performance; more code needs to be evaluated.

If your project doesn't require a web-based solution and you don't care which devices are supported, then OpenGL would be the better and smarter choice.

Hope this helps.

3 Comments

Given that WebGL is based on OpenGL ES 2.0, and thus already uses the modern (and supposedly more efficient) API style that modern desktop OpenGL versions also use, I would say it's quite the opposite: it is not the APIs that differ in performance. It is much more likely that JavaScript isn't the best option for real-time computations.
Actually, JavaScript is rather fast with math, close to compiled C code. I have a 3D engine built in both WebGL and OpenGL, and I'm yet to find a noticeable difference. I've tested a colossal terrain grid of about 256x256 (hfz terrain) for both loading and rendering performance, and between desktop OpenGL and ES there wasn't really a difference either.
The problem with JavaScript is that function-call overhead and the cost of moving data into buffers (faster with typed/native arrays, but those are not used often enough in JS) are MUCH higher than in C, so the number of draw calls has to be minimized. A single mesh is great; 1000 objects are fine in C, but not so in JS.
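A sketch of the typed-array point (computeCoord, n, and the bound buffer are hypothetical):

    // Slow: grow a plain Array, then convert (extra allocation + copy).
    const slow = [];
    for (let i = 0; i < n * 3; i++) slow.push(computeCoord(i));
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(slow), gl.DYNAMIC_DRAW);

    // Faster: write straight into a preallocated typed array and reuse it.
    const staged = new Float32Array(n * 3);
    for (let i = 0; i < staged.length; i++) staged[i] = computeCoord(i);
    gl.bufferData(gl.ARRAY_BUFFER, staged, gl.DYNAMIC_DRAW);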
