Say I have several meshes whose vertex data I want to re-upload to the GPU every frame. One way to do this is to concatenate all the meshes into a single array on the CPU and then upload that array in one call (e.g. with glBufferData in OpenGL), regenerating the array and re-uploading each frame.
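To be concrete, the single-array approach I mean looks roughly like this (a minimal sketch: the `Mesh` struct and the flat-float layout are my own simplification, and the GL calls are left as comments because they need a live context):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical mesh: a flat array of float vertex attributes. */
typedef struct {
    const float *vertices;
    size_t       count;   /* number of floats */
} Mesh;

/* Concatenate all meshes into one CPU-side array.
 * Returns the total number of floats written to `out`. */
size_t build_combined_array(const Mesh *meshes, size_t n, float *out)
{
    size_t written = 0;
    for (size_t i = 0; i < n; ++i) {
        memcpy(out + written, meshes[i].vertices,
               meshes[i].count * sizeof(float));
        written += meshes[i].count;
    }
    /* Then, once per frame (requires a GL context):
     * glBindBuffer(GL_ARRAY_BUFFER, vbo);
     * glBufferData(GL_ARRAY_BUFFER, written * sizeof(float),
     *              out, GL_STREAM_DRAW);
     */
    return written;
}
```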
This is what I did before I became aware of buffer streaming. The way I understand buffer streaming, we first allocate a sufficiently large vertex buffer, then upload each mesh into it separately (e.g. with glBufferSubData in OpenGL) while keeping track of each mesh's offset within the buffer. We repeat this every frame.
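The offset bookkeeping I have in mind would be something like this (again a sketch; the names are mine, and the actual glBufferSubData call is a comment since it needs a context):

```c
#include <stddef.h>

/* Streaming path: one pre-allocated buffer, per-mesh sub-uploads,
 * offsets recorded for the later draw calls. */
typedef struct {
    size_t capacity;  /* total buffer size in bytes */
    size_t cursor;    /* next free byte */
} StreamBuffer;

/* Reserve `bytes` for a mesh; returns its byte offset inside the
 * buffer, or (size_t)-1 if the buffer is full. */
size_t stream_reserve(StreamBuffer *sb, size_t bytes)
{
    if (sb->cursor + bytes > sb->capacity)
        return (size_t)-1;
    size_t offset = sb->cursor;
    sb->cursor += bytes;
    /* With a GL context:
     * glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)offset,
     *                 (GLsizeiptr)bytes, mesh_data);
     */
    return offset;
}

/* Call at the start of each frame to reuse the buffer. */
void stream_reset(StreamBuffer *sb) { sb->cursor = 0; }
```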
Streaming comes with issues such as implicit synchronization: writing into a buffer the GPU is still reading from can stall the pipeline. This can be avoided by various means (orphaning, cycling through multiple buffers, persistent mapping). But why not just do the work on the CPU instead, by collecting all the data into one array and uploading it in a single call at the end of the frame? Would that be a considerable performance hit?
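For reference, the multi-buffer workaround I'm thinking of just round-robins over N buffers so the one being written is never the one the GPU may still be reading (a sketch; RING_SIZE = 3 is just a common choice, and the GL calls are comments):

```c
/* Round-robin over N buffers so this frame's writes never touch the
 * buffer a previous frame's draw calls may still be reading. */
enum { RING_SIZE = 3 };  /* triple buffering */

unsigned ring_index(unsigned long frame)
{
    return (unsigned)(frame % RING_SIZE);
}
/* Per frame (GL context required):
 * glBindBuffer(GL_ARRAY_BUFFER, vbos[ring_index(frame)]);
 * glBufferData(GL_ARRAY_BUFFER, size, data, GL_STREAM_DRAW);
 * Orphaning alternative: glBufferData(..., NULL, GL_STREAM_DRAW)
 * first, then glBufferSubData, so the driver hands back fresh storage.
 */
```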