user377672

I don't know. I keep getting counter-arguments here along the lines of: implementing even the simplest possible arena allocator (just use maximum alignment for every request) and dealing with it is too much work. My blunt reaction is: what are we, Python programmers? We might as well use Python if so. Please use Python if these details don't matter; I'm serious. I have had many C programming colleagues who would very likely have written not only more correct but possibly even more efficient code in Python, since they ignored things like locality of reference while stumbling over the bugs they created left and right.

I don't see what is so scary about a simple arena allocator if we are C programmers who care about data locality and optimal instruction selection, and it is arguably much less to think about than an interface that requires callers to explicitly free every single thing it can return: it offers bulk deallocation in place of the more error-prone individual deallocation. A proper C programmer challenges loopy calls to malloc, as I see it, especially when they show up as a hotspot in the profiler. From my standpoint, there has to be more "oomph" to the rationale for still being a C programmer in 2020, and we can't shy away from things like memory allocators anymore.

And this approach doesn't have the edge cases of a fixed-size buffer, where we allocate, say, 256 bytes and the resulting string turns out to be larger. Even if our functions avoid buffer overruns (e.g. with sprintf_s), extra code is required to recover properly from those truncation errors, and we can omit all of it in the allocator case because the edge cases don't exist there. The only failure we ever have to handle is genuinely exhausting the address space of our hardware (which the above code handles); there is no separate "out of pre-allocated buffer" condition to distinguish from "out of memory".
