Lundin

Using dynamic memory allocation on a 16-bit MCU with 4 kB of RAM is very poor engineering.

Not so much because of the usual problems with memory leaks. Not so much because of heap memory fragmentation. Not even because of the rather steep execution-time overhead of the allocation routines. But because it is completely pointless and makes no sense.

You have some real-world requirements stating what your program should do, and based on these your program will need exactly x bytes of RAM to handle the worst-case scenario. You will not need less, you will not need more; you will need that exact amount of RAM, and it can be determined at compile time.

It doesn't make sense to save part of the 4 kB and leave it unused. Who is going to use it? Similarly, it doesn't make sense to allocate more memory than the worst-case scenario needs. Simply statically allocate as much memory as is needed, period.
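As a minimal sketch of the point above: if the requirements bound the worst case, the whole budget can be a static array whose size shows up in the map file at link time. The buffer names and limits here are hypothetical, not from the answer.

```c
/* Worst-case sizing known at compile time (hypothetical UART RX queue).
   Instead of malloc'ing per message, reserve the worst case statically. */
#include <stdint.h>
#include <stddef.h>

#define MAX_MSG_LEN 64u   /* worst-case message length, from the requirements (assumed) */
#define MAX_PENDING  4u   /* worst-case number of queued messages (assumed) */

/* All the RAM the messages can ever need, fixed at link time: */
static uint8_t msg_pool[MAX_PENDING][MAX_MSG_LEN];
static size_t  msg_len[MAX_PENDING];

size_t total_msg_ram(void)
{
    /* This figure is visible in the linker map file; no runtime surprises. */
    return sizeof msg_pool + sizeof msg_len;
}
```

Because the size is a compile-time constant, the linker tells you immediately whether the program fits in the 4 kB, instead of an allocator failing in the field.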

In addition, you have a worst-case scenario for maximum stack usage, where you are deep into a call stack and every enabled interrupt has fired. This worst case can be calculated at compile time or measured at runtime. Good practice is to allocate more stack space than the worst case needs, to prevent stack overflows (the stack is effectively a dynamic memory area as well). Or rather: use every single byte of RAM not used by your program for the stack.

If your application needs exactly 3 kB of RAM, then you should use the remaining 1 kB for the stack.
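The runtime measurement mentioned above is commonly done with "stack painting": fill the stack region with a known pattern at startup, then later scan for the high-water mark. A sketch, demonstrated on a plain array standing in for the MCU's stack region (all names are hypothetical):

```c
/* Stack-painting sketch: fill the stack area with a pattern at startup,
   then find how much of it was ever overwritten (the high-water mark). */
#include <stdint.h>
#include <stddef.h>

#define STACK_WORDS 256u
#define PAINT 0xDEADBEEFu

static uint32_t fake_stack[STACK_WORDS];  /* stand-in for the real stack region */

void stack_paint(void)
{
    for (size_t i = 0; i < STACK_WORDS; i++)
        fake_stack[i] = PAINT;
}

/* Assuming a descending stack: low indices are the "far end" that only the
   deepest call chain reaches. Untouched painted words at the bottom tell us
   how much margin remains; the rest is the worst-case usage observed so far. */
size_t stack_high_water(void)
{
    size_t untouched = 0;
    while (untouched < STACK_WORDS && fake_stack[untouched] == PAINT)
        untouched++;
    return (STACK_WORDS - untouched) * sizeof(uint32_t);
}
```

On real hardware the paint loop runs in the startup code before `main`, over the stack region defined by the linker script, and the high-water check runs periodically or in a debug command.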

Dynamic allocation and the heap are intended for hosted systems such as Windows or Linux, where every process gets a fixed amount of RAM and can additionally request heap memory, up to as much RAM as the hardware provides. Dynamic allocation only makes sense in such complex, multi-process systems with vast amounts of RAM available.

In an MCU program, whether bare metal or RTOS-based, you have to realize how heap implementations work: they reserve a fixed amount, x kB of RAM, for the heap, and whenever you allocate dynamically you are handed a chunk of that fixed region. So why not simply take those x kB you would have reserved for the heap and use them to store your variables directly?
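If some part of the program genuinely needs allocation-like behavior, the fixed region described above can be handed out explicitly as a fixed-size block pool, which keeps the determinism of static allocation. A sketch with illustrative names and sizes, not taken from any particular library:

```c
/* The MCU "heap" is just a static array handed out in chunks. A fixed-size
   block pool makes that explicit: constant-time alloc/free, no fragmentation. */
#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE  32u   /* illustrative block size */
#define NUM_BLOCKS   8u   /* illustrative worst-case block count */

static uint8_t pool[NUM_BLOCKS][BLOCK_SIZE];
static uint8_t in_use[NUM_BLOCKS];

void *pool_alloc(void)
{
    for (size_t i = 0; i < NUM_BLOCKS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;  /* pool exhausted: the worst case was under-estimated */
}

void pool_free(void *p)
{
    for (size_t i = 0; i < NUM_BLOCKS; i++) {
        if (p == pool[i]) {
            in_use[i] = 0;
            return;
        }
    }
}
```

Unlike `malloc`, exhaustion here means exactly one thing, the worst-case count was wrong, and the total RAM cost is a fixed number in the map file.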


I've expanded on this with a more detailed explanation and more possible problems, here:
Why should I not use dynamic memory allocation in embedded systems?
