Introduce Mem_MallocA() that is like alloca() but falls back to heap memory for bigger allocations #627
The use of plain alloca() causes stack overflows (or triggers assertions meant to prevent them) when models with many polys are loaded; see #528
Mem_MallocA() is a macro that uses _alloca16() for allocations < 1MB, and otherwise uses Mem_Alloc16(). It should be used together with Mem_FreeA(), which frees the memory if it came from Mem_Alloc16(). A bool variable must be passed to both Mem_MallocA() and Mem_FreeA(): it is set to true if _alloca16() was used and false otherwise, and Mem_FreeA() uses it to do the right thing.

This is kinda like Microsoft's _malloca() and _freea(), except those don't need the additional bool because they can do platform-specific magic to detect whether the memory is on the stack.
This could use a little more testing with unusually big models.
I tested it by playing a few levels of Prometheus and Doom 2553; according to #528 (comment), they caused problems before.