There are scaling issues with larger simulations (~1000+ neurons) during model construction and code generation. Models are slow to compile or may crash the compiler.
An example of the above can be seen in `demo/pinksy_network.jl`. If `n_neurons` is increased to 500 or more, the compiler hangs for a very long time on the subsequent call to `solve`, possibly crashing before reaching a solution. On my Intel i5-12600K, it took ~40 minutes to solve for 500 neurons.
If the problem is scaled even further, to ~1000 neurons, code generation fails outright at the problem-construction step due to an inference failure related to the numerous callbacks: type inference hits a stack overflow because of the extremely high number of discrete callbacks (one for each neuron). Unfortunately, this is unavoidable given the need to scalarize neurons combined with the lack of a vectorized callback constructor for discrete events analogous to `VectorContinuousCallback`.
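To give a sense of why inference blows up, here is a minimal, hypothetical sketch (not the package's actual generated code; the condition and reset bodies are made-up stand-ins) of what scalarization effectively produces: one `DiscreteCallback` per neuron, splatted into a single `CallbackSet` whose type grows with the neuron count.

```julia
using DifferentialEquations

# Hypothetical sketch of what per-neuron scalarization effectively produces.
# The condition/affect! bodies are stand-ins, not the real generated model code.
n_neurons = 1000

callbacks = [DiscreteCallback(
                 (u, t, integrator) -> u[i] > 1.0,      # stand-in spike condition for neuron i
                 integrator -> (integrator.u[i] = 0.0)) # stand-in reset for neuron i
             for i in 1:n_neurons]

# Splatting 1000 callbacks produces a CallbackSet whose discrete-callback
# tuple has 1000 entries -- this is the structure type inference chokes on.
cbset = CallbackSet(callbacks...)
```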
From what I can tell, these are two separate problems, but both arise from scalarizing components (neurons) for ModelingToolkit.jl.
Until the (hopefully imminent) release of the improved symbolic IR in ModelingToolkit, I recommend avoiding very large simulations and, where possible, using continuous callback events for synapse models in larger systems. Continuous callbacks are more computationally expensive during model execution, but they should scale much better at compile time, since we can use a single `VectorContinuousCallback` instead of a very long `CallbackSet`.
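As a rough illustration of the continuous-event alternative (again a sketch only; the threshold condition and reset are made-up stand-ins for a real synapse event), a single `VectorContinuousCallback` covers all neurons with one callback object, so the compiled callback machinery no longer grows with `n_neurons`:

```julia
using DifferentialEquations

n_neurons = 1000
const threshold = 1.0  # stand-in spike threshold

# One root-function component per neuron; a zero crossing of out[i]
# signals an event for neuron i.
function spike_condition(out, u, t, integrator)
    for i in eachindex(out)
        out[i] = u[i] - threshold
    end
end

# `idx` identifies which component crossed zero, so one affect!
# serves every neuron.
function spike_affect!(integrator, idx)
    integrator.u[idx] = 0.0  # stand-in reset
end

cb = VectorContinuousCallback(spike_condition, spike_affect!, n_neurons)
# ...then: solve(prob, Tsit5(); callback = cb)
```

The compile-time win comes from the callback count being independent of the network size: the solver sees one callback with an `n_neurons`-element root vector rather than `n_neurons` distinct callback types.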