NEMO is designed to provide robust performance for a wide variety of models and scenarios. However, it is certainly possible to build a NEMO model that calculates slowly. Most often, this is due to a long solve time (i.e., the solver takes a long time to find a solution), which in turn is driven by model complexity.
If your model isn't calculating as quickly as you'd like, there are several steps to consider.
Only save the output variables you need. Saving unnecessary variables increases disk operations and may result in a longer solve time (because additional constraints are needed to calculate the variables).
Don't save zeros. If you set the `reportzeros` option to `false` (the default), NEMO won't save output variable values that are equal to zero. This can substantially reduce the time and disk space needed to write scenario outputs.
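For instance, this option can be set in a NEMO configuration file. The following is a sketch; the `[calculatescenarioargs]` block shown is an assumption based on NEMO's configuration conventions, so check your configuration file documentation for the exact layout:

```
; NEMO configuration file (assumed block name)
[calculatescenarioargs]
reportzeros=false
```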
Use the `restrictvars` option judiciously. This argument of `calculatescenario` can have an important effect on performance. It tells NEMO to make a greater effort to eliminate unnecessary variables from the model it provides to the solver. This filtering process requires a little time, but it can considerably reduce the solver's workload. In general, the trade-off is advisable for large models (you should set `restrictvars` to `true` in these cases) but may not be for very small models (set `restrictvars` to `false` in these cases).
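For example, the option can be passed as a keyword argument when calling `calculatescenario`. This is a sketch; the database path is hypothetical:

```julia
using NemoMod

# Enable variable filtering for a large model
# (database path is illustrative)
NemoMod.calculatescenario("my_scenario.sqlite"; restrictvars=true)
```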
Use parallel processing. NEMO can parallelize certain operations, reducing their run time by spreading the load across multiple processors. NEMO uses Julia's multi-threading system for parallelization, so parallel processing is activated whenever you run NEMO in a Julia session with multiple threads enabled. To start a Julia session with multiple threads, use the `--threads` argument on the command line, or set the `JULIA_NUM_THREADS` operating system environment variable. If you install NEMO with the Windows installer program, the installer sets `JULIA_NUM_THREADS` to a reasonable default value. In general, NEMO performs best when it has 1-2 times as many threads as your computer has logical processors (however, certain models may benefit from more or fewer threads).
Note that the number of Julia threads affects parallelization in NEMO's Julia code, but it doesn't control what happens with the solver. For maximum performance with large models, it's also helpful to use a solver that supports parallelization, such as CPLEX, Gurobi, or HiGHS.
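For example, either of the following approaches starts Julia with four threads (the thread count is illustrative; choose one based on your logical processor count):

```
# Option 1: command-line argument
julia --threads=4

# Option 2: environment variable (Linux/macOS shell syntax shown)
export JULIA_NUM_THREADS=4
julia
```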
Simplify the scenario. Substantial performance gains can be realized by reducing the number of dimensions in a scenario - for example, decreasing the number of regions, technologies, time slices, years, or nodes. You can also speed up calculations by forgoing nodal transmission modeling. Of course, this approach generally requires trade-offs: a simpler model may not respond as well to the analytic questions you are asking. The goal is to find a reasonable balance between your model's realism and its performance.
Relax the transmission simulation. If you're simulating transmission, there are some other performance tuning options to consider beyond reducing your model's dimensions. You can change the simulation method with the `TransmissionModelingEnabled` parameter or use this parameter to model transmission only in selected years. The `calculatescenario` function also has an argument (`continuoustransmission`) that determines whether NEMO uses binary or continuous variables to represent the construction of candidate transmission lines. With binary variables (`continuoustransmission = false`), candidate lines may only be built in their entirety, while with continuous variables (`continuoustransmission = true`), partial line construction is allowed. Continuous simulations are generally faster but may not be as realistic.
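For instance, to allow partial construction of candidate lines, the argument can be set when calling `calculatescenario`. A sketch, with a hypothetical database path:

```julia
using NemoMod

# Represent candidate transmission lines with continuous variables,
# trading some realism for a faster solve
NemoMod.calculatescenario("my_scenario.sqlite"; continuoustransmission=true)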
Use `CapacityOfOneTechnologyUnit` selectively. This parameter sets the minimum increment for endogenously determined capacity additions for a technology. When it's specified, NEMO uses integer variables to solve for capacity, which increases model solve time. If you don't define `CapacityOfOneTechnologyUnit`, NEMO solves for technology capacity with continuous variables. This approach assumes that any increment of new capacity is permissible (subject to limits on minimum and maximum capacity and capacity investment - see the documentation on parameters).
Try a different solver. The open-source solvers delivered with NEMO (Cbc, HiGHS, and GLPK) may struggle with sizeable models. If you have access to one of the commercial solvers NEMO supports (currently, CPLEX, Gurobi, Mosek, and Xpress), it will usually be a better option. If you're choosing from Cbc, HiGHS, and GLPK, test each of them to see which performs best for your scenario.
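One way to select a solver is to pass a `JuMP.Model` built with that solver to `calculatescenario` via its `jumpmodel` argument. This sketch assumes the HiGHS.jl package is installed; the database path is hypothetical:

```julia
using NemoMod, JuMP, HiGHS

# Solve the scenario with HiGHS instead of the default solver
NemoMod.calculatescenario("my_scenario.sqlite";
    jumpmodel = JuMP.Model(HiGHS.Optimizer))
```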
Consider calculating selected years rather than all years. Calculating selected years in a scenario is a quick way to get results from complex models, but the results may differ from those you would get if you calculated all years. NEMO uses several methods to reduce the differences - see the documentation on calculating selected years for details.
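As a sketch, selected years might be requested through a keyword argument of `calculatescenario`. The `calcyears` name and the form of its value are assumptions here - consult the NEMO documentation on calculating selected years for the exact interface:

```julia
using NemoMod

# Calculate only two years of the scenario
# (keyword name and argument form are assumptions; database path is illustrative)
NemoMod.calculatescenario("my_scenario.sqlite"; calcyears=[2025, 2030])
```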
Try disabling JuMP bridging or using JuMP's direct mode. By default, JuMP may reformulate the constraints, variables, and optimization objective that NEMO defines for a scenario to improve their compatibility with different solvers. This feature is called bridging. You can disable bridging in two ways: 1) by providing a `JuMP.Model` that does not use bridging to `calculatescenario` (via the `jumpmodel` argument); or 2) by setting `jumpbridges` to `false` in your NEMO configuration file. Deactivating bridging can reduce NEMO's memory use and decrease the solve time for large models, but it may also result in a solver error (depending on the solver you're using and the specifics of your NEMO scenario).
In the same vein, using JuMP in direct mode bypasses multiple features for cross-solver compatibility, including bridging. You can enable direct mode by supplying a `JuMP.Model` in direct mode to `calculatescenario`, or with the `jumpdirectmode` option in a NEMO configuration file. Direct mode generally reduces memory use and solve time, but it also carries a risk of a solver error. For both bridging and direct mode, a good practice is to test them with your model to see how they perform. For more information on both features, including how to activate/deactivate them when creating a `JuMP.Model`, see the JuMP documentation.
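As a sketch (assuming the HiGHS.jl package), here is how a `JuMP.Model` without bridging, and one in direct mode, could be created and handed to NEMO; the database path is hypothetical:

```julia
using NemoMod, JuMP, HiGHS

# A model with bridging disabled
m_nobridges = JuMP.Model(HiGHS.Optimizer; add_bridges=false)

# A model in direct mode (bypasses bridging and other compatibility layers)
m_direct = JuMP.direct_model(HiGHS.Optimizer())

# Pass either model to calculatescenario via the jumpmodel argument
NemoMod.calculatescenario("my_scenario.sqlite"; jumpmodel=m_direct)
```

Note that `direct_model` takes an instantiated optimizer object, while `Model` takes the optimizer type (or a factory) plus the `add_bridges` keyword.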