Installation
Unzip OpenCS-xx.yy.zz.zip into any directory. The installation package contains the following structure:
Directory | Description |
---|---|
gnu_linux64 | Binaries for GNU/Linux x86_64 (g++, MPICH based) |
win64 | Binaries for Windows x64 (Microsoft VC++, MS-MPI 10.0) |
src | OpenCS source files |
docs | Documentation |
For every platform, the binaries directory contains the following sub-directories:
Directory | Description |
---|---|
bin | csSimulator binary |
include | C++ header files |
lib | libOpenCS_Models, libOpenCS_Evaluators and libOpenCS_Simulators libraries |
examples | Example binaries |
System requirements
The OpenCS framework requires the following runtime libraries:
- OpenMP runtime library (provided by the C++ compiler)
- OpenCL framework (included in Intel, AMD and NVIDIA drivers)
- MPI interface: MPICH on GNU/Linux and macOS, and MS-MPI on Windows
Compiling from source
OpenCS can be compiled for different platforms and MPI interface implementations using the instructions located in build_instructions.txt.
Quick Tutorial
1. Model specification
The procedure for specifying models using the OpenCS framework is essentially identical for all models (apart from the parts specifying the equation expressions and the initial conditions). Very similar code can be used to export models from third-party simulators.
Model specification in C++
#include <OpenCS/opencs.h>
/* Step 1. Initialise the model builder with the number of variables
* and the number of degrees of freedom in the ODE/DAE system. */
csModelBuilder_t mb;
uint32_t Nvariables = ...;
uint32_t Ndofs = ...;
mb.Initialize_DAE_System(Nvariables, Ndofs);
/* Step 2. Specify the OpenCS model. */
/* Model equations are specified in a symbolic form using
* the provided arrays of objects representing variables (x),
* time derivatives (dx_dt) and degrees of freedom (y).
* For ODE problems, given by:
* dx(i)/dt = RHS(x, y, time)
* the model equations define the right hand side (RHS),
* while for DAE problems in the following form:
* F(dx_dt, x, y, time) = 0
* model equations represent the residual function F.
* Variables, time derivatives and degrees of freedom are instances
* of the csNumber_t class. It provides the standard mathematical
* operators and functions as defined in <math.h>.
* This way, the model equations are specified in an identical way
* as if they were defined in C/C++. */
const csNumber_t& time = mb.GetTime();
const std::vector<csNumber_t>& x = mb.GetVariables();
const std::vector<csNumber_t>& dx_dt = mb.GetTimeDerivatives();
const std::vector<csNumber_t>& y = mb.GetDegreesOfFreedom();
/* Specify model equations. */
std::vector<csNumber_t> equations(Nvariables);
for(uint32_t i = 0; i < Nvariables; i++)
equations[i] = F(dx_dt, x, y, time); /* <-- MODEL SPECIFIC CODE */
mb.SetModelEquations(equations);
/* Set the initial conditions (the default is 0.0). */
std::vector<real_t> x0 (Nvariables);
for(uint32_t i = 0; i < Nvariables; i++)
x0[i] = ...; /* <-- MODEL SPECIFIC CODE */
mb.SetVariableValues(x0);
/* Optional: set variable names (the default is "x"). */
std::vector<std::string> var_names(Nvariables);
for(uint32_t i = 0; i < Nvariables; i++)
var_names[i] = ...; /* <-- MODEL SPECIFIC CODE */
mb.SetVariableNames(var_names);
/* Optional: set names of degrees of freedom (the default is "y"). */
std::vector<std::string> dof_names(Ndofs);
for(uint32_t i = 0; i < Ndofs; i++)
dof_names[i] = ...; /* <-- MODEL SPECIFIC CODE */
mb.SetDegreeOfFreedomNames(dof_names);
/* Optional: set absolute tolerances (the default is 1E-5). */
std::vector<real_t> abs_tolerances(Nvariables);
for(uint32_t i = 0; i < Nvariables; i++)
abs_tolerances[i] = ...; /* <-- MODEL SPECIFIC CODE */
mb.SetAbsoluteTolerances(abs_tolerances);
/* Step 3. Generate Compute Stack models by partitioning the ODE/DAE system. */
/* For a single CPU simulation (Npe = 1) a graph partitioner is not required. */
uint32_t Npe = 1;
std::vector<csModelPtr> cs_models = mb.PartitionSystem(Npe, nullptr);
/* Step 4. Use the generated model(s) for simulation or export. */
/* Set simulation options (specified as a string in JSON format).
 * The default options are returned by the csModelBuilder_t::GetSimulationOptions function
 * and can be changed using the API of the csSimulationOptions_t class.
 * As a minimum, the TimeHorizon and the ReportingInterval must be set. */
csSimulationOptionsPtr options = mb.GetSimulationOptions();
options->SetDouble("Simulation.TimeHorizon", 100.0);
options->SetDouble("Simulation.ReportingInterval", 1.0);
options->SetDouble("Solver.Parameters.RelativeTolerance", 1e-5);
std::string simulationOptions = options->ToString();
/* Specify the directory for input files. */
std::string inputFilesDirectory = "...";
/* Export the models into a directory: */
mb.ExportModels(cs_models, inputFilesDirectory, simulationOptions);
/* Or use the models for a simulation: */
std::string outputDirectory = "...";
MPI_Init(&argc, &argv);
csSimulate(cs_models[0], simulationOptions, outputDirectory);
MPI_Finalize();
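For illustration, the following is a minimal, hypothetical instance of the model-specific part of Step 2: a two-variable system written in the DAE residual form, dx0/dt = -k*x0 and dx1/dt = k*x0 - k*x1, where the rate constant k is taken as the single degree of freedom (the system itself, Nvariables = 2 and Ndofs = 1 are assumptions made only for this example):
/* Hypothetical example equations (DAE residual form F = 0):
 *   F0 = dx_dt[0] + k*x[0]            (i.e. dx0/dt = -k*x0)
 *   F1 = dx_dt[1] - k*x[0] + k*x[1]   (i.e. dx1/dt =  k*x0 - k*x1)
 * where k = y[0] is the single degree of freedom. */
std::vector<csNumber_t> equations(Nvariables);
equations[0] = dx_dt[0] + y[0] * x[0];
equations[1] = dx_dt[1] - y[0] * x[0] + y[0] * x[1];
mb.SetModelEquations(equations);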
For simulations on message-passing systems, the system of equations must be partitioned into a number of sub-systems. In this case, Step 3 is as follows:
/* For distributed memory systems a graph partitioner must be specified.
* Available partitioners:
* - Simple: splits a graph into the specified number of partitions
* with no load balancing analysis
* - 2D_Npde: partitions the specified number of partial differential equations (Npde)
* distributed on a uniform two-dimensional grid by dividing the grid into
* the requested number of regions
* - METIS: partitions the graphs into a user-specified number of parts using one of
* the following algorithms:
* - PartGraphKway: 'Multilevel k-way partitioning' algorithm
* - PartGraphRecursive: 'Multilevel recursive bisectioning' algorithm
 * The METIS partitioner can additionally balance the specified quantities in partitions
* using the balancing constraints:
* - Ncs: balance number of compute stack items in equations
* - Nnz: balance number of non-zero items in the incidence matrix
* - Nflops: balance number of FLOPS required to evaluate equations
* - Nflops_j: balance number of FLOPS required to evaluate derivatives
* (Jacobian) matrix */
/* Use Simple graph partitioner: */
uint32_t Npe = 4;
csGraphPartitionerPtr partitioner = createGraphPartitioner_Simple();
std::vector<csModelPtr> cs_models = mb.PartitionSystem(Npe, partitioner.get());
/* Or use METIS graph partitioner: */
// Instantiate METIS graph partitioner.
csGraphPartitioner_Metis partitioner(PartGraphRecursive);
// Set the load balancing constraints
std::vector<std::string> balancingConstraints = {"Nflops", "Nflops_j"};
// Partitioner options can be set using the SetOptions function.
// First, obtain the array with options already initialised to default values.
std::vector<int32_t> metis_options = partitioner.GetOptions();
// Then, set the options as described in Section 5.4 of the METIS manual
// (requires <metis.h> to be included).
metis_options[METIS_OPTION_DBGLVL] = METIS_DBG_INFO | METIS_DBG_TIME;
metis_options[METIS_OPTION_NITER] = 10;
metis_options[METIS_OPTION_UFACTOR] = 30;
// Finally, set the updated options to the Metis partitioner.
partitioner.SetOptions(metis_options);
/* Graph partitioners can optionally use dictionaries with the number of FLOPs required
 * for individual mathematical operations.
 * This way, the total number of FLOPs can be precisely estimated for every equation.
 * The numbers of FLOPs are specified using two dictionaries:
* 1. unaryOperationsFlops for:
* - unary operators (+, -)
* - unary functions (sqrt, log, log10, exp, sin, cos, tan, asin, acos, atan, sinh,
* cosh, tanh, asinh, acosh, atanh, erf, floor, ceil, and abs)
* 2. binaryOperationsFlops for:
* - binary operators (+, -, *, /, ^)
* - binary functions (pow, min, max, atan2)
* If a mathematical operation is not in the dictionary, it is assumed that it requires
* a single FLOP. */
std::map<csUnaryFunctions,uint32_t> unaryOperationsFlops;
std::map<csBinaryFunctions,uint32_t> binaryOperationsFlops;
unaryOperationsFlops[eSqrt]    = 12;
unaryOperationsFlops[eExp]     = 5;
binaryOperationsFlops[eMulti]  = 4;
binaryOperationsFlops[eDivide] = 6;
std::vector<csModelPtr> cs_models = mb.PartitionSystem(Npe,
&partitioner,
balancingConstraints,
true,
unaryOperationsFlops,
binaryOperationsFlops);
Model specification in Python
from daetools.solvers.opencs import csModelBuilder_t, csNumber_t, csSimulate
# Step 1. Initialise the Model Builder.
Nvariables = ...
Ndofs = ...
mb = csModelBuilder_t()
mb.Initialize_DAE_System(Nvariables, Ndofs)
# Step 2. Specify the OpenCS model.
time = mb.Time # Current simulation time (t)
x = mb.Variables # State variables (x)
dx_dt = mb.TimeDerivatives # Derivatives of state variables (x')
y = mb.DegreesOfFreedom # Fixed variables (y)
# Set model equations.
mb.ModelEquations = F(dx_dt, x, y, time)
# Set initial conditions (the default is 0.0).
mb.VariableValues = [...]
# Optional: set variable names (the default is "x").
mb.VariableNames = [...]
# Optional: set names of degrees of freedom (the default is "y").
mb.DegreeOfFreedomNames = [...]
# Optional: set absolute tolerances (the default is 1E-5).
mb.AbsoluteTolerances = [...]
# Step 3. Generate Compute Stack models by partitioning the ODE/DAE system.
# Partition the system to create the OpenCS model for a single CPU simulation.
# For single CPU simulations (Npe = 1) a graph partitioner is not required.
# For distributed memory systems a graph partitioner must be specified.
Npe = 1
graphPartitioner = None
cs_models = mb.PartitionSystem(Npe, graphPartitioner)
# Step 4. Use the generated model(s) for simulation or export.
# Set the simulation options.
# The default options are returned by the SimulationOptions property as a Python dictionary.
# As a minimum, the TimeHorizon and the ReportingInterval must be set.
options = mb.SimulationOptions
options['Simulation']['TimeHorizon'] = 100.0
options['Simulation']['ReportingInterval'] = 1.0
options['Solver']['Parameters']['RelativeTolerance'] = 1e-5
mb.SimulationOptions = options
# Export the models into a directory:
inputFilesDirectory = '...'
csModelBuilder_t.ExportModels(cs_models, inputFilesDirectory, mb.SimulationOptions)
# Or use the models for a simulation:
outputDirectory = '...'
csSimulate(cs_models[0], mb.SimulationOptions, outputDirectory)
Generated input files and simulation options
The directory with the generated OpenCS model contains input files with the model specification - one set per processing element (PE):
Input file | Contents |
---|---|
model_structure-[PE].csdata | Serialised csModelStructure_t data structure |
model_equations-[PE].csdata | Serialised csModelEquations_t data structure |
sparsity_pattern-[PE].csdata | Serialised csSparsityPattern_t data structure |
partition_data-[PE].csdata | Serialised csPartitionData_t data structure |
simulation_options.json | Simulation, ODE/DAE solver and linear solver parameters |
While the model specification remains unaltered, simulations can be performed for different time horizons and with different solver and preconditioner options. The simulation options are therefore specified in a human-readable JSON format and contain four sections: Simulation (run-time data), Model (ODE/DAE model options), Solver (options for the ODE/DAE solver) and LinearSolver (the linear solver and preconditioner options). This way, for instance, switching to a different computing device requires only changing an input parameter (Model.ComputeStackEvaluator). Names of the solver/preconditioner parameters are identical to the original names used by the corresponding libraries, or to the names of their Set_ functions (e.g. the MaxOrd parameter, set using the IDASetMaxOrd function in the SUNDIALS suite). Sample simulation options using the OpenMP/OpenCL Compute Stack Evaluator implementations, the CSV and HDF5 data reporters, the SUNDIALS gmres linear solver and the Ifpack ILU preconditioner are given in the following files (an abbreviated sketch of the layout is shown below the list):
- simulation_options-ode.json (for ODE systems)
- simulation_options-dae.json (for DAE systems)
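For orientation, here is a hypothetical, abbreviated sketch of the simulation_options.json layout. It uses only the parameter paths that appear in this tutorial (Simulation.StartTime, Simulation.TimeHorizon, Simulation.ReportingInterval and Solver.Parameters.RelativeTolerance); the Model and LinearSolver sections are omitted since their exact keys depend on the selected evaluator, solver and preconditioner:
{
    "Simulation": {
        "StartTime":         0.0,
        "TimeHorizon":       100.0,
        "ReportingInterval": 1.0
    },
    "Solver": {
        "Parameters": {
            "RelativeTolerance": 1e-5
        }
    }
}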
2. Simulation on shared memory systems
(a) Embedded into a host simulator using the csSimulate function from libOpenCS_Simulators:
/* 1. Initialise MPI. */
MPI_Init(&argc, &argv);
/* 2. Perform the simulation. */
std::string inputFilesDirectory = "...";
csSimulate(inputFilesDirectory);
/* 3. Finalise MPI. */
MPI_Finalize();
(b) Embedded into a host simulator using the API from libOpenCS_Simulators:
/* Initialise MPI. */
int rank;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
std::string inputFilesDirectory = "...";
std::string outputFilesDirectory = inputFilesDirectory + "/results";
/* Load simulation_options.json with the simulation options such as
 * startTime, timeHorizon, reportingInterval and outputDirectory,
 * the Compute Stack Evaluator, and the linear and DAE solver options. */
std::string simulationOptionsFile = inputFilesDirectory + "/simulation_options.json";
daeSimulationOptions& cfg = daeSimulationOptions::GetConfig();
cfg.Load(simulationOptionsFile);
real_t startTime = cfg.GetFloat("Simulation.StartTime");
real_t timeHorizon = cfg.GetFloat("Simulation.TimeHorizon");
real_t reportingInterval = cfg.GetFloat("Simulation.ReportingInterval");
daeModel_t model;
daeSolver_t daesolver;
daeSimulation_t simulation;
/* Load the model from the directory with inputs files. */
model.Load(rank, inputFilesDirectory);
/* Create and connect log object. */
csLogPtr log = createLog_StdOut();
log->Connect(rank);
/* Create and connect data reporter object. */
std::string csvFilePath = outputFilesDirectory + "/variables.csv";
std::string csvFilePath_der = outputFilesDirectory + "/derivatives.csv";
csDataReporterPtr datareporter = createDataReporter_CSV(csvFilePath, csvFilePath_der);
datareporter->Connect(rank);
/* Create and set the Compute Stack Evaluator. */
csComputeStackEvaluatorPtr csEvaluator = createEvaluator_Sequential();
model.SetComputeStackEvaluator(csEvaluator);
/* Initialize the simulation. */
simulation.Initialize(&model,
log.get(),
datareporter.get(),
&daesolver,
startTime, timeHorizon, reportingInterval,
outputFilesDirectory);
/* Calculate corrected initial conditions at t = 0 (DAE systems only). */
simulation.SolveInitial();
/* Run the simulation. */
simulation.Run();
/* Finalize the simulation and free the allocated resources. */
simulation.PrintStats();
simulation.Finalize();
/* Finalise MPI. */
MPI_Finalize();
(c) Using the standalone simulator (csSimulator):
$ csSimulator "inputFilesDirectory"
Depending on the selected data reporter, the simulation results are saved in the .csv or .hdf5 format into the specified directory. In addition, the output directory contains the detailed simulation stats in JSON format (one file per processing element).
3. Simulation on message-passing systems
(a) Embedded into a host simulator (using the csSimulate function from libOpenCS_Simulators):
/* 1. Initialise MPI. */
int rank;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
/* 2. Perform the simulation. */
// On every node, the csSimulate function loads the model specification
// files based on the MPI rank.
std::string inputFilesDirectory = "...";
csSimulate(inputFilesDirectory);
/* 3. Finalise MPI. */
MPI_Finalize();
(b) Using the standalone simulator (csSimulator):
# On GNU/Linux and macOS (Open MPI syntax shown; MPICH uses -f hostfile):
$ mpiexec -n Npe csSimulator "inputFilesDirectory"
$ mpiexec -n Npe --hostfile hostfile csSimulator "inputFilesDirectory"
# On Windows using MS-MPI:
$ mpiexec -n Npe csSimulator "inputFilesDirectory"
$ mpiexec -n Npe -machinefile hostfile csSimulator "inputFilesDirectory"
As with shared-memory simulations, the results are saved in the .csv or .hdf5 format into the specified directory, and the output directory contains the detailed simulation stats in JSON format (one file per processing element).
4. Model exchange
The main purpose of the OpenCS framework is the specification of large-scale models for simulation on shared and distributed memory systems. However, the models can be used for model exchange as well. The steps required to use ODE/DAE models in third-party simulators are outlined below.
/* Step 1. Initialise MPI. */
int rank;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
/* Step 2. Instantiate the Compute Stack model
* (an implementation of the csDifferentialEquationModel_t interface).
* A reference implementation (csDifferentialEquationModel) is provided
* in the libOpenCS_Models shared library. */
csDifferentialEquationModel model;
/* Step 3. Load the model from the specified directory with input files. */
model.Load(rank, inputFilesDirectory);
/* Step 4. Instantiate and set the Compute Stack Evaluator.
* For simplicity, the sequential one is used here. */
csComputeStackEvaluatorPtr csEvaluator = createEvaluator_Sequential();
model.SetComputeStackEvaluator(csEvaluator);
/* Step 5. Obtain the information from the model, such as the
 * number of variables, variable names and types,
 * absolute tolerances, initial conditions,
 * and the sparsity pattern (in CRS format). */
int N, Nnz;
std::vector<int> IA, JA;
model.GetSparsityPattern(N, Nnz, IA, JA);
/* Step 6. In a loop: evaluate the equations and derivatives. */
/* 6.1 Set the current values of state variables and derivatives
* using the time, x and dx_dt values from the ODE/DAE solver.
* In parallel simulations, the MPI C interface will be used
* to exchange the values and derivatives of adjacent variables
* between the processing elements. */
model.SetAndSynchroniseData(time, x, dx_dt);
model.SetDegreesOfFreedom(y);
/* 6.2 Evaluate residuals. */
model.EvaluateEquations(time, residuals);
/* 6.3 Evaluate the derivatives (the Jacobian matrix).
 * csMatrixAccess_t is used as a generic interface for sparse matrix storage.
 * inverseTimeStep is the inverse of the current step taken by the solver (1/h).
 * It is assumed that a call to SetAndSynchroniseData has already been performed
 * (so the current values are set and exchanged between processing elements),
 * which is the typical procedure in ODE/DAE solvers: the residuals/RHS are
 * always evaluated first and then, if necessary, the derivatives are evaluated
 * and a preconditioner recomputed (in iterative methods) or the matrix
 * re-factorised (in direct methods). */
model.EvaluateJacobian(time, inverseTimeStep, ma);
/* Step 7. Free the resources allocated in the model and the evaluator. */
model.Free();
/* Step 8. Finalise MPI. */
MPI_Finalize();
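To make the loop in Step 6 more concrete, the following is a minimal sketch of how the evaluation calls might be wired into the residual callback of a generic implicit DAE solver. The callback signature, the user_data mechanism and the plain double* array types are hypothetical, not part of the OpenCS API:
/* Hypothetical residual callback of a generic implicit DAE solver.
 * The solver supplies the current time, the state variables and their
 * time derivatives; the model evaluates the residuals F(dx_dt, x, y, time). */
int ResidualsCallback(double time, double* x, double* dx_dt,
                      double* residuals, void* user_data)
{
    csDifferentialEquationModel* model =
        static_cast<csDifferentialEquationModel*>(user_data);
    /* Set the current values; in parallel simulations this also exchanges
     * the adjacent variables between the processing elements over MPI. */
    model->SetAndSynchroniseData(time, x, dx_dt);
    /* Evaluate the residuals at the current (time, x, dx_dt). */
    model->EvaluateEquations(time, residuals);
    return 0;
}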