
entity

Entity is a coordinate-agnostic particle-in-cell (PIC) code for studying plasma physics in astrophysical environments. Written in modern C++17, it is highly modular, runs efficiently across computing architectures (both CPUs and GPUs), and ships with a Python library, nt2py, for data analysis and visualization. Its flexible design and support for multiple output formats make it a good platform for learning advanced plasma-physics simulation with industry-standard tools and practices. Check out the entity wiki for questions at all levels!

Entity on Dartmouth's Discovery Cluster

We provide libraries to fulfill entity's dependencies on Discovery. You can load the required modules by running the following in your Discovery shell:

# GPU build (CUDA, A100 nodes):
module use --append /dartfs/rc/lab/E/EPaCO/.mods
module load entity/cuda/mpi/a100

# CPU build (host, Zen nodes):
module use --append /dartfs/rc/lab/E/EPaCO/.mods
module load entity/host/mpi/zen

Discovery Cluster

If you do not have access to the EPaCO lab space, please check our group's allocations and user policies.

To compile entity, follow the standard procedure and add the following specifications to your cmake command:

# GPU (CUDA) build:
cmake -B build -D pgen=... -D output=ON -D mpi=ON -D CMAKE_CXX_COMPILER=/dartfs-hpc/rc/home/7/f007gj7/epaco/libs/openmpi/bin/mpicxx -D CMAKE_C_COMPILER=/dartfs/rc/lab/E/EPaCO/libs/openmpi/bin/mpicc

# CPU (host) build:
cmake -B build -D pgen=... -D output=ON -D mpi=ON -D CMAKE_CXX_COMPILER=/dartfs-hpc/rc/home/7/f007gj7/epaco/libs/openmpi_host/bin/mpicxx -D CMAKE_C_COMPILER=/dartfs/rc/lab/E/EPaCO/libs/openmpi_host/bin/mpicc

When setting up the compilation, specify a problem generator, for example srpic/weibel.
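For instance, a GPU build with the Weibel problem generator could be configured and compiled as follows (a sketch based on the commands above; `-j 16` is an illustrative parallel-build setting, not a requirement):

```shell
# Configure a GPU (CUDA) build with the srpic/weibel problem generator,
# using the lab-provided OpenMPI compiler wrappers shown above.
cmake -B build \
  -D pgen=srpic/weibel \
  -D output=ON \
  -D mpi=ON \
  -D CMAKE_CXX_COMPILER=/dartfs-hpc/rc/home/7/f007gj7/epaco/libs/openmpi/bin/mpicxx \
  -D CMAKE_C_COMPILER=/dartfs/rc/lab/E/EPaCO/libs/openmpi/bin/mpicc

# Compile (16 parallel jobs; adjust to the cores you have available)
cmake --build build -j 16
```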

Discovery Cluster

Compilation of entity on Discovery's login node can be slow at times. If you encounter long compilation times, consider building the code in an interactive session, requested for example like this:

srun --nodes=1 --ntasks-per-node=1 --cpus-per-task=16 --pty /bin/bash

To run the code, you can again follow standard procedures.

Discovery Cluster

You typically need three things to run a simulation; set them all up in a new folder named suitably for your run.

  1. The executable entity.xc, located in entity/build/src.
  2. A .toml file for your problem generator; for the weibel setup it is located in entity/setups/srpic/weibel/weibel.toml.
  3. A submit script. A typical script for a GPU (A100) job on Dartmouth infrastructure looks as follows:

    #!/bin/bash
    
    #SBATCH -J job_name
    #SBATCH --partition gpuq
    #SBATCH --gres=gpu:1
    #SBATCH -o ./entity.out
    #SBATCH -e ./entity.err
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=1
    #SBATCH --time=00:20:00
    
    module use --append /dartfs/rc/lab/E/EPaCO/.mods
    module load entity/cuda/mpi/a100
    
    mpirun -np 1 ./entity.xc -input ...

    For a CPU-only job, use the standard partition and the host modules instead:

    #!/bin/bash
    
    #SBATCH -J job_name
    #SBATCH --partition standard
    #SBATCH -o ./entity.out
    #SBATCH -e ./entity.err
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=1
    #SBATCH --time=00:20:00
    
    module use --append /dartfs/rc/lab/E/EPaCO/.mods
    module load entity/host/mpi/zen
    
    mpirun -np 1 ./entity.xc -input ...

    The three dots should be replaced with the name of your .toml file.
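Putting the three pieces together, a run directory can be set up like this (a sketch: `~/entity` stands in for wherever you cloned and built the code, and `submit.sh` for whichever of the two submit scripts above you saved):

```shell
# Create a suitably named folder for this simulation
mkdir -p ~/runs/weibel_test && cd ~/runs/weibel_test

# 1. the executable, 2. the input file, 3. the submit script
cp ~/entity/build/src/entity.xc .
cp ~/entity/setups/srpic/weibel/weibel.toml .
cp ~/submit.sh .

# Submit the job; the last line of submit.sh should read:
#   mpirun -np 1 ./entity.xc -input weibel.toml
sbatch submit.sh
```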