Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on parallel computing architectures. MPI is commonly used for developing parallel applications and is particularly useful in HPC environments. Academic Technology's cluster supports OpenMPI.
Using OpenMPI with Slurm:
We use the scheduling software Slurm for job scheduling on our cluster. Below are some of the commands that you’ll need to know in order to run MPI programs with Slurm.
Loading the OpenMPI Module:
Before running an MPI program, you’ll need to load the OpenMPI module. To do so, you can use the following command(s):
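A typical sequence looks like the following; the exact module name can vary by cluster, so check what is available first:

```shell
# List the OpenMPI modules available on the cluster (names vary by system)
module avail openmpi

# Load the default OpenMPI module
module load openmpi

# Confirm the MPI compiler wrapper is now on your PATH
which mpicc
```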
Writing a Slurm Job Script for MPI:
To run an MPI program, you need to create a Slurm job script. Below is a basic example of a Slurm job script:
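As a sketch, a minimal job script might look like this; the job name, resource counts, time limit, and program name are placeholders to adapt to your own job:

```shell
#!/bin/bash
#SBATCH --job-name=mpi_test        # job name shown in the queue
#SBATCH --nodes=2                  # number of nodes to request
#SBATCH --ntasks=8                 # total number of MPI tasks
#SBATCH --time=00:10:00            # wall-clock time limit (hh:mm:ss)
#SBATCH --output=mpi_test_%j.out   # output file (%j expands to the job ID)

# Load the OpenMPI module (exact name may differ on your cluster)
module load openmpi

# Launch the program; srun picks up the task count from Slurm
srun ./my_mpi_program
```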
Submitting the Job:
To submit the job script with specific resource requests, you can use the ‘sbatch’ command.
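For example, assuming your job script is saved as ‘mpi_job.sh’ (the filename here is illustrative):

```shell
# Submit the job script; sbatch prints the assigned job ID
sbatch mpi_job.sh

# Resource requests in the script can also be overridden on the command line
sbatch --ntasks=16 mpi_job.sh
```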
Example Commands:
Here are some example commands to help you get started with MPI programs:
1. Compiling MPI Programs: Use ‘mpicc’ to compile your MPI programs. For example:
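A sketch of the compile step, using a placeholder source file name:

```shell
# Compile an MPI source file with the OpenMPI compiler wrapper;
# mpicc adds the MPI include and library flags for the underlying C compiler
mpicc hello_mpi.c -o hello_mpi

# mpicxx and mpifort are the analogous wrappers for C++ and Fortran
```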
2. Running MPI programs:
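For example, assuming a compiled binary named ‘hello_mpi’:

```shell
# Under Slurm, srun launches the MPI tasks directly
srun --ntasks=4 ./hello_mpi

# mpirun also works inside a Slurm allocation; OpenMPI reads the
# allocation details from Slurm's environment variables
mpirun -np 4 ./hello_mpi
```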
Checking Job Status:
To check the status of your submitted job, use the ‘squeue’ command:
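For example (the username and job ID below are placeholders):

```shell
# Show all of your own jobs in the queue
squeue -u your_username

# Show one specific job by its ID
squeue -j 123456
```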
To learn more about MPI, visit the following links: