About HPC
Academic Technology offers faculty and students access to a centrally managed High-Performance Computing (HPC) cluster, designed to support large-scale data workflows that benefit from enhanced computational power.
Cluster Specifications
| Partition | # nodes | CPU | Memory (per node) | # total cores | GPU |
| --- | --- | --- | --- | --- | --- |
| cputest | 1 | 2 x AMD EPYC 9534 | 0.5 TB | 128 | --- |
| highmem | 1 | 2 x AMD EPYC 9534 | 1.5 TB | 128 | --- |
| cpucluster | 3 | 2 x AMD EPYC 9534 | 0.5 TB | 384 | --- |
| gpucluster | 1 | 2 x AMD EPYC 9334 | 1 TB | 64 | 4 x NVIDIA A100 (80 GB) |
| login | 1 | 1 x AMD EPYC 9124 | 200 GB | 16 | --- |
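Because the cluster is scheduled with Slurm (listed under Software below), resources from a given partition are normally requested in a batch script. The following is a minimal sketch only, assuming typical Slurm defaults: the partition name comes from the table above, while the job name, resource amounts, and program name (my_analysis) are placeholders to replace with your own.

```bash
#!/bin/bash
# Minimal Slurm batch script sketch (illustrative only; adjust to your workload).
#SBATCH --job-name=example-job
#SBATCH --partition=cpucluster   # any partition from the table above
#SBATCH --nodes=1
#SBATCH --ntasks=16              # CPU cores requested
#SBATCH --mem=32G                # memory per node
#SBATCH --time=01:00:00          # walltime limit (HH:MM:SS)

# "./my_analysis" is a placeholder for your own program or script.
srun ./my_analysis
```

A script like this would typically be submitted with sbatch and monitored with squeue -u $USER; the actual limits and defaults on this cluster may differ, especially during the testing phase.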
Additional Features
Network: All nodes in the cluster are interconnected via an InfiniBand data network (HDR/Ethernet, 200 Gb/s).
Operating System: The cluster runs the Rocky Linux 8.9 operating system.
Software: Slurm, OpenMPI, GCC v8.5, GCC v12.2.1, CUDA v12.4.
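To illustrate how the GPU node and the listed toolchain might be used, here is a similar sketch for a job on the gpucluster partition. This is an assumption-laden example rather than site-specific documentation: the --gres syntax is standard Slurm, but exactly how the GPUs and the CUDA v12.4 and OpenMPI installations are exposed on this cluster may differ.

```bash
#!/bin/bash
# GPU job sketch for the gpucluster partition (illustrative only).
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpucluster
#SBATCH --gres=gpu:1             # request one of the node's four A100 GPUs
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=02:00:00

# nvidia-smi reports the GPU assigned to the job; "./my_gpu_app" is a placeholder.
nvidia-smi
srun ./my_gpu_app
```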
Why Use HPC?
- Speed: HPC clusters process data and perform computations much faster than traditional computing environments.
- Efficiency: HPC efficiently handles large datasets and complex simulations, improving productivity and outcomes.
- Parallel Workloads: HPC is ideal for tasks that can be divided into smaller parallel tasks, such as training machine learning models and running large-scale data analyses.
- Resource-Intensive Applications: HPC supports applications that demand significant computational power, memory, or storage, such as processing large-scale genome sequencing data.
Recommended Uses
- Scientific Research: Fields like bioinformatics, chemistry, and astrophysics can benefit from HPC for complex simulations and data analysis.
- Engineering: HPC aids in the design and testing of new products, enhancing innovation and development.
- Large-Scale Data Processing: HPC is crucial for big data analytics, machine learning, and artificial intelligence applications.
If you are interested in accessing the HPC cluster for your research, you may submit a request via ServiceNOW or email at@sfsu.edu. This will usually lead to a brief introductory meeting with our systems team to discuss your research needs.
Note: The HPC cluster is still in a testing phase and is not yet ready for production-level work. The environment is still being fine-tuned, so users may encounter unexpected behavior or downtime, and any work done during this phase should be considered non-critical. Thank you for your understanding and cooperation!