The different VM queues of SLURM
| VM (Partition) Name | CPU [EPYC 7J13 Cores] | MEM [GB] | DKK/Hour Estimate |
| --- | --- | --- | --- |
| min | 1 | 1 | 0.18 |
| low | 2 | 4 | 0.38 |
| med | 4 | 8 | 0.75 |
| medFastcore* (NOT SUPPORTED YET) | 4 | 8 | 0.97 |
| high | 8 | 16 | 1.51 |
| ultra | 16 | 32 | 3.02 |
| ultraMem | 16 | 1024 | 13.03 |
| extreme | 32 | 64 | 6.03 |
| extremeMem | 32 | 1760 | 23.16 |
| coresUltra | 64 | 64 | 11.42 |
| coresExtreme | 96 | 96 | 20.34 |
| big | 96 | 512 | 21.33 |
| max | 114 | 1760 | 36.96 |
| GPUATenSingle | 2 [+ 1 Nvidia A10] | 16 | ..? (to be measured) |
* If single-core performance (core frequency/boost and IPC) is very important to your project (e.g. your workload is not very parallelizable and tends to run on a single core), this partition could be beneficial: it uses [EPYC 9J43] cores instead of the [EPYC 7J13] cores.
GPU functionality can also be made available upon request, but it requires some initial setup. If you think you will need GPU(s) for your project, it is therefore easiest to set them up at the start of the project.
The DKK/Hour estimates are based purely on compute, and not on storage. The storage volume for the instance's OS amounts to about 0.05 DKK/Hour.
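As a worked example of these estimates (hypothetical job length; rates taken from the table and the storage note above), the total cost of a 48-hour run on the `med` partition is compute plus OS storage:

```shell
# Hypothetical example: estimate the total DKK cost of a 48-hour job
# on the 'med' partition (0.75 DKK/hour compute + ~0.05 DKK/hour OS storage).
HOURS=48
COMPUTE_RATE=0.75   # DKK/hour for the 'med' partition (from the table)
STORAGE_RATE=0.05   # DKK/hour for the instance's OS volume
awk -v h="$HOURS" -v c="$COMPUTE_RATE" -v s="$STORAGE_RATE" \
    'BEGIN { printf "Estimated cost: %.2f DKK\n", h * (c + s) }'
# prints: Estimated cost: 38.40 DKK
```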
Custom configurations (memory, core count and core type) can also be made upon request, within some limits, such as a minimum of 1 GB per core and a maximum of 64 GB per core.
These configurations should be available by default on your HPC Stack. To use one, specify the partition in your Slurm job script, e.g.:
```bash
#!/bin/bash
#SBATCH --job-name=yourjobname
#SBATCH --partition coresExtreme

# Your script goes here
sleep 30
```
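A minimal sketch of the standard submission workflow (the file name `job.sh` is just an example):

```shell
# Save the batch script above as job.sh...
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=yourjobname
#SBATCH --partition coresExtreme
# Your script goes here
sleep 30
EOF
# ...then, on the login node, submit and monitor it:
#   sbatch job.sh        # prints "Submitted batch job <jobid>"
#   squeue -u "$USER"    # lists your queued/running jobs
```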
If you are using queues with GPUs, it is imperative that you remember to add the line `#SBATCH --gres=gpu:1` to the job script, or the GPU will be "hidden" from the Slurm job, e.g.:
```bash
#!/bin/bash
#SBATCH --job-name=yourjobname
#SBATCH --partition GPUATenSingle
#SBATCH --gres=gpu:1

# Your script goes here
sleep 30
echo "hello"
whoami
nvidia-smi
sudo dnf install hashcat -y
hashcat -b
sleep 300
```
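When a GPU is granted via `--gres`, Slurm exposes it to the job through `CUDA_VISIBLE_DEVICES`. A small sketch (a hypothetical snippet, not part of the stack) that you can put at the top of a GPU job to catch a forgotten `--gres` line early:

```shell
# Warn if no GPU is visible to this job, which usually means
# '#SBATCH --gres=gpu:1' was forgotten in the batch script.
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
    echo "WARNING: no GPU visible - did you forget '#SBATCH --gres=gpu:1'?"
else
    echo "GPU(s) visible: ${CUDA_VISIBLE_DEVICES}"
fi
```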
Also note that the home directory of your login node is accessible from your job scripts. If you create a file `/home/opc/myfile.txt`, you can access it from your script like so:
```bash
#!/bin/bash
#SBATCH --job-name=singlecpu
#SBATCH --partition min

# Your script goes here
sleep 30
cat /home/opc/myfile.txt
```
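A quick sanity check of this shared home directory (run on the login node; `$HOME` stands in for `/home/opc`, and `myfile.txt` is just an example name): create the file, read it back, and the same `cat` will then also work from inside a batch job.

```shell
# Create a file in the (shared) home directory on the login node...
echo "hello from the login node" > "$HOME/myfile.txt"
# ...and read it back; the identical command works inside a batch job
# because the home directory is shared with the compute nodes.
cat "$HOME/myfile.txt"
# prints: hello from the login node
```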