Singularity is one of the container virtualization programs and thus shares many features with the others. However, there are some important differences between Singularity and the others (such as Docker), since Singularity is explicitly focused on HPC systems. For example, non-root users can run containers without complicated prerequisites, and GPUs can be used easily in Singularity.

Content will be added to this page from time to time.

Container Image

While Singularity has its own container image format, it can also use Docker images (though NOT ALWAYS). For example, Docker images from NVIDIA NGC are available.

[user@ccfep1 ~]$ singularity pull docker://
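As an illustration of pulling an NGC image, the command might look like the following. The repository path and tag below are only an example; check the NGC catalog for the images and tags actually available.

```shell
# Pull a Docker image from NVIDIA NGC and convert it to a local .sif file.
# The image name and tag are an example, not a recommendation.
singularity pull tensorflow-20.12.sif docker://nvcr.io/nvidia/tensorflow:20.12-tf2-py3
```

The resulting .sif file can then be used with `singularity shell` or `singularity exec` as shown elsewhere on this page.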


Building Container Image

Currently, you can't build a Singularity image from a definition file on RCCS (the fakeroot setting is not yet supported). Please use Sylabs, Singularity Hub, or your lab's system to build an image.

Container images prepared by RCCS

Experimentally, we have prepared some container images under the /local/apl/lx/singularity/containers/ directory, where not only the image files (.sif) but also the definition files (.def) are available.

anaconda3-2020.11 (cpu, gpu+tensorflow, gpu+pytorch)

Anaconda3-2020.11 environment on Ubuntu 20.04. You can load an image directly or by using "" in the directory. Due to the nature of the RCCS filesystem, this image may load faster than the Anaconda environment installed by the standard procedure (/local/apl/lx/anaconda*). System files (such as /usr and /lib) come from Ubuntu.

Example (cpu version, bash env):

[user@ccfep3 ~]$ . /local/apl/lx/singularity/containers/anaconda3-2020.11-cpu/

Example (gpu version):

[user@ccgpuv ~]$ singularity shell --nv /local/apl/lx/singularity/containers/anaconda3-2020.11-gpu-pytorch/anaconda3-2020.11-gpu-pytorch.sif


  • These images are academic-only due to the Anaconda channel license.
  • Don't forget --nv when you use GPUs.
  • Home directory can be accessed without special argument upon launching.
  • You need to add "--bind /lustre:/lustre,/local:/local,/save:/save" to access /lustre, /local, and /save from the Singularity environment.
    • (those options are specified in "".)
    • Example: singularity shell --nv --bind /lustre:/lustre,/local:/local,/save:/save /local/apl/lx/singularity/containers/anaconda3-2020.11-gpu-pytorch/anaconda3-2020.11-gpu-pytorch.sif
  • You can use these images in job scripts. (It may not be very easy, though...)
  • In case you want to add packages such as OpenCV, run "pip install --user opencv-python" to install it under your home directory after loading the Singularity image.
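As a sketch of the job-script usage mentioned above (the scheduler directives, queue settings, and script name are hypothetical; adapt them to your actual job environment):

```shell
#!/bin/sh
#PBS -l select=1:ngpus=1    # hypothetical resource request; adjust for your site

# Run a Python script inside the prepared container on a GPU node.
# --nv enables GPU access; --bind makes /lustre, /local, and /save visible.
IMG=/local/apl/lx/singularity/containers/anaconda3-2020.11-gpu-pytorch/anaconda3-2020.11-gpu-pytorch.sif

singularity exec --nv \
  --bind /lustre:/lustre,/local:/local,/save:/save \
  "$IMG" python my_script.py   # my_script.py is a placeholder
```

Using `singularity exec` rather than `singularity shell` is convenient in batch jobs, since it runs a single command non-interactively and exits.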



  • A minimal example can be found in /local/apl/lx/singularity364-samples.
    • (Pull a container from NGC (on ccfep), then run the MNIST sample on a GPU node.)
  • For GPUs, the host's GPU driver is used.
    • If the container requires a newer GPU driver than the host provides, it will result in an error.
    • You may need to verify the required and installed GPU driver (or CUDA) versions.
  • It is not easy to run MPI programs, since the MPI version and configuration on the host and in the container need to be consistent.
  • If you want to use applications installed under /local/apl together with those in the container, you may need to add /lustre:/lustre,/local:/local,/homeg:/homeg to the --bind option.
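One way to check the driver/CUDA combination mentioned above is to compare the host and container sides (this assumes the NVIDIA tools are present on the host and in the image; the image name is a placeholder):

```shell
# Host side: the installed driver version and the maximum CUDA version it supports
# are both shown in the nvidia-smi header.
nvidia-smi

# Container side: the CUDA toolkit version shipped inside the image.
singularity exec --nv image.sif nvcc --version   # image.sif is a placeholder
```

If the container's CUDA version is newer than what the host driver supports, the error described above is the likely outcome.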