
Containers (Apptainer/Singularity)

Overview

Apptainer (formerly known as Singularity) is a container platform designed specifically for high-performance computing (HPC) environments. Unlike Docker, Apptainer was developed to run containers securely on shared systems without requiring root privileges.

Apptainer vs Singularity

Singularity was renamed to Apptainer in 2021. The commands and functionalities are the same, and Apptainer maintains full compatibility with Singularity images. On the cluster, you can use both apptainer and singularity commands.

Why use containers in HPC?

  • Reproducibility: Package your entire software environment to ensure it works the same way everywhere
  • Isolation: Run different versions of libraries without conflicts
  • Portability: Move applications between different clusters and systems
  • Docker compatibility: Run Docker images directly or convert them to SIF

Check availability

# Check if Apptainer is available
module avail apptainer

# Or check for Singularity
module avail singularity

# Load module
module load apptainer
# or
module load singularity

# Check version
apptainer --version
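Whether the module provides the apptainer command, the legacy singularity command, or both varies by site. A small sketch (the first_on_path helper is our own, not an Apptainer feature) that picks whichever binary is on the PATH:

```shell
#!/bin/bash
# Return the first command from the argument list found on PATH;
# fail if none of them exists.
first_on_path() {
    for cmd in "$@"; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "$cmd"
            return 0
        fi
    done
    return 1
}

# Prefer apptainer, fall back to singularity; empty if neither is installed.
CONTAINER_CMD=$(first_on_path apptainer singularity || true)
```

Scripts can then invoke "$CONTAINER_CMD" and work unchanged on clusters that still ship only the singularity name.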

Use existing images

Docker Hub images

You can run Docker images directly:

# Execute command in Docker container
apptainer exec docker://ubuntu:22.04 cat /etc/os-release

# Interactive shell
apptainer shell docker://ubuntu:22.04

Convert Docker image to SIF

SIF (Singularity Image Format) is more efficient for HPC use:

# Create .sif file from Docker image
apptainer build ubuntu.sif docker://ubuntu:22.04

# Use the converted image
apptainer shell ubuntu.sif
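When converting several images it helps to derive the .sif filename from the Docker URI consistently. A sketch of one possible convention (the sif_name helper and its image_tag.sif naming are our own, not part of Apptainer):

```shell
#!/bin/bash
# Derive a .sif filename from a docker:// URI, e.g.
# docker://tensorflow/tensorflow:latest-gpu -> tensorflow_latest-gpu.sif
sif_name() {
    local uri="${1#docker://}"          # strip the scheme
    local repo="${uri%%:*}"             # part before the tag
    local tag="${uri##*:}"              # part after the last colon
    [ "$tag" = "$uri" ] && tag="latest" # no tag given: Docker assumes latest
    echo "${repo##*/}_${tag}.sif"       # keep only the final image name
}

# Example: apptainer build "$(sif_name docker://ubuntu:22.04)" docker://ubuntu:22.04
```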

Local image repository

Consult with NOC about available images on the cluster at /scratch/singularity/ or shared locations.

Run containers

Shell mode (interactive)

# Open shell inside container
apptainer shell ubuntu.sif

Interactive session example:

# Request compute node
srun --pty --time=01:00:00 --cpus-per-task=4 bash

# Run container
apptainer shell /scratch/projetos/<your_project>/containers/ubuntu.sif

# You are now inside the container
Apptainer> python --version
Apptainer> exit

Exec mode (run command)

# Execute specific command
apptainer exec ubuntu.sif python script.py

# With arguments
apptainer exec ubuntu.sif python script.py --input data.csv --output results.txt
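Before embedding exec calls in job scripts, it can be useful to check the exact command line that will run. A thin wrapper sketch (run_in_container and the DRY_RUN convention are our own, not an Apptainer feature):

```shell
#!/bin/bash
# Wrap "apptainer exec"; with DRY_RUN=1 it prints the command
# instead of running it, which is handy for checking job scripts.
run_in_container() {
    image="$1"; shift
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo apptainer exec "$image" "$@"
    else
        apptainer exec "$image" "$@"
    fi
}

# DRY_RUN=1 run_in_container ubuntu.sif python script.py
```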

Run mode (execute runscript)

# Execute container's default script
apptainer run ubuntu.sif

Bind directories

By default, Apptainer automatically mounts:

  • Your home directory ($HOME)
  • Current directory ($PWD)
  • /tmp

To access other directories, use --bind or -B:

# Bind specific directory
apptainer shell --bind /scratch/projetos/<your_project> ubuntu.sif

# Bind multiple directories
apptainer shell --bind /scratch,/opt ubuntu.sif

# Bind with different path inside container
apptainer shell --bind /scratch/projetos/<your_project>:/data ubuntu.sif

Configure permanent bind:

# Add to ~/.bashrc
export APPTAINER_BINDPATH="/scratch/projetos/<your_project>,/opt"
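When a job needs several binds, assembling the comma-separated --bind argument in one place keeps the script readable. A minimal sketch (the build_bind helper is illustrative, not an Apptainer feature):

```shell
#!/bin/bash
# Join host[:container] bind pairs into one comma-separated
# string, as expected by --bind and APPTAINER_BINDPATH.
build_bind() {
    local IFS=","
    echo "$*"
}

BIND_ARG=$(build_bind /scratch /opt:/mnt/opt)
# apptainer shell --bind "$BIND_ARG" ubuntu.sif
```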

Use containers in SLURM jobs

Simple job

#!/bin/bash
#SBATCH --job-name=apptainer_job
#SBATCH --output=/scratch/projetos/<your_project>/logs/job_%j.out
#SBATCH --time=02:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G

# Load Apptainer if needed
module load apptainer

# Run script inside container
apptainer exec /scratch/projetos/<your_project>/containers/python.sif \
    python /scratch/projetos/<your_project>/scripts/analysis.py

GPU job

#!/bin/bash
#SBATCH --job-name=gpu_container
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=04:00:00
#SBATCH --output=/scratch/projetos/<your_project>/logs/gpu_%j.out

module load apptainer
module load cuda/11.8

# Use --nv for NVIDIA GPU access
apptainer exec --nv \
    /scratch/projetos/<your_project>/containers/pytorch.sif \
    python train_model.py

MPI job

#!/bin/bash
#SBATCH --job-name=mpi_container
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --time=02:00:00

module load apptainer
module load openmpi

# Run MPI application
mpirun apptainer exec \
    /scratch/projetos/<your_project>/containers/mpi_app.sif \
    /opt/app/bin/mpi_program

Note: in this hybrid model, mpirun on the host launches the processes, so the MPI library inside the container must be ABI-compatible with the host MPI loaded via module load openmpi (ideally the same Open MPI series).

Build your own images

Root privileges required

Building images traditionally requires root access on a local Linux machine. Newer Apptainer releases can also build without root using --fakeroot (when user namespaces are enabled), or you can use a remote build service such as Sylabs Cloud.

Create definition file

example.def:

Bootstrap: docker
From: ubuntu:22.04

%post
    # Update system
    apt-get update && apt-get install -y \
        python3 \
        python3-pip \
        git \
        vim

    # Install Python packages
    pip3 install numpy pandas scipy matplotlib

%environment
    export LC_ALL=C
    export PATH=/usr/local/bin:$PATH

%runscript
    echo "Container ready for use!"
    exec /bin/bash "$@"

%labels
    Author your_name@email.com
    Version v1.0

Build image (on machine with root)

# Build SIF image
sudo apptainer build my_image.sif example.def

# Transfer to cluster
scp my_image.sif user@cluster:/scratch/projetos/<your_project>/containers/

Use Sylabs Cloud (without root)

# Log in to the configured remote build endpoint (account required)
apptainer remote login

# Build remotely
apptainer build --remote my_image.sif example.def

Convert Docker images

If you have a Dockerfile or Docker image:

# Convert local Docker image
apptainer build my_app.sif docker-daemon://my_app:latest

# Convert from Docker Hub
apptainer build tensorflow.sif docker://tensorflow/tensorflow:latest-gpu

# Build a writable sandbox directory (for development/debugging)
apptainer build --sandbox my_app/ docker://my_app:latest

Environment variables

Define variables for container

# Via command line
apptainer exec --env VAR1=value1,VAR2=value2 ubuntu.sif env

# Via APPTAINERENV_ prefix (single quotes keep the host shell
# from expanding $MYVAR before it reaches the container)
export APPTAINERENV_MYVAR="hello"
apptainer exec ubuntu.sif sh -c 'echo $MYVAR'

Control host variables

By default, most host environment variables are passed into the container.

# Start from a clean environment (do not pass host variables)
apptainer exec --cleanenv ubuntu.sif env

# Full isolation, supplying variables from a file instead
apptainer exec --containall --env-file vars.env ubuntu.sif command
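Apptainer injects each APPTAINERENV_* host variable into the container with the prefix stripped. A rough plain-shell emulation of that mapping, for illustration only (Apptainer does this internally; container_env_names is our own helper):

```shell
#!/bin/bash
export APPTAINERENV_MYVAR="hello"

# List the variable names the container will see, by stripping
# the APPTAINERENV_ prefix from matching host variables.
container_env_names() {
    env | while IFS="=" read -r name value; do
        case "$name" in
            APPTAINERENV_*) echo "${name#APPTAINERENV_}=$value" ;;
        esac
    done
}
```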

Complete example: scientific Python environment

1. Create project structure:

mkdir -p /scratch/projetos/<your_project>/containers
mkdir -p /scratch/projetos/<your_project>/scripts

2. Download scientific Python image:

cd /scratch/projetos/<your_project>/containers
apptainer build python_scientific.sif docker://continuumio/miniconda3:latest

3. Python script (scripts/analysis.py):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

print("Analysis started...")
# Your code here

4. SLURM job (submit_analysis.sh):

#!/bin/bash
#SBATCH --job-name=analysis
#SBATCH --output=/scratch/projetos/<your_project>/logs/analysis_%j.out
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4

module load apptainer

apptainer exec \
    --bind /scratch/projetos/<your_project>:/workspace \
    /scratch/projetos/<your_project>/containers/python_scientific.sif \
    python /workspace/scripts/analysis.py

5. Submit job:

sbatch submit_analysis.sh

Best practices

1. Image organization

/scratch/projetos/<your_project>/
├── containers/
│   ├── python_base.sif
│   ├── tensorflow_gpu.sif
│   └── custom_app.sif
├── scripts/
└── logs/

2. Use versioned images

# Good: specify version
apptainer build python.sif docker://python:3.11

# Avoid: using latest (can change)
apptainer build python.sif docker://python:latest
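A build script can enforce this rule automatically. A sketch of a guard that refuses mutable tags (check_pinned is our own helper, not an Apptainer feature):

```shell
#!/bin/bash
# Reject image references whose tag is missing or ":latest",
# since mutable tags make builds non-reproducible.
check_pinned() {
    local ref="${1#docker://}"    # drop the scheme before looking for a tag
    case "$ref" in
        *:latest) return 1 ;;     # explicit latest: mutable
        *:*)      return 0 ;;     # pinned tag
        *)        return 1 ;;     # no tag: Docker defaults to latest
    esac
}

check_pinned docker://python:3.11 && echo "ok to build"   # prints "ok to build"
```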

3. Document your images

Keep a README file or comments in the definition file:

# python_scientific.txt
Image: python_scientific.sif
Based on: continuumio/miniconda3
Packages: numpy, pandas, matplotlib, scipy
Created: 2024-01-15
Usage: Scientific data analysis

4. Check image size

# Check size
ls -lh /scratch/projetos/<your_project>/containers/

# Inspect image
apptainer inspect python.sif

5. Test locally before submitting jobs

# Always test interactively first
srun --pty bash
apptainer exec my_image.sif python script.py

Common problems

Error: "No space left on device"

Problem: Apptainer's image cache (default: ~/.apptainer/cache) or its temporary build space (default: /tmp) has filled up.

Solution:

# Move the cache and temporary build space to your project area
export APPTAINER_CACHEDIR=/scratch/projetos/<your_project>/.apptainer_cache
export APPTAINER_TMPDIR=/scratch/projetos/<your_project>/.apptainer_tmp
mkdir -p $APPTAINER_CACHEDIR $APPTAINER_TMPDIR

# Clean cache
apptainer cache clean

Error: "Permission denied" accessing files

Problem: Directory not mounted in container.

Solution:

# Check active binds
apptainer exec ubuntu.sif mount | grep bind

# Add necessary bind
apptainer exec --bind /scratch ubuntu.sif command

GPU not detected

Problem: --nv flag was not used.

Solution:

# Always use --nv for NVIDIA GPUs
apptainer exec --nv pytorch.sif nvidia-smi

# Check CUDA inside container
apptainer exec --nv pytorch.sif nvcc --version

Image too large

Problem: .sif image too large for /home.

Solution:

# Build in /scratch
cd /scratch/projetos/<your_project>/containers
apptainer build --force large_image.sif docker://image:tag

# Reduce size by cleaning package caches in the definition file's
# %post section (e.g. apt-get clean && rm -rf /var/lib/apt/lists/*)

Apptainer vs Docker

| Feature         | Apptainer           | Docker            |
| --------------- | ------------------- | ----------------- |
| Root privileges | Not required        | Required          |
| HPC security    | Designed for HPC    | Not recommended   |
| Image format    | SIF (single file)   | Layers            |
| Compatibility   | Reads Docker images | Native            |
| MPI/GPU         | Native support      | Limited           |
| Recommended use | HPC clusters        | Local development |

Additional resources

  • Official Apptainer documentation: https://apptainer.org/docs/
  • Apptainer project on GitHub: https://github.com/apptainer/apptainer

Support

If you encounter problems using Apptainer:

  1. Check job error logs
  2. Test the image interactively before submitting jobs
  3. Consult the official documentation
  4. Contact support or email hpc@fieb.org.br