Athena provides a 1PB main storage system based on traditional hard disk drives and a 10TB high-speed cache used exclusively by the POWER8 subsystem.

All shared storage systems use GPFS.


The system is not backed up. Whilst reasonable endeavours are made to avoid data loss, HPC Midlands+ makes no assurance that it will not occur. Users are advised to make their own backups of critical files, not to archive large amounts of intermediate data, and to rely on reproducibility techniques instead.

Transfer to/from Athena

Transfers can be initiated from Athena itself as a pull (e.g. using wget, git, etc.) or a push, or pushed onto Athena from the approved access routes, as required.

Consider using incremental transfer tools such as rsync where possible.
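As a minimal sketch, an rsync command for incremental transfer can be assembled and invoked from a script. The hostname and paths below are hypothetical, and the flag choices (`-a` to preserve metadata, `-z` to compress over the wire, `--partial` to resume interrupted files) are illustrative rather than a site recommendation:

```python
import shlex
import subprocess

def build_rsync_cmd(src, dest, dry_run=False):
    """Build an incremental rsync command line.

    -a preserves permissions/timestamps, -z compresses in transit,
    --partial keeps partially transferred files so they can resume.
    """
    cmd = ["rsync", "-az", "--partial"]
    if dry_run:
        cmd.append("--dry-run")
    cmd += [src, dest]
    return cmd

# Hypothetical source directory and remote path, for illustration only.
cmd = build_rsync_cmd("results/", "user@athena.example.ac.uk:/gpfs/home/user/results/")
print(shlex.join(cmd))
# To actually run the transfer: subprocess.run(cmd, check=True)
```

Because rsync only sends blocks that have changed, re-running the same command after an interrupted or repeated transfer moves far less data than a fresh copy.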

Standard parallel storage

This is mounted on /gpfs on login and compute nodes, and has a quota of 1TB.

Typical aggregate I/O performance for read or write is around 15GB/s.

Fast parallel storage pool (NVMe) for OpenPOWER

This is not yet in full operation but exhibits performance around 22GB/s for
read and write.

Transfer between storage pools

This is not yet in full operation.

Making the most of storage

  • Where possible, avoid creating many small files in jobs/calculations; use container formats such as NetCDF and HDF5 to aggregate many data objects into a single file.
  • Buffer data in memory and flush to disk in large, infrequent writes where you can.
  • Prefer streaming (sequential) I/O over writes to random locations; this combines well with in-memory buffering.
  • Use parallel I/O techniques, e.g. MPI-IO. Whilst this does not change the bottleneck of the GPFS storage system, it spreads the I/O load over many threads or nodes and makes the best use of the aggregate throughput, so your program is not waiting on a single thread to handle all I/O. In some instances I/O can be handed off to non-blocking calls, allowing threads to continue calculating whilst data is flushed to disk.
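The first three points above can be sketched together in a short Python example. This is a stdlib-only illustration of the pattern (for real scientific data you would use NetCDF or HDF5 as noted above): many small records are packed into one file instead of many files, accumulated in an in-memory buffer, and flushed in a single sequential write.

```python
import io
import os
import struct
import tempfile

def write_records(path, records):
    """Pack many small records into ONE file instead of many small files:
    accumulate everything in an in-memory buffer, then flush it to disk
    in a single large, sequential (streaming) write."""
    buf = io.BytesIO()
    for rec in records:
        data = rec.encode("utf-8")
        # Length-prefix each record so it can be recovered later.
        buf.write(struct.pack("<I", len(data)))
        buf.write(data)
    with open(path, "wb") as f:
        f.write(buf.getvalue())  # one sequential write, not many tiny ones

def read_records(path):
    """Stream the records back in order with sequential reads."""
    out = []
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if not header:
                break
            (length,) = struct.unpack("<I", header)
            out.append(f.read(length).decode("utf-8"))
    return out

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "results.bin")
    write_records(path, [f"step-{i}" for i in range(1000)])
    assert read_records(path) == [f"step-{i}" for i in range(1000)]
```

On a parallel filesystem such as GPFS this pattern matters more than on a local disk: one large streaming write uses the aggregate bandwidth well, whereas thousands of tiny files or random-offset writes stress the metadata servers and waste throughput.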