Technical specifications

Artemis is made up of a number of components: the login nodes, compute nodes, storage system and management nodes.

The compute, storage and management nodes are all connected by a high-performance, low-latency interconnect based on Mellanox InfiniBand (IB), a proprietary networking technology.

There is also a 10Gbps Ethernet management network interconnecting the compute, login and management nodes to facilitate installation of the compute nodes and PBS Pro batch job management.
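For orientation, a minimal PBS Pro job script might look like the sketch below. This is illustrative only: the project code, queue name, resource figures and executable are placeholders, so check the scheduler documentation for the values that actually apply on Artemis.

    #!/bin/bash
    #PBS -P MyProject                # project/allocation code (placeholder)
    #PBS -q defaultQ                 # queue name is an assumption
    #PBS -l select=1:ncpus=4:mem=8GB
    #PBS -l walltime=01:00:00

    cd "$PBS_O_WORKDIR"              # run from the directory the job was submitted from
    ./my_program                     # placeholder executable

Such a script would typically be submitted with qsub (e.g. qsub myjob.pbs) and monitored with qstat.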

Standard Compute Nodes


There are 56 Standard Haswell compute nodes and 80 Standard Broadwell compute nodes, each based on a dual-socket server.

The key features of the Haswell nodes are as follows:

Attribute                   Value
Base vendor model           Dell PowerEdge R630 Server
CPU model                   Intel Xeon E5-2680 V3
CPU generation              Haswell
Number cores per node       24 (2 x 12)
Resident RAM                128GB (8 x 16GB) DDR3 DIMMs
Disk storage                2 x 1TB 7k NL-SAS in RAID 1
Number 10Gbps interfaces    2
Number 1Gbps interfaces     2
InfiniBand interface        FDR InfiniBand

The key features of the Broadwell nodes are as follows:

Attribute                   Value
Base vendor model           Dell PowerEdge C6320 Server
CPU model                   Intel Xeon E5-2697A V4
CPU generation              Broadwell
Number cores per node       32 (2 x 16)
Resident RAM                128GB (8 x 16GB) DDR3 DIMMs
Disk storage                2 x 1.2TB 10k SAS in RAID 1
Number 10Gbps interfaces    2
Number 1Gbps interfaces     2
InfiniBand interface        FDR InfiniBand
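As a non-authoritative sketch, a PBS Pro resource request for a whole standard node might look like the lines below. The memory figures are assumptions, since schedulers typically expose slightly less than the 128GB physically fitted in each node.

    # One full Haswell node: 24 cores
    #PBS -l select=1:ncpus=24:mem=120GB

    # One full Broadwell node: 32 cores
    #PBS -l select=1:ncpus=32:mem=120GB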

Management and Control Nodes


The management nodes are accessible to the vendor’s system administrators only and are used to manage workflow within the cluster.

These nodes consist of two Dell R630 servers, each with dual 16-core CPUs and 512 GB of memory, backed by a Dell Compellent storage subsystem and running VMware ESXi 6 virtualisation.

High Memory Compute Nodes


There are three very High Memory compute nodes, each based on a quad-socket server with 6 TB of memory. The details of these nodes are shown below:

Attribute                   Value
Base vendor model           Dell PowerEdge R930 Server
CPU model                   Intel Xeon E7-8860 V3
CPU generation              Haswell
Number cores per node       64 (4 x 16)
Resident RAM                6TB (96 x 64GB) DDR4 DIMMs
Disk storage                2 x 200GB 12 Gbps SAS SSD, 5 x 2TB 12 Gbps SAS SSD
Number 10Gbps interfaces    2
Number 1Gbps interfaces     2
InfiniBand interface        FDR InfiniBand

There are two High Memory compute nodes, each based on a dual-socket server with 512 GB of memory. The details of these nodes are shown below:

Attribute                   Value
Base vendor model           Dell PowerEdge R630 Server
CPU model                   Intel Xeon E5-2680 V3
CPU generation              Haswell
Number cores per node       24 (2 x 12)
Resident RAM                512GB (16 x 32GB) DDR4 DIMMs
Disk storage                4 x 1TB 7k NL-SAS in RAID 10
Number 10Gbps interfaces    2
Number 1Gbps interfaces     2
InfiniBand interface        FDR InfiniBand
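A hedged sketch of how these nodes might be requested through PBS Pro is shown below; the memory figures and the existence of a dedicated high-memory queue are assumptions about the local configuration, not confirmed Artemis settings.

    # One of the 512GB high-memory nodes (24 cores)
    #PBS -l select=1:ncpus=24:mem=500GB

    # One of the 6TB nodes (64 cores); the "highmem" queue name is hypothetical
    #PBS -q highmem
    #PBS -l select=1:ncpus=64:mem=5900GB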

GPU Compute Nodes


There are 5 GPU compute nodes, based on Dell PowerEdge R730 servers. Each is fitted with dual 12 core CPUs and 2 NVIDIA K40 GPUs, giving 10 GPUs in total.
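A GPU job on these nodes would typically be requested through PBS Pro along the lines of the sketch below; the ngpus resource name, the memory figure and any GPU-specific queue are assumptions about the local scheduler configuration.

    # Half a GPU node: 12 cores and one of its two K40 GPUs
    #PBS -l select=1:ncpus=12:ngpus=1:mem=60GB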

High Performance File System


Artemis has a high-performance storage solution that is optimised for I/O and built on the Intel Enterprise Edition for Lustre (Lustre 2.5) file system. The components of the storage system and the compute and login nodes are connected via FDR InfiniBand.

The high-performance file system provides the home, scratch and project areas used by applications running on the cluster.
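Lustre ships with the lfs client utility for inspecting and tuning how files are laid out across the storage targets. The commands below are a generic sketch; the /scratch and /project paths and the MyProject name are placeholders rather than confirmed Artemis mount points.

    # Show how an existing file or directory is striped across Lustre OSTs
    lfs getstripe /scratch/MyProject/output.dat

    # Stripe a directory over 4 OSTs before writing large shared files into it
    lfs setstripe -c 4 /scratch/MyProject/big_files

    # Report quota and usage for a project group
    lfs quota -g MyProject /project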

Mellanox FDR InfiniBand


All nodes are connected using the FDR InfiniBand interconnect, which provides low-latency, high-bandwidth communication between compute nodes for MPI traffic.

All compute nodes, login nodes and the Lustre filesystem are connected via 56Gbps FDR InfiniBand in a 2:1 blocking configuration.
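On a node with the standard InfiniBand diagnostic tools installed, the active link rate can be checked from the command line; an FDR port reports a rate of 56 Gb/s (4X FDR). Output formats vary between tool versions, so treat this as a generic sketch.

    # Summarise the local InfiniBand ports, including the active rate
    ibstat

    # Shorter status view; an FDR link shows "rate: 56 Gb/sec (4X FDR)"
    ibstatus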

Ethernet Networking


Redundant 10Gbps switched connections are available between the MPLS switches provided by the University and the Ethernet fabric. The 10Gbps Ethernet provides access to the login nodes from AARNet, carries the management network, and provides out-of-band (OOB) connectivity to the compute nodes.