High Performance (Spiedie) Computing

Binghamton University

The High-Performance Computing Cluster, aptly named "Spiedie," is a 2,744-compute-core cluster housed in the Thomas J. Watson College of Engineering and Applied Science's data center in the Innovative Technology Complex. This research facility offers computing capabilities for researchers across Binghamton University.

Raw Stats

  • 20 Core 128GB Head Node
  • 292TB Available FDR InfiniBand-Connected NFS Storage Node
  • 129 Compute Nodes
  • 2744 native compute cores
  • 8 NVIDIA P100 GPUs
  • 40/56Gb/s InfiniBand to all nodes
  • 1GbE to all nodes for management and OS deployment

Since its initial deployment, the Spiedie cluster has gone through several expansions, growing from 32 compute nodes to 129 as of February 2024. Most of these expansions came from individual researchers' grant awards; those researchers recognized the cluster's importance to advancing their work and helped grow this valuable resource.

Watson College continues to pursue opportunities to enhance the Spiedie cluster and to extend its reach to researchers in other transdisciplinary areas. Support for the cluster has come from Watson College and from researchers in the Chemistry, Computer Science, Electrical and Computer Engineering, Mechanical Engineering, and Physics departments.

Head Node

The head node is a Red Barn HPC server with dual Intel(R) Xeon(R) E5-2640 v4 CPUs @ 2.40GHz, 128GB of DDR4 RAM, and dedicated SSD storage.

Storage Node

A common file system accessible to all nodes is hosted on a second Red Barn HPC server providing 292TB, with the ability to add further storage drives. Storage is accessed via NFS over a 56Gb/s FDR InfiniBand interface.

Compute Nodes

The 129 compute nodes are a heterogeneous mix of processor architectures, generations, and capacities.

Management and Network

Networking between the head, storage, and compute nodes uses InfiniBand for inter-node communication and Ethernet for management. Bright Cluster Manager provides monitoring and management of the nodes, with SLURM handling job submission, queuing, and scheduling. The cluster currently supports MATLAB jobs of up to 600 cores, along with VASP, COMSOL, R, and almost any *nix-based application.
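
Since SLURM handles all job submission and scheduling, work is normally handed to the cluster as a batch script submitted with sbatch. Below is a minimal sketch of driving that submission from Python; the resource requests and the my_solver binary are illustrative assumptions, not Spiedie-specific settings.

#!/usr/bin/env python3
"""Minimal sketch: build a SLURM batch script and submit it with sbatch."""
import subprocess
import tempfile

# Illustrative resource requests; adjust to the queue limits and policies that apply.
BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --ntasks=40              # requested cores (illustrative)
#SBATCH --time=24:00:00          # wall time, hh:mm:ss (illustrative)
#SBATCH --output=example_%j.out  # %j expands to the SLURM job ID

srun ./my_solver input.dat       # hypothetical application binary
"""

def submit(script_text: str) -> str:
    """Write the batch script to a temporary file and hand it to sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script_text)
        path = f.name
    # On success sbatch prints a line such as "Submitted batch job 123456".
    result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit(BATCH_SCRIPT))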

Cluster Policy

High-Performance Computing at Binghamton University is a collaborative environment where computational resources have been pooled together to form the Spiedie cluster.

Access Options

Subsidized access (No cost)

  • Maximum of 48 cores per faculty group
  • Storage is monitored
  • Higher priority queues have precedence
  • 122 hr wall time
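
Taken together, the 48-core and 122-hour limits bound any single request under the no-cost tier. The short Python sketch below checks a hypothetical request against those bounds; the helper is purely illustrative and not a tool shipped with the cluster.

# Hypothetical helper: check a job request against the subsidized (no-cost) tier limits.
MAX_CORES = 48        # maximum cores per faculty group
MAX_WALL_HOURS = 122  # wall-time limit in hours

def fits_subsidized_tier(cores: int, wall_hours: float) -> bool:
    """Return True if the request stays within the no-cost tier limits."""
    return cores <= MAX_CORES and wall_hours <= MAX_WALL_HOURS

print(fits_subsidized_tier(48, 122))  # True: exactly at the limits
print(fits_subsidized_tier(64, 48))   # False: exceeds the 48-core cap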

Yearly subscription access 

  • $1,675/year per faculty research group
  • Queue core restrictions are removed
  • Queued ahead of lower-priority jobs
  • Fair-share queue enabled
  • Storage is monitored
  • 122 hr wall time
  • Access is granted per research group

Condo access

Purchase your own nodes to integrate into the cluster

  • High priority on your nodes
  • Fair-share access to other nodes
  • No limits on job submission to your nodes
  • Storage is monitored
  • Your nodes are accessible to others when not in use
  • ~$200/node for annual support and maintenance
  • MATLAB users: ~$12/worker (core) for annual MDCS support

Watson IT will assist with quoting, acquisition, integration and maintenance of purchased nodes.  For more information on adding nodes to the Spiedie cluster, email Phillip Valenta.