Slurm Workload Manager
| Stable release | 15.08 |
| --- | --- |
| Development status | Active |
| Written in | C |
| Operating system | Linux, AIX, BSDs, Mac OS X, Solaris |
| Type | Job scheduler for clusters and supercomputers |
| License | GNU General Public License |
| Website | slurm.schedmd.com |
The Slurm Workload Manager (formerly known as the Simple Linux Utility for Resource Management, or SLURM), or Slurm for short, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job such as an MPI application) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending jobs.
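These three functions map directly onto an ordinary batch submission. The following is a minimal sketch of a job script for Slurm's sbatch command; the job name, partition name, and program are hypothetical placeholders, while the #SBATCH directives and the srun launcher are standard Slurm interfaces.

```bash
#!/bin/bash
#SBATCH --job-name=mpi_hello       # hypothetical name, shown in the job queue
#SBATCH --nodes=4                  # allocate access to four compute nodes
#SBATCH --ntasks-per-node=16       # run 16 MPI tasks on each node
#SBATCH --time=00:30:00            # release the allocation after at most 30 minutes
#SBATCH --partition=batch          # hypothetical partition (queue) name

# srun starts and monitors the parallel job on the allocated nodes
srun ./mpi_hello
```

Submitting the script with `sbatch jobscript.sh` places it in the pending queue; Slurm starts the job once the requested nodes become available, exercising all three functions in turn.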
Slurm is the workload manager on about 60% of the TOP500 supercomputers, including Tianhe-2, which (as of June 2014) was the world's fastest computer.
Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[1]
History
Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[2] Linux NetworX, Hewlett-Packard, and Groupe Bull as a Free Software resource manager. It was inspired by the closed source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama.[3] Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.
As of November 2015, the TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on six of the top ten systems. Systems in the top ten running Slurm include Tianhe-2, a 33.86 petaflop system at NUDT; IBM Sequoia, an IBM Blue Gene/Q with 1.57 million cores and 17.2 petaflops at Lawrence Livermore National Laboratory; Piz Daint, a 7.78 petaflop Cray computer at the Swiss National Supercomputing Centre; Stampede, a 5.17 petaflop Dell computer at the Texas Advanced Computing Center;[4] and Vulcan, a 4.29 petaflop IBM Blue Gene/Q at Lawrence Livermore National Laboratory.[5]
Structure
Slurm's design is very modular, with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits, and workload prioritization. Slurm also works with several meta-schedulers, such as Moab Cluster Suite, Maui Cluster Scheduler, and Platform LSF.
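As an illustration of this modularity, the fragment below sketches a minimal slurm.conf. The cluster, host, and partition names are hypothetical; parameters such as SchedulerType and SelectType are the standard mechanism for selecting among Slurm's plugins.

```
# Minimal slurm.conf sketch (hypothetical cluster, host, and partition names)
ClusterName=mycluster
ControlMachine=head01            # host running the slurmctld controller daemon

SchedulerType=sched/backfill     # backfill scheduling plugin
SelectType=select/cons_res       # consumable-resource allocation plugin
TopologyPlugin=topology/tree     # topology-aware task placement

NodeName=node[01-04] CPUs=16 State=UNKNOWN
PartitionName=batch Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP
```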
Notable features
Notable Slurm features include the following:[citation needed]
- No single point of failure, backup daemons, fault-tolerant job options
- Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
- High performance (up to 1000 job submissions per second and 600 job executions per second)
- Free and open-source software (GNU General Public License)
- Highly configurable with about 100 plugins
- Fair-share scheduling with hierarchical bank accounts
- Preemptive and gang scheduling (time-slicing of parallel jobs)
- Integrated with database for accounting and configuration
- Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
- Advanced reservation
- Idle nodes can be powered down
- Different operating systems can be booted for each job
- Scheduling for generic resources (e.g. graphics processing units)
- Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
- Resource limits by user or bank account
- Accounting for power usage by job
- Support of IBM Parallel Environment (PE/POE)
- Support for job arrays (a sample array script appears after this list)
- Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
- Sophisticated multifactor job prioritization algorithms
- Support for MapReduce+
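As a sketch of the generic-resource and job-array features listed above (the program, file names, and GPU count are hypothetical; the --array and --gres options are standard):

```bash
#!/bin/bash
#SBATCH --array=1-100              # a job array of 100 related tasks
#SBATCH --gres=gpu:2               # generic resources: two GPUs per array task
#SBATCH --output=run_%A_%a.out     # %A = array job ID, %a = array task index

# each array task processes the input file matching its index
./process input_${SLURM_ARRAY_TASK_ID}.dat
```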
The following features were announced for version 14.11 of Slurm, which was released in November 2014:[6]
- Improved job array data structure and scalability
- Support for heterogeneous generic resources
- User options to set the CPU governor
- Automatic job requeue policy based on exit value (sketched after this list)
- Report API use by user, type, count and time consumed
- Communication gateway nodes improve scalability
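The exit-value requeue policy, for example, is controlled from slurm.conf; a minimal sketch, assuming the cluster administrator picks the exit codes (the values shown here are arbitrary):

```
# slurm.conf: requeue jobs automatically based on their exit code
RequeueExit=100                  # requeue and rerun jobs exiting with code 100
RequeueExitHold=101              # requeue but hold jobs exiting with code 101
```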
Supported platforms
While Slurm was originally written for the Linux kernel, the latest version supports many other operating systems, including AIX, BSDs (FreeBSD, NetBSD and OpenBSD), Linux, Mac OS X, and Solaris.[7] Slurm also supports several unique computer architectures, including:
- IBM BlueGene L, P and Q models including the 20 petaflop IBM Sequoia
- Cray XT, XE and Cascade
- Tianhe-2, a 33.9 petaflop system with 32,000 Intel Ivy Bridge chips and 48,000 Intel Xeon Phi chips, with a total of 3.1 million cores
- IBM Parallel Environment
- Anton
License
Slurm is available under the GNU General Public License V2.
Commercial support
In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source and provides development, level-3 commercial support, and training services. Commercial support is also available from Bright Computing, Bull, Cray, and Science + Computing.
References
- ↑ SLURM Platforms