Cluster list

From Molecular Modeling Wiki


Latest revision as of 09:51, 15 June 2016


Overview

Description

Depending on a cluster's hardware and age, there is always a set of typical applications that fit its capabilities. The idea is to run each calculation on a cluster that just fits the job's requirements: running it on a less capable cluster would slow down the calculation or prevent it from running at all, while starting an undemanding job on a highly equipped cluster would waste resources and block jobs that actually need such equipment.

Cluster survey

Name | Year of construction | Year of destruction | Memory size [GB] / core | Disk (scratch) size [GB] | Usage | Access | Owner/manager*
cobalt | 2003 | 2010 | 1 | 120 | simple calculations, student tests, teaching | public |
argon | 2005 | 2011 | 2 | 160 | single-processor calculations with lower requirements | public |
krypton | 2005 | 2011 | 1 - 2 | 160 | single-processor calculations with lower requirements | public |
radon | 2005 | | 2 - 4 | 320 | single-processor calculations with average requirements | public |
palladium | 2004 - 2005 | 2010 | 3 - 6 | 240 - 600 | memory and/or disk space demanding calculations | public |
iridium | 2006 - 2008 | | 2 | 320 - 1000 | general use | restricted | Petr Nachtigall, Ota Bludsky
helium | 2006 - 2008 | | 2 | 250 - 320 | parallel molecular dynamics calculations | restricted | Pavel Jungwirth
francium | 2007 | | 4 | 640 | parallel molecular dynamics calculations with infiniband | restricted | Pavel Jungwirth
lithium | 2006 - 2008 | | 4 - 16 | 1600 - 6000 | highly demanding parallel and/or single-processor calculations with large memory and disk space requirements | restricted | Pavel Hobza
uranium | 2008 | | 2 | 1500 | general use | public |
barium | 2007 - 2009 | | 1 - 2 | 320 - 1500 | mostly parallel molecular dynamics calculations | restricted | Pavel Hobza
thallium | 2007 - 2008 | | 1 - 2 | 1500 | general use | restricted | Pavel Hobza
neon | 2009 | | 2 | 900 | parallel molecular dynamics calculations with infiniband | restricted | Pavel Hobza
erbium | 2009 - 2010 | | 2 - 6 | 900 - 3800 | general use, parallel calculations | restricted | Martin Kabelac
sodium | 2009 - 2010 | | 1.5 | 850 | general use, parallel calculations | restricted | Pavel Jungwirth
vanad | 2010 - 2013 | | 3 - 8 | 450 - 950 | general use, parallel calculations | restricted | Ota Bludsky
zinc | 2010 - 2012 | | 3 - 20 | 1800 - 17000 | general use, parallel calculations (node z55 with large memory and disk) | restricted | Pavel Hobza
oxygen | 2010 - 2013 | | 1 - 2 | 850 | single-node parallel calculations | restricted | Pavel Jungwirth
gallium | 2011 - 2013 | | 2 | 900 | general use, parallel calculations | restricted | Pavel Hobza
magnesium | 2011 - 2013 | | 1 - 8 | 900 - 3600 | general use, parallel calculations, CUDA | restricted | Pavel Jungwirth
platinum | 2016 | | 2 - 6 | 1000 - 5400 | general use, parallel calculations, CUDA | restricted | Pavel Hobza


*Please contact this person to request an account on a restricted-access cluster.

Clusters

The following text provides detailed information about the clusters and their nodes. The tables show the resources available for calculations, which are usually smaller than the total resources mentioned in the overview above: some of each node's memory (RAM) must always remain reserved for operating-system tasks.
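As an illustration of how a job is matched to a queue from the tables below, a minimal SGE submission script might look like the following sketch. The queue name is taken from the vanad table; the parallel-environment name "mpi", the scratch path, and the executable name are assumptions — check the local documentation of your cluster before use.

```shell
#!/bin/bash
# Minimal SGE job script (illustrative sketch only).
#$ -N test_job        # job name
#$ -q vq-12           # target queue (12 cores/node on vanad; see table below)
#$ -pe mpi 12         # request 12 slots within a single node (PE name assumed)
#$ -cwd               # run in the submission directory

# Use the node-local scratch disk for temporary files (path assumed).
export TMPDIR=${TMPDIR:-/scratch/$USER}

mpirun -np 12 ./my_calculation   # hypothetical executable
```

The script would be submitted with `qsub script.sh`; requesting no more slots than one node provides matters here, since most of the clusters below support MPI only within a single node.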

radon

  • Total nodes: 46
  • Total cores: 46
  • Parallel calculations: no
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
r01 - r24 | 24 | rq4 | AMD Athlon 64 3500+ | 1 | 3.6 GB | 320 GB
r25 - r46 | 22 | rq2 | AMD Athlon 64 3500+ | 1 | 1.6 GB | 320 GB


iridium

  • Total nodes: 50
  • Total cores: 120
  • Parallel calculations: yes, special setup
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
i01 - i34 | 34 | <unknown> | Intel Core2 6600 2.40GHz | 2 | 1.6 GB | 320 GB
i35 - i40 | 6 | <unknown> | Intel Core2 Duo E6750 2.66GHz | 2 | 1.6 GB | 320 GB
i41 - i42 | 2 | <unknown> | Intel Core2 Quad Q6600 2.40GHz | 4 | 1.8 GB | 1000 GB
i43 - i50 | 8 | <unknown> | Intel Core2 Quad Q9550 2.83GHz | 4 | 1.8 GB | 1000 GB


helium

  • Total nodes: 16
  • Total cores: 64
  • Parallel calculations: yes, across nodes, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
h01 - h06 | 6 | hq | AMD Opteron Dual Core 275 | 4 | 1.8 GB | 320 GB
h07 - h16 | 10 | hq | AMD Opteron Dual Core 2216 | 4 | 1.8 GB | 250 GB


francium

  • Total nodes: 24
  • Total cores: 96
  • Parallel calculations: yes, across nodes, MPI, infiniband
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
f01 - f24 | 24 | fq | AMD Opteron Dual Core 2214 | 4 | 3.8 GB | 640 GB


lithium

  • Total nodes: 16
  • Total cores: 82
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
l01 - l04 | 4 | lq-4-8 | AMD Opteron Dual Core 285 | 4 | 7.8 GB | 3000 GB
l05 - l08 | 4 | lq-2-16 | AMD Opteron 256 | 2 | 15.4 GB | 3000 GB
l09 | 1 | lq-2-16 | AMD Opteron 854 | 2 | 15.4 GB | 1600 GB
l10 - l16 | 7 | lq-8-4 | Intel Xeon E5430 2.66GHz | 8 | 3.8 GB | 6000 GB


uranium

  • Total nodes: 24
  • Total cores: 192
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
u01 - u14 | 14 | columbusq | Intel Xeon E5430 2.66GHz | 8 | 1.8 GB | 1500 GB
u15 - u24 | 10 | uq | Intel Xeon E5430 2.66GHz | 8 | 1.8 GB | 1500 GB


barium

  • Total nodes: 28
  • Total cores: 204
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
b01 - b02 | 2 | bq-4-1 | Intel Core2 Quad 2.40GHz | 4 | 0.8 GB | 320 GB
b03 - b05 | 3 | bq-4-2 | Intel Core2 Quad 2.40GHz | 4 | 1.8 GB | 320 GB
b06 - b14 | 9 | bq-8-2 | Intel Xeon E5345 2.33GHz | 8 | 1.8 GB | 1320 GB
b15 - b19 | 5 | bq-8-2 | Intel Xeon E5430 2.66GHz | 8 | 1.8 GB | 1500 GB
b20 - b24 | 5 | bq-8-2-f | Intel Xeon E5430 2.66GHz | 8 | 1.8 GB | 1500 GB
b25 - b28 | 4 | bq-8-1 | Intel Xeon E5520 2.27GHz | 8 | 0.8 GB | 1500 GB


thallium

  • Total nodes: 19
  • Total cores: 124
  • Parallel calculations: yes, across multiple nodes, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
t01 - t07 | 7 | tq-4-2 | Intel Core2 Quad 2.40GHz | 4 | 1.8 GB | 1500 GB
t08 - t14 | 7 | tq-8-1 | Intel Xeon E5345 2.33GHz | 8 | 0.8 GB | 1500 GB
t15 - t19 | 5 | tq-4-2 | Intel Xeon E5430 2.66GHz | 8 | 0.8 GB | 1500 GB


neon

  • Total nodes: 24
  • Total cores: 192
  • Parallel calculations: yes, across nodes, MPI, infiniband
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
n01 - n24 | 24 | nq | Intel Xeon E5430 2.66GHz | 8 | 1.8 GB | 900 GB


erbium

  • Total nodes: 7
  • Total cores: 26
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
e01 | 1 | eq-6-4 | Intel Core i7 970 @ 3.2 GHz | 6 | 3.8 GB | 3600 GB
e02 - e03 | 2 | eq-4-2 | Intel Core2 Extreme X9650 3.00GHz | 4 | 1.8 GB | 900 GB
e04 | 1 | eq-4-2 | Intel Core2 Quad Q6600 2.40GHz | 4 | 1.8 GB | 900 GB
e05 - e07 | 3 | eq-4-4 | Intel Core2 Quad Q9650 3.00GHz | 4 | 3.8 GB | 1800 GB
e08 - e10 | 3 | eq-4-6 | Intel Core i7 950 @ 3.07 GHz | 4 | 5.8 GB | 3600 GB


sodium

  • Total nodes: 46
  • Total cores: 368
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
s01 - s24 | 24 | sq | Intel Xeon E5530 2.40GHz | 8 | 1.3 GB | 850 GB
s25 - s46 | 22 | sq | Intel Xeon E5620 2.40GHz | 8 | 1.3 GB | 850 GB


vanad

  • Total nodes: 23
  • Total cores: 188
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
v01 - v04 | 4 | vq-8 | Intel Xeon E5620 2.40GHz | 8 | 2.8 GB | 450 GB
v05 - v13 | 9 | vq-4 | Intel Xeon E5620 2.40GHz | 4 | 2.8 GB | 450 GB
v14 - v21 | 8 | vq-12 | Intel Xeon E5645 2.40GHz | 12 | 7.8 GB | 950 GB
v22 - v23 | 2 | vq-12 | Intel Xeon E5-2640 2.50GHz | 12 | 5.2 GB | 1900 GB


zinc

  • Total nodes: 56
  • Total cores: 480
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
z01 - z26 | 26 | zq-8-3 | Intel Xeon E5620 2.40GHz | 8 | 2.8 GB | 1800 GB
z27 - z36 | 10 | zq-8-6-large | Intel Xeon E5620 2.40GHz | 8 | 5.8 GB | 5400 GB
z37 - z54 | 18 | zq-8-6 | Intel Xeon E5630 2.53GHz | 8 | 5.8 GB | 3600 GB
z55 - z56 | 2 | zq-24-20 | Intel Xeon E7-4807 1.87GHz | 24 | 21 GB | 17000 GB


oxygen

  • Total nodes: 13
  • Total cores: 640
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
o01 - o02 | 2 | oq-32 | AMD Opteron 6134 | 32 | 2 GB | 850 GB
o03 - o04 | 2 | oq-48 | AMD Opteron 6172 | 48 | 1.5 GB | 850 GB
o05 - o07 | 3 | oq-32 | AMD Opteron 6134 | 32 | 2 GB | 850 GB
o08 - o13 | 6 | oq-64 | AMD Opteron 6272 | 64 | 1 GB | 850 GB


gallium

  • Total nodes: 60
  • Total cores: 632
  • Parallel calculations: yes, within a single node, MPI
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
g01 - g22 | 22 | gq-8-2 | Intel Xeon E5630 2.53GHz | 8 | 1.8 GB | 900 GB
g23 - g60 | 38 | gq-12-2 | Intel Xeon E5645 2.40GHz | 12 | 1.8 GB | 900 GB


magnesium

  • Total nodes: 36
  • Total cores: 408
  • Parallel calculations: yes, within a single node, MPI, CUDA
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node | Note
m01 - m02 | 2 | mq-8-3-cuda | Intel Xeon CPU E5640 2.67GHz | 8 | 2.8 GB | 900 GB | 2x nVidia Tesla M2090 (2x512 cores)
m03 - m04 | 2 | mq-8-8 | Intel Xeon CPU X5647 2.93GHz | 8 | 7.8 GB | 3600 GB |
m05 - m34 | 30 | mq-12-1 | Intel Xeon CPU E5645 2.40GHz | 12 | 1.2 GB | 900 GB |
m35 - m36 | 2 | mq-8-3-cuda | Intel Xeon CPU X5647 2.93GHz | 8 | 2.8 GB | 900 GB | 2x nVidia Tesla M2090 (2x512 cores)
(m37 - m40) | 4 | mq-8-3-test | Intel Xeon CPU E5620 2.40GHz | 8 | 5.8 GB | 1800 GB | 2x nVidia Tesla M2090 (2x512 cores)
m41 - m46 | 6 | mq-12-2-cuda-k20 | Intel Xeon CPU E5-2640 2.50GHz | 12 | 1.8 GB | 900 GB | 2x nVidia Tesla K20m (2x2496 cores)


platinum

  • Total nodes: 20
  • Total cores: 256
  • Parallel calculations: yes, within a single node, MPI, CUDA
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node | Note
p01 - p04 | 4 | pq-cuda | Intel Xeon CPU E5620 2.40GHz | 8 | 5.8 GB | 1800 GB | 2x nVidia Tesla M2090 (2x512 cores)
p05 - p12 | 8 | pq-16-6 | Intel Xeon CPU E5-2630 v3 2.40GHz | 16 | 5.8 GB | 5400 GB |
p13 - p20 | 8 | pq-12-2 | Intel Xeon CPU E5-2620 v3 2.40GHz | 12 | 2.6 GB | 900 GB |


Closed

The following clusters have been decommissioned and dismantled.

cobalt

  • Total nodes: 40
  • Total cores: 40
  • Parallel calculations: no
  • Queuing system: DQS


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
c01 - c40 | 40 | c01_a - c40_a | Intel Pentium 4 2.80GHz | 1 | 800 MB | 120 GB


palladium

  • Total nodes: 24
  • Total cores: 48
  • Parallel calculations: yes, within a single node (in pqXp queues, max. 2 processors per job)
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
p01 - p04 | 4 | pqos | AMD Opteron 244 | 2 | 2.6 GB | 240 GB
p05 - p14 | 10 | pqop | AMD Opteron 244 | 2 | 2.6 GB | 240 GB
p15 - p18 | 4 | pqns | AMD Opteron 250 | 2 | 5.6 GB | 600 GB
p19 - p24 | 6 | pqnp | AMD Opteron 250 | 2 | 5.6 GB | 600 GB


argon

  • Total nodes: 44
  • Total cores: 44
  • Parallel calculations: no
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
a01 - a44 | 44 | aq | Intel Pentium 4 3.40 GHz | 1 | 1.8 GB | 160 GB


krypton

  • Total nodes: 44
  • Total cores: 44
  • Parallel calculations: no
  • Queuing system: SGE


Nodes | # of nodes | Queue | CPU | Cores/node | Available RAM/core | Scratch/node
k01 - k16 | 16 | kq2 | Intel Pentium 4 3.40 GHz | 1 | 1.8 GB | 160 GB
k17 - k44 | 28 | kq1 | Intel Pentium 4 3.40 GHz | 1 | 0.8 GB | 160 GB


Totals

The totals below only include counts from clusters that are still in operation.

Total # of nodes | Total # of processors | Total # of cores | Total RAM | Total disk space
501 | 904 | 4056 | 11.7 TB | 800 disks / 700 TB

