The Little Green500 List - November 2013

The November 2013 release of the Green500 list was announced today at the SC|13 conference in Denver, Colorado, USA. Continuing the trend from previous years, heterogeneous supercomputing systems completely dominate the top 10 spots of the Green500. A heterogeneous system uses computational building blocks that consist of two or more types of “computing brains.” These types of computing brains include traditional processors (CPUs), graphics processing units (GPUs), and co-processors. In this edition of the Green500, one system smashes through the barrier of four billion floating-point operations per second (4 gigaflops) per watt.

TSUBAME-KFC, a heterogeneous supercomputing system developed at the Tokyo Institute of Technology (TITech) in Japan, tops the list with an efficiency of 4.5 gigaflops/watt. Each computational node within TSUBAME-KFC consists of two Intel Ivy Bridge processors and four NVIDIA Kepler GPUs. In fact, all systems in the top ten of the Green500 use a similar architecture, i.e., Intel CPUs combined with NVIDIA GPUs. Wilkes, a supercomputer housed at Cambridge University, takes the second spot. The third position is filled by the HA-PACS TCA system at the University of Tsukuba. Of particular note, this list also sees two petaflop systems, each capable of computing over one quadrillion operations per second, achieve an efficiency of over 3 gigaflops/watt, namely Piz Daint at Swiss National Supercomputing Center and TSUBAME 2.5 at Tokyo Institute of Technology. Thus, Piz Daint is the greenest petaflop supercomputer on the Green500. As a point of reference, Tianhe-2, the fastest supercomputer in the world according to the Top500 list, achieves an efficiency of 1.9 gigaflops/watt.

This list marks a number of “firsts” for the Green500. It is the first time that a supercomputer has broken through the 4 gigaflops/watt barrier. Second, it is the first time that all of the top 10 systems on the Green500 are heterogeneous systems. Third, it is the first time that the average measured power consumed by the systems on the Green500 has dropped with respect to the previous edition of the list. “A decrease in the average measured power coupled with an overall increase in performance is an encouraging step along the trail to exascale,” noted Wu Feng of the Green500. Fourth, assuming that TSUBAME-KFC’s energy efficiency could be maintained for an exaflop system, it is the first time that an extrapolation to an exaflop supercomputer has dropped below 300 megawatts (MW), specifically 222 MW. “This 222-MW power envelope is still a long way away from DARPA’s target of an exaflop system in the 20-MW power envelope,” says Feng.
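
As a rough check on the arithmetic behind these figures, the minimal sketch below (in Python) computes the gigaflops-per-watt metric and the 222-MW exaflop extrapolation quoted above. Tianhe-2’s roughly 33.86 petaflops and 17.8 megawatts are approximate figures from its public TOP500 listing, not values stated in this announcement.

# Minimal sketch of the arithmetic behind the Green500 efficiency metric
# and the exaflop extrapolation quoted above. Tianhe-2's Rmax (~33.86
# petaflops) and measured power (~17.8 MW) are approximate public TOP500
# figures, not values stated in this announcement.

def gflops_per_watt(rmax_gflops, power_watts):
    """Green500 efficiency: sustained LINPACK performance per watt."""
    return rmax_gflops / power_watts

# Tianhe-2: ~33.86 PFLOPS at ~17.8 MW -> roughly 1.9 gigaflops/watt.
tianhe2_efficiency = gflops_per_watt(rmax_gflops=33.86e6, power_watts=17.8e6)

# Power needed for one exaflop (1e9 gigaflops) at TSUBAME-KFC's
# measured 4.5 gigaflops/watt, expressed in megawatts.
exaflop_power_mw = 1e9 / 4.5 / 1e6

print("Tianhe-2 efficiency: %.1f gigaflops/watt" % tianhe2_efficiency)   # ~1.9
print("Exaflop at 4.5 gigaflops/watt: %.0f MW" % exaflop_power_mw)       # ~222
print("DARPA 20-MW target is about %.0fx lower" % (exaflop_power_mw / 20))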

Starting with this release, the Little Green500 list includes only machines with power values submitted directly to the Green500; more than 400 systems have submitted directly to the Green500 over the past few years. As in previous years, the Little Green500 list has better overall energy efficiency, on average, than the Green500 list.
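
To make the selection rule concrete, here is a purely illustrative sketch of how such a filtered list might be assembled and its average efficiency compared; the record layout and field names (name, mflops_per_watt, directly_submitted) are assumptions for this example, not the Green500’s actual data schema.

# Illustrative only: keep systems whose power figure was submitted
# directly to the Green500, then compare average efficiency. The field
# names below are a hypothetical schema invented for this sketch.

systems = [
    {"name": "System A", "mflops_per_watt": 875.33, "directly_submitted": True},
    {"name": "System B", "mflops_per_watt": 457.93, "directly_submitted": False},
    {"name": "System C", "mflops_per_watt": 643.69, "directly_submitted": True},
]

little_list = [s for s in systems if s["directly_submitted"]]

def average_efficiency(entries):
    return sum(s["mflops_per_watt"] for s in entries) / len(entries)

print("All systems: %.2f MFLOPS/W on average" % average_efficiency(systems))
print("Little list: %.2f MFLOPS/W on average" % average_efficiency(little_list))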

Earlier this year, the Green500 adopted new methodologies for measuring the power of supercomputing systems, providing a more accurate representation of the energy efficiency of large-scale systems. In June 2013, the Green500 formally adopted measurement rules (a.k.a. “Level 1” measurements), developed in cooperation with the Energy-Efficient High-Performance Computing Working Group (EE HPC WG). Power-measurement methodologies with higher precision and accuracy were also developed as part of this effort (a.k.a. “Level 2” and “Level 3” measurements). With growing support for and interest in the energy efficiency of large-scale computing systems, the Green500 received two more Level 2 and Level 3 submissions than in the previous edition of the list. Of particular note, Piz Daint, the greenest petaflop supercomputer in the world, submitted the highest-quality Level 3 measurement.

The Little Green500 List

Listed below are the energy-efficient supercomputers of the November 2013 Little Green500 list, ranked 101 through 200.

Green500 Rank   MFLOPS/W   Site*   Computer*   Total Power (kW)
101 875.33 Navy DSRC Haise - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 373.97
102 875.33 Navy DSRC Kilrain - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 373.97
103 875.32 Army Research Laboratory DoD Supercomputing Resource Center (ARL DSRC) Hercules - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 347.26
104 875.32 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 148.19
105 875.32 University of Miami iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 100.49
106 874.02 Saudi Aramco Makman - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 505.50
107 869.83 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 181.31
108 869.83 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 181.31
109 869.83 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 181.31
110 869.81 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 148.60
111 869.81 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 148.60
112 869.81 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 148.60
113 865.90 Geoscience iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 108.15
114 865.90 Geoscience iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 108.15
115 865.89 Petroleum Company xSeries x3650 Cluster, Xeon E5649 6C 2.530GHz, Infiniband, NVIDIA 2090 162.87
116 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
117 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
118 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
119 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
120 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
121 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
122 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
123 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
124 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
125 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
126 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
127 865.88 Geoscience (P) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband, NVIDIA 2090 99.14
128 846.15 Maui High-Performance Computing Center (MHPCC) Riptide - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR (Level 3 measurement data available) 251.20
129 837.69 Technische Universitaet Darmstadt iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 255.20
130 823.83 Financial Securities iDataPlex DX360M4, Xeon E5-2680 8C 2.700GHz, Infiniband 156.06
131 823.80 Aerospace Company iDataPlex DX360M4, Xeon E5-2680 8C 2.700GHz, Infiniband 118.29
132 819.93 Max-Planck-Gesellschaft MPI/IPP iDataPlex DX360M4, Intel Xeon E5-2680v2 10C 2.800GHz, Infiniband FDR 1,260.00
133 816.00 Energy Company (A) BladeCenter HS23 Cluster, Xeon E5-2650 8C 2.000GHz, Infiniband QDR 74.80
134 809.54 National Center for High Performance Computing Formosa 5 - Hybrid Cluster, Xeon X5670 6C 2.930GHz, Infiniband QDR, NVIDIA 2070 111.10
135 806.01 Electronics Company iDataPlex DX360M4, Xeon E5-2690 8C 2.900GHz, Infiniband FDR 172.77
136 787.63 CNRS/IDRIS-GENCI Ada - xSeries x3750 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband FDR 243.69
137 764.22 Instituto Tecnológico y de Energías Renovables S.A. TEIDE-HPC - Fujitsu PRIMERGY CX250 S1, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 358.50
138 732.08 University of California, Los Angeles Dawson2 - HP ProLiant SL390s G7 Xeon X5650, Nvidia M2070, Infiniband QDR 96.00
139 727.66 Electronics xSeries x3650M4 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband 213.82
140 727.66 Electronics xSeries x3650M4 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband 213.82
141 727.66 Electronics xSeries x3650M4 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband 179.71
142 727.66 Electronics xSeries x3650M4 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband 179.71
143 727.65 Petroleum Company xSeries x3550M3 Cluster, Xeon E5-2670 8C 2.600GHz, Gigabit Ethernet 141.44
144 727.65 Petroleum Company xSeries x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband 191.36
145 727.63 Government xSeries x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband 170.98
146 727.63 Government xSeries x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband 170.98
147 706.41 Aerospace Company xSeries x3650 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband QDR 228.28
148 669.12 Petroleum Company x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 285.38
149 669.12 Electronics xSeries x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband 208.42
150 643.69 National Institute for Environmental Studies GOSAT Research Computation Facility - Asterism ID318, Intel Xeon E5530, NVIDIA C2050, Infiniband 117.93
151 637.43 CEA/TGCC-GENCI Curie thin nodes - Bullx B510, Xeon E5-2680 8C 2.700GHz, Infiniband QDR 2,132.00
152 628.13 Lawrence Livermore National Laboratory Edge - Appro GreenBlade Cluster, Xeon X5660 6C 2.80 GHz, Infiniband QDR, NVIDIA 2050 160.00
153 467.73 CeSViMa - Centro de Supercomputación y Visualización de Madrid Magerit - BladeCenter PS702 Express, Power7 3.3GHz, Infiniband 154.00
154 457.94 Environment Canada Power 775, POWER7 8C 3.84 GHz, Custom 462.30
155 457.94 Environment Canada Power 775, POWER7 8C 3.84 GHz, Custom 462.30
156 457.93 United Kingdom Meteorological Office Power 775, POWER7 8C 3.836GHz, Custom Interconnect 1,040.18
157 457.93 ECMWF Power 775, POWER7 8C 3.836GHz, Custom Interconnect 1,386.91
158 457.93 ECMWF Power 775, POWER7 8C 3.836GHz, Custom Interconnect 1,386.91
159 457.93 United Kingdom Meteorological Office Power 775, POWER7 8C 3.836GHz, Custom Interconnect 866.82
160 457.93 United Kingdom Meteorological Office Power 775, POWER7 8C 3.836GHz, Custom Interconnect 288.94
161 457.93 IBM Poughkeepsie Benchmarking Center Power 775, POWER7 8C 3.836GHz, Custom Interconnect 390.07
162 449.41 HWW/Universitaet Stuttgart HERMIT - Cray XE6, Opteron 6276 16C 2.30 GHz, Cray Gemini interconnect 1,850.00
163 444.35 DOE/NNSA/LANL Roadrunner - BladeCenter QS22/LS21 Cluster, PowerXCell 8i 3.2 GHz / Opteron DC 1.8 GHz, Voltaire Infiniband 2,345.00
164 441.47 Slovak Academy of Sciences (SAV) Power 775, POWER7 8C 3.836GHz, Custom Interconnect 173.36
165 423.70 IBM Development Engineering Power 775, POWER7 8C 3.836GHz, Custom Interconnect 3,575.63
166 421.74 Geoscience iDataPlex DX360M3, Xeon X5660 6C 2.80 GHz, Infiniband QDR 212.62
167 414.92 University of Southampton xSeries iDataPlex, Xeon E5645 6C 2.400GHz, Infiniband 228.22
168 410.51 Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw Boreas - Power 775, POWER7 8C 3.836GHz, Custom 156.70
169 410.44 IBM Poughkeepsie Benchmarking Center Power 775, Power7 3.836 GHz 172.40
170 404.45 CLUMEQ - McGill University Guillimin - iDataPlex DX360M3, Xeon 2.66, Infiniband 337.00
171 400.68 Taiwan National Center for High-performance Computing ALPS - Acer AR585 F1 Cluster, Opteron 12C 2.2GHz, QDR infiniband 442.00
172 378.77 King Abdullah University of Science and Technology Shaheen - Blue Gene/P Solution 504.00
173 378.77 EDF R&D Frontier2 BG/L - Blue Gene/P Solution 252.00
174 378.76 IDRIS Blue Gene/P Solution 315.00
175 377.99 Rice University BlueGene/P, PowerPC 450 4C 850 MHz, Proprietary 189.00
176 368.62 Automotive BladeCenter HS22 Cluster, WM Xeon 6-core 2.93 GHz, Infiniband 240.41
177 368.62 Automotive BladeCenter HS22 Cluster, WM Xeon 6-core 2.93 GHz, Infiniband 240.41
178 366.58 DOE/NNSA/LLNL Dawn - Blue Gene/P Solution 1,134.00
179 363.98 DOE/SC/Argonne National Laboratory Intrepid - Blue Gene/P Solution 1,260.00
180 361.77 RQCHP/Calcul Québec/Compute Canada Université de Montréal iDataPlex DX360M3, Xeon 2.66, Infiniband 197.80
181 357.50 Classified BladeCenter HS22 Cluster, Xeon QC X55xx 2.26 GHz, Infiniband 184.00
182 354.32 Lockheed Martin BladeCenter HS22 Cluster (WM), Xeon X5650 6C 2.660GHz, Infiniband QDR 260.62
183 354.29 Electronics BladeCenter HS22 Cluster, WM Xeon 6-core, Xeon X5650 6C 2.66 GHz, Infiniband 203.90
184 354.26 Electronics BladeCenter HS22 Cluster, WM Xeon 6-core 2.66 GHz, Infiniband 175.70
185 354.26 Electronics BladeCenter HS22 Cluster, WM Xeon 6-core 2.66 GHz, Infiniband 175.70
186 354.24 Electronics BladeCenter HS22 Cluster (WM), Xeon X5650 6C 2.660GHz, Infiniband QDR 211.30
187 354.24 Electronics BladeCenter HS22 Cluster (WM), Xeon X5650 6C 2.660GHz, Infiniband QDR 211.30
188 332.62 Leibniz Rechenzentrum SuperMIG - BladeCenter HX5, Xeon E7-4870 10C 2.40 GHz, Infiniband QDR 195.00
189 330.98 IBM Development Engineering iDataPlex DX360M3, Xeon X5670 6C 2.93 GHz, Infiniband QDR 400.50
190 330.98 Vestas Wind Systems A/S iDataPlex DX360M3, Xeon X5670 6C 2.93 GHz, Infiniband QDR 489.75
191 330.98 EDF R&D Ivanhoe - iDataPlex, Xeon X56xx 6C 2.93 GHz, Infiniband 510.00
192 330.98 IBM Development Engineering iDataPlex DX360M3, Xeon X5670 6C 2.93 GHz, Infiniband QDR 316.50
193 308.77 Communications BladeCenter HS23 Cluster, Xeon E5-2670 8C 2.600GHz, Gigabit Ethernet 414.35
194 305.33 Electronics xSeries x3550M3 Cluster, Xeon X5650 6C 2.66 GHz, Infiniband QDR 251.10
195 305.33 Electronics xSeries x3550M3 Cluster, Xeon X5650 6C 2.66 GHz, Infiniband QDR 251.10
196 305.33 Electronics xSeries x3550M3 Cluster, Xeon X5650 6C 2.66 GHz, Infiniband QDR 241.80
197 305.33 Government xSeries x3650M3 Cluster, Xeon X5650 6C 2.660GHz, Infiniband 302.81
198 305.33 Defence xSeries x3650M3 Cluster, Xeon X5650 6C 2.660GHz, Infiniband 334.06
199 286.74 Defence BladeCenter HX5, Xeon E7-4870 10C 2.40 GHz, Infiniband QDR 226.20
200 273.77 Financial Securities iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, 10G Ethernet 339.87

* Only systems with power data submitted directly to the Green500 are included in the Little Green500 list. Performance data may be obtained from publicly available sources, including the TOP500.
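
Since the footnote points readers to the TOP500 for performance data, note that an approximate sustained performance can also be recovered from the two numeric columns above, because MFLOPS/W multiplied by power in watts gives MFLOPS. The short sketch below uses the rank-101 entry (Haise) as an example; the result is an approximation, not an official TOP500 figure.

# Recovering approximate sustained LINPACK performance (Rmax) from the
# list's efficiency and power columns: MFLOPS/W * watts = MFLOPS.
# Example values are the rank-101 entry (Haise).

efficiency_mflops_per_watt = 875.33   # MFLOPS/W column
total_power_kw = 373.97               # Total Power (kW) column

rmax_mflops = efficiency_mflops_per_watt * total_power_kw * 1000.0  # kW -> W
rmax_tflops = rmax_mflops / 1e6

print("Approximate Rmax: %.1f teraflops" % rmax_tflops)   # ~327.3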