The Green500 List - June 2013

BLACKSBURG, Va., June 28, 2013 -- Today’s release of the Green500 List (http://www.green500.org/lists/green201306) shows that the top end of the list is again dominated by heterogeneous supercomputers: those that combine two or more types of processing elements, such as a traditional central processing unit (CPU) paired with a graphics processing unit (GPU) or coprocessor.

Two heterogeneous systems, based on NVIDIA’s Kepler K20 GPU accelerators, claim the top two positions and break through the barrier of three billion floating-point operations per second (gigaflops or GFLOPS) per watt. Eurora, located at Cineca, debuts at the top of the Green500 at 3.21 gigaflops/watt, followed closely by Aurora Tigon at 3.18 gigaflops/watt. The energy efficiency of these machines, both manufactured by Eurotech, improves upon the previous greenest supercomputer in the world by nearly 30%. Two other heterogeneous systems, Beacon with an efficiency of 2.449 gigaflops/watt* and SANAM with an efficiency of 2.35 gigaflops/watt, come in at Nos. 3 and 4 on the Green500. The former is based on Intel Xeon Phi 5110P coprocessors, while the latter is based on AMD FirePro S10000 GPUs.

Rounding out the top five is CADMOS BlueGene/Q, which is based on a previously list-leading custom design of the IBM BlueGene/Q architecture with PowerPC-based CPUs. In fact, the next swath of supercomputers, down to No. 28, is dominated by IBM BlueGene/Q systems, with one set at approximately 2.30 gigaflops/watt and another at approximately 2.18 gigaflops/watt. Overall, the fifty greenest systems are either heterogeneous systems, which incorporate accelerators (GPUs) or coprocessors, or custom BlueGene/Q systems. The exceptions are a pair of homogeneous systems at Nos. 40 and 41, the only homogeneous systems based on Intel’s new Ivy Bridge processors, and three more at Nos. 45-47 based on Intel Sandy Bridge processors.

The current fastest supercomputer in the world, Tianhe-2, uses heterogeneous computing elements based on Intel Xeon Phi. It delivers 1.9 gigaflops/watt, which is commensurate with the majority of Xeon Phi systems that are ranked between No. 30 and 35, inclusive.

Given DARPA’s goal of an exascale supercomputer within a power envelope of 20 megawatts (MW), extrapolating Beacon, the previous No. 1 supercomputer on the Green500, to exascale results in a 408 MW machine. With the improved efficiency of Eurora, that exascale power envelope drops to 312 MW, a sizeable 24% reduction in power consumption for an exascale machine -- a move in the right direction. Nevertheless, the electricity bill for such a system would still be more than 300 million U.S. dollars per year.
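These figures follow directly from the reported efficiencies. Below is a minimal sketch of the arithmetic, assuming an exaflop target of 10^18 FLOPS and an illustrative electricity rate of about $0.11 per kWh (the rate is our assumption, not a figure from the release):

    # Exascale power extrapolation sketch. The electricity rate below is an
    # illustrative assumption, not a Green500 or DARPA figure.

    EXAFLOPS = 1e18            # target: 10^18 floating-point operations per second
    RATE_USD_PER_KWH = 0.11    # assumed electricity rate
    HOURS_PER_YEAR = 24 * 365

    def exascale_power_mw(gflops_per_watt: float) -> float:
        """Power (in MW) needed to sustain one exaflop at a given efficiency."""
        watts = EXAFLOPS / (gflops_per_watt * 1e9)
        return watts / 1e6

    beacon = exascale_power_mw(2.449)    # ~408 MW
    eurora = exascale_power_mw(3.21)     # ~312 MW
    drop = (beacon - eurora) / beacon    # ~24% reduction

    annual_kwh = eurora * 1_000 * HOURS_PER_YEAR    # MW -> kW, times hours per year
    annual_cost = annual_kwh * RATE_USD_PER_KWH     # roughly 300 million USD
    print(f"Beacon: {beacon:.0f} MW, Eurora: {eurora:.0f} MW, drop: {drop:.0%}")
    print(f"Estimated annual electricity cost at Eurora efficiency: ${annual_cost/1e6:.0f}M")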

This 13th edition of the Green500 List also marks the Green500’s first use of new energy-measurement methodologies developed in tandem with the Energy Efficient High Performance Computing Working Group, Top500, and The Green Grid. In addition to the current Green500 submission requirements, denoted as Level 1, the Green500 also accepts higher-precision measurements, denoted as Levels 2 and 3. The Green500 received Level 2 and 3 submissions for several systems, including Sequoia-25, a superset of the Sequoia BlueGene/Q supercomputer at LLNL, and SuperMUC from LRZ. “While the higher quality measurements taken for these systems show lower energy efficiency, they provide a much better picture of the real-world costs of running each system as well as a more in-depth picture of how the system handles a Linpack run,” noted Wu Feng, founder of the Green500. “Additionally, the Sequoia-25 Level 1 submission includes the networking infrastructure, as required for higher levels.”

Since the launch of the Green500 in 2007, the energy efficiency of the highest-ranked machines on the Green500 has been improving much more rapidly than the mean and the median. For instance, while the energy efficiency of the greenest system improved by nearly 30%, the median improved by 14% and the mean by only 11%. “Overall, the performance of machines on the Green500 List has increased at a higher rate than their power consumption. That’s why the machines’ efficiencies are going up,” says Feng. For commodity-based machines -- machines built with off-the-shelf components -- a great deal of the efficiency gains can be attributed to heterogeneous designs, i.e., using traditional CPUs along with GPUs or coprocessors. Such a design allows these systems to keep pace with, and in some cases even outpace, custom systems like IBM’s Blue Gene/Q. “While the gains at the top end of the Green500 appear impressive, overall the improvements have been much more modest,” says Feng. “This clearly indicates that there is still work to be done.”
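Feng’s point follows from the metric itself: efficiency is Linpack performance divided by power, so efficiency rises exactly when performance grows faster than power consumption. A minimal illustration, using hypothetical growth factors rather than Green500 data:

    # Efficiency = performance / power, so the relative change in efficiency is
    # the ratio of the two growth factors. The numbers below are hypothetical.

    def efficiency_gflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
        return rmax_gflops / power_watts

    old = efficiency_gflops_per_watt(1_000_000, 500_000)               # 2.0 GFLOPS/W
    # Suppose performance grows 1.5x while power grows only 1.2x:
    new = efficiency_gflops_per_watt(1_000_000 * 1.5, 500_000 * 1.2)   # 2.5 GFLOPS/W
    print(new / old)   # 1.25 == 1.5 / 1.2, i.e., a 25% efficiency gain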

The Green500 has provided a ranking of the most energy-efficient supercomputers in the world since November 2007. For decades, the notion of supercomputer "performance" has been synonymous with "speed" (as measured in FLOPS, short for floating-point operations per second). This focus has led to the emergence of supercomputers that consume egregious amounts of electrical power and produce so much heat that extravagant cooling facilities must be constructed to ensure proper operation. In addition, when speed is treated as the ultimate metric, it often comes at the expense of other performance metrics, such as energy efficiency, reliability, availability, and usability. As a result, there has been an extraordinary increase in the total cost of ownership (TCO) of a supercomputer. Consequently, the Green500 seeks to raise awareness of energy efficiency in supercomputing and to drive it as a first-order design constraint on par with speed.

ERRATA: The original release contained a typo that reported 2.49 gigaflops/watt for the Beacon machine; it should have read 2.449, as it does now. The numbers for the MVS-10P at the Joint Supercomputer Center were also revised and the rankings adjusted: TOP500 data had been used in lieu of the submitted data for the MVS-10P machine, causing the ranking error. As a result, the MVS-10P machine moved from No. 34 to No. 30, and the other rankings were adjusted to reflect the updated numbers.

The Green500 List

Listed below are the June 2013 Green500’s energy-efficient supercomputers, ranked from 101 to 200.

Green500 Rank  MFLOPS/W  Site*  Computer*  Total Power (kW)
101 809.36 Bull Bull Benchmarks SuperComputer I - Bullx B510, Xeon E5 (Sandy Bridge - EP) 8C 2.70GHz, Infiniband QDR 132.20
102 808.47 Central Research Institute of Electric Power Industry/CRIEPI SGI ICE X, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 720.00
103 806.01 Electronics Company iDataPlex DX360M4, Xeon E5-2690 8C 2.900GHz, Infiniband FDR 172.77
104 799.62 Sandia National Laboratories Pecos - Xtreme-X, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 421.20
105 799.31 National Supercomputer Centre (NSC) Triolith - Cluster Platform SL230s Gen8, Xeon E5-2660 8C 2.200GHz, Infiniband FDR 380.00
106 797.43 UCSD/San Diego Supercomputer Center Gordon - Xtreme-X GreenBlade GB512X, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 358.40
107 791.72 Commissariat a l'Energie Atomique (CEA) Tera-100 Hybrid - Bullx B505, Xeon E56xx (Westmere-EP) 2.40 GHz, Infiniband QDR 194.51
108 787.63 CNRS/IDRIS-GENCI Ada - xSeries x3750 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband FDR 243.69
109 786.78 University of Oslo Abel - MEGWARE MiriQuid, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 227.00
110 786.17 CSIR Centre for Mathematical Modelling and Computer Simulation Cluster Platform 3000 BL460c Gen8, Xeon E5-2670 8C 2.60GHz, Infiniband FDR 386.56
111 775.45 Los Alamos National Laboratory Luna - Xtreme-X GreenBlade GB512X, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 448.00
112 751.10 Atomic Weapons Establishment WillowA - Bullx B510, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 182.00
113 749.45 Atomic Weapons Establishment WillowB - Bullx B510, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 182.00
114 741.67 Purdue University Carter - Cluster Platform 3000 SL6500, Xeon E5 (Sandy Bridge - EP) 8C 2.60GHz, FDR Infiniband 252.00
115 741.06 National Supercomputing Center in Jinan Sunway Blue Light - Sunway BlueLight MPP, ShenWei processor SW1600 975.00 MHz, Infiniband QDR 1,074.00
116 739.54 Research Institute for Information Technology, Kyushu University Fujitsu PRIMERGY CX400, Xeon E5-2680 8C 2.700GHz, Infiniband FDR, NVIDIA K20/K20x 839.71
117 738.73 Norwegian University of Science and Technology SGI ICE X, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 537.00
118 732.53 Super Computer Center in Guangzhou Tianhe-1A Guangzhou Solution - NUDT YH MPP, Xeon X56xx (Westmere-EP) 2.93 GHz, Proprietary 289.00
119 731.92 Sandia National Laboratories Chama - Xtreme-X GreenBlade GB512X, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 453.60
120 727.65 Petroleum Company xSeries x3550M3 Cluster, Xeon E5-2670 8C 2.600GHz, Gigabit Ethernet 141.44
121 718.12 Universitaet Frankfurt LOEWE-CSC - Supermicro Cluster, QC Opteron 2.1 GHz, ATI Radeon GPU, Infiniband 416.78
122 712.37 Atomic Weapons Establishment Blackthorn - Bullx B510, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 446.40
123 706.41 Aerospace Company xSeries x3650 Cluster, Xeon E5-2680 8C 2.700GHz, Infiniband QDR 228.28
124 689.56 Sandia National Laboratories Dark Sand - Appro Xtreme-X Supercomputer, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 388.80
125 688.10 Petróleo Brasileiro S.A Grifo04 - Itautec Cluster, Xeon X5670 6C 2.930GHz, Infiniband QDR, NVIDIA 2050 365.50
126 682.74 Commissariat a l'Energie Atomique (CEA)/CCRT airain - Bullx B510, Xeon E5-2680 8C 2.700GHz, Infiniband QDR 260.00
127 669.12 Petroleum Company x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 285.38
128 669.12 Electronics xSeries x3650M4 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband 208.42
129 668.10 National Super Computer Center in Hunan Tianhe-1A Hunan Solution - NUDT YH MPP, Xeon X5670 6C 2.93 GHz, Proprietary, NVIDIA 2050 1,155.07
130 654.05 Amazon Web Services Amazon EC2 Cluster Compute Instances - Amazon EC2 Cluster, Xeon 8C 2.60GHz, 10G Ethernet 367.08
131 650.79 Clemson University Palmetto2 - Cluster Platform SL250s Gen8, Xeon E5-2665 8C 2.400GHz, Infiniband FDR, NVIDIA K20 403.20
132 643.69 Kyoto University Laurel - Xtreme-X, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 210.35
133 637.43 CEA/TGCC-GENCI Curie thin nodes - Bullx B510, Xeon E5-2680 8C 2.700GHz, Infiniband QDR 2,132.00
134 635.53 Japan Advanced Institute of Science and Technology Cray XC30, Xeon E5-2670 8C 2.600GHz, Aries interconnect 165.00
135 635.15 National Supercomputing Center in Tianjin Tianhe-1A - NUDT YH MPP, Xeon X5670 6C 2.93 GHz, NVIDIA 2050 4,040.00
136 634.70 CSC (Center for Scientific Computing) Sisu - Cray Cascade, Xeon E5-2670 8C 2.600GHz, Aries interconnect 337.32
137 634.01 Vikram Sarabhai Space Centre, Indian Space Research Organisation SAGA - Z24XX/SL390s Cluster, Xeon E5530/E5645 6C 2.40GHz, Infiniband QDR, NVIDIA 2090/2070 297.63
138 628.13 Lawrence Livermore National Laboratory Edge - Appro GreenBlade Cluster, Xeon X5660 6C 2.80 GHz, Infiniband QDR, NVIDIA 2050 160.00
139 627.84 Cray Inc. Crystal - Cray XC30, Xeon E5-2670 8C 2.600GHz, Aries interconnect 234.66
140 610.63 Swiss Scientific Computing Center (CSCS) Piz Daint - Cray XC30, Xeon E5-2670 8C 2.600GHz, Aries interconnect (Level 3 measurement data available) 1,026.64
141 609.67 National Computational Infrastructure, Australian National University Fujitsu PRIMERGY CX250 S1, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 1,605.12
142 606.66 National Astronomical Observatory of Japan (NAOJ) Cray XC30, Xeon E5-2670 8C 2.600GHz, Aries interconnect 692.98
143 590.23 Research Institute for Information Technology, Kyushu University PRIMEHPC FX10, SPARC64 IXfx 16C 1.848GHz, Tofu interconnect 282.43
144 582.00 NOAA/Oak Ridge National Laboratory Gaea C2 - Cray XE6, Opteron 6276 16C 2.30GHz, Cray Gemini interconnect 972.00
145 562.27 International Fusion Energy Research Centre (IFERC), EU(F4E) - Japan Broader Approach collaboration Helios - Bullx B510, Xeon E5-2680 8C 2.700GHz, Infiniband QDR 2,200.00
146 554.65 Louisiana State University SuperMike-II - Dell PowerEdge C6220, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 200.00
147 548.17 Universitaet Bayreuth btrzx3 - Saxonid 6300, Opteron 6348 12C 2.800GHz, Infiniband QDR 178.00
148 521.96 HPC2N - Umea University Abisko - Supermicro H8QG6, Opteron 6238 12C 2.600GHz, Infiniband QDR 252.70
149 511.17 Institute for Molecular Science Fujitsu PRIMERGY CX250 & RX300, Xeon E5-2690 8C 2.900GHz, Infiniband FDR/QDR 465.40
150 492.64 National Supercomputing Centre in Shenzhen (NSCS) Nebulae - Dawning TC3600 Blade System, Xeon X5650 6C 2.66GHz, Infiniband QDR, NVIDIA 2050 2,580.00
151 483.12 Universitaet Mainz MOGON - Saxonid 6100, Opteron 6272 16C 2.100GHz, Infiniband QDR 467.00
152 467.72 University of Bergen Hexagon - Cray XE6m-200, Opteron 6276 16C 2.300GHz, Cray Gemini interconnect 342.30
153 449.41 HWW/Universitaet Stuttgart HERMIT - Cray XE6, Opteron 6276 16C 2.30 GHz, Cray Gemini interconnect 1,850.00
154 443.08 Calcul Canada/Calcul Québec/Université de Sherbrooke Rackable C2112-4G3 Cluster, Opteron 12 Core 2.10 GHz, Infiniband QDR 542.34
155 441.43 Environment Canada Power 775, POWER7 8C 3.84 GHz, Custom 462.30
156 441.43 Environment Canada Power 775, POWER7 8C 3.84 GHz, Custom 462.30
157 441.43 United Kingdom Meteorological Office Power 775, POWER7 8C 3.836GHz, Custom Interconnect 1,040.18
158 441.43 United Kingdom Meteorological Office Power 775, POWER7 8C 3.836GHz, Custom Interconnect 866.82
159 441.43 ECMWF Power 775, POWER7 8C 3.836GHz, Custom Interconnect 1,386.91
160 441.43 ECMWF Power 775, POWER7 8C 3.836GHz, Custom Interconnect 1,386.91
161 441.43 IBM Poughkeepsie Benchmarking Center Power 775, POWER7 8C 3.836GHz, Custom Interconnect 390.07
162 441.43 United Kingdom Meteorological Office Power 775, POWER7 8C 3.836GHz, Custom Interconnect 288.94
163 438.43 Institute for Materials Research, Tohoku University (IMR) Hitachi SR16000 Model M1, POWER7 8C 3.836GHz, Custom 556.30
164 429.31 Government Sunway 4000H Cluster, Xeon X56xx (Westmere-EP) 2.93 GHz, Infiniband QDR 339.15
165 426.72 Los Alamos National Laboratory Mustang - Xtreme-X 1320H-LANL, Opteron 12 Core 2.30 GHz, Infiniband QDR 540.40
166 423.70 IBM Development Engineering DARPA Trial Subset - Power 775, POWER7 8C 3.836GHz, Custom Interconnect 3,575.63
167 415.11 DOE/Bettis Atomic Power Laboratory Cray CS300-AC, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 445.20
168 415.11 Knolls Atomic Power Laboratory Cray CS300-AC, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 445.20
169 405.38 Swiss Scientific Computing Center (CSCS) Monte Rosa - Cray XE6, Opteron 6272 16C 2.10 GHz, Cray Gemini interconnect 780.00
170 404.45 CLUMEQ - McGill University Guillimin - iDataPlex DX360M3, Xeon 2.66, Infiniband 337.00
171 400.68 Taiwan National Center for High-performance Computing ALPS - Acer AR585 F1 Cluster, Opteron 12C 2.2GHz, QDR infiniband 442.00
172 378.77 King Abdullah University of Science and Technology Shaheen - Blue Gene/P Solution 504.00
173 378.76 IDRIS Blue Gene/P Solution 315.00
174 366.58 DOE/NNSA/LLNL Dawn - Blue Gene/P Solution 1,134.00
175 363.98 DOE/SC/Argonne National Laboratory Intrepid - Blue Gene/P Solution 1,260.00
176 362.20 DOE/SC/LBNL/NERSC Hopper - Cray XE6, Opteron 6172 12C 2.10GHz, Custom 2,910.00
177 361.79 University of Edinburgh HECToR - Cray XE6, Opteron 6276 16C 2.30 GHz, Cray Gemini interconnect 1,824.93
178 359.65 Energy Company (A) HP ProLiant SL390s G7 Xeon 6C X5650, Infiniband 364.80
179 357.24 National Institute for Fusion Science (NIFS) Plasma Simulator - Hitachi SR16000 Model M1, POWER7 8C 3.836GHz, Custom Interconnect 708.20
180 355.58 Vienna Scientific Cluster VSC-2 - Megware Saxonid 6100, Opteron 8C 2.2 GHz, Infiniband QDR 430.00
181 349.78 South Ural State University SKIF Aurora - SKIF Aurora Platform - Intel Xeon X5680, Infiniband QDR 287.04
182 347.89 E-Security - Selex Elsag Aurora - Eurotech Aurora, Xeon X5690 6C 3.470GHz, Infiniband QDR, NVIDIA 2090 317.70
183 342.92 Information Initiative Center, Hokkaido University Hitachi SR16000 Model M1, POWER7 8C 3.836GHz, Custom 354.60
184 330.98 Vestas Wind Systems A/S iDataPlex DX360M3, Xeon X5670 6C 2.93 GHz, Infiniband QDR 489.75
185 330.98 EDF R&D Ivanhoe - iDataPlex, Xeon X56xx 6C 2.93 GHz, Infiniband 510.00
186 330.98 IBM Development Engineering iDataPlex DX360M3, Xeon X5670 6C 2.93 GHz, Infiniband QDR 316.50
187 322.11 Moscow State University - Research Computing Center Lomonosov - T-Platforms T-Blade2/1.1, Xeon X5570/X5670/E5630 2.93/2.53 GHz, Nvidia 2070 GPU, PowerXCell 8i Infiniband QDR 2,800.00
188 314.31 Volvo Car Group Cluster Platform 3000 BL460c/SL250/ML350 Xeon E5-2670 8C 2.600GHz, Infiniband QDR 392.40
189 313.94 Manufacturing Company Cluster Platform 3000 BL460c Gen8, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 475.20
190 311.77 NASA/Ames Research Center/NAS Pleiades - SGI ICE X/8200EX/8400EX, Xeon 54xx 3.0/5570/5670/E5-2670 2.93/2.6/3.06/3.0 GHz, Infiniband QDR/FDR 3,986.96
191 305.33 Defence xSeries x3650M3 Cluster, Xeon X5650 6C 2.660GHz, Infiniband 334.06
192 297.44 National Institute for Computational Sciences/University of Tennessee Kraken XT5 - Cray XT5-HE Opteron Six Core 2.6 GHz 3,090.00
193 297.26 National Computational Infrastructure National Facility (NCI-NF) Vayu - Sun Blade x6048, Xeon X5570 2.93 GHz, Infiniband QDR 425.22
194 294.62 Jefferson Lab Seneca Data Cluster, Xeon E5-2650 8C 2.000GHz, Infiniband FDR, NVIDIA K20 397.80
195 292.38 Petróleo Brasileiro S.A Grifo06 - Itautec Cluster, Xeon E5-2643 4C 3.300GHz, Infiniband FDR, NVIDIA 2075 548.25
196 278.89 DOE/NNSA/LANL/SNL Cielo - Cray XE6, Opteron 6136 8C 2.40GHz, Custom 3,980.00
197 277.69 Banking Cluster Platform 3000 BL460c Gen8, Xeon E5-2680 8C 2.700GHz, 10G Ethernet 350.10
198 276.25 IT Service Provider (B) Cluster Platform 3000 BL460c Gen8, Xeon E5-2680 8C 2.700GHz, 10G Ethernet 419.40
199 275.25 Network Company Cluster Platform 3000 BL460c Gen8, Xeon E5-2680 8C 2.700GHz, 10G Ethernet 468.00
200 274.97 Airline Cluster Platform 3000 BL460c Gen8, Xeon E5-2680 8C 2.700GHz, 10G Ethernet 481.50

* Performance data obtained from publicly available sources including TOP500
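The two numeric columns are tied together by a system’s Linpack performance: multiplying MFLOPS/W by total power (converted from kW to W) recovers the approximate Rmax. A minimal sketch using the first row above; the helper function is ours, not part of the list:

    # Recover approximate Linpack Rmax from the two numeric Green500 columns:
    # Rmax [MFLOPS] = efficiency [MFLOPS/W] * total power [W]

    def rmax_tflops(mflops_per_watt: float, total_power_kw: float) -> float:
        """Approximate Rmax in TFLOPS from the MFLOPS/W and Total Power columns."""
        watts = total_power_kw * 1_000
        mflops = mflops_per_watt * watts
        return mflops / 1e6   # 1 TFLOPS = 1e6 MFLOPS

    # Rank 101: 809.36 MFLOPS/W at 132.20 kW -> roughly 107 TFLOPS
    print(round(rmax_tflops(809.36, 132.20), 1))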