
The Green500 List - June 2013

BLACKSBURG, Va., June 28, 2013 -- Today's release of the Green500 List (http://www.green500.org/lists/green201306) shows that the top end of the list is again dominated by heterogeneous supercomputers: systems that combine two or more types of processing elements, such as a traditional central processing unit (CPU) paired with a graphics processing unit (GPU) or coprocessor.

Two heterogeneous systems, both based on NVIDIA's Kepler K20 GPU accelerators, claim the top two positions and break through the barrier of three billion floating-point operations per second (gigaflops or GFLOPS) per watt. Eurora, located at Cineca, debuts at the top of the Green500 at 3.21 gigaflops/watt, followed closely by Aurora Tigon at 3.18 gigaflops/watt. Both machines are manufactured by Eurotech, and their energy efficiency improves upon that of the previous greenest supercomputer in the world by nearly 30%. Two other heterogeneous systems, Beacon with an efficiency of 2.449 gigaflops/watt and SANAM with an efficiency of 2.35 gigaflops/watt, come in at numbers 3 and 4 on the Green500. The former is based on Intel Xeon Phi 5110P coprocessors, while the latter is based on AMD FirePro S10000 GPUs.

Rounding out the top five is CADMOS BlueGene/Q, which is based on a previously list-leading custom design of the IBM BlueGene/Q architecture with PowerPC-based CPUs. In fact, the next swath of supercomputers, down to No. 28, is dominated by IBM BlueGene/Q systems, with one set at approximately 2.30 gigaflops/watt and another at approximately 2.18 gigaflops/watt. Overall, the fifty greenest systems are either heterogeneous systems, which incorporate accelerators (GPUs) or coprocessors, or custom BlueGene/Q systems. The exceptions are five homogeneous systems: a pair at Nos. 40 and 41 based on Intel's new Ivy Bridge processors, and three more at Nos. 45-47 based on Intel Sandy Bridge processors.

The current fastest supercomputer in the world, Tianhe-2, uses heterogeneous computing elements based on Intel Xeon Phi. It delivers 1.9 gigaflops/watt, which is commensurate with the majority of Xeon Phi systems that are ranked between No. 30 and 35, inclusive.

Given DARPA's goal of an exascale supercomputer within a power envelope of 20 megawatts (MW), extrapolating Beacon, the previous No. 1 supercomputer on the Green500, to exascale yields a 408-MW machine. Thanks to Eurora's improved efficiency, that exascale power envelope drops to 312 MW, a sizeable 24% reduction in power consumption for an exascale machine -- a move in the right direction. Nevertheless, the electricity bill for such a system would still exceed 300 million U.S. dollars per year.
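The extrapolation above is a simple back-of-the-envelope calculation: scale each machine's Linpack efficiency to one exaflop (10^18 FLOPS) and see how much power that would draw. A minimal sketch (the function name and rounding are ours, not the Green500's):

```python
EXAFLOP = 1e18  # floating-point operations per second

def exascale_power_mw(gflops_per_watt: float) -> float:
    """Power, in megawatts, of a hypothetical exascale system
    built entirely from nodes at the given Linpack efficiency."""
    watts = EXAFLOP / (gflops_per_watt * 1e9)
    return watts / 1e6

beacon_mw = exascale_power_mw(2.449)  # previous greenest system
eurora_mw = exascale_power_mw(3.209)  # current No. 1

print(round(beacon_mw))  # ~408 MW
print(round(eurora_mw))  # ~312 MW
print(round(100 * (beacon_mw - eurora_mw) / beacon_mw))  # ~24% drop
```

This reproduces the 408-MW and 312-MW figures and the roughly 24% reduction quoted in the release.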

This 13th edition of the Green500 List also marks the Green500's first use of new energy measurement methodologies developed in tandem with the Energy Efficient High Performance Computing Working Group, Top500, and The Green Grid. In addition to the current Green500 submission requirements, denoted as Level 1, the Green500 also accepts higher-precision measurements, denoted as Levels 2 and 3. The Green500 received Level 2 and Level 3 submissions for several systems, including a superset of the Sequoia BlueGene/Q supercomputer at LLNL called Sequoia-25 and SuperMUC from LRZ. “While the higher quality measurements taken for these systems show lower energy efficiency, they provide a much better picture of the real-world costs of running each system as well as a more in-depth picture of how the system handles a Linpack run,” noted Wu Feng, founder of the Green500. “Additionally, the Sequoia-25 Level 1 submission includes the networking infrastructure, as required for higher levels.”

Since the launch of the Green500 in 2007, the energy efficiency of the highest-ranked machines on the list has been improving much more rapidly than the mean and the median. For instance, while the energy efficiency of the greenest system improved by nearly 30%, the median improved by 14% and the mean by only 11%. “Overall, the performance of machines on the Green500 List has increased at a higher rate than their power consumption. That's why the machines' efficiencies are going up,” says Feng. For commodity-based machines -- machines built with off-the-shelf components -- a great deal of the efficiency gains can be attributed to heterogeneous designs, i.e., using traditional CPUs along with GPUs or coprocessors. Such a design allows these systems to keep pace with, and in some cases even outpace, custom systems like IBM's BlueGene/Q. “While the gains at the top end of the Green500 appear impressive, overall the improvements have been much more modest,” says Feng. “This clearly indicates that there is still work to be done.”

The Green500 has provided a ranking of the most energy-efficient supercomputers in the world since November 2007. For decades, the notion of supercomputer "performance" has been synonymous with "speed" (as measured in FLOPS, short for floating-point operations per second). This particular focus has led to the emergence of supercomputers that consume egregious amounts of electrical power and produce so much heat that extravagant cooling facilities must be constructed to ensure proper operation. In addition, an emphasis on speed as the ultimate metric often comes at the expense of other performance metrics, such as energy efficiency, reliability, availability, and usability. As a result, there has been an extraordinary increase in the total cost of ownership (TCO) of a supercomputer. Consequently, the Green500 seeks to raise awareness of energy efficiency in supercomputing and to drive it as a first-order design constraint on par with speed.

ERRATA: The original release contained a typo reporting 2.49 gigaflops/watt for the Beacon machine; it should have read 2.449, as it does now. The numbers for the MVS-10P at the Joint Supercomputer Center were also revised and the rankings adjusted: TOP500 data had been used in lieu of submitted data for the MVS-10P machine, causing the error in its ranking. As a result, the MVS-10P machine was moved from No. 34 to No. 30, and the other rankings were adjusted to reflect the updated numbers.

The Green500 List

Listed below are the June 2013 Green500's most energy-efficient supercomputers, ranked from 1 to 100.

Green500 Rank | MFLOPS/W | Site* | Computer* | Total Power (kW)
1 3,208.83 CINECA Eurora - Eurotech Aurora HPC 10-20, Xeon E5-2687W 8C 3.100GHz, Infiniband QDR, NVIDIA K20 30.70
2 3,179.88 Selex ES Chieti Aurora Tigon - Eurotech Aurora HPC 10-20, Xeon E5-2687W 8C 3.100GHz, Infiniband QDR, NVIDIA K20 31.02
3 2,449.57 National Institute for Computational Sciences/University of Tennessee Beacon - Appro GreenBlade GB824M, Xeon E5-2670 8C 2.600GHz, Infiniband FDR, Intel Xeon Phi 5110P 45.11
4 2,351.10 King Abdulaziz City for Science and Technology SANAM - Adtech, ASUS ESC4000/FDR G2, Xeon E5-2650 8C 2.000GHz, Infiniband FDR, AMD FirePro S10000 179.15
5 2,299.15 IBM Thomas J. Watson Research Center BlueGene/Q, Power BQC 16C 1.60 GHz, Custom 82.19
6 2,299.15 DOE/SC/Argonne National Laboratory Cetus - BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 82.19
7 2,299.15 Ecole Polytechnique Federale de Lausanne CADMOS BG/Q - BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 82.19
8 2,299.15 Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 82.19
9 2,299.15 DOE/SC/Argonne National Laboratory Vesta - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 82.19
10 2,299.15 University of Rochester BlueGene/Q, Power BQC 16C 1.60GHz, Custom 82.19
11 2,243.44 Swiss Scientific Computing Center (CSCS) Todi - Cray XK7 , Opteron 6272 16C 2.100GHz, Cray Gemini interconnect, NVIDIA Tesla K20 Kepler 122.00
12 2,243.04 University of Southern California HPCC - Cluster Platform SL250s Gen8, Xeon E5-2665 8C 2.400GHz, Infiniband FDR, Nvidia K20m 237.00
13 2,177.13 DOE/NNSA/LLNL Vulcan - BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 1,972.00
14 2,176.82 Forschungszentrum Juelich (FZJ) JUQUEEN - BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 2,301.00
15 2,176.60 University of Edinburgh DiRAC - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 493.12
16 2,176.60 High Energy Accelerator Research Organization /KEK HIMAWARI - BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 246.56
17 2,176.60 High Energy Accelerator Research Organization /KEK SAKURA - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 246.56
18 2,176.59 Science and Technology Facilities Council - Daresbury Laboratory Blue Joule - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 575.31
19 2,176.58 DOE/NNSA/LLNL Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom 7,890.00
20 2,176.58 CNRS/IDRIS-GENCI BlueGene/Q, Power BQC 16C 1.60GHz, Custom 328.75
21 2,176.58 EDF R&D Zumbrota - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 328.75
22 2,176.58 Victorian Life Sciences Computation Initiative Avoca - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 328.75
23 2,176.58 DOE/SC/Argonne National Laboratory Mira - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 3,945.00
24 2,176.57 CINECA Fermi - BlueGene/Q, Power BQC 16C 1.60GHz, Custom 821.88
25 2,176.52 IBM - Rochester BlueGene/Q, Power BQC 16C 1.60 GHz, Custom 164.38
26 2,176.52 IBM - Rochester BlueGene/Q, Power BQC 16C 1.60 GHz, Custom 164.38
27 2,176.52 Southern Ontario Smart Computing Innovation Consortium/University of Toronto BGQ - BlueGene/Q, Power BQC 16C 1.600GHz, Custom Interconnect 164.38
28 2,176.52 Rensselaer Polytechnic Institute BlueGene/Q, Power BQC 16C 1.60GHz, Custom 164.38
29 2,142.77 DOE/SC/Oak Ridge National Laboratory Titan - Cray XK7 , Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x 8,209.00
30 1,949.37 Joint Supercomputer Center MVS-10P - RSC Tornado, Xeon E5-2690 8C 2.900GHz, Infiniband FDR, Intel Xeon Phi SE10X 181.70
31 1,935.32 NASA Center for Climate Simulation Discover - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband QDR, Intel Xeon Phi 5110P 215.60
32 1,901.54 National University of Defense Technology Tianhe-2 (MilkyWay-2) - TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P 17,808.00
33 1,886.00 Purdue University Conte - Cluster Platform SL250s Gen8, Xeon E5-2670 8C 2.600GHz, Infiniband FDR, Intel Xeon Phi 5110P 510.00
34 1,861.42 DOE/NNSA/LLNL Sequoia-25 - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom (Level 2 measurement data available) 11,501.98
35 1,612.97 NASA/Ames Research Center/NAS Maia - SGI Rackable C1104G-RP5, Xeon E5-2670 8C 2.600GHz, Infiniband FDR, Intel Xeon Phi 132.00
36 1,467.03 Seoul National University Chundoong - Chundoong Cluster, Xeon E5-2650 8C 2.000GHz, Infiniband QDR, AMD Radeon HD 7970 72.80
37 1,304.01 Indiana University Big Red II - Cray XK7 , Opteron 6276 16C 2.300GHz, Cray Gemini interconnect, NVIDIA K20 458.13
38 1,266.26 Barcelona Supercomputing Center Bullx B505, Xeon E5649 6C 2.53GHz, Infiniband QDR, NVIDIA 2090 81.50
39 1,264.86 Intel Endeavor - Intel Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband FDR, Intel Xeon Phi 299.88
40 1,247.57 Météo France Bullx DLC B710 Blades, Intel Xeon E5 v2 12C 2.700GHz, Infiniband FDR 401.00
41 1,247.39 Bull Manny - Bullx DLC B710 Blades, Intel Xeon E5 v2 12C 2.700GHz, Infiniband FDR 101.00
42 1,145.92 Texas Advanced Computing Center/Univ. of Texas Stampede - PowerEdge C8220, Xeon E5-2680 8C 2.700GHz, Infiniband FDR, Intel Xeon Phi SE10P 4,510.00
43 1,050.26 Los Alamos National Laboratory Moonlight - Xtreme-X , Xeon E5-2670 8C 2.600GHz, Infiniband QDR, NVIDIA 2090 226.80
44 1,038.29 CSIRO CSIRO GPU Cluster - Nitro G16 3GPU, Xeon E5-2650 8C 2.000GHz, Infiniband FDR, NVIDIA 2050 128.77
45 1,036.64 Electronics BladeCenter HS23 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 109.21
46 1,036.64 Electronics BladeCenter HS23 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 109.21
47 1,036.64 Electronics BladeCenter HS23 Cluster, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 109.21
48 1,035.13 Center for Computational Sciences, University of Tsukuba HA-PACS - Xtream-X GreenBlade 8204, Xeon E5-2670 8C 2.600GHz, Infiniband QDR, NVIDIA 2090 407.29
49 1,010.11 CEA/TGCC-GENCI Curie hybrid - Bullx B505, Xeon E5640 2.67 GHz, Infiniband QDR 108.80
50 995.52 South Ural State University RSC Tornado SUSU - RSC Tornado, Xeon X5680 6C 3.330GHz, Infiniband QDR, Intel Xeon Phi SE10X 147.46
51 990.60 Total Exploration Production Pangea - SGI ICE X, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 2,118.00
52 953.51 Virginia Tech HokieSpeed - SuperServer 2026GT-TRF, Xeon E5645 6C 2.40GHz, Infiniband QDR, NVIDIA 2050 126.27
53 932.64 Geoscience iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 167.50
54 932.63 Science and Technology Facilities Council - Daresbury Laboratory Blue Wonder - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 170.16
55 932.60 NASA/Goddard Space Flight Center iDataPlex DX360M3, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 159.53
56 932.60 Centro Euro-Mediterraneo per i Cambiamenti Climatici iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 159.53
57 932.59 Durham University iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 139.59
58 922.52 University of Miami iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR, Intel Xeon Phi 119.25
59 919.44 Institute of Process Engineering, Chinese Academy of Sciences Mole-8.5 - Mole-8.5 Cluster, Xeon X5520 4C 2.27 GHz, Infiniband QDR, NVIDIA 2050 540.00
60 910.88 National Center for Medium Range Weather Forecast iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 349.60
61 910.85 Barcelona Supercomputing Center MareNostrum - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 1,015.60
62 910.82 Max-Planck-Gesellschaft MPI/IPP iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 205.72
63 910.80 Indian Institute of Tropical Meteorology iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 789.66
64 910.80 University of Chicago Midway - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 142.91
65 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
66 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
67 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
68 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
69 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
70 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
71 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
72 910.79 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband 138.59
73 910.76 CLUMEQ - McGill University iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 154.88
74 908.83 Leibniz Rechenzentrum SuperMUC - iDataPlex DX360M4, Xeon E5-2680 8C 2.70GHz, Infiniband FDR (Level 3 measurement data available) 2,841.00
75 903.47 Shanghai Jiaotong University Inspur TS10000, Xeon E5-2670 8C 2.600GHz, Infiniband FDR, K20M/Xeon Phi 5110P 217.20
76 891.87 Forschungszentrum Juelich (FZJ) iDataPlex DX360M3, Xeon X5650 6C 2.66 GHz, Infiniband QDR, NVIDIA 2070 128.75
77 891.88 CINECA / SCS - SuperComputing Solution iDataPlex DX360M3, Xeon E5645 6C 2.40 GHz, Infiniband QDR, NVIDIA 2070 160.00
78 886.30 Information Technology Center, The University of Tokyo Oakleaf-FX - PRIMEHPC FX10, SPARC64 IXfx 16C 1.848GHz, Tofu interconnect 1,176.80
79 881.36 Air Force Research Laboratory Spirit - SGI ICE X, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 1,606.00
80 875.34 NCAR (National Center for Atmospheric Research) Yellowstone - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 1,436.72
81 875.33 Army Research Laboratory DoD Supercomputing Resource Center (ARL DSRC) Pershing - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 400.68
82 875.33 National Centers for Environment Prediction Gyre - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 203.52
83 875.33 National Centers for Environment Prediction Tide - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 203.52
84 875.33 Navy DSRC Kilrain - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 373.97
85 875.33 Navy DSRC Haise - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 373.97
86 875.32 Army Research Laboratory DoD Supercomputing Resource Center (ARL DSRC) Hercules - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 347.26
87 875.32 Electronics iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 148.19
88 874.02 Saudi Aramco Makman - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 505.50
89 869.83 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 181.31
90 869.83 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 181.31
91 869.83 Automotive IBM Flex System x240, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 181.31
92 852.27 GSIC Center, Tokyo Institute of Technology TSUBAME 2.0 - HP ProLiant SL390s G7 Xeon 6C X5670, Nvidia GPU, Linux/Windows 1,398.61
93 851.89 TOTAL Laure - SGI ICE X, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 319.00
94 848.69 Sandia National Laboratories Dark Bridge - Appro Xtreme-X Supercomputer, Xeon E5-2670 8C 2.600GHz, Infiniband QDR 315.90
95 837.69 Technische Universitaet Darmstadt iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR 255.20
96 837.19 Lawrence Livermore National Laboratory Zin - Xtreme-X GreenBlade GB512X, Xeon E5 (Sandy Bridge - EP) 8C 2.60GHz, Infiniband QDR 924.16
97 836.82 Maui High-Performance Computing Center (MHPCC) Riptide - iDataPlex DX360M4, Xeon E5-2670 8C 2.600GHz, Infiniband FDR (Level 3 measurement data available) 254.00
98 830.18 RIKEN Advanced Institute for Computational Science (AICS) K computer, SPARC64 VIIIfx 2.0GHz, Tofu interconnect 12,659.89
99 824.79 Lawrence Livermore National Laboratory Cab - Xtreme-X , Xeon E5-2670 8C 2.600GHz, Infiniband QDR 421.20
100 823.80 Aerospace Company iDataPlex DX360M4, Xeon E5-2680 8C 2.700GHz, Infiniband 118.29

* Performance data obtained from publicly available sources including TOP500
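The two numeric columns of the table determine each system's Linpack performance, since efficiency multiplied by power recovers the achieved Rmax. A small sketch of that unit arithmetic, using the table's own figures for Eurora (rank 1) as an example (the function name is ours):

```python
def rmax_tflops(mflops_per_watt: float, power_kw: float) -> float:
    """Recover Linpack Rmax (in TFLOPS) from the table's two numeric columns.

    MFLOPS/W * kW = MFLOPS/W * 1000 W, so the product is in GFLOPS;
    dividing by 1000 converts GFLOPS to TFLOPS.
    """
    return mflops_per_watt * power_kw / 1000.0

# Eurora (rank 1): 3,208.83 MFLOPS/W at 30.70 kW
print(round(rmax_tflops(3208.83, 30.70), 1))  # ~98.5 TFLOPS
```

The same calculation applied to any row gives a consistency check between the Green500 efficiency figure and the corresponding TOP500 performance entry.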