Supercomputers offer extremely high processing speed and memory capacity. These machines are usually thousands of times faster than ordinary computers. A supercomputer's performance is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS); as of 2015, the fastest systems can perform tens of quadrillions of FLOPS (tens of petaflops). Because they can carry out huge volumes of arithmetic very quickly, supercomputers are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations.
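To make the FLOPS unit concrete, here is a minimal (and deliberately naive) Python sketch that times a loop of multiply-add operations to estimate floating-point throughput. The function name and loop count are illustrative, and an interpreted loop vastly understates what optimized code on the same hardware can do:

```python
import time

def estimate_flops(n=2_000_000):
    """Rough floating-point throughput estimate: time a loop of
    multiply-adds (2 floating-point operations per iteration)."""
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += x * x  # one multiply + one add
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed

flops = estimate_flops()
print(f"~{flops / 1e6:.0f} MFLOPS (pure Python; optimized code is far faster)")
# Tianhe-2's 33.86 petaflops is 33.86e15 FLOPS -- many orders of
# magnitude beyond what an interpreted loop like this achieves.
```

Real measurements, such as the Linpack benchmark used by TOP500, time highly optimized dense linear algebra rather than an interpreted loop.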
The biggest supercomputers out there
Twice a year, the TOP500 project ranks the five hundred most powerful supercomputers in the world. The newest and most powerful systems in the top ten are among the most interesting computing machines on the planet. As of June 2015, the ten most powerful supercomputers on Earth, in ranked order, are:
1. Tianhe-2 (Milky Way 2) - Cores: 3,120,000, Computational power: 33.86 petaflops
The winner, and still champion for the fifth straight time, is the Chinese National University of Defense Technology's Tianhe-2, or Milky Way 2. At a whopping 33.86 petaflops of computing power, Tianhe-2 is used largely for whatever the Chinese government wants, with relatively few specifics of the massive machine's workload publicly available.
2. Titan - Cores: 560,640, Computational power: 17.59 petaflops
The most powerful American supercomputer resides at the Oak Ridge National Laboratory, where it's used for research on materials science, fuel combustion, chemistry simulations and meteorology. But as powerful as it is at 17.59 petaflops, Titan remains second best, behind Tianhe-2. At theoretical peak, Titan can do 27,000 trillion calculations (27 petaflops) per second.
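The two figures quoted for Titan illustrate the gap between measured Linpack performance (Rmax, the number TOP500 ranks by) and theoretical peak (Rpeak); their ratio is a common shorthand for how efficiently a machine runs the benchmark:

```python
# Titan, per the figures above: measured vs theoretical performance.
rmax_pflops = 17.59   # sustained Linpack result (Rmax)
rpeak_pflops = 27.0   # theoretical peak (Rpeak)

efficiency = rmax_pflops / rpeak_pflops
print(f"{efficiency:.0%}")  # ~65% Linpack efficiency
```

No real workload reaches Rpeak; a sustained-to-peak ratio around two thirds is typical for large CPU/GPU hybrid systems of this era.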
3. Sequoia - Cores: 1,572,864, Computational power: 17.17 petaflops
IBM Sequoia is a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It was delivered to the Lawrence Livermore National Laboratory (LLNL) in 2011 and was fully deployed in June 2012. Sequoia, housed at LLNL alongside Vulcan and other less powerful supercomputers, maintains its No. 3 ranking and tops the list of the most powerful machines that do not pair GPU-type accelerators with their main processor cores. Researchers have used it to solve a complex fluid dynamics problem: the prediction of noise generated by a supersonic jet engine.
4. K computer - Cores: 705,024, Computational power: 10.51 petaflops
RIKEN's K computer belongs to the highest caliber of supercomputers in the world, built on a distributed-memory architecture with over 80,000 compute nodes. One of the longest-standing members of the top 10, Fujitsu's K computer operates at Japan's Advanced Institute for Computational Science. It runs simulations for weather forecasting, pharmacological research, space science, climate research, disaster prevention and medical research. The K computer's operating system is based on the Linux kernel, with additional drivers designed to make use of the computer's hardware. In June 2011, TOP500 ranked K the world's fastest supercomputer, with a computation speed of over 8 petaflops, and in November 2011, K became the first computer to top 10 petaflops.
5. Mira - Cores: 786,432, Computational power: 8.58 petaflops
Mira, an IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, is equipped with 786,432 cores and 768 terabytes of memory, and has a peak performance of 10 petaflops at 2,176.58 megaflops per watt. The Argonne National Laboratory's Mira is the third most powerful supercomputer in the U.S. and among the most energy-efficient systems on the list. It is used for scientific research, including studies in materials science, climatology, seismology, and computational chemistry. The supercomputer was initially devoted to sixteen projects selected by the Department of Energy.
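Mira's efficiency figure translates directly into a power budget: dividing peak performance by flops-per-watt gives the electrical draw at peak. A small sketch of that arithmetic:

```python
def power_draw_watts(peak_flops, flops_per_watt):
    """Electrical power implied by a performance and efficiency rating."""
    return peak_flops / flops_per_watt

# Mira: 10 petaflops peak at 2,176.58 megaflops per watt.
watts = power_draw_watts(10e15, 2176.58e6)
print(f"{watts / 1e6:.1f} MW")  # about 4.6 MW at peak
```

A few megawatts is typical for machines of this class, which is why flops-per-watt matters as much as raw flops when operating a large facility.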
6. Trinity - Cores: 301,056, Computational power: 8.1 petaflops
The Trinity supercomputer is designed to provide increased computational capability for the NNSA Nuclear Security Enterprise in support of increasingly demanding workloads, e.g., increasing geometric and physics fidelities while maintaining expectations for total time to solution. Trinity's capabilities are required to support the NNSA Stockpile Stewardship program's certification and assessment work, ensuring that the nation's nuclear stockpile is safe, reliable, and secure.
7. Piz Daint - Cores: 115,984, Computational power: 6.27 petaflops
The Swiss National Supercomputing Centre operates one of the two European machines in the latest top 10. Piz Daint is used for HPC research, as a computing resource for national and international projects, and even as a meteorology platform for MeteoSwiss.
8. Hazel Hen - Cores: 185,088, Computational power: 5.64 petaflops
Hazel Hen, the new Cray XC40 system at the High Performance Computing Center Stuttgart (HLRS), delivers a peak performance of 7.42 petaflops (quadrillion floating-point operations per second). It succeeds an earlier system known as Hornet and is designed for sustained application performance and high scalability, making it well suited to HLRS's prime science and research fields such as scientific engineering, health, energy, environment, and mobility. Hazel Hen is composed of 7,712 compute nodes with a total of 185,088 Intel Haswell E5-2680 v3 compute cores. The system features 965 terabytes of main memory and a total of 11 petabytes of storage capacity spread over 32 additional cabinets containing more than 8,300 disk drives. Input/output rates are roughly 350 gigabytes per second. HLRS's pre- and post-processing infrastructure supports users with complex workflows and advanced access methods, including remote graphics rendering and simulation steering.
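Hazel Hen's headline totals also pin down its per-node resources; a quick check of the arithmetic:

```python
# Hazel Hen totals as quoted above.
nodes = 7712
total_cores = 185088
main_memory_tb = 965  # decimal terabytes

cores_per_node = total_cores // nodes      # divides evenly
mem_per_node_gb = main_memory_tb * 1000 / nodes

print(cores_per_node)           # 24 cores/node (two 12-core E5-2680 v3 CPUs)
print(round(mem_per_node_gb))   # ~125 GB of memory per node
```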
9. Shaheen II - Cores: 196,608, Computational power: 5.53 petaflops
Shaheen II is the first supercomputer from Saudi Arabia, or indeed from the Middle East, to crack the top 10. Housed at the King Abdullah University of Science and Technology (KAUST) north of Jeddah, Shaheen, which means "peregrine falcon" in Arabic, is designed to support scientific and academic research at KAUST.
10. Stampede - Cores: 462,462, Computational power: 5.16 petaflops
Stampede is one of the most powerful and significant current supercomputers in the U.S. for open science research. Able to perform nearly 10 quadrillion operations per second at peak, Stampede offers vast opportunities for computational science and technology, ranging from highly parallel algorithms and high-throughput computing to scalable visualization and next-generation programming languages. Stampede is the lone Dell-built entry on the most recent list. It's used to improve brain tumor imaging, research new types of biofuels, and study earthquakes and climate change, among many other things.
As we head toward the 2020s, supercomputers will continue to play an important role in helping us address the next industrial, scientific and societal challenges, from nanoscience and genomics to climatology, aeronautics and energy. New approaches are being introduced to generate more compute power, from massively parallel hardware to smarter software.