Recently, the TOP500 Project, a prestigious project that aims to provide a reliable basis for tracking and identifying trends in high-performance computing (HPC), released its latest, 35th list of the world's top supercomputers on May 31, 2010, during the International Supercomputing Conference 2010 (ISC'10) in Hamburg, Germany (for the full list, please visit the TOP500 website). Guess what: the Cray Jaguar has taken the crown as the world's fastest supercomputer. The machine, which is powered by 224,162 cores of Cray XT5-HE Opteron Six Core 2.6 GHz processors and runs the Cray Linux Environment (CLE), scored a top speed of 1.75 petaflops. Notably, the system has supported scientists in simulating star explosions and, worth mentioning, the flow of uranium into the Columbia River as a result of old underground storage infrastructure.
So, there goes the tag line of the latest update on this technology. For those working in this field, it is surely not hard to understand. In contrast, it is almost impossible for those who are not to really see the beauty behind this technology. Well, a supercomputer is simply a computer at the front line of current processing capacity, especially in terms of computational speed. So, what is the big deal about that? Actually, the most remarkable thing about this technology is how these monsters organize their speed to solve many computational tasks simultaneously, in parallel, through innovative frameworks. Therefore, the memory hierarchy of these systems' architecture has to be very well designed to make sure the processors are kept fed with data and instructions at all times. More recently, the chip-making giant Intel has announced plans for a new architecture consisting of multicore 64-bit processors configured as co-processors. The architecture, called Many Integrated Core (MIC), focuses on replacing the many standard Xeon processors used in massively parallel supercomputers with many-core system-on-chip (SoC) processors.
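To give a feel for why the memory hierarchy matters so much, here is a minimal sketch in C of loop tiling (blocking), a classic way to keep each core's working set small enough to stay in cache. The matrix size N and tile size BLOCK are purely illustrative values, not taken from any particular machine:

#include <stddef.h>

#define N 1024      /* assumed matrix dimension (illustrative; a multiple of BLOCK) */
#define BLOCK 64    /* assumed tile size; in practice tuned to the cache sizes */

/* Blocked (tiled) matrix multiply: C += A * B.
 * Working on BLOCK x BLOCK tiles reuses data while it is still in cache,
 * so the cores are fed from fast memory instead of stalling on DRAM. */
void matmul_blocked(double A[N][N], double B[N][N], double C[N][N])
{
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t kk = 0; kk < N; kk += BLOCK)
            for (size_t jj = 0; jj < N; jj += BLOCK)
                for (size_t i = ii; i < ii + BLOCK; i++)
                    for (size_t k = kk; k < kk + BLOCK; k++)
                        for (size_t j = jj; j < jj + BLOCK; j++)
                            C[i][j] += A[i][k] * B[k][j];
}

The naive triple loop touches memory in a pattern that overflows the cache; the tiled version does the same arithmetic with far fewer trips to main memory, which is exactly the kind of tuning supercomputer codes depend on.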
Another important part of a supercomputer is the software. As we already know, hardware and software must get along for any system to operate efficiently. Most supercomputers currently run various Linux distributions. This is due to the fact that this operating system permits customization and good low-level monitoring, which is important for observing hardware operations, including memory allocation and parallel compatibility. On the other hand, special programming techniques have to be used to actually reach that speed. Fortran or C is usually used as the base language, with special libraries such as the Message Passing Interface (MPI) to share data between nodes. Furthermore, the intelligent-solutions provider Adaptive Computing has launched an innovative web portal for deploying HPC as a service and a workload-driven HPC cloud architecture (for more information, see the Market Watch website and the company's website). The service, Moab Viewpoint(TM), lets organizations operating the world's leading supercomputers provide resources as services for computationally challenging environments without requiring users to understand the system architecture.
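For a taste of what MPI code looks like, here is a minimal sketch in C; the workload (summing the integers 1 to 1,000,000) is a hypothetical example, not tied to any system mentioned above. Each process sums its own slice of the range, and MPI_Reduce combines the partial results on rank 0:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    /* Hypothetical workload: split the range 1..1,000,000 across ranks. */
    const long total = 1000000;
    long chunk = total / size;
    long lo = rank * chunk + 1;
    long hi = (rank == size - 1) ? total : lo + chunk - 1;

    double local = 0.0, global = 0.0;
    for (long i = lo; i <= hi; i++)
        local += (double)i;

    /* Share data between nodes: sum every rank's partial result on rank 0. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f\n", global);

    MPI_Finalize();
    return 0;
}

With an MPI implementation installed, a program like this is typically compiled with mpicc and launched with mpirun or mpiexec, each copy becoming one "rank" of the job; the same pattern scales from a laptop to a machine like Jaguar.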