APAC HPC users eye solid-state drives

Some high-performance computing (HPC) users in the Asia-Pacific region are keen to incorporate flash memory technology, but solid-state drives (SSDs) are unlikely to become mainstream in supercomputers anytime soon.
Steve Tolnai, chief technologist for HPC at Hewlett-Packard Asia-Pacific and Japan, told ZDNet Asia in an e-mail interview that the company has customers in the region that "are interested in SSDs and are currently conducting pilot deployments".
According to Tolnai, HPC workloads are typically mixed, running a significant amount of both read and write jobs. SSDs are ideal for the read-intensive portions of such environments, as "they have 25 times the performance on read applications [compared with] traditional hard disks".
Tolnai added: "In terms of I/O (input/output) operations, an SSD delivers between 10 and 100 times that of an enterprise hard disk drive, and also speeds up boot time significantly."
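To put such figures in perspective, here is a back-of-envelope sketch in Python; the IOPS values and job size below are illustrative assumptions, not figures from HP.

    # Rough sketch of why I/O operations per second matter for a read-heavy job.
    # All figures are illustrative assumptions, not vendor-published numbers.
    HDD_IOPS = 150       # assumed: a typical enterprise 15K RPM disk
    SSD_IOPS = 15_000    # assumed: 100 times the HDD, the upper bound quoted above
    RANDOM_READS = 10_000_000    # assumed job: 10 million small random reads

    hdd_hours = RANDOM_READS / HDD_IOPS / 3600
    ssd_hours = RANDOM_READS / SSD_IOPS / 3600
    print(f"HDD: {hdd_hours:.1f} h, SSD: {ssd_hours:.2f} h")
    # Prints roughly "HDD: 18.5 h, SSD: 0.19 h": the disk stops being the bottleneck.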
SSDs also afford HPC users energy savings, he said. Despite being more expensive than a hard disk drive (HDD), an SSD requires less than half its power. "This energy effectiveness can help customers running large cluster systems for HPC save costs," he explained.
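His cost argument can also be sketched numerically. The wattages, drive counts and electricity price below are assumptions chosen for illustration, not HP figures.

    # Illustrative cluster-level energy arithmetic; every input is an assumption.
    HDD_WATTS = 10.0         # assumed draw of one enterprise HDD
    SSD_WATTS = 4.0          # assumed: less than half the HDD's power, per Tolnai
    NODES = 1024             # assumed: a fully populated cluster
    DRIVES_PER_NODE = 2      # assumed configuration
    USD_PER_KWH = 0.10       # assumed electricity price

    watts_saved = (HDD_WATTS - SSD_WATTS) * NODES * DRIVES_PER_NODE
    kwh_per_year = watts_saved * 24 * 365 / 1000
    print(f"~{watts_saved / 1000:.1f} kW saved, ~${kwh_per_year * USD_PER_KWH:,.0f} per year")
    # Roughly 12.3 kW and US$10,764 a year, before counting reduced cooling load.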
Hewlett-Packard (HP) has also made SSDs available as an option on the HP Cluster Platform 3000, which is designed to support between five and 1,024 nodes.
Earlier this month, the San Diego Supercomputer Center (SDSC) announced a new HPC system that it claims is the first to use flash memory technology. Using Intel SSDs, the system has undergone trial runs and is expected, among other applications, to be capable of searching sky survey data for near-Earth asteroids, helping researchers better understand periodic extinctions on Earth.
Marek T. Michalewicz, senior deputy director at Singapore's A*STAR (Agency for Science, Technology and Research) Computational Resource Centre (CRC), said the SDSC announcement was "interesting". The CRC, which oversees the operations of HPC systems at the agency, would be monitoring and learning from the U.S. supercomputing facility's use of flash memory technology, he added in an e-mail.
The CRC's HPC demands are currently "compute-intensive", bound by processor speed, the number of processors and memory I/O. Moving data generated by the computer programs to storage is not critical for most applications run on CRC's systems, added Michalewicz.
However, this could change as computational biologists in Singapore are becoming a sizable user community of CRC's HPC systems, he noted. "We will be carefully following the developments and experiences of SDSC so as to assess its potential usefulness for our scientists in their research areas such as gene sequencing, bio-image analysis, data mining of very large data sets, and visualization of molecular dynamics computations for systems composed of multi-billion atoms or molecules."
Hardware vendor NEC is also prioritizing computing speed for its customers' supercomputing needs and, according to a senior executive at the company, has no definite plans to incorporate flash memory into its HPC systems.
Hiroshi Takahara, senior director of the HPC division at NEC Corporation, explained that NEC's HPC customers are engaged in "mid-range to high-end scientific and technical computing", where a key issue is how fast massive arithmetic or floating-point operations are tackled. In this respect, the speed of data transfer between the computer and the disk device is not as critical, he noted in an e-mail.
"While the amount of numerical data transacted between the computer and the disk device is huge, its frequency is not so significant compared to the intensity of numerical computing," he said. "Rather, the speed of computing is a critical factor in achieving the expected reduced time-to-solution."
According to Takahara, applications suited to SSD-based supercomputers include financial computing, gene analysis, the manipulation of geophysical data and "applications that must capitalize on a huge cache space during the code execution". Cloud computing, he added, will boost SSD-based data manipulation.
Cost, reliability determine adoption
Jim McGregor, chief technology strategist at In-Stat, said in an e-mail that SSDs will not become mainstream in supercomputing "anytime soon".
Nonetheless, flash is not new to large-scale computing environments, and "makes more sense now than it has before because of the increased performance, higher endurance cycles, better wear algorithms, and lower cost".
Although the cost per bit of flash is, for now, still higher than that of HDDs, flash is set "to become more mainstream for all computing solutions, even PCs over the next five years", he said.
HP's Tolnai noted the reliability of SSDs as another key factor influencing their use in supercomputing.
"As with any new technology, SSD adoption in HPC will be a gradual process. It is still at the leading edge implementation testing stages so there are a number of factors and challenges that need to be addressed before its maturity in the HPC space," he said. "With further testing, the industry will develop a method to measure, protect and improve the reliability of SSD."
HP's Tolnai pointed out that ultimately, SSDs are not recommended for every HPC initiative. He said they are ideally suited to businesses such as Web 2.0 companies, which need power-efficient yet intensive I/O processing, requiring high performance, high capacity and fast disk read speeds.