Introduction - 8th Workshop on Latest Advances in
Scalable Algorithms for Large-Scale Systems
Author/Presenters
Event Type: Workshop
Tags: Algorithms, Exascale, Resiliency, SIGHPC Workshop
Time: Monday, November 13th, 9am - 9:05am
Location: 607
Description: Novel scalable scientific algorithms are needed to enable key science applications to exploit the computational power of large-scale systems. This is especially true for the current tier of leading petascale machines and on the road to exascale computing, as HPC systems continue to scale up in compute node and processor core count. These extreme-scale systems require novel scientific algorithms that hide network and memory latency, achieve very high computation/communication overlap, minimize communication, and avoid synchronization points. With the advent of Big Data in the past few years, the need for scalable mathematical methods and algorithms able to handle data- and compute-intensive applications at scale has become even more pressing.
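The latency-hiding and overlap requirements above follow a common pattern: post the communication, compute everything that does not depend on it, then finish the dependent part. A minimal sketch of that pattern for a 1-D stencil step, with the halo exchange simulated by a background thread (real codes would use nonblocking MPI calls such as MPI_Isend/MPI_Irecv; `stencil_step` and the periodic boundary are illustrative assumptions, not from the workshop text):

```python
import threading

def stencil_step(grid):
    """One averaging-stencil step that overlaps a (simulated) halo
    exchange with the interior computation."""
    n = len(grid)
    new = grid[:]

    halo = {}
    def exchange_halo():
        # Stand-in for nonblocking communication with neighbor ranks;
        # here we just read the periodic boundary values locally.
        halo["left"], halo["right"] = grid[-1], grid[0]

    comm = threading.Thread(target=exchange_halo)
    comm.start()                      # "post" the communication

    # Interior points need no remote data, so this work overlaps
    # the exchange that is still in flight.
    for i in range(1, n - 1):
        new[i] = 0.5 * (grid[i - 1] + grid[i + 1])

    comm.join()                       # wait for the halo, then finish edges
    new[0] = 0.5 * (halo["left"] + grid[1])
    new[-1] = 0.5 * (grid[-2] + halo["right"])
    return new
```

The point of the structure is that the wait (`join`) happens only after all communication-independent work is done, which is exactly the overlap property the description asks of extreme-scale algorithms.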
Scientific algorithms for multi-petaflop and exaflop systems also need to be fault tolerant and fault resilient, since the probability of faults increases with scale. Resilience at the system software and at the algorithmic level is needed as a crosscutting effort. Finally, with the advent of heterogeneous compute nodes that employ standard processors as well as GPGPUs, scientific algorithms need to match these architectures to extract the most performance. This includes different system-specific levels of parallelism as well as co-scheduling of computation. Key science applications require novel mathematics and mathematical models and system software that address the scalability and resilience challenges of current- and future-generation extreme-scale HPC systems.
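One concrete example of resilience at the algorithmic level, of the kind the description calls for, is algorithm-based fault tolerance (ABFT) via checksum encoding for matrix multiplication: augment A with a column-checksum row and B with a row-checksum column, and a single corrupted entry of the product can be located and corrected from the checksum mismatches. A sketch under that scheme (the function names and the 1e-9 tolerance are illustrative choices, not from the workshop text):

```python
def matmul(A, B):
    """Plain triple-loop matrix product (lists of lists)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def encode(A, B):
    """Append a column-checksum row to A and a row-checksum column to B."""
    Ac = A + [[sum(col) for col in zip(*A)]]
    Br = [row + [sum(row)] for row in B]
    return Ac, Br

def detect_and_correct(C):
    """C is the (n+1) x (p+1) checksummed product. A single corrupted
    entry shows up as exactly one bad row checksum and one bad column
    checksum; recompute it from its row checksum, then strip checksums."""
    n, p = len(C) - 1, len(C[0]) - 1
    bad_row = [i for i in range(n)
               if abs(sum(C[i][:p]) - C[i][p]) > 1e-9]
    bad_col = [j for j in range(p)
               if abs(sum(C[i][j] for i in range(n)) - C[n][j]) > 1e-9]
    if len(bad_row) == 1 and len(bad_col) == 1:
        i, j = bad_row[0], bad_col[0]
        C[i][j] = C[i][p] - sum(C[i][k] for k in range(p) if k != j)
    return [row[:p] for row in C[:n]]
```

The appeal at scale is that the check and correction cost O(n^2) on top of an O(n^3) kernel, so the algorithm tolerates a fault without checkpoint/restart of the whole computation.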