Today we are seeing a revival in the field of Distributed and High Performance Computing, made possible by technological advances that herald an exciting time for researchers and developers in the discipline. These advances include Petascale Computing, Cloud Computing, Cell processors, Graphics Processing Units, and multicore systems, to name a few.
Distributed computing research is having a major impact on what is now known as transformative research: work that genuinely changes the way scientists and engineers carry out research. The problems in Science and Engineering are no longer just about data, simulation, computing, networks, or visualization; they are about complex problem solving, and the solutions normally involve a multitude of elements from the field of parallel and distributed computing (e.g., algorithms, software platforms, hardware).
Parallel and distributed computing remains an important, and probably the most prominent, pillar of many cyberinfrastructure developments around the world (the USA's National Science Foundation, the Australian Research Council, the European Framework Programmes, etc.). As an example, in the USA the TeraGrid and the new Extreme Digital initiative will rely heavily on HPC to support national scientific computing communities and research and development. In Australia the NCRIS initiative plays a similar role. These initiatives enable a wider range of people to access HPC facilities, with resources available without users having to worry about where their jobs are stored or computed. This creates more opportunities for research in distributed computing and impacts many fields of endeavour.
Another important development is Petascale Computing, which naturally leads us into the massive-data problem. Problems of this kind arise in many disciplines, with entire communities concerned about how to deal with the data deluge as well as the computation that goes with it. So much data will be available that the way of doing science will change profoundly. In this setting, the hardest job is often not the 'science' itself but figuring out how to meaningfully turn it into a problem that can be parallelized, solved on a distributed computing platform (cluster, grid, cloud, etc.), and made to work properly when scaled.
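To make the parallelization step concrete, the following is a minimal sketch (not from the original text) of recasting a data-heavy computation as an embarrassingly parallel map-reduce: the data set is split into chunks, a per-chunk analysis is mapped over a pool of worker processes, and the partial results are reduced. The names `analyse_chunk` and `readings` are hypothetical stand-ins for a real scientific workload; on an actual cluster, grid, or cloud the process pool would be replaced by a distributed scheduler, but the decomposition pattern is the same.

```python
from multiprocessing import Pool

def analyse_chunk(chunk):
    # Stand-in for a per-chunk scientific computation
    # (here: a simple sum of squares over the chunk).
    return sum(x * x for x in chunk)

def parallel_analysis(readings, n_workers=4, chunk_size=1000):
    # Decompose the data set into independent chunks.
    chunks = [readings[i:i + chunk_size]
              for i in range(0, len(readings), chunk_size)]
    # Map step: one task per chunk, executed by worker processes.
    with Pool(n_workers) as pool:
        partials = pool.map(analyse_chunk, chunks)
    # Reduce step: combine the partial results.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(10000))
    print(parallel_analysis(data))
```

Because the chunks are independent, the same decomposition scales from a multicore workstation to a distributed platform; the hard part, as noted above, is usually finding a decomposition of the scientific problem for which this independence holds.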
Providing a stronger sense of the distributed architecture, of how everything fits together, and then making sure that the different components are integrated, would go a long way toward changing the mindset of scientists from other disciplines. This is a golden opportunity for distributed computing researchers and developers to lead the way by engaging scientists and engineers working at the boundaries of multiple scientific cultures.