High Performance Computing Courses 11 and 18 October at Uni Melb
The University of Melbourne is offering three courses in October to help new and experienced researchers make use of the high performance computing facilities.
* Introduction to Linux and HPC
Thu. 11 October 2018 10:00 am – 3:00 pm
If your desktop system is too slow for your “big datasets” or your problems are too complex, High Performance Computing (HPC) is the tool you need. At the University of Melbourne we have one general-purpose HPC system, Spartan, which is available to researchers. This introductory course provides an overview of the system and the Linux command line, shows how to write Slurm scripts so you can submit data or task parallel jobs, and works through plenty of “hands-on” examples with research applications. Bring a laptop and a desire to learn about supercomputers. A minimal example of a Slurm submission script is sketched below the registration link.
Register at the following URL:
https://linux-hpc-oct-2018.eventbrite.com.au
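By way of illustration, a basic Slurm submission script of the kind covered in this course might look like the sketch below. The partition name, module version and script name are hypothetical placeholders; Spartan's actual partition and module names may differ.

#!/bin/bash
# Request a single core for one hour on a hypothetical partition.
#SBATCH --job-name=my-first-job
#SBATCH --partition=physical
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=0-01:00:00

# Load an application module; the name and version are illustrative.
module load Python/3.6.4

# Run the application on the allocated compute node.
python myscript.py

The script is submitted from the login node with sbatch, and the state of the queued job can be checked with squeue.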
* Shell Scripting for HPC
Thu. 18 October 2018 10:00 am – 3:00 pm AEDT
Basic job submission scripts in Slurm allow for the allocation of resources and computation in batch mode. However, these job submissions can be made even more powerful and flexible with a working knowledge of shell scripting and more advanced Slurm commands. This course assumes knowledge of the ‘Introduction to High Performance Computing’ course and describes more advanced Linux commands, regular expressions and regex tools, various shells, the use of variables, loops, and conditional statements, the incorporation of this shell scripting knowledge into job submission scripts, and auto-generating scripts with heredocs. Please bring your laptop and a desire to make your supercomputer job submission scripts even more powerful. A sketch of a heredoc-generated submission script follows the registration link.
Register at the following URL:
https://shell-scripting-hpc-oct-2018.eventbrite.com.au
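As a flavour of where this course heads, the sketch below uses a loop, a conditional, shell variables and a heredoc to auto-generate and submit one Slurm script per input file. The data file pattern, module version and analysis script are hypothetical.

#!/bin/bash
# Generate and submit a separate Slurm job for each CSV input file.
for INPUT in dataset_*.csv; do
    # Skip the literal pattern if no files actually match (a simple conditional).
    [ -e "${INPUT}" ] || continue
    JOBSCRIPT="${INPUT%.csv}.slurm"
    # The heredoc writes a complete job script; ${INPUT} expands now, so each
    # generated script carries its own input file name.
    cat > "${JOBSCRIPT}" << EOF
#!/bin/bash
#SBATCH --job-name=${INPUT%.csv}
#SBATCH --ntasks=1
#SBATCH --time=0-00:30:00
module load R/3.5.0
Rscript analyse.R ${INPUT}
EOF
    sbatch "${JOBSCRIPT}"
done

Running the generator once queues a batch of independent jobs, one per dataset.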
* Parallel Processing with HPC
Parallel processing is how applications scale to deal with large datasets and complex problems. In this course, an initial revision of parallel job submission on Spartan and the parallel extensions available in some applications leads to a consideration of HPC system architecture and the various limitations and bottlenecks in parallel processing. This is followed by an introduction to the two core approaches in parallel programming, shared memory and distributed memory, using OpenMP threads and MPI message-passing routines. A number of programming examples and opportunities for development are provided. Please bring your laptop and a desire to learn the basics of parallel processing. Sketches contrasting shared-memory and distributed-memory job submissions are given below the registration line.
Register at the following URL:
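As a taste of the job submission side of the course, the two sketches below contrast a shared-memory (OpenMP) request confined to one node with a distributed-memory (MPI) request spread across nodes. The program names, module name and walltimes are hypothetical.

A shared-memory job: one task with eight OpenMP threads on a single node.

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=0-01:00:00
# Match the OpenMP thread count to the cores Slurm allocated.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
./my_openmp_program

A distributed-memory job: sixteen MPI ranks across two nodes.

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=0-01:00:00
# Load an MPI implementation (module name is illustrative) and launch the ranks.
module load OpenMPI
srun ./my_mpi_program

The course builds from resource requests like these to the OpenMP and MPI programming models themselves.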