High performance computing
==========================

What is high performance computing?
-----------------------------------

High performance computing (HPC) is any computing that makes intensive use of a
computer’s resources. HPC therefore includes all tasks that make intensive use
of the CPU, memory, storage, accelerators (such as GPU), or that require a long
processing time. This definition is purposefully broad and applies to the work
of many researchers.

It is tempting to believe that our personal computer is sufficient for our
research needs. After all, some recurring tasks take only a few minutes. Other,
longer tasks can be run at night when the computer is not otherwise used.
Massive datasets can be analysed in a step-wise manner. However, would your
research objectives be the same if the task that takes a few minutes took only a
few seconds? If you could run 100 tasks overnight instead of one? If you could
analyse your massive dataset in a single automated step?

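
For instance, on clusters that use the Slurm scheduler (as Alliance clusters
do), a single job script can launch 100 independent tasks as a job array. The
sketch below is illustrative only: the script ``analyse.py``, the input file
names, and the resource values are hypothetical placeholders.

.. code-block:: shell

   #!/bin/bash
   #SBATCH --array=1-100      # launch 100 independent tasks
   #SBATCH --time=01:00:00    # time limit for each task
   #SBATCH --mem=4G           # memory for each task

   # Each task in the array receives its own index via
   # SLURM_ARRAY_TASK_ID and processes a different input file,
   # e.g. data_1.csv through data_100.csv (hypothetical names).
   python analyse.py "data_${SLURM_ARRAY_TASK_ID}.csv"

Submitted once with ``sbatch``, this script queues all 100 tasks at the same
time; the scheduler runs as many of them in parallel as resources allow.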
The vast computing power and massively parallel analysis afforded by high
performance computing is not merely a convenience but also an invitation to
rethink our objectives and push back the boundaries of our research.

Learning HPC
------------

If you are not already familiar with HPC, we highly recommend the
`Calcul Québec training`_. If no training is available soon, feel free to
contact our :doc:`technical support <../aide/support>` for custom local
training. In addition, the Alliance `getting started`_ guide covers a
variety of HPC-related topics: using software, running tasks, transferring
data, etc. Finally, if you are not already familiar with the
:doc:`Linux command line`, learning it is a mandatory first step.