Rhino: What's New
HCC will soon make Rhino available for general use by the University community. This post provides a brief overview of Rhino itself and highlights changes compared to existing HCC clusters.
What is Rhino?
Rhino was created by combining the still-useful components of Tusker and Sandhills, two previous HCC clusters. Rhino nodes consist of AMD Interlagos CPUs, with four CPUs (64 cores) per node and QDR InfiniBand. Most nodes have either 192 or 256GB RAM, with two 512GB and two 1TB RAM nodes. Rhino is intended primarily to run large-memory (RAM) workflows. Rhino will have ~350TB of BeeGFS scratch space for /work, with the same /home, /common, and /work directory layout as other HCC clusters.
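Since Rhino is aimed at large-memory work, a minimal SLURM submit script for such a job might look like the sketch below (the memory request, time limit, module, and script name are illustrative assumptions, not confirmed Rhino settings):

    #!/bin/bash
    #SBATCH --job-name=bigmem-job
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --mem=500G          # large request intended to land on a 512GB or 1TB node (assumed)
    #SBATCH --time=04:00:00

    # module and script names below are hypothetical
    module load python
    python my_large_memory_analysis.py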
What’s different on Rhino?
Compared to Crane, the environment should seem quite familiar. However, there are a few notable differences:
Rhino is based on CentOS 7
Previous clusters, including Crane currently, use CentOS 6. It’s important to note that compiled programs may not work correctly if they are simply copied from a machine running CentOS 6 to one running CentOS 7. HCC strongly recommends recompiling all programs from source on Rhino to avoid issues. Code in interpreted languages such as Python, R, SAS, and MATLAB is not affected by this and may be copied directly from Crane, for example. If you don’t see a package you need, fill out the Software Request form and we will be happy to assist you.
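As a hedged sketch of the recommended rebuild, assuming a simple C program and a compiler module named gcc (the module and file names are hypothetical):

    # copy your source to Rhino, load a compiler, and rebuild there
    module load gcc
    gcc -O2 -o mytool mytool.c
    ./mytool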
Rhino has a default set of modules
HCC uses the Lmod modules software to enable easy use of, and switching between, different software packages. On previous clusters, no modules were loaded by default at login. On Rhino, a small set of “essential” software will be loaded at login. This includes the most recent GCC compiler and OpenMPI version, Python, Perl, and a few others. You can see the complete set by running module list after logging in. Of course, you can customize this default set if you’d like, using Lmod’s collections feature, as sketched below.
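For example, Lmod’s standard collection commands can capture a customized set and make it your default (the module and collection names here are illustrative):

    # load the modules you want, then save them as your default collection
    module load gcc openmpi
    module save
    # or save under a name and restore it in a later session
    module save my_default_set
    module restore my_default_set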
Coming Soon: Rhino will support a JupyterHub interface
Shortly after Rhino is opened to general use, a JupyterHub interface will be added, similar to Crane. This will allow Jupyter Notebooks to be transparently run within a SLURM job for interactive analysis, visualization, etc.
Rhino uses BeeGFS for /work
Though largely a “behind-the-scenes” change, Rhino uses the BeeGFS parallel filesystem as opposed to Lustre. Users still access /work in the same manner as before, but the use of BeeGFS is intended to provide performance and stability improvements. Please note that the 6-month purge policy also applies to Rhino’s /work.
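For example, assuming the usual HCC /work/<group>/<user> layout and that the $WORK environment variable is set as on other HCC clusters:

    # $WORK should point to your directory under the BeeGFS /work filesystem
    cd $WORK
    pwd   # e.g. /work/<group>/<user> (hypothetical path)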
The Tusker home directories are available read-only for a limited time
For convenience, your home directory from Tusker is available for you to copy any needed files from. It may be accessed using the $TUSKER_HOME environment variable (i.e. cd $TUSKER_HOME). Please note that the directory is read-only and is not intended to be used to run programs from. Please copy what you need in a timely manner to another secure location, such as Rhino $HOME, Attic, etc. At some point in the future, HCC will make an announcement and permanently remove these directories.
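A minimal sketch of copying data out, assuming a hypothetical directory named myproject:

    # copy a directory from the read-only Tusker home into Rhino $HOME
    cp -a $TUSKER_HOME/myproject $HOME/
    # or use rsync, which preserves attributes and can resume interrupted copies
    rsync -av $TUSKER_HOME/myproject/ $HOME/myproject/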