With the retirement of Crane, all data stored on both the $HOME and $WORK filesystems of Crane will be removed following decommissioning. Users with data on Crane are strongly encouraged to move their files as soon as possible. For precious data, HCC provides the Attic resource to reliably store data for a nominal cost. The $COMMON filesystem is another option: it is available on both Crane and Swan, and is not subject to the 6-month purge policy in effect on the $WORK filesystem. Please note that data on $COMMON is not backed up; precious data should additionally be saved elsewhere. Each group is allocated 30 TB of $COMMON space at no charge; additional space is available for a fee. For large data transfers, we strongly encourage using Globus. The Globus transfer servers for Crane, Swan, and Attic provide a faster connection and perform checks on both ends of the transfer.
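For illustration, here is a minimal Python sketch of a recursive Globus transfer using the globus-sdk package. The client ID, endpoint UUIDs, and paths below are placeholders for this example only; the actual collection IDs should be looked up in the Globus web app or the HCC documentation, and the Globus web interface remains the simplest way to start a transfer.

import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"   # placeholder: register your own native app with Globus
CRANE_ENDPOINT = "CRANE-ENDPOINT-UUID"    # placeholder: Crane transfer endpoint UUID
SWAN_ENDPOINT = "SWAN-ENDPOINT-UUID"      # placeholder: Swan transfer endpoint UUID

# One-time interactive login to obtain a transfer token
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Authorization code: ").strip())
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))

# Recursively copy a $WORK directory from Crane to Swan with checksum verification
tdata = globus_sdk.TransferData(tc, CRANE_ENDPOINT, SWAN_ENDPOINT,
                                label="crane-to-swan-migration", sync_level="checksum")
tdata.add_item("/work/mygroup/myuser/project", "/work/mygroup/myuser/project", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted Globus task:", task["task_id"])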
As part of the long-running plan to retire Crane, HCC deployed the Swan cluster in May 2022 as Crane's replacement, made possible by investments from the UNL Office of Research and Economic Development and the Nebraska Research Initiative. Swan already has a significant number of resources available for general use, and the remaining in-warranty resources from Crane will be migrated into the Swan cluster. Most workflows from Crane can be used on Swan with little to no modification to software or submission scripts.
HCC will work to make this transition as minimally disruptive as possible. More information can be found at this page. Please contact [email protected] with any questions or concerns.
REMINDER:
As part of the retirement process, GPUs on Crane will no longer be available as of May 24th, and any jobs that cannot complete by May 24th will not start. This allows HCC to migrate the GPUs from Crane to Swan and consolidate the pool of GPU resources into Swan, which will greatly increase the number of GPUs available on the Swan cluster.
Transitioning GPU workflows to Swan should only require re-creating any needed environments and migrating any needed data. If you have any questions about this process, please contact [email protected] or join our Open Office Hours every Tuesday and Thursday from 2-3 PM via Zoom.
UPDATE:
As part of the retirement process, GPUs on Crane will no longer be available starting May 20th. This allows HCC to migrate the GPUs from Crane to Swan and consolidate the pool of GPU resources into Swan. The GPUs already on Swan are unaffected and will continue to run jobs.
The SLURM queue of jobs was not maintained during this upgrade in order to limit unexpected side effects, so you will need to resubmit any work that was in the queue when the downtime began.
After an upgrade of this scale, there will likely be some latent issues that crop up due to the complexity of the changes. Please email [email protected] with any problems or questions you may have, and we will work to resolve them as quickly as possible.
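If you kept your submit scripts, resubmitting is straightforward. As a minimal sketch (assuming a hypothetical ~/jobs directory holding your saved .slurm submit scripts, and that sbatch is available on your PATH):

import subprocess
from pathlib import Path

job_dir = Path.home() / "jobs"  # hypothetical directory of saved submit scripts
for script in sorted(job_dir.glob("*.slurm")):
    # sbatch re-queues each script; SLURM prints "Submitted batch job <id>" on success
    result = subprocess.run(["sbatch", str(script)], capture_output=True, text=True)
    print(f"{script.name}: {result.stdout.strip() or result.stderr.strip()}")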
The earlier failure affected components of the /work filesystem, and out of an abundance of caution we are letting the affected components rebuild to completion before allowing access. This process will hopefully finish overnight and allow us to open Crane in a healthy state tomorrow. No data on /work was affected by the failure, so all your files on /work should remain. We will send an announcement as soon as Crane is open for general use, including details about changes made during this downtime and how they may impact your use of the system.