If you have any questions during this period, please feel free to contact HCC support at [email protected].
HCC staff will be monitoring the systems to ensure availability through the break. HCC User Services staff will periodically monitor the ticketing system during the break and will address any system-critical issues. Non-critical tickets and issues will be addressed when we return after the winter break on January 2, 2024.
Please email [email protected] if you have any questions.
Happy Holidays!
Hongfeng Yu, Director
Holland Computing Center
These courses are only available to those within the University of Nebraska system at this time.
Link: https://nebraska.bridgeapp.com/learner/courses/a90b1752/enroll
A short course containing important information that is strongly recommended for new group owners.
This includes information about HCC policies, account creation, an overview of data storage (including the purge policy on $WORK), support, training, and acknowledgment credits.
Link: https://nebraska.bridgeapp.com/learner/courses/77529776/enroll
A short course containing important information that is strongly recommended for those new to using HCC resources.
This includes information about HCC policies, managing your account, an overview of data storage (including the purge policy on $WORK), support, training, and acknowledgment credits.
Link: https://nebraska.bridgeapp.com/learner/courses/edb77314/enroll
This is a longer course providing an introduction to using HCC’s Swan supercomputer, including basic data management, software usage, submitting jobs, reviewing jobs, and acknowledgment credits. This course is recommended for those new to using HCC resources and anyone wanting to learn more about HCC.
With the retirement of Crane, all data stored on both the $HOME and $WORK filesystems of Crane will be removed following decommissioning. Users with data on Crane are strongly encouraged to move their files as soon as possible. For precious data, HCC provides the Attic resource to reliably store data for a nominal cost. The $COMMON filesystem is another option: it is available on both Crane and Swan, and is not subject to the 6-month purge policy in effect on the $WORK filesystem. Please note that data on $COMMON is not backed up; precious data should additionally be saved elsewhere. Each group is allocated 30TB of $COMMON space at no charge; additional space is available for a fee. For large data transfers, we strongly encourage using Globus. The Globus transfer servers for Crane, Swan, and Attic provide a faster connection and verify the data on both ends of the transfer.
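In addition to the Globus web interface, the Globus command-line client can script a recursive transfer. A minimal sketch follows; the endpoint UUIDs and paths below are placeholders, not the real Crane or Swan collection IDs, which you would need to look up in the Globus web app or with `globus endpoint search`:

```shell
# Log in once; this opens a browser window for Globus authentication.
globus login

# Placeholder endpoint UUIDs -- substitute the actual Crane and Swan
# collection IDs before running.
CRANE_EP="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
SWAN_EP="11111111-2222-3333-4444-555555555555"

# Recursively copy a $WORK directory from Crane to Swan; Globus
# checksums files on both ends and automatically retries failures.
globus transfer --recursive --label "crane-to-swan-migration" \
    "$CRANE_EP:/work/mygroup/myuser/project" \
    "$SWAN_EP:/work/mygroup/myuser/project"
```

The transfer runs asynchronously on the Globus servers, so it continues even after you log out of your local machine.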
As part of the long-running plan to retire Crane, HCC deployed the Swan cluster in May 2022 as Crane’s replacement, made possible by investments from the UNL Office of Research and Economic Development and the Nebraska Research Initiative. Swan already has a significant number of resources available for general use, and the remaining in-warranty resources from Crane will be migrated into it. Most workflows from Crane can be used on Swan with little to no modification to software or submission scripts.
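As an illustration of how little typically changes, a submission script of the following shape would run on either cluster once resubmitted from a Swan login node. This is a sketch only: the partition, module, and resource names are assumptions, so check `sinfo` and `module avail` on Swan for the actual values.

```shell
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --partition=batch      # partition name is an assumption; verify with `sinfo`
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --mem=8gb

# Module names may differ between clusters; verify with `module avail`.
module load python
python my_analysis.py          # hypothetical workload script
```

Typically only the data paths and occasionally a module version need updating when moving such a script from Crane to Swan.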
HCC will work to make this transition as smooth and minimally disruptive as possible. More information can be found on this page. Please contact [email protected] with any questions or concerns.
With the retirement of Crane, all data stored on the $HOME and $WORK filesystems of Crane will be removed following decommissioning. Users with data on Crane are strongly encouraged to move their files sooner rather than later. For precious data, HCC provides the Attic resource to reliably store data for a nominal cost. The $COMMON filesystem is another option: it is available on both Crane and Swan, and is not subject to the 6-month purge policy in effect on the $WORK filesystem. Please note that data on $COMMON is not backed up; precious data should additionally be saved elsewhere. Each group is allocated 30TB of $COMMON space at no charge; additional space is available for a fee. For large data transfers, we strongly encourage using Globus. The Globus transfer servers for Crane, Swan, and Attic provide a faster connection and verify the data on both ends of the transfer.
As part of the long-running plan to retire Crane, HCC deployed the Swan cluster in May 2022 as Crane’s replacement, made possible by investments from the UNL Office of Research and Economic Development and the Nebraska Research Initiative. Swan already has a significant number of resources available for general use, and the remaining in-warranty resources from Crane will be migrated into it. Most workflows from Crane can be used on Swan with little to no modification to software or submission scripts.
REMINDER:
As part of the retirement process, GPUs on Crane will no longer be available as of May 24th; any GPU jobs that cannot complete by May 24th will not start. This is to allow HCC to migrate the GPUs from Crane to Swan and consolidate the pool of GPU resources on Swan, which will greatly increase the number of GPUs available on the Swan cluster.
Transitioning GPU workflows to Swan should only require re-creating any needed environments and migrating any needed data. If you have any questions about this process, please contact [email protected] or join our Open Office Hours every Tuesday and Thursday from 2-3 PM via Zoom.
HCC will work to make this transition as smooth and minimally disruptive as possible. More information can be found on this page. Please contact [email protected] with any questions or concerns.
With the retirement of Crane, all data stored on the $HOME and $WORK filesystems of Crane will be removed following decommissioning. Users with data on Crane are strongly encouraged to move their files sooner rather than later. For precious data, HCC provides the Attic resource to reliably store data for a nominal cost. The $COMMON filesystem is another option: it is available on both Crane and Swan, and is not subject to the 6-month purge policy in effect on the $WORK filesystem. Please note that data on $COMMON is not backed up; precious data should additionally be saved elsewhere. Each group is allocated 30TB of $COMMON space at no charge; additional space is available for a fee. For large data transfers, we strongly encourage using Globus. The Globus transfer servers for Crane, Swan, and Attic provide a faster connection and verify the data on both ends of the transfer.
As part of the long-running plan to retire Crane, HCC deployed the Swan cluster in May 2022 as Crane’s replacement, made possible by investments from the UNL Office of Research and Economic Development and the Nebraska Research Initiative. Swan already has a significant number of resources available for general use, and the remaining in-warranty resources from Crane will be migrated into it. Most workflows from Crane can be used on Swan with little to no modification to software or submission scripts.
UPDATE:
As part of the retirement process, GPUs on Crane will no longer be available starting May 20th. This is to allow HCC to migrate the GPUs from Crane to Swan and consolidate the pool of GPU resources on Swan. The GPUs on Swan are unaffected and will continue to run jobs.
HCC will work to make this transition as smooth and minimally disruptive as possible. More information can be found on this page. Please contact [email protected] with any questions or concerns.
With the retirement of Crane, all data stored on the $HOME and $WORK filesystems of Crane will be removed following decommissioning. Users with data on Crane are strongly encouraged to move their files sooner rather than later. For precious data, HCC provides the Attic resource to reliably store data for a nominal cost. The $COMMON filesystem is another option: it is available on both Crane and Swan, and is not subject to the 6-month purge policy in effect on the $WORK filesystem. Please note that data on $COMMON is not backed up; precious data should additionally be saved elsewhere. Each group is allocated 30TB of $COMMON space at no charge; additional space is available for a fee. For large data transfers, we strongly encourage using Globus. The Globus transfer servers for Crane, Swan, and Attic provide a faster connection and verify the data on both ends of the transfer.
As part of the long-running plan to retire Crane, HCC deployed the Swan cluster in May 2022 as Crane’s replacement, made possible by investments from the UNL Office of Research and Economic Development and the Nebraska Research Initiative. Swan already has a significant number of resources available for general use, and the remaining in-warranty resources from Crane will be migrated into it. Most workflows from Crane can be used on Swan with little to no modification to software or submission scripts.
HCC will work to make this transition as smooth and minimally disruptive as possible. More information can be found on this page. Please contact [email protected] with any questions or concerns.