Acceptable Use Policy

This document is the current UAHPC acceptable use policy. It specifies acceptable and unacceptable behavior on the UAHPC cluster, although it is not exhaustive. The goal of the policy is to promote safe, secure, and efficient use of the cluster by appropriate individuals and departments, and OIT may take action regarding the cluster in ways not specified in this document in order to meet this goal. These policies are subject to change as the computing environment changes. Violations of the Acceptable Use Policy may result in warnings, account restriction, or account termination; for students, violations may also result in referral to the Office of Student Conduct.

References

Use of this system is governed by additional usage policies as defined in the University of Alabama Network and Computing Support Terms of Use for Computer Accounts and the Computer Resources Acceptable Use and Security Policy.

UAHPC Access

Valid Users

All cluster users must have an active myBama account. User accounts (logins) are available to the following classes of user:

  • Current University of Alabama faculty and staff who have a legitimate academic use for the system.
  • Current University of Alabama students. Student accounts must be sponsored by a faculty member, and the sponsoring faculty member must already be an account holder on UAHPC.
  • Affiliates and third parties. Affiliate accounts may be available and will be assigned on a case-by-case basis. Affiliates must be involved in research with, and sponsored by, a current UA faculty member, and the sponsoring faculty member must already have an account.

Getting an account

An online account application must be completed by each account holder and is kept on file by the Office of Information Technology. Account information must be renewed annually to ensure OIT has current contact information for all system users. Student account requests must be confirmed by a UA faculty member. All account holders are required to subscribe to the RC2-ANNOUNCE listserv, a low-traffic announcement list for UAHPC activity.

System Accessibility and Security

At this time UAHPC is accessible only from the University of Alabama IP address space; from off campus, you must connect to the UA network through the university VPN. The system provides login and file-transfer services via SSH/SFTP and informational services via HTTP/HTTPS.

Connections to UAHPC via Telnet, unencrypted FTP, or other clear-text protocols are not allowed.
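
For reference, a login or file transfer from an allowed client typically uses the standard OpenSSH tools. This is a minimal sketch; the hostname uahpc.ua.edu is a placeholder for illustration rather than a confirmed system address, and the username shown assumes your myBama ID.

    # Interactive login over SSH (hostname shown is a placeholder)
    ssh mybama_id@uahpc.ua.edu

    # Encrypted file transfer over SFTP
    sftp mybama_id@uahpc.ua.edu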

Users are expected to protect their login information and to use non-trivial passwords. Sharing of account login information is not allowed; do not give your credentials to other individuals to use.

Prohibited uses

The following are prohibited on the UAHPC cluster:

  • Activities prohibited by the above-referenced Terms of Use and Acceptable Use and Security Policy are likewise prohibited on the UAHPC system.
  • Reselling of cluster time for any reason is strictly prohibited.
  • Commercial use of the cluster is prohibited.

UAHPC Infrastructure

System Administration

OIT administers and operates the UAHPC cluster. Individual departments are not granted general root access to the system. For suggestions or system configuration requests, please contact the IT Service Desk by calling 348-5555 or emailing itsd@ua.edu.

System Uptime and Maintenance Windows

No system of UAHPC's complexity can operate without downtime; from time to time, software updates or other issues will require the system to be taken down. For the purposes of this document, downtime means failure or unavailability of the overall system. One or more individual compute nodes may be down at any given time, but individual node failures do not make the entire cluster unavailable.

System changes that can be planned will take place during a defined maintenance 'window', a time period in which the system may go down for service. If no work needs to happen during a maintenance window, the system will remain operational through it; the window simply exists as a defined time in which disruptive work may take place.

The standing maintenance window for proactive maintenance or planned operational work is the first Thursday of each month. If the window will be used, an email announcement will be sent to RC2-ANNOUNCE; if no notification is sent to that list, the system will not be taken offline. There may also be unplanned downtime, caused for example by network outages, power failures, server-room cooling emergencies, or other extenuating circumstances. In these cases the cluster may be powered down until the situation is resolved.

UAHPC Data Storage

UAHPC provides three types of data storage. Each user has a login (home) directory. There is also a large external array hosting a shared file system called /bighome, in which each user receives a directory. Finally, there is a large scratch area, /scratch, to support job runs; users may create their own directories under /scratch.
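
As an illustration of the three areas, the per-user paths below assume a conventional /bighome/<username> and /scratch/<username> layout; the actual directory names on UAHPC may differ.

    echo $HOME               # login (home) directory, quota-limited and backed up
    ls /bighome/$USER        # per-user directory on the large /bighome file system
    mkdir -p /scratch/$USER  # users may create their own directories under /scratch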

User Storage Quotas

Initially, each user has a 20 GB storage limit in their login home directory and an additional 200 GB in the /bighome shared file system. As we gather information on actual space requirements, these amounts may change. Scratch space does not have a quota; instead, a daily job deletes all files that have not been accessed for three weeks, and if scratch space begins to run low, the retention time may be reduced.
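
Because scratch files are deleted based on how recently they have been used, it can be useful to check what is at risk. The sketch below assumes the cleanup criterion corresponds to file access time (atime) and a per-user /scratch/<username> directory; both are illustrative assumptions.

    # List scratch files not accessed in more than 21 days (three weeks),
    # i.e. candidates for the daily cleanup job
    find /scratch/$USER -type f -atime +21

    # Check how much space you are using toward the home and /bighome limits
    du -sh $HOME /bighome/$USER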

Online Publication of Data

UAHPC is not positioned as a general-use web server. User ‘home pages’ should be hosted on other resources.

Data Integrity

Only data in the user login directory is backed up to an on-site backup server and replicated off-site; the large storage systems are not backed up. Backed-up data is retained for 30 days. For storage on the larger file system (/bighome), users are strongly encouraged to maintain their own backups of data files and to use the cluster data storage only for working data sets.
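
One simple way to maintain your own backup of a /bighome data set is a periodic rsync to a machine you control; the remote host and paths below are placeholders.

    # Mirror a working data set from /bighome to your own server
    # (user@my-lab-server and both paths are placeholders)
    rsync -avz /bighome/$USER/project_data/ user@my-lab-server:/backups/project_data/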

UAHPC Job Submission

All jobs on UAHPC must be submitted through the SLURM workload manager. No compute jobs may be run on the master or NAS nodes. Compilation may be done on the master node or initiated through SLURM. Direct initiation of jobs on compute nodes is grounds for revocation of login authorization.
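
A minimal SLURM batch script, submitted with sbatch, might look like the following. The directives shown are standard SLURM options, but appropriate values (and any site-specific partition or account settings) depend on your job and group; ./my_program is a placeholder for your actual workload.

    #!/bin/bash
    #SBATCH --job-name=example       # name shown in the queue
    #SBATCH --ntasks=1               # number of tasks (processes)
    #SBATCH --time=01:00:00          # wall-clock limit, HH:MM:SS
    #SBATCH --output=example_%j.out  # output file; %j expands to the job ID

    ./my_program                     # placeholder for the real workload

Submit the script from the master node with:

    sbatch example.sh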

Users are grouped according to resource shares or research groups for the purpose of Fair Share scheduling. Students, in particular, must be grouped with their sponsoring professor.
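
To see which Fair Share group (account) your login belongs to and its current usage, the standard SLURM reporting tools can be used; the exact associations and share values are site-specific.

    # Show your own fair-share association and usage
    sshare -U

    # Show the account(s) your user is associated with
    sacctmgr show associations user=$USER format=Account,User,FairShare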