Article #483: Hansen Cluster is in Production
As of 10:00 AM, September 15, 2011, the Hansen community cluster has gone into production. Hansen comprises 200 Dell compute nodes, with four 1...
To provide an optimal computing experience on the new Hansen cluster, ITaP would like to offer some tips on using Hansen effectively. Hansen...
Updated 11/30/11: Network engineers have identified the cause of the network issue in question, and have applied a workaround, which has restored the...
As a follow-up to our earlier tip describing the optimal MPI for computations on Hansen, we are undertaking an effort to reduce the number of maintain...
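For users adjusting to this consolidation, the sketch below shows how one might list the MPI builds still offered through the module system and switch to a recommended one. The module names are illustrative assumptions, not the actual Hansen builds.

    module avail openmpi            # list the OpenMPI builds still maintained (name is an assumption)
    module unload mvapich2          # hypothetical name of a build being retired
    module load openmpi/1.4.4       # hypothetical name of a recommended, supported build

Running "module list" afterward confirms which MPI environment is active before resubmitting jobs.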
Update: The error condition on the Lustre filesystem has been cleared, and Hansen is back in production and accepting new jobs. Jobs already running sh...
Update: As of 9:45pm, Lustre is back in production and scheduling has resumed on Hansen. Original Notice: As of approximately 8:00pm February 7, an is...
Beginning with the new Carter cluster, RCAC users will note some differences in the PBS batch system and the module names available for use. This arti...
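Although the full list of changes is covered in that article, a minimal sketch of a Carter-era PBS submission script is shown below for orientation. The queue name, node and core counts, module name, and program name are all assumptions for illustration, not Carter's actual defaults.

    #!/bin/sh -l
    # Sketch of a PBS submission script for the newer batch environment.
    #PBS -q standby                 # hypothetical queue name
    #PBS -l nodes=1:ppn=16          # request one node with 16 cores (illustrative sizing)
    #PBS -l walltime=00:30:00       # 30-minute wall clock limit
    #PBS -N sample_job

    cd $PBS_O_WORKDIR               # run from the directory the job was submitted from

    # Module names on Carter are assumed to carry compiler and version details;
    # the exact string below is hypothetical.
    module load openmpi/1.4.4_intel-12.0
    mpiexec -n 16 ./my_program      # my_program is a placeholder executable

The job would typically be submitted with qsub and monitored with qstat, just as on the earlier clusters.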
In August 2012, some RCAC systems will be down for maintenance for up to three days in order to accommodate electrical service and chilled water upgr...
Update (10:00pm Tuesday): As of 8:30pm Tuesday 21 August 2012, the LustreB filesystem has been returned to full service. Our storage engineers with assi...
During scheduled maintenance on network equipment connecting storage to ITaP clusters, all scheduling will be paused from 4pm to 6pm. Running jobs...
During the New Year's weekend holiday, all ITaP HPC resources will be unavailable due to a scheduled upgrade of research home directories. While the s...
Update (7:00pm, 1/4/2013): All community clusters (Steele, Coates, Rossmann, Hansen, Carter, and Peregrine1) are back in production. Radon is curr...
The campus chilled water supply serving the MATH data center is running at above-normal temperatures, and as a precaution, scheduling on the Coates, Rossmann,...
Update: As of about 11:00 am, the problem with the chilled water has been corrected, and scheduling has resumed on all RCAC clusters. Thank you for yo...
As of 9:00am, we are seeing a problem with the LustreC scratch filesystem that serves Carter, Hansen, and Peregrine1. To prevent any more jobs from runn...
Update: ITaP engineers have corrected the issue affecting the LustreC filesystem. The system is back in production. Job scheduling on Carter, Hansen a...
Update (May 13, 2013, 11:00pm): LustreC has been returned to service. Carter, Hansen, and Peregrine1 are back in production with queues enabled. Update...
Between July 8 and July 16, Carter will be unavailable due to scheduled maintenance. On July 8, there will be changes made to the software stack on mo...
The high performance scratch file system (LustreC) supporting the Carter, Hansen, Peregrine1, and WinHPC research clusters is in need of mandatory mai...
On Thursday, Oct 10, a BlueArc scratch fileserver suffered a filesystem failure that resulted in data loss on several scratch filesystems. In light of...