Article #529: Batch system and module changes on RCAC systems
Beginning with the new Carter cluster, RCAC users will note some differences in the PBS batch system and the module names available for use. This arti...
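Since the article text is truncated here, the specific batch and module changes are not shown; the following is a minimal sketch of a PBS job script that loads environment modules, in the style the article describes. The queue name, resource request, and module names are illustrative assumptions, not Carter's actual values.

    #!/bin/bash
    #PBS -q standby                # queue name is an assumption
    #PBS -l nodes=1:ppn=16         # core count per node is an assumption
    #PBS -l walltime=00:30:00

    # Module names below are illustrative; run "module avail" on the
    # cluster to see the names actually provided.
    module load intel
    module load openmpi

    # PBS starts the job in the home directory; change to the
    # directory the job was submitted from.
    cd $PBS_O_WORKDIR
    mpiexec ./my_program

Such a script would be submitted with "qsub myjob.sub"; the point of the article is that the available queue and module names differ on Carter from earlier RCAC systems.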
In August 2012, some RCAC systems will be down for maintenance for up to three days in order to accommodate electrical service and chilled water upgr...
Update, 10:00pm Tuesday: As of 8:30pm Tuesday, 21 August 2012, the LustreB filesystem has been returned to full service. Our storage engineers with assi...
During scheduled maintenance on the network equipment connecting storage to ITaP clusters, all job scheduling will be paused from 4-6pm. Running jobs...
Update - 7:00pm, 1/4/2013: All community clusters (Steele, Coates, Rossmann, Hansen, Carter, and Peregrine1) are back in production. Radon is curr...
Campus chilled water serving the MATH data center is experiencing above-normal temperatures, and as a precaution, scheduling on the Coates, Rossmann,...
Update: As of about 11:00 am, the problem with the chilled water has been corrected, and scheduling has resumed on all RCAC clusters. Thank you for yo...
As of 9:00am, we are seeing a problem with the LustreC scratch filesystem that serves Carter, Hansen, and Peregrine1. To prevent any more jobs from runn...
Update: ITaP engineers have corrected the issue affecting the LustreC filesystem. The system is back in production. Job scheduling on Carter, Hansen a...
Update, 8:12pm: Scheduling on Carter has been resumed, and Carter is back in full production. Original Message: Beginning the morning of April 16, a nu...
Update, May 13, 2013, 11:00pm: LustreC has been returned to service. Carter, Hansen, and Peregrine1 are back in production with queues enabled. Update...
As you may be aware, on April 5, the Board of Trustees approved the purchase of the next generation of community cluster, to be named "Conte"...
The high-performance scratch file system (LustreC) supporting the Carter, Hansen, Peregrine1, and WinHPC research clusters is in need of mandatory mai...
On Thursday, Oct 10, a BlueArc scratch fileserver suffered a filesystem failure that resulted in data loss on several scratch filesystems. In light of...
Update, 11:00pm, Nov. 12, 2013: ITaP storage engineers have returned the offline hardware to service, and LustreC is back in production. Queues on Ha...
Nearly all major clusters operated by ITaP Research Computing are stopped due to issues with their storage systems related to the power loss on the W...
All ITaP Research Computing systems are currently experiencing an issue with accessing network filesystems. A case has been opened with our vendor as...
UPDATE - As of 7:45pm Sunday, March 16th, 2014, the fileserver maintenance completed successfully, and cluster systems are back online. All Resea...
Update: Due to issues with the automated processes indexing the Lustre filesystems, resumption of scratch purging has been postponed. The first automa...
In order to repair a hardware issue with the underlying disk storage comprising LustreC, ITaP storage engineers will perform a brief maintenance on th...