
Hansen

  • Hansen Cluster is in Production

    As of 10:00 AM, September 15, 2011, the Hansen Community cluster has gone into production. Hansen is composed of 200 Dell compute nodes, each with four 12-core AMD Opteron 6176 processors (48 cores per node), and is interconnected with RDMA-enabled 10Gb...

  • Optimizing MPI performance on Hansen

    To provide an optimal computing experience on the new Hansen cluster, ITaP would like to offer some tips on using Hansen most effectively. Hansen uses Mellanox 10 Gbps Ethernet adapters for its network interconnect. In addition to handling H... (A minimal MPI bandwidth-test sketch appears after this list.)

  • Hansen Network Maintenance

    Updated 11/30/11: Network engineers have identified the cause of the network issue in question, and have applied a workaround, which has restored the Hansen network to full functionality. The next maintenance window to address the root cau...

  • Hansen MPI Software Changes - December 16, 2011

    As a follow-up to our earlier tip describing the optimal MPI for computations on Hansen, we are undertaking an effort to reduce the number of maintained MPI implementations. This will make it easier for you, the user, to identify and use the best MPI...

  • Hansen: unscheduled outage to Lustre scratch

    Update: The error condition on the Lustre filesystem has been cleared, and Hansen is back in production and accepting new jobs. Jobs already running should have resumed from the point at which they were blocked when the Lustre error occurred. This...

  • Lustre unavailable on Hansen cluster

    Update: As of 9:45pm, Lustre is back in production and scheduling has resumed on Hansen. Original Notice: As of approximately 8:00pm February 7, an issue was found with the Lustre filesystem on Hansen, making the filesystem unavailable for use. ITaP engine...

  • Batch system and module changes on RCAC systems

    Beginning with the new Carter cluster, RCAC users will note some differences in the PBS batch system and the module names available for use. This article aims to outline the reasons for these changes and describe some of the details. Why has ITaP cha...

  • Scheduled Maintenance - August 2012

    In August 2012, some RCAC systems will be down for maintenance for up to three days in order to accommodate electrical service and chilled water upgrades in the Math building and OS and scheduler upgrades on the systems. Planned Maintenance Timelin...

  • Unscheduled Power outage in Math Datacenter

    Update: 10:00pm Tuesday As of 8:30pm Tuesday 21 August 2012, the LustreB filesystem has been returned to full service. Our storage engineers, with the assistance of the vendor, have verified that the system is stable. If you encounter any issues, please co...

  • Scheduling paused on ITaP research clusters

    During scheduled maintenance on the network equipment connecting storage to ITaP clusters, all job scheduling will be paused from 4-6pm. Running jobs will continue to execute, and new jobs may be submitted to PBS queues, but no new jobs will start u...

  • Software Stack Changes during Scheduled Maintenance

    During the New Year's holiday weekend, all ITaP HPC resources will be unavailable due to a scheduled upgrade of research home directories. While the systems are down they will also receive several updates to the software stack and modules. These upda...

  • Scheduled Maintenance - RCAC home directory upgrades

    Update - 7:00pm, 1/4/2013: All community clusters (Steele, Coates, Rossmann, Hansen, Carter, and Peregrine1) are back in production. Radon is currently not in production, as ITaP engineers are addressing issues encountered during the upgrade. T...

  • Chilled water outage in MATH

    Campus chilled water serving the MATH data center is experiencing above-normal temperatures, and as a precaution, scheduling on the Coates, Rossmann, Hansen, Carter, and Radon clusters has been stopped. Steele is not affected. There should be no impa...

  • Chilled water outage in MATH

    Update: As of about 11:00 am, the problem with the chilled water has been corrected, and scheduling has resumed on all RCAC clusters. Thank you for your patience. If you encounter any issues or have questions, please contact us at rcac-help@purdue.ed...

  • Scratch Filesystem Problem

    As of 9:00am, we are seeing a problem with the LustreC scratch filesystem that serves Carter, Hansen, and Peregrine1. To prevent any more jobs from running into this problem, we have temporarily suspended scheduling of new jobs, though you may still submit to...

  • Unscheduled Outage to LustreC

    Update: ITaP engineers have corrected the issue affecting the LustreC filesystem. The system is back in production. Job scheduling on Carter, Hansen and Peregrine1 has been restarted. As always, thank you for your patience. If you encounter any issue...

  • LustreC filesystem unavailable

    Update: May 13, 2013 11:00pm: LustreC has been returned to service. Carter, Hansen, and Peregrine1 are back in production with queues enabled. Update: May 13, 2013 3:00pm: storage engineers are continuing to work with vendor support to return Lustre...

  • Software Stack Changes during Carter Maintenance

    Between July 8 and July 16, Carter will be unavailable due to scheduled maintenance. On July 8, there will be changes made to the software stack on most of ITaP's community clusters. Changes will include updates to the default version of the Intel co...

  • LustreC Filesystem Maintenance

    The high performance scratch file system (LustreC) supporting the Carter, Hansen, Peregrine1, and WinHPC research clusters is in need of mandatory maintenance work. The work should be performed as soon as possible in order to ensure full performance...

  • Effective Use of Research Storage

    On Thursday, Oct 10, a BlueArc scratch fileserver suffered a filesystem failure that resulted in data loss on several scratch filesystems. In light of this event, we'd like to take this opportunity to remind all of our cluster users of the most effec...
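
MPI bandwidth sketch (referenced from "Optimizing MPI performance on Hansen" above). The full article covers the MPI stack and launch options recommended for Hansen's RDMA-enabled 10 Gb Ethernet interconnect; as a rough, hypothetical companion (not drawn from the article itself), the following C program measures point-to-point bandwidth between two MPI ranks, which can be useful for comparing MPI implementations or interconnect settings. The module names and launch flags you pair it with will depend on the MPI implementation you load.

    /* bw_test.c - minimal point-to-point MPI bandwidth sketch (hypothetical example).
     * Rank 0 repeatedly sends a large buffer to rank 1, then reports the
     * approximate one-way bandwidth. Place the two ranks on different nodes
     * to exercise the cluster interconnect rather than shared memory. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (4 * 1024 * 1024)  /* 4 MB per message */
    #define REPS      100                /* number of messages timed */

    int main(int argc, char **argv)
    {
        int rank, size, i;
        char *buf;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
            MPI_Finalize();
            return 1;
        }

        buf = malloc(MSG_BYTES);
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();

        for (i = 0; i < REPS; i++) {
            if (rank == 0)
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }

        t1 = MPI_Wtime();
        if (rank == 0)
            printf("approx. %.1f MB/s one-way\n",
                   (double)MSG_BYTES * REPS / (t1 - t0) / 1.0e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Typical (assumed) usage would be to compile with the MPI compiler wrapper and launch two ranks on separate nodes, for example: mpicc -O2 bw_test.c -o bw_test, then mpirun -np 2 ./bw_test. Consult the full article or rcac-help@purdue.edu for the MPI implementation actually recommended on Hansen.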