Halstead

  • Halstead Cluster Maintenance

    The Halstead cluster will be unavailable beginning Thursday, September 14th, 2017, at 8:00am EDT for scheduled maintenance. The cluster will return to full production by %enddatetime%. During this time, Halstead will have critical security patches...

  • Holiday Break

    Purdue University will be observing a holiday break from December 23 - January 2. During this time, Research Computing services will continue to be available, but all staff will be on leave. Critical system outages will be dealt with should they occ...

  • Unscheduled Depot Outage on Compute Clusters

    The servers providing access to Data Depot from Brown, Conte, Halstead, HalsteadGPU, Radon, Rice, Scholar, and Snyder suffered a partial failure. Many nodes in these clusters temporarily lost access to Depot. Jobs accessing files on Depot may have pa...

  • New Windows Network Drive (SMB) Access

    All Windows network drives (SMB/CIFS access) for home directories and for the scratch filesystems on all clusters have moved! You should change your mapped network drives to connect to: \\scratch.my_cluster_name_here.rcac.purdue.edu\my_cluster_name_here (a mapping sketch follows below). Or f...
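
    The new UNC path follows the pattern shown above. As a minimal sketch only (assuming a Windows machine, "halstead" standing in for your cluster's name, and S: as a free drive letter, none of which come from the announcement itself), the remapping could be scripted like this:

    ```python
    # Minimal sketch: map the new scratch share on Windows.
    # Assumptions: "halstead" stands in for your cluster's name and S: is an
    # unused drive letter; adjust both to your situation.
    import subprocess

    cluster = "halstead"  # replace with your cluster's name
    unc_path = rf"\\scratch.{cluster}.rcac.purdue.edu\{cluster}"

    # Use Windows' built-in `net use` to map the share to S:.
    subprocess.run(["net", "use", "S:", unc_path], check=True)
    ```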

  • New Thinlinc Servers

    Thinlinc access to all clusters is moving! Effective immediately, please point your Thinlinc client or browser to: desktop.my_cluster_name_here.rcac.purdue.edu (a short sketch follows below). The old Thinlinc service on thinlinc.rcac.purdue.edu will be retired at...
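
    As a minimal sketch only (assuming "halstead" stands in for your cluster's name and that the browser-based session is served over HTTPS, which is an assumption rather than something stated above), opening the new address could look like:

    ```python
    # Minimal sketch: open the browser-based Thinlinc session at the new host.
    # Assumptions: "halstead" stands in for your cluster's name; HTTPS is assumed.
    import webbrowser

    cluster = "halstead"  # replace with your cluster's name
    new_host = f"desktop.{cluster}.rcac.purdue.edu"

    # Native Thinlinc clients should get the same hostname in their server field.
    webbrowser.open(f"https://{new_host}")
    ```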

  • Halstead Scratch Upgrade

    Halstead and HalsteadGPU will have a new scratch filesystem installed on Tuesday, March 20th, 2018. All access to these systems will be stopped at 5:00am EDT to allow engineers to install the new hardware. Any jobs whose requested...

  • All Clusters Outage

    All Research Computing systems suffered an unplanned outage Saturday, March 24th, 2018 at 8:15pm EDT due to a widespread power failure in the area. Thanks to diligent efforts all night and today by many teams across ITaP, all computational clusters h...

  • Halstead Upgrade to CentOS7

    To continue offering a current computational platform for research at Purdue, Halstead will receive a complete upgrade to CentOS7, the Community Development Platform for the Red Hat family of Linux distributions. CentOS7 is just one...

  • New Halstead Scratch Storage

    The Halstead and HalsteadGPU scratch storage will be moving to a new storage system over the course of Thursday, April 12, 2018. There will not be any automatic transfer of files from your old scratch space to your new scratch space (a copy sketch follows below). You will find there are two ne...
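
    Because nothing is copied automatically, anything still needed must be moved by hand. A minimal sketch follows, with hypothetical old and new scratch paths (substitute the actual locations for your account before running anything like this):

    ```python
    # Minimal sketch: copy files from the old scratch space to the new one.
    # Assumptions: both paths are hypothetical placeholders for your account's
    # actual old and new scratch locations.
    import subprocess

    old_scratch = "/scratch/halstead-old/myusername"  # hypothetical old path
    new_scratch = "/scratch/halstead/myusername"      # hypothetical new path

    # rsync preserves permissions/timestamps and can be re-run to resume an
    # interrupted copy; trailing slashes copy contents rather than the directory.
    subprocess.run(["rsync", "-av", old_scratch + "/", new_scratch + "/"], check=True)
    ```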

  • Common CentOS7 Upgrade Questions

    The Halstead, Rice, and Snyder clusters will be upgraded to the newer CentOS7 operating system in May 2018 (detailed announcements in: Rice Upgrade to CentOS7, Halstead Upgrade to CentOS7, Snyder Upgrade to CentOS7). Along with the operating system upgrade,...

  • Job Scheduling Issue on Clusters

    As of Monday, April 16th, 2018 at 10:00am EDT, Halstead, HalsteadGPU, and Hammer are not properly scheduling new jobs due to a problem with the Moab scheduler. Existing jobs are unaffected. We are working with the vendor to address this and expect...

  • Halstead Cluster Maintenance Upgrade

    The Halstead cluster will be taken down on Monday, May 14th, 2018 at 8:00am EDT for a planned upgrade to CentOS7. This process is expected to take two days, and Halstead will not return to service until 5:00pm on %enddate%. This upgrade may affect ap...

  • Old Halstead Scratch Retirement

    The old Halstead and HalsteadGPU scratch storage will be retired between July 9, 2018 at 8:00am and July 16, 2018 at 5:00pm EDT. This storage has not been the default scratch space on Halstead and HalsteadGPU since May 15, and its retirement will complete the pro...

  • All Systems Maintenance

    On Thursday, August 2nd, 2018 at 6:00am EDT, all Research Computing systems will be going offline to allow for major coordinated maintenance of central cooling systems across data centers and upgrades to the research networking core. This will affect...

  • Emergency Halstead Cluster Maintenance

    The Halstead cluster will be unavailable beginning Thursday, January 24th, 2019, at 8:00am EST for emergency maintenance. The cluster is expected to return to full production by %enddatetime%. During this time, Halstead will have critical security...

  • Change to Multi-user Shared Node Access

    For some time, Research Computing has been assessing how to approach the Meltdown and Spectre security vulnerabilities discovered in Intel processors. Unfortunately, applying the existing patches for these to all cluster nodes could pose a significant...

  • Cancelled: Off-Campus access to Community Clusters to require VPN, BoilerKey

    Starting Monday, February 11, 2019, login access to the community clusters from off-campus will require using Purdue's virtual private network (VPN). As with any use of the Purdue VPN, access to community clusters through Purdue’s VPN will require Bo...

  • Halstead and Brown unscheduled outage

    Halstead, HalsteadGPU, Brown, and BrownGPU went offline during a campus power event around 8:40 am this morning. Engineers are working to bring the compute nodes and the scratch system back online. Other systems are back online at this time. Job sche...

  • Home Filesystem / Slowdown / Login Issue on all Clusters

    All clusters began experiencing login issues and a general slowdown around 3:10pm EST. This has been traced to an issue with the /home/ filesystem. Engineers are continuing to examine this and are working to alleviate the issue r...

  • Clusters feature interactive HPC web portal

    An interactive HPC web portal, dubbed Gateway, has been deployed on the Community Clusters. Gateway is built on Open OnDemand, an open-source HPC portal developed by the Ohio Supercomputer Center. Open OnDemand allows one to interact with HPC resourc...