<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Announcements, Outages and Maintenance, Outages, Maintenance, Science Highlights</title>
		<link>https://www.rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Negishi</link>
		<description><![CDATA[Announcements, outage and maintenance notices, and science highlights for the Negishi cluster from RCAC.]]></description>
		<atom:link href="https://www.rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Negishi" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Sat, 07 Mar 2026 22:16:38 EST</lastBuildDate>
					<item>
				<title><![CDATA[Data Depot Filesystem issue: Scheduling Resumed]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2594</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2594</guid>
				<description><![CDATA[<p>An internal portion of the Data Depot filesystem is currently offline; as a result, all scheduling has been paused until this issue is resolved.</p>
<p><strong>Impact to you:</strong>
Attempts to read files on the affected storage may result in error messages.</p>
<p>Our IT team is actively working with the vendor to restore service as quickly as possible. We will send an update as soon as more information is available.</p>
]]></description>
				<pubDate>Wed, 11 Feb 2026 14:30:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2535</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2535</guid>
				<description><![CDATA[<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH data center renovation project.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (Math nodes) and Geddes: A subset of nodes will be powered down, but services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime that would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed (see the example after this list).</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs. Users should see long‑term benefits in system reliability and in our ability to support future computing and AI resources.</p>
</li>
</ul>
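<p>As a rough sketch (the job script name <code>myjob.sh</code> and the four-hour walltime below are placeholders), you can request a walltime short enough to finish before the window begins, and ask Slurm for the estimated start times of your pending jobs:</p>
<pre><code># Request a walltime that ends before the maintenance begins;
# otherwise the job will stay queued until the maintenance completes.
sbatch --time=04:00:00 myjob.sh

# Show the scheduler's estimated start times for your pending jobs.
squeue -u $USER --start
</code></pre>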
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Thu, 05 Feb 2026 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Network Slowness Notice]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2553</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2553</guid>
				<description><![CDATA[<p>We are currently investigating performance issues affecting network traffic.</p>
<p><strong>Impact to you:</strong>
At this time, you may notice latency or brief disruptions when accessing certain on-campus or external resources, especially during peak usage periods.</p>
<p>We appreciate your patience while we work to fully resolve the underlying problem and restore normal network performance. We will provide an update by 5:00PM EST today or sooner.</p>
]]></description>
				<pubDate>Mon, 02 Feb 2026 15:00:00 -0500</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Globus access to Depot degraded; slow Depot logins and Depot access on clusters]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2580</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2580</guid>
				<description><![CDATA[<p>Users of Data Depot on RCAC clusters are currently experiencing degraded performance, and some Globus transfers to and from Depot are failing or running slowly.  In addition, some users may see slow Globus logins or be temporarily unable to log in to Globus when accessing Depot collections.</p>
<p>System monitoring has identified an issue where heavy job activity was overloading the Data Depot filesystem used by the clusters and Globus.</p>
<p>You may see the following impacts:</p>
<ul>
<li>Globus transfers to and from Depot collections may fail, stall, or run much more slowly than usual.</li>
<li>Globus logins may be slow or occasionally fail when accessing Depot endpoints.</li>
<li>Jobs on RCAC clusters that read from or write to Depot may experience slow file access, delayed directory listings, or timeouts.</li>
</ul>
<p>Our engineers are investigating the high load from a large number of concurrent jobs and are working to reduce the impact on Depot, Globus, and cluster workloads.  Existing jobs will continue to run, but any that are heavily Depot‑I/O‑bound may run more slowly or see I/O errors until performance improves.  We will provide another update by 5:00PM EST or sooner if the issue is resolved.</p>
]]></description>
				<pubDate>Fri, 30 Jan 2026 15:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Cluster & Data Depot Outage]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2542</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2542</guid>
				<description><![CDATA[<p>Data Depot and clusters began experiencing issues around 8:00AM EST. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 2:00PM EST today.</p>
]]></description>
				<pubDate>Tue, 20 Jan 2026 08:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Negishi Cluster Open OnDemand Maintenance (January 20)]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2529</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2529</guid>
				<description><![CDATA[<p>The Open OnDemand service for Negishi will be unavailable from <strong>Tuesday, January 20, 2026 at 8:00am EST to Tuesday, January 20, 2026 at 5:00pm EST</strong>. During the maintenance, the RCAC team will reconfigure the Open OnDemand dashboard for Negishi, including a brand-new dashboard design with the new features listed below.</p>
<h3>What’s New on the dashboard?</h3>
<ul>
<li>
<strong>CPU/GPU Usage:</strong> Monitor your group's usage and the remaining available cores on Negishi.</li>
<li>
<strong>Disk Usage:</strong> Monitor your storage utilization across Negishi’s file systems.</li>
<li>
<strong>Job Queue:</strong> View and manage your running and queued jobs on Negishi.</li>
<li>
<strong>News Feed:</strong> Stay updated with the latest Negishi news, outages and announcements.</li>
<li>
<strong>Partition Status:</strong> Monitor the current state of partitions/queues on Negishi.</li>
<li>
<strong>My Jobs Page:</strong> Redesigned page showing detailed job information for your jobs and jobs in your group(s), as well as job management.</li>
<li>
<strong>Performance Metrics Page:</strong> Analyze your job performance and resource utilization patterns over time.</li>
</ul>
<h3>How will this impact you?</h3>
<ul>
<li>All Slurm jobs on Negishi (including jobs already submitted through Open OnDemand before this maintenance) will continue and will <strong>NOT</strong> be impacted.</li>
<li>All functions related to Open OnDemand, including login, will be unavailable during the maintenance.</li>
</ul>
<p>The Negishi Open OnDemand service will return to full production by Tuesday, January 20, 2026 at 5:00pm EST.</p>
<p>Please submit a ticket through the RCAC Help Desk at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> if you have any questions or suggestions.</p>
]]></description>
				<pubDate>Tue, 20 Jan 2026 08:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Upcoming February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2527</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2527</guid>
				<description><![CDATA[<img width="400" style="padding:10px;" class="float-right" alt="MATH data center renovation" src="https://www.rcac.purdue.edu/files/images/mathrenno.png" />
<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH data center renovation project. This renovation will allow Purdue to better support growing AI, data‑intensive, and HPC workloads for research. When completed, MATH will see a 32% increase in floor space, a 60% increase in usable power, and a two-fold increase in cooling capacity.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (Math nodes) and Geddes: A subset of nodes will be powered down, but services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime that would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs. Users should see long‑term benefits in system reliability and in our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Mon, 12 Jan 2026 14:30:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2487</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2487</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break beginning at 12:00am EST on 12/23/25 and will resume normal business hours on January 5th, 2026. During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning January 5th, 2026. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive your files in scratch storage. This does not apply to Data Depot or home directories -- only scratch storage.</strong></p>
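<p>As a rough sketch (the paths below are placeholders; substitute your own scratch and Data Depot directories), one way to archive scratch files before they are purged:</p>
<pre><code># Bundle a scratch directory into a compressed archive,
# then copy the archive to Data Depot, which is not purged.
tar -czf mydata.tar.gz /path/to/scratch/mydata
cp mydata.tar.gz /path/to/depot/mylab/archive/
</code></pre>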
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Tue, 16 Dec 2025 13:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot Outage]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2449</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2449</guid>
				<description><![CDATA[<p>The Data Depot storage system began experiencing issues starting around 4:30pm EDT today. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 9pm.</p>
]]></description>
				<pubDate>Sat, 01 Nov 2025 16:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot outage]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2417</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2417</guid>
				<description><![CDATA[<p>Edit:</p>
<p>The Data Depot file system has returned to full service and scheduling has resumed on all clusters.</p>
<hr />
<p>The Data Depot storage system began experiencing issues starting around 9am EDT this morning. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 12pm (noon).</p>
]]></description>
				<pubDate>Fri, 17 Oct 2025 09:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot outage]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2409</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2409</guid>
				<description><![CDATA[<p>Edit:</p>
<p>Data Depot functionality has been restored.</p>
<hr />
<p>The Data Depot file system began experiencing issues with writes around 2:30pm EDT. The data migration process currently ongoing from Data Depot 2 to Data Depot 3 ran into an unexpected problem. Engineers have identified the problem and are correcting it. Users may have seen &quot;no space left on device&quot; errors for approximately 30 minutes. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 5 PM.</p>
]]></description>
				<pubDate>Wed, 15 Oct 2025 14:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Purdue professor in Indianapolis uses RCAC clusters to study materials, predict failures]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2381</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2381</guid>
				<description><![CDATA[<p>Shengfeng Yang, an assistant professor of mechanical engineering in Indianapolis, uses the Rosen Center for Advanced Computing (RCAC)’s Negishi community cluster supercomputer to help with his research simulating complex materials. To see how materials fail at the atomic level, he and his research group simulate systems with more than a million atoms, which requires a great deal of computational power and wouldn’t be possible without a powerful computer like <a href="https://www.rcac.purdue.edu/compute/negishi">Negishi</a>.</p>
<p>Currently, Yang and his team focus on semiconductor materials and metals such as copper that are used in semiconductor packaging, studying how cracking and deformation happen at the atomic level in critical areas.</p>
<p>Yang and his research group also use the <a href="https://www.rcac.purdue.edu/compute/gilbreth">Gilbreth</a> cluster’s GPUs to train machine learning models to predict material behavior and properties. This means in the future they won’t have to run time-consuming simulations because the trained machine learning model will be able to make fast predictions about material behavior and failure.</p>
<p>Yang says there have been no obstacles to using the clusters as a faculty member at the Indianapolis campus, and he’s been able to access the clusters remotely without any difficulties.</p>
<p>He says tapping into RCAC resources has also connected him to faculty in West Lafayette he might not otherwise have met.</p>
<p>“I’ve gotten a lot of connection opportunities, and chances to collaborate with faculty in West Lafayette that are more focused on the experiment side, so we can have that connection between the computational simulation and the experimental science. So that’s been a big benefit as well.”</p>
<p>To learn more about Negishi, Gilbreth and other RCAC resources, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> and visit the <a href="https://www.rcac.purdue.edu/">RCAC website</a>.</p>
]]></description>
				<pubDate>Tue, 26 Aug 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Bell and Negishi Cluster Maintenance]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2321</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2321</guid>
				<description><![CDATA[<p>As part of an ongoing effort to <a href="https://www.rcac.purdue.edu/news/6807">renovate the Mathematical Sciences building (MATH) Data Center</a> and expand its power and cooling capacity, the cooling loop connected to the Bell and Negishi clusters will undergo maintenance between Thursday, August 14th, 2025 at 5:00am EDT and Thursday, August 14th, 2025 at 8:00pm EDT. This critical infrastructure update is vital for the continued operation of the racks hosting these clusters’ compute nodes.</p>
<p>To facilitate the data center renovation project, the Bell and Negishi clusters will be operating at limited capacity during this maintenance window.</p>
<p>Jobs that request a wall time that would take them past the beginning of the maintenance will remain in the queue until after the maintenance is complete.</p>
<p><strong>What to Expect During Maintenance:</strong></p>
<ol>
<li>You <strong>can log in</strong> to the cluster front-ends and <strong>access your files</strong> stored on Bell and Negishi.</li>
<li>Jobs running at the start of maintenance will be <strong>preempted</strong> and automatically <strong>resubmitted</strong> to their respective queues.</li>
<li>
<strong>New jobs</strong> submitted during the maintenance window will be <strong>queued</strong>, but they will not begin running until after the maintenance is complete.</li>
</ol>
<p>We appreciate your understanding and cooperation. If you have any questions or concerns, please contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Thu, 14 Aug 2025 05:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Negishi Scheduler Modernization]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2297</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2297</guid>
				<description><![CDATA[<p>As part of an ongoing effort to utilize modern features in the Slurm scheduler and to streamline usage reporting for research groups (a frequent request from PIs), the scheduler configurations on the Negishi cluster will be modified <a href="https://www.rcac.purdue.edu/news/7231">in an upcoming maintenance</a>. <strong>Users will be required to update their job scripts</strong> to conform to the guidelines described below.</p>
<ul>
<li>All jobs on the cluster will be required to explicitly specify a partition and an account (i.e. your group's name) at submission time. You can find the names of the available partitions and accounts from the <code>showpartitions</code> and <code>slist</code> commands respectively. Any job that does not specify an account <em>and</em> a partition will be rejected at submission time.</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to reflect the new scheduler design.</li>
<li>All &quot;shared accounts&quot; such as <code>standby</code>, <code>highmem</code>, etc., that represent resources outside of your typical &quot;group accounts&quot; will continue to exist but will require a different request syntax.
<ul>
<li>Standby will become a Quality of Service (QoS); jobs that previously ran under the &quot;standby&quot; account will now be submitted to your &quot;group account&quot; and tagged with the standby QoS. For example, if your job previously used the <code>-A standby</code> option, you would now use <code>-A mylab -q standby</code>.</li>
<li>The <code>highmem</code> and <code>gpu</code> shared accounts will become partitions; jobs that previously ran under these accounts will now be submitted to your &quot;group account&quot; and the appropriate partition, e.g., <code>-A highmem</code> will become <code>-A mylab -p highmem</code>.</li>
<li>Groups with access to the <code>interactive</code> account will now submit to their &quot;group account&quot; and the interactive partition, e.g., <code>-A interactive</code> will become <code>-A mylab -p interactive</code>.</li>
</ul>
</li>
</ul>
<table> <caption>Summary of Changes</caption>
<thead>
<tr>
<th scope="col">Use Case</th>
<th scope="col">Old Syntax</th>
<th scope="col">New Syntax</th>
<th scope="col">What Changed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Submit a job to your group's account</td>
<td><code>sbatch -A mygroup</code></td>
<td><code>sbatch -A mygroup -p cpu</code></td>
<td>The <code>cpu</code> partition must be specified.</td>
</tr>
<tr>
<td>Submit a standby job</td>
<td><code>sbatch -A standby</code></td>
<td><code>sbatch -A mygroup -q standby -p cpu</code></td>
<td><code>standby</code> is now a QoS instead of an account</td>
</tr>
<tr>
<td>Submit a highmem job</td>
<td><code>sbatch -A highmem</code></td>
<td><code>sbatch -A mygroup -p highmem</code></td>
<td><code>highmem</code> is now a partition instead of an account</td>
</tr>
<tr>
<td>Submit a gpu job</td>
<td><code>sbatch -A gpu</code></td>
<td><code>sbatch -A mygroup -p gpu</code></td>
<td><code>gpu</code> is now a partition instead of an account</td>
</tr>
<tr>
<td>Submit an interactive job</td>
<td><code>sbatch -A interactive</code></td>
<td><code>sbatch -A mygroup -p interactive</code></td>
<td><code>interactive</code> is now a partition instead of an account</td>
</tr>
</tbody>
</table>
<p><strong>How will this affect you</strong>:</p>
<ol>
<li>You will need to change your jobscripts and your method of invocation to include the required options outlined above.</li>
<li>If you have any scripts or tooling that rely on the current output of <code>slist</code> or <code>squeue</code>, those scripts will need to be modified to use the new formatted output.</li>
</ol>
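<p>As a minimal sketch of the new syntax (the group account <code>mylab</code> below is a placeholder; use <code>slist</code> to find your own account and <code>showpartitions</code> to list partitions), a job script that previously ran under the shared <code>standby</code> account might be updated as follows:</p>
<pre><code>#!/bin/bash
#SBATCH -A mylab          # group account; replaces '-A standby'
#SBATCH -p cpu            # a partition must now be specified explicitly
#SBATCH -q standby        # standby is now a QoS rather than an account
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:30:00

hostname
</code></pre>
<p>Similarly, any script that parses the default output of <code>squeue</code> can be made robust to the upcoming format change by requesting an explicit output format, e.g. <code>squeue -u $USER --format="%i %j %T"</code> (job ID, name, and state), rather than relying on the default columns.</p>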
<p>You can prepare for this maintenance by reviewing the new Slurm organization in our user guide's <a href="https://www.rcac.purdue.edu/knowledge/negishi/run/slurm/new-queues">Queues Page</a>.</p>
<p>If you have any questions about these upcoming changes, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 29 Jul 2025 08:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Negishi Cluster Maintenance]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2296</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2296</guid>
				<description><![CDATA[<h3>When will it happen?</h3>
<p>The Negishi cluster is scheduled for maintenance and will be unavailable from <strong>Tuesday, July 29th, 2025 at 8:00am EDT until Wednesday, July 30th, 2025 at 5:00pm EDT</strong>.</p>
<h3>What is being upgraded?</h3>
<p>During this maintenance, Negishi will have its <a href="https://www.rcac.purdue.edu/news/7245">scheduler's configurations updated</a> to allow for the use of more modern features within Slurm, its ZFS and Lustre storage systems will be updated, and maintenance will be performed on the cooling loop serving its compute nodes.</p>
<h3>How does this affect you?</h3>
<ul>
<li>The Negishi cluster will be unavailable during the maintenance window.</li>
<li>Slurm jobs that are still queued when this maintenance begins on Tuesday, July 29th, 2025 at 8:00am EDT will be deleted.</li>
<li>A reservation will be created that will prevent jobs from starting if their end time would take them past the start of the maintenance. Because pending jobs will be deleted when the maintenance begins, such jobs will never run.</li>
<li>
<strong>The Slurm options required for job submission will change during this maintenance. <a href="https://www.rcac.purdue.edu/news/7245">See our related news posting</a>.</strong>
</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to be more useful under the new design.</li>
<li>The available options for creating jobs through Open OnDemand will change to accommodate the new options.</li>
</ul>
<h3>How can you prepare for these changes?</h3>
<p>In order to minimize disruption to researcher workflows, we have updated <a href="https://www.rcac.purdue.edu/knowledge/negishi/run/slurm/new-queues">Negishi's User Guide page on Job Submission</a> to describe the new method of job submission; users should review it before the maintenance.</p>
<p>If you have questions about this upgrade or need help from our support staff, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 29 Jul 2025 05:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Incorrect Account Email]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2307</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2307</guid>
				<description><![CDATA[<p>Today, RCAC user management systems sent incorrect email messages to many faculty partners and their resource managers. Please ignore any recent email about expirations or removals. You may verify who has access to your resources at any time through our site <a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>, or email <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> if you have concerns.</p>
<p>Thank you!</p>
]]></description>
				<pubDate>Tue, 15 Jul 2025 14:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Power outage after thunderstorm]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2284</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2284</guid>
				<description><![CDATA[<p>A power outage affected our data centers at approximately 3:15pm. It appears cluster services are restored; however, access and authentication may be slow while we work to get other core infrastructure back online.</p>
<p>Open OnDemand and remote desktop sessions on Negishi and Gilbreth may be affected by nodes that rebooted unexpectedly.</p>
<p>Our engineers are working to bring these services back up right now.</p>
<p>Update at 3:46pm: Negishi and Gilbreth nodes are back to normal.</p>
]]></description>
				<pubDate>Wed, 18 Jun 2025 15:15:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Resources down due to power event]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2277</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2277</guid>
				<description><![CDATA[<p>Update 3:40PM EDT: All systems have been returned to full service and queues are accepting new jobs.</p>
<p>Update 3:00PM EDT: Systems are returning, but queues are still paused on all clusters except Bell and Hammer. Some clusters are waiting for cooling to come fully back online. Next update at 4:00PM EDT, or when all systems are returned to service.</p>
<p>Update 2:30PM EDT: Systems are steadily coming back online. Queues are still paused. Next update at 3:00PM EDT.</p>
<p>The Rosen Center is currently experiencing an outage on all clusters due to a widespread power outage event on the Purdue West Lafayette campus.</p>
<p>Currently, engineers are bringing all services back up. Scheduling on all clusters will be resumed once services have stabilized.</p>
<p>We expect services to be back shortly. We will provide an update on the situation at 2:30PM EDT.</p>
]]></description>
				<pubDate>Sat, 14 Jun 2025 12:30:00 -0400</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Account Provisioning and Access Issues]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2248</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2248</guid>
				<description><![CDATA[<p>We’re currently seeing some delays with account provisioning and access for RCAC systems. The issue stems from upstream systems outside of our direct control. Please report any access issues to us as soon as you encounter them so we can assist and track the impact.</p>
<p>We know how important timely access is to your research and truly appreciate your patience as this gets sorted out. If you have an urgent need or specific concerns, please don’t hesitate to contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>Thanks again for your understanding.</p>
]]></description>
				<pubDate>Wed, 04 Jun 2025 15:00:00 -0400</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Negishi Cluster Maintenance]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2238</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2238</guid>
				<description><![CDATA[<p>The Negishi cluster will be partially unavailable Wednesday, May 28, 2025 from 8:00am - 3:00pm EDT for scheduled maintenance. The cluster will return to full production by Wednesday, May 28th, 2025 at 3:00pm EDT.</p>
<p>During this time, Negishi will have its operating system patched, and upgrades will be made to SMB network drive connectivity. Filesystems will remain available on the cluster, but existing network drive connections may be disrupted. Login sessions via <a href="https://desktop.negishi.rcac.purdue.edu">ThinLinc</a>, SSH, and <a href="https://gateway.negishi.rcac.purdue.edu/">Open OnDemand</a> will terminate as the servers reboot.</p>
<p>Any Slurm jobs submitted during the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
<p><strong>What Users Can Expect</strong></p>
<ul>
<li>You can log into the clusters and submit jobs during the maintenance, though new jobs will not start.</li>
<li>Login nodes will reboot, terminating existing sessions, but you will be able to log back in without waiting for the end of the maintenance window.</li>
<li>While you will have login node access, the ability to check job status or submit new jobs will be briefly interrupted while the scheduler reboots.</li>
<li>While you will have access to the Open OnDemand interface through <a href="https://gateway.negishi.rcac.purdue.edu/">Gateway</a>, you may experience a brief interruption when it is rebooted. If that happens, simply reload the web page. If you are running an interactive job, you will be able to reconnect to it.</li>
<li>The Slurm scheduler will reboot, during which time new Slurm queries and job submissions will be interrupted. Any Slurm jobs running before the maintenance will continue to run to completion under the existing OS.</li>
</ul>
]]></description>
				<pubDate>Wed, 28 May 2025 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
			</channel>
</rss>