<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Outages and Maintenance, Announcements, Science Highlights, Events, Coffee Hour Consultations, Student Events</title>
		<link>https://www.rcac.purdue.edu/news/rss/Peregrine1</link>
		<description><![CDATA[RCAC news and announcements: outages, maintenance, and events.]]></description>
		<atom:link href="https://www.rcac.purdue.edu/news/rss/Peregrine1" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Tue, 14 Apr 2026 20:11:56 EDT</lastBuildDate>
					<item>
				<title><![CDATA[Home Filesystem Maintenance - All Clusters]]></title>
				<link>https://www.rcac.purdue.edu/news/569</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/569</guid>
				<description><![CDATA[<p>Conte has now been returned to normal operations as well.  This concludes the home directory maintenance on all systems.</p>
<p><strong>Update: September 27, 2016  11:55pm</strong></p>
<p>All systems other than Conte have been successfully returned to normal operations with the new home directory filesystem.  Work continues at this point on Conte to ensure the Phi accelerators are properly reconfigured.</p>
<p>Carter has also been given a new scratch filesystem during this maintenance.  This should alleviate some of the problems with the previous scratch filesystem on Carter.  For more details, please see the Carter-specific announcement on this topic:  <a href="https://www.rcac.purdue.edu/news/887">New Carter Scratch Filesystem</a></p>
<p><strong>Reminder:</strong></p>
<p>This is a reminder of the Home Filesystem Maintenance taking place next week on Tuesday, September 27th.</p>
<p>Details below.</p>
<p><strong>Original Message:</strong></p>
<p>All of the research clusters () as well as some other minor systems will be unavailable beginning on Tuesday, September 27th, 2016 at 7:00am EDT for scheduled maintenance. All clusters other than Conte will return to full production by 11:59pm.</p>
<p>Conte will return to partial capacity by that time, but will not return to full production until the following day. Many Conte nodes will remain offline and gradually be returned to service over the following 12-24 hours to allow for power reconfiguration in the data center.  Please see the separate article on Conte: <a href="https://www.rcac.purdue.edu/news/873">Conte Cluster Maintenance</a>.</p>
<p>During the large all-systems maintenance Tuesday, the /home filesystem used by all Research Computing systems will be replaced by a new filesystem.  The new filesystem will be based on DDN's GRIDScaler technology and will run on new hardware dedicated exclusively to Research Computing home directories.</p>
<p>All files on the existing /home filesystem will be migrated to the new system during the maintenance window and prior to any of the clusters returning to service.</p>
<p>In the coming weeks, any job whose requested walltime would take it past Tuesday, September 27th, 2016 at 7:00am EDT will not start and will remain in the queue until after the maintenance is completed.</p>
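<p>For example, a job submitted shortly before the window with a walltime short enough to finish before 7:00am on September 27th can still start (the node counts and script name below are only illustrative):</p>
<pre><code>$ qsub -l nodes=1:ppn=8,walltime=04:00:00 myjob.sub
</code></pre>
<p>A job requesting a walltime that would cross the maintenance start will simply wait in the queue and start automatically once the maintenance is complete.</p>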
]]></description>
				<pubDate>Tue, 27 Sep 2016 07:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Software stack changes and upgrades]]></title>
				<link>https://www.rcac.purdue.edu/news/594</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/594</guid>
				<description><![CDATA[<p>During the <a href="https://www.rcac.purdue.edu/news/871">Home Filesystem Maintenance - All Clusters</a> maintenance on September 27th, several upgrades and changes will be made to the software stack on the clusters. Changes will include updates to the default version of the Intel compiler and associated software stack, as well as to the default MPI libraries. Some older versions of other software will also be removed. These changes are being made to bring the clusters in line with the software environment planned for the new Halstead cluster.</p>
<p>These upgrades will provide the best performance for the new and existing clusters and will provide a consistent Intel version stack across all of our clusters. The new software stack is currently available on the clusters for testing and upgrade. ITaP research computing staff recommends testing out the new compilers and upgrading prior to September 27th.</p>
<p>WHAT WILL BE THE IMPACT TO INTEL COMPILERS?</p>
<p>We will be upgrading the default Intel version from 13.1.1.163 to 16.0.1.150. The current default has been around for several years, and many researchers have already switched to the latest versions of the Intel compilers. The 13.1.1.163 version will remain available on the current clusters for a period of time to give researchers time to finish projects and upgrade. Any software dependent on the default version of Intel 13.1.1.163 will also have its default upgraded.</p>
<p>WHAT WILL BE THE IMPACT TO MPI LIBRARIES?</p>
<p>We will be upgrading the default version of OpenMPI from 1.6.3 to 1.8.1. This new version offers stability and performance enhancements and some new features. Version 1.8.1 has been available for some time and many researchers have already moved to it.</p>
<p>We will be upgrading the default version of IMPI from 4.1.1.036 to 5.1.2.150. This new version offers stability and performance enhancements and some new features. Version 5.1.2.150 has been available for some time and many researchers have already moved to it.</p>
<p>Any software dependent on one of these default MPI versions will also have its default upgraded appropriately.</p>
<p>It is recommended that you upgrade to these new libraries; however, if you need to continue using the old default versions, you may do so by switching your &quot;module load&quot; to the specific version. The Intel 13 stack will remain available for those who require it. These new compilers offer bug fixes and enhanced performance and stability. Users are encouraged to send in any experiences with these new compilers to help us evaluate the direction of new compilers on RCAC systems.</p>
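<p>For example, to keep today's defaults after the change, load the specific versions (using the version numbers listed above) instead of the bare module names:</p>
<pre><code>$ module load intel/13.1.1.163
$ module load openmpi/1.6.3
</code></pre>
<p>Loading <em>intel</em> or <em>openmpi</em> without a version will give you the new defaults after September 27th.</p>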
<p>WHAT OTHER SOFTWARE WILL BE IMPACTED?</p>
<p>There will be several changes to other miscellaneous software. Older versions of some software will be removed in favor of newer versions, and default versions of a few packages will be updated to the latest version. In most cases the older versions are infrequently used, so most users should not be impacted by these changes.</p>
<p>If any software you are using will be impacted by these changes, a notice will be printed to your session or in your job output files when you load an affected module. The notice will recommend the latest version.</p>
<p>HOW DO I KNOW IF MY WORKFLOW WILL BE IMPACTED?</p>
<p>Whenever a module that will be impacted is loaded, a notice is printed to your screen or job output log. Please review your job output over the next couple of weeks and make note of any changes being advertised. You may continue using these modules as-is until September 27th, but you are encouraged to make any necessary changes beforehand to avoid disruption when the changes are made.</p>
<p>WHAT IF AN IMPACTED MODULE IS REQUIRED BY MY RESEARCH?</p>
<p>We understand some users may not be able to change compilers or MPI libraries in the middle of a research project. Modules involved in a default version update will continue to be available; however, you will need to update your job scripts to request the specific version of the module. If you are already loading specific versions, no changes are necessary.</p>
<p>If a version of software you depend on is being completely removed and you are unable to upgrade, please contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>. We will help you transition to a newer version if possible, or provide you with a copy of the old software version.</p>
<p>WHY ARE YOU CHANGING THE SOFTWARE STACK?</p>
<p>ITaP aims to provide a software stack that allows for optimal use (performance and stability) of the clusters. This necessitates periodic updates to the stack as compilers, libraries, and software improve over time. By removing older modules from the main stack, we keep the selection simple, making it easy for users to find the best compilers and libraries. If no modules were ever removed, the stack would become difficult for users to navigate and for ITaP staff to manage. Any major changes will be coordinated with scheduled maintenance periods to minimize impact.</p>
<p>If you have any questions or concerns about the upcoming changes, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 27 Sep 2016 00:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Degraded performance of several systems]]></title>
				<link>https://www.rcac.purdue.edu/news/607</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/607</guid>
				<description><![CDATA[<p>We have seen a significant wave of these events this morning, September 21.  For the most part, this wave seems to have been linked to a storage problem that has been resolved.  However, we are implementing new monitoring and response procedures today to ensure a similar recurrence is caught and dealt with much more quickly.</p>
<p><strong>Original Message:</strong></p>
<p>System, Network, Storage, and Support staff are working to diagnose and correct issues that have been seen recently within ITaP's Research Computing systems.</p>
<p>Symptoms being reported involve an apparent complete freeze of open sessions, the inability to open new login sessions, difficulties using text editors, and disruptions in file access. In cases we have seen, these events seem to last for about 3-5 minutes, then clear up.  However, there may be ongoing effects on jobs running on the Research Clusters, including job failure due to the storage access disruption.</p>
<p>We are examining log files and monitoring processes actively, and are working to correlate the timing of these events across our systems, and expect to identify a fundamental cause that we can then correct.  At this time, however, we do not have an estimated time for a fix.</p>
<p>Please follow this news item for further information.</p>
]]></description>
				<pubDate>Tue, 13 Sep 2016 00:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[POD Cluster Maintenance]]></title>
				<link>https://www.rcac.purdue.edu/news/523</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/523</guid>
				<description><![CDATA[<p>Carter and Scholar are back online for use as of 6:25am, though they will be operating with many nodes still offline.  Staff will be working through Wednesday to steadily increase the number of nodes available.  This concludes the POD cluster maintenance.</p>
<hr />
<p>Carter and Scholar are still being worked on.  We will issue another update by 6:00am if not already in service.</p>
<hr />
<p>The Rice, Hammer, and Peregrine1 clusters have been returned to normal operations as of 1:40am.  Work continues on Carter and Scholar, and we will issue an update on those systems by 3:00am if not already in service.</p>
<hr />
<p>The Snyder cluster has been returned to normal operations as of 12:00am.  Work continues on the other clusters listed here.</p>
<hr />
<p>The work continues on these clusters, although progress was substantially delayed by the concurrent storage systems failure (<a href="https://www.rcac.purdue.edu/news/855">Unscheduled Storage Outage</a>).  We will post an update by 2:00am or sooner as clusters return to service.</p>
<hr />
<p>The  clusters will be unavailable beginning on Tuesday, June 7th, 2016 at 5:30am EDT for scheduled maintenance. The clusters will return to full production by Tuesday, June 7th, 2016 at 10:00pm.</p>
<p>During this time, maintenance will be performed on the cooling systems used by these clusters.  This maintenance period will also allow critical high-availability fixes to be made to the Research Data Depot while client clusters are offline.</p>
<p>Any PBS jobs which request a walltime which would take them past Tuesday, June 7th, 2016 at 5:30am EDT will not start and will remain in the queue until after the maintenance is completed.</p>
]]></description>
				<pubDate>Tue, 07 Jun 2016 05:30:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Storage Outage]]></title>
				<link>https://www.rcac.purdue.edu/news/535</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/535</guid>
				<description><![CDATA[<p>The Isilon filesystem was restored to normal service and all affected clusters had it remounted as quickly as was sustainable by the filesystem.  This process was completed by Wednesday, May 18th, 2016 at 12:15am EDT.  All clusters other than Conte (which was undergoing a distinct maintenance) have returned to normal operations.</p>
<p>If you see any problems with your jobs from this event, please let us know, although it is unlikely there is any way to recover any work from jobs that suffered a failure.</p>
<hr />
<p>As of Tuesday, May 17th, 2016 at 5:30pm EDT,  are unavailable due to a loss of Isilon home directory storage.  Most processes on these systems will block until this storage is restored.</p>
<p>Our storage engineers have been notified, though there is not yet an estimate for return to service.</p>
]]></description>
				<pubDate>Tue, 17 May 2016 17:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[New web-based quota monitoring tool]]></title>
				<link>https://www.rcac.purdue.edu/news/516</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/516</guid>
				<description><![CDATA[<p>A new web-based quota monitoring tool is available to all Research Cluster and Data Depot users. This tool is a web equivalent of the myquota tool on the clusters. The tool allows you to monitor your quota usage just like myquota, but it also allows you to create email alerts and usage reports. The tool monitors and reports on scratch and Data Depot quota usage.</p>
<p>Alerts for new users of Data Depot are being set up automatically as of May 6, 2016, and alerts for existing Data Depot users will be phased in over the coming weeks. These alerts can be modified or disabled at any time.</p>
<p>Alerts can be configured to notify you when your quota reaches a user-defined level (either an absolute value or a percentage). You can also create usage reports that will email you a usage report on a user-defined schedule.</p>
<p>The new tool is accessible from the <a href="https://www.rcac.purdue.edu/account/myquota/">My Quota</a> page. If you have any questions, feedback, or encounter any problems with this new tool please contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Fri, 06 May 2016 00:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled outage on Peregrine-1]]></title>
				<link>https://www.rcac.purdue.edu/news/499</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/499</guid>
				<description><![CDATA[<p><strong>Outage resolved:</strong> A misconfiguration that caused an unneeded InfiniBand (IB) driver to be loaded has been fixed.  Peregrine-1 is back online and job scheduling has been re-enabled.</p>
<p><strong>Original Message:</strong></p>
<p>The Peregrine-1 cluster is currently offline due to problems with the cluster nodes' operating system software.  This failure occurred gradually as nodes completed jobs, so there was no loss of jobs due to the outage, although no new jobs are able to run at the moment.</p>
<p>Engineers are investigating the issue and hope to return the nodes to normal function.  However, there is currently no estimate for return to service.</p>
]]></description>
				<pubDate>Thu, 17 Mar 2016 16:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled outage for Peregrine1]]></title>
				<link>https://www.rcac.purdue.edu/news/498</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/498</guid>
				<description><![CDATA[<p>As of Monday, March 7th, 2016 at 12:30pm EST, the Peregrine1 cluster is unavailable due to a failed network switch in its datacenter. The switch is currently being replaced; the estimated time to complete this work and bring the cluster back online is less than two hours.</p>
<p>UPDATE: Peregrine1's network switch has been replaced, and the cluster is now functional.</p>
]]></description>
				<pubDate>Mon, 07 Mar 2016 12:30:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[ECN services outage - ITaP Research Computing systems impacted]]></title>
				<link>https://www.rcac.purdue.edu/news/496</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/496</guid>
				<description><![CDATA[<p>Engineering Computing Network (ECN) will be performing <a href="https://engineering.purdue.edu/ECN/mailman/archives/ecnannounce-list/2016-February/000008.html">staged patching and reboots of all of ECN's RedHat Linux workstations and servers</a> to protect against a serious vulnerability in the glibc system library.</p>
<p>A significant number of ECN services will be affected, including several software license servers for ITaP Research Computing systems that are hosted by ECN. License servers are expected to be rebooted around 6:30am EST on Tuesday, March 1st, 2016. Exact duration of the outage is unknown, but it is not expected to be long.</p>
<p>ITaP Research Computing cluster job scheduling is not affected by the outage, but licenses for software like Matlab, Ansys/Fluent, CFD++, Sentaurus, Comsol, Abaqus, PowerFlow, and PowerAcoustics will be unavailable during the outage period, which may lead to license-controlled software refusing to work and jobs exiting with error conditions.</p>
<p>Users of Matlab are encouraged to always submit jobs that explicitly request license tokens available from the job scheduler. These are specified using the <em>gres</em> attribute in your job submission command. For example, to request a single Matlab license:</p>
<pre><code>$ qsub -l nodes=1:ppn=1,walltime=01:00:00,gres=MATLAB+1 myjob.sub
</code></pre>
<p>This way the job is guaranteed to only start execution when the necessary license is available. More examples for various Matlab toolboxes are available in the user guides.</p>
<p>Any other jobs using ECN-licensed software that start during this downtime will not be able to check out a license and may exit with errors. In addition, any software that requires a constant connection to the ECN licensing servers will stop working during this time.</p>
<p>Further information regarding affected ECN services may be found in the <a href="https://engineering.purdue.edu/ECN/mailman/archives/ecnannounce-list/2016-February/000008.html">ECN announcement</a>.</p>
<p>If you are unsure if your software will be affected or have any other concerns please contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 01 Mar 2016 06:30:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Environment Module System Upgrade for Radon and Peregrine1]]></title>
				<link>https://www.rcac.purdue.edu/news/463</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/463</guid>
				<description><![CDATA[<p>The upgrade of the environment module system to Lmod on Radon and Peregrine1 was completed by the afternoon of January 6, 2016. Starting a new SSH session or logging out and back in will update your module system to Lmod; any sessions open from before the upgrade will still be using the old module system. If you encounter any issues or have any questions about the new module system please contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p><strong>Original:</strong></p>
<p>On January 6, 2016, the environment module system on Radon and Peregrine1 will be upgraded to <a href="https://web.archive.org/web/20141016085456/https://www.tacc.utexas.edu/tacc-projects/lmod">Lmod</a>. This new system has been in use on the Rice and Snyder clusters since they were put into production. It has also been <a href="https://www.rcac.purdue.edu/news/680">available as an opt-in service</a> on the other clusters for several months.</p>
<p><strong>What is Lmod?</strong></p>
<p>Lmod is a <a href="http://www.lua.org/">Lua</a> based environment modules system developed by <a href="https://web.archive.org/web/20141016085456/https://www.tacc.utexas.edu/tacc-projects/lmod">TACC (Texas Advanced Computing Center)</a>. This system is used on TACC clusters such as Stampede and Lonestar. Lmod is also used at several other universities and centers. Lmod offers all the same functionality as the current environment modules system and uses the same commands and syntax. However, Lmod offers new and unique functionality that can be leveraged to offer an easier to use and understand software environment on ITaP clusters.</p>
<p><strong>Will I have to change anything?</strong></p>
<p>No. Lmod supports all of the functionality of the current module system, so no changes to your job submission scripts will be necessary; all the old commands will still work as they have before. It does, however, offer several new features.</p>
<p><strong>What are the new features of Lmod?</strong></p>
<p>At the moment Lmod is mostly a one-for-one replacement of the current environment modules system. In the future, ITaP will expand on the functionality of Lmod to improve usability of our software stack.</p>
<p>One of the most exciting features of Lmod is the <em>spider</em> command. This command allows you to discover software available on the clusters that you might not otherwise find.</p>
<p>For example, the bioinformatics software <em>bowtie2</em> is not easy to find. This software is not managed by ITaP and does not appear in the normal software listings. Nonetheless, it is available to all cluster users if you happen to know how to find it. Lmod makes finding such software much easier:</p>
<pre><code>$ module spider bowtie2

----------------------------------------------------------------------------
  bowtie2:
----------------------------------------------------------------------------
     Versions:
....
        bowtie2/2.2.3

----------------------------------------------------------------------------
  To find detailed information about bowtie2 please enter the full name.
  For example:

     $ module spider bowtie2/2.2.3
----------------------------------------------------------------------------

$ module spider bowtie2/2.2.3

----------------------------------------------------------------------------
  bowtie2: bowtie2/2.2.3
----------------------------------------------------------------------------

    This module can only be loaded through the following modules:

      bioinfo
</code></pre>
<p>With this command you can discover that this software is available by first loading the <em>bioinfo</em> module.</p>
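<p>Following the spider output above, the module can then be loaded in two steps:</p>
<pre><code>$ module load bioinfo
$ module load bowtie2/2.2.3
</code></pre>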
<p>Another feature Lmod offers is the ability to prevent software conflicts. Loading multiple MPI libraries or compilers, for example, may result in conflicting commands and libraries being used. Lmod will not allow multiple conflicting software packages to be loaded accidentally at the same time. For example:</p>
<pre><code>$ module load intel
$ module load pgi

Lmod has detected the following error: You can only have one compiler module loaded at a time.
You already have intel loaded.
To correct the situation, please enter the following command:

  module swap intel pgi/11.8-0

Please submit a consulting ticket if you require additional assistance.
</code></pre>
]]></description>
				<pubDate>Wed, 06 Jan 2016 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Home Filesystem Outage]]></title>
				<link>https://www.rcac.purdue.edu/news/459</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/459</guid>
				<description><![CDATA[<p>As of 12:46, December 2, the home filesystem serving  was restored to normal operation. All queues have been re-enabled.</p>
<p>As of  Wednesday, December 2nd, 2015 at 12:00pm EST,  are all unavailable due to a failure of the home directory filesystem served by Isilon.</p>
<p>Engineers are working on the issue now, but there is currently no estimate for return to service.  We will update this post as we learn more.</p>
]]></description>
				<pubDate>Wed, 02 Dec 2015 12:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Cluster Maintenance - Hansen/Peregrine1]]></title>
				<link>https://www.rcac.purdue.edu/news/435</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/435</guid>
				<description><![CDATA[<p><strong>Update: September 22, 2015 1pm</strong></p>
<p>The work affecting  scratch filesystems has been completed and the clusters are back in full production.</p>
<p><strong>Original</strong></p>
<p>The Hansen and Peregrine1 clusters will be unavailable on Tuesday, September 22nd, 2015 from 8:00am - 10:00am EDT for scheduled maintenance. The clusters will return to full production by Tuesday, September 22nd, 2015 at 10:00am EDT.</p>
<p>During this time, Carter's scratch will be de-coupled from Hansen/Peregrine1.  To ensure  are not impacted by the separation, scheduling will be paused and access to scratch files will be unavailable.</p>
<p>The  scratch filesystem will be unavailable for 1-2 hours.  Any running job using scratch will block on I/O and appear to hang.  Jobs will resume normally once access is restored.</p>
<p>Any PBS jobs which request a walltime which would take them past 8:00am EDT on Tuesday, September 22nd, 2015 will not start and will remain in the queue until after the maintenance is completed.</p>
]]></description>
				<pubDate>Tue, 22 Sep 2015 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Cluster Maintenance - Peregrine1]]></title>
				<link>https://www.rcac.purdue.edu/news/418</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/418</guid>
				<description><![CDATA[<p>The Peregrine1 cluster will be unavailable from Monday, August 17th, 2015 at 8:00am EDT through Wednesday, August 19th, 2015 at 6:00pm EDT for scheduled maintenance. The cluster will return to full production by Wednesday, August 19th, 2015 at 6:00pm EDT.</p>
<p>During this time, Peregrine1 will be relocated from Calumet to the West Lafayette campus.</p>
<p>Any PBS jobs which request a walltime which would take them past 8:00am EDT on Monday, August 17th, 2015 will not start and will remain in the queue until after the maintenance is completed.  Any job that has not started will remain in the queue and resume normally after the move.</p>
]]></description>
				<pubDate>Mon, 17 Aug 2015 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[ECN Service Interruption]]></title>
				<link>https://www.rcac.purdue.edu/news/427</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/427</guid>
				<description><![CDATA[<p>Due to power work in the MSEE building, most ECN services will be unavailable between 6:30am – 9:00pm EDT on Saturday, August 15, 2015.</p>
<p>For Research Computing users this means that software packages licensed through ECN servers will not be able to check out licenses.  Affected software packages include:</p>
<pre><code>Abaqus        Agilent       Altair        Altera        AMPL
Ansoft        Ansys         Autonomie     Cadence       Cadiq
COMSOL        Coventor      EDEM          Foundry       GridGen
GTSuite       Houdini       ImagineLab    LSDyna        Maple
Matlab        MSC           OriginPro     Pixar         PowerFlow
Simic         Synopsys      Tecplot       UGS
</code></pre>
<p>We apologize for the disruption.</p>
]]></description>
				<pubDate>Sat, 15 Aug 2015 06:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Cluster Maintenance]]></title>
				<link>https://www.rcac.purdue.edu/news/401</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/401</guid>
				<description><![CDATA[<p><strong>UPDATE</strong></p>
<p>As of 4:45 pm Tuesday, May 19, all the work noted below has been completed and both Hansen and Peregrine-1 have been returned to full service.</p>
<p>Thanks for your patience.</p>
<p>=-=-=</p>
<p>The Hansen and Peregrine-1 clusters will be unavailable beginning at 8:00 am on Tuesday, May 19, 2015, for scheduled maintenance. The clusters will return to full production by 6:00 pm the same day.</p>
<p>During this time, Peregrine1 and Hansen will have the operating system patched, and the PBS resource management system upgraded.</p>
<p>Any PBS jobs which request a walltime which would take them past 8:00 am on May 19 will not start and will remain in the queue until after the maintenance is completed.</p>
]]></description>
				<pubDate>Tue, 19 May 2015 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Research Data Depot Security Updates]]></title>
				<link>https://www.rcac.purdue.edu/news/390</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/390</guid>
				<description><![CDATA[<p>As of 3:15 pm the maintenance is complete and Research Data Depot is returned to full production.</p>
<p><strong>Original message:</strong></p>
<p>The storage servers powering the Research Data Depot will undergo maintenance on Thursday, February 26, 2015 from 10:00am - 4:00pm EST to install important security patches.</p>
<p>Research Data Depot servers are highly redundant, and this operation should have no visible impact to users of the filesystem.</p>
]]></description>
				<pubDate>Thu, 26 Feb 2015 10:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Important operating system updates - Community Clusters]]></title>
				<link>https://www.rcac.purdue.edu/news/384</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/384</guid>
				<description><![CDATA[<p>On the morning of Thursday, February 5, 2015, community cluster login servers will be rebooted to apply an important Red Hat Linux operating system update.</p>
<p>Additionally, during this time scratch storage servers will be rebooted to apply the same update. Access to Lustre scratch may pause briefly while backend servers are restarting.</p>
]]></description>
				<pubDate>Thu, 05 Feb 2015 08:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[ECN services outage - ITaP Research Computing systems impacted]]></title>
				<link>https://www.rcac.purdue.edu/news/371</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/371</guid>
				<description><![CDATA[<p>Engineering Computing Network (ECN) in coordination with Physical Facilities will be conducting a <a href="https://engineering.purdue.edu/ECN/HomepageFeatures/major-service-outage-on-sat-jan-10th-from-8am-until-noon">planned power outage</a> in the MSEE building from 8am until noon on Saturday, January 10th, 2015. No one will be allowed to enter the building during this time for safety reasons.  A significant number of ECN services will be affected, including several software license servers for ITaP Research Computing systems that are hosted by ECN.</p>
<p>ITaP Research Computing cluster job scheduling is not affected by the outage, but licenses for software like Matlab, Ansys/Fluent, Sentaurus, Comsol, Abaqus, PowerFlow, and PowerAcoustics will be unavailable during the outage period, which may lead to license-controlled software refusing to work and jobs exiting with error conditions.</p>
<p>Users of Matlab are encouraged to submit jobs requesting license tokens available from the job scheduler. These are specified using the <em>gres</em> attribute in your job submission command. For example, to request a single Matlab license:</p>
<pre><code>$ qsub -l nodes=1:ppn=1,walltime=01:00:00,gres=MATLAB+1 myjob.sub
</code></pre>
<p>This way, the job is guaranteed to start only when the necessary license is available. More examples for various Matlab toolboxes are available in the user guides.</p>
<p>Any other jobs using ECN-licensed software that start during this downtime will not be able to check out a license and may exit with errors. Additionally, any software that requires a constant connection to the ECN licensing servers will stop working during this time.</p>
<p>Further information regarding affected ECN services may be found on the <a href="https://engineering.purdue.edu/ECN/HomepageFeatures/major-service-outage-on-sat-jan-10th-from-8am-until-noon">ECN website</a>.</p>
<p>If you are unsure if your software will be affected or have any other concerns please contact us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Sat, 10 Jan 2015 08:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[End of semester software environment changes]]></title>
				<link>https://www.rcac.purdue.edu/news/332</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/332</guid>
				<description><![CDATA[<p>At the end of the Fall semester several updates and changes are planned for the software environment. These changes will make available the <a href="https://www.rcac.purdue.edu/news/711">latest Intel compiler suite</a> and common libraries built with the new compiler, provide an <a href="https://www.rcac.purdue.edu/news/713">improved Python distribution</a>, and will make available a <a href="https://www.rcac.purdue.edu/news/680">new environment modules system for testing</a>.</p>
<p><strong><a href="https://www.rcac.purdue.edu/news/711">New Intel 15 Compiler Suite</a></strong></p>
<p>The largest of the software stack changes will be the addition of the latest Intel 15 compiler suite, version 15.0.1.133, and its associated common libraries.</p>
<p>No changes will be made to the default versions of common libraries or compilers, nor will there be any changes to any of the currently supported common libraries and compilers. As a reminder, a table of these supported compilers and libraries is included towards the end of this article.</p>
<p><strong><a href="https://www.rcac.purdue.edu/news/680">Evaluation of new environment modules system</a></strong></p>
<p>A new environment modules system, called Lmod, is available for evaluation on ITaP Research Computing clusters. ITaP Research Computing staff are evaluating Lmod as a replacement for the current environment modules system and are looking for feedback from cluster users. Lmod is used in the same way as the current system but adds significant functionality on top of it. Lmod is available on an opt-in basis to all current cluster users.</p>
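<p>As a sketch of what evaluating Lmod looks like in practice (the exact opt-in step is described in the linked announcement), Lmod accepts the same commands as the current system, plus extras such as <em>module spider</em> for searching every version in the module tree:</p>
<pre><code>$ module avail            # list modules loadable right now, as before
$ module spider hdf5      # Lmod extra: search all available versions of hdf5
$ module load intel hdf5  # load modules exactly as with the current system
</code></pre>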
<p><strong><a href="https://www.rcac.purdue.edu/news/713">Anaconda, a new Python distribution</a></strong></p>
<p>The default Python distribution will be replaced by <a href="https://store.continuum.io/cshop/anaconda/">Anaconda</a>, a distribution of Python that includes over 200 popular Python packages used throughout scientific computing. Some of these packages include h5py, mpi4py, netcdf, numpy, scikit, and scipy.</p>
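<p>As a quick illustration of what this change means in practice, the bundled packages can be imported immediately after loading the Anaconda Python module, with no separate installation step. The snippet below is a generic numpy example, not an RCAC-specific API:</p>

```python
# With Anaconda as the default Python, numpy is available out of the box.
# This is only a sanity-check example: build a small matrix and verify
# the basic linear-algebra identity A @ inv(A) = I.
import numpy as np

a = np.array([[2.0, 1.0],
              [1.0, 3.0]])
result = a @ np.linalg.inv(a)
print(np.allclose(result, np.eye(2)))  # True
```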
]]></description>
				<pubDate>Mon, 22 Dec 2014 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[New Intel 15 Compiler Suite]]></title>
				<link>https://www.rcac.purdue.edu/news/338</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/news/338</guid>
				<description><![CDATA[<p>At the end of the Fall semester several updates and changes are planned for the software environment. These changes will make available the latest Intel compiler suite and common libraries built with the new compiler, update the default Matlab, provide an <a href="https://www.rcac.purdue.edu/news/713">improved Python distribution</a>, and will make available a <a href="https://www.rcac.purdue.edu/news/680">new environment modules system for testing</a>.</p>
<p><strong>New Intel 15 Compiler Suite</strong></p>
<p>The largest of the software stack changes will be the addition of the latest Intel 15 compiler suite, version 15.0.1.133, and its associated common libraries. This addition will make the following modules available for use:</p>
<ul>
<li>intel/15.0.1.133</li>
<li>boost/1.56.0_intel-15.0.1.133</li>
<li>fftw/3.3.4_intel-15.0.1.133_openmpi-1.8.1</li>
<li>hdf5/1.8.7_intel-15.0.1.133</li>
<li>hdf5/1.8.13_intel-15.0.1.133</li>
<li>mpich2/1.4.1p1_intel-15.0.1.133 (Radon, Rossmann, Hansen)</li>
<li>mvapich2/1.9_intel-15.0.1.133</li>
<li>netcdf/3.6.3_intel-15.0.1.133</li>
<li>netcdf/4.1.1_intel-15.0.1.133</li>
<li>openmpi/1.8.1_intel-15.0.1.133</li>
<li>python/2.7.8_intel-15.0.1.133</li>
</ul>
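<p>For example, once the new suite is installed, switching a build environment to the Intel 15 stack would look something like this (module names taken from the list above):</p>
<pre><code>$ module load intel/15.0.1.133
$ module load openmpi/1.8.1_intel-15.0.1.133
$ mpicc --version    # should now report the Intel 15.0.1 compiler
</code></pre>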
<p>No changes will be made to the default versions of common libraries or compilers, nor will there be any changes to any of the currently supported common libraries and compilers. As a reminder, a table of these supported compilers and libraries is included towards the end of this article.</p>
<p><strong>Miscellaneous updates</strong></p>
<p>Several miscellaneous software packages will also be replaced with their latest versions. If you are using one of the affected modules, a notice message will be printed when loading it. Be sure to check your job output for any such messages and update your scripts as necessary.</p>
<p><strong>Common Libraries and Long Term Support</strong></p>
<p>We have chosen one installation each of the Intel and GCC compilers to support long term. We expect to keep these compiler versions and their associated common libraries available for several years, as this software stack has proven stable and reliable. If you want modules that you will not have to change over the course of several years, use these. Other versions, including newer ones, may come and go: alongside the stable long-term stack, we also offer an up-to-date stack that undergoes more frequent updates.</p>
<p>These compilers are Intel 13.1.1.163 and GCC 4.7.2. These are combined with the following common libraries:</p>
<ul>
<li>fftw/3.3.4</li>
<li>hdf5/1.8.7</li>
<li>netcdf/3.6.3</li>
<li>netcdf/4.1.1</li>
<li>openmpi/1.8.1</li>
</ul>
<p>In addition to the Long Term Support compilers, there are several other compilers the above libraries are combined with. The full list of supported compilers include:</p>
<ul>
<li>gcc/4.7.2</li>
<li>intel/13.1.1.163</li>
<li>intel/14.0.2.144</li>
<li>intel/15.0.1.133</li>
</ul>
<p>There are a few other compilers available on the clusters; however, these are not combined with the common libraries.</p>
<p>The following common software packages are built with the latest Intel compiler (at the time of installation):</p>
<ul>
<li>boost</li>
<li>python</li>
</ul>
]]></description>
				<pubDate>Mon, 22 Dec 2014 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
			</channel>
</rss>