<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Announcements, Outages and Maintenance, Outages, Maintenance, Science Highlights</title>
		<link>https://www.rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Anvil</link>
		<description><![CDATA[Announcements, outages and maintenance notices, and science highlights from Purdue RCAC for the Anvil supercomputer.]]></description>
		<atom:link href="https://www.rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Anvil" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Mon, 16 Mar 2026 13:20:24 EDT</lastBuildDate>
					<item>
				<title><![CDATA[Scientific workflow management system, Pegasus, available on Anvil]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2609</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2609</guid>
				<description><![CDATA[<p>Pegasus, an NSF-funded scientific workflow management system, is now available for use on Purdue's Anvil supercomputer. With the addition of Pegasus, Anvil users can define, manage, and execute complex, multi-step computational tasks with ease through a web-based interface, reducing researcher workload and enabling faster time-to-discovery.</p>
<p>Pegasus is a <img width="400" style="padding:10px;" class="float-right" alt="Pegasus Software Logo" src="https://www.rcac.purdue.edu/files/anvil/Pegasus-Announcement/pegasusfront-black-reduced.png" />tool to help workflow-based applications function in various environments, including desktops, cloud, and high-performance computing (HPC) systems. It was designed to allow scientists to construct workflows in abstract terms and remove the need to understand the underlying execution environment. Pegasus has been used successfully in a number of scientific fields: astronomy, bioinformatics, earthquake science, gravitational-wave physics, ecology, and cryo-EM, amongst others. A workflow in Pegasus consists of multiple tasks with defined dependencies, and Pegasus handles job submission, data staging, execution ordering, and failure recovery. Some beneficial features of Pegasus include:</p>
<ul>
<li>Data Management: Pegasus handles data transfers, input data selection, and output registration.</li>
<li>Automated Error Recovery and Reliability: Errors are automatically addressed by retrying tasks, workflow-level checkpointing, re-mapping, and trying alternative data sources for data staging.</li>
<li>Adaptability and Reuse: Pegasus works in a variety of distributed computing environments, and workflows can easily be run in different environments without alteration.</li>
<li>Scalability: Pegasus can scale both the size of the workflow and the resources the workflow is distributed over without impacting performance.</li>
</ul>
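<p>The dependency-driven execution model described above can be sketched in a few lines of self-contained Python. This is only an illustration of the general idea — a task runs once all of its parents have finished — and not the actual Pegasus API; the task names below are hypothetical.</p>

```python
# Toy illustration of dependency-ordered workflow execution (not the Pegasus API).
# Each task lists the tasks it depends on; run_workflow() executes tasks only
# after all of their parents have completed, as a workflow manager would.
from graphlib import TopologicalSorter

def run_workflow(dependencies, run_task):
    """dependencies: {task: set of parent tasks}; run_task: callable invoked per task."""
    order = []
    ts = TopologicalSorter(dependencies)
    for task in ts.static_order():  # parents are always yielded before children
        run_task(task)
        order.append(task)
    return order

# A tiny diamond-shaped workflow: preprocess -> (analyze_a, analyze_b) -> merge
deps = {
    "preprocess": set(),
    "analyze_a": {"preprocess"},
    "analyze_b": {"preprocess"},
    "merge": {"analyze_a", "analyze_b"},
}
executed = run_workflow(deps, run_task=lambda t: None)
```

<p>In a real Pegasus workflow, the same dependency information also drives job submission, data staging, and failure recovery automatically.</p>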
<p>Pegasus is deployed on Anvil through the <a href="https://notebook.anvilcloud.rcac.purdue.edu/hub/oauth_login?next=">Anvil Notebook Service</a>, which provides browser-based access to Jupyter Notebooks running on Anvil infrastructure. The Pegasus Notebook environment includes the Pegasus workflow management system, HTCondor for workflow execution management, and preconfigured integration with Anvil’s SLURM scheduler. This environment allows users to develop and debug workflows interactively using the Pegasus Python API or command-line tools, submit workflows to Anvil’s batch system using their allocations, and monitor workflow execution and logs directly from the notebook interface. No additional Pegasus installation or configuration is required by the user.</p>
<p>To learn more about Pegasus and how to access it on Anvil, please visit: <a href="https://www.rcac.purdue.edu/knowledge/anvil/anvil-notebook-service/pegasus">Pegasus on Anvil</a></p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States. Anvil also supports advanced artificial intelligence research as an official resource provider of the <a href="https://nairrpilot.org">National Artificial Intelligence Research Resource (NAIRR) Pilot</a>.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a> or through the <a href="https://www.rcac.purdue.edu/anvil/anvilnairr">NAIRR allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 10 Mar 2026 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[MATH Datacenter Cooling issue - Job scheduling paused on Anvil/Gautschi]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2605</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2605</guid>
				<description><![CDATA[<p>The MATH datacenter began experiencing cooling-system issues around 12:00pm. Job scheduling on the Anvil and Gautschi clusters was paused shortly afterward, and scheduling resumed at 1:30pm.</p>
]]></description>
				<pubDate>Wed, 04 Mar 2026 12:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Purdue research team uses Anvil to secure position as finalist in NASA competition]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2600</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2600</guid>
				<description><![CDATA[<p>A research group from Purdue University used the Anvil supercomputer to compete in NASA's <em>Beyond the Algorithm Challenge</em>, a nationwide competition aimed at improving flood analysis with emerging technologies. The team from the <a href="https://secquoia.github.io">SECQUOIA</a> (Systems Engineering via Classical and Quantum Optimization for Industrial Applications) research group was recognized as one of nine finalists in the competition, thanks to their innovative framework that combines artificial intelligence (AI) techniques with quantum computing technologies.</p>
<p>The <em>Beyond the Algorithm Challenge</em> was designed by the NASA Earth Science Technology Office (ESTO) to propel scientific discovery for complex Earth Science problems—in this case, rapid flood analysis—by encouraging the exploration of unconventional and innovative computing methods. Specifically, the ESTO wanted participants to utilize technologies such as quantum computing, quantum machine learning, neuromorphic computing, or in-memory computing, which have all shown promise in overcoming limitations of conventional computing methods. By testing these novel computing methods, the <em>Beyond the Algorithm Challenge</em> paves the way for transforming how Earth Science problems are solved, potentially improving the lives and safety of the American people.</p>
<p>The SECQUOIA group is <img width="400" style="padding:10px;" class="float-right" alt="Group photo of research team at NASA competition" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/QUAFFLE-David-Bernal/SECQUOIA.png" />a Purdue University research organization within the <a href="https://engineering.purdue.edu/ChE">Davidson School of Chemical Engineering</a>. Led by Dr. David Bernal, Assistant Professor of Chemical Engineering, the SECQUOIA group focuses on designing and implementing optimization algorithms using hybrid and cutting-edge hardware technologies, including quantum computing technologies. Upon learning about the <em>Beyond the Algorithm Challenge</em>, Bernal felt that the competition aligned well with SECQUOIA’s work and immediately began assembling a team. Team members for the challenge included: Dr. Bernal, Yirang Park, PhD student in Chemical Engineering; Alan Yi, sophomore in Computer Science; and Daniel Anoruo, senior in Computer Science with a cybersecurity focus from Towson University.</p>
<p>Over the course of 10 weeks, the group designed and refined QUAFFLE (Quantum U-Net Assisted Federated Flood Learning and Estimation). QUAFFLE is a hybrid modeling framework that combines Quantum U-Nets for image segmentation with federated learning, a machine learning approach that decentralizes the training process. Understanding the reasoning behind QUAFFLE requires a basic grasp of these architectures and techniques.</p>
<p>U-Net architecture is a tried-and-true convolutional neural network (CNN) design used for pixel-level image segmentation. The name stems from the fact that, when drawn, the architecture takes the shape of a “U.” A U-Net takes an image and identifies specific objects within it, and the accuracy of a U-Net model correlates with how well it was trained.</p>
<p>Federated learning is a technique in which a global model is collaboratively trained across multiple devices or servers, each of which has its own local model. One of the benefits of federated learning is that each local model can handle a specific type of data—ideal for tasks that involve analyzing dissimilar data. The performance of the global model is improved in this scenario by producing higher-quality training results on smaller, distributed datasets rather than relying on less robust results from one large, centralized dataset.</p>
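<p>The server-side aggregation at the heart of this approach can be sketched with a minimal, self-contained example of federated averaging. This is a generic illustration of the technique, not the QUAFFLE code: each client's “model” here is just a single parameter (the mean of its local data), and the server combines the clients' parameters weighted by how much data each client holds.</p>

```python
# Minimal federated-averaging sketch (generic idea only, not the QUAFFLE code).
# Each client trains a one-parameter model (a simple mean estimator) on its own
# local data; the server then averages the local parameters, weighted by
# dataset size, into a single global parameter.

def local_train(data):
    """Each client's 'model' is just the mean of its local data."""
    return sum(data) / len(data)

def federated_average(client_datasets):
    """Server step: combine client parameters, weighted by dataset size."""
    total = sum(len(d) for d in client_datasets)
    return sum(local_train(d) * len(d) for d in client_datasets) / total

# Three clients with heterogeneous local datasets
clients = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]
global_param = federated_average(clients)
```

<p>Because only model parameters (not raw data) travel to the server, each client can keep training on its own distinct slice of the data.</p>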
<p>For the <em>Beyond the Algorithm Challenge</em>, the SECQUOIA group wanted to create a system that was capable of producing accurate flood maps. The group theorized that harnessing the power of quantum computing combined with federated learning would allow for this while improving speed, security, and efficiency, compared to traditional computing methods.</p>
<p>A major obstacle for the group was mismatched datasets. The flood maps would need to be based on all available imagery, which includes images of differing regions, sizes, and sources (LiDAR, drone, satellite, weather radar, etc.).</p>
<p>“One of the main challenges we had with this specific application was that there's a lot of heterogeneity in the data,” says Yirang Park. “To overcome this, we implemented federated learning under a heterogeneous-client setting, where each client trained locally on a random subset of the data and contributed model updates to a shared QUAFFLE model, improving speed and accuracy.”</p>
<p>Another issue <img width="500" style="padding:10px;" class="float-right" alt="Graphical illustration of QUAFFLE U-Net architecture" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/QUAFFLE-David-Bernal/Screenshot%202026-02-21%20122204.png" />
the group faced in this challenge was the computational intensity required for flood detection. The very large, heterogeneous datasets needed for the task mean that there is a significant number of training parameters. More training parameters mean more computing power and longer computing times. To combat this, the group decided to replace the bottleneck layers in the U-Net architecture (the layers forming the bottom of the “U”) with quantum layers. The idea was that this would help reduce the number of training parameters required, thus reducing the training time and increasing learning efficiency.</p>
<p>“We theorized that if we needed fewer training parameters, we could speed up the training process,” says Daniel Anoruo. “Replacing the bottleneck with quantum-based architecture allowed us to do that while simultaneously improving feature extraction.”</p>
<p>The final challenge for the group was one of access and scarcity. For now, quantum computers are rare and few researchers are allocated computing time on the machines. The SECQUOIA group used the Anvil supercomputer to solve this problem by simulating two types of quantum computers: a gate-based system (with PennyLane software) and a photonic-based system (with ORCA-SDK software). The benefits of using a powerful supercomputer like Anvil to simulate a quantum computing system were manifold: the researchers tested and refined QUAFFLE on a computing system they had access to, validated their approach for potential future use on different types of quantum systems, and bypassed the long process of obtaining an allocation on a quantum computer just to test an unproven (at the time) software framework.</p>
<p>“Running these simulations on Anvil gave us an advantage in the sense that we know QUAFFLE is hardware agnostic,” says Park. “There are multiple types of quantum computers, and no one knows which one will be the system, but we do know that QUAFFLE can adapt to different hardware architectures.”</p>
<p>Park continues, “Having a working code that has been proven in simulations and can adapt to various quantum systems has also allowed us to de-risk the approach. We know that we haven’t built something only to find that we’ve wasted time and resources after implementing it on precious quantum resources.”</p>
<p>The SECQUOIA group was thrilled with Anvil’s performance.</p>
<p>“Anvil really saved us,” says Alan Yi. “We tried testing these simulations on our local computers, and they would run for two days and not be done. But with Anvil GPUs, the simulation would finish really quickly, sometimes even less than an hour.”</p>
<p>After completing their work, the group had demonstrated that QUAFFLE was a success—it required 6% fewer parameters and outperformed a centralized quantum U-Net in accuracy when combining different data sources. Their innovative approach led to them securing a position as a finalist in the <em>Beyond the Algorithm Challenge</em>. While they did not ultimately receive the grand prize in the competition, the team’s work stood out for its innovation and real-world potential. QUAFFLE earned recognition from the judges as a promising solution, and the project gained valuable support from industry leaders, including Rigetti, Orca UK Computing, Flower, and IBM. The team plans to continue expanding QUAFFLE, and hopes to someday test it on an actual quantum system.</p>
<p>For more information about the SECQUOIA group, please visit: <a href="https://secquoia.github.io">https://secquoia.github.io</a>. The group’s presentation given to NASA for the <em>Beyond the Algorithm Challenge</em> can be viewed here: <a href="https://www.nasa-beyond-challenge.org/project-gallery/secquoia">https://www.nasa-beyond-challenge.org/project-gallery/secquoia</a></p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 24 Feb 2026 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2537</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2537</guid>
				<description><![CDATA[<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH datacenter renovation project.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (MATH nodes) and Geddes: a subset of nodes will be powered down, but services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime that would extend past the start of the maintenance window will not start and will remain in the queue until the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available.</p>
</li>
</ul>
<p>This maintenance will position Purdue to support growing computational needs. Users should see long-term benefits in system reliability and in our ability to support future computing and AI resources.</p>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Thu, 05 Feb 2026 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Network Slowness Notice]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2547</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2547</guid>
				<description><![CDATA[<p>We are currently investigating network performance issues affecting network traffic.</p>
<p><strong>Impact to you:</strong> At this time, you may notice latency or brief disruptions when accessing certain on-campus or external resources, especially during peak usage periods.</p>
<p>We appreciate your patience while we work to fully resolve the underlying problem and restore normal network performance. We will provide an update by 5:00PM EST today or sooner.</p>
]]></description>
				<pubDate>Mon, 02 Feb 2026 15:00:00 -0500</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Globus access to Depot degraded; slow Depot logins and Depot access on clusters]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2574</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2574</guid>
				<description><![CDATA[<p>Users of Data Depot on RCAC clusters are currently experiencing degraded performance, and some Globus transfers to and from Depot are failing or running slowly.  In addition, some users may see slow Globus logins or be temporarily unable to log in to Globus when accessing Depot collections.</p>
<p>System monitoring has identified an issue where heavy job activity was overloading the Data Depot filesystem used by the clusters and Globus.</p>
<p>You may see the following impacts:</p>
<ul>
<li>Globus transfers to and from Depot collections may fail, stall, or run much more slowly than usual.</li>
<li>Globus logins may be slow or occasionally fail when accessing Depot endpoints.</li>
<li>Jobs on RCAC clusters that read from or write to Depot may experience slow file access, delayed directory listings, or timeouts.</li>
</ul>
<p>Our engineers are investigating the high load from a large number of concurrent jobs and are working to reduce the impact on Depot, Globus, and cluster workloads.  Existing jobs will continue to run, but any that are heavily Depot‑I/O‑bound may run more slowly or see I/O errors until performance improves.  We will provide another update by 5:00PM EST or sooner if the issue is resolved.</p>
]]></description>
				<pubDate>Fri, 30 Jan 2026 15:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Anvil used to study dark matter and early universe formation]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2573</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2573</guid>
				<description><![CDATA[<p>Purdue University’s Anvil supercomputer was used by researchers from the University of California, Los Angeles (UCLA) to study the effects of dark matter on galaxy formation in the early universe. This research, part of the <a href="https://www.astro.ucla.edu/~snaoz/TheSupersonicProject/index.html">Supersonic Project</a>, aims to provide a more precise understanding of the galaxy formation process by accounting for a previously overlooked but important factor—the stream velocity.</p>
<p>Dark matter is elusive. We don’t know what it is or what it is composed of. This mysterious material scoffs at the adage “Seeing is believing”—it does not interact with the electromagnetic force, meaning it neither absorbs, reflects, nor emits light of any kind. We literally cannot see it, yet we know it is there. Dark matter has mass, thereby exerting the effects of gravity on visible matter. It is only by observing these gravitational effects that scientists know dark matter exists. In fact, dark matter accounts for roughly 85% of all matter in the universe, serving as a cosmic scaffolding that organizes galaxies at scale. Without it, galaxies would have long ago been torn asunder by their own rotational velocities, lacking the necessary gravitational pull required to hold together.</p>
<p>As one can imagine, studying a material that can’t be seen but whose effects must be observed through a telescope can be tricky. For decades, scientists have tackled this problem by running cosmological simulations that include dark matter and comparing them to what is actually seen in the universe. If the end result of a simulation matches the physical reality seen through the telescope, then that’s a good sign that the scientists are on the right track with their theories. If not, the theory must be altered or dismissed entirely. Recent technological advances have enabled scientists to study dark matter in greater depth than ever before. High-performance computing (HPC) systems provide an astonishing amount of computing power, while the new James Webb Space Telescope (JWST) gives astronomers an unprecedented view of the universe, enabling observations of the first stars and formation of the first galaxies after the Big Bang. This boost in data-gathering ability and computing performance lies at the heart of the dark matter research being conducted at UCLA.</p>
<div class="my-3 text-center"><img width="650" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Claire-Williams-Dark-Matter-Stream-Velocity/JWST%20MoM-z14.png" /></div> 
<p>Claire Williams is a PhD student in the <a href="http://www.astro.ucla.edu/">Astronomy and Astrophysics Division</a> of the <a href="https://www.pa.ucla.edu/">UCLA Department of Physics and Astronomy</a>. Williams’s focus is on theoretical astrophysics. She is part of the Supersonic Project, a collaboration that studies how stream velocity and dark matter affected galaxy formation in the early universe. In this instance, stream velocity refers to the relative velocity of baryons and dark matter during the early formation stages of the universe. The stream velocity has been largely neglected in traditional simulations of galaxy formation. However, recent findings show that the stream velocity was supersonic, which had major implications for how the baryons and dark matter were distributed. The goal of Williams and the rest of the research group is to improve our understanding of the galaxy formation process by including the stream velocity as a factor in their cosmological simulations.</p>
<p>“So our specific studies are trying to gain a more precise understanding of the process by including effects that previously nobody included,” says Williams. “People already had dark matter, they already had gas, but they were missing the stream velocity. It has been largely ignored because it is challenging to get right in simulations. But neglecting the fact that material was moving past the dark matter at five times the speed of sound inevitably leads to a different result. What our group has done is to run simulations that correctly include the relative motion of dark matter and ordinary matter at early times in the universe.”</p>
<p>Williams and her research group utilize the Anvil supercomputer to run high-resolution AREPO hydrodynamics simulations for a number of different studies. The common theme across these studies is that the group runs theoretical simulations both with and without stream velocity as a factor, and that the results are, or will soon be, compared with JWST observations. The size of the regions being simulated is, quite literally, astronomical, ranging upward of two megaparsecs. This equates to a volume slightly larger than the Milky Way and Andromeda galaxies combined. The simulations are also incredibly detailed, with each individual particle representing a mass roughly 200 times that of our sun. For comparison, that’s a single grain of sand on the beach. Simulations this large require a massive amount of computing power and would be impossible without HPC resources like Anvil.</p>
<p>“So we're simulating a region larger than the whole Milky Way, but our individual pieces that are moving around are only a couple 100 times bigger than our own sun,” says Williams. “This is why we need Anvil, because you couldn't run this on your laptop. This takes a couple of weeks to run on the cluster.”</p>
<p>Running the simulations is only the first part of the process; HPC resources are further needed to actually analyze the data. Williams continues:</p>
<p>“Then, at the end of the day, when you finish your simulation run, you basically have a bunch of imaginary particles in an imaginary box. But you have to figure out, ‘How would these particles translate to light that the telescope would see?’ So you need to post-process the simulations, which involves extensive data analysis and specialized algorithms to convert the resulting particles into light in space. We need Anvil for this data analysis as well.”</p>
<p>The end <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Claire-Williams-Dark-Matter-Stream-Velocity/JWST%20Dark%20Matter%20Map%203.png" />result of the group’s computational work is a theoretical picture of what the universe should look like to us today, as viewed through the JWST. Dark matter was dispersed throughout the universe soon after the Big Bang, unaffected by the forces of electricity and magnetism. The gravitational pull of dark matter led to clumps of particles, which eventually formed into galaxies. And the precise placement of these galaxies was likely influenced directly by the stream velocity. At least, that’s Williams’s hypothesis. Now, the research group must wait to see if it proves true.</p>
<p>“So one of the things that we have found with our studies,” says Williams, “is that the stream velocity should cause some very faint galaxies to shine very brightly for a brief period of time at the beginning of the early universe, because it causes them to form a bunch of stars all at once. Without the stream velocity factored in, you wouldn’t expect to see this happen. And now they're starting to make observations with the JWST that should show what we predict to see. So, hopefully, in the next few years, we can get confirmation from the telescope that this effect is happening.”</p>
<p>Williams continues, “One thing that's kind of cool is that if they don't see that effect, then it poses a big problem for dark matter in general, because all of our models so far are dependent on how we think dark matter should work. So if we make this prediction and the telescope doesn't see it, then we know we've messed up our collective understanding of dark matter along the way and may need to make changes to things we thought we had a grasp on in our cosmology.”</p>
<p>For more information about Williams’s research, please visit her <a href="https://www.astro.ucla.edu/~clairewilliams/">UCLA Bio Page</a>. More details on the Supersonic Project can be found here: <a href="https://www.astro.ucla.edu/~snaoz/TheSupersonicProject/index.html">The Supersonic Project</a></p>
<p>Interested in leveraging the latest advancements in computing to bolster your research? Please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page to learn more about High-Performance Computing and how it can help you.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Thu, 29 Jan 2026 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[RCAC Student Spotlight: Elian Rieza]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2538</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2538</guid>
				<description><![CDATA[<p><strong>Name:</strong> Elian Rieza <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/RCAC-Stories/Student-Spotlights/Elian%20Rieza.jpeg" /></p>
<p><strong>Year:</strong> Sophomore</p>
<p><strong>Major:</strong> Electrical Engineering</p>
<p><strong>Position:</strong> Assistant Computational Researcher</p>
<p><strong>Can you introduce yourself and share a little about who you are?</strong>
Hello! My name is Elian and I’m an Assistant Researcher!</p>
<p><strong>What are some of your main interests or passions?</strong>
Some of my interests include Linux, servers, and drinking lots of coffee.</p>
<p><strong>Can you tell us about your role at RCAC? What does your job entail?</strong>
I am an Assistant Researcher handling tickets from researchers and the entire user base of multiple RCAC clusters, including Anvil and Gautschi. I also help the Apps team cover issues facing the clusters.</p>
<p><strong>What do you enjoy most about working at RCAC?</strong>
Working at RCAC allowed me to handle servers on a day-to-day basis and, other than the fact that I'm a huge server nerd, it allowed me to learn more about Linux systems in a very friendly work environment.</p>
<h3>Tell us more about your favorite project you like to show off!</h3>
<p><strong>Project title:</strong>  Handling Apps tickets at RCAC</p>
<p><strong>Project description:</strong> I handle tickets from the wide range of users that Purdue's multiple clusters cover and support. Whenever an issue arises, I'm often one of the first people to pick up the ticket and work through it with the user, noting whatever comes up in RCAC's database.</p>
<p><strong>What did you learn from this project?</strong>  A lot of patience, especially from (slightly, and understandably) upset researchers who thought they had lost their life's work (thankfully they hadn’t!).</p>
]]></description>
				<pubDate>Thu, 15 Jan 2026 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil used to study how trade can reduce volatilities in crop supply]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2521</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2521</guid>
				<description><![CDATA[<p>A researcher from Purdue University used the Anvil supercomputer to study climate-induced volatility in crop production and identify the role of potential adaptation strategies for reducing future risk. The results of this research, notably that international trade can reduce volatility, are crucial for global food security as well as regional resilience.</p>
<p>Dr. Iman Haqiqi <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Iman-Haqiqi/erfsad7d12f4_hr.jpg" />is Lead Research Economist in the Department of Agricultural Economics at Purdue University. His research leverages high-performance computing (HPC) resources to study international trade, environmental, and resource economics, with a focus on global change and sustainability. Recently, Haqiqi utilized Anvil, one of Purdue’s most powerful supercomputers, to explore how strategic trade partnerships can buffer the risk of crop market volatility stemming from increased heat stress.</p>
<p>Heat stress is a significant concern for crop production. When plants are exposed to excessive heat for prolonged periods, they can exhibit numerous negative health effects, including inhibited growth, reduced photosynthesis rates, sunscald, wilting, and even death. Different crops exhibit varying levels of sensitivity to heat stress, but corn—a staple crop for billions of people—is particularly vulnerable. As extreme heat stress events increase in frequency and intensity, national and global food security is put at risk. Facing this challenge and understanding the effectiveness of alternative strategies to overcome it is precisely what drove Haqiqi to pursue his research.</p>
<p>The climate’s impact on average crop production has been studied extensively. Many studies examine the effects of heat stress or other extreme weather events on average corn yield. The problem, according to Haqiqi, is that focusing on the average can be misleading and neglects a large part of the risk.</p>
<p>“A lot of other studies have looked at this problem and determined that, on average, crop production will be a little bit lower,” says Haqiqi. “But I find that looking at the average is misleading. Mixing extreme highs with extreme lows, for example, means that, on average, everything might be fine. What we need to do is study the volatility, because that’s where the real risk is. If we want to prepare, we have to measure the volatility, not just the average.”</p>
<p>While a small decrease in average annual corn yields may not be considered problematic, increased volatility is. Volatility always has been and always will be a factor in crop production. Some years will be worse than others. But as the risk of extreme weather events decimating a crop supply increases, so too does the chance that any particular season will cause the global supply of food, not to mention the agricultural market, to implode. Haqiqi’s goal was to investigate future volatility and risk in corn production associated with increased heat stress, as well as evaluate the effectiveness of two different adaptation strategies—irrigation and market integration.</p>
<p>Irrigation is a tried-and-true method of reducing crop vulnerability during periods of extreme heat. Not only does it cool the temperature of the plant, it also maintains appropriate soil moisture levels, which improves nutrient uptake, photosynthesis rate, and biochemical efficiency. The problem is that wide-scale adoption of irrigation as an adaptation strategy would further deplete an already strained resource—water. This concern over groundwater depletion has led to a growing interest in trade as an alternative option for offsetting crop volatility risk.</p>
<p>International trade partnerships between regions with differing climate patterns could reduce the risk of substantial losses to the national corn supply, but trade as an adaptation strategy had only been discussed in theory. Haqiqi wanted to measure the strategy’s effectiveness quantitatively. To begin, he needed to predict how corn yields would be affected by potential changes to the climate patterns. Haqiqi used a statistical panel model to estimate corn yield response to heat stress and then combined those results with NEX-GDDP-CMIP6 climate data to project future production volatility and risks of substantial yield losses. To assess overall volatility, Haqiqi needed to calculate the extreme heat levels (i.e., not the average) of each day for millions of fields worldwide, aggregate this for each growing season in every region that produces corn, and then aggregate this to determine global corn supply. Haqiqi then converted these from daily to yearly calculations and determined year-on-year changes in volatility. These results were then used to determine the risk of substantial loss of production for each region. Haqiqi also assessed the relative volatility of each region compared to the global market. Once these baseline results were obtained, Haqiqi could simulate multiple scenarios to analyze irrigation and market integration for their ability to reduce these future risks.</p>
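<p>As a rough illustration of that daily-to-seasonal-to-annual aggregation (with entirely invented numbers, and nothing like the real scale of millions of fields), the computation can be sketched as:</p>

```python
# Toy sketch of the aggregation described above, not Haqiqi's actual code:
# daily extreme-heat measures per field are summed over a growing season,
# seasonal values are aggregated across fields per region, and year-on-year
# volatility is taken as the standard deviation of annual changes.
from statistics import stdev

# Hypothetical data: region -> year -> list of daily heat-stress values per field
daily_heat = {
    "region_a": {
        2020: [[0.1, 0.4, 0.2], [0.3, 0.5, 0.1]],  # two fields, three days each
        2021: [[0.6, 0.9, 0.7], [0.8, 1.0, 0.5]],
        2022: [[0.2, 0.1, 0.3], [0.2, 0.4, 0.2]],
    }
}

def seasonal_totals(region):
    """Sum daily heat stress over the season, then across fields, per year."""
    return {
        year: sum(sum(field) for field in fields)
        for year, fields in daily_heat[region].items()
    }

def year_on_year_volatility(totals):
    """Standard deviation of year-on-year changes in seasonal heat stress."""
    years = sorted(totals)
    changes = [totals[b] - totals[a] for a, b in zip(years, years[1:])]
    return stdev(changes) if len(changes) > 1 else 0.0

totals = seasonal_totals("region_a")
vol = year_on_year_volatility(totals)
```

<p>The same two steps, repeated per region and then aggregated globally, yield the regional and global volatility measures the study compares.</p>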
<p>The results of <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Iman-Haqiqi/erfsad7d12f8_hr-2.jpg" />Haqiqi’s research were clear: 1) corn yields will experience higher volatility due to increased heat stress; 2) irrigation expansion can offset this risk; 3) trade can also buffer the risk, but without depleting the groundwater supply. The third point is salient—according to the numbers, irrigation in the US will need to rise from 15% of farmland to 50-75% in order to maintain historical risk levels, which is unsustainable.</p>
<p>“So the whole idea of this paper was to show that, yes, there are some temporary solutions, like irrigation, but they are not sustainable,” says Haqiqi. “Something else, like international trade, which is a solution from an economic perspective, can have a similar effect in terms of reducing volatility and risk. But also, it has benefits because you don't need to have a lot of unsustainable use of resources.”</p>
<p>Haqiqi’s research required a massive amount of computing power, and for that, he relied on Anvil. The supercomputer was used for all computational tasks involving yield projection, variability analysis, and risk assessment.</p>
<p>“Without Anvil, this paper would be just a conceptual framework that, hey, you know, trade could be a good thing compared to irrigation,” says Haqiqi. “But we didn't have numerical evidence to support that claim. Now, thanks to having access to Anvil, we could provide that evidence.”</p>
<p>Haqiqi went on to note that the support he received from the Anvil team was exceptional and that because of the quick, comprehensive responses to his support tickets, he was able to rapidly move past any issues he had.</p>
<p>The results of Haqiqi’s research were published in <em>Environmental Research: Food Systems</em>. To view the publication and learn more about the study, please visit: <a href="https://iopscience.iop.org/article/10.1088/2976-601X/ad7d12">Trade can buffer climate-induced risks and volatilities in crop supply.</a></p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the National Science Foundation (NSF), Anvil supports scientific discovery by providing resources through the NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS), a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 30 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2481</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2481</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break from 12:00am EST on 12/23/25 and will resume normal business hours on January 5th, 2026  During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning January 5th, 2026. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive any files you need from scratch storage. This does not apply to Data Depot or home directories -- only scratch storage.</strong></p>
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Tue, 16 Dec 2025 13:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Anvil Maintenance on December 11, 2025]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2476</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2476</guid>
				<description><![CDATA[<p><strong>UPDATE: December 11, 2025 05:51 PM ET</strong>
As of 5:51pm ET, the maintenance work on Anvil has been completed and job scheduling has resumed. If you encounter any issues post-maintenance, please contact the <a href="http://support.access-ci.org/help-ticket">ACCESS Help Desk</a>.</p>
<p><strong>Original Post:</strong></p>
<p>The Anvil system will be unavailable on <strong>December 11th, 2025, from 7:00 AM to 6:00 PM ET</strong> for scheduled maintenance. During this maintenance, we will perform NVIDIA driver upgrades and rebuild Slurm with PMIx.</p>
<p>Any Slurm jobs requesting a walltime that would extend past Thursday, December 11th at 7:00 AM ET will not start and will remain in the queue until after maintenance is completed.</p>
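<p>In practice, a job submitted before the window can still start if its requested walltime ends before 7:00 AM ET on December 11th. A minimal, illustrative Slurm batch script (the resource values and program name below are placeholders, not Anvil-specific settings):</p>

```shell
#!/bin/bash
# Illustrative only: request a walltime short enough that the job
# finishes before the maintenance outage begins.
#SBATCH --job-name=pre-maintenance-run
#SBATCH --time=04:00:00        # must fit entirely before 7:00 AM ET Dec 11
#SBATCH --nodes=1
#SBATCH --ntasks=8

srun ./my_simulation            # placeholder program name
```

<p>Jobs whose <code>--time</code> request overlaps the window will simply wait in the queue until maintenance completes.</p>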
<p>If you have any questions, please submit a ticket through the <a href="http://support.access-ci.org/help-ticket">ACCESS Help Desk</a>.</p>
]]></description>
				<pubDate>Thu, 11 Dec 2025 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Anvil and AI used to solve for best taxation strategies]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2480</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2480</guid>
				<description><![CDATA[<p>A researcher from the University of Nebraska-Omaha used Purdue’s Anvil supercomputer to develop a new artificial intelligence (AI) technique that can derive optimal taxation strategies for governments. This new method leveraged Anvil’s advanced GPUs to factor in household differences across families within a population in order to determine how taxes should be applied for the best possible outcome.</p>
<p>Dr. Zhigang Feng is a professor in the Department of Economics at the University of Nebraska-Omaha. He, along with his collaborators hailing from multiple institutions, combined machine learning techniques with economic theory to tackle everyone’s favorite economic subject—taxes.</p>
<p>Taxation is an <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Zhigang-Feng-Optimized-Taxation/AdobeStock_485082141.jpeg" />oft-debated subject for governments worldwide, with different opinions and theories as to what works best for individual countries or locales. How differing taxation strategies affect the economic choices of households in a population is an extraordinarily complex problem to solve. Many models have been developed to understand and predict the effects of taxes on the economy, with varying levels of success. Most models fail to account for household heterogeneity in the context of dynamic economic fluctuations. This shortcoming is precisely what Feng and his colleagues set out to remedy.</p>
<p>Research has long shown that household heterogeneity needs to be factored in to accurately model economic behavior and therefore design optimized fiscal policies. However, heterogeneity takes an already complex mathematical problem and adds in an infinite-dimensional object. Feng’s goal was to develop a novel machine learning-based approach that successfully factored in household differences. To do this required a massive amount of computing power due to the curse of dimensionality problem, which is why he and his collaborators turned to Anvil.</p>
<p>&quot;This problem isn’t something traditional numerical methods in the standard economist's toolbox can handle—even with a handful of CPUs using MPI, let alone an average computer,&quot; says Feng. &quot;We needed multiple GPUs running in parallel to harness the optimization power of modern AI techniques, and we needed them on demand. We also required a machine with massive memory to store the state of every simulated individual. Thankfully, Anvil was able to provide us with both.&quot;</p>
<p>The group utilized both CPUs and the advanced GPUs on Anvil to create a Markov decision process in Wasserstein space. They combined deep neural networks for equilibrium function approximation, a histogram-based distribution approximation, an analytically derived distribution transition kernel, and a modified value and policy iteration with an augmented Lagrangian method, all of which together allowed them to address the problem of infinite dimensions. After developing the new approach, the group also needed to run the model simulations for multiple scenarios, showing the cause-and-effect of different taxation strategies.</p>
<p>Overall the group was very happy with Anvil’s performance. The queue for the GPUs was short, allowing them the access they needed to quickly conduct their research. Feng also noted that anytime the team hit any snags or had issues, they reached out to the Anvil support team and received help promptly. All of this combined enabled the group to efficiently proceed with a project that otherwise would not have been possible.</p>
<p>“To solve these models, we needed Anvil,” says Feng. “There’s no question—without it, this is not something we would have been able to achieve.”</p>
<p>Though the research publication is in its preliminary stages, it shows promising results and will have important implications for policymakers and researchers wanting to design effective fiscal policies. The novel machine learning method developed by Feng and his colleagues is also scalable and can be applied to a wide range of other economic models.</p>
<p>For more information about this project, as well as other research conducted by Dr. Feng, please visit his <a href="https://sites.google.com/site/zfeng202/research">Research Page</a>.</p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the National Science Foundation (NSF), Anvil supports scientific discovery by providing resources through the NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS), a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<h4>Publications utilizing Anvil</h4>
<ol>
<li>Chen C, Feng Z, Gu J. HEALTH, HEALTH INSURANCE, AND INEQUALITY. <em>International Economic Review.</em> Published online July 4, 2024. doi:https://doi.org/10.1111/iere.12722</li>
<li>Feng, Zhigang and Han, Jiequn and Zhu, Shenghao, Optimal Taxation with Incomplete Markets–An Exploration Via Reinforcement Learning. Available at SSRN: <a href="http://dx.doi.org/10.2139/ssrn.4758552">http://dx.doi.org/10.2139/ssrn.4758552</a>
</li>
</ol>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Mon, 08 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Purdue participates in prestigious international conference, SC25]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2478</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2478</guid>
				<description><![CDATA[<p>Purdue University made an impact at the 2025 International Conference for High Performance Computing, Networking, Storage and Analysis (SC25). For more than 20 years, Purdue has participated in SC by showcasing the people and computing resources that make Purdue a leader in HPC and research in higher education. This year saw the continuation of that legacy with captivating presentations at the Purdue exhibitor's booth, fun alumni networking events, workforce development opportunities, and more!</p>
<p>SC25 is an annual conference where the brightest minds in computing and technology from around the world gather in one location for a week of communication, collaboration, and innovation. The conference took place in St. Louis, Missouri, this year, with 16,500+ attendees and a record-breaking 559 exhibitors. Purdue’s exhibitor booth, hosted by the Rosen Center for Advanced Computing (RCAC), did not disappoint, engaging with a steady stream of attendees who dropped by to speak with our HPC experts, listen to presentations, and participate in demonstrations.</p>
<p>The central theme for the Purdue booth this year was to promote the <a href="https://www.purdue.edu/computes/">Purdue Computes</a> initiative. To help achieve this goal, Purdue provided the conference with <a href="https://www.rcac.purdue.edu/sc2025">booth presentations</a> throughout the week from experts within multiple departments. Purdue staff also participated in numerous workshops, Birds-of-a-feather sessions (BOFs), and panel discussions outside of the booth exhibits, all highlighting the university’s contributions to research computing and HPC in higher education. A full list of SC25 papers and presentations given by Purdue affiliates is as follows:</p>
<ul>
<li>
<strong>Haniye Kashgarani, LJ Lumas, Emma Zheng, and Brendan Swanson:</strong> <em>AnvilOps: Increasing Accessibility of Kubernetes with Automated Builds and Deployments</em>
</li>
<li>
<strong>Paul Jiang:</strong> <em>A Formal Characterization of Non-Monotonicity in Tensor Cores</em>
</li>
<li>
<strong>Richie Tan and Guangzhen Jin:</strong> <em>A Modular, Responsive, and Accessible HPC Dashboard Built upon Open OnDemand</em>
</li>
<li>
<strong>Mithuna Thottethodi, Sree Charan Gundabolu, and Vijaykumar T. N.:</strong> <em>BLAZE: Exploiting Hybrid Parallelism and Size-Customized Kernels to Accelerate BLASTP on GPUs</em>
</li>
<li>
<strong>Elham Sarbijan, FNU Ashish, Christina Joslin, and David Burns:</strong> <em>Generating Frequently Asked Questions from Technical Support Tickets using Large Language Models</em>
</li>
<li>
<strong>David F. Gleich:</strong> <em>KVMSR+UDWeave: Extreme-Scaling with Fine-grained Parallelism on the UpDown Graph Supercomputer</em>
</li>
<li>
<strong>Petros Drineas and Vasileios Georgiou:</strong> <em>Randomized Numerical Linear Algebra in HPC: Toward a Sustainable, Scalable Software Ecosystem</em>
</li>
</ul>
<p>Aside from hosting <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/RCAC-Stories/SC25-Post-Event/IMG_6076.jpg" />a booth and giving presentations, Purdue assisted directly with making SC25 a success. Purdue staff and affiliates volunteered for the SC25 Planning Committee (the organizing body for SC25), SCinet (the collaborative group that builds the infrastructure and network for the conference), and the Student Cluster Competition (a 48-hour HPC competition). Thanks to these volunteers, Purdue lent its expertise towards building and running the entire conference. Support for SC25 wasn’t limited to employees, however. Purdue also offered hardware for a training session to help ensure the best conference possible.</p>
<p>Anvil, one of Purdue’s most powerful supercomputers, was the main resource used to host an all-day, student-focused workshop at SC25. The workshop consisted of lectures combined with self-paced hands-on activities on HPC, AI, and quantum computing. Each student created their own ACCESS account in order to utilize Anvil, and as a bonus for participating, they will have continual access to the supercomputer for a full year. The exercises mainly focused on accelerated code (CUDA) with both C++ and PyTorch, for which the students used all 84 of Anvil’s cutting-edge H100 GPUs for the entirety of the day. In total, more than 80 students from multiple institutions took part in the workshop.</p>
<p>SC25 also provided <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/RCAC-Stories/SC25-Post-Event/SC25SCC-35.jpg" />an opportunity for Purdue students to shine. Throughout the week, graduate and undergraduate students from the university were involved in numerous workforce development activities, including giving presentations at the Purdue booth, conducting workshops, and taking part in poster sessions. Two Purdue students competed as part of the <a href="https://www.rcac.purdue.edu/news/7440">INpack team</a> in the 2025 IndySCC, a world-renowned supercomputing competition, while four of the eight <a href="https://www.rcac.purdue.edu/news/7449">Anvil REU students</a> were able to present on the work they conducted during the summer program. Outside of gaining presentation experience, the students were also able to attend different informational sessions and learn about the latest advances in HPC, as well as network and develop connections with people within the community. Providing students with opportunities such as these ties in directly with Purdue’s goal of developing the HPC workforce of the future.</p>
<p>To cap off the fantastic week for Purdue, the new HPL-MXP mixed-precision benchmark list and IO500 lists were released at SC25. Purdue University’s newest supercomputing community cluster, <a href="https://www.rcac.purdue.edu/compute/gautschi">Gautschi</a>, was ranked 27th on the HPL-MXP list and 20th on the IO500 list in the 10 Node Production category. This is an amazing achievement and a testament to the value of Purdue’s continued investment in HPC.</p>
<p>Overall, SC25 was a tremendous success for the university. If you or someone in your department would like to be involved with SC26, please contact  <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>For more information regarding HPC and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p>RCAC operates all centrally-maintained research computing resources at Purdue University, providing access to leading-edge computational and data storage systems as well as expertise and support to Purdue faculty, staff, and student researchers. To learn more about HPC and how RCAC can help you, please visit: <a href="https://www.rcac.purdue.edu/">https://www.rcac.purdue.edu/</a></p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 02 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[RCAC hosts successful Anvil REU Summer 2025 program]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2461</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2461</guid>
				<description><![CDATA[<p>Over the summer, the Rosen Center for Advanced Computing (RCAC) hosted its annual 11-week hands-on internship, the Anvil Research Experience for Undergraduates (REU) Summer program.</p>
<p>Eight students from across the nation gathered at Purdue’s campus in West Lafayette, Indiana, for this year’s Anvil REU program. The students enrolled in this internship program to learn about high-performance computing (HPC) and to work on projects related to the operations of the NSF-funded Anvil supercomputer at Purdue. During the program, which is supported by the National Science Foundation (NSF), the students obtained the knowledge and skills necessary to build and support advanced research computing systems and scientific applications on these systems.</p>
<p>The Anvil REU program is a paid summer internship open to undergraduate students in the United States, regardless of their background. Due to a massive influx of applications—over 600 total—the application window closed early in mid-January of this year. This was a significant increase in applicants from 2024. The Anvil REU mentors—eight RCAC staff members who led the projects that the students would work on during the summer—along with the Anvil executive team, took this list of 600+ applicants and distilled it down to eight students. The eight participants of the Anvil REU program were:</p>
<ul>
<li>
<strong>Abigale Tucker</strong>, Computer Science major, Middle Tennessee State University</li>
<li>
<strong>Randy Alejo</strong>, Computer Science major, Stony Brook University</li>
<li>
<strong>Brendan Swanson</strong>, Computer Science major, North Carolina State University</li>
<li>
<strong>Emma Zheng</strong>, Computer Science major, Purdue University</li>
<li>
<strong>Abigail Lin</strong>, Computer Science major, University of Florida</li>
<li>
<strong>Sadra Williams</strong>, Computer Science major, North Carolina State University</li>
<li>
<strong>Christina Joslin</strong>, Data Science and Applied Statistics major, Purdue University</li>
<li>
<strong>David Burns</strong>, Computer Science major, University of Wisconsin–Madison</li>
</ul>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A1588.jpg" /></div>
<p>The Anvil REU program consisted of four separate projects, with two students pairing together to tackle each one. These projects were chosen with real-world applicability in mind—the students would not only gain experience with HPC and learn new skill sets, but would simultaneously increase Anvil’s capabilities. Each project also had two mentors working with the students to help them achieve their goals.</p>
<h4>Project 1:</h4>
<p>The first Anvil REU project for 2025 focused on building a data warehouse to store and manage logs from data centers and compute systems, integrating data sources, and creating visual dashboards. Two students, Abigale Tucker and Randy Alejo, teamed up to take on this project under the supervision of their mentors, Sam Weekly and Patrick Finnegan, as well as Anvil Executive Team member Preston Smith. In this project, Tucker and Alejo built a data warehouse and several data pipelines that collect, transform, store, and enable the querying of data. The Anvil supercomputer supports over 12,000 users throughout the U.S. These users generate massive volumes of data encompassing a variety of scientific domains. Managing such a large amount of data is a difficult task, especially when it needs to be easily obtained at any point in the research process. Tucker and Alejo tackled this problem by designing a system to efficiently manage, process, and store this data, making it accessible, organized, and ready for analysis when it’s needed most.​ Their pipeline was developed using a tech stack that included Apache Kafka, ClickHouse, Apache Iceberg, Grafana, and Apache Superset. They tested their system by creating a testing environment that simulated the real architecture but used fake data, allowing them to validate and troubleshoot without risking the security of real researcher data or interrupting system processes that were already in place. Once they were pleased with the functionality and performance of their pipeline, they were able to connect it to real-world data on the Anvil system.</p>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A0901.jpg" /></div>
<h4>Project 2:</h4>
<p>The second Anvil REU project worked on developing a dynamic web interface for building and deploying container workloads on the Anvil Composable Subsystem. Brendan Swanson and Emma Zheng worked on this project under the supervision of their mentors, LJ Lumas and Haniye Kashgarani, and Anvil Executive Team member Erik Gough. The Anvil Composable Subsystem is a Kubernetes-based private cloud that provides a platform for creating composable infrastructure on demand. This cloud-style flexibility gives researchers the ability to self-deploy and manage persistent services to complement HPC workflows and run container-based data analysis tools and applications. The problem is that deploying applications to Kubernetes can be difficult, especially for beginners. To combat this issue, Swanson and Zheng developed <em><a href="https://anvilops.rcac.purdue.edu">AnvilOps</a></em>, a user-friendly web interface that automates the deployment of applications to Anvil Composable without writing Kubernetes manifests. Thanks to their hard work throughout the summer, <em>AnvilOps</em> features seamless Git integration, the ability to monitor deployments and roll them back to previous versions if needed, and support for a wide variety of languages and frameworks so users can connect their GitHub repository as-is. All of this allows Anvil users of any experience level to deploy applications at the click of a button.</p>
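<p>The core idea behind a tool like AnvilOps is generating Kubernetes manifests on the user's behalf. As a rough illustration (this is not AnvilOps code, and the names are hypothetical), a minimal Deployment object can be built from just an application name, a container image, and a port:</p>

```python
import json

def deployment_manifest(app: str, image: str, port: int, replicas: int = 1) -> dict:
    """Build a minimal Kubernetes Deployment object for a containerized app."""
    labels = {"app": app}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": app, "image": image,
                         "ports": [{"containerPort": port}]},
                    ],
                },
            },
        },
    }

# Kubernetes accepts JSON as well as YAML, so this object could be written to a
# file for `kubectl apply -f` or sent directly to the API server.
print(json.dumps(deployment_manifest("demo-app", "ghcr.io/example/demo:latest", 8080)))
```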
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A1118.jpg" /></div>
<h4>Project 3:</h4>
<p>The third Anvil REU project focused on creating easy-to-use bioinformatics workflow templates. Abigail Lin and Sadra Williams worked on this project with their mentors, Nannan Shan and Arun Seetharam, as well as Anvil Executive Team member Arman Pazouki. Genomics research that utilizes HPC resources has been accelerating in the past few years, which has been great for discovery in the field. However, biologists often lack a deeper understanding of computing and computational workflows, which can severely hinder (or altogether halt) their research projects. To address this issue, Lin and Williams developed four Bioinformatics Workflow Templates for genomics analyses, each tailored for Purdue’s Anvil HPC platform. The templates were: RNA-seq, variant calling, genome assembly, and general (a customizable option where users can easily create their own workflow). By completing their project, Lin and Williams have given bioinformatics researchers with little programming experience a simple and easy way to conduct their science on Anvil.</p>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A7651-Enhanced-NR.jpg" /></div> 
<h4>Project 4:</h4>
<p>For the fourth Anvil REU project, Christina Joslin and David Burns worked with their mentors, Elham Barezi, Ashish, and Anvil Executive Team member Carol Song, to add an automated document generation feature to TicketHub, a proprietary AI-enabled tool for user support staff. The scope of their project was to create a new feature that would proactively generate useful FAQs by using Natural Language Processing (NLP) and Large Language Models (LLMs) to identify and summarize common user issues from past user support requests. Maintaining accurate and up-to-date technical documentation is a time-consuming and heavily manual task for support staff at HPC facilities. By taking on this project, Joslin and Burns worked to remove some of the burden placed on Anvil’s support team. The students successfully developed this new TicketHub feature over the course of the internship, and were able to test its performance by evaluating the generated FAQs in three key areas—clarity, accuracy, and relevance. The FAQs rated high in clarity and had room for improvement in accuracy and relevance; however, the new feature proved to be very promising, and work is ongoing to improve its performance and even extend its use beyond HPC.</p>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A8912-Enhanced-NR.jpg" /></div> 
<h3>A comprehensive educational experience</h3>
<p>While the Anvil REU students worked day in and day out all summer, this program was more than a temporary job—it was a completely immersive learning experience. The REU students had access to their mentors and to the RCAC staff working on-site. On top of that, the students were able to take tours of the campus, attend presentations hosted by both RCAC and the SURF (Summer Undergraduate Research Fellowship) program, participate in technical workshops aimed at developing multiple skill sets (writing research abstracts, effective technical communication, etc.), and even attend the Practice and Experience in Advanced Research Computing (PEARC) 2025 Conference! The eight REU students also gave midpoint presentations on their projects to staff from the Pittsburgh Supercomputing Center (PSC), Lawrence Berkeley National Laboratory (LBNL), the National Energy Research Scientific Computing Center (NERSC), and Pacific Northwest National Laboratory (PNNL).</p>
<p>Aside from technical workshops and presentation experience, the REU students were able to take part in workshops dedicated to developing more intangible skills. One such workshop was led by Syd Moore, an Academic Advisor &amp; Gallup-Certified Strengths Coach, who guided the group through the myStrengths talent assessment. Another was led by Matt Jones, a Certified Master Facilitator in LEGO® SERIOUS PLAY® (LSP) methods. This session introduced the students to LSP, a facilitated thinking, communication, and problem-solving technique for use with organizations, teams, and individuals. Both of these workshops took place at the beginning of the summer and had a follow-up session midway through to build on the lessons learned and further prepare the students for their future careers. Giving these students the opportunity to gain as much knowledge and experience as possible is a vital component of the Anvil REU program. In this way, RCAC can help ensure that each of the REU participants develops into a capable and competent cyberinfrastructure professional.</p>
<p>The Anvil REU program also scheduled ample amounts of time for socializing, fun, and relaxation. Thanks to the RCAC’s partnership with the SURF program, the REU students were able to attend multiple SURF Socials throughout the summer. This allowed the students to hang out with other undergraduates who were at Purdue for non-HPC-specific research projects, leading to new friendships and expanding their professional networks. Of course, the REU participants also socialized outside of these programmed events, but teaching them—by example—the value of having a positive work-life balance is an essential part of professional development.</p>
<h3>Mission accomplished</h3>
<p>On the final day of the Anvil REU program, the students presented their work to the Anvil team. As they demonstrated the results of their projects, each student discussed their accomplishments, obstacles, failures, and what they learned throughout the summer. The students were then asked questions and given feedback on their presentations. To the Anvil team, it was wonderful to hear how this summer might help steer the future careers of these students, many of whom expressed a desire to continue within the field of HPC. Overall, these eight students made fantastic progress: they completed their projects, learned technical and interpersonal skills they will need in the workforce, and gained an in-depth understanding of the HPC world.</p>
<p>To learn more about the upcoming 2026 summer Anvil REU program, please visit our <a href="https://www.rcac.purdue.edu/anvil/reu">Research Experience for Undergraduates</a> webpage. Applications are now being accepted. The application deadline is February 16, 2026, but may close earlier based on the volume of submissions. Interviews for positions will begin in January of 2026.</p>
<p>For more information regarding HPC and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the <a href="https://access-ci.org/">NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States. Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Thu, 13 Nov 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil supports BigCARE 2025 Summer Workshop]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2442</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2442</guid>
				<description><![CDATA[<p>Purdue University’s Anvil supercomputer recently supported the 2025 BigCARE Summer Workshop, a two-week course aimed at helping cancer researchers develop big data skills. This year’s workshop took place at the University of California, Irvine (UCI). Throughout the course, attendees learned to manage, visualize, analyze, and integrate a variety of omics data in cancer studies. Anvil was integral to the workshop, providing attendees with access to a high-performance computing (HPC) resource designed to have a low barrier to entry for newcomers, which is crucial for those who are inexperienced in big data science.</p>
<p>The Big Data Training for Cancer Research (BigCARE) workshop is a program funded by the <a href="https://www.cancer.gov">National Cancer Institute (NCI)</a>. It was founded in 2020 by Min Zhang, MD, PhD, a Professor of Epidemiology and Biostatistics at the University of California, Irvine’s <a href="https://publichealth.uci.edu/">Joe C. Wen School of Population &amp; Public Health</a> and the Biostatistics Shared Resources Director for the <a href="https://cancer.uci.edu">UCI Chao Family Comprehensive Cancer Center</a>, together with her collaborators Dr. Sean Davis, MD, PhD, Associate Director of Informatics and Data Science and Professor of Medicine at the University of Colorado Anschutz School of Medicine, and Dr. Dabao Zhang, PhD, Professor of Epidemiology and Biostatistics at the Joe C. Wen School of Population &amp; Public Health at the University of California, Irvine. The team recognized a need for specialized HPC and Big Data training for cancer researchers and designed BigCARE to meet that need. This year’s workshop focused on analyzing and interpreting genomic and genetic data, including microbiome analysis, metabolomics analysis, single-cell data analysis, epigenomic data analysis, Mendelian randomization, and transcriptome-wide causal inference for directed gene regulations.</p>
<p>“Anvil has <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/BigCare-2023-Summer-Workshop/BigCARE-2025/Eric_Ryan.png" />been extremely helpful during the previous BigCARE workshops,” says Zhang, “especially for our participants with limited computing skills. Anvil provides the essential infrastructure and computing support needed to navigate between command line and R packages for large-scale data. This year, Anvil made the implementation much smoother when we added some AI and machine learning tools for multi-omics data analysis. The Anvil platform, along with Jupyter Notebook, offered an all-in-one solution that helped participants easily and quickly switch from concept to interactive analysis of big data without obstacles.”</p>
<p>Anvil’s role in the BigCARE workshop was to provide HPC resources through Open OnDemand and Jupyter Notebooks, which limits the need for in-depth knowledge of command-line interfaces or HPC server environments. The course material was developed as Jupyter notebooks, and thanks to Open OnDemand, the researchers had direct web access to the notebooks. All of this amounted to a low barrier to entry for the workshop participants.</p>
<p>Aside from providing the hardware and software needed to run the workshop, Anvil added value to BigCARE through the user support provided by the RCAC (Rosen Center for Advanced Computing) team. Before the start of the workshop, the Anvil team modified the Open OnDemand-Jupyter deployment that was customized for last year’s event. This customized deployment automatically handled all course setup and environment creation, eliminating much of the typical HPC work required by participants in such classes. Eric Adams, the Lead Research Operations Administrator for Education, and Ryan DeRue, a Senior Computational Scientist, also attended the event at UCI to present on Anvil and HPC, as well as provide support throughout the week.</p>
<p>“Supporting the BigCARE workshops is a great reminder of why we do what we do,” says Adams. “Providing a platform like Anvil that lowers the barrier to high-performance computing allows cancer researchers to focus on their science, not the technology. Seeing them apply these tools to real-world cancer data in real time is incredibly fulfilling.”</p>
<p>This year’s workshop was a huge success. Dr. Zhang and the attendees were thrilled by what they were able to accomplish during the two-week intensive, as well as how helpful both Anvil and the RCAC support team were. In a post-course survey, 18 of the participants stated that they were likely or very likely to apply for their own Anvil allocation in the future. Dr. Zhang also indicated that she intends to continue using Anvil to support future BigCARE workshops.</p>
<p>More information about the BigCARE 2025 Summer Workshop can be found on UCI’s “<a href="https://bigcare.uci.edu">Big Data Training for Cancer Research</a>” webpage. Information about the Anvil supercomputer can be found on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil Website</a>.</p>
<p>For more information regarding HPC and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.
Anvil is funded under NSF award No. 2005632. Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>.</p>
<div class="my-3 text-center"><img width="550" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-AI-on-ACCESS/1W5A7969-Enhanced-NR.jpg" /></div> 
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Mon, 27 Oct 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Researchers use Anvil to create AI model for medical image diagnosis]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2402</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2402</guid>
				<description><![CDATA[<p>Researchers from Arizona State University utilized the Anvil supercomputer to develop and deploy a fully open AI (artificial intelligence) foundation model for diagnosing diseases based on medical imaging. The new model, called Ark+, was recently published in the <a href="https://doi.org/10.1038/s41586-025-09079-8">July 10 issue of Nature</a><sup>1</sup>. Ark+ was applied and thoroughly tested in the field of chest radiography. The researchers expect the underlying concept for Ark+ to be applicable across domains, including biology, chemistry, physics, and medicine, and hope that their work will inspire others to share code and datasets in order to accelerate open science and democratize AI.</p>
<p>Dr. Jianming Liang is a Professor in the <a href="https://chs.asu.edu">College of Health Solutions</a> at Arizona State University. Liang heads a research group at the university composed of himself and his graduate students. Together, they study annotation-efficient deep learning models for medical image analysis. Traditional deep learning models used for medical imaging analysis rely heavily on large, annotated datasets. However, annotating this data is a costly and inefficient process requiring qualified annotators to mark every image individually. Liang’s group seeks to address this challenge by exploring and developing deep learning models trained on datasets with limited annotated images. The research group has used Anvil for numerous experiments, but recently focused on developing Ark+, a fully open foundation model used in chest radiography.</p>
<p>Ark+ was designed to help democratize access to diagnostic capabilities. By creating an open AI model that can accurately assess medical images, Liang and his team can help support medical facilities and enable quicker, and potentially better, diagnoses, especially in communities that lack radiological expertise. A major aspect of Ark+ is that it is fully open. Many proprietary foundation models are not, making it difficult for researchers and developers to build on existing works, improve the model, or tailor it to their specific needs. The vision behind Ark+ was to create a powerful, robust foundation model that could be trained by aggregating public datasets while retaining the option to use federated private data. In this way, Ark+ remains fully accessible and usable to the public.</p>
<p>“AI and deep learning (DL) is revolutionizing many aspects of our lives, but the greatest impact of AI/DL has yet to come to healthcare via computer-aided diagnosis (CAD),” says Liang. “To build AI/DL-enabled CAD, we must first overcome a technological barrier: AI/DL requires massive amounts of carefully annotated data for training, but annotating medical data, especially medical images, is not only tedious, laborious, and time-consuming, but it also demands costly, specialty-oriented expertise.”</p>
<p>Liang continues, “Our research aims to address this annotation-dearth challenge in medical imaging by developing novel self- and full-supervised pretraining strategies, thereby relieving the annotation demand for training downstream (target) tasks. In the case of self-supervised pretraining, we utilize billions of image patches extracted from the original images for deep models to ‘understand’ anatomy autodidactically. In the case of full-supervised pretraining, we leverage any accessible, heterogeneous expert labels associated with any available datasets to ‘teach’ deep models to recognize disease patterns in images. Ark+ is a full-supervised pretraining strategy, representing a methodological breakthrough for learning one high-performance model from a multitude of datasets that are labeled differently.”</p>
<p>Ark+ was <img width="500" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Jianming-Liang/Liang_Image_Chest%202.png" />
pretrained by cyclically accruing and reusing the knowledge contained within six public datasets. These six datasets contained over 700,000 chest radiography images which had already been annotated by experts. Once Ark+ was trained, Liang and the research group assessed its ability to properly diagnose thoracic diseases. Testing began by using data similar to the data the model was trained on, highlighting Ark+’s effectiveness when assessing images within familiar contexts. Testing then progressed to using “unseen” datasets; these images could be from different clinical settings or hospitals with different imaging protocols and varied patient populations. Without testing using “unseen” datasets, the researchers would have no way of knowing if Ark+ could perform in real-world scenarios that would likely differ drastically from the model’s training environment. Eight scenarios were used to evaluate Ark+ within each testing stage (which included a total of ten different datasets):</p>
<ol>
<li>Diagnosing common thoracic diseases</li>
<li>Adapting to evolving diagnostic needs</li>
<li>Learning to diagnose rare conditions from a few samples</li>
<li>Handling long-tailed thoracic diseases</li>
<li>Adjusting to shifts in the diagnostic setting without training</li>
<li>Tolerating sex-related bias</li>
<li>Responding to novel thoracic diseases</li>
<li>Using private data while preserving patient privacy with federated pretraining</li>
</ol>
<p>After extensively testing Ark+’s performance across these eight scenarios, the research group found the model to be more successful than anticipated. The results were overwhelmingly positive. Ark+ proved to be generalizable, adaptable, robust, and extensible while remaining open, public, light, and affordable. In direct comparison, Ark+ outperformed nine other foundation models. Liang’s research shows that accruing and reusing knowledge from numerous public datasets containing expert annotations can create a better AI model than proprietary ones trained on unusually large amounts of data. Of course, developing and testing Ark+ would have been impossible without the use of a high-performance computing (HPC) resource such as Anvil. AI models are dependent upon access to powerful GPUs, and Anvil provides these cutting-edge resources to AI researchers nationwide.</p>
<p>“Given the computational intensity nature of pretraining strategies, we rely on the powerful GPUs provided by the Anvil supercomputer. These advanced computing capabilities enable us to support our diverse range of computational and data-intensive research effectively. By leveraging these computing resources, our research could accelerate significantly, enabling deep learning algorithms to better generalize real-world clinical data. Ultimately, this advancement will enhance the effectiveness of computer-aided diagnosis at the point of care.”</p>
<p>To view the full development and testing procedures of Ark+, as well as the comprehensive results, please read the Liang research group’s Nature publication: <a href="https://www.nature.com/articles/s41586-025-09079-8">A fully open AI foundation model applied to chest radiography</a></p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<ol>
<li>Ma, D., Pang, J., Gotway, M.B. &amp; Liang, J. A fully open AI foundation model applied to chest radiography. Nature 643, 488–498 (2025). <a href="https://doi.org/10.1038/s41586-025-09079-8">https://doi.org/10.1038/s41586-025-09079-8</a></li>
<li>Kim, N. An open AI model could help medical experts to interpret chest X-rays. Nature (2025). <a href="https://doi.org/10.1038/d41586-025-01525-x">https://doi.org/10.1038/d41586-025-01525-x</a></li>
</ol>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 14 Oct 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil Cluster Open OnDemand Maintenance - Sep 23]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2392</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2392</guid>
				<description><![CDATA[<p>The Open OnDemand service for Anvil will be unavailable from <strong>9:00am to 5:00pm EDT on Tuesday, September 23, 2025</strong>. During the maintenance, the Anvil team will reconfigure the Open OnDemand dashboard for Anvil, upgrading the current dashboard to a new version with the new features listed below.</p>
<h3>What’s new in dashboard v2?</h3>
<ul>
<li>
<strong>New UI design:</strong> A brand-new interface with a more modern look.</li>
<li>
<strong>Anvil AI Partition Status:</strong> Adds a partition status check for the new Anvil AI partition.</li>
<li>
<strong>Cluster Status and Node Status:</strong> Lets users view the overall Anvil cluster status at a glance or dive into the status of a specific compute node.</li>
<li>
<strong>New Announcement Widget:</strong> Now you can view past announcements with a scroll!</li>
<li>
<strong>New My Jobs page:</strong> Adds more features to the My Jobs page.</li>
<li>
<strong>New Job page:</strong> View or control specific jobs more easily through the job page.</li>
<li>
<strong>New Performance Metrics page:</strong> View your job performance metrics on Anvil over any time span.</li>
</ul>
<h3>What will impact you?</h3>
<ul>
<li>All Slurm jobs on Anvil (including jobs submitted through Open OnDemand before this maintenance) will continue running and will <strong>NOT</strong> be impacted.</li>
<li>All Open OnDemand functions, including login, will be unavailable during the maintenance.</li>
</ul>
<p>The Anvil Open OnDemand service will return to full production by 5:00pm EDT on Tuesday, September 23, 2025.</p>
<p>Please submit a ticket through ACCESS Help Desk at <a href="https://support.access-ci.org/help-ticket"><strong>https://support.access-ci.org/help-ticket</strong></a> if you have any questions or suggestions.</p>
]]></description>
				<pubDate>Tue, 23 Sep 2025 09:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Anvil Open OnDemand Dashboard version 2 enters production]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2396</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2396</guid>
				<description><![CDATA[<p>The Rosen Center for Advanced Computing (RCAC) has recently released the second version of their Anvil Open OnDemand (OOD) Dashboard. The Anvil OOD dashboard provides researchers who use the Anvil supercomputer with tools for user-friendly job accounting and performance metrics. Version 2 enhancements include the addition of Cluster Status, Job Overview, Cluster and Node Overview pages, as well as a Custom Timeframe selection option and a redesigned homepage.</p>
<p>The intent behind creating <img width="500" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-Dashboard-2.0/Screenshot%202025-03-31%20at%2011.29.49%E2%80%AFAM.png" />the Anvil dashboard was to provide Anvil users with an easy way to view metrics associated with their computational resources. These metrics—highlighting service unit usage, disk usage, queued jobs, etc.—help researchers understand how they are utilizing their computational resources and how they can improve their performance without any coding or command-line confusion. The development of the dashboard was part of the 2024 Anvil Research Experience for Undergraduates (REU) Summer program. Two students, Richie Tan and Anjali Rajesh, worked throughout the summer to build and deploy the dashboard on the Anvil system. Once the summer program ended, Tan was hired as a student employee to continue developing the dashboard.</p>
<p>Thanks to the work of the REU students, the Anvil OOD dashboard has been a huge success. The dashboard has a user-friendly interface for viewing users’ jobs and allocations, and makes this data easy to inspect for those who are unfamiliar with the command-line tools. This is especially true for users who primarily utilize the interactive applications on Anvil’s OOD for their research and education. The new dashboard provides detailed metrics and incorporates advanced data visualization techniques to highlight job distribution. It also assists with promoting efficient computing, alerting users to inefficient job requests and helping them minimize resource usage and queue wait time without losing job performance.</p>
<p>Version 1 of the Anvil dashboard went into production in January. It included features such as:</p>
<ul>
<li>
<strong>Homepage widgets</strong> showing service units, disk usage, queued jobs, etc.</li>
<li>
<strong>My Jobs</strong> page for a comprehensive view of recent jobs on Anvil.</li>
<li>
<strong>Performance Metrics</strong> page for job performance summary over specific periods of time.</li>
<li>
<strong>In-memory caching</strong> for API requests.</li>
</ul>
<p>While the first version of the dashboard was very useful, RCAC wanted to include more features and develop a sleeker design. Tan set out to completely redesign the user interface, include a host of new tools on the dashboard, and optimize the backend, including its caching. The full list of new features in Anvil dashboard Version 2 includes:</p>
<ul>
<li>
<strong>New UI design:</strong> A brand-new interface with a more modern look.</li>
<li>
<strong>Anvil AI Partition Status:</strong> Adds a partition status check for the new Anvil AI partition.</li>
<li>
<strong>Cluster Status and Node Status:</strong> Lets users view the overall Anvil cluster status at a glance or dive into the status of a specific compute node.</li>
<li>
<strong>New Announcement Widget:</strong> Now you can view past announcements with a scroll!</li>
<li>
<strong>New My Jobs page:</strong> Adds more features to the My Jobs page.</li>
<li>
<strong>New Job page:</strong> View or control specific jobs more easily through the job page.</li>
<li>
<strong>New Performance Metrics page:</strong> View your job performance metrics on Anvil over any time span.</li>
</ul>
<div class="my-3 text-center"><img width="650" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2024/Richi-Tan-Presents-at-GOOD-25/Screenshot%202025-03-25%20at%204.42.24%E2%80%AFPM.png" /></div> 
<p>Anvil OOD dashboard Version 2 is now live on the system and available for researchers to use. The dashboard has been incredibly useful to researchers so far and the RCAC is thrilled with its performance and reception. RCAC has plans to further improve the dashboard, and has already begun working on the next updates, which will include organizing node display by physical location (i.e., by rack) in the Cluster Status app, adding time series job data, upgrading to Open OnDemand v3.1 for use on other Purdue clusters, making it easier to port to other clusters, and, ultimately, contributing valuable features from the Anvil OOD dashboard to the main Open OnDemand project.</p>
<p>The Anvil OOD dashboard is made possible thanks to Open OnDemand. Open OnDemand was used as the base dashboard framework, which utilizes <a href="https://rubyonrails.org">Ruby on Rails</a> for the backend. <a href="https://getbootstrap.com">Bootstrap 4</a> was used for styling, and <a href="https://slurm.schedmd.com/overview.html">Slurm Workload Manager</a> for job accounting.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States. Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 23 Sep 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil impact highlighted in National Artificial Intelligence Research Resource Pilot webinar]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2391</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2391</guid>
				<description><![CDATA[<p>Dr. Haniye Kashgarani, a Senior AI Scientist at the Rosen Center for Advanced Computing (RCAC), recently gave a presentation for the NAIRR Pilot Partner Series Webinar. Her presentation focused on how Anvil, one of Purdue University's most powerful supercomputers, is helping researchers across the country to tackle scalable, data-intensive AI workloads.</p>
<p>Anvil is <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-AI-on-ACCESS/1W5A7969-Enhanced-NR.jpg" />a <a href="https://nsf.gov/">National Science Foundation (NSF)</a>-funded system that provides researchers from diverse backgrounds with advanced computing capabilities. In 2024, Anvil became an official resource provider for the newly launched <a href="https://nairrpilot.org/">National Artificial Intelligence Research Resource (NAIRR) Pilot</a> project. This is a pilot version of a project aimed at creating a national infrastructure that connects U.S. researchers to responsible and trustworthy Artificial Intelligence (AI) resources. The NAIRR project will also provide these researchers with equitable access to the data, software, training, computational, and educational resources needed to advance research, discovery, and innovation within the field of AI. In order to fully support the NAIRR Pilot, Anvil received supplementary funding from the NSF. This funding enabled RCAC to develop “Anvil AI,” an additional Anvil partition with advanced graphics processing units (GPUs) that are needed for AI workloads. A total of 84 Nvidia H100 SXM GPUs were procured and added to the system. Once the expansion was installed, Anvil was ready to take on NAIRR research projects and support the nation’s AI capabilities.</p>
<p>Kashgarani’s presentation began with an overview of the Anvil system, the traditional allocation system for utilizing Anvil (the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support [ACCESS]</a>, a program that serves tens of thousands of researchers across the United States), and the new <a href="https://nairrpilot.org/opportunities/allocations">NAIRR allocation system</a>. She also discussed how the NAIRR supplemental funding has supported Anvil via hardware upgrades and dedicated AI research scientists who can provide support to researchers using the system. Kashgarani continued the Anvil overview by highlighting scientific applications that are available on the Anvil system, including bioinformatics, computational chemistry, engineering, climate science, and AI software, as well as discussing the newly developed AnvilGPT, a large language model (LLM) service that researchers worldwide can easily access and use. AnvilGPT is hosted and supported entirely on-premises at Purdue, and no uploaded documents or queries are used for training, eliminating concerns about leaking intellectual property or proprietary data.</p>
<p>One notable computational capability that Anvil provides to NAIRR is a composable infrastructure. The Anvil Composable Subsystem is a Kubernetes-based private cloud managed with Rancher that provides a platform for creating composable infrastructure on demand. This cloud-style flexibility allows researchers to self-deploy and manage persistent services to complement HPC workflows and run container-based data analysis tools and applications. The composable subsystem is intended for non-traditional workloads, such as science gateways and databases, and the recent addition of composable GPU nodes supports tasks such as AI inference services and model hosting, a major boon for NAIRR researchers. Kashgarani noted in her presentation that the Anvil Composable setup makes it easy for researchers to launch and manage services, get feedback, and make updates to the applications and the services that they want to make available publicly.</p>
<div class="my-3 text-center"><img width="550" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/NAIRR-Haniye-Presentation/Screenshot%202025-08-20%20at%2011.39.23%E2%80%AFAM.png" /></div> 
<p>Another boon for NAIRR researchers that Anvil provides is the availability of a wide range of popular, domain-specific datasets. These datasets are hosted on the Anvil system and made available as modules, which can be easily loaded and added to workflows. To improve ease-of-use for the datasets, the Anvil team developed a conversational search made available through the dataset query. This enables a context-sensitive chat function that summarizes information from various dataset documents and works across multiple domains. Researchers can utilize this function to easily identify which datasets would be best for their specific work. Examples of these datasets include:</p>
<ul>
<li>
<strong>Genomes:</strong> A collection of reference sequences and annotation files for 38 commonly analyzed organisms.</li>
<li>
<strong>GOES-16:</strong> Nearly 10 TB of GOES-16 datasets. This dataset is currently being used to train AI models for improved weather forecasting.</li>
<li>
<strong>NOAA AORC:</strong> 31 TB of the NOAA Analysis of Record for Calibration (AORC) dataset. AORC forcing data covers the years 1979-2021.</li>
<li>
<strong>NCBI BLAST:</strong> NCBI BLAST databases to support the life sciences community. These continue to be mirrored and updated.</li>
<li>
<strong>EggNOG-mapper:</strong> EggNOG-mapper, a tool for functional annotation of biological sequences, relies on precomputed databases of evolutionary relationships to annotate novel genomes, transcriptomes, or metagenomes.</li>
</ul>
<p>Next, Kashgarani discussed Anvil’s User Support and Training services. Anvil offers a tiered user support structure. Tier 1 handles triage and first responses, while Tiers 2 and 3 bring domain expertise when needed. NAIRR researchers can utilize this structure by submitting support tickets. For quick questions or a more informal setting, the Anvil team also offers regular Anvil support hours. These are dedicated sessions each week where users can “drop in” (virtually) and get one-on-one live feedback from one of the Anvil support staff. As for training, the Anvil team offers a number of options for users of all experience levels, in both asynchronous and live, lecture-style formats.</p>
<p>To wrap up the webinar, Kashgarani highlighted some of the current NAIRR research projects that are being conducted on Anvil. The three examples she showcased were:</p>
<ul>
<li>
<p><strong>Transforming 3D Object Detection for Safer Self-Driving:</strong> Using Anvil AI resources, researchers at Cornell developed a novel autoregressive model for 3D bounding box prediction, enabling robust detection under occlusion.</p>
</li>
<li>
<p><strong>Advancing Epilepsy Diagnosis with LLMs:</strong> On Anvil GPU, Stevens Institute researchers curated and processed seizure data for patients, developing a language-model-based tool to identify epileptogenic zones from clinical descriptions—accelerating clinical insights.</p>
</li>
<li>
<p><strong>Personalizing Mask Design Through Dynamic Speech Modeling:</strong> Leveraging Anvil GPU, FAMU-FSU researchers simulated real-time mask leakage during speech, revealing gender-based leakage patterns and informing design improvements for public health protection.</p>
</li>
</ul>
<p>Kashgarani’s webinar presentation was well received. At the time of the presentation, Anvil had 31 active NAIRR projects led by a total of 105 unique researchers. The webinar certainly sparked interest, and several Principal Investigators reached out to submit proposals after the event. Anvil now has 34 active NAIRR projects with 115 unique researchers, with more inquiring about the allocation process each week. To view Kashgarani’s presentation in its entirety, please visit: <a href="https://www.youtube.com/watch?v=a_laiv5Py34">NAIRR Pilot Partner Series Webinar</a>.</p>
<p>Researchers and educators can apply for access to NAIRR resources and view descriptions of NAIRR projects at <a href="https://nairrpilot.org/">https://nairrpilot.org/</a>. Resource request submissions can be made following the process outlined in <a href="https://nairrpilot.org/opportunities/allocations">https://nairrpilot.org/opportunities/allocations</a>. Submissions should select “Purdue Anvil CPU” or “Purdue Anvil GPU” as the preferred resource, depending on the user’s needs. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>.</p>
<p>More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Mon, 08 Sep 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil enters year four of production]]></title>
				<link>https://www.rcac.purdue.edu/index.php/news/2378</link>
				<guid isPermaLink="true">https://www.rcac.purdue.edu/index.php/news/2378</guid>
				<description><![CDATA[<p>Anvil, one of Purdue’s most powerful supercomputers, continues its pursuit of excellence in HPC as it enters its fourth year of operations. Funded by a $10 million acquisition grant from the <a href="https://www.nsf.gov">National Science Foundation (NSF)</a>, Anvil began early user operations in November 2021 and entered production operations in February 2022. After three years online, Anvil has more than proven its value. The supercomputer has been used to help over 12,000 researchers push the boundaries of scientific exploration in a variety of fields, including artificial intelligence, astrophysics, climatology, and nanotechnology. This past year was also marked by an explosion of growth for Anvil, both in machine size and usage statistics. Thanks to supplemental funding from the NSF’s <a href="https://nairrpilot.org/">National Artificial Intelligence Research Resource (NAIRR) Pilot</a>, the Anvil AI partition was added to the supercomputer and brought online. A total of 84 Nvidia H100 SXM GPUs were procured and added to the system. With this upgrade, Anvil is now poised to deliver a world-class AI supercomputing resource to researchers nationwide.</p>
<h3>Anvil at a Glance—Three Years of Operations</h3>
<div class="my-3 text-center"><img width="650" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-Y4/Anvil_stats_20241231_cropped.jpg" /></div> 
<p>Over the past three years, Anvil has had a significant impact on scientific research and student development. With more than 12,000 total users thus far (double the number from its second year of operations), of which over 6,000 were undergraduate students (another twofold increase), Anvil is not only helping meet the growing need for <a href="https://www.rcac.purdue.edu/anvil/why-hpc">high-performance computing (HPC)</a> within the realms of research, but also actively assisting with the development of cyberinfrastructure professionals of tomorrow. Overall, Anvil has allowed users access to 1.8 billion CPU hours and 1.8 million GPU hours, supporting research across 65 diverse scientific domains. In 2024 alone, 165 research publications (a ~2.3x increase from 2023) cited Anvil usage. Aside from the supercomputer itself, the Anvil team has been hard at work promoting the benefits of HPC and ensuring the nation has a workforce trained in the use, operation, and support of advanced cyberinfrastructure. In its three years of operations, the Anvil team has participated in 72 outreach events and conducted 34 training sessions, with a multitude already planned for the coming year. These training sessions are designed to deliver working knowledge of HPC systems and teach users how to get the most out of their research time on Anvil. The team also provided hands-on training to students through initiatives such as the Anvil Summer REU program and RCAC’s CI-XP student program, which allowed the students to gain much-needed knowledge and experience in the field of HPC.</p>
<h3>Anvil Tech Specs</h3>
<p>Anvil is a supercomputer <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-AI-on-ACCESS/1W5A7969-Enhanced-NR.jpg" />deployed by Purdue’s <a href="https://www.rcac.purdue.edu">Rosen Center for Advanced Computing (RCAC)</a> in partnership with Dell and AMD.  The system was created to significantly increase the computing capacity available to users of the NSF’s <a href="https://access-ci.org">Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States. Before the new expansion, Anvil’s system consisted of 1,000 Dell compute nodes, each with two 64-core third-generation AMD EPYC processors, 32 large memory nodes with 1 TB of RAM per node, and 16 GPU nodes, each with four NVIDIA A100 Tensor Core GPUs, all of which are interconnected with 100 Gbps Nvidia Quantum HDR Infiniband. The new NSF NAIRR funding has added 21 Dell PowerEdge XE9640 compute nodes, each with 4 Nvidia 80GB H100 SXM GPUs, as well as an additional 1 PB of flash-based object storage integrated into Anvil’s composable subsystem. The new GPU nodes also feature an additional NDR Infiniband fabric to support larger AI workloads.</p>
<p>“Anvil joined the NAIRR Pilot as a resource provider in May of 2024,” says Rosen Center Chief Scientist Carol Song, principal investigator and project director for Anvil. “We made available Anvil’s discretionary capacity, which was allocated entirely to researchers, right away. This H100 GPU expansion not only gives Anvil a significant boost to the amount of resources available to the NAIRR Pilot users, but also provides a major increase in Anvil’s GPU computing power. The H100 GPU outperforms the current A100 GPU in Anvil by as much as nine times in computing speed. Many workloads, especially AI model training and inference, will run much faster, reducing the time-to-results for researchers.”</p>
<p>In 2024, GPU capabilities were upgraded for the Anvil Composable Subsystem of the Anvil supercomputer. The Anvil Composable Subsystem now hosts eight composable nodes, each with 64 cores and 512 GB of RAM, and multiple GPU nodes with a total of 4 NVIDIA A100 80GB GPUs and 4 NVIDIA H100 96GB GPUs. The Anvil Composable Subsystem is a Kubernetes-based private cloud managed with Rancher that provides a platform for creating composable infrastructure on demand. This cloud-style flexibility allows researchers to self-deploy and manage persistent services to complement HPC workflows and run container-based data analysis tools and applications. The composable subsystem is intended for non-traditional workloads, such as science gateways and databases, and the addition of the composable GPU node supports tasks such as AI inference services and model hosting.</p>
<h3>Anvil Innovations</h3>
<p>The Anvil supercomputer has been host to a number of innovations throughout the past year. From on-premises generative AI, to a Jupyter Notebook platform, to increased datasets and a streamlined, user-friendly dashboard, the Anvil team has strived to provide researchers with the best cutting-edge tools to help advance their work. These innovations include:</p>
<p><strong>AnvilGPT:</strong> AnvilGPT is a large language model (LLM) service that makes open-source LLM models like LLaMA accessible worldwide to ACCESS researchers. Unlike other LLM services, AnvilGPT is hosted entirely with on-premises (on-prem) resources at Purdue. This means researchers have more democratized access to LLMs, as well as more control. AnvilGPT is hosted on the Anvil Composable Subsystem and leverages the powerful H100 GPUs for rapid processing. The service was designed to provide a secure, central, and flexible AI platform tailored for Anvil users. Anyone with an Anvil allocation has access to AnvilGPT for free.</p>
<p><strong>Anvil Notebook Service:</strong> The Anvil Notebook Service is a cloud-based, scalable platform for web-based Jupyter Notebooks. It offers access to CPU and GPU resources through a variety of Jupyter notebooks supporting Python, R, Julia and popular machine learning frameworks like Tensorflow and PyTorch. The notebook service is also tightly integrated with the Anvil HPC system, allowing users to interact with data stored on Anvil and submit jobs to Anvil's batch system.</p>
<p><strong>Scaling Anvil Composable:</strong> With the addition of AnvilGPT and the Anvil Notebook Service, as well as already hosting 12 Science Gateways with various scaling requirements, Anvil has seen an ever-increasing demand for Kubernetes infrastructure. To meet this heightened demand (which often exceeded the Kubernetes resource capacity), the Anvil team has developed an automated batch-to-Kubernetes conversion process. This process utilizes idle batch nodes on the Anvil HPC system to increase Kubernetes resources, which not only allows Kubernetes to perform at scale, but also maximizes the use of Anvil’s 1,000+ node capacity. The ability to scale the composable system has already been used to great effect:</p>
<ul>
<li>
<p>NanoHUB <a href="https://www.rcac.purdue.edu/news/6828">STARS Workshop</a></p>
<p>-NanoHUB staff integrated their hub with Anvil Composable to scale out tool sessions</p>
<p>-Supported 75 participants launching tool sessions with 4 cores and 16 GB of RAM</p>
</li>
<li>
<p>CyberFACES (NSF CyberTraining)</p>
<p>-Custom JupyterHub supporting hundreds of participants</p>
</li>
<li>
<p>Purdue DataMine (Anvil Notebook Service, 2025)</p>
<p>-1,200+ students currently using Anvil batch to launch notebooks</p>
</li>
</ul>
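The first step of such a batch-to-Kubernetes conversion is identifying idle batch nodes that can be temporarily repurposed. Below is a minimal sketch of that selection step, assuming Slurm output in `sinfo -N -h -o "%N %t"` form; the node names, selection policy, and node-count cap are hypothetical illustrations, not RCAC's actual process.

```python
def select_idle_nodes(sinfo_lines, max_nodes=4):
    """Pick idle Slurm nodes that could be temporarily joined to Kubernetes.

    Expects lines in `sinfo -N -h -o "%N %t"` format: "<node> <state>".
    Only nodes reported as fully idle are candidates; the cap keeps a
    reserve of batch capacity. (Selection policy is illustrative only.)
    """
    candidates = []
    for line in sinfo_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        node, state = parts
        if state == "idle":
            candidates.append(node)
    return candidates[:max_nodes]


# Example sinfo-style output (hypothetical node names):
sample = [
    "a001 alloc",
    "a002 idle",
    "a003 idle~",   # powered-down idle: not immediately usable
    "a004 idle",
    "a005 mix",
]
print(select_idle_nodes(sample, max_nodes=2))  # -> ['a002', 'a004']
```

In a full pipeline, each selected node would then be drained from Slurm, joined to the Kubernetes cluster, and returned to the batch pool when demand shifts back.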
<p><strong>Open OnDemand Dashboard:</strong> As part of their Anvil REU experience, undergraduate students Richie Tan and Anjali Rajesh developed an Anvil web dashboard to highlight Anvil usage metrics and make complex information more accessible to Anvil users. By creating this dashboard, Rajesh and Tan provided Anvil users the ability to effortlessly tap into relevant metrics that can help them understand how they are using their computational resources and how they can improve their performance without any coding or command-line confusion. The dashboard has been so successful that it has been shared with the Open OnDemand project (which it was built on) for broader use. Some of its key features include:</p>
<ul>
<li>Homepage widgets showing service units, disk usage, queued jobs, etc.</li>
<li>My Jobs page for a comprehensive view of recent jobs on Anvil.</li>
<li>Performance Metrics page for job performance summary over specific periods of time.</li>
<li>In-memory caching for API requests.</li>
</ul>
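On the last item above: the dashboard is built on Open OnDemand's Ruby on Rails stack, but the idea of in-memory caching for API requests is language-agnostic. Here is a minimal Python sketch of a time-to-live cache; the TTL value, function names, and cached payload are illustrative assumptions, not the dashboard's actual code.

```python
import time

def ttl_cache(ttl_seconds=60):
    """Decorator caching a function's results in memory for ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (timestamp, result)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]          # fresh cached value: skip the API call
            result = fn(*args)
            store[args] = (now, result)
            return result
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=30)
def fetch_usage(account):
    """Stand-in for an expensive API request (hypothetical endpoint)."""
    global calls
    calls += 1
    return {"account": account, "service_units": 1234}

fetch_usage("myalloc")
fetch_usage("myalloc")      # served from cache; the API is hit only once
print(calls)                # -> 1
```

Caching like this trades a bounded window of staleness for far fewer round trips to the job accounting backend, which is what keeps a metrics dashboard responsive.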
<p><strong>Datasets:</strong> The Anvil team has incorporated popular domain-specific datasets onto the system to optimize user workflows. A module system enables searching by dataset category, e.g., hydrological models, geospatial models, etc. The team also included automatic web-based documentation generation for discoverability and search functionality. Perhaps the biggest innovation within the datasets is the conversational search made available through the dataset query. This enables a context-sensitive chat function that summarizes information from various dataset documents and works across multiple domains.</p>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-Y4/Screenshot%202025-08-20%20at%2011.39.23%E2%80%AFAM.png" /></div> 
<h3>Enabling science through advanced computing</h3>
<p>Thanks to its configuration and cutting-edge hardware, Anvil is one of the most powerful academic supercomputers in the US. When it debuted, the Anvil supercomputer was listed as number 143 on the Top500 list of the world’s most powerful supercomputers. Anvil’s advanced processing speed and power have allowed researchers to save hours of computation and simulation time, enabling innovative scientific research and discovery. The highlights given below are but a few of the hundreds of use cases stemming from Anvil:</p>
<p><strong>1)</strong> Researchers from the George Washington University used Purdue’s Anvil supercomputer to simulate fluid flows in order to elucidate the physics of turbulent bubble entrainment. Understanding this process will lead to practical applications in a variety of fields, including oceanography, naval engineering, and environmental science.</p>
<p>Andre Calado is <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-Y4/Pic_jon.png" />a Graduate Research Assistant at the George Washington University, working to complete his PhD in computational fluid dynamics. He, alongside his advisor Elias Balaras—a professor in the <a href="https://mae.engineering.gwu.edu">Department of Mechanical and Aerospace Engineering</a>—wanted to advance the study of two-phase flows (air and water), specifically how turbulence underneath the water interacts with the water’s surface and the role this plays in air entrainment. The pair used Anvil to run direct numerical simulations in order to produce high-fidelity simulations of the physics of bubble entrainment. Their work has pushed the boundaries of what has been accomplished so far within two-phase flow research.</p>
<p>“We’re happy to be using Anvil to perform these calculations,” says Calado. “It has been very helpful. These are very large computations—we’re talking about thousands of cores at a time. So we need these resources to do the fundamental research in order to understand the physics, and then hopefully apply what we learn to more practical engineering calculations.”</p>
<p><strong>2)</strong> Dirty air, or particle-laden flow, as it’s known in the hypersonics research world, can be extremely problematic for vehicles traveling at hypersonic speeds. Tiny particles, sub-micrometer in size, are deposited into the air via natural events, such as volcanic eruptions, ice clouds, and atmospheric dust, or through human-induced air pollution. These particles then impact the vehicle as it flies through the atmosphere. While it may seem that these particles are too small to be problematic, they can actually cause damage and increase the risk of functional failure. Dr. Qiong Liu, an Assistant Professor in the <a href="https://mae.nmsu.edu/">Department of Mechanical and Aerospace Engineering at New Mexico State University</a>, along with Irmak Karpuzcu, Akhil Marayikkottu, and Deborah Levin, all from the <a href="https://aerospace.illinois.edu/">Department of Aerospace Engineering at the University of Illinois, Urbana-Champaign</a>, have used the Anvil supercomputer to elucidate exactly what happens when particles hit the surface of a hypersonic vehicle in flight.</p>
<p>“We are looking to understand the effects of tiny particle impact on the surface of hypersonic vehicles,” says Liu. “These high-speed vehicles usually have thermal protection around the surface to help with excessive heating, but repeated particle impact, even from such small particles, are going to cause damage to the thermal protection, which will cause significant troubles during flight.”</p>
<p>Using the direct simulation Monte Carlo (DSMC) method, the researchers studied the fundamental flow physics and particle trajectory in the flow field around a blunted cone, the most common forebody shape used in hypersonic flight. The simulations looked at particles ranging from 0.01 micrometers up to 2 micrometers in size. With these simulations, the group was able to determine both the effects that a single particle of varying sizes had on the bow shock and the statistical characteristics of those particles. They found that lighter particles (less than 0.02 micrometers) could not penetrate the bow shock wave, and so could never directly impact the vehicle. However, heavier particles (greater than 0.2 micrometers) passed through the bow shock, directly impacted the vehicle, ricocheted upstream, and then traveled downstream in the flow. This unique interaction and motion of heavier particles led to bow shock distortions.</p>
<p>With this study, the research team delivered some much-needed clarity regarding the physics of hypersonic flight, but it was only possible with help from supercomputing resources like Anvil. Liu was thrilled with Anvil’s performance throughout the project.</p>
<p>“We are really, really happy with computing on Anvil,” says Liu. “The code was well parallelized, so we had no problems running that. Also, the queue was very short, so we were able to submit jobs and get results very quickly. I’ve actually encouraged many of my new colleagues to apply to use Anvil because I had such a good experience.”</p>
<p><strong>3)</strong> Researchers from the University of Wisconsin (UW)–Madison used Purdue’s Anvil supercomputer to study turbulence and turbulent transport in astrophysical plasmas. This research seeks to elucidate the fundamental physics of turbulence, which will have applications across the fields of fluid and plasma dynamics. The group not only pushed the boundaries of scientific research with their work, but also tested the performance limits of Anvil, utilizing upwards of half the machine (512 nodes at once) to run a single simulation.</p>
<p>Bindesh Tripathi, who spearheaded the project, is working toward finishing his doctoral dissertation in the <a href="https://www.physics.wisc.edu/">Department of Physics at UW–Madison</a>. Under the joint supervision of advisors Dr. Paul Terry and Dr. Ellen Zweibel, both of whom are professors at the university, Tripathi conducts research involving astrophysics and plasma physics, mathematical/theoretical physics, and numerical methods. Tripathi used Anvil to shed light on the underlying physics of stable-mode excitations within fluid and plasma dynamics, a little understood phenomenon that occurs at large (galactic) scales. To accomplish this task, Tripathi first had to make several bespoke changes to a 3-dimensional (3D) magnetohydrodynamics simulation software known as Dedalus. Then, in order to run the code successfully, the group needed access to an extraordinary amount of computing power, which the Anvil supercomputer was able to provide. To support the researchers’ work, the Anvil team set up a special allocation that allowed the group to utilize 512 nodes at once. The group routinely used 30,000 to 40,000 cores simultaneously. To be clear, this was a parallel code, so one single simulation required the use of all of the cores at the same time. This level of computation for a real-world research problem had not yet been tested on Anvil, but the computer was able to handle it with no issues. Tripathi’s code ran seamlessly, even at such a large scale, and he was thrilled with the performance of the system.</p>
<p>“I ran the Dedalus code, and I found it running beautifully well,” says Tripathi. “Anvil has a large number of cores, and the queue time was relatively short, even for the very large resources that I was requesting, and the jobs would run quite fast. So it was a quick turnaround, and I got the output pretty quickly. I have had to wait a week or even longer on other machines, so Anvil has been quite useful and easy to run the code. Anvil has also generously provided us with storage of a large dataset, which now amounts to 125,000 gigabytes from my turbulence simulations.”</p>
<video width="100%" height="auto" preload controls>
    <source src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Bindesh-Tripathi/Tripathi_Anvil_film_final.mp4" type="video/mp4" />
</video>
<h3>Training and Education Impacts</h3>
<p>Aside from enabling groundbreaking research across multiple fields of science, Anvil is being used as a tool to develop the future workforce in computing. From professional training and workshops to hands-on learning experiences for students, Anvil is helping to forge the next generation of researchers and cyberinfrastructure professionals.</p>
<h4>Professional Training</h4>
<p>One major training and <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/BigCare-2023-Summer-Workshop/BigCARE-2024/IMG_0455.jpg" />educational impact made by Anvil involved supporting the <a href="https://bigcare.uci.edu">2024 BigCARE Summer Workshop</a>, which took place at the University of California, Irvine (UCI). The BigCARE Workshop was a National Cancer Institute-funded biomedical data analysis workshop designed to train cancer researchers on how to visualize, analyze, manage, and integrate large amounts of data in cancer studies. This year’s workshop focused on analyzing and interpreting genomic and genetic data, including transcriptomic analyses, epigenomic analyses, genome-wide association analyses, and network analyses. Thanks to supplemental funding from the <a href="https://www.niaid.nih.gov/">National Institute of Allergy and Infectious Diseases (NIAID)</a>, the workshop also covered COVID and microbiome data analysis by introducing infectious and immune-mediated disease-related data sets, a first for BigCARE. Dr. Min Zhang, the principal investigator on the NCI-funded project, taught the workshop participants the skills needed to analyze their research data, while Anvil provided an HPC environment that had a very low barrier to entry, ensuring that non-HPC professionals could quickly and easily complete their research without having to become experts in computing.</p>
<p>“During the previous big data workshops I organized,” says Zhang, “participants faced significant challenges as they had to navigate both the command line interface and the R programming environment, which often led to difficulties as most participants have limited computing skills. Anvil’s powerful computing capabilities allow participants to handle large-scale omics data more efficiently, making analysis of next-generation sequencing data more accessible.”</p>
<p>Anvil was so helpful for the workshop that Zhang intends to continue using it as the resource supporting BigCARE for the foreseeable future. “We are pleased to announce that our R25 grant, ‘Big Data Training for Cancer Research,’ has been renewed by the National Cancer Institute for the next five years,” says Zhang. “We look forward to the continued fruitful collaboration with the Anvil group, leveraging their expertise to drive our program forward.”</p>
<p>Another workshop supported by the Anvil supercomputer was the <a href="https://www.secm4.org/">2024 Southeastern Center for Microscopy of Macromolecular Machines (SECM4)</a> data processing workshop. The workshop focused on teaching researchers how to process data for single-particle analysis cryogenic electron microscopy (SPA-cryo-EM). The workshop was developed and led by Dr. Nebojša (Nash) Bogdanović, a faculty member specializing in cryo-EM who co-manages the operations of the SECM4 cryo-EM service center, located at Florida State University. Attendees learned how to use HPC-based cryo-EM software such as RELION, CryoSPARC, and ML-based Topaz for tasks like preprocessing, particle picking, 2D and 3D reconstruction, classification, and model building.</p>
<p>Due to the nature of cryo-EM work and the workshop’s size, the instructors required a resource capable of providing large-scale computing power. They turned to Anvil for that support.</p>
<p>“We understood that the computational resources required for this work are very intense,” says Dr. Bogdanović. “So we needed 4 to 8 GPUs, 500 GB to 1 TB of RAM (or more), as well as a large SSD allocation. What we managed to do with Anvil was to use their implementation of CryoSPARC, a software we use readily in our field, and distribute it to the 12 participants in our workshop to demonstrate how each step is carried out.”</p>
<p>The Anvil team provided the SECM4 workshop with access to the supercomputer’s advanced GPUs and granted a special dispensation to reserve a block of GPUs for the three-day course.</p>
<p>“So we, thanks to the kindness of the Anvil team, were able to reserve up to 10 GPUs simultaneously,” adds Dr. Bogdanović, “guaranteeing that our participants could run jobs during the workshop. That worked out wonderfully, and we are very grateful Anvil was able to do this.”</p>
<p>Dr. Bogdanović was delighted with Anvil’s performance. To prepare for the workshop, he gained access to Anvil months in advance through the proposal-based, NSF-funded ACCESS program, ensuring the system would fit his needs. CryoSPARC was already installed and ran flawlessly, better than his experience with the software on other HPC systems. RELION was also available, but he needed a different version for the workshop. The Anvil team was on hand to help and guided him through the specifics of the installation, and RELION worked perfectly when implemented. Dr. Bogdanović prepared all the results for the workshop projects in advance to determine what was most suitable for inclusion and to keep backups on hand in case of any hiccups. Fortunately, everything ran smoothly, and the workshop was a huge success. Participants found it so useful that five went on to apply for and receive their own Anvil allocations.</p>
<h4>Student Support and Education</h4>
<p>In its third year of operations, Anvil expanded its scope of student support, directly and indirectly helping high school students develop their computing and HPC skills.</p>
<p>The first major push to engage younger students came in the summer of 2024. RCAC, utilizing Anvil resources and with support from the Anvil staff, hosted two summer camps aimed at high schoolers, with the hope of introducing them to the college experience by giving them the chance to earn college credit, explore potential majors, and experience campus life.</p>
<p>The first of the two camps, CyberSafe Heroes: A Week of Cybersecurity Mastery, focused on cybersecurity best practices and career pathways. Students participated in encryption challenges, ethical hacking simulations, cybersecurity escape rooms, online safety workshops, and engaging career panels with cybersecurity professionals.</p>
<p>The second camp, Code Explorers: Coding and Environmental Discovery, focused on creating an immersive introduction to coding, connecting it to environmental science. Students were able to code with microcontrollers, conduct data analysis with Python, create environmentally themed games, and more.</p>
<p>Another instance <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Ana-Neuman-HS-Student-Polymer-Physics/37780006.JPG" />of high school student support came when Sarah Will, a senior at the <a href="https://www.scienceandmathacademy.com/">Science and Mathematics Academy at Aberdeen High School (SMA)</a> in Aberdeen, Maryland, completed her senior capstone project by conducting research utilizing the Anvil supercomputer. Will worked under the guidance of PhD student Anastasia Neuman, from the <a href="https://cbe.seas.upenn.edu/">Chemical and Biomolecular Engineering Department at the University of Pennsylvania</a>. The pair decided to expand on research previously conducted by Neuman, which looked into how confinement within nanoparticle packings affected the miscibility (the ability to be mixed at a molecular level to produce one homogeneous phase) of a bulk polymer blend. For this project, the two used Anvil to simulate the effects nanoparticle packings have on block copolymers (BCPs). The results of the new project were unexpected, but will help experimental researchers produce materials with specific BCP phase structures. This could lead to novel polymer properties (e.g., improved malleability or conductivity), which could lead to solutions for problems such as CO2 separation, rechargeable batteries, food packaging, and tissue engineering.</p>
<p>One thing that stood out for Neuman was Anvil’s accessibility and ease of use. Will was a first-time HPC user. She had no experience working within a terminal and was unfamiliar with HPC server environments. Thanks to Anvil’s Open OnDemand portal, Will was able to log into the cluster via a web browser, even on the high school computers, which don’t allow students to download any software.</p>
<p>“I think it’s easier for students because they are used to working with a web browser more than they are a terminal,” says Neuman. “So being able to access Anvil with Open OnDemand made it a lot more user-friendly and a great introduction to computational work. I’ve even recommended Anvil to my supervisor at UPenn, who teaches many classes that introduce computational work to undergraduates.”</p>
<p>Anvil student support also extended to undergraduates. Throughout its third year, Anvil supported roughly 1,700 students in a national data science experiential learning and research program known as <strong><a href="https://datamine.purdue.edu/">The Data Mine</a></strong>. The goal of The Data Mine is to foster faculty-industry partnerships and enable the adoption of cutting-edge technologies. The course introduces students of all levels and majors to concepts of data science and coding skills for research. The students then partner with outside companies for a year to work on real-world analytic problems. Anvil provided 1 million CPU hours for the program and allowed the students to manage extensive research datasets, thanks to the supercomputer’s large capacity.</p>
<p>Anvil also supported <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Chipshub/Screenshot%202024-08-12%20at%208.52.21%E2%80%AFAM.png" />nearly 60 students via the <strong><a href="https://engineering.purdue.edu/semiconductors/stars">2024 STARS summer program</a></strong>. STARS is an eight-week, on-site program offered by Purdue University’s College of Engineering. The program is designed to teach undergraduate students deep-tech skills in integrated circuit design, fabrication, packaging, and semiconductor device and materials characterization. The backbone of the program is <strong><a href="https://chipshub.org/">Chipshub</a></strong>, the online platform for all things semiconductor. Chipshub is powered by nanoHUB, the first end-to-end platform for online scientific simulations.</p>
<p>“Chipshub extends nanoHUB’s success to deliver both open-source and commercial software that supports a semiconductor community through workforce development at scale,” says Gerhard Klimeck, Chipshub co-director, Elmore Professor of Electrical and Computer Engineering and Riley Director of the Center for Predictive Devices and Materials and the Network for Computational Nanotechnology.</p>
<p>Chipshub partnered with RCAC to leverage the power of the Anvil supercomputer. By taking advantage of the Anvil Composable Subsystem, Chipshub can deliver the power of HPC to hundreds of users at once and significantly cut down time spent waiting for results. This ability to compute at scale allows Chipshub to drive semiconductor workforce development throughout the nation without having to limit classroom size, which is precisely what the STARS program did.</p>
<p>“Chipshub proved itself as each member of the STARS cohort was concurrently running simulations, producing chip layouts, and running physical verification continuously for the final three weeks of STARS,” says Dr. Mark C. Johnson, who led the STARS program. “Collectively, 12 teams of four to five students each produced a chip design that has been combined into a single layout and will be submitted in September 2024 for fabrication.” During the program, Chipshub and Anvil powered 1,800 simulation sessions and 6,000 interactive hours.</p>
<p>The most direct and intensive undergraduate student support provided by Anvil was RCAC’s very own Anvil Research Experience for Undergraduates (REU) Summer 2024 program. The 2024 Anvil REU program saw eight students from across the nation gather at Purdue’s campus in West Lafayette, Indiana, for 11 weeks to learn about HPC and work on projects related to the operations of the Anvil supercomputer. Eight members of RCAC’s staff provided mentorship to the students throughout the summer, helping them to complete four separate Anvil-enhancing projects. The student participants of the program were:</p>
<ul>
<li>
<strong>Jeffrey Winters</strong>, Computer Science and Engineering double major, University of California, Merced</li>
<li>
<strong>Alex Sieni</strong>, Computer Science and Statistics double major, University of North Carolina at Chapel Hill</li>
<li>
<strong>Richie Tan</strong>, Computer Science major, Purdue University</li>
<li>
<strong>Anjali Rajesh</strong>, Computer Science major, Rutgers University</li>
<li>
<strong>Nihar Kodkani</strong>, Computer Science and Math double major, Purdue University</li>
<li>
<strong>Selina Lin</strong>, Computer Science and Math double major, Purdue University</li>
<li>
<strong>Philip Wisniewski</strong>, Computer Science major, Purdue University</li>
<li>
<strong>Austin Lovell</strong>, Computer Science major, Purdue University</li>
</ul>
<p>By summer’s end, these eight students made fantastic progress: they completed their projects, learned the technical and people skills they will need in the workforce, and gained an in-depth understanding of the world of HPC. In fact, since the conclusion of the 2024 Anvil REU program, six of the students have taken up student positions and continue their work at RCAC. Many have also gone on to present their work at national conferences, including the 2024 International Conference for High-Performance Computing, Networking, Storage, and Analysis (SC24) and the Global Open OnDemand 2025 Conference (GOOD 2025).</p>
<div class="my-3 text-center"><img width="650" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2024/REU24.png" /></div> 
<h3>Industry Partnerships</h3>
<p>Anvil’s third year of production saw an explosion of growth for its Industry Partnership program. This program allows industry users to utilize the Anvil supercomputer for their business needs at a fraction of the cost charged by private HPC providers. Examples of some of the current Industry Partnership users, as well as projects under discussion, include:</p>
<ul>
<li>
<strong><a href="https://myradar.com">MyRadar</a></strong>: high-resolution weather prediction</li>
<li>
<strong>BlueWave AI Labs</strong>: AI/ML for nuclear plant operational and regulatory efficiency improvements</li>
<li>Smart building technology company (Kubernetes GPU workloads)</li>
<li>LLM-based tool for conversational assistance during emergencies</li>
<li>AI-driven platform for airport power infrastructure management for electric aircraft</li>
<li>Electromagnetic propulsion systems</li>
<li>Generative AI for personalized content</li>
<li>Medical research company working on blood test-based cancer detection</li>
<li>Life sciences diagnostics company</li>
<li>Technology company aimed at early detection of TBI and cognitive impairment</li>
</ul>
<p>To learn more about the Industry Partnership program, please visit: <a href="https://www.rcac.purdue.edu/industry">https://www.rcac.purdue.edu/industry</a></p>
<h3>Continuing Anvil’s success</h3>
<p>The Anvil team is thrilled by all it accomplished in its third year, and is looking forward to driving discovery and innovation throughout its fourth year of operations and beyond. The team has multiple plans for the coming year, including targeted training and support, the development of a complete scalable and dynamic ecosystem of services, and increasing the conscientious use of AI for the advancement of science and technology.</p>
<p>“Anvil has established itself as a major HPC resource to the national research community,” says Preston Smith, Executive Director for the Rosen Center for Advanced Computing and co-PI on the Anvil project. “After three years in production, we are pleased with everything Anvil has enabled thus far, whether it be the science conducted on the machine or the training and education opportunities it has provided. Looking ahead to year four, our goal is to continue to innovate, helping expand the boundaries of scientific discovery, while still providing world-class support and education for researchers nationwide. With our inclusion as a resource for the NAIRR Pilot, we are looking forward to the new challenges in the upcoming year.”</p>
<p>Anvil is funded under NSF award number 2005632. The Anvil executive team includes Carol Song (PI), Preston Smith (Co-PI), Erik Gough (Co-PI), and Arman Pazouki (Co-PI). Researchers may request access to Anvil via the <strong><a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a></strong>.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Fri, 22 Aug 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
			</channel>
</rss>