Scratch Purging to resume on Coates, Rossmann, Hansen, Carter
Due to issues with the automated processes indexing the Lustre filesystems, resumption of scratch purging has been postponed. The first automated mailing of purge warnings for LustreA and LustreD will be sent on April 1, and the purge will occur on April 8.
The first automated mailing of purge warnings for LustreC will be sent on May 6, and the purge will occur on May 13.
During the week following Spring Break, 2014 (March 24-28), scratch filesystem purging will resume on the filesystems serving Coates, Rossmann, Hansen, and Carter - known as LustreA and LustreC.
On March 25, the first automated purge warnings will be mailed, with purges actually occurring on April 1, 2014.
It has been over a year since a purge was run on these filesystems, but as they approach 80% of capacity, performance will begin to degrade. Before this occurs, storage engineers have begun scanning the filesystems in preparation for resuming purging.
When purging is restarted, though, one parameter will be slightly different than before. Rather than purging based on an individual file's creation time, LustreC and LustreA will be purged based on the last access time of each file.
Any file not accessed in 90 days will be subject to purging.
As before, you can use the "purgelist" command to report which files are marked for purging.
It is our intention that this purge policy will make it easier for actively used datasets to remain in place and make your research computing more productive.
Preparing for Purge
To minimize the impact of the first purge, ITaP recommends that all LustreC and LustreA users take a moment to manually delete any old or unused data from their scratch spaces. Not only will this save the automated purge process some work, but it will free up much-needed space on the filesystems immediately.
You can identify old files that are candidates for removal with the "purgelist" command. Please either remove unneeded files from the list or archive any you need to Fortress for safekeeping.
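Because the purge keys on last access time, you can also preview likely candidates yourself with a standard GNU `find` command. A minimal sketch - the `/scratch/$USER` path is illustrative, so substitute your actual scratch directory:

```shell
# List regular files not accessed in the last 90 days, oldest first.
# Replace /scratch/$USER with the path to your own scratch space.
find "/scratch/$USER" -type f -atime +90 -printf '%A+ %p\n' 2>/dev/null | sort
```

The `-atime +90` test matches files whose last access is more than 90 days old, and `-printf '%A+ %p\n'` prefixes each path with its access timestamp so the oldest files sort to the top.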
Notes on Cluster Scratch and Archival Storage
It is important to keep in mind that cluster scratch storage is for limited-duration, high-performance storage of data for running jobs or workflows. Old data in scratch filesystems must occasionally be purged to keep the filesystems from becoming fragmented or filling up. Scratch is intended as a space in which to run your jobs, not as long-term storage for data, applications, or other files.
Please keep in mind that scratch filesystems are engineered for capacity and high performance, and are not protected from data loss by any backup technology. While research computing scratch filesystems are engineered to be fault-tolerant and reliable, some types of failures can result in data loss.
ITaP recommends that important data, research results, etc. be permanently stored in the Fortress HPSS archive, and copied to scratch spaces while being actively worked on. The "hsi" and "htar" commands provide easy-to-use interfaces into the archive.
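As a sketch of that archive-then-work pattern (the file and archive names below are illustrative, not prescribed):

```shell
# Bundle a results directory into a tar archive stored directly in Fortress
htar -cvf results_2014.tar results/

# Verify the archive by listing its contents
htar -tvf results_2014.tar

# Later, extract the archive back into the current directory on scratch
htar -xvf results_2014.tar

# hsi handles individual files and browsing your archive space
hsi put large_dataset.h5       # copy a single file into Fortress
hsi get large_dataset.h5       # retrieve it when needed again
hsi ls                         # list what you have archived
```

`htar` is generally preferred for directories with many small files, since it stores them as a single archive object rather than thousands of individual files.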
For more information on using Fortress, please visit the web site at http://www.rcac.purdue.edu/userinfo/resources/fortress/
Persistent Group Storage
If non-purged, disk-based storage is a requirement for your group's work, please consider ITaP's persistent group storage service. This service is well-suited for storing a research group’s data, results, applications, source code and anything else members may need to share with each other.
For more information on the Persistent Group Storage service, see