
Conte Cluster LustreD filesystem upgraded to Lustre version 2.4.2

  • Announcements
  • Conte

During the scheduled maintenance the weekend of March 15-16, 2014, the Lustre filesystem for Conte ("lustreD") was upgraded to Lustre version 2.4.2, from version 2.1.5.

This version upgrade was recommended by DataDirect Networks to address recent issues encountered on LustreD. After applying the upgrade, ITaP storage engineers and research analysts ran a series of tests attempting to reproduce the filesystem errors seen in recent weeks; the tests were unable to induce the failures observed during recent outages.

Finally, following the software upgrade, LustreD has been mounted on Conte with the "flock" option, enabling filesystem-wide file locking that is coherent across all client nodes.
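With the "flock" option enabled, standard POSIX advisory locks taken with flock(2) are honored across the whole filesystem. The sketch below uses Python's fcntl module to demonstrate the semantics; it locks a local temporary file, which stands in for a file on LustreD (run it against a file in your scratch space to exercise the cluster-wide behavior).

```python
import fcntl
import tempfile

# Sketch: POSIX advisory locking via flock(2). On LustreD mounted with
# "flock", such locks are coherent across all Conte client nodes; the
# temp file here is a local stand-in for a scratch file.
with tempfile.NamedTemporaryFile() as tf:
    with open(tf.name, "r+b") as writer, open(tf.name, "r+b") as other:
        # Take an exclusive lock on the first file descriptor.
        fcntl.flock(writer, fcntl.LOCK_EX)

        # A second descriptor (standing in for another process or
        # compute node) cannot acquire the lock without blocking.
        try:
            fcntl.flock(other, fcntl.LOCK_EX | fcntl.LOCK_NB)
            contended = False
        except BlockingIOError:
            contended = True

        fcntl.flock(writer, fcntl.LOCK_UN)

print(contended)  # True: the exclusive lock was enforced
```

The same pattern applies to cooperating jobs that share a scratch file: each job takes the lock before writing and releases it afterward.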

Lustre 2.4.2

The new version of Lustre on LustreD incorporates many improvements over Lustre 2.1.5, to name a few:

  • Improved performance on sequential directory traversals (statahead), e.g. "ls -l"
  • Improved metadata capabilities
  • Multiple improvements to RPC
  • Many bug fixes and stability improvements

Notes on Cluster Scratch and Archival Storage

Keep in mind that cluster scratch storage is intended for limited-duration, high-performance storage of data for running jobs or workflows. Old data in scratch filesystems must occasionally be purged to keep them from becoming fragmented or filling up. Scratch is a space in which to run your jobs, not long-term storage for data, applications, or other files.
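One way to stay ahead of purges is to periodically list files you have not touched recently and archive or delete them yourself. A minimal sketch, assuming an illustrative 60-day cutoff (not an ITaP policy):

```python
import os
import time

def stale_files(root, max_age_days=60):
    """Yield paths under `root` not modified in `max_age_days` days.

    Sketch only: the 60-day cutoff is an illustrative assumption,
    not an official purge policy. Review the list before deleting.
    """
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    yield path
            except OSError:
                pass  # file vanished mid-scan; skip it
```

For example, `for p in stale_files("/path/to/your/scratch"): print(p)` produces a candidate list to archive to Fortress before cleaning up.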

Please keep in mind that any scratch filesystem is engineered for capacity and high performance, and is not protected from data loss by any backup technology. While research computing scratch filesystems are engineered to be fault-tolerant and reliable, some types of failures can result in data loss.

ITaP recommends that important data, research results, etc. be permanently stored in the Fortress HPSS archive, and copied to scratch spaces while being actively worked on. The "hsi" and "htar" commands provide easy-to-use interfaces into the archive.
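For scripted workflows, the hsi and htar invocations can be assembled programmatically. The sketch below builds the command lines without executing them by default; the file and directory names are illustrative assumptions, and the flags shown are the basic create/put forms (consult the Fortress documentation for the full option set).

```python
import subprocess

def hsi_put_cmd(local_path, fortress_path):
    # "hsi put local : remote" copies one file into the archive.
    # Paths here are examples, not real Fortress locations.
    return ["hsi", "put", local_path, ":", fortress_path]

def htar_create_cmd(archive_path, directory):
    # "htar -cvf archive dir" bundles a directory into an archive
    # member stored in HPSS, analogous to tar.
    return ["htar", "-cvf", archive_path, directory]

def run(cmd, dry_run=True):
    """Print-or-run helper: dry_run returns the command string so the
    invocation can be inspected before touching the archive."""
    if dry_run:
        return " ".join(cmd)
    return subprocess.run(cmd, check=True)
```

For instance, `run(htar_create_cmd("results.tar", "results"))` shows the htar command that would archive a results directory; pass `dry_run=False` on a system with the HPSS clients installed to execute it.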

For more information on using Fortress, please visit the web site at https://www.rcac.purdue.edu/storage/fortress
