Data Depot Hardware Replacement and Migration
May 11, 2021 5:00pm – May 12, 2021 11:00pm
Earlier this morning we received a number of tickets about issues mounting migrated Data Depot spaces as Windows/Mac network drives.
After troubleshooting with vendor support, we have resolved the issue. If you had problems earlier, please try to re-map the drives following the instructions in the User Guide section.
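For reference, re-mapping typically means removing the stale mapping and then creating a fresh one. The sketch below uses a placeholder drive letter and server path, not the real ones; substitute the actual path from the User Guide.

```shell
# Windows (Command Prompt): remove the stale mapping, then re-map.
# Z: and the server path are placeholders; use the path from the User Guide.
net use Z: /delete
net use Z: \\depot.example.edu\depot /persistent:yes

# macOS (Terminal), or use Finder's "Connect to Server" (Cmd-K):
umount /Volumes/depot
mount_smbfs //username@depot.example.edu/depot /Volumes/depot
```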
UPDATE: May 12, 2021 9:29pm
As of 9:29pm, the first stage of the Data Depot migration is complete. 614 of 751 spaces (totaling 247 TB in 162 million files) have been migrated to the new hardware at an average rate of 3 GB/second.
Data Depot has been returned to full production service. Most access methods will continue to work without change for both migrated and not-yet-migrated spaces. However, accessing migrated spaces via Windows/Mac network drives (SMB/CIFS) will require a change on your part. Please see the Data Depot Migration FAQ for detailed information on all aspects of the migration and on accessing your Data Depot spaces.
We will contact the owners of the remaining spaces individually to schedule their migration over the coming weeks.
UPDATE: May 12, 2021 10:12am
Data Depot migration continues as planned, with over 600 of 751 spaces migrated in the first 13 hours. Data Depot and the community clusters remain unavailable while data is being moved.
We will provide another update by the end of the maintenance window (11pm as planned).
ORIGINAL: April 20, 2021 10:44am
From May 11, 2021 at 5:00pm until May 12, 2021 at 11:00pm, the Data Depot storage service will be unavailable while it is transitioned to new hardware. All Depot access methods (SCP/SFTP, Windows network drives, Globus, NFS exports, direct mounts on Research Computing clusters, etc.) will be affected.
Current Status: Data Depot was brought online in 2014 and has been instrumental in providing an affordable and reliable shared storage solution to the Purdue research community, while integrating tightly with the Community Cluster cyberinfrastructure. The system is currently used by 740 research labs and collaborative projects, consuming close to 95% of its storage and file-count capacity.
The lifecycle hardware replacement and expansion project for Data Depot has been underway for quite some time, with the goal of providing an even larger, faster and more feature-rich storage system. As you can imagine, syncing 2.5 PB of live, changing data is no small feat. Data has been continuously replicated from the current filesystem onto the new one for the last six months. The first rounds of file syncs have completed, and regular syncs will continue until the cut-over is executed.
Storage Resiliency: The quickly growing file count has caused a few issues recently. Because the filesystem is nearly full, performance and some important functionality of the storage system are affected. Specifically, due to increased load and limited metadata resources, we have had to suspend all regular snapshot operations on the current Data Depot filesystem. Please note that this does not in any way affect the resilient and redundant distributed nature of Data Depot storage. Your data remains fully safe, secure and protected against hardware and software failures across multiple disk trays, storage servers and datacenters, but options for self-recovering accidentally deleted files are temporarily unavailable. However, data is copied to and snapshotted on the new hardware, and RCAC staff are happy to recover files for you. As always, we strongly encourage regular use of Fortress to maintain long-term archives of any important research data.
Migration Plan: We strive to make the switch as transparent and straightforward as possible, with little to no change to your everyday operations. On the day of the transition, Data Depot will require a downtime. During this period, we will migrate Depot spaces for the majority of current labs, so at the end of this downtime most of you should be able to simply return to your usual operation (now on the new Depot). A handful of very large or metadata-intensive spaces may not be fully converted during this initial run. For such spaces, the old Depot spaces will remain functional, and we will individually coordinate with the affected research groups on the best times to complete their transition. Once everyone is migrated, we will need one final downtime (likely in the mid-July or early August time frame) to finalize the flip on the back end.
For those of you who also use Community Clusters, please note that clusters will be unavailable during the two transitional downtimes.
We greatly appreciate your patience and your continuing support and use of our services. Please reach out to us if you have any questions or concerns.