Conte Cluster enters production
October 28, 2013
As of 8:00 a.m. on Monday, October 28, 2013, the early-access testing phase of Conte is complete and the cluster is in full production. Thank you for your patience and cooperation while the system was fine-tuned during testing.
The full Conte user guide is available at: http://www.rcac.purdue.edu/userinfo/resources/conte/userguide.cfm
Frequently asked questions about Conte can be read at: http://www.rcac.purdue.edu/userinfo/resources/conte/faq.cfm
Some reminders about details that are specific to Conte:
- ITaP recommends the Intel compilers and Intel MPI for software development on Conte. These tools can be loaded with "module load devel".
- The default maximum walltime for jobs in your queue is two weeks. Extensions can be granted on a per-job basis upon request. If your work routinely requires a longer run time than the default, please contact us at email@example.com.
- The Moab scheduler is configured to schedule jobs by entire nodes. To run multiple jobs on a single node, add "#PBS -l naccesspolicy=shared" to your job submission script.
- Native mode on the Xeon Phi coprocessors is now available on an experimental basis. Currently, native mode is limited to single-node jobs. Submit your job with "-l nodes=1:ppn=16:mics=2" to initialize the Phi coprocessors for your job.
- Conte now provides a fast-turnaround "debug" queue for testing short parallel jobs.
- Conte's LustreD scratch filesystem offers quotas of 100 TB and 2 million files. Scratch purging scans will begin soon.
- Remember that LustreD is engineered for capacity and high performance, and is not protected from data loss by any backup technology. Be sure to use the Fortress HPSS archive for permanent storage of data and results.
- See: Effective Use of Research Storage
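Several of the reminders above come together in a single job script. The sketch below shows a short test job in the debug queue that loads the recommended Intel toolchain; the executable name ("my_app"), the MPI rank count, and the walltime are illustrative placeholders, not values from this announcement:

```shell
#!/bin/sh -l
# Sketch of a short Conte test job; adjust queue, walltime, and
# program name to your own work. "my_app" is a placeholder.
#PBS -q debug                  # fast-turnaround queue for short test jobs
#PBS -l nodes=1:ppn=16         # one whole node (Moab schedules by node)
#PBS -l walltime=00:30:00      # keep debug-queue runs short

module load devel              # Intel compilers and Intel MPI

cd $PBS_O_WORKDIR              # run from the submission directory
mpiexec -n 16 ./my_app
```

To let multiple small jobs share the same node instead, add "#PBS -l naccesspolicy=shared" and request fewer processors per node.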
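Since LustreD scratch is not backed up, a typical workflow is to bundle results into the Fortress HPSS archive with the standard HPSS clients, hsi and htar. The commands below are a sketch; the archive name and paths are placeholders:

```shell
# Illustrative only: archive names and paths are placeholders.
# Bundle a scratch results directory into one archive file on Fortress.
htar -cvf results_2013-10.tar results/

# Later, extract that archive (and an index of its members) back to scratch.
htar -xvf results_2013-10.tar

# Or copy a single file to Fortress with hsi; the colon separates
# the local name from the HPSS-side name.
hsi put bigfile.dat : bigfile.dat
```

Using htar for many small files is generally preferable to storing them individually, since HPSS handles a few large archives more efficiently than thousands of small objects.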
Finally, further resources on developing for the Xeon Phi coprocessor are available at: Intel Xeon Phi: Learning Resources