
RCAC staff win I-GUIDE Spatial AI Challenge

  • Science Highlights
  • Anvil

Two staff members from the Rosen Center for Advanced Computing (RCAC) recently took first place in the I-GUIDE Spatial AI (Artificial Intelligence) Challenge 2024–25. Dr. Elham Barezi, Lead AI Research Scientist, and Dr. Jungha Woo, Lead Software Engineer, worked together to develop the winning project, called GeoMapCLIP. GeoMapCLIP, a fine-tuned extension of GeoCLIP, automatically localizes unknown geospatial images using visual cues. Once fully developed, GeoMapCLIP could give users the ability to determine the latitude and longitude coordinates of otherwise unidentified locations captured in photos.
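
To give a sense of what this kind of geolocalization looks like in practice, the sketch below uses the publicly available geoclip Python package, which GeoMapCLIP builds on, to predict candidate coordinates for an unlabeled image. This is an illustrative example rather than the team's code; the image filename is a placeholder, and the exact predict API may vary between package releases.

    from geoclip import GeoCLIP

    # Load the pretrained GeoCLIP model that GeoMapCLIP extends.
    model = GeoCLIP()

    # Predict the most likely GPS coordinates for an unlabeled image
    # ("unlabeled_map.png" is a hypothetical filename).
    top_pred_gps, top_pred_prob = model.predict("unlabeled_map.png", top_k=5)

    for (lat, lon), prob in zip(top_pred_gps, top_pred_prob):
        print(f"({lat.item():.4f}, {lon.item():.4f})  probability={prob.item():.4f}")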

The I-GUIDE Spatial AI Challenge is an international initiative designed to spark innovation in geospatial science and tackle real-world issues by leveraging the power of AI. Hosted on the NSF-funded Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) platform, the challenge aims to produce solutions to some of today’s most pressing sustainability issues. This year’s challenge offered two competition tracks: 1) “Data, Models, and Their Applications,” in which participants could develop novel spatial AI models, submit innovative datasets, or showcase applications that integrate data and technology in new ways; and 2) “Open Problems,” in which participants were tasked with addressing a set of curated open geospatial challenges to create actionable, sustainable solutions. Judges evaluated each submission on criteria encompassing technical excellence, creativity, and adherence to FAIR (Findable, Accessible, Interoperable, and Reusable) data principles. The challenge was open to researchers, data scientists, AI enthusiasts, and geospatial professionals.

GeoMapCLIP

Barezi and Woo decided to pursue the “Data, Models, and Their Applications” track of the Spatial AI Challenge. Inspiration for the project came from conversations the pair had with professors from the anthropology department at Purdue. The anthropologists have access to a vast number of maps, both historical and modern, drawn from numerous sources, including research papers and scanned books. The problem was that not all of the maps were appropriately labeled, so the professors were having trouble geolocating the images. What they needed was a tool that could do this automatically with minimal hands-on oversight. Barezi and Woo felt this was a perfect project for the I-GUIDE Spatial AI Challenge and began working to develop GeoMapCLIP.

“We chose to enhance GeoCLIP because the tool has proven to perform well under limited circumstances, such as on social media or with images of popular locations,” says Barezi. “Our goal with GeoMapCLIP was to take that software and improve it for broad-use, map-specific tasks.”

For GeoMapCLIP, Barezi and Woo focused on developing AI models that can interpret geospatial map images and accurately extract coordinates, thereby enabling the automated understanding of map content, including scale, symbols, and location. The pair built on the CLIP and GeoCLIP frameworks, adapting the model for satellite maps by training it with ArcGIS imagery. The resulting GeoMapCLIP demonstrates better localization accuracy on unfamiliar satellite maps. The new vision model can also be integrated into other AI models to enrich their context and improve their data analysis. Thanks to the pair’s hard work, GeoMapCLIP can help with map analysis across multiple domains, including hydrology, archeology, urban planning, mining, and disaster management.
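
The paragraph above describes a CLIP-style setup in which an image encoder and a location encoder learn to agree on matching pairs. The snippet below is a simplified, hypothetical PyTorch sketch of that contrastive fine-tuning idea, not the team's actual training code; the encoder sizes, temperature, and toy random data are stand-ins for the real ArcGIS tiles and hyperparameters.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LocationEncoder(nn.Module):
        """Maps (latitude, longitude) pairs into the shared embedding space."""
        def __init__(self, dim=512):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(2, 256), nn.ReLU(), nn.Linear(256, dim))

        def forward(self, coords):               # coords: (batch, 2)
            return F.normalize(self.mlp(coords), dim=-1)

    def contrastive_step(image_encoder, loc_encoder, images, coords, temperature=0.07):
        """One CLIP-style step: matching image/coordinate pairs are pulled together."""
        img_emb = F.normalize(image_encoder(images), dim=-1)   # (batch, dim)
        loc_emb = loc_encoder(coords)                           # (batch, dim)
        logits = img_emb @ loc_emb.T / temperature              # pairwise similarities
        targets = torch.arange(images.size(0))
        # Symmetric cross-entropy, as in the original CLIP objective.
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

    # Toy usage with random data; a real run would stream satellite tiles and their coordinates.
    image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))  # stand-in backbone
    loc_encoder = LocationEncoder()
    images = torch.randn(8, 3, 64, 64)                          # batch of "satellite tiles"
    coords = torch.rand(8, 2) * torch.tensor([180.0, 360.0]) - torch.tensor([90.0, 180.0])
    print(contrastive_step(image_encoder, loc_encoder, images, coords).item())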

“GeoMapCLIP is really innovative in that it harnesses the power of AI to automatically tag images with appropriate coordinates based on visual clues within the image,” says Woo. “It can now do this for almost any location in the world. This will be tremendously valuable to researchers who find themselves needing to utilize map datasets that have not been accurately labeled or annotated. It can also help them by analyzing maps and their changes over time using AI.”

GeoMapCLIP was a huge success, winning the I-GUIDE Spatial AI Challenge 2024–25. When notifying Barezi and Woo of their first-place finish, the judges stated that their project “demonstrated exceptional innovation, technical rigor, and alignment with FAIR and open science principles,” and that they were “particularly impressed by your notebook's structure, reproducibility, and impactful storytelling.”

“It was exciting to win,” says Barezi. “We were so focused on developing the best product we could that we hadn’t had much time to think about placement in the challenge. So learning that we took first was a lovely surprise.”

As part of the I-GUIDE Spatial AI Challenge, participants had access to Indiana University’s Jetstream2 high-performance computing (HPC) resource to work on their projects. Developing GeoMapCLIP required Barezi and Woo to train the model on millions of satellite images, a workload that calls for an HPC system with substantial GPU memory and persistent storage. Jetstream2 provided the team with everything they needed to develop their model. But the version of GeoMapCLIP built for the challenge is only the first step toward using AI to analyze legacy data. Now that the challenge is over, the pair plan to continue their work using the NSF-funded Anvil supercomputer located at Purdue University.
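
As a rough illustration of why a workload of this size is spread across multiple GPUs, the snippet below shows a generic PyTorch DistributedDataParallel training loop of the kind commonly run on systems like Jetstream2 or Anvil. It is a minimal pattern with a stand-in model and random data, not the pair's actual training script, and the launch command and process count are only examples.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(512, 512).cuda(local_rank)   # stand-in for the vision model
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(10):                               # stand-in for the real data loader
            batch = torch.randn(32, 512, device=f"cuda:{local_rank}")
            loss = model(batch).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()                                   # gradients sync across GPUs here
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()  # launch with: torchrun --nproc_per_node=4 train_ddp.py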

“We understand that this is the first iteration of the project, and have highlighted some opportunities for future improvements,” says Woo. “We intend on continuing our work by adding more data and modifying our optimization method to improve our accuracy for various zoom levels and earth surfaces. Of course, this will all require lots of computing power. Thankfully, with the new Anvil AI expansion, we will still have access to cutting-edge GPUs.”

Barezi and Woo will present their work on GeoMapCLIP as part of the I-GUIDE VCO webinar series. The webinar will take place on August 20, 2025, at 10 a.m. ET. The pair will cover not only their development process but also the improvements they intend to make. To register for the webinar, please visit: REGISTRATION

To learn more about this project and how it can be used, please visit the GeoMapCLIP notebook, hosted on the I-GUIDE Platform.

To view the second and third place projects in the Spatial AI Challenge, please visit the challenge’s Selected Winners page.

Jetstream2 allocates resources to researchers and educators across the US through the NSF-funded ACCESS project and the NAIRR Pilot. To learn more about the supercomputer, please visit the Jetstream2 homepage.

Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the National Science Foundation (NSF), Anvil supports scientific discovery by providing resources through the NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS), a program that serves tens of thousands of researchers across the United States.

Researchers may request access to Anvil via the ACCESS allocations process. More information about Anvil is available on Purdue’s Anvil website. Anyone with questions should contact anvil@purdue.edu. Anvil is funded under NSF award No. 2005632.

Written by: Jonathan Poole, poole43@purdue.edu
