Accelerating Research at Scale: How the NSF ASCEND Engine’s First Hackathon Turned Bottlenecks into Breakthroughs
By pairing ASCEND‑supported projects with NVIDIA expertise and ARCC compute resources, this hackathon transformed theoretical capability into operational progress.

“This event brings together the in‑house computational infrastructure at UW, deep domain expertise in areas like wildfire and soil health, a corporate partner with unparalleled deep learning know‑how, and the startup and university teams with the ideas, the questions, and the drive to see it through.” —Sam Malloy, Use‑Inspired R&D Director, NSF ASCEND Engine
What happens when seven NSF ASCEND Engine–supported research teams are given direct access to NVIDIA’s deep learning and high‑performance computing expertise, the production‑scale compute resources of the University of Wyoming’s Advanced Research Computing Center (ARCC), and focused mentorship through NVIDIA and the Open Hackathons program?
You get progress—fast.
Held January 28–30 at the University of Wyoming, the ASCEND–NVIDIA Hackathon brought together seven peer‑reviewed project teams, roughly 40 participants, and a national network of mentors for intensive, hands‑on collaboration. To the ASCEND team’s knowledge, this was the first hackathon hosted for Engine partners and teams, positioning the event as both a technical accelerator and a pilot model for how Engines can compress research timelines through targeted partnerships.
By Friday afternoon’s final presentations, teams demonstrated tangible gains: code scaled from CPU to single‑GPU to multi‑GPU workflows, automated pipelines replacing fragile manual processes, data systems re‑architected for reuse, and radical increases in simulation speed.
An Enablement Hackathon, Not a Competition
“When people hear the word hackathon, they think competition,” said Sepideh Khajehj, a Senior Developer Advocate at NVIDIA and an organizer with Open Hackathons. “But this is an enablement hackathon. There’s no competition except with yourself. Your goal is to beat your previous results and take your project to the next level.”
That philosophy shaped the event’s design. Teams were selected through a peer‑review process to ensure they arrived with clearly defined goals and stood to benefit from participation. The objective was not novelty but removing the bottlenecks and barriers inhibiting promising research.
Hackathons, Malloy noted, play a unique role in the research‑to‑translation pipeline. “They allow teams to dig deeply into a hard but achievable question—something that might otherwise take weeks, months, or even years—and accelerate that timeline dramatically.”
The Power of Partnership
The event was hosted in collaboration with NVIDIA and the OpenACC Organization with support from the NSF ASCEND Engine and the University of Wyoming’s ARCC. Each partner played a distinct role: Open Hackathons coordinated logistics and mentor matching, NVIDIA provided software and deep learning expertise, ARCC delivered production‑scale compute infrastructure, and ASCEND ensured alignment with broader research‑to‑impact goals.
“Everything starts with science,” said John Josephakis, Global Vice President of Sales and Business Development for HPC/Supercomputing at NVIDIA. “Scientific problems are getting larger and more complex. As technology and methodologies evolve, AI can and should be used to solve those classical modeling and simulation problems. At NVIDIA, we collaborate on these world-class initiatives to bring the community together with the computing expertise and AI tools researchers need. By connecting government, industry, and academia, we can expand what is possible at the frontiers of technology and innovation and accelerate scientific discovery.”
Behind the scenes, preparation was extensive.
“It takes a significant amount of time,” said Professor Suresh Muknahallipatna, the Faculty Director of Computational Resources at the ARCC. “Our staff began preparing the infrastructure at the beginning of January. Everything was tested in advance, and there were no issues during the hackathon. That allowed participants to focus entirely on their research.”
Once the event began, teams could work without worrying about systems, access, or configuration—a critical factor when tackling complex workflows under tight timelines.
Why This Matters for the NSF ASCEND Engine
The NSF ASCEND Engine harnesses the region’s unique advanced sensing capabilities to improve our ability to prepare for and respond to natural hazards such as wildfire and drought. That means investing not only in new sensing technologies, but also in the computational capacity required to turn massive streams of environmental data into insight.
Many ASCEND projects rely on physics‑based models that integrate satellite imagery, sensor feeds, long-term atmospheric and precipitation records, and biological observations. These approaches offer unprecedented fidelity, but only if teams can train, scale, and iterate on models efficiently.
“I see this hackathon as a steppingstone for going from a non-deep‑learning‑based product to a fully automated, deep‑learning‑based workflow,” said Raj Kumar, Developer Technology Manager at NVIDIA and a member of the governing board for the ASCEND Engine in Colorado and Wyoming. “The infrastructure has existed for years, but the community hasn’t always been able to adopt it. This helps bridge that gap.”
By pairing ASCEND‑supported projects with NVIDIA expertise and ARCC compute resources, the hackathon did exactly that: it turned theoretical capability into operational progress.
Progress in Practice: What Changed
Across teams, progress followed a common theme: turning single-use, labor‑intensive workflows into scalable, automated, and real‑time systems—often compressing months of work into days.

One example of the hackathon’s impact came from the CO-WY Meteorological Wildfire Data team. Wildfire forecast models estimate how a fire will spread, information that can help first responders plan evacuations and save lives. The team, whose forecasting workflow was constrained by long training times and limited scalability, worked with NVIDIA mentors to re‑engineer it. Switching from CPU to fully GPU-based training cut model training time by approximately 87 percent, and profiling‑driven code changes delivered an additional ~50 percent reduction. Faster training enables quicker iteration and hyperparameter tuning, shortening the path to operational deployment. Alongside these gains, the team made their code more reproducible, modular, and fully GPU‑compatible, leaving a more flexible framework that streamlines development and expands the model’s potential for future wildfire forecasting applications.
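The article does not include the team’s code, but the core of a CPU-to-GPU migration in PyTorch is small. The sketch below is illustrative only: the model, data, and hyperparameters are placeholders, not the wildfire team’s actual forecasting workflow.

```python
# Minimal sketch of moving a PyTorch training loop from CPU to GPU.
# The model, data, and hyperparameters are placeholders, not the
# wildfire team's actual forecasting code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder regression network standing in for the forecasting model.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic features and labels standing in for meteorological inputs.
data = TensorDataset(torch.randn(4096, 64), torch.randn(4096, 1))
loader = DataLoader(data, batch_size=256, shuffle=True)

for epoch in range(3):
    for x, y in loader:
        # The key change: each batch moves to the same device as the model,
        # so the forward and backward passes run on the GPU when available.
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

From there, profiling tools such as torch.profiler or NVIDIA Nsight Systems are what typically expose the hotspots behind a second round of gains like the team’s additional ~50 percent reduction.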
At BioSensor Solutions, a team is working toward a model of real‑time soil microbial respiration based on low‑cost sensors, which would reduce reliance on expensive laboratory assays while giving farmers actionable decision support. During the hackathon, the team integrated nine years of ERA5 weather data into their training dataset, optimized deep learning model performance, and surfaced new opportunities for sensor and model co‑design. Rick Loft, one of the team’s mentors, described the experience as hitting a “fast‑forward button.” The result was a fourfold improvement in model training speed, representing a projected annual savings of $52,000.
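For readers curious what that data integration might involve, here is a hypothetical sketch of aligning ERA5 reanalysis fields with sensor readings using xarray and pandas. The file names, variable names (t2m, tp), site coordinates, and column names are all assumptions; the article does not describe the team’s actual pipeline.

```python
# Hypothetical sketch of aligning ERA5 reanalysis fields with sensor logs.
# File names, variable names ("t2m", "tp"), site coordinates, and column
# names are assumptions, not the team's actual pipeline.
import pandas as pd
import xarray as xr

# ERA5 hourly data, e.g. downloaded from the Copernicus Climate Data Store
# as NetCDF; some downloads name the time coordinate "valid_time" instead.
era5 = xr.open_dataset("era5_hourly.nc")

# Sensor log with a timestamp column and the soil-respiration measurements.
sensors = pd.read_csv("soil_sensors.csv", parse_dates=["timestamp"])

# Pick the ERA5 grid cell nearest the sensor site, then attach the nearest
# hourly weather record to each sensor reading.
site = era5.sel(latitude=41.3, longitude=-105.6, method="nearest")
weather = site[["t2m", "tp"]].to_dataframe().reset_index()
training = pd.merge_asof(
    sensors.sort_values("timestamp"),
    weather.sort_values("time"),
    left_on="timestamp",
    right_on="time",
    direction="nearest",
)
```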
Other teams focused on enabling real‑time inference, a critical threshold for many environmental and sensing applications.
Longji Cui’s research group at the University of Colorado Boulder is developing advanced methods to improve thermal imaging resolution using machine learning, quantum‑inspired techniques, and information theory. The team arrived with a compute‑intensive Swin Transformer–based super‑resolution model running at approximately 10 frames per second (FPS), suitable for offline processing but insufficient for real‑time use. During the hackathon, the team implemented a series of GPU‑focused optimizations, including FlashAttention, BF16 precision, and torch.compile; reduced element‑wise kernel operations from 35 percent of runtime to just 1.5 percent; and increased batch size from 2 to 16. By the end of the event, the group had achieved a sixfold performance gain, raising inference speed from 10 FPS to 60 FPS and enabling real‑time, high‑fidelity thermal image reconstruction.
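The sketch below illustrates those optimizations in miniature, assuming PyTorch 2.x: a placeholder attention block (standing in for the group’s Swin Transformer) whose scaled_dot_product_attention call can dispatch to FlashAttention-style fused kernels, compiled with torch.compile and run under BF16 autocast with a larger batch. Shapes and names are illustrative.

```python
# Illustrative sketch of the optimizations named above, assuming PyTorch 2.x.
# TinyAttentionBlock is a placeholder, not the group's Swin Transformer.
import torch
import torch.nn.functional as F

class TinyAttentionBlock(torch.nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.heads, self.dim = heads, dim
        self.qkv = torch.nn.Linear(dim, dim * 3)
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, n, self.heads, self.dim // self.heads)
        q, k, v = (t.reshape(*shape).transpose(1, 2) for t in (q, k, v))
        # scaled_dot_product_attention dispatches to fused FlashAttention-style
        # kernels on supported GPUs instead of materializing attention matrices.
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, n, self.dim))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyAttentionBlock().to(device).eval()
# torch.compile fuses chains of element-wise kernels, the kind of overhead
# the team cut from 35 percent of runtime to 1.5 percent.
model = torch.compile(model)

# A larger batch (2 -> 16 in the team's case) amortizes per-launch overhead.
batch = torch.randn(16, 256, 64, device=device)
with torch.inference_mode(), torch.autocast(device_type=device, dtype=torch.bfloat16):
    out = model(batch)
```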
Additional teams focused on wildfire smoke forecasting, meteorological modeling, and infrastructure‑scale simulations. In each case, the combination of expert mentorship, GPU acceleration, and production‑scale computing enabled faster iteration, clearer performance diagnostics, and more deployable systems.
A Blueprint for the Future
For the NSF ASCEND Engine, the hackathon demonstrated how targeted partnerships can dramatically accelerate research readiness, and plans are already in progress to establish enablement hackathons as a regular part of Engine activities.
“This is a pretty unique convergence,” Malloy said. “Bringing together the infrastructure, the expertise, the corporate partners, and the project teams in one focused environment—we’re excited about what this can become.”
By the end of the three‑day hackathon, every participating team reported meaningful progress—and clear pathways for continued refinement. While access to ARCC’s computing resources was essential, participants consistently emphasized that peer learning, expert mentorship, and cross‑project collaboration were just as valuable.
“The expertise was an incredibly valuable resource,” Loft said. Others echoed the desire for repeat events, citing access to software and issue-specific experts alongside the ARCC hardware itself.
For many teams, the event did not mark an endpoint, but a turning point—unlocking faster prototyping, more adaptive workflows, and a clearer path from experimentation to deployment.