Prize Program Design Reference Guide
A practitioner's checklist for designing and launching prize competitions (Part 5)
In Part 1 of this series, we told the story of how prize authority at DOE went from dormant law to functioning tool. Part 2 walked through the full-stack design of the Solar Prize and how its success led to over 100 prize competitions across the agency. Part 3 examined when prizes are the wrong tool. Part 4 addressed what’s missing in the broader ecosystem and why the field needs a community of practice.
This post is for people ready to act. It’s the checklist and decision framework for designing and launching a prize competition. If you want the story or the theory, go back to the earlier posts. This is the reference guide.
One note: unlike our previous grants deep dives, this post is aimed at program designers rather than applicants and competitors, though understanding the motivations and structural decisions behind prizes can still be useful for anyone considering applying to a prize in the future.
Step 1: Is a Prize the Right Tool?
Before designing anything, work through the first layer of the full-stack program design framework. What specific problem are you trying to solve? Who needs to apply and what will motivate them? What is your budget? What does success look like? What are the measurable outcomes that will tell you whether it worked?
Those answers should drive the mechanism choice, not the other way around. Part 3 of this series covered the conceptual boundaries in detail, but the short version: prizes work well when the path to success is uncertain, when results can be objectively assessed, when competition will drive innovation, and when you want to attract solvers who wouldn’t normally engage with your organization’s funding. Prizes are the wrong tool when you need close oversight, when the path is well-known and you just need execution, when you want to build capacity in equal amounts across many participants rather than select winners, or when success is subjective.
If you’ve worked through that analysis and concluded a prize is the right approach, the next question is whether your organization has done this before.
Step 2: Is This Your Organization’s First Prize?
If your organization has run prizes before and has existing approval and announcement processes, skip this section. If this is your first one, work through it before you start designing.
The first barrier you’ll hit is simple inertia. “That’s not how we do things here.” Your organization has established ways of deploying capital, whether grants, contracts, or something else. Those processes exist because people spent years building them, and staff have built careers learning to navigate them. Proposing a different mechanism means asking people to learn something new, which feels like unnecessary effort and risk when the existing approach is perceived as working well enough. The response to this isn’t to argue that prizes are better than grants (they aren’t, for many purposes). It’s to identify a specific problem that your current tools aren’t solving well and position the prize as a targeted experiment to address that gap.
The second barrier is a misconception about what prizes actually are. When most people hear “prize competition,” they picture a large, winner-takes-all technical challenge. Something like the Ansari XPRIZE or the L-Prize: a single audacious goal, a big pot of money, one winner, program over. That model exists and has its place, but it represents a narrow slice of what prizes can do. As we covered earlier in this series, prizes can be structured in many ways: multi-stage competitions with increasing award amounts, multiple winners at each stage, support services layered on top of cash, technical vouchers, network connections. Expanding leadership’s mental model of what a prize can look like is often necessary before you can have a productive conversation about whether to try one.
Once you’re past those threshold objections, the more specific concerns emerge. Leadership will worry about the “no strings attached” nature of prize funds. We covered this dynamic in Part 1: the fear that winners would take early-stage prize money and walk away. The concern comes from people whose mental model is grants, where oversight happens through invoices, quarterly reviews, and project management. For someone who has spent a career in that world, the absence of those controls feels like risk rather than feature.
The way through is to start small. Propose early-stage amounts low enough that if someone did walk away (which, in our experience across 100+ DOE prizes, nobody has), it’s manageable and doesn’t represent a large, irreversible sunk cost. Then make sure the next stage offers a prize that incentivizes competitors to keep working rather than walking away with their share of the purse. The Solar Prize started at $50,000. Win that, and you’re one of 20 teams competing for 10 prizes worth $100,000 each. Hard to abandon a 50% shot at doubling your money.
You also need to answer the authority questions before you’ve invested months in a design you can’t execute. For federal agencies: Does your office have prize authority? Is it delegated? At what dollar thresholds do approvals change? At DOE, prize authority defaulted to the Secretary for years, which meant anything over $1 million required Secretarial sign-off. That barrier alone kept most offices from trying. Foundations and private sector organizations face different questions (legal review, board approval) but still need to map them early. Ironing out decision authority early is especially critical if you want a fast-moving prize, e.g. one where winners can be selected the same day as a pitch competition.
Beyond these prize-specific hurdles, the full-stack framework applies. The ecosystem scan and coalition building work from Layer 2 is just as important here, and just as tempting to skip under time pressure. Right-size your first prize to prove the model before expanding. A successful small prize opens doors. A failed ambitious one closes them.
Step 3: The Design Dimensions
Every prize requires decisions across a set of design dimensions. What follows is the common set you will need to work through. The decisions you make here create the incentive framework that drives the behavior you want, or they set the prize up for failure from the outset. The majority of a prize’s impact is generated during this design phase, not the selection phase.
1. Competition & Winner Structure: How does the competition work and who wins?
First-to-achieve, single winner: Race to a threshold. First to hit the target wins, competition ends. The L-Prize worked this way.
Best-by-deadline, single winner or ranked placement: Competition closes on a fixed date, top performer(s) win. Can award just first place or tiered prizes (1st/2nd/3rd).
Best-by-deadline, threshold: Everyone who clears a defined bar wins the same amount. The Ready stage of the Solar Prize worked this way: 20 teams each received $50,000.
Rolling/ongoing: Continuous intake with periodic judging windows. Less common.
2. Stage Structure: How many phases?
Single-stage: One competition, one set of winners.
Multi-stage: Sequential phases where advancement depends on winning prior stages. The Solar Prize used three: Ready, Set, Go.
3. Evaluation Method: How are results judged?
Objective technical validation: Measurable, verifiable. Did the prototype hit 25% efficiency? Send it to a third-party lab, get data.
Subjective expert assessment: Judges evaluate quality, potential, team strength, market fit.
Hybrid: Technical threshold to qualify, then subjective assessment for final ranking.
These can combine. The Solar Prize used a threshold at Ready (clear the bar, you’re in) but the Go stage was closer to ranked placement (judges selecting top performers from finalists).
Note: If your evaluation method depends on validated performance data, make sure you actually have access to the facilities and expertise to gather it. A prize that requires third-party testing only works if competitors can get that testing done within your timeline and budget.
4. Award Type: What do winners receive?
Cash only
Cash, plus any combination of:
Vouchers (technical services, lab access)
Network/support services
Other things of value
Non-cash only (recognition, vouchers, access)
5. Funding Amount: How much, and how is it distributed?
Total prize pool: Overall budget allocated to prizes.
Per-winner amount at each stage: The $50k → $100k → $500k escalation creates different incentives than a flat structure.
Ratio between stages: How much does the jump incentivize continued participation? Win $50k with a 50% shot at $100k at the next stage, and walking away gets hard (see the sketch after this dimension).
Fixed vs. variable: Are amounts predetermined or can judges allocate a pool based on submission quality?
Administrative costs/overhead: How much funding is needed to execute the program, which can include staff time, payments to third parties for support, stipends for reviewers, etc.
One practical note: prizes work well for small awards where grant overhead is prohibitive. The administrative burden of a $50,000 grant is nearly the same as a $5 million grant, which is why program offices avoid small grants. Prizes don’t have that problem. The same structure that awards a $500,000 grand prize can award twenty $25,000 early-stage prizes without scaling administrative cost. At the other end, very large prizes (above $1-5 million) raise questions about whether no-strings-attached is appropriate or if a cooperative agreement with oversight makes more sense. This is a decision government funders may have to consider, but it may not be as much of an issue for foundations or private entities.
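To make the ratio point concrete, here is a minimal back-of-the-envelope sketch in Python. The per-winner amount and odds mirror the Solar Prize figures mentioned above; the effort cost is a hypothetical placeholder, not a real program number.

```python
# Back-of-the-envelope incentive math for a staged prize structure.
# Numbers are illustrative, loosely based on the Solar Prize figures above:
# 20 Ready winners ($50k each) compete for 10 Set prizes ($100k each).

def expected_value_of_continuing(next_prize: float, odds_of_winning: float,
                                 cost_of_competing: float) -> float:
    """Expected payoff of staying in the competition for one more stage."""
    return odds_of_winning * next_prize - cost_of_competing

# A Ready winner deciding whether to keep competing in the Set stage:
set_prize = 100_000    # per-winner award at the next stage
odds = 10 / 20         # 10 prizes available to 20 advancing teams
effort_cost = 20_000   # hypothetical cost of competing for another stage

ev = expected_value_of_continuing(set_prize, odds, effort_cost)
print(f"Expected value of continuing: ${ev:,.0f}")  # $30,000 in this sketch
```

Walking away nets nothing further, so as long as the next-stage purse and odds keep that number comfortably positive, the escalation itself does the retention work that grant-style oversight would otherwise do.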
6. Eligibility: Who can compete?
Open: Anyone.
Restricted: US only, small business only, students only, etc.
Progressive restriction: Open for phase 1; must meet additional criteria for phase 2 (e.g. incorporated, domestic manufacturing) to receive an award and advance.
Team composition requirements: Must include a university partner, community organization, etc.
7. IP Treatment: Who owns what?
Competitors retain all IP: The default for federal prizes based on AMERICA COMPETES Prize authority.
Government gets license: Winners grant government rights to use the solution.
Open source/data sharing requirement: Winners must make results publicly available.
Anonymized data or output sharing: Winners have some data on performance or other aspects of their work shared with other competitors so that relative strengths at a task or of a product can be compared, but in a way that protects individual entities (e.g. the Solar Forecasting Prize).
8. Application Burden: How much work to apply?
Minimal: The Solar Prize Ready phase required a 5-page written plan, a 90-second video, and a summary slide. For $50,000, hard to argue it isn’t worth the effort.
Moderate: More detailed plans, budgets, letters of support.
Heavy: Approaching traditional FOA requirements, which can defeat a major benefit of doing a prize.
Every additional page loses potential applicants. The instinct to ask for “just a bit more information, just in case” works against you. If your evaluation method requires third-party data (product performance testing, independent validation), think through who bears that cost, who gathers the data, and whether the timeline is realistic. Asking competitors to get lab testing done in a 3-month window when lab queues could run that long is setting them up to fail.
9. Support During Competition: What help do competitors get?
None: Pure competition, sink or swim.
Passive network: Directory of resources competitors can access on their own.
Active support: Mentorship, office hours, recruiting assistance.
Technical vouchers and access to facilities: Funding for lab access or specialized services usable during competition.
If you’re paying organizations to help competitors (Recognition Rewards, Power Connector contracts), you need to work through the details: what triggers payment, when it gets paid out, how you assess whether the support was actually useful, and how those costs fit within your total budget alongside prize purses and administration. And you need to understand the overall costs associated with facilities access, particularly for validation purposes.
10. Timeline: How long does everything take?
Competition period length per stage: The Solar Prize initially used 3-month windows, then shifted to 4 months after learning that was too tight for real progress and voucher processing.
Time between stages: Including voucher redemption windows if applicable.
If recurring, frequency of cohorts: Annual, semi-annual, rolling.
Design timelines for competitors, not staff convenience. A competition window over the holidays that gives your team a break forces competitors to prepare when their teams and partners are unavailable. This doesn’t mean your staff should work at the worst possible times. It means thinking through when each step happens and aligning deadlines with periods when competitors, reviewers, and partners can all actually engage.
11. Venue/Demonstration Requirements: Where does judging happen?
Fully remote: Submissions only.
In-person pitch component: The Solar Prize Go stage included live pitches with same-day winner announcements.
Physical demonstration: Testing at a specified facility.
Hybrid: Written submissions plus in-person final round.
12. Program Cadence: One-off or recurring?
One-off: Single competition, program ends.
Recurring: Annual cohorts, ongoing program.
Recurring programs are significantly easier to sustain once launched. They force you to measure impact (you need to prove Round 1 worked to justify Round 2) and let you iterate on design based on experience. One-off programs struggle with both. If there’s any way to structure your prize as recurring, do it. Once the community expects another round, word spreads on its own and the outreach burden drops.
This is not an exhaustive list, and every prize will have its own constraints and requirements. But these are the dimensions most prize designers need to work through, and the choices you make here matter more than most people realize. For the federal government, a traditional FOA process filters for teams that know how to navigate government funding. A well-designed prize can reach an entirely different population: entrepreneurs who would never consider a federal grant, researchers who don’t have the overhead to manage cost share, small teams moving fast on ideas that don’t fit neatly into a three-year project plan. The Solar Prize hit 75% first-time applicants in Round 1 and stabilized around 50% in later rounds, more than double the typical rate for DOE programs. That wasn’t luck. It was the result of deliberate incentive design.
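If it helps to keep these decisions in one place as you work through them, here is a minimal sketch of how the dimensions above could be recorded as a structured checklist. The field names and example values are our own shorthand, not an official template, and the example entry is only loosely modeled on the Solar Prize structure described in this series.

```python
from dataclasses import dataclass

# A minimal, informal way to record the design decisions above in one place.
# Field names and example values are our own shorthand, not an official template.

@dataclass
class PrizeDesign:
    competition_structure: str            # e.g. "best-by-deadline, threshold"
    stages: list[str]                     # e.g. ["Ready", "Set", "Go"]
    evaluation_method: str                # "objective", "subjective", or "hybrid"
    award_types: list[str]                # e.g. ["cash", "vouchers", "network support"]
    per_winner_amounts: dict[str, int]    # award per winner at each stage
    eligibility: str
    ip_treatment: str
    application_burden: str
    support_during_competition: list[str]
    stage_length_months: int
    venue: str
    cadence: str                          # "one-off" or "recurring"

# Illustrative entry loosely modeled on the Solar Prize structure described above.
solar_like = PrizeDesign(
    competition_structure="best-by-deadline threshold, then ranked placement",
    stages=["Ready", "Set", "Go"],
    evaluation_method="hybrid",
    award_types=["cash", "vouchers", "network support"],
    per_winner_amounts={"Ready": 50_000, "Set": 100_000, "Go": 500_000},
    eligibility="progressive restriction",
    ip_treatment="competitors retain all IP",
    application_burden="minimal (5-page plan, 90-second video, summary slide)",
    support_during_competition=["mentorship", "technical vouchers"],
    stage_length_months=4,
    venue="hybrid (written submissions plus in-person final pitch)",
    cadence="recurring",
)
```

The value of writing the choices down this explicitly is that blank fields are visible: a dimension you have not decided yet is a dimension someone will decide for you later, usually during review.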
For the rest of the launch sequence (governance mapping, timeline architecture, partner agreements, and impact tracking), the full-stack program design framework applies. What follows are pitfalls specific to prizes that the general framework doesn’t cover.
Step 4: Prize-Specific Pitfalls
The governance trap. Decision authority and communication authority are separate approval chains. The Solar Prize faced a six-month delay between selecting winners and being allowed to announce them. We had authority to make selections. We assumed that meant we could communicate those decisions. It didn’t. The people reviewing the announcement package were senior officials who had never heard of the prize, didn’t understand prize authority, and surfaced all the old fears about no-strings-attached funding. Map both chains before you launch. Identify who approves the competition, who approves selections, and who approves communications; get them all up to speed on your prize as soon as practical, then remind them throughout the process. They are often different people with different levels of awareness.
One-off when recurring is possible. We covered this in Step 3 under Program Cadence, but the decision is consequential enough to revisit. If you can position a new prize as the first round of a recurring program rather than a one-off competition, it opens doors that are hard to open otherwise. You can start thinking about funding sources for Round 2 a year in advance. You can frame impact measurement around what you’ll need to justify continuation. You can recruit applicants who missed Round 1 by pointing them to Round 2. You can iterate on design based on what you learned. Your prize can build a reputation and brand in the target community. One-off programs don’t get any of this. By the time outcomes materialize, staff may have moved on to other projects and nobody is around to document what worked. The recurring structure creates accountability and momentum that one-off programs struggle to generate.
Partner agreement timing. If your design includes vouchers, network support, or dedicated applicant support, those agreements need to be in place before competitors need them. Prize timelines are compressed compared to grants. A 3-month or 4-month competition window doesn’t leave room for legal agreements to work their way through procurement. The Solar Prize initially used 3-month windows and discovered that voucher agreements with National Labs couldn’t be processed fast enough for competitors to actually use them during the competition. Build in more lead time than you think you need.
Realistic technical requirements. While prizes can establish ambitious goals, setting targets that are practically unachievable discourages serious competitors and wastes everyone’s time. Before finalizing technical requirements, confirm that between available funding, expertise, and facilities, participants have a reasonable path to success.
Scope creep into capacity building. When prizes start working, they can become the default tool for everything. New staff who are trained on prizes and see their success may develop tunnel vision, reaching for a prize structure even when other mechanisms would be a better fit. The risk compounds over time. Eligibility expands. Stages get added where everyone advances. Structures emerge where most participants win. At that point the mechanism is breaking down. Prizes are competitions. They work by being selective. If your goal is to support a broad set of participants rather than select winners, a prize is probably the wrong tool. Other mechanisms exist for that purpose. The same full-stack thinking that led you to choose a prize for the right problem should prevent you from using it for the wrong one.
Step 5: Resources
One theme runs through everything we’ve learned about program design, and it’s especially true for prizes: collaborating with other practitioners always generates a better result. Prizes seem easy and intuitive to design. Define a goal, offer money, pick a winner. But there is nuance at every level, and unknown unknowns that can lower or eliminate impact. The governance trap we hit with the Solar Prize wasn’t in any handbook. We learned it by making the mistake. Someone who had run a prize before could have warned us. Seeking out other practitioners and leveraging existing resources isn’t optional. It’s a critical step.
For federal agencies, Challenge.gov is the official repository for prize competitions. All federal prizes should be posted there, and the General Services Administration runs a federal Prize Community of Practice through that platform.
NASA’s Center of Excellence for Collaborative Innovation (CoECI) provides hands-on support for challenge-based initiatives. It was established in 2011 to help federal agencies experiment with these methods before standing up their own capabilities. Most federal staff outside NASA don’t know CoECI is available to them. There is also an informal distribution list of federal prize practitioners from many agencies who frequently ask each other for advice; seek out an experienced federal prize staff member and they can likely add you to the list.
For practitioners across sectors, the Global Prize Network launched in 2025 as a LinkedIn group connecting prize designers from government, philanthropy, and private sector organizations. It emerged from a gathering during Climate Week where organizations that fund and run prizes realized they were all solving similar problems independently.
If all else fails, shoot us an email via the website listed at the end of this post and we will do our best to help.
Conclusion
I wish there were more to share in the previous section. The resources that exist are useful but scattered, and the community of practice we described in Part 4 is still more aspiration than reality. We’re specifically interested in helping make more resources available.
This series started with a 13-year story about institutional resistance and ended with a checklist. The path from “that’s not how we do things here” to 100+ prize competitions at DOE required persistence, timing, and leaders willing to spend political capital on something new. We wrote it down because too much of this knowledge stays locked in the heads of practitioners who figured it out through trial and error, and disappears when they move on.
Prizes are a powerful tool when used well. They can reach applicants your organization has never seen, support hard tech that doesn’t fit traditional funding models, and generate outcomes that justify continued investment. But they’re not magic. The impact comes from design, not the mechanism itself.
If this guide helps you launch a prize, we’d like to hear how it goes. If you’ve run prizes and learned lessons we missed, we’d like to hear that too. The ecosystem gets stronger when practitioners share what they know.
Innovation Waypoints is brought to you by Waypoint Strategy Group.



