Discussion about this post

Greg Smestad

A [Systems] diagram that can help visualize the points of this excellent article can be found here: J. Morabito, T. Peterson, G. P. Smestad, and K. DeGroat, “Systems Analysis and Recommendations for R&D and Accelerated Deployment of Solar Energy,” 2009 Peer Review Meeting, U.S. Department of Energy’s (DOE) Solar Energy Technologies Program, Denver CO, March 2009. Download White Paper: https://www.solideas.com/publications/

https://www.researchgate.net/publication/242081639_Systems_Analysis_and_Recommendations_for_RD_and_Accelerated_Deployment_of_Solar_Energy

DNVO

I agree with the central premise that up-front design matters more than ex post evaluation, but the critical question is: what kind of design are we actually talking about?

Government programs are not built for speed or efficiency, yet they continue to invest heavily in evaluation activities that prioritize data collection over insight, with little evidence of sustained tracking of real system or market outcomes. As you describe, the result is a proliferation of metrics that satisfy governance requirements but rarely inform decisions or reflect true impact.

A redesigned approach should embed, from the outset, a deliberate set of KPIs that concurrently assess operational performance (internal execution) and commercialization or system impact (external outcomes). These metrics must be explicitly aligned with the initiative’s mission, value proposition, and theory of change. This requires going deeper during program design (e.g., asking “why” repeatedly until the true source of intended impact is clear).
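To make that concrete, here is a minimal sketch (Python, with hypothetical class names, metric names, and targets) of what such a dual-track KPI set, explicitly tied to a theory of change, could look like if it were defined at program design time rather than bolted on for reporting:

```python
from dataclasses import dataclass, field
from enum import Enum


class Track(Enum):
    OPERATIONAL = "operational"  # internal execution: milestones, spend, cycle time
    IMPACT = "impact"            # external outcomes: commercialization, system/market effects


@dataclass
class KPI:
    name: str
    track: Track
    theory_of_change_link: str   # the stated "why": how this metric signals intended impact
    target: float
    unit: str


@dataclass
class ProgramDesign:
    mission: str
    value_proposition: str
    kpis: list[KPI] = field(default_factory=list)

    def unlinked_kpis(self) -> list[KPI]:
        """Flag metrics that exist only to satisfy governance, i.e. with no causal link stated."""
        return [k for k in self.kpis if not k.theory_of_change_link.strip()]


# Hypothetical example: both tracks are specified before launch, each traced to the theory of change.
program = ProgramDesign(
    mission="Accelerate deployment of solar energy technologies",
    value_proposition="De-risk early-stage technologies to the point of private investment",
    kpis=[
        KPI("milestone_on_time_rate", Track.OPERATIONAL,
            "Execution discipline enables timely learning cycles", target=0.9, unit="fraction"),
        KPI("follow_on_private_investment", Track.IMPACT,
            "Private capital signals genuine commercialization pull", target=50.0, unit="$M"),
    ],
)
print([k.name for k in program.unlinked_kpis()])  # [] -> every metric traces to the theory of change
```

The point of the `theory_of_change_link` field is that any metric without a stated causal story is surfaced immediately, instead of surviving as reporting overhead.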

Finally, this argues for a more agile, learning-oriented design philosophy: prioritizing short feedback cycles, early signals, and iterative adjustment over traditional predictive models that look rigorous on paper but rarely survive contact with complex, evolving systems.
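As a rough illustration of that last point (again a sketch, with hypothetical thresholds and metric names), the alternative to a single up-front predictive model is a short review cycle that acts on whatever early signals exist, and treats missing data as a finding in itself:

```python
def quarterly_review(targets: dict[str, float], observed: dict[str, float]) -> dict[str, str]:
    """Map each KPI to an action based on early signals; absence of a signal is itself actionable."""
    actions = {}
    for name, target in targets.items():
        value = observed.get(name)
        if value is None:
            actions[name] = "instrument"          # no signal yet: fix measurement, don't guess
        elif value < 0.5 * target:
            actions[name] = "pivot or de-scope"   # weak early signal: adjust course now
        else:
            actions[name] = "continue or scale"   # on track: keep going
    return actions


print(quarterly_review(
    {"follow_on_private_investment": 50.0, "milestone_on_time_rate": 0.9},
    {"milestone_on_time_rate": 0.95},
))
# {'follow_on_private_investment': 'instrument', 'milestone_on_time_rate': 'continue or scale'}
```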

