Recap of 2025

Beacon Labs

In 2025, two major milestones defined our year. The first was launching Beacon Labs as an independent organization. From 2023 to 2024 we operated as Fracton Research, but we spun out from Fracton Ventures and became independent under the name Beacon Labs1.

As before, Beacon Labs is an R&D institution working toward a positive-sum world by exploring inclusive coordination design, with a particular focus on the challenges of financing public goods. However, we feel that public goods funding has entered a phase of reflection and reassessment: experimental approaches such as Quadratic Funding (QF) and Retro Funding have reached a point where their initial momentum has settled. Kevin Owocki, co-founder of Gitcoin, made a similar observation in his essay.

Kevin Owocki's essay

QF and Retro Funding are popular mechanisms in public goods funding, but they tend to devolve into popularity contests, and personal relationships between grant makers and recipients can influence how funds are distributed2. Supporting projects that are popular or familiar to a specific community is not inherently problematic, but relying solely on such social effects is not a healthy basis for resource allocation. We believe that making explicit the diverse values each project embodies reduces dependence on any single value system and contributes to a healthier allocation of funds.

The concept of impact evaluation has been applied in Retro Funding, which is based on the principle of “impact = profit”. Retro Funding embodies the idea that resources should be allocated based on track records validated through impact evaluation, rather than on promises of future impact. However, conducting impact evaluation has been difficult in practice. Many initiatives fail to adequately measure actual outcomes, focusing instead on outputs. Proper impact evaluation requires examining the difference between outcomes before and after a project’s implementation—but current analytical environments are insufficient for this. We need to improve the conditions that make this possible.

This challenge directly connects to our second major milestone of the year: the development of MUSE3, an OSS project led by Beacon Labs.

MUSE enables evidence-based planning for impact evaluation. As mentioned, impact evaluation hinges on comparing outcomes before and after an intervention, so deciding which outcomes to track, and how they relate causally, must happen during the planning stage. It is therefore critical to understand the baseline situation and to predefine what should be measured after implementation.

Moreover, continuous data collection, analysis, and monitoring, both before and throughout implementation, is crucial. Without anticipating what data can be collected and when, meaningful evaluation after the fact becomes difficult. Considering the counterfactual, that is, what would have happened without the intervention, is also an indispensable part of impact evaluation.
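To make the counterfactual idea concrete, here is a minimal difference-in-differences sketch. This is an illustration of the general technique, not part of MUSE; all numbers and names are hypothetical.

```python
# Difference-in-differences: a minimal illustration of counterfactual
# reasoning in impact evaluation. All figures below are invented.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Estimate an intervention's effect by subtracting the control
    group's trend (a proxy for the counterfactual) from the treated
    group's observed change."""
    treated_change = treated_after - treated_before
    counterfactual_change = control_after - control_before
    return treated_change - counterfactual_change

# Hypothetical example: monthly active contributors to an OSS project
effect = diff_in_diff(
    treated_before=40, treated_after=70,   # project that received a grant
    control_before=38, control_after=48,   # comparable unfunded project
)
print(effect)  # 30 - 10 = 20 contributors attributable to the grant
```

The control group's change stands in for "what would have happened anyway," which is exactly why comparable baseline data must be planned for before the intervention begins.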

With MUSE, grant program operations and OSS development initiatives can be planned using explicit causal pathways (logic models). This shifts policy and program design toward evaluation-readiness from the outset—raising evaluability during the planning stage. The resulting causal pathways, supported by evidence curated by MUSE, enhance accountability and persuasiveness in program design.
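To illustrate what an explicit causal pathway might look like as data, here is a small logic-model sketch. The schema and all names below are hypothetical and do not reflect MUSE's actual data model.

```python
# A hypothetical logic-model schema, sketched for illustration only.
# MUSE's real data model may differ; all names here are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    indicator: str  # what will be measured, defined at planning time

@dataclass
class LogicModel:
    # The classic logic-model chain: inputs -> activities -> outputs -> outcomes
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

model = LogicModel(
    inputs=[Node("Grant funding", "USD disbursed")],
    activities=[Node("Developer grants program", "Grants awarded")],
    outputs=[Node("Maintained OSS libraries", "Releases published")],
    outcomes=[Node("Healthier dependency ecosystem", "Active downstream projects")],
)
print(len(model.outcomes))  # 1
```

Encoding the pathway this way forces each step to carry a predefined indicator, which is what makes a program evaluable from the outset.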

List of evidence curated through MUSE (Evidence Cards)

Causal pathway (logic model) generated with MUSE

Contributing to the Impact Evaluation Ecosystem

Through the development of MUSE and related research, we continued contributing to the impact evaluation ecosystem for digital public goods.

From July to August, we participated in the Impact Evaluator Research Retreat (IERR)4, held in Iceland. IERR is an immersive research retreat focused on the Impact Evaluator (IE) framework, evaluation mechanisms, and decentralized funding systems. It was led by members of Protocol Labs, GainForest, and the Ethereum Foundation, organizations at the forefront of public goods funding and impact evaluation. Participants included developers, researchers, and data scientists from the blockchain ecosystem, as well as academics such as mathematicians and professors. In total, 25 participants from 17 countries, spanning every continent except Antarctica, gathered.

In November, we attended Devconnect Argentina hosted by the Ethereum Foundation and gave a talk at Funding the Commons Buenos Aires on the importance of evidence in impact evaluation.

Presentation at Funding the Commons Buenos Aires

Around the same time, we also participated in the Code for Japan Summit 2025, the Asia Pacific Evaluation Association (APEA) conference, and the Japan Evaluation Society conference.

Presentation at Code for Japan Summit 2025

Through these opportunities, our reach expanded beyond the Ethereum ecosystem into digital public goods, evidence-based policymaking (EBPM), and academic research.

Looking Ahead to 2026: From Ethereum to Digital Public Goods, and into the Real World

Next year, Devcon will be held in Mumbai and the Global Evidence Summit in Bhubaneswar, both in India. We aim to continue contributing to the advancement of impact evaluation for digital public goods, while sharing insights, methodologies, and case studies from the digital public goods space back into evaluation science and the EBPM community.

Devcon India announcement at Devconnect Argentina

Innovative concepts such as QF, prediction markets, and futarchy saw some of their first large-scale implementations in the Ethereum ecosystem. Through continued experimentation on Ethereum, these concepts have gradually gained social adoption.

Meanwhile, impact evaluation and EBPM have yet to take root in real-world systems, facing institutional hurdles and limited civic demand. If they can be validated and refined within the Ethereum ecosystem as a testbed, they may help drive real-world implementation from outside existing institutional frameworks.

Footnotes

  1. Beacon Labs. (2025). Beacon Labs: Beacon for Pluralistic Public Goods Funding. https://beaconlabs.io/reports/beacon-labs/

  2. Vitalik Buterin. (2025). d/acc: one year later. https://vitalik.eth.limo/general/2025/01/05/dacc2.html

  3. Beacon Labs. (2025). Evidence Layer for Digital Public Goods. https://beaconlabs.io/reports/evidence-layer-for-digital-public-goods/

  4. Shuhei Tanaka. (2025). Impact Evaluator Research Retreat 2025 Report. https://beaconlabs.io/reports/ierr2025/