
Rethinking SSTC Evaluation: From Grassroots Realities to Behavioral and Causal Insights

  • Writer: Zhiqi Xu
  • May 19
  • 3 min read

Note

This blog is adapted from my contribution to the discussion on EvalforEarth, “Maximising the impact of South-South and Triangular Cooperation in a changing aid architecture through evaluation.” You can find the original discussion thread here: EvalforEarth Discussion. Many thanks to the initiators—Carlos Tarazona (FAO), Arwa Khalid (FAO), Javier Guarnizo (UNIDO), and Xin Xin Yang (UNICEF)—for raising this timely topic and to the platform for creating the space for thoughtful dialogue.



In discussions around evaluating South-South and Triangular Cooperation (SSTC), we often get caught up in frameworks, donor strategies, or high-level indicators. But in my view, we risk missing the real impact if we overlook three crucial elements: the role of grassroots actors, the measurement of intangible outcomes, and smarter ways to handle attribution.

Let me illustrate these points with a case study and two methodological approaches drawn from interdisciplinary perspectives—behavioural science and econometrics.


1. Local Actors Often Make the Difference

In a UNDP-supported micro-finance project I studied, village leaders were inspired by Bangladesh’s Grameen Bank model. They tried to replicate it, but it didn’t go smoothly—initial uptake was poor, and bad debts mounted. The idea of micro-finance, at first, just didn’t translate well.

The UNDP project transformed into a farmers' association focusing on cattle-raising. Sichuan, China, 2013

What changed things? Not policy tweaks from above, but the persistence of grassroots organisations and local leadership. Over time, they adapted the model to fit their social and economic realities. Eventually, it evolved into a successful, resilient farmers’ association.



A Regular Assembly of Yilong Farmers' Association. Sichuan, China, 2012




If we rely solely on short-term evaluations, we risk labelling this as a failure—missing the bigger story. Localisation is often a slow, messy process. But it’s powerful. Evaluations need to leave room for this kind of long-term, adaptive success. That means recognising local feedback and the long arc of change.



2. Measuring Intangible Outcomes Through Psychology and Behavioural Science

Empowerment. Ownership. Mutual learning.

These outcomes are often seen as “soft,” elusive, or impossible to measure. But psychology and behavioural science have been grappling with these concepts for decades, and they have developed validated frameworks that could make our evaluations richer and more precise.

That said, adaptation is key. Most of these tools were developed for WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations. If we want meaningful insights in diverse settings, we need to tailor these instruments to fit local contexts. Doing so will help evaluators capture not just what changed, but how and why—especially in deeply human dimensions like trust, learning, and motivation.
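One practical first step when adapting an instrument to a new context is checking whether the translated items still hang together statistically. Below is a minimal sketch computing Cronbach’s alpha, a standard internal-consistency check, on synthetic responses to a hypothetical four-item empowerment scale (the scale, variable names, and data are all illustrative, not from any real study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability; rows are respondents, columns are items."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

rng = np.random.default_rng(1)
n = 200

# Hypothetical 4-item scale: items share one latent factor plus item-level noise
latent = rng.normal(0, 1, n)
items = latent[:, None] + rng.normal(0, 0.8, (n, 4))

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha: {alpha:.2f}")
```

An alpha well below conventional thresholds (often cited around 0.7) after translation is a warning that items may not be measuring the same construct in the new setting, which is exactly the kind of adaptation problem raised above.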


3. Tackling Attribution Complexity with Stronger Causal and People-Centered Analysis

Attribution remains a classic challenge, especially in environments where multiple initiatives overlap. But instead of just flagging this as a limitation, we can embrace more robust and people-centered methods.

Causal inference techniques—like Propensity Score Matching (PSM), natural experiments, and well-structured comparison groups—offer tangible ways to identify what worked. Even in universal programs, timing differences (early vs. late adopters) can create valuable natural comparisons over time.
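To make the PSM idea concrete, here is a minimal sketch on synthetic data using scikit-learn: estimate propensity scores, match each treated unit to its nearest-propensity control, and compare the matched estimate with the naive difference in means. The data-generating process, variable names, and effect size are hypothetical illustrations, not results from any real programme:

```python
# Sketch: propensity score matching on synthetic, confounded data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000

# Covariates that drive both participation and the outcome
age = rng.normal(45, 10, n)
income = rng.normal(3.0, 1.0, n)

# Younger, lower-income households are more likely to participate
p_treat = 1 / (1 + np.exp(0.05 * (age - 45) + 0.5 * (income - 3.0)))
treated = rng.random(n) < p_treat

# True programme effect of +2.0, plus confounding through age and income
outcome = 2.0 * treated + 0.1 * age + 1.0 * income + rng.normal(0, 1, n)

# Step 1: estimate propensity scores from the covariates
X = np.column_stack([age, income])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control with the closest score
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# Step 3: average treatment effect on the treated (ATT) from matched pairs
att = (outcome[treated] - outcome[~treated][idx.ravel()]).mean()
naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"Matched ATT estimate: {att:.2f}")
print(f"Naive difference in means: {naive:.2f}")  # biased: treated units are poorer
```

The naive comparison is pulled downward because participants differ systematically from non-participants; matching on the propensity score recovers an estimate close to the true effect, which is the core logic behind using PSM when randomisation is not available.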

In my recent elderly care study, I applied Latent Profile Analysis (LPA) to identify subgroups based on psychological traits, willingness, and demographic factors. This helped explain why treatment effects looked inconsistent in the aggregate—there was hidden diversity driving different responses. Bringing such segmentation into SSTC evaluations can surface nuanced insights that traditional averages obscure.

Ultimately, segmenting populations based on both timing and underlying profiles can lead to more accurate assessments—and more actionable learning.
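LPA is, at its core, a finite mixture model over continuous indicators, so a Gaussian mixture model captures the same idea. The sketch below uses synthetic data with hypothetical indicators (trust, willingness, uptake) to show how a pooled average can mask two profiles with very different treatment responses; none of the numbers come from the study mentioned above:

```python
# Sketch: recovering latent subgroups with a Gaussian mixture model,
# the statistical workhorse behind Latent Profile Analysis.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
n_each = 300

# Two hidden profiles of respondents; columns: trust, willingness, uptake
receptive = rng.normal(loc=[4.0, 4.2, 0.8], scale=0.5, size=(n_each, 3))
hesitant = rng.normal(loc=[2.0, 2.5, 0.2], scale=0.5, size=(n_each, 3))
X = np.vstack([receptive, hesitant])

# Treatment effect differs by profile: strong for one, near zero for the other
effect = np.concatenate([rng.normal(2.0, 0.5, n_each),
                         rng.normal(0.1, 0.5, n_each)])

# Fit a two-profile model and assign each respondent to a profile
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# The pooled average hides the heterogeneity; per-profile means reveal it
print(f"Pooled mean effect: {effect.mean():.2f}")
for k in range(2):
    mask = labels == k
    print(f"Profile {k}: mean effect {effect[mask].mean():.2f}, n={int(mask.sum())}")
```

In practice the number of profiles is chosen with fit criteria such as BIC rather than fixed in advance, but the point stands: a single average can report a “moderate” effect that no actual subgroup experiences.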


In Short: To truly understand the impact of SSTC, we need to:

  • Recognise the catalytic role of grassroots actors;

  • Adopt innovative tools from psychology and behavioural science to capture intangible outcomes;

  • Apply diverse causal methods to clarify contribution and reveal hidden heterogeneity.


These approaches require more thoughtful design and analysis—but they offer a path toward evaluations that are not only more credible, but also more locally grounded and policy-relevant.


I’d love to hear others’ experiences—how have you approached these challenges in your own SSTC evaluations?



Contact Information

International Institute of Social Studies, Erasmus University Rotterdam

Kortenaerkade 12, 2518 AX Den Haag, The Netherlands

z.xuATiss.nl


©2025 by Zhiqi Xu.
