Advocacy is the art of advancing a cause, idea or policy within political, economic and/or social institutions. It requires an ability to deftly navigate power dynamics, build influence, collaborate with (sometimes unlikely) allies and think on your feet.
Continuous, rigorous evaluation of advocacy is vital to ensure we’re learning from past experiences, pivoting as the ground shifts beneath us and investing our time and energy where it’s most likely to have the biggest return.
But applying traditional evaluation tools to advocacy work tends to miss the mark. At best, approaches like log frames and pre-fixed metrics provide imperfect and incomplete insights into advocacy results. At worst, these approaches can clip the wings of advocates by binding them to static indicators that become increasingly less relevant as the context inevitably changes.
Evaluation approaches for advocacy need to be as adaptive and contextually aware as advocacy work itself. I haven’t quite figured out what this looks like and, while there are many smart people who have been talking about this, I don’t think anyone else has either.
That makes this sandbox an exciting place to play.