From CAHRD to Campbell

After two days of an intensive consultation meeting at the Collaboration for Applied Health Research and Delivery (CAHRD), where the focus was on learning from stakeholders about the future direction of applied health research in low- and middle-income countries, I set off to Belfast to attend the 2014 Campbell Collaboration Colloquium. Having been a traditional ‘Cochraner’ for some time, I found it a bizarre experience to meet so many people who, while doing the same type of work (systematic reviews) for the same purpose (informing policy and practice), are doing it in quite different contexts (education, crime and justice, and international development). It is somewhat like travelling to a different country in the modern world – you see people doing the same thing, such as going to a restaurant, but they have quite different menus and speak a different language.

Talking of language, one common issue that emerged from both meetings is terminology. In the CAHRD meeting we talked about the need for standardised terminology in health service delivery research. As an example, the term “health system” means different things to different people and is often used when people want to describe something about health care but know relatively little about it. At the Campbell conference I joined a session of the Knowledge Translation and Implementation (KTI) group where we were tasked with consolidating the definition of ‘knowledge translation’. The group leaders presented no fewer than 15 related terms (such as knowledge mobilisation and technical assistance) and identified 61 different frameworks or models of KTI through preliminary research. The task of resolving these differences and reaching a consensus seems daunting.

While differences appear to be ubiquitous, many of them need not be a cause for concern so long as they do not lead to misunderstanding and ignorance. In the world of Campbell I soon got used to the term “moderator analysis”, which I had previously known only in the context of subgroup analysis and meta-regression for exploring potential sources of heterogeneity, and “impact evaluation of a development programme”, which appears somewhat similar to health technology assessment of new drugs, with which I am more familiar. I realised that although the names may differ and the techniques and emphasis may (quite rightly) vary to suit a different context, the principles are the same.

With my unease dissipated, I quickly started to enjoy exploring the new territory – as expected at such a conference, there were many interesting things to uncover. For example, Professor Paul Connolly talked about how randomised controlled trials (RCTs) are depicted negatively in research methods textbooks as an unrealistic method advocated by positivists ignorant of the complex world of teaching and learning. He also detailed how the team at the Centre for Effective Education, based at Queen’s University Belfast, has managed to conduct more than 30 RCTs in education settings since 2007. My recent task of sifting through nearly 10,000 records for a systematic review is easily dwarfed by the efforts of international colleagues who have trawled through over 60,000 records for a review of youth crime and violence. However, against the rather gloomy prospect of soon getting lost in the ever-expanding sea of information comes the welcome news that the Evidence for Policy and Practice Information and Co-ordinating (EPPI) Centre (a major player in the field of evidence synthesis in education and social policy) has developed smart software that uses text mining and machine learning to automatically ‘prioritise’ the references most likely to be relevant for a review, based on the input of a few key words.
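To give a flavour of what such prioritisation can look like, here is a minimal sketch of keyword-based relevance ranking. It is only an illustrative toy, using made-up example records and simple TF-IDF similarity from scikit-learn; it is not the EPPI Centre’s actual software, which is considerably more sophisticated.

```python
# Toy sketch of reference prioritisation (illustrative only, not EPPI-Reviewer):
# rank candidate records by TF-IDF cosine similarity to a few key words.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical record titles standing in for a real search result set.
records = [
    "Community-based packages to reduce maternal and neonatal mortality",
    "Export processing zones and labour conditions in developing countries",
    "Pay-for-performance based on exam results in primary schools",
]
keywords = "maternal neonatal mortality community intervention"

# Build a TF-IDF representation of the records plus the keyword query.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(records + [keywords])

# Score each record against the query and list the most relevant first.
scores = cosine_similarity(matrix[:-1], matrix[-1]).ravel()
for score, title in sorted(zip(scores, records), reverse=True):
    print(f"{score:.2f}  {title}")
```

In a real screening workflow the ranking would also learn from reviewers’ include/exclude decisions, but even this simple keyword ranking conveys the basic idea of pushing the most promising references to the top of the pile.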

One of the most inspiring talks was delivered by Dr Howard White, who illustrated how the lack of lasting changes backed up by solid evidence has left education and social policy vulnerable to the influence of short-term political cycles. The example he quoted was the resurfacing of the debate on the merits of pay-for-performance based on exam results in school settings – an issue claimed to have been resolved in a book concerning the education system in West Africa in the 1920s.

For people like me who have mainly been involved in evidence synthesis and evaluation in health care, but are curious about their application in the wider world, the International Initiative for Impact Evaluation (3ie), of which Dr White is the Executive Director, is well worth looking into. It is a US-based, not-for-profit organisation that commissions and carries out in-house systematic reviews and impact evaluations of development programmes for developing countries. It has offices in Washington, New Delhi and London and has commissioned or carried out more than 130 impact evaluations and 30 systematic reviews since 2009. Topics have been diverse, ranging from the more familiar, such as a systematic review of community-based intervention packages for reducing maternal morbidity and mortality and improving neonatal outcomes, to the less familiar, for example, an impact evaluation of the effects of export processing zones on employment, wages and labour conditions in developing countries. All reports are available from their website, which also includes a wealth of other resources such as evidence ‘Gap Maps’, methodological working papers, a prospective registry for international development impact evaluations, and a searchable database of evaluation experts.

My final reflection upon the journey through both meetings is that, to achieve the common aspiration of evidence-informed policy and practice, we need to break down the boundaries of disciplines and ideologies, and to understand and embrace differences rather than exclude or ignore them, so that the diverse strengths of individuals and organisations can be harnessed to the fullest extent to expedite progress. Perhaps science has its own cycles, just like politics, and after a period of phenomenal advances in increasingly divided subject areas, the time has come to focus on how to integrate and synergise specialised knowledge.

Yen-Fu Chen with Martina Vojtkova, Evaluation Specialist from the 3ie, at the Campbell Collaboration Colloquium 2014.

— Yen-Fu Chen, Senior Research Fellow
