Governments have been collecting data on their citizens almost from the first moment that they came into being. Data was needed to determine what was out there that could be extracted: The Egyptian Pharaohs conducted a census to find out the scale of the available labor force to build the pyramids, and in the Roman Empire, the five-yearly census was all about finding out who was available for military service and what wealth existed to be taxed. But governments have also used data to find out what people needed: The ancient Babylonians collected data from their citizens nearly 6,000 years ago in order to understand how much food was required to feed their population, and the Egyptian census was also used to work out how to divide the land after the flooding of the Nile.
As the activities of governments have become more complex, and people’s expectations have grown, more and more data has been collected. Early censuses just recorded numbers of people, with basic information about wealth and maybe occupation. It’s all more complicated now—the most recent U.K. census, held in 2011, had 56 different questions. And that’s not even including data collected by the health service, the education department or the myriad other public and private bodies who collect and use information about individuals.
A lot is known about some people, and that information is used by governments and other institutions to design, implement and evaluate policy on their behalf. But very little is known about others—and the decisions that are made for and about them are likely to be much worse as a result.
The state of data on some of the world’s people is so bad that even facts that appear incontrovertible are highly questionable. According to the latest figures, maternal mortality rates in Africa dropped from 740 deaths per 100,000 births in 2000 to 500 deaths per 100,000 births in 2010. Good news. But look more closely and you find that actual data from civil registration systems on maternal mortality is only available for 16 percent of all the world’s births. The vast majority of what we think we know about maternal mortality is the product of estimates—basically statistical guesswork—high-quality and serious stuff, yes, but still guesswork.
For the purposes of official U.N. statistics, maternal mortality is modeled using three variables—GDP, fertility rates and the national probability of having a skilled birth attendant present. And data on all of these is in turn highly unreliable—famously, Ghana’s GDP went up by 60 percent overnight after a recalculation; fertility rates are hard to measure in countries without national systems for registering births; and the figures on birth attendants are derived from survey data, which is incomplete, given the fact that only 28 out of 49 African countries have had a survey in the past seven years.
So while the “official” number of maternal deaths per 100,000 live births in sub-Saharan Africa in 2010 was 500, it could in fact be as low as 400 or as high as 750. In other words, there might not have been any fall in the maternal mortality rate at all. In fact, given the uncertainty in both the 2000 and the 2010 estimates, the maternal mortality rate could even have gone up.
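To make the arithmetic behind that caveat concrete, here is a minimal sketch using the figures quoted above; treating the low and high values as a rough uncertainty range around the 2010 estimate is an assumption made purely for illustration.

```python
# A minimal sketch of the interval arithmetic behind the caveat above.
# Figures are taken from the text; treating 400-750 as an uncertainty
# range for the 2010 estimate is an assumption for illustration only.

estimate_2000 = 740              # reported deaths per 100,000 live births in 2000
estimate_2010 = 500              # reported 2010 figure
low_2010, high_2010 = 400, 750   # plausible range quoted for 2010

print(f"Apparent fall in the point estimates: {estimate_2000 - estimate_2010}")
print(f"Fall if the true 2010 value is {low_2010}: {estimate_2000 - low_2010}")
print(f"Fall if the true 2010 value is {high_2010}: {estimate_2000 - high_2010}")
# If the true 2010 figure is near the top of the range, the "fall" all but
# disappears -- and once uncertainty around the 2000 figure is added as well,
# the rate could even have risen.
```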
Does it matter if governments, service providers and people in general don’t have data? Some pretty impressive things have been done in the world of development with terrible data—smallpox, for example, was eradicated despite the lack of censuses or basic health information in many countries. Progress is possible without data.
But this is true only in some areas, and only up to a point. The range and complexity of what governments are expected to do make life almost impossible without good data. Say a government wants to run a health service and allocate resources to it in a way that helps to reduce inequalities so that the sickest get more resources. In the U.K., which has plentiful data, the formula for allocating health care financing is based on the population in a given region, on the cost of supplying services in different areas and on a range of demographic factors relevant to health needs and health inequalities such as age, death rates, HIV rates, indicators of poverty and deprivation and even, for a time, the size of the homeless population in a given area. In addition to data on inputs, data on outcomes is also collected regularly so that the government can judge quite quickly if its attempts to improve people’s health are working or not.
Much of that information is simply not available to policymakers in large numbers of countries. Malaria, for example, is a leading cause of death and ill health in poor countries, so for allocating health resources in any country, information about where malaria is most prevalent is crucial, and progress on tackling malaria would be a key indicator with which to judge the success of health policies. But there is not enough data to be certain about either the incidence of malaria or trends in tackling it in countries that account for 85 percent of all (estimated) deaths from the disease. Malaria is just one example: Infection rates for key diseases such as HIV and AIDS are often extrapolated from the numbers among particular population groups—such as pregnant women visiting clinics, where HIV tests are mandatory. But these can be wrong—the actual rates of infection in Ethiopia, for example, turned out to be about half the estimated rates once a national survey was carried out.
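To see how extrapolating from an unrepresentative group can go wrong, consider a deliberately simplified sketch; all the numbers below are invented for illustration and are not the Ethiopian figures.

```python
# Illustrative only: extrapolating prevalence from a convenience sample
# (e.g. clinic attendees) to the whole population. All numbers are invented.

clinic_prevalence = 0.06    # rate observed in the surveyed subgroup
subgroup_share = 0.10       # share of the population that subgroup represents
rest_prevalence = 0.027     # (unmeasured) rate in everyone else

# Naive extrapolation assumes the whole population looks like the clinic sample.
naive_estimate = clinic_prevalence

# The true national rate is a population-weighted average of the two groups.
true_rate = subgroup_share * clinic_prevalence + (1 - subgroup_share) * rest_prevalence

print(f"Extrapolated estimate: {naive_estimate:.1%}")   # 6.0%
print(f"True national rate:    {true_rate:.1%}")        # about half of that
```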
Bad data is a recipe for bad decisions. Without data, governments and other institutions and individuals trying to make policies or run programs in any sector are operating almost blindly. They can’t know what inputs are needed where. And they can’t know if the things they are doing are actually working, as there’s no way of telling if things are getting better or not. Of course, governments will still make decisions, and some of these decisions will turn out well. But without data it’s harder for effective governments to know what to do, and it’s harder for people to hold ineffective governments to account without the evidence that data provides of poor outcomes and bad or corrupt decision-making.
The pressures for improving data quality are growing. The creation of globally agreed sets of indicators to monitor human progress, most famously the Millennium Development Goals agreed on in 2000, created new incentives to invest in a limited range of indicators to measure progress. Increasingly, the public is demanding better data too: As democratic government becomes the norm in most countries in Africa, Asia and Latin America, people expect data so they can judge the success of the politicians they vote for, and politicians need data to know how to satisfy people’s expectations and be re-elected. Many countries have strong campaigns calling for greater transparency of both outcome and, crucially, input data, so that citizens can track government spending and the results it is producing.
And as the general public in many donor countries becomes more skeptical about the positive effect of the aid they pay for, the onus is on governments to come up with the figures to prove that aid programs are doing something good. Within the development sector, pressure for better evaluations, for trials of new interventions to judge their effects and for a much more rigorous approach to demonstrating effectiveness, has led to much more attention being paid to data collection by those in charge of implementing projects—be they from official agencies, nongovernmental organizations or governments. The Millennium Villages project, a multimillion-dollar development project led by the economist Jeffrey Sachs, has become embroiled in controversy due to poor evaluation procedures and lack of data on control groups against which to measure success or failure.
The general concern for and complaints about data quality have crystallized in a commitment to action in recent months, with the call from the U.N.’s High-Level Panel on the Post-2015 Development Agenda for a “data revolution” and a global partnership on data to provide resources for improvements in data quality and quantity worldwide. This call has grabbed the imagination of official agencies, NGOs and governments worldwide, and there is real potential—at last—for resources and political will to improve what some commentators call the “statistical tragedy” of poor data in poor countries.
The resources involved are huge. Properly staffing and resourcing statistical offices in some countries would itself be a needed first step, which someone would have to pay for. Beyond that, the U.N. estimates the average cost per person of a national census is $4.60, although costs vary hugely—the cost per person of the most recent census in India was just 50 cents, while in the U.S. it was $42. The costs of a single census for the whole population of sub-Saharan Africa, at 910.4 million people, would therefore be somewhere between $450 million, if the costs are closer to those of the Indian census, and $4 billion, if the costs are closer to the world average. And such a census would have to be repeated every 10 years to produce useful and usable data. Add to that regular household surveys, which should take place around every five years, at around $1 million to $2 million each, to collect more in-depth information on trends for key demographic, social and economic issues; other data-collection exercises using mobile technologies, big data and so on; plus the cost of processing data into a form that is usable by governments, other institutions and, crucially, individual citizens; and the resources needed are really quite formidable.
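The range quoted above follows from straightforward multiplication; a short sketch, using only the per-person costs and the population figure given in the text, reproduces it.

```python
# Reproducing the article's back-of-the-envelope census costing.
population = 910_400_000         # sub-Saharan Africa, as quoted above

cost_per_person = {
    "India-style costs": 0.50,   # dollars per person
    "U.N. world average": 4.60,
    "U.S.-style costs": 42.00,
}

for label, unit_cost in cost_per_person.items():
    total = population * unit_cost
    print(f"{label:20s} -> ${total / 1e9:,.2f} billion per census round")

# India-style costs give roughly $0.46 billion (about $450 million); the
# world average gives roughly $4.2 billion -- the $450 million to $4 billion
# range in the text, and the census would need repeating every 10 years.
```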
Raising the money would be just the start. Spending it would also produce huge challenges. First among these would be prioritization—what data is really important to collect? There’s a lot of agreement on the basic demographic data that countries should have—information about numbers of people, births and deaths, incomes and assets, health and education levels and so on. But beyond that, there’s an almost infinite range of data that would be very useful—and each extra piece would add to the costs of collection. If, as we should probably assume will happen, a new set of global goals on sustainable development is agreed to in 2015, there will be new requirements for data to measure progress on them. This will include things that have never been measured in a comprehensive way in many countries: food waste, for example, or rates of domestic violence, both of which are likely to feature among the data needed to measure progress on the new goals, and for which there is nothing approaching credible figures in most countries. A global initiative on data would have to involve some compromise between a globally agreed set of core data and data that reflects different national priorities. But it’s inevitable that this will mean, in very many countries, collecting much more information than is available now.
The scale of ambition, however, must not be confined to data on objective indicators like income or education levels. The time has passed when it’s acceptable for a government or a researcher or an NGO to tell people that their lives are improving, without also asking them what they think. A global initiative on data would have to find out from the world’s citizens what their priorities are, what they feel about their lives and how they perceive the changes happening around them. Thanks to the work of opinion polling companies like Gallup and Ipsos over decades, there’s a huge amount of expertise on how to do this in a rigorous and credible way—and this is the moment to do it.
It’s not just about adding more and more questions to censuses or surveys. There are also some things that surveys might not be able to do, for which new approaches would be needed. Sensitive issues might require different ways of collecting data: Measuring crime rates or illegal activity, for example, through face-to-face surveys might be difficult if people are reluctant to speak openly—anonymous data collection through mobile phones might be more effective.
Sampling might also need to be rethought. A typical global opinion poll, such as the Gallup World Poll, asks questions of somewhere between 1,000 and 2,000 carefully selected people to represent a whole country. Often, this works well. But part of “better” data is also data that allows for more effective disaggregation of populations—if the commitment of a post-2015 agreement is to end poverty, more information is needed on who is poor to understand why and what to do about it. If, for example, poverty in a given country is overwhelmingly concentrated among a small ethnic group, or among people with a mental illness or a physical disability, then a sampled survey might not pick up a sufficient number of people in that group to really get a picture of their needs and the problems they face. Something more deliberate might be needed to provide the information required to track progress on poverty.
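A back-of-the-envelope calculation shows why; the 2 percent subgroup share below is an assumption chosen purely to illustrate the point.

```python
# Why a standard national sample can miss small groups.
# The 2% subgroup share is an illustrative assumption, not a real figure.

sample_size = 1_500       # a typical national opinion-poll sample
subgroup_share = 0.02     # e.g. a small ethnic group making up 2% of the population

expected_respondents = sample_size * subgroup_share
print(f"Expected respondents from the subgroup: {expected_respondents:.0f}")
# Around 30 people -- far too few to estimate that group's poverty rate or
# needs with any confidence, which is why deliberate oversampling or a
# dedicated survey may be required.
```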
Another part of better data might involve data that’s collected more often, or at specific times. In a country where employment is precarious, short-term and very informal, you might need to collect data on employment and wages from a sample of the population every month to really get a good picture of what’s going on in the labor market. And in an agricultural economy in which all the money comes in at once at harvest time, there’s a premium on asking the questions at the right time. If the researcher happens to come in 11 months after the harvest asking questions about income, people will have forgotten what they earned, and the quality of the data is likely to be worse than it would be if the same questions were asked at the right moment.
So better data means more data, but also data that’s collected in different ways and from some specific groups of people. There is, rightly, a huge amount of excitement about the potential for new technologies to help meet some of these challenges. Mobile phones and the Internet can help enormously with coverage and with the more frequent collection of data at a low cost. But, as with all apparent “magic bullets” in the development sector, these technologies are not, in fact, magic bullets.
Experience with MY World, a global survey on people’s priorities run by the Overseas Development Institute and the U.N., suggests caution. Though most of the respondents are asked their views through face-to-face surveys or on the Internet, there has also been experimentation with different ways of conducting the survey by mobile phone. Of the more than a quarter of a million responses collected via mobile phones, seven out of 10 come from men. This is probably a problem that can be dealt with, but it does illustrate that, even with shiny new technologies, the old rules about representativeness and rigor still apply.
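One standard way of dealing with that kind of skew is post-stratification weighting: scaling each group’s responses so that they match its share of the population. The sketch below assumes a 50/50 gender split in the population and response counts that roughly match the seven-in-ten figure above; both numbers are illustrative, not the MY World data.

```python
# A minimal sketch of post-stratification weighting to correct a gender skew.
# The 50/50 population split and the response counts are illustrative only.

responses = {"men": 175_000, "women": 75_000}      # roughly 7 in 10 from men
population_share = {"men": 0.50, "women": 0.50}    # assumed target shares

total = sum(responses.values())
weights = {
    group: population_share[group] / (count / total)
    for group, count in responses.items()
}
print(weights)   # men weighted down (~0.71), women weighted up (~1.67)

# Weighted results then better reflect the whole population -- provided the
# women who did respond are not themselves systematically different from
# those who did not.
```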
Of course, collecting more data is all very well, and it would be a big step forward from the current situation. But it’s nothing unless it is presented in a form that people can actually use. Data is the raw material for information; it’s not the information itself. Making data useful would also involve a huge processing and dissemination job to turn the new data into products that could be used by people, organizations and governments to monitor progress and improve decision-making and practice.
The barriers to all of this are political as well as financial and technical. Some facts are known but kept hidden for political reasons by governments seeking to avoid too much scrutiny of their decisions. Increasingly, citizens and NGOs are demanding more data on inputs. How much are governments, official agencies and NGOs spending on different projects or sectors? Where is the money going and to whom? What money is being earned in the country, by what individuals and companies, and what tax is being paid on it? Most of this information is known by somebody—making it public is more about tackling the politics than about technical issues having to do with collection and analysis.
Not all new data will be popular either. Data can make governments or other bodies look bad if outcomes are worse than people think they are. This isn’t a new problem: Data from one of the earliest modern censuses, carried out by the Swedish government in 1749, was kept secret because, to the government’s surprise, the population turned out to be smaller than expected—something of an embarrassment and a military risk. A data revolution will need constant vigilance and monitoring to make sure that bad as well as good news goes public.
Researchers have been lamenting bad data for decades. It’s almost an iron law of development conferences that once two or more researchers are gathered in a room, they will start to talk about bad data. But most people have been unaware of just how bad the situation is—and of the consequences of knowing so very much less than we think we do.
There is, finally, now an opportunity to do something about this sorry situation. It would take a wholehearted engagement from all the usual players—the World Bank, the U.N., NGOs, national statistical offices and academics, plus newcomers like mobile phone companies, opinion pollsters and companies that collect data for commercial purposes. Big funders would need to be involved, and also citizens themselves, to give their views on the indicators and the data that matter to them.
Improving data might seem like a geeky and somewhat marginal pursuit, compared with the weight of need and injustice that we know exists in the world. But, as Napoleon is alleged to have said, “War is 90 percent information,” and that applies to the war on poverty as much as to any military conflict. A data revolution might not be what most people think of as a real revolution. But it’s sure to be revolutionary.
By Claire Melamed
Dr. Claire Melamed is the head of the Growth, Poverty and Inequality program at the Overseas Development Institute, and leads the institute’s work on the post-2015 global agenda. Prior to working at ODI, she worked for 10 years in a number of different U.K. development NGOs as well as for the United Nations in Mozambique. She also taught at the University of London and the Open University. She is the author of a number of recent ODI reports on the post-2015 development agenda.
Source: http://www.worldpoliticsreview.com/articles/13523/data-revolution-developments-next-frontier?utm_source=ODI+email+services&utm_campaign=26c828dd79-ODI_Newsletter_30_January_2014&utm_medium=email&utm_term=0_bb7fadfa38-26c828dd79-75438105
Photo: An employee of the Southern Sudan Commission for Census interviews residents of Juba, April 22, 2008 (U.N. photo by Tim McKulka).