Satellite imagery could help end poverty by identifying the areas that are hardest hit.
This new map of the world is enlightening to scientists: it shows which areas are hardest hit by poverty, and that could be key to eliminating poverty worldwide. The same kind of data already helps authorities track crop conditions and deforestation, and now it can locate poverty in parts of the globe where government data collection is sparse.
This data is potentially invaluable to humanitarian organizations and policymakers, and a team of researchers from Stanford University believes it has created a deep-learning algorithm that can spot signs of poverty from satellite images alone. It can check the condition of roads, for example, to gauge how good the infrastructure in a particular area is.
“We have a limited number of surveys conducted in scattered villages across the African continent, but otherwise we have very little local-level information on poverty,” study coauthor Marshall Burke, an assistant professor of Earth system science at Stanford and a fellow at the Center on Food Security and the Environment, said in a statement. “At the same time, we collect all sorts of other data in these areas – like satellite imagery – constantly.”
There’s a huge amount of useful satellite data out there that no one has ever combed through, and that presents an opportunity for researchers who want to better understand global conditions. If researchers succeed in creating a practical way to track poverty, it could be a huge boon for humanitarian organizations trying to figure out where resources would be most useful.
“There are few places in the world where we can tell the computer with certainty whether the people living there are rich or poor,” said study lead author Neal Jean, a doctoral student in computer science at Stanford’s School of Engineering. “This makes it hard to extract useful information from the huge amount of daytime satellite imagery that’s available.
“Without being told what to look for, our machine learning algorithm learned to pick out of the imagery many things that are easily recognizable to humans – things like roads, urban areas and farmland,” he added.
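The overall idea — turn each satellite tile into image features, then regress those features against a measure of local wealth — can be sketched in a few lines. This is only a toy illustration, not the study's actual method: the hand-crafted brightness and edge-density features here stand in for what the researchers' algorithm learns on its own, and the "tiles" and wealth index are synthetic data invented for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(img):
    # Hand-crafted stand-ins for features a learned model might pick up:
    # overall brightness (built-up areas) and edge density (roads, field edges).
    gy, gx = np.gradient(img.astype(float))
    edge_density = np.mean(np.hypot(gx, gy))
    return np.array([img.mean(), edge_density])

# Synthetic 32x32 grayscale "tiles": in this made-up data, wealthier tiles
# are simply brighter on average (purely illustrative, not real imagery).
n = 200
wealth = rng.uniform(0.0, 1.0, n)            # assumed local wealth index
images = [np.clip(rng.normal(0.2 + 0.5 * w, 0.1, (32, 32)), 0.0, 1.0)
          for w in wealth]

# Features in, wealth index out: fit a simple ridge regression.
X = np.stack([extract_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, wealth, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```

Because the synthetic data ties brightness directly to wealth, the regression fits well here; the hard part in practice is exactly what the quote describes — getting useful features out of real imagery without labeled examples everywhere.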