In 2013, researchers at the Massachusetts Institute of Technology (MIT) Media Lab created a computer vision system that could read photos of urban areas and gauge how safe people would find them. Now, using the same system, the team is working with colleagues at Harvard University to identify what causes urban change. Tested on five American cities, the system quantifies the physical improvement or deterioration of neighborhoods.
According to a paper published in the Proceedings of the National Academy of Sciences, the researchers used the system to analyze more than a million pairs of photos taken seven years apart. The results were then used to test popular theories about the causes of urban revitalization.
Contrary to popular belief, raw income levels and housing prices did not predict change in a neighborhood. Instead, other factors mattered: the density of highly educated residents, proximity to central business districts or other physically attractive neighborhoods, and the initial safety score assigned by the computer vision system all predicted improvements in a neighborhood’s physical condition.
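As a rough illustration of that kind of analysis, the sketch below regresses a synthetic change-in-safety score on the candidate predictors using scikit-learn. The variable names, the data, and the coefficients are all assumptions for illustration; none of this is the paper’s actual code or dataset, and the study’s real statistical methods may differ.

```python
# Hypothetical sketch: regress the change in a neighborhood's safety
# score on candidate predictors. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000  # hypothetical number of neighborhoods

educated_density = rng.random(n)  # share of highly educated residents
dist_to_cbd = rng.random(n)       # distance to the central business district
initial_safety = rng.random(n)    # initial computer-vision safety score
median_income = rng.random(n)     # raw income level
housing_price = rng.random(n)     # housing prices

X = np.column_stack([educated_density, dist_to_cbd, initial_safety,
                     median_income, housing_price])
# Synthetic outcome: improvement driven by the first three predictors only.
y = (0.5 * educated_density - 0.3 * dist_to_cbd + 0.4 * initial_safety
     + rng.normal(scale=0.1, size=n))

model = LinearRegression().fit(X, y)
names = ["educated_density", "dist_to_cbd", "initial_safety",
         "median_income", "housing_price"]
for name, coef in zip(names, model.coef_):
    print(f"{name:18s} coefficient: {coef:+.3f}")
```

In this toy setup, the regression recovers near-zero coefficients for income and housing prices, mirroring the shape of the finding on synthetic data.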
Another theory tested holds that neighborhoods are revitalized mainly once their buildings have deteriorated enough to need replacement. The researchers at MIT and Harvard found little correlation between the age of a neighborhood’s buildings and the degree of its physical improvement.
To train the machine-learning system, human volunteers rated the relative safety of the urban areas shown in hundreds of thousands of image pairs. For the new study, the trained system then compared Google Street View images captured at the same geographic coordinates seven years apart. The images had to be preprocessed, however, to ensure the system’s judgments were reliable. For example, green space is one of the cues people use to assess safety; if one image was captured in summer and the other in winter, the system might incorrectly conclude that the neighborhood had lost green space.
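A minimal sketch of one such preprocessing check appears below, assuming a simple green-pixel heuristic over RGB Street View images. The heuristic, the threshold, and the safety_score function referenced in the usage comment are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Hypothetical sketch: flag image pairs whose vegetation levels differ
# so sharply that the gap is likely seasonal rather than real change.
import numpy as np

def green_fraction(image: np.ndarray) -> float:
    """Fraction of pixels where green clearly dominates (RGB, uint8)."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    greenish = (g > r + 20) & (g > b + 20)  # margins are assumptions
    return float(greenish.mean())

def comparable_seasons(img_t0: np.ndarray, img_t1: np.ndarray,
                       max_gap: float = 0.25) -> bool:
    """Reject pairs likely captured in different seasons (e.g., a
    summer image paired with a winter one at the same coordinates)."""
    return abs(green_fraction(img_t0) - green_fraction(img_t1)) <= max_gap

# Usage: only compute a change score for pairs that pass the check.
# if comparable_seasons(img_t0, img_t1):
#     change = safety_score(img_t1) - safety_score(img_t0)  # hypothetical scorer
```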
To validate the system’s output, the researchers then presented 15,000 randomly selected image pairs from the data set to human reviewers. When asked to assess the relative safety of the areas shown, the reviewers matched the computer 72 percent of the time. Moreover, most of the disagreements involved pairs with little change in safety scores.
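To make that validation step concrete, here is a small sketch of computing an agreement rate and checking whether disagreements cluster among pairs with small score changes. All inputs are synthetic stand-ins; the study’s actual review protocol is not shown here.

```python
# Hypothetical sketch: agreement between human reviewers and the model,
# and where the disagreements fall. All data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 15_000
score_change = rng.normal(size=n)     # model's safety-score delta per pair
model_choice = score_change > 0       # which image the model rates safer

# Synthetic humans: more likely to agree when the change is large.
p_agree = 1.0 / (1.0 + np.exp(-3.0 * np.abs(score_change)))
human_choice = np.where(rng.random(n) < p_agree, model_choice, ~model_choice)

agreement = (model_choice == human_choice).mean()
print(f"agreement rate: {agreement:.0%}")

# Disagreements should concentrate where the score barely moved.
disagree = model_choice != human_choice
print(f"median |change| on agreement:    {np.median(np.abs(score_change[~disagree])):.2f}")
print(f"median |change| on disagreement: {np.median(np.abs(score_change[disagree])):.2f}")
```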
MIT’s Media Lab is always innovating. From reading a book without opening it to a living shirt that reacts to sweat, its researchers are finding new ways to solve old problems.