MAI Blog

Pix2City

What did we do at the Media Architecture Institute during the recent lockdown? We tested the potential of artificial intelligence to engage citizens. The result is Pix2City.

Pix2City is a proof of concept that uses artificial intelligence to propose a new form of participation, one that understands citizens as co-creators. It translates citizens' input into visualizations of the result: people can freely express their visions for the city and see, for example, how a change would affect the amount of shade on the streets, all without any expertise in urban planning or simulation. Here AI is the “magic” building block, as it makes it possible to translate graphical input and predict the outcome.

With this tool, we want to help authorities find common ground between long-term plans and the sentiment of citizens. This calls for tools that let people express their ideas freely, without any expertise in urban planning. Pix2City turns their visions into a realistic visualization of the result, showing, for example, how planting trees would increase the amount of shade and trigger a cooling effect in the city.

This is made possible by an AI model trained with image-to-image translation using conditional adversarial networks, the method known as pix2pix (Isola et al.). The Pix2City model was trained on orthophotographs paired with a custom-rendered map that highlights green areas and trees. This improved on the method described in the paper in two ways. First, orthophotographs offer a higher level of detail than satellite pictures. Second, the custom map let us choose which elements the training process should consider: building blocks, pedestrian ways, green areas, trees, zebra crossings, and parking spots.
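
For readers curious about the mechanics, below is a minimal sketch of one pix2pix-style training step in PyTorch. The directory names maps/ and ortho/, the 256×256 tile size, and the heavily simplified networks are illustrative assumptions, not the actual Pix2City code; the full method in the paper uses a U-Net generator and a PatchGAN discriminator.

```python
# Minimal sketch of one pix2pix-style training step (conditional adversarial
# network with an L1 term, as in Isola et al.). Paths, tile size, and the
# tiny stand-in networks are illustrative assumptions, not the Pix2City code.
from pathlib import Path

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision.io import ImageReadMode, read_image

class PairedTiles(Dataset):
    """Paired 256x256 RGB tiles: rendered map in maps/, orthophoto in ortho/."""
    def __init__(self, map_dir="maps", ortho_dir="ortho"):  # hypothetical dirs
        self.map_dir, self.ortho_dir = Path(map_dir), Path(ortho_dir)
        self.names = sorted(p.name for p in self.map_dir.glob("*.png"))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        scale = lambda img: img.float() / 127.5 - 1.0  # uint8 -> [-1, 1]
        return (scale(read_image(str(self.map_dir / self.names[i]), ImageReadMode.RGB)),
                scale(read_image(str(self.ortho_dir / self.names[i]), ImageReadMode.RGB)))

def down(cin, cout):  # halves the spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

# Toy encoder-decoder generator (the paper uses a full U-Net).
G = nn.Sequential(down(3, 64), down(64, 128),
                  nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                  nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

# Toy PatchGAN-style discriminator: scores map+photo pairs patch by patch.
D = nn.Sequential(down(6, 64), down(64, 128), nn.Conv2d(128, 1, 4, 1, 1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for map_img, ortho in DataLoader(PairedTiles(), batch_size=1, shuffle=True):
    fake = G(map_img)

    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_real = D(torch.cat([map_img, ortho], dim=1))
    d_fake = D(torch.cat([map_img, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D, plus an L1 term pulling output toward the orthophoto.
    d_fake = D(torch.cat([map_img, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ortho)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```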

The interface uses the custom-rendered map as a canvas on which people can draw changes to the city by hand. On request, the drawing is sent to the server, which translates the image using the pre-trained model. The model performed well at predicting the introduction of green areas and other green elements; other features, such as zebra crossings, were translated with less accuracy.
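
As an illustration of that round trip, here is a minimal sketch of such a server endpoint using Flask. The route name, the generator.pt file (a hypothetical TorchScript export of the trained generator), and the 256×256 input resolution are assumptions, not the actual Pix2City backend.

```python
# Minimal sketch of the request/translation round trip, assuming a Flask
# server and a TorchScript export of the trained generator ("generator.pt"
# is a hypothetical file name).
import io

import torch
from flask import Flask, request, send_file
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
generator = torch.jit.load("generator.pt").eval()  # pre-trained generator

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),                # assumed training resolution
    transforms.ToTensor(),                        # PIL -> float tensor in [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),   # [0, 1] -> [-1, 1]
])

@app.route("/translate", methods=["POST"])
def translate():
    # The hand-drawn canvas arrives as a PNG upload in the "drawing" field.
    drawing = Image.open(request.files["drawing"].stream).convert("RGB")
    with torch.no_grad():
        out = generator(preprocess(drawing).unsqueeze(0))[0]
    # Map the generator output from [-1, 1] back to 8-bit RGB.
    out = ((out.clamp(-1, 1) + 1) * 127.5).byte().permute(1, 2, 0).numpy()
    buf = io.BytesIO()
    Image.fromarray(out).save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run()
```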

If you are interested in using Pix2City as part of an ongoing or future project, write to us at ask at mediaarchitecture.org