In its Sixth Assessment Report, the Intergovernmental Panel on Climate Change (IPCC) stressed that it is “unequivocal” that human influence has warmed the atmosphere and that, among other severe impacts, it is “virtually certain that global mean sea level will continue to rise over the 21st century.” Getting people to care about those conclusions, though, is another matter entirely – and to that end, a team of Canadian and American researchers has used AI to automatically generate realistic images of flooding.
The work, the researchers explain, is intended to fight back against inaction. “Visualizing the effects of climate change has been found to help overcome distancing, a psychological phenomenon resulting in climate change being perceived as temporally and spatially distant and uncertain, and thus less likely to trigger action,” they wrote. “In fact, images of extreme weather events and their impacts have been found to be especially likely to trigger behavioral changes. Previous research has shown that simulating first-person perspectives of climate change can contribute to reducing distancing.”
The researchers created a model called ClimateGAN, which “leverages both simulated and real data for unsupervised domain adaptation and conditional image generation.” ClimateGAN first applies a Masker model, which predicts which pixels in an image would be underwater if a flood occurred. It then uses a Painter model (itself built on GauGAN, a deep learning model from Nvidia Research) to generate appropriate water textures based on the input image and the Masker's prediction.
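The two-stage mask-then-paint pipeline can be illustrated with a toy sketch. This is not ClimateGAN's actual code: both `masker` and `painter` below are hypothetical stand-ins (the real versions are trained neural networks), and the flood line and water colour are arbitrary assumptions chosen just to show the data flow.

```python
import numpy as np

def masker(image, flood_fraction=0.4):
    """Toy stand-in for the Masker: flag the lower portion of the
    frame as 'underwater'. The real Masker is a neural network that
    predicts this mask from scene geometry."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[int(h * (1 - flood_fraction)):, :] = True
    return mask

def painter(image, mask, water_rgb=(60, 90, 110), alpha=0.8):
    """Toy stand-in for the GauGAN-based Painter: blend a flat water
    colour into the masked pixels instead of generating a realistic
    water texture conditioned on the scene."""
    out = image.astype(float).copy()
    water = np.array(water_rgb, dtype=float)
    out[mask] = alpha * water + (1 - alpha) * out[mask]
    return out.astype(np.uint8)

# Two-stage pipeline: predict the flooded region, then paint water into it.
scene = np.full((8, 8, 3), 200, dtype=np.uint8)  # dummy grey street scene
flooded = painter(scene, masker(scene))
```

The point of the separation is that the Masker decides *where* water goes while the Painter decides *what it looks like*, so each sub-problem can be trained on different data, as the paper describes.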
For training, the authors wrote, “We collected a total of 6740 images: 5540 non-flooded scenes to train the Masker, and 1200 flooded images to train the Painter.” They also rendered “approximately 20000 different viewpoints in [a 1.5 km² virtual world using the Unity3D game engine], which [were] used to train the Masker.” The researchers are making this simulated data available to other researchers; the real images gathered from the internet are not provided due to copyright issues.
There are, they note, some limitations to their model. “An intuitive extension of our model would be to render floods at any chosen height. Despite the appeal of this approach, its development is compromised by data challenges. To our knowledge, there is no data set of metric height maps of street scenes, which would be necessary for converting relative depth and height maps into absolute ones. Moreover, simulated worlds—including our own—that have metric height maps do not cover a large enough range of scenes to train models that would generalize well on worldwide Google Street View images. Another promising direction for improvement would be to achieve multi-level flooding, controlling water level represented by a mask, which faces the same challenges.”