AI is outperforming our best weather forecasting tech, thanks to DeepMind

Data from a multi-decade simulation, called ERA5, is fed into the GraphCast graph network as a set of measurements at particular points. By traversing the graph, GraphCast predicts the next measurements for each point and for its neighbors. 


Climatologists have spent decades amassing data on how the weather has changed at points around the globe. Efforts such as ERA5, a record of climate back to 1950, developed by the European Centre for Medium-Range Weather Forecasts (ECMWF), are a kind of simulation of the earth over time, a record of the wind speed, temperature, air pressure, and other variables, hour by hour.

Google’s DeepMind this week is heralding what it calls a turning point in using all that data to make inexpensive weather predictions. Running on a single AI chip, Google’s Tensor Processing Unit (TPU), the DeepMind scientists’ program can predict weather conditions more accurately than a traditional model running on a supercomputer. 


The DeepMind paper is published in next week’s issue of the scholarly journal Science, accompanied by a staff article that likens the paper to part of a “revolution” in weather forecasting. 

Mind you, GraphCast, as the program is called, is not a replacement for traditional models of forecasting, according to lead author Remi Lam and colleagues at DeepMind. Instead, they view it as a potential “complement” to existing methods. Indeed, the only reason GraphCast is possible is because human climate scientists built the existing algorithms that were used to “re-analyze,” meaning, go back in time and compile the enormous daily data of ERA5. Without that precision effort to create a world model of weather, there would be no GraphCast.

The challenge Lam and team took on was to take a number of the ERA5 weather records and see if their program, GraphCast, could predict some unseen records better than the gold standard for weather forecasting, a system called HRES, also developed by ECMWF.

HRES, which stands for High RESolution forecast, predicts the weather worldwide for the next 10 days, taking about an hour of computation, on a grid whose squares measure roughly 10 kilometers on a side. HRES is made possible by mathematical models developed over decades of research. HRES is “improved by highly trained experts,” which, while valuable, “can be a time-consuming and costly process,” write Lam and team, and it also carries the cost of multi-million-dollar supercomputers. 


The question was whether a deep learning form of AI could match that hand-built model with one generated automatically from data.

GraphCast takes weather data such as temperature and air pressure and represents it as a single point for a square area on the globe. That individual point is linked to neighboring areas’ weather conditions by what are called “edges.” Think of the Facebook social graph, where each person is a dot and they are linked to friends by a line. The earth’s atmosphere becomes a mass of points, each square area, linked by lines representing how each area’s weather is related to its neighboring area.

That’s the “graph” in GraphCast. Technically, it’s a well-established area of deep learning AI called a graph neural network. A neural network is trained to pick out how the points and lines relate, and how those relations can change over time. 
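To make the graph idea concrete, here is a toy sketch of one message-passing step on a small grid graph. This is a hypothetical illustration of the general graph-neural-network technique, not DeepMind's actual architecture; the node features, edge list, and weight matrices are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each grid cell is a node carrying a feature vector
# (stand-ins for weather variables such as temperature and pressure).
n_nodes, n_feats = 4, 3                      # a tiny 2x2 grid, 3 variables
features = rng.normal(size=(n_nodes, n_feats))

# Edges as (source, target) pairs: each cell linked to its grid neighbors,
# in both directions -- the "lines" relating an area to its neighbors.
edges = [(0, 1), (1, 0), (0, 2), (2, 0),
         (1, 3), (3, 1), (2, 3), (3, 2)]

def message_passing_step(feats, edges, w_msg, w_self):
    """Average incoming neighbor features, then update each node's state."""
    agg = np.zeros_like(feats)
    count = np.zeros((feats.shape[0], 1))
    for src, dst in edges:
        agg[dst] += feats[src]               # message flows along an edge
        count[dst] += 1
    agg = agg / np.maximum(count, 1)         # mean over incoming messages
    # Combine a node's own state with its neighbors' messages.
    return np.tanh(feats @ w_self + agg @ w_msg)

w_msg = rng.normal(size=(n_feats, n_feats)) * 0.1
w_self = rng.normal(size=(n_feats, n_feats)) * 0.1
updated = message_passing_step(features, edges, w_msg, w_self)
print(updated.shape)                         # one updated vector per node
```

Training tunes the weight matrices so that repeated steps like this propagate each area's conditions to its neighbors in a way that matches how weather actually evolves.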

Armed with the GraphCast neural net, Lam and team fed in 39 years’ worth of the ERA5 data on air pressure, temperature, wind speed, and so on, and then measured how well it predicted what would happen next over a 10-day period in comparison to HRES. 


It takes a month on 32 of the TPU chips working in concert to train GraphCast on the ERA5 data; that’s the training process in which the neural network has its parameters, or neural “weights,” tuned to the point where they can reliably make predictions. Then, a portion of the ERA5 data that was set aside, the “held-out” data, as it’s known, is fed into the program to see whether the trained GraphCast can predict from the data points how those points will change over 10 days, effectively predicting the weather inside this simulated data.
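The train-versus-held-out setup can be sketched as a simple chronological split. This is a generic illustration of the evaluation idea, assuming hourly records and invented year counts, not the paper's exact protocol.

```python
import numpy as np

# Stand-in for an hourly climate record ordered in time.
hours_per_year = 24 * 365
years = 5
data = np.arange(years * hours_per_year)

# Chronological split: fit on the early years, withhold the tail.
train_years = 4
split = train_years * hours_per_year
train, held_out = data[:split], data[split:]

# The model is trained on `train` only. It is then scored by stepping
# through `held_out` and comparing each forecast against the recorded
# values 10 days (240 hours) ahead.
lead_hours = 240
n_eval_starts = len(held_out) - lead_hours
print(len(train), len(held_out), n_eval_starts)
```

Splitting by time, rather than at random, matters: it ensures the program is judged on weather that comes strictly after anything it saw in training.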

“GraphCast significantly outperforms” HRES on 90% of the prediction tasks, the authors observe. GraphCast is able to best HRES in predicting the shape of extreme hot and cold developments as well. They note that HRES does better on predictions concerning the stratosphere than on surface weather.

It’s important to realize that GraphCast is not yet predicting the weather in production. What it excelled at was a controlled experiment on previously recorded weather data, not live observations.

An intriguing limitation of GraphCast is that it stumbles beyond a 10-day horizon, Lam and team note. As they write, “there is increasing uncertainty at longer lead times.” GraphCast’s predictions get “blurry” as things get more uncertain. That suggests changes will be needed to handle the greater uncertainty of longer time frames, most likely by producing an “ensemble” of overlapping forecasts. “Building probabilistic forecasts that model uncertainty more explicitly … is a crucial next step,” write Lam and team.
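Ensemble forecasting, the standard technique the authors gesture at, can be sketched in a few lines: run the same model from slightly perturbed starting conditions and read the spread of outcomes as an uncertainty estimate. The forecaster below is a toy stand-in, not GraphCast; all names and dynamics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_forecast(state, steps):
    """Stand-in for a deterministic forecaster: a simple decaying dynamic."""
    for _ in range(steps):
        state = 0.9 * state + 0.1 * np.sin(state)
    return state

initial = np.array([15.0])                   # e.g. a starting temperature

# Run the same model from 20 slightly perturbed initial conditions.
members = [toy_forecast(initial + rng.normal(scale=0.5), steps=10)
           for _ in range(20)]

ensemble = np.concatenate(members)
mean, spread = ensemble.mean(), ensemble.std()
# A wide spread signals an uncertain forecast; a narrow one, confidence.
print(round(float(mean), 2), round(float(spread), 3))
```

The ensemble mean gives a single best-guess forecast, while the spread makes the growing uncertainty at longer lead times explicit instead of letting it smear the prediction into a blur.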


Interestingly, DeepMind has big ambitions for GraphCast. Not only is GraphCast just one of what they expect to be a family of climate models, but it is part of a broader interest in simulation. The program is operating on global data that simulates what happens over time. Lam and team suggest other phenomena can be mapped, and predicted, in this way, not just weather.

“GraphCast can open new directions for other important geo-spatiotemporal forecasting problems,” they write, “including climate and ecology, energy, agriculture, and human and biological activity, as well as other complex dynamical systems.

“We believe that learned simulators trained on rich, real-world data, will be crucial in advancing the role of machine learning in the physical sciences.”
