I would like to introduce you to an AI that automatically alerts you to the potential for great soaring conditions days in advance, so that you never miss an opportunity for a great flight. Think of it as your personal first-alert system for great soaring weather.
What it Is
I created this “bot” that runs in the cloud and crunches the RASP soaring forecast every morning at 10am. It looks for three kinds of days: wave days, local soaring days, and cross-country days. If the forecast looks promising for any of them, you receive an alert via Twitter. Just “follow” the bot on Twitter and set up SMS notifications so that you simply receive a text when the forecast is good.
Why
Over the past few years, I have picked the brains of many experienced pilots about what to look for in the RASP forecasts. As I flew more often and farther from Hollister, it became easier to know what to look for and where the threshold for a “good day” lies. However, what has not gotten easier is remembering to check the forecasts every day when they are issued in the morning. Furthermore, I am pretty sure that most people only check the RASP in certain seasons, so a lot of fantastic days “out of season” go unnoticed. Since this thing is awake every single morning checking the forecast, I hope it can motivate everyone to get out to the airport more often.
How to Use It
Go to its Twitter page (https://twitter.com/GliderKcvh/), follow it, and turn on mobile notifications to get text messages. Note that the option to turn on mobile notifications only appears once your mobile number is attached to your account in your settings.
How it Works
For each classifier, I have programmed a set of “features” that attempt to capture what we are really looking for in the forecast. For example, the local soaring classifier computes the average value of hcrit within a certain radius of KCVH, as well as the maximum achievable distance from Hollister assuming you can climb close to the top of the boundary layer. There are several such numbers for each classifier.
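To give a flavor of what these features look like, here is a rough Python sketch. The names, glide ratio, and margins are made up for illustration; the real feature code lives in the GliderWeatherBot repository.

```python
# Rough sketch of two local-soaring features (hypothetical names and numbers;
# the real feature code lives in the GliderWeatherBot repository).

GLIDE_RATIO = 30.0        # assumed glide ratio for the max-distance estimate
ARRIVAL_MARGIN_M = 600.0  # altitude reserve kept for the arrival

def mean_hcrit_near_kcvh(hcrit_values, distances_km, radius_km=30.0):
    """Average hcrit over all forecast grid points within radius_km of KCVH."""
    nearby = [h for h, d in zip(hcrit_values, distances_km) if d <= radius_km]
    return sum(nearby) / len(nearby) if nearby else 0.0

def max_distance_from_hollister_km(boundary_layer_top_m, field_elevation_m=70.0):
    """Still-air glide range if you climb close to the top of the boundary layer."""
    usable_height_m = boundary_layer_top_m - field_elevation_m - ARRIVAL_MARGIN_M
    return max(0.0, usable_height_m) * GLIDE_RATIO / 1000.0
```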
Each feature provides a bit of evidence for a good day, and they do not all need to be high. Somewhat like human judgement, if one feature looks particularly great, that alone may be enough to call the day good. The evidence can also be milder but consistent across the features, which similarly tips the decision toward “good.”
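Concretely, the decision boils down to something like a weighted sum of the features compared against a threshold. The weights and numbers below are invented just to show the behavior: one very strong feature, or several mildly good ones, can push the score over.

```python
# Illustrative evidence combination: a weighted sum of features vs. a threshold.
# One very strong feature, or several mildly good ones, can tip the decision.

def good_day_score(features, weights, bias):
    return sum(w * x for w, x in zip(weights, features)) + bias

weights = [1.5, 0.8, 1.1]   # per-feature weights (made up)
bias = -2.0                 # decision threshold folded into the score
features = [1.8, 0.2, 0.3]  # today's (normalized) feature values

looks_good = good_day_score(features, weights, bias) > 0.0
print("good day" if looks_good else "not today")
```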
The XC classifier is somewhat different in that there is one extra step: calculating the best route between two points. The reason is that we care mostly about conditions along the narrow path we are likely to take. It works in much the same way as Google Maps or other pathfinding algorithms, except that while Google Maps steers you toward faster and shorter routes, the glider version steers you toward higher boundary layers and higher vertical velocities. This approximates the right decision-making in the air with respect to MacCready theory to maximize speed. After the best path is found, the features are calculated along the route, for example the maximum and average hcrit on the route.
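For the curious, here is a minimal Dijkstra-style sketch of that idea, where the edge cost rewards high boundary layers and strong lift. This is a simplified stand-in with hypothetical inputs, not the bot’s actual routing code.

```python
import heapq

# Sketch of the route-finding idea: Dijkstra's algorithm over a forecast grid,
# with an edge cost that penalizes low boundary layers and weak lift.
# (Hypothetical cost function and data layout; not the bot's actual code.)

def edge_cost(hcrit_m, w_star_ms):
    # Lower cost where the boundary layer is high and thermals are strong.
    return 1.0 / (max(hcrit_m, 1.0) * max(w_star_ms, 0.1))

def best_route(start, goal, neighbors, hcrit, w_star):
    """neighbors(cell) -> adjacent grid cells; hcrit/w_star are dicts keyed by cell.
    Assumes the goal is reachable from the start."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        for nxt in neighbors(cell):
            nd = d + edge_cost(hcrit[nxt], w_star[nxt])
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = cell
                heapq.heappush(queue, (nd, nxt))
    # Walk back from the goal to reconstruct the path.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return list(reversed(path))
```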
The second step is knowing how much weight to place on each feature, which is where things get a bit fuzzy. For example, how much better is it to have a high hcrit versus a high vertical velocity? That is difficult to answer succinctly and is where the AI learning comes in. To find the right weights for the features, I sorted the data into “good” and “bad” days and let the algorithms figure out the weighting. For the definition of “good” and “bad,” I used the criterion of whether or not the day’s forecast would cause me to be depressed if I couldn’t fly.
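With labeled days in hand, learning the weights is a standard supervised-learning problem. Here is a minimal sketch using scikit-learn’s logistic regression; the data are invented and I am not claiming this is the exact model or feature set the bot uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the weight-learning step on invented data.
# Each row: feature values computed from one day's RASP forecast.
X = np.array([
    [1200.0, 2.1, 45.0],   # a "good" day
    [ 600.0, 0.9, 10.0],   # a "bad" day
    [1500.0, 2.8, 60.0],
    [ 700.0, 1.2, 15.0],
])
y = np.array([1, 0, 1, 0])  # 1 = would be depressed to miss it, 0 = otherwise

model = LogisticRegression()
model.fit(X, y)
print(model.coef_)  # learned weight for each feature
```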
This means I needed to download and store historical RASP data, so I made tools to do just that. At the press of a button, I get the entire set of RASP parameters for a day in raw data format. At first, I needed to collect a good deal of data, both “good” and “bad,” so that the result of the machine learning is meaningful. After that bootstrapping phase (which took quite a long time), I only needed to collect a new data sample when I thought the classifiers made a clear mistake, so that I could retrain them or add new features. That is what I have been doing for the past 6 months or so.
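The archiving tool itself is conceptually simple. Here is a sketch of the idea; the URL pattern and parameter names below are placeholders, not the actual RASP endpoints the bot pulls from.

```python
import datetime
import pathlib
import urllib.request

# Sketch of a daily RASP archiver. BASE_URL and PARAMS are placeholders,
# not the real RASP data endpoints or parameter names used by the bot.
BASE_URL = "https://example.com/rasp/{region}/{param}.data"
PARAMS = ["hcrit", "wstar", "sfcsun"]

def archive_today(region="SOME_REGION", out_dir="rasp_archive"):
    """Download today's raw forecast parameters into a dated folder."""
    day = datetime.date.today().isoformat()
    dest = pathlib.Path(out_dir) / day
    dest.mkdir(parents=True, exist_ok=True)
    for param in PARAMS:
        url = BASE_URL.format(region=region, param=param)
        urllib.request.urlretrieve(url, str(dest / f"{param}.data"))
```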
Limitations
First, the goal of this bot is not to predict whether the weather will actually be good. Its goal is to predict whether a person looking at the forecast would decide that it is good enough to fly. In other words, the forecast is taken “at its word.” I will leave the forecasting power of the weather models to the meteorologists.
Second, this is a heavily data-limited problem. At most, you get one new datapoint per day, and few days per year are really good, so the complex AI schemes that require massive amounts of data (like convolutional neural networks) are very likely to overfit and fail to generalize. That leaves us with simpler and less accurate methods. For this reason, I decided to tune the algorithm to be somewhat optimistic so that it never misses a good day, although it may sometimes identify an “iffy” day as “good.”
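Concretely, “optimistic” just means placing the alert threshold low enough that none of the known good days in my data set would have been missed, at the cost of a few false alarms. A sketch of the idea (with invented scores), not the bot’s exact tuning code:

```python
# Bias the decision toward optimism: set the alert threshold at the score of the
# weakest known "good" day, accepting some false alarms on "iffy" days.

def optimistic_threshold(scores, labels):
    """scores: classifier score per past day; labels: 1 for good days, 0 for bad."""
    return min(s for s, y in zip(scores, labels) if y == 1)

past_scores = [0.9, 0.4, 0.7, 0.2]   # invented example scores
past_labels = [1, 0, 1, 0]           # 1 = a day I'd hate to have missed

threshold = optimistic_threshold(past_scores, past_labels)
todays_score = 0.65                  # invented
send_alert = todays_score >= threshold
```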
Third, some of the reasons we decide whether to fly cannot be found in the RASP data. For example, the wave classifier easily spots the periodic patterns in the vertical velocity plots, but it currently has a very hard time noticing cloud cover and rain. I had expected the “normalized surface sun” plots to give a good indication of cloud cover, but they often show large patches of 100% sun even when the large-scale forecasts indicate rain and heavy cover. I’m considering incorporating other data sources to improve this in the future. It is also possible I am missing something, so if you know what it is, please do let me know!
Last, there are some systematic flaws in the way RASP works. The biggest is that the terrain resolution is often too coarse (e.g. 4 km), which causes multiple issues. I think everyone knows that in order to get good wave forecasts, the resolution must be high enough to capture the dynamics of the air flowing over the mountain ranges. Another issue is that over sharp mountainous terrain, the simulation misses the smaller-scale flow, which causes the calculated boundary layer height, thermal velocity, and Cu potential to be much lower than they are in reality. This is another reason to bias the algorithm (and your brain!) to be slightly optimistic.
Getting Involved
The project is open-source and free (https://github.com/rocketman768/GliderWeatherBot), so please download the code and play with it if you are interested. I would love to see other bots developed for more locations.