How constraining your machine learning models with physical concepts will improve the generalizability and explainability of your systems
Assume you’ve been tasked with predicting the number of goals a star footballer will score in the next match. You run your model and hurriedly yell out the result to your boss: minus 3. Within a split second, you know the prediction is improbable and absurd.
The majority of machine learning models learn primarily from data. And, as the old adage goes, “garbage in, garbage out”: whatever data you put in will be reflected in the predictions you get back. You can’t expect effective predictions if your input dataset is biased, inconsistent, or, worse yet, unreliable.
But what if we could constrain (or guide) these machine learning models by enforcing physics formulations that obey natural laws?
Modeling based on data vs. modeling based on a theory
In order to explain phenomena in the real world, we’d have to create models that can closely reproduce them. Most modeling approaches can be divided into two categories: data-driven and theory-driven approaches.
The data-driven approach relies on data to make sense of the phenomena around us, with only a narrow reliance on theoretical interpretation. For example, you might be asked to forecast house prices in a specific neighborhood. You have a good working theory: the size of a house and its distance to common service facilities should affect its price. This makes sense, but given the requisite inputs, there is very little first-principles physical theory that could determine the final result conclusively.
The theory-driven approach, on the other hand, constructs models from first principles (for example, the force acting on an object equals its mass times its acceleration: F = ma). The model is deterministic, and the system can be described simply by looking at the mathematical formulation.
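A theory-driven model needs no training data at all; once the formula is known, the output follows deterministically from the inputs. A minimal sketch of F = ma (the units in the comments are illustrative):

```python
def force(mass_kg: float, acceleration_ms2: float) -> float:
    """Newton's second law: F = m * a. No data required; the theory
    fully determines the output from the inputs."""
    return mass_kg * acceleration_ms2

# A 2 kg object accelerating at 9.81 m/s^2 experiences a 19.62 N force.
print(force(2.0, 9.81))
```

Contrast this with the house-price example, where no such closed-form first-principles function exists and the mapping must be learned from data.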
Machine learning with physics guidance: a hybrid approach
But what if you could get the best of both worlds?
What if the phenomenon you’re attempting to describe has enough data to learn from and can also be partly explained using physical first principles?
A hybrid solution has a number of advantages that have been well established in recent literature:
Since we have incorporated prior physical knowledge into the model, we achieve better generalization, meaning the model can perform consistently well even when predicting on previously unseen datasets.
Since the physical formulation is interpretable, it lends insight and explainability to otherwise “black box” machine learning models.
How, then, can we incorporate physics into our machine learning models? There have been some advancements in this area, and the majority of integrations can be divided into three categories:
- A physics-based model serves as a replacement for one or more ML-based modules.
In this category, a physics formulation acts as part of a larger ML-based model. As shown in the diagram below, the physical formulation can transform the input parameters, manipulate the intermediate embedding, or even constrain the output prediction.
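As a rough sketch of this category, a physics module can derive physically meaningful features, here kinetic energy from mass and velocity, before a learned module ever sees the inputs. The function names and the linear “model” below are purely illustrative, not a specific published architecture:

```python
import numpy as np

def physics_features(raw: np.ndarray) -> np.ndarray:
    # Hypothetical physics module: append kinetic energy 0.5 * m * v^2
    # as an extra feature derived from (mass, velocity) input columns.
    m, v = raw[:, 0], raw[:, 1]
    return np.column_stack([raw, 0.5 * m * v**2])

def ml_module(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Stand-in for a learned module: a simple linear layer.
    return features @ weights

X = np.array([[1.0, 2.0],    # mass=1, velocity=2  -> KE = 2.0
              [3.0, 1.0]])   # mass=3, velocity=1  -> KE = 1.5
feats = physics_features(X)          # shape (2, 3)
pred = ml_module(feats, np.ones(3))  # weights would normally be learned
```

The same idea applies at the other end of the pipeline: a physics formula can post-process or clip the ML module’s raw output instead of pre-processing its input.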
- A physics-based loss penalizes physically inconsistent predictions.
Consider the earlier example of forecasting the number of goals a star footballer would score. If the predicted outcome is less than zero (physically impossible), the loss function severely penalizes the prediction. This ensures the model “learns” physical behavior consistent with nature (e.g. temperature cannot fall below 0 Kelvin, velocity cannot exceed the speed of light, etc.).
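One common way to implement such a penalty (a sketch, not any particular paper’s formulation) is to add a ReLU-style term to the loss that is zero for physically valid predictions and grows when a prediction violates the constraint. The weight `lam` is a hypothetical hyperparameter balancing the data loss against the physics penalty:

```python
import numpy as np

def physics_guided_loss(y_true: np.ndarray, y_pred: np.ndarray,
                        lam: float = 10.0) -> float:
    """MSE plus a penalty on physically impossible (negative) predictions."""
    mse = np.mean((y_true - y_pred) ** 2)
    # Penalty is nonzero only where predictions drop below zero.
    physics_penalty = np.mean(np.maximum(0.0, -y_pred) ** 2)
    return mse + lam * physics_penalty

y_true = np.array([1.0, 0.0, 2.0])
ok = physics_guided_loss(y_true, np.array([1.0, 0.0, 2.0]))    # no penalty
bad = physics_guided_loss(y_true, np.array([1.0, -3.0, 2.0]))  # heavily penalized
```

Because the penalty is differentiable almost everywhere, the same construction drops straight into gradient-based training in frameworks like PyTorch or JAX.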
- Machine learning parameterizes part of a physical model.
Some problems are computationally costly, such as forecasting climate variables (e.g. precipitation). The majority of physics-based climate models are global in scale and have very coarse spatial resolution (e.g. 25 km by 25 km per grid cell).
Consider the situation in which you want to forecast when the next flood will occur in your region. If you have a large enough historical dataset, you might use a strictly data-driven approach. However, you may want to use a physical model to link your forecast to what’s happening in your area or around the world (because climate events are, unsurprisingly, interconnected).
One option is to run the physics-based model while using machine learning to parameterize some of the more computationally expensive variables (such as rainfall).
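A minimal sketch of this idea: train a cheap surrogate on a handful of runs of the expensive routine, then call the surrogate inside the model’s main loop. The “physics” function and the polynomial surrogate below are stand-ins, not a real climate parameterization:

```python
import numpy as np

def expensive_rainfall_physics(x):
    # Stand-in for a costly physics subroutine (e.g. a convection scheme);
    # imagine each call taking minutes on a supercomputer.
    return np.sin(x) + 0.1 * x

# Run the expensive physics a limited number of times to build training data.
x_train = np.linspace(0.0, 5.0, 50)
y_train = expensive_rainfall_physics(x_train)

# Fit a cheap polynomial surrogate (a hypothetical choice; in practice
# this is often a neural network trained on high-resolution simulations).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# Inside the main simulation loop, call the fast surrogate instead.
x_new = 2.5
approx = surrogate(x_new)
exact = expensive_rainfall_physics(x_new)
```

The trade-off is a small approximation error in exchange for a large speed-up, which is what makes higher effective resolution affordable.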
Let’s look at how these ideas can be applied to a real-life scenario.
An application: lake temperature modeling
In one paper, the authors use physical concepts together with a recurrent neural network (RNN) to simulate lake water temperature. The issue, however, is the scarcity of training data, which can cause any purely data-driven machine learning model to underperform.
This is where physics will be able to help.
The paper begins by incorporating a physical model into the RNN. As seen in the diagram below, the model encodes energy-conservation concepts: a net input of energy warms the lake, and vice versa.
Second, the physical law of energy conservation is used to penalize predictions that do not adhere to it.
The final loss function combines the loss from a standard RNN module with a loss term derived from energy conservation.
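A sketch of what such a combined loss could look like, assuming a simplified energy balance in which the change in predicted temperature between time steps should match the net energy input divided by the lake’s heat capacity. This is an illustrative form, not the paper’s exact loss, and `lam` and `heat_capacity` are hypothetical parameters:

```python
import numpy as np

def combined_loss(y_true, y_pred, net_energy_in,
                  heat_capacity: float = 1.0, lam: float = 1.0) -> float:
    """Supervised data loss plus an energy-conservation penalty."""
    # Standard supervised loss on predicted temperatures.
    data_loss = np.mean((y_true - y_pred) ** 2)
    # Physics term: temperature change per step should equal the
    # net energy entering the lake divided by its heat capacity.
    predicted_change = np.diff(y_pred)
    expected_change = net_energy_in[:-1] / heat_capacity
    energy_loss = np.mean((predicted_change - expected_change) ** 2)
    return data_loss + lam * energy_loss

temps_true = np.array([10.0, 11.0, 12.0])   # observed temperatures
energy_in = np.array([1.0, 1.0, 0.0])       # net energy flux per step

consistent = combined_loss(temps_true, np.array([10.0, 11.0, 12.0]), energy_in)
violating = combined_loss(temps_true, np.array([10.0, 13.0, 12.0]), energy_in)
```

Predictions that warm the lake faster than the incoming energy allows pay the physics penalty even where they happen to fit the observations.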
In comparison to the bare RNN model, the hybrid physics-guided RNN (PGRNN) improves performance by nearly 20%. This demonstrates how physics-guided ML models can improve on the baseline by obeying the laws of the physical world.
If we continue to encode our domain knowledge into machine learning models, I believe they will become “smarter” in the future. What other fields do you believe physics-guided machine learning models could succeed in?