Your own skills or the type of challenge at hand may guide the way you think about a problem and the conceptual process you go through to discover a solution. Many mental models exist to represent various thinking patterns, and understanding diverse techniques can help you, as a data scientist, model data more effectively in the corporate sector and convey your findings to decision-makers.
We cannot overlook the complexity of today’s society, whether we examine the economy, the firms that operate within it, the individuals who play various roles in society, or how our physical, social, political, and industrial systems interact with one another. No single, simple explanation adequately conveys the magnitude of this complexity. As data scientists, we must comprehend it and refine our thinking so that we can isolate what counts, disregard what doesn’t, and move forward to answer the key questions.
In this blog, I’ll discuss some of the important “thinking” paradigms that have aided me in visualising abstract problems and how I’ve been able to solve them to generate insights. While meta-cognition, or “thinking about thinking,” is an interesting topic to address — and one that I believe is crucial to AI’s success — I’ll focus on thinking paradigms that are relevant to data scientists in this article.
Thinking Like a Model
The ability to think in terms of models is the first and most important skill we need as data scientists. A model, in its most abstract form, is any physical, mathematical, or logical representation of an object, property, or process. Let’s pretend we want to create a heavy-lifting aeroplane engine. Before constructing the entire aircraft engine, we may construct a miniature model to evaluate the engine’s many attributes (e.g., fuel consumption, power) under various conditions (e.g., headwind, impact with objects). Before building the miniature model itself, we might develop a 3-D digital model to forecast how miniature models made of various materials would behave.
There were two distinct entities in each of these cases: a model and the thing being modelled. In the first example, the miniature engine was a model of the entire aircraft engine. In the second, the digital model was a model of the miniature engine, and it would have needed to simulate several aspects of flight physics (e.g., thrust). A model, then, does not always have to be of an object. It can also represent a property. A model of gravitational force, for example, considers the relative masses of the two objects as well as the distance between their centres of mass. The model in this situation is an equation, a mathematical representation; in the instance of the miniature engine, it was a physical model.
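That mathematical model can be written down directly as code. A minimal sketch in Python (the masses and radius in the example are approximate textbook values, used purely for illustration):

```python
# Newton's law of universal gravitation as a mathematical model:
# the force depends only on the two masses and the distance
# between their centres of mass.
G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Attractive force (in newtons) between two point masses."""
    return G * m1 * m2 / r**2

# Example: force between the Earth (~5.97e24 kg) and a 70 kg person
# at the surface (r ≈ 6.37e6 m) — roughly the person's weight.
print(gravitational_force(5.97e24, 70.0, 6.37e6))
```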
Models can also be used to describe a process. A common model of the consumer purchasing process, for example, begins with awareness, consideration, purchase, and repeat purchase. The model is a logical representation of the process in this case. Either equations or code can be used to convey the same logical representation.
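To illustrate the same logical representation in code, here is a minimal funnel sketch of the purchasing process; the stage conversion rates are invented for the example, not real data:

```python
# A logical model of the consumer purchasing process as a simple
# funnel: each stage converts some fraction of people to the next.
# The conversion rates below are illustrative only.
FUNNEL = [
    ("awareness", 1.00),
    ("consideration", 0.40),    # 40% of aware people consider
    ("purchase", 0.10),         # 10% of considerers buy
    ("repeat_purchase", 0.30),  # 30% of buyers buy again
]

def run_funnel(population: int) -> dict:
    """Propagate a population through the funnel stages."""
    counts, current = {}, population
    for stage, rate in FUNNEL:
        current = int(current * rate)
        counts[stage] = current
    return counts

print(run_funnel(10_000))
# 10000 aware → 4000 considering → 400 buying → 120 repeat buyers
```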
As a result, models are either abstractions of reality or something worth investigating. They’re also means for us to explain and make sense of our surroundings. One of the most vocal proponents of this way of thinking is Charlie Munger, vice chairman of Berkshire Hathaway .
“I believe it is undoubtedly true that the human brain needs models to function. The goal is to make your brain perform better than the other person’s because it understands the most basic models: those that do the most work per unit.” “You progressively collect wisdom if you get into the mental habit of linking what you’re reading to the basic structure of the underlying principles being demonstrated.”
In the corporate arena, Charlie Munger has more than adequately shown the importance of this form of model thinking. This ability is important not only for investing, but also for making sense of the world. Data scientists and AI researchers require this form of model thinking as well. We are frequently asked to simulate some component of human decision-making (for example, prediction, optimization, categorization — see  for additional information) or to understand a process or occurrence (e.g., anomalous behavior). Domain experts from a variety of fields – business, science (physics, chemistry, biology), engineering, economics, and social science — frequently ask us to develop models to make sense of the world, draw conclusions, make decisions, or act.
It will be easier for us to express these mental models as mathematical formulas, logical representations, or simply code if we can comprehend the mental models of these disciplines. In fact, there are a number of books [2–4] on mental models, as well as hundreds of categorised mental models. The Mental Models website contains 339 models in ten categories, including economics and strategy, human nature and judgement, systems, the biological and physical realms, and so on. In his paper, Scott Page argues that ‘many-model thinkers’ make better decisions. Data scientists who can think with, and build, an ensemble of models drawn from hundreds of mental models are not only better data scientists but also better decision-makers. Having a collection of models is beneficial to both people and machines!
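To make the ensemble idea concrete, here is a minimal sketch that averages three deliberately different toy models of the same quantity; the models and data are invented purely for illustration:

```python
# 'Many-model thinking' in miniature: combine several simple,
# deliberately different models of the same quantity and average
# their predictions. These toy models are illustrative stand-ins.
def linear_trend(history):    # model 1: extrapolate the last step
    return history[-1] + (history[-1] - history[-2])

def moving_average(history):  # model 2: smooth the recent values
    return sum(history[-3:]) / 3

def persistence(history):     # model 3: tomorrow looks like today
    return history[-1]

def ensemble(history):
    models = [linear_trend, moving_average, persistence]
    return sum(m(history) for m in models) / len(models)

sales = [100, 104, 103, 108]
print(ensemble(sales))  # blends trend, smoothing, and persistence
```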
Systems Thinking

The single most effective, practical, and profound mental paradigm that I have utilised frequently throughout my thirty-year career in business and technology is systems thinking. It has aided me in seeing the larger picture and the connections between seemingly unconnected topics. I believe that having a thorough understanding of systems thinking, as well as practising it, is crucial for data scientists. Peter Senge succinctly summarises what systems thinking entails:
“Systems thinking is a discipline for recognising the underlying ‘structures’ of complicated situations and distinguishing high from low leverage change. That is, we can learn how to promote health by looking at wholes. To do so, systems thinking provides a vocabulary that starts with reorganising our thinking.”
Systems thinking is described in a variety of publications [10–12] and even many series of Medium articles [8,9]. The works of Barry Richmond [12–13] and John Sterman  have had the most influence on my own work. I’ll use their work to explain what it is and why it’s important to data scientists.
Analytical thinking — the ability to break down a problem into its constituent parts, devise solutions for these parts, and put them back together — is emphasized in our traditional school curriculum. As a consultant, you’re continually told to think MECE (Mutually Exclusive and Collectively Exhaustive), to clearly write out your hypotheses, and to construct a MECE set of options to solve any problem. While analytical and MECE thinking are important in solving problems, we are often blinded by them and rarely question them. Systems thinking provides the perfect counter to MECE thinking. Let’s go over some of the fundamental thinking patterns associated with the systems approach.
Dynamic thinking is a mental process that supports a problem’s framing in terms of how it evolves over time. Dynamic thinking examines how the behaviours of physical or human systems vary through time, as opposed to static thinking, which focuses on single events. The capacity to consider how the input can change over time and how that might affect the output behaviour is crucial.
Data scientists frequently use cross-sectional data from a specific point in time to create predictions or inferences. Unfortunately, because most problems have a continually changing context, only a few aspects can be studied statically. Static thinking encourages the ‘one-and-done’ model-building approach, which is deceptive at best and destructive at worst. Even simple recommendation algorithms and chatbots that are educated on past data require continuous updates. Building solid data science models requires an understanding of the dynamic nature of change.
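One sketch of what dynamic thinking can look like in practice: monitor how far the live inputs have drifted from the training data and flag when the model needs refreshing. The drift statistic and threshold here are simplified illustrations, not a recommended production design:

```python
# Instead of training a model once ('one-and-done'), watch how the
# incoming data's distribution shifts over time and flag when the
# model is stale. The statistic and threshold are illustrative.
from statistics import mean, stdev

def drift_score(train_sample, live_sample):
    """Shift of the live mean, in units of the training std dev."""
    return abs(mean(live_sample) - mean(train_sample)) / stdev(train_sample)

train = [10, 12, 11, 13, 12, 11, 10, 12]  # data the model saw
live = [16, 17, 15, 18]                   # the world has moved on

if drift_score(train, live) > 2.0:  # illustrative threshold
    print("input drift detected: retrain the model")
```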
System-as-cause thinking is a way of thinking that defines what should be included inside our system’s boundaries (i.e., the extensive boundary) and at what level of granularity (i.e., the intensive boundary). The extensive and intensive boundaries are determined by the context in which we are studying the system and by what is under the decision-maker’s control vs. what is not.
Data scientists usually work with whatever data they’ve been given. While this is a wonderful place to start, we also need to understand the larger context of how a model will be used and what the decision-maker has control or influence over. When developing a robo-advice tool, for example, we may include a variety of factors such as macroeconomic data, asset class performance, firm investment strategy, individual risk appetite, life stage of the individual, investor health, and so on. Whether we’re creating a tool for an individual consumer, an advisor, a wealth management client, or even a government policymaker, the breadth and depth of criteria to include varies. We can construct targeted models and scope them appropriately if we have a wider view of the different elements and how they interact with the user and the context of the user.
Forest thinking is a way of thinking that helps us perceive the “larger picture” and aggregate when necessary while keeping the important details intact. When looking at individual bits of data (e.g., individual customer data), data scientists are frequently forced into tree-by-tree thinking and fail to comprehend the wider picture of what data is required to solve the problem at hand. I’ve heard this translated as “create the best model we can with the data we have”, rather than “investigate what data could be needed to solve the problem we have.”
Operational thinking is a kind of thinking that concentrates on the operational process, or “causality”, of how behaviours emerge in systems. Factors thinking, list-based thinking, and MECE thinking are the polar opposites of operational thinking. The use of machine learning as the primary or only approach for data science can easily lead to factors thinking, in which the focus is solely on forecasting the outcome variable without regard for the process or causality. While this may be appropriate in a variety of situations, it is not always applicable. Recent research on explainable AI is an attempt to re-create some of the process and logic used to arrive at the answers.
Closed-loop thinking is a way of thinking that looks for feedback loops in systems where some effects turn into causes. The prevalent view of time as an arrow moving in a single forward direction is a powerful attitude that has stifled our thinking in both business and science. This trend has affected data scientists as well. Data scientists can employ causal loop diagrams and stock-flow diagrams, which are commonly used by the system dynamics community, as well as causal inference, to break away from straight-line thinking.
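A minimal stock-and-flow sketch makes the closed loop concrete: the stock of active customers feeds word-of-mouth acquisition (an effect becoming a cause), while churn drains it. All rates below are illustrative:

```python
# A stock (active customers) with a reinforcing feedback loop:
# more customers → more referrals → more customers, offset by a
# churn outflow. The rates are illustrative, not real data.
def simulate_customers(initial=100.0, months=12,
                       referral_rate=0.10, churn_rate=0.05):
    customers = initial
    history = [customers]
    for _ in range(months):
        inflow = referral_rate * customers   # feedback: stock drives its own inflow
        outflow = churn_rate * customers
        customers += inflow - outflow
        history.append(customers)
    return history

print(simulate_customers())  # net 5% compounding growth per month
```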
These are only a handful of the important system thinking patterns. Quantitative thinking, scientific thinking, non-linear thinking, and 10,000-meter thinking are also discussed by Barry Richmond [12,13]. He also gives communication and learning strategies that are worth understanding.
For a few important reasons, I find the systems thinking approach particularly intriguing. For starters, it provides a different attitude and, more crucially, a mindset that connects disparate elements and, more often than not, delivers a fresh perspective to problem-solving. Second, rather than declaring a single and proper answer, it is more open-ended, allowing for many perspectives and trade-offs to be considered. In contrast to fragile and inexplicable machine decisions, this frequently leads to more informed human decisions. Third, it provides a more effective means of explaining and communicating judgments. In future posts, I’ll use particular instances to demonstrate these advantages.
Thinking in Terms of Agents
You might not discover anything on the Internet if you search for agent-based thinking. Instead, you will find a number of references to agent-based modelling (ABM). While ABM is the tangible manifestation of this style of thinking, which I will discuss in future articles, I want to concentrate here on the thinking that precedes the construction of such agent-based models.
Agent-based thinking starts from simpler (or atomic) entities or concepts and examines how relatively simple interactions between these entities can produce emergent system behaviours. We are interested in system-level behaviours, just as systems thinkers are, but rather than observing relationships from the top down, we examine system behaviours from the bottom up. The thinking is individual-centric: what is the individual’s state (physical or mental), how does the individual interact with its environment and other individuals, and how does this influence the individual’s state? Individual-centric or agent-based thinking applies to individual consumers (e.g., modelling individual consumers making purchase decisions in a marketing context), corporate entities (e.g., companies acting in their own self-interest making markets efficient), and even national governments (e.g., countries trading with each other based on their comparative advantage).
While both systems thinking and agent-based thinking can be applied to the same set of issues and generate comparable conclusions, they do so from different perspectives. SEIR (Susceptible, Exposed, Infectious, Recovered) is a well-known epidemiological model of illness progression that can be evaluated from a systems or individual perspective. We are working at the systems level when we look at the overall population of susceptible, exposed, etc., and at the agent-based level when we look at the condition of each individual in terms of whether they are already infectious, recovered, etc. from the disease.
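The systems-level view of SEIR can be sketched in a few lines of code: a discrete-time update of the four compartment totals. The rates below are illustrative, not calibrated to any real disease:

```python
# A systems-level (aggregate) SEIR sketch: track population totals
# in each compartment with discrete daily updates.
def seir_step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1):
    N = S + E + I + R
    new_exposed = beta * S * I / N  # S → E via contact with infectious
    new_infectious = sigma * E      # E → I after incubation
    new_recovered = gamma * I       # I → R on recovery
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

state = (990.0, 0.0, 10.0, 0.0)  # S, E, I, R
for day in range(100):
    state = seir_step(*state)
print([round(x) for x in state])  # population totals only — no individuals
```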
Agent-based thinking becomes a more natural method of thinking when we want to go from aggregate behaviours of illness progression to individual behaviours. For example, it is the more natural approach if we want to understand not just the overall levels of infection but also which individuals are susceptible to the disease, or how the behaviours of specific individuals (e.g., social distancing or mask-wearing) may contribute to disease progression.
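A bottom-up sketch of the same disease: each agent carries its own state and behaviour (here, mask-wearing), which the aggregate model cannot express. The population, contact structure, and all probabilities are hypothetical:

```python
import random

# Agent-based SEIR: each Person has an individual state and an
# individual behaviour (mask-wearing). All parameters illustrative.
random.seed(42)

class Person:
    def __init__(self, masked):
        self.state = "S"      # S, E, I, or R
        self.masked = masked  # individual behaviour

def simulate(population=500, days=100, contacts=5,
             p_transmit=0.06, p_incubate=0.2, p_recover=0.1):
    people = [Person(masked=random.random() < 0.5) for _ in range(population)]
    people[0].state = "I"  # patient zero
    for _ in range(days):
        infectious = any(p.state == "I" for p in people)
        for p in people:
            if p.state == "S" and infectious:
                for _ in range(contacts):  # random daily contacts
                    other = random.choice(people)
                    if other.state == "I":
                        risk = p_transmit * (0.3 if p.masked else 1.0)
                        if random.random() < risk:
                            p.state = "E"
                            break
            elif p.state == "E" and random.random() < p_incubate:
                p.state = "I"
            elif p.state == "I" and random.random() < p_recover:
                p.state = "R"
    return people

people = simulate()
masked = sum(p.state != "S" for p in people if p.masked)
unmasked = sum(p.state != "S" for p in people if not p.masked)
print(masked, unmasked)  # ever-infected counts by mask behaviour
```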
Agent-based modelling (also known as agent-based simulation or microsimulation systems), multi-agent systems, and reinforcement learning are all built on the foundation of agent-based thinking. As a result, a data scientist must be comfortable assessing problems from the perspective of individual agents, which might be IoT devices or physical assets (also known as “digital twins”), or individual decision-making entities such as consumers, corporations, or other entities.
I find agent-based thinking particularly useful in several cases. First, when a group of entities or agents is identifiable and heterogeneous, agent-based thinking makes it easier to examine their behaviour. Second, when interactions between entities are more localised, it makes them easier to investigate. Third, agent-based thinking is preferable when individual actions (or the behaviours of groups of individuals) matter more than aggregate system behaviour. Fourth, when individual entities adapt and evolve differently, we should model at the individual level rather than the system level.
Behavioral (Economics) Thinking
Behavioral economics, a collection of mental models centered on human nature and judgment, has had a big impact on both my consulting and AI careers. Interestingly, both AI and behavioral economics share the same ancestor. Herbert Simon’s concept of bounded rationality challenged the popular assumption of “humans as ideal rational decision-makers”, arguing that we make decisions within the limits of our cognitive capacity, available knowledge, and time constraints. The field of behavioral economics [15–17], which has been on the rise since the late twentieth century, was founded on this insight. Simon is also regarded as one of AI’s forefathers, and his work on heuristic programming and human problem solving laid the groundwork for subsequent symbolic AI systems. In his words:
“The principle of bounded rationality states that the human mind’s capacity for formulating and solving complex problems is small in comparison to the size of the problems that must be solved for objectively rational behaviour in the real world — or even a reasonable approximation to such objective rationality.”
In both the academic and business worlds, behavioral economics has significantly advanced our understanding of how humans make decisions, the processes underlying those decisions, how they deviate from the ideal utility-maximizing economic perspective, and how to nudge people toward certain decisions for their own benefit.
Behavioral (economics) thinking is a kind of thinking that focuses on how people actually make decisions rather than how they should make decisions. When we apply agent-based thinking, we are frequently required to understand how humans make decisions (e.g., decisions regarding what goods to purchase, how much to invest, etc.). Anchoring, defaults, the bandwagon effect, loss aversion, hyperbolic discounting, and a long list of other heuristics and biases help explain how we make decisions in different situations.
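As a sketch of behavioral thinking in code, here is a prospect-theory-style value function in which losses loom larger than equivalent gains; the curvature and loss-aversion parameters are Kahneman and Tversky’s published estimates, used purely for illustration:

```python
# A prospect-theory-style value function: outcomes are evaluated
# relative to a reference point, with diminishing sensitivity
# (alpha < 1) and losses weighted more heavily than gains
# (loss_aversion > 1). Parameter values are illustrative.
def subjective_value(outcome, alpha=0.88, loss_aversion=2.25):
    if outcome >= 0:
        return outcome ** alpha
    return -loss_aversion * ((-outcome) ** alpha)

# A $100 loss hurts far more than a $100 gain pleases:
gain, loss = subjective_value(100), subjective_value(-100)
print(gain, loss)
```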
Behavioral thinking is valuable to me as a data scientist in two ways. First, it aids our understanding of how humans make decisions, allowing us to better understand how data scientists’ models will be used and the explanations required for better adoption. Second, it allows us to replicate or observe overall behavior by modeling human decision-making within agents. This second feature can be thought of as a subset of agent-based thinking.
Computational Thinking

The phrase “computational thinking” was coined by Seymour Papert in 1980. However, it was not until an article by Jeannette Wing that computational thinking was recognized as a vital component of computer science education. Its foundations matured over the latter half of the twentieth century through algorithmic thinking and computational approaches to all parts of science, engineering, and business.
Computational thinking is a way of thinking that focuses on structured problem solving, issue decomposition, pattern recognition, generalization, and abstraction, all of which can be coded and executed by computers. The computing revolution, the internet and smartphone revolutions, and now the big data, analytics, and AI revolutions have all benefited greatly from computational thinking.
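A toy example of decomposition and abstraction: split “find the most frequent words in a text” into small, reusable steps and compose them (the helper names are my own):

```python
# Computational thinking in miniature: decompose a problem into
# small concerns, abstract each into a function, then compose.
import re
from collections import Counter

def tokenize(text):
    """Decomposition: isolate the 'split into words' concern."""
    return re.findall(r"[a-z']+", text.lower())

def top_words(text, n=3):
    """Composition: count the tokens and keep the n most common."""
    return Counter(tokenize(text)).most_common(n)

text = "the cat sat on the mat and the cat slept"
print(top_words(text))
```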
While all of the previous thinking patterns can be applied by both people and machines, computational thinking is essential for developing intelligent systems. Computational thinking can also be viewed as a tool for encoding and realising the other sorts of thinking. As a result, as data scientists, we must frequently think computationally and be precise in our recommendations.
Tren Griffin. A Dozen Things I’ve Learned from Charlie Munger about Mental Models and Worldly Wisdom. August 22, 2015.
 Anand Rao. Ten human abilities and four types of intelligence to exploit human-centered AI. Medium — Start it up, Oct 10, 2020.
 Shane Parrish and Rhiannon Beaubien. The Great Mental Models Volume 1: General Thinking Concepts. Latticework Publishing Inc. 2019.
 Shane Parrish and Rhiannon Beaubien. The Great Mental Models Volume 2: Physics, Chemistry, and Biology. Latticework Publishing Inc. 2020.
 Gabriel Weinberg and Lauren McCann. Super Thinking: The Big Book of Mental Models. Portfolio, 2019.
 Scott E Page. Why “Many-Model Thinkers” Make Better Decisions, Harvard Business Review, November 19, 2018.
 Leyla Acaroglu. Tools for systems thinkers: The six fundamental concepts of systems thinking. Medium — Disruptive Design. September 7, 2017.
 Andrew Henning. Systems thinking Part 1 — Element, interconnections, and goals. Medium — Better Systems. August 1, 2018.
 Donella Meadows. Thinking in Systems: A Primer. Chelsea Green Publishing. 2008.
 John Sterman. Business Dynamics: Systems Thinking and Modeling for a complex world. McGraw-Hill Education. 2000.
 Barry Richmond. An Introduction to Systems Thinking with Stella. ISEE Systems, 2004.
 Barry Richmond. The “thinking” in systems thinking. How can we make it easier to master?. Systems Thinker.
 Herbert Simon. Models of bounded rationality Volume 1: Economic Analysis and Public Policy. The MIT Press. 1984.
 Dan Ariely. Predictably irrational, Revised and Expanded Edition: The hidden forces that shape our decisions. Harper Collins, 2009.
 Daniel Kahneman. Thinking, fast and slow. Farrar, Straus and Giroux, 2013.
Richard Thaler and Cass Sunstein. Nudge: Improving decisions about health, wealth, and happiness. Penguin Books, 2009.
 Seymour Papert. Mindstorms: Children, computers, and powerful ideas. Basic Book, Inc., 1980.
Jeannette Wing. Computational thinking. Communications of the ACM 49(3): 33–35. 2006.