Reality isn’t Real


Matt recently got himself some VR equipment, curious about what the technology could do for his research in neuroscience. One day I came around to his place to catch up, and I tried it.

There was this “game” where your hands were captured by the headset’s sensors and recreated in a 3D virtual reality. The “game” gives the player a set of tasks to complete with their hands, like separating red balls from yellow ones into two different bowls, or stacking up cubes. Sometimes it introduces “defects”, like spaghetti fingers or numb hands, which make the tasks harder to accomplish. What struck me was the way the system tried to predict the movement of my hands and fingers as they interacted with those balls and cubes. I could see that, as I was about to grasp or move a ball or a cube, the “game” projected my 3D-rendered hands towards the future state I was trying to reach in the “real world”. It was like a lag in reverse, where my eyes could see the future before my actions had fully happened, and it was a very weird sensation.
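I don’t know what algorithm the headset actually runs, but the simplest version of this kind of motion prediction can be sketched as dead reckoning: estimate the hand’s velocity from the last two tracker samples and extrapolate a short way into the future. A hypothetical toy in Python (the function name and numbers are mine):

```python
# Illustrative sketch of motion prediction by linear extrapolation
# ("dead reckoning"): estimate velocity from the last two tracked
# positions and project the hand a short time ahead. A toy, not the
# game's actual algorithm.

def predict_position(samples, lookahead):
    """samples: list of (time, position) pairs from the tracker,
    oldest first. Returns the position extrapolated `lookahead`
    seconds after the most recent sample."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)      # units per second
    return x1 + velocity * lookahead      # linear projection

# A hand moving steadily along one axis, sampled at 50 Hz:
tracked = [(0.00, 0.00), (0.02, 0.01)]
print(predict_position(tracked, 0.1))     # projects ~0.06, five frames ahead
```

Rendering the hand at the extrapolated position rather than the last measured one is what produces that sensation of the display running slightly ahead of your real movement.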

That got me thinking… What is the perception of reality if not the projection of “predictive mental models”? Is reality like an interface that its actors (here, humans) interact with? How do we tell what is real from what isn’t? Do we actually perceive reality as it is, or are we hallucinating it in shapes and forms that comfort our beliefs?

In his book “The Case Against Reality”, Donald Hoffman explains how our brain is literally hallucinating reality, modelled after common sense. For example, if I throw a pebble in the air, I expect it to drop to the ground within a certain amount of time, and it would be perceived as uncommon (almost alien) if at some point in its fall the pebble stopped, slowed down, or reversed direction. This predictive processing has been fine-tuned by thousands of years of evolution to compress information to such a degree that we can intuit some aspects of the world, and it is a good thing for most aspects of life: the smell and color of rotten food trigger disgust, which is designed to put us off eating something that might be poisonous. Today, most machine-learning and AI algorithms that aim at cognifying information work in the same way, and I suspect this is what was used in this “game”. I would even speculate that it may have been created to capture data from its players’ real hand movements and feed it back into the algorithm, making predictions more accurate for future applications.
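The pebble example can be loosely illustrated in code: fit a model to past observations, use it to predict the next one, and treat a large deviation from the prediction as “surprise”. A toy sketch with NumPy, under the assumption of constant gravity (it is my illustration, not Hoffman’s model):

```python
import numpy as np

# A "mental model" of a falling pebble, learned as a quadratic fit
# to past observations. A new observation that deviates far from the
# model's prediction registers as surprise -- like a pebble that
# suddenly stops mid-fall. Purely illustrative numbers.

g = 9.81                                  # m/s^2, standard gravity
t = np.array([0.0, 0.1, 0.2, 0.3])        # seconds
h = 2.0 - 0.5 * g * t**2                  # observed heights of a dropped pebble

coeffs = np.polyfit(t, h, deg=2)          # "learn" the trajectory
expected = np.polyval(coeffs, 0.4)        # predict height at t = 0.4 s

observed = 1.5                            # the pebble "stopped" higher up
surprise = abs(observed - expected)       # big gap = alien, attention-grabbing
print(round(expected, 3), round(surprise, 3))
```

The prediction costs almost nothing once the model is fitted; only when the prediction error is large does the event demand attention, which is the energy-saving logic the paragraph above describes.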

For humans, the reason our brains constantly build a predictive picture of the reality we experience is to help us quickly identify what might be a pay-off and what might be a cost. In short, our brains save energy by compressing information through learning and practice, to such an extent that it becomes innate (or intuitive). It is partly what Daniel Kahneman describes in his book “Thinking, Fast and Slow”: when someone has practiced and experienced a situation often enough, and in many different contexts, they develop a form of intuition and the task becomes effortless.

Going back to the compression of information: there is a huge amount of data that qualifies, let’s say, an apple. When we look at an apple, we perhaps first see its color. If it’s red, it will trigger some hunger and make us salivate, meaning we deeply recognise its nutritional value. A red apple is also usually ripe to eat, which tells us we can have that value now. All this information is nested one piece into another, compressed layer after layer through evolution, to such a level that today we are biologically wired to spot and desire the sweet, juicy grapes sitting on top of the fridge, like fruits hanging from trees.

Although it’s far more complex and nuanced than what I just wrote, the idea is that a lot of what we take for reality is a projection made by our brains, not raw reality itself. And that’s mostly a good thing for us. But as the world around us changes fast, our biological evolution lags behind (some things take time). For example, when you encounter a red berry you need to be careful that it isn’t poisonous. You perhaps know not to eat this particular berry because your father, your mother, or someone else who encountered it before can tell you not to, or more generally tell you not to eat what you don’t recognise. This information hasn’t been wired into our biology, and that’s why we have to pass it on from generation to generation. But we sometimes get fooled into thinking that, because one thing is good for us, a similar-looking thing must be good too. Take the wild potato: it closely resembles the wild sweet pea, and while the former is a good source of nutrients, the latter is in fact poisonous (that’s how Chris McCandless sadly died, as reported in the bestselling book “Into the Wild”).

We make bets; we guess that this or that will bring us benefits at some point in the future. But sometimes we get the prediction wrong. We may win the bet 9 times out of 10, but that doesn’t mean we will never lose, and mistaking one simple plant for another can lead to death.
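The arithmetic behind this is worth making explicit: a bet won 9 times out of 10 can still be a losing game overall if the rare loss is catastrophic. A quick illustration with made-up numbers:

```python
# Toy expected-value calculation for the "9 times out of 10" bet:
# frequent small wins do not compensate for a rare catastrophic loss.
# All numbers are invented for illustration.
p_win, gain = 0.9, 1.0
p_lose, cost = 0.1, 100.0   # the rare "poisonous berry" outcome
ev = p_win * gain - p_lose * cost
print(ev)                    # ~ -9.1 per bet, despite winning 90% of the time
```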

In his book “The Black Swan”, Nassim Nicholas Taleb talks about the often extreme impact of rare and unpredictable events. He makes a point about the human tendency to find simplistic explanations for these events, wrongly predicting that an event will occur because it has always occurred in the past, and vice versa. A key idea in Taleb’s book is not to attempt to predict these Black Swan events, but to see how we might benefit from disorder and uncertainty by recognising the positive unpredicted events (opportunities), and by being less fragile, or more antifragile, to the negative ones.

We cannot truly predict. When our brains try to predict the future, they are in fact looking at “reality” and mostly showing us the narratives that fit our expectations. How could one imagine the very first car when all that was around were horses? Indeed, we often fall short when trying to predict the future; we stay with what is familiar, like imagining a flying car instead of ideating on the more general idea of commuting through 3D space, without being too attached to pre-existing constructs and models. We can and should exercise our power of “guesstimating”, but we have to be humble and stay aware that we live in a fast-paced environment where volatility, uncertainty, complexity, and ambiguity seem to be the default. In such an environment, future projections are becoming less and less reliable (if they ever were).

Reality isn’t real; it’s a game of perceptions that has been fine-tuned by millennia of evolution and culture. But that doesn’t mean we cannot play with these features! I believe we can train ourselves to relax the boundaries of these predictive models and open the doors of perception by exercising a form of “continuous envisioning” (but you can call it whatever you want). In doing so, we open our minds to explore the possible futures in front of us, a sort of axe-sharpening exercise that proactively tries to seize upcoming opportunities.