Perception and Chaos with Karl Friston


Some key quotes from the conversation between Karl Friston and Jordan Peterson

  • So, minimizing free energy, I think, can be read very simply as minimizing surprise. But, of course, to be surprised, you have to have something you predicted. You need to do predictive processing.
  • In making those good predictions, we have to have an internal model, sometimes called a world model, a model that can generate what I would have seen if this was the state of affairs out there.
  • And then you use the ensuing prediction errors to adjust and update your beliefs in the service of minimizing those prediction errors (or minimizing that surprise, or minimizing that free energy).
  • And that notion of a generative model is quite key, and it has the attribute of a hierarchy (discussed later), simply in the sense that we live in a deeply structured world, a very dynamic world.
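As a sketch in the standard free-energy-principle notation (my gloss, not something written out in the conversation): surprise is the negative log evidence of sensory data $o$ under the generative model $p(o,s)$, and the variational free energy $F$ of an approximate belief $q(s)$ upper-bounds it:

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
\;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\ge\, 0}
\;+\; \underbrace{\bigl(-\ln p(o)\bigr)}_{\text{surprise}}
\;\ge\; -\ln p(o)
```

Minimizing $F$ with respect to $q(s)$ is the belief updating described in the quotes: it pulls the approximate posterior toward the true one, and in doing so implicitly minimizes surprise.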

Is it surprise compared to an expectation, or surprise compared to a desire? (Expectation sounds mechanical and computational.)

So for example, if I’m interacting with a woman who I’m romantically interested in, I’m attempting to bring about the realization of my desire, not my expectation. The desire manifests itself as a fantasy and then the motivation is the minimization of the discrepancy between the actual world and the fantasy.

Or is it a mistake of perception, rather than expectation or desire, because I might be seeing the situation wrong?

  • I sell the brain as a fantastic organ literally because it is in the game of generating the right kind of fantasies. And these are the fantasies – or the motivated expectations – that drive our behaviour.
  • Vanilla predictive processing and vanilla predictive coding just deals with sense-making. So it’s usually in the moment. As soon as you bring decisions and choices and actions to the table, you’re immediately in the game of making inferences about the future.
  • The future that has yet to be observed. So there are no prediction errors. It’s just your expected surprise, your expected entropy or expected free energy or your expected discrepancies between what you fantasise should happen or what you prefer to happen and what you anticipate given your best sense-making at the moment.
  • So expected surprise means going from simple sense-making into the world of choice and decision-making and motivation (discussed later).
  • So entropy / uncertainty (and I’m using entropy and uncertainty synonymously here) is the mathematical statement of the surprise I expect when I haven’t actually seen the outcome yet.
  • In many senses, what you can say is that we are motivated to resolve our uncertainty, and there are roughly two ways of minimising it:
    • It can either be through choosing those behaviours that resolve uncertainty in the folk-psychological sense: watching the news, or looking over there to see whether my fantasy about the cause of that visual flutter in the periphery of my vision was what I thought it was. Was it a bird or a butterfly?
    • Or: I will avoid putting myself in surprising situations, e.g. being very cold physiologically or being unloved or being embarrassed, or anything that I would find surprising about myself. I will minimise my expected surprise by avoiding those kinds of things.
  • And so people artificially constrain the range of their potential behaviour, and that can become maladaptive if a person’s conception of themselves is too narrow.
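The synonymy of entropy and uncertainty is literal in the standard definition (a sketch, using the usual notation rather than anything written out in the conversation): entropy is exactly expected surprise, averaged over outcomes not yet observed:

```latex
H\bigl[p(o)\bigr] \;=\; \mathbb{E}_{p(o)}\bigl[-\ln p(o)\bigr]
\;=\; -\sum_{o} p(o)\,\ln p(o)
```

So "expected surprise" is not a metaphor: before the outcome arrives, the best you can do is average the surprise $-\ln p(o)$ over every outcome you deem possible.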

Model and stories

  • We’re looking for a simple, but apt-for-purpose, model of your world. So if you can find the causal structure that is as simple as possible while providing an adequate explanation of your sensorium, then that is the good model. I mean, literally, as a statistician, that would be the model with the greatest evidence: the accuracy minus the complexity.
  • So the optimal simple explanation is the least complex affordance necessary for the operation at hand.
  • So if you were to read our active engagement with the world just as a process of committing to the right paths, the right plans, the right narratives, the right micro stories, I think that is the essence of sentient behavior and existence.
  • You only see and you only model and rehearse and sample the world using the level apt for getting those right paths forward.
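The statistician’s phrase "accuracy minus the complexity" can be sketched as the standard evidence lower bound (my notation, with $m$ the model, $s$ hidden states, $o$ data): log model evidence is bounded by how well the beliefs fit the data, minus the cost of moving those beliefs away from the prior:

```latex
\ln p(o \mid m) \;\ge\;
\underbrace{\mathbb{E}_{q(s)}\bigl[\ln p(o \mid s, m)\bigr]}_{\text{accuracy}}
\;-\;
\underbrace{D_{\mathrm{KL}}\!\bigl[q(s)\,\|\,p(s \mid m)\bigr]}_{\text{complexity}}
```

This is why the simplest adequate explanation wins: every extra degree of freedom the model invokes pays a complexity penalty that the gain in accuracy must outweigh.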

What if you optimize prediction error so that you lay out a fantasy about the future and then work so that there’s just enough prediction error that you encounter something you don’t expect at a micro level: small enough that you can manage it, but large enough that it expands the confines of your hierarchical presuppositions. A little entropy is allowed to enter the system at just enough rate and intensity that you can push your development in a manner that doesn’t stress you too badly physiologically.

  • You’ve highlighted a paradox: we might be seen as being in the game of trying to minimize our surprise, and yet we seek out novelty.
  • So I think that’s the kind of surprise that we aspire to. It’s the novelty that affords the opportunity to resolve uncertainty and thereby resolve angst.
  • I think it is about asking questions about the right kinds of narrative that resolve uncertainty: responding to epistemic affordance, novelty seeking, information seeking, whilst at the same time still avoiding those surprising states of loss or physiological extremes.

Motivation and Dopamine

  • So we can also take some pleasure in joint movement towards a shared goal, because the dopamine system seems to indicate progress towards a shared instantiation of a vision.
  • Dopamine is an important neurochemical part of the anatomy of sense-making and exchange.
  • This is like a novelty bonus – and certainly in machine learning this information gain, this epistemic affordance that is part of the good narratives and the good paths into the future, would be seen exactly as this novelty bonus.
  • So it is intrinsically rewarding to resolve uncertainty and to seek out those novel things, or to avoid those unfamiliar, uncharacteristic, obstacle states that do not characterise me.
  • So I think that is when you get the dopamine blush – that resolution of uncertainty – suddenly you see the path forward clearly, and it is exactly like the micro story that is responding to the affordances.
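The "novelty bonus" has a concrete machine-learning reading: expected information gain, i.e. how much an observation is expected to shrink your uncertainty before you have actually seen it. A minimal sketch of the bird-or-butterfly example, with made-up priors and likelihoods chosen purely for illustration:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def posterior(prior, likelihood, obs):
    """Bayes update: p(state | obs) is proportional to p(obs | state) * p(state)."""
    unnorm = [likelihood[s][obs] * prior[s] for s in range(len(prior))]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypotheses about the flutter in the periphery: bird or butterfly.
prior = [0.5, 0.5]
# p(glimpse | hypothesis): looking over there yields an informative glimpse.
likelihood = [[0.9, 0.1],   # bird: mostly "wing-beat" glimpses
              [0.2, 0.8]]   # butterfly: mostly "flutter" glimpses

# Marginal probability of each glimpse, averaged over hypotheses.
p_obs = [sum(likelihood[s][o] * prior[s] for s in range(2)) for o in range(2)]

# Expected information gain = prior entropy minus expected posterior entropy.
expected_post_entropy = sum(
    p_obs[o] * entropy(posterior(prior, likelihood, o)) for o in range(2)
)
info_gain = entropy(prior) - expected_post_entropy
print(round(info_gain, 3))
```

A positive `info_gain` is the epistemic value of looking: before the glimpse arrives you already expect it to resolve some fraction of a bit of uncertainty, which is exactly the quantity a novelty bonus rewards.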

Aligning ourselves

  • The nice thing about entropy and free energy is that they are extensive quantities. So your free energy and my free energy, or your entropy and my entropy… we just have to add them together, because they’re extensive. And then our joint free energy is exactly the sum of our individual free energies. So if we can both render our mutual worlds more predictable and less surprising, then our joint free energy will fall.
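A sketch of what "extensive" buys you (standard notation, with the caveat that additivity strictly requires the two systems to be independent): for independent agents, entropies and free energies simply add, so lowering either term lowers the joint quantity:

```latex
H(A, B) \;=\; H(A) + H(B)
\qquad\Longrightarrow\qquad
F_{\text{joint}} \;=\; F_{\text{yours}} + F_{\text{mine}}
\quad \text{(assuming independence)}
```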