What is Wisdom?

What could go wrong if we develop technology to significantly amplify the intelligence of human minds? Intelligence is tricky to understand and I get confused when comparing it to the related concepts of wisdom and rationality. I’d like to draw clear distinctions between them. In a nutshell, rationality is the tendency to apply the capacity of intelligence, whereas wisdom describes the embodied knowledge of human behavioral patterns, specifically in terms of failure modes.

The relationship between rationality and intelligence seems better understood. My favorite exposition is in the excellent What Intelligence Tests Miss (good summary on LW). Of course, LessWrong itself is partially devoted to understanding this distinction, and CFAR was built to see if we can isolate and train rationality (as opposed to intelligence). Intelligence is typically viewed as the capacity to perform the relevant moves — explicit reasoning, analogical application of past experiences, and avoiding biased heuristics of thought — when presented with a well-formed problem. In practice, the hard part of taking advantage of intelligence is having the awareness that one is facing a situation where intelligence can be explicitly applied. One can perform well when formally posed a problem, such as on an IQ or SAT test, yet still behave foolishly in the real world, where problems are not clearly structured and labeled. A colloquialism that approximates this dynamic is the distinction between "book" and "street" smarts. Thus, to be rational requires not only some capacity for intelligence but, more importantly, the habit of identifying when and where to apply it in the wild.

How does wisdom fit into this? Informally, wisdom refers to the ability to think and act with sound judgment and common sense, often developed through a diversity of life experiences. We tend to look to the aged members of society as a font of wisdom, rather than to those with merely a large raw capacity for reasoning (intelligence). This corresponds with the heuristic of listening to your elders even when their advice doesn't seem to make sense. Wisdom is often associated with conservatism and functions as a regulatory mechanism for societal change. The young and clever upstart has the energy and open-mindedness to create new technology and push for change, while the old and wise have seen similar attempts fail enough times to raise a note of caution. The intelligent (and rational) are not more careless than the wise but rather seem to have more blind spots — perhaps as a result of having seen fewer well-laid plans fail in unexpected ways. To anticipate failure — to predict the future — we rely on models. Ideally, we deduce from known laws — this is possible in the physical sciences. In messier and more complex systems, like human interactions, we are forced to rely primarily on experience from analogous situations (inductive and abductive reasoning). It is no surprise that the hardest failures to predict relate to how humans will act — politics, not rocket science.

Looking through the literature on measuring wisdom (1, 2, 3), one major commonality is the emphasis on modeling psychological dynamics: intrapersonal (knowing thyself) and interpersonal (making sense of interactions with, and between, other humans). Proficiency in these domains seems to become possible only through experience interacting with other humans (specifically, exposure to extremes) and through introspection, or reflection, on that experience. In contrast, a foundation in the physical sciences and mathematics seems learnable through interaction with text, thought, exercises, and experiments performable without significant interpersonal dynamics. In a sense, we can say that proficiency in the "hard" sciences is intelligence-constrained, whereas proficiency in predicting and interacting with humans is constrained by a lack of diverse personal experience data and the ability to act upon heuristics extracted from it.

This can be understood as a difference in modelability — the extent to which we can formalize useful (predictive) models of the system. With mathematics and the physical sciences — at least when applied to sufficiently simplified slices of reality — we are able to constrain non-determinism into a probabilistic model with well-behaved errors. On the other hand, modeling humans presents us with an uncertainty of a kind that we struggle to reduce (see: the struggle of the social sciences to successfully science). Even residing in a deterministic universe amenable to reductionism, and being armed with excellent models of sub-atomic interactions, we are unable to build the machinery necessary to predict the behavior of human beings. There are too many moving parts for a supercomputer, let alone the highly-constrained working memory of a human brain, to make useful predictions by analyzing the interactions of the component parts. On the other hand, the human brain has evolved to be quite good at modeling itself and other humans — we are social animals, after all. We perform this feat by observing behavior and automatically chunking it into categories and schemas to be recalled in future situations that appear similar enough. Unfortunately, we have not yet found a shortcut for developing this repository of experiences and the corresponding heuristics derived from it. This is the hard-to-replicate thing we tend to call wisdom.

The weak relationship between intelligence (or rationality) and wisdom should make us wary of the consequences of intelligence amplification. Increasing our capacity for intelligence and rationality without a corresponding increase in wisdom — which appears constrained by experience and associated reflection-based learning — may be dangerous. Amplified intelligence allows us to make better predictions of the physical world which can be leveraged to build more powerful systems and technologies, like nukes in the 20th century and more powerful AI in the 21st century. However, if we fail to simultaneously increase our wisdom we face the risk of unleashing capabilities onto humanity which may be quite safe in theory but in practice may lead to disaster when they come into contact with human society. We need more foresight into the disastrous failure modes of interactions between humans and their tools. How do we amplify wisdom?

Authentic Emotion and Reliable Action

What is going on when we’re judging the believability of an actor’s performance? Where does this sense of the authenticity, or realness, of feelings come from? In what sense can a feeling be manufactured?

The simple answer: when our big (physical) movements reliably reflect our internal felt state — the obfuscated micromovements — we consider the behavior coherent and authentic. We've developed all kinds of heuristics that subconsciously classify whether someone's "outward" presentation of affect corresponds to what they are probably feeling on the "inside". But I think there may be something more going on as well. We want to know not only whether someone's behavior is coherent with their felt sense but also how reliably the emotional affect, and its corresponding action, will be triggered by similar situations in the future. We are interested in perceived durability, or reliability.

The fundamental question that becomes crucial for a social organism to answer, to ensure its survival, is: when I'm in trouble in the future, how likely are you to not only feel compassion but act on it in a meaningful way to help me? This matters because we are all resource-constrained. We have finite time, money, and energy. We can't be there, in any useful sense, for everyone at once. Sometimes the thing we want to do (for some people, the easy and natural thing to do) is to feel and show compassion for anyone who seems to need help. But in some sense, and in a way that seems to set off the inauthenticity detector for many people, this seems inauthentic. The detector may be doing useful work here. Even if someone genuinely and frequently feels compassion for everyone and acts upon it in the moment, you are subconsciously (and correctly) realizing that, while providing momentary relief, this person is likely to be unreliable in the future if you require a more substantive intervention than empathy. Whether we like it or not, useful and reliable relationships are necessarily bound together with some amount of specialness, exclusivity, and scarcity. Making it appear otherwise can lead to trouble.

Emergent Cause

In trying to make sense of Hoel's theory of "causal emergence", summarized here, I get stuck thinking about the semantics. What is a cause? What if we taboo it, as E-Prime taboos "to be" verbs? The reductionist approach focuses on the events at the apparent beginning of the chain of events leading to the event in question. This is something like an attempt at pure objectivity. Alternatively, the causally emergent approach seeks the best predictor for a given event — the thing that provides the most information — and then treats that as a "causally emergent, ontologically real" thing that "actually exists". Scary words. I do like this idea of gesturing towards the inherent limits of human subjectivity — we are limited by what we can observe, which appears inherently subjective. Furthermore, there are limits of computability, and maybe even something like comprehensibility. What value is a reductive theory of the universe if we lack the tools to apply it at the scales we care about? We don't try to catch baseballs with quantum mechanics, in the words of a friend. An explanation that is true, in some sense of the word, is not necessarily useful.

What causes a set of dominos to fall over? A reductive model is forced to explain the physics back to the beginning of the universe. A causally emergent model may be willing to model it back to the point where local complexity is maximized (or something like that, *waves hands around*): the mind of the nearby human who was feeling restless this afternoon. What causes humans to act? Models can be built here, too. We can tease apart the best predictors for a given event: when they had their last meal, environmental cues, genetics, etc. Is this the actual cause? Depends on what you mean…
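The "best predictor" idea above has a standard quantitative form in Hoel's framework: effective information, the mutual information between a maximally mixed (uniform) intervention on the system's state and the distribution over the next state. Here is a minimal Python sketch under that definition; the function name and the 4-state toy system (three noisy states plus one fixed point, coarse-grained into two macro states) are my own illustrative choices, not drawn from the post.

```python
import numpy as np

def effective_information(tpm):
    """EI of a transition probability matrix: the mutual information
    between X_t set by a uniform intervention and the resulting X_t+1."""
    tpm = np.asarray(tpm, dtype=float)

    def entropy(p):
        p = p[p > 0]  # drop zero-probability entries (0 log 0 := 0)
        return float(-(p * np.log2(p)).sum())

    # H(X_t+1) under a uniform intervention distribution over X_t
    h_effect = entropy(tpm.mean(axis=0))
    # H(X_t+1 | X_t): average entropy of the rows
    h_cond = float(np.mean([entropy(row) for row in tpm]))
    return h_effect - h_cond

# Micro scale: states 0-2 hop uniformly among themselves (noise),
# state 3 maps to itself.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro scale: group {0,1,2} -> A and {3} -> B; the noise averages
# out and the coarse-grained dynamics become deterministic.
macro = [[1, 0],
         [0, 1]]

print(f"micro EI: {effective_information(micro):.3f} bits")  # ~0.811
print(f"macro EI: {effective_information(macro):.3f} bits")  # 1.000
```

The coarse-grained description carries more effective information than the microscopic one, which is the quantitative sense in which the emergent "cause" can be said to do more predictive work.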

Path creation with the Will

When we move following a pre-established path, we call this habit and it feels automatic. Barring interruption, these movements will be carried out in response to the stimulus they have tended to follow in the past. On the other hand, deliberate path creation seems to require the application of will. What’s that? Here is a funky hypothesis. Willpower is what it feels like when new desire paths are being traversed. For this to be successful, it seems to require a combination of holding the path in attention and for its prospective traversal to feel sufficiently emotionally appealing. I’m not sure how emotional appeal works but perhaps we can black box it: given a set of perceptions and the patterns they activate within the mind, some emotional affect is experienced (good vs bad vs neutral, along with an intensity [1]).

We can imagine ourselves typically defaulting to follow what we normally do in a situation where our attention is occupied with something else (a memory, an unrelated thought, etc), e.g. I normally follow the paved sidewalk. Changes in behavior in response to an identical stimulus tend to be enabled only when attention is applied, e.g. I’m in a rush and notice that cutting across a lawn will reduce my journey length even if no formal path exists there. This willed and deliberate change in behavior is only possible when we can feel that a sufficiently emotionally-salient and emotionally attractive benefit may result from the change. What does this say about habit change? To deliberately form new paths — habits — we usually need to be ready to attend to the sensory experience that we want to trigger the desired new path while simultaneously holding the emotionally charged prospective outcome in attention. If I hypothetically want to stop smoking a cigarette every time I leave my office, I need to prepare myself to attend to the triggering stimulus — leaving the office — as well as the emotionally charged prospective outcome — I need to imagine myself getting lung cancer 30 years later and focus on what that would feel like.

This would suggest a strong link between the ability to control the movement of attention and the plasticity of habits.


E-Prime

The studies of consciousness and quantum physics probably have very little to do with each other, and very little in common, with the exception of appearing chock-full of paradoxes. "Light is both a particle and a wave! When I look around at the world, there is an observer sitting in my head — but where?!" Why do we struggle to speak clearly about these things? Perhaps the way we speak — down to the individual words we use — muddles things more than we realize. One potential culprit is the verb "to be", along with its many forms: "is, are, were, was, am, be, been". These verbs do not always cause problems, but they too easily allow us to make claims about the world that obscure underlying experience (how we come to know) and smuggle in unwarranted — and unnoticed — logical leaps. Rather than "Light is both a particle and a wave", we can try saying "Light behaves like a wave when measured using instrument X, and behaves like a particle when measured using instrument Y".

In Science and Sanity, Alfred Korzybski — along with his presentation of general semantics — warns against the "is of identity" as an antidote to "demonological thinking". His student D. David Bourland Jr. later took the idea further, proposing a complete ban on "to be" and calling the resulting form of the English language E-Prime (English Prime). This proposed restriction on language serves to counterbalance a common, comfortable, and confused way of seeing the world: as a collection of neatly separated objects that each have some "core essence" or ideal form — Aristotelian essentialism. By speaking more precisely about how we come to form beliefs, and by being more careful about making logical leaps in speech, we may slowly improve how we think (a weak form of Sapir-Whorf). It may help us better accept the nebulosity around us.

This post was written in E-Prime.

Mixed feelings

Is the character of our experience purely a function of our attention and, if so, what determines it? Can we experience a mood subconsciously, and what would that mean? My friend Scott brought up an example of someone who appeared happy, judging from external physical cues, but was surprised to hear this when asked about it. What is going on there? It seems some part of our mind may be reacting to physiological stimuli, yielding behavior that we pattern-match to specific emotions, while another part of our mind may be experiencing something else entirely. It is unclear whether attention is rapidly moving back and forth between these experiences of different character, or whether our "background" mood is somehow "coloring" our attention-moderated foreground experiences — perhaps these are two ways of saying the same thing.

When a background emotion is aversive, such as sadness or grief, we often find ourselves seeking to drown it out with a positive stimulus — the proverbial sad person eating ice cream on the couch. Alternatively, different "processes" within the mind can fight over large physical movements rather than merely internally experienced qualia (small movements). A poker player in an intense hand will physically display a mixture of excitement and fear driven by his primal emotional response to the expected win or loss — anything from an elevated heart rate to a nervous tic and vocal changes. At the same time, his reasoning process will try to command attention toward counteracting these emotional signals, which it predicts to be self-defeating. The subjective experience is one of a battle to stabilize attention — a feeling of tension or contradiction.