LW Sequence notes – Mysterious Answers to Mysterious Questions (Part 1)

I’m reading the LessWrong Sequences from the beginning. Most of them are a repeat for me, but there may be some sections I skipped originally, and of course plenty of things I’ve forgotten.

The sequences provide stepping stones of logic to help people internalize non-trivial concepts about rationality and similar topics.

I’m going to leave notes on the main ideas I take away, mostly to serve as a reminder for myself, but hopefully they encourage other people to read along.

The first sequence is Mysterious Answers to Mysterious Questions.

  • A belief is only worth holding if it affects the distribution of anticipated experience. If following a chain of beliefs does not lead to an anticipated experience, or leads to a prediction that is contradicted by reality with sufficient evidence, the belief(s) should be deleted.
  • No two rationalists can genuinely agree to disagree; persistent disagreement is a symptom of disagreeing about the facts, or of merely disputing definitions. Check your definitions.
  • Focus on narrow, small steps of inference. It may seem wise and impressive to generalize, but broad claims are much harder to prove, and harder to check for logical errors.
  • A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation. “Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.” – Yudkowsky
  • Absence of evidence is evidence of absence: if a state of the world would make some evidence likely, then failing to observe that evidence makes the state less likely.
  • Conservation of expected evidence: the expectation of the posterior probability (right side of the equation below), taken over the possible evidence, must equal the prior probability (left side). Evidence cannot be one-sided; it cannot only confirm or only deny.
  • P(H) = P(H|E)*P(E) + P(H|~E)*P(~E). This is important:

If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction.  If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow.  On average, you must expect to be exactly as confident as when you started out.  Equivalently, the mere expectation of encountering evidence – before you’ve actually seen it – should not shift your prior beliefs.  –Yudkowsky
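
To see this balance numerically, here is a minimal sketch in Python (the numbers are invented for illustration, not from the post):

```python
# Conservation of expected evidence: P(H) = P(H|E)*P(E) + P(H|~E)*P(~E)
# All numbers below are made up for illustration.

p_h = 0.9            # prior: quite confident in hypothesis H
p_e_given_h = 0.95   # H strongly predicts seeing evidence E
p_e_given_not_h = 0.30

# Total probability of observing E
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors via Bayes' rule
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(f"P(E)    = {p_e:.3f}")              # 0.885: strong chance of weak evidence
print(f"P(H|E)  = {p_h_given_e:.3f}")      # 0.966: small increment over the 0.9 prior
print(f"P(H|~E) = {p_h_given_not_e:.3f}")  # 0.391: a huge blow if E fails to appear

# The expected posterior equals the prior exactly
expected = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(f"E[posterior] = {expected:.3f}")    # 0.900
```

With these toy numbers, a confirmation only nudges the belief from 0.9 to about 0.97, while a disconfirmation knocks it down to about 0.39; weighted by how likely each outcome is, the expected posterior is exactly the 0.9 prior again. The same computation also shows absence of evidence being evidence of absence: P(H|~E) < P(H).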

  • Hindsight bias is hard to deal with. We need to imagine a world before the evidence came in to truly appreciate the value of the science.
  • An explanation is not an explanation if it does not constrain the probability space. Using scientific words adds no value if their meaning is not understood. Do not accept the mere naming of a scientific phenomenon or theory as an explanation; use its implications to check whether the evidence is truly explained by that specific theory or system, and not equally well by anything (“magic”). The sketch below illustrates this.
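
To make the “zero knowledge” point concrete, here is a minimal sketch (toy numbers of my own, not from the sequence): a hypothesis that is equally good at explaining every outcome has a likelihood ratio of 1 and so produces no update, while a hypothesis that forbids some outcomes does.

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' rule for a binary hypothesis."""
    numerator = p_data_given_h * prior
    return numerator / (numerator + p_data_given_not_h * (1 - prior))

prior = 0.5

# A mysterious answer ("magic"): equally good at explaining any of four
# possible outcomes, so it assigns each one probability 0.25, exactly
# what you'd assign knowing nothing.
print(posterior(prior, 0.25, 0.25))  # 0.5: no update, zero knowledge

# A real theory: it forbids three of the four outcomes and predicts the
# one actually observed.
print(posterior(prior, 1.00, 0.25))  # 0.8: the permitted outcome is evidence
```

The “theory” that explains everything never moves the posterior; only a hypothesis that could have been wrong gains credence from being right.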

Self-awareness as a level of sentience

Can we define physically independent entities that experience self-awareness as separate, highly sentient beings *worth caring about*? Humans possess a dense cluster of nerve cells in the brain, with a comparative minority spread throughout the body and gut. Throughout our lives almost all of our cells die and are replaced, the exception being those nerve cells. Nevertheless, it is fair to treat neurons as a substrate upon which information is encoded, rather than as a timeless definition of “who we are.”

The only commonalities between the same person at an early age and at a late age are their DNA (encoded with too much compression to define an individual sentient creature, so meaningless for this exercise), much of their nervous system (many nerve cells don’t regrow, although technology is changing that, and can live until the rest of the body dies), and some of the information encoded in their cortical connections. Without a concept of self, particularly the ability to generate a narrative and a timeline connecting the past self to the future self, I propose the being has no concept of individuality or ongoing existence!

This suggests some interesting conclusions. People with surgically split brains can be considered two independent creatures (not completely independent, since they are physically confined within the same body, but they can function independently on a neurological level, which is all that matters here). More importantly, a lack of self-awareness may imply that a creature is unable to distinguish between its selves over time, let alone between itself and others. If this can be described as an inability to narrate one’s own life, it seems to preclude the experience of high-level emotions such as suffering and joy, which would also imply that it is not morally wrong to kill such creatures. Interestingly, humans seem to fall into this category at an early enough age, although the future of such a human involves an ascent toward self-awareness. On this view, it is morally justifiable to kill cows, since they cannot achieve self-awareness, but not to kill human babies, since they will achieve self-awareness in a future state (though perhaps it is more morally justifiable than killing an already self-aware human?).