Why does one experience so much akrasia in trying to revisit past insights? Maybe it’s something only I experience, but I doubt it.
When reading I try to mark, comment on, and flag deep or interesting insights. While consuming them my brain releases quite a bit of positive feedback, and it’s enjoyable just from that standpoint.
Why is it then so hard to pick up an old book and look at the 3 pages out of 500 that I have marked as truly insightful? Yes, it is possible that a lot of the information has already been internalized and incorporated into myself. And yes, I will likely remember the insight upon rereading, and it will not be quite as enjoyable. Yet on average, per unit of time spent, I am likely to experience far more pleasure, insight, etc. from reviewing past finds than from plowing through new, unrefined ore.
Fundamentally, we are very complex machines evolved to carry pieces of our DNA forward through time.
A thought provoking documentary series and I agree with a lot of the underlying ideas.
I think they’ve missed the main point though. Humans and societies, just like animals and ecosystems, are inherently very dynamic and unstable systems. Left alone, nature is very good at adapting to gradual change, but in the process it will exhibit extreme instability. The result is a tough life for the individual components (wild animals, plants, cells) but the persistence and success of the system (living creatures on planet Earth) over a very long time, billions of years, and across a very diverse environmental landscape. They got this part right. What’s left is to realize that human civilization has continually tried to fight and calm this instability, driven primarily by those with power and wealth who wish to protect the status quo. The result appears to be long periods of stability followed by large conflicts and economic disasters. The conclusion should not be to throw up our hands and say we can’t handle the challenge, but to try to understand it better.
Left to their own devices, humans and societies of humans will behave in the same way as any natural system, because fundamentally the former is a subset of the latter. The turbulence of civilizations is the natural state of our system. This instability should also be visible in our economic systems, which has certainly been the case for as far back as recorded data is available. We need to consider the contradictions between some of our moral and ethical ideals and the behavior of self-stabilizing (in the long run) natural systems. Modern liberal democracies seem to have achieved a utopia in which individuals feel they have freedom and control, but in fact have allowed themselves to be hijacked by political and corporate power centers. This is not a coincidence; we as a species are simply not intelligent enough to successfully self-regulate a large network from the inside.
The problem arises from the feeling humans have of being special. We hold the individual as sacrosanct; we defend with our lives the ideals of individual freedom and equality. It appears to me that, at least with current technology and knowledge, there exists a direct trade-off between the worship of the individual and our ability as a species to adapt and continue improving, to thrive in the long run.
What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.
— Eugene Gendlin
Is vanity (specifically in the sense of attractiveness) solely a function of sexual competition and for purposes of arousal? Would we find a lack of vanity in a society of equally well-off (to prevent status competitions for non-sexual goods) asexual beings?
I’m reading the LessWrong Sequences from the beginning. Most of them are a repeat for me but there may be some sections I skipped originally, and of course plenty of things forgotten.
The sequences provide stepping stones of logic to help people internalize non-trivial concepts about rationality and similar topics.
I’m going to leave notes on the main ideas I take away. Mostly to serve as a reminder to me, but hopefully it encourages other people to read along.
The first sequence is Mysterious Answers to Mysterious Questions.
- A belief is only worth holding if it affects the distribution of anticipated experience. If following a chain of beliefs does not lead to an anticipated experience, or leads to a prediction that is contradicted by reality with sufficient evidence, the belief[s] should be deleted.
- No two rationalists can agree to disagree. This is a symptom of disagreeing on the facts. Check your definitions.
- Focus on narrow, small steps of inference. It may seem wise and impressive to generalize but it is much harder to prove, or identify logical errors while trying to do so.
- “A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.” – Yudkowsky
- Absence of evidence is evidence of absence. A state of the world is less likely if we see no evidence for it.
- The expectation of the posterior probability (right side), after viewing the evidence, must equal the prior probability (left side). Evidence cannot be one-sided; it cannot just confirm or just deny.
- P(H) = P(H|E)*P(E) + P(H|~E)*P(~E). This is important:
If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence – before you’ve actually seen it – should not shift your prior beliefs. –Yudkowsky
- Hindsight bias is hard to deal with. We need to imagine a world before the evidence came in to truly appreciate the value of the science.
- An explanation is not an explanation if it does not constrain the probability space. Using scientific words does not add value if the meaning is not understood. Do not merely accept the naming of a scientific phenomenon or theory as an explanation; use its implications to see if the evidence can truly be explained by the specific theory or system (and not just by anything, “magic”).
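The conservation-of-expected-evidence identity above can be checked numerically. The numbers below are hypothetical, chosen only to illustrate: a prior of 0.9 and a test that usually confirms the hypothesis weakly, but would disconfirm it strongly on a rare failure.

```python
# Numerical check of conservation of expected evidence:
# E[P(H|evidence)] must equal the prior P(H).
# All probabilities here are made-up illustrative values.
p_h = 0.9                 # prior P(H)
p_e_given_h = 0.95        # P(E|H): test almost always fires if H is true
p_e_given_not_h = 0.2     # P(E|~H): false-positive rate

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' rule for each possible outcome
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Seeing E (likely) nudges belief up a little; missing E (unlikely)
# deals it a huge blow. On average the posterior equals the prior.
expected_posterior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(round(p_h_given_e, 3), round(p_h_given_not_e, 3))
print(abs(expected_posterior - p_h) < 1e-9)
```

The asymmetry is exactly Yudkowsky’s point: the common, expected observation can only buy a small increment of confidence, while the rare, unexpected one must cost a large decrement, and the two flows balance precisely.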
Can we define physically independent entities that experience self-awareness as separate, highly sentient beings *worth caring about*? Humans possess a dense cluster of nerve cells in their brain, with a comparative minority spread throughout the body and gut. Throughout our life almost all of our cells die and are replaced, the exception being those nerve cells. Nevertheless, it is fair to treat neurons as a substrate upon which information is encoded rather than something that is a timeless definition of “who we are.”
The only similarities between the same person at an early age and a late age are their DNA (an encoding with too much compression to define an independent sentient creature, so meaningless for this exercise), much of their nervous system (many neurons don’t regrow, although technology is changing that, and can live until the rest of the body dies), and some of the information encoded in their cortical connections. Without a concept of self, particularly the ability to generate a narrative and a timeline connecting the past self to the future self, I propose the being has no concept of individuality or ongoing existence!
This raises some interesting conclusions. People with surgically split brains can be considered two independent creatures (they are not completely independent, since they are physically confined within the same system, their body, but this doesn’t change the fact that they can function independently on a neurological level, which is all that matters). More importantly, a lack of self-awareness may imply that a creature is unable to distinguish between its selves over time, let alone between itself and others. If this can be described as a lack of ability to narrate one’s life, it seems to prevent the experience of high-level emotions such as suffering, joy, etc. This would also imply that it is not morally wrong to kill such creatures. Interestingly, it seems that humans fall into this category at an early enough age, although the future of said human involves an ascension towards self-awareness. We can say that it is morally justifiable to kill cows since they cannot achieve self-awareness, but not morally justifiable to kill baby humans since they will achieve self-awareness in a future state (although it is perhaps more morally justifiable than killing an already self-aware human?).
Noah seems to argue that taking property rights to extremes and privatizing everything (the libertarian dream) would increase efficiency, but would also introduce onerous monetary and psychological transaction costs and lead to a feeling of less freedom rather than the desired increase. This seems like a straw man to me. No economist will argue for such extreme levels of privatization, precisely because it does not increase efficiency. We know that in practice, public goods and pseudo-public goods (public trash cans and benches, parks, etc.) tend to be more efficiently provisioned by government, largely because of the difficulty of making capital allocation decisions at the high frequency and small amounts that would be required if these goods were provided privately (assuming current technology).
However, we can imagine a world where our usage of semi-public goods is metered unobtrusively. In exchange for paying less in income taxes, we would be taxed in proportion to our usage of roads (already being tested in the Netherlands), parks, and sidewalks, just as we now pay for water and electricity (yes, these are provided by pseudo-private for-profit institutions, but the structure and regulation of those industries results in a market that mimics government-provided utilities). If we had a sufficiently transparent and trustworthy infrastructure to meter our usage of such things, we’d have no problem paying for them once at the end of the year. To eliminate the remaining potential for psychological costs when deciding whether to allocate capital to such small privileges as walking on the sidewalk, we can simply make these taxes steeply progressive (since the end-of-year bill will come with knowledge of the user’s wealth). This would eliminate most of the problems cited by Noah and, I believe, is coming soon to first-world nations.
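The billing scheme described above can be sketched in a few lines. Everything here is hypothetical, purely for illustration: the wealth brackets, the per-unit rates, and the kinds of metered goods are invented, not drawn from any real proposal.

```python
# Toy year-end bill for metered semi-public-good usage, where the
# per-unit rate rises steeply with the user's wealth. All brackets,
# rates, and good names below are made up for illustration.

def usage_rate(wealth: float) -> float:
    """Steeply progressive per-unit rate (hypothetical brackets)."""
    if wealth < 50_000:
        return 0.01
    elif wealth < 500_000:
        return 0.05
    else:
        return 0.25

def year_end_bill(metered_units: dict, wealth: float) -> float:
    """Total annual usage across goods, billed at one wealth-based rate."""
    rate = usage_rate(wealth)
    return rate * sum(metered_units.values())

# A year's metered usage for one (imaginary) resident
bill = year_end_bill({"road_km": 4_000, "park_hours": 120}, wealth=80_000)
print(bill)  # 4120 units at 0.05 each -> 206.0
```

Because the rate is resolved only once, at billing time, the per-use decision cost the post worries about disappears: walking on a sidewalk never triggers a visible charge, and a low-wealth user’s marginal rate rounds the whole year’s bill down to near nothing.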
He begins with the curious case of color in dreams. When people today are asked whether they regularly dream in color, most say they do. But it was not always so. Back in the 1950s most said they dreamed in black and white. Presumably it can hardly be true that our grandparents had different brains that systematically left out the color we put in today. So this must be a matter of interpretation. Yet why such freedom about assigning color? Well, try this for an answer. Suppose that, not knowing quite what dreams are like, we tend to assume they must be like photographs or movies — pictures in the head. Then, when asked whether we dream in color we reach for the most readily available pictorial analogy. Understandably, 60 years ago this might have been black-and-white movies, while for most of us today it is the color version. But, here’s the thing: Neither analogy is necessarily the “right” one. Dreams don’t have to be pictures of any kind at all. They could be simply thoughts — and thoughts, even thoughts about color, are neither colored nor non-colored in themselves.