Computational transcession

A very interesting potential solution to the Fermi paradox, one which turns the Kardashev scale on its head. It suggests that intelligent lifeforms tend to evolve and expand inwards rather than outwards. The idea seems to hinge on the claim that transferring biological lifeforms onto a denser substrate (computation on silicon, initially?) and then accelerating energy-efficiency gains by increasing "computational density" will happen faster than expansion into space (very time-consuming without the energy to reach very high speeds) in search of more energy.

Bayesianism and Ethics

In an Aeon article from last year, David Deutsch seems to be taking a dim view of Bayesianism as a core component of an AGI:

The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished.

Or rather, since the Bayesian AGI just does whatever rewards it most, it will align its 'values' accordingly. I think the problem lies in the hidden meaning of the word values. What he is referring to here is probably something like a compressed version of the AGI's internal decision-making model. In fact, you could argue that this is, in theory, what we consider the personal or ethical values of a human being to really be. Of course, in practice a human's actions frequently depart from his advertised values, probably by design.

The core problem here is not with Bayesianism but rather that an unfortunately designed AGI could find itself with a fitness function (and a corresponding set of values, ethics, etc.) that is not necessarily friendly to humans. Or perhaps he's suggesting the problem lies with any AGI that has a goal system. What is the alternative, in that case?

Eating your cousin

The mammalian superorder Euarchontoglires split about 90 million years ago into its two branches: Glires (which includes rodents and rabbits) and Euarchonta (which includes primates, and therefore us). So when you eat that nice juicy rabbit steak, you're eating one of your furry ~9-millionth cousins (assuming about 10 years per generation on average). Of course, if you live in an African jungle and lack food sources, you may be eating monkeys, whose lineage split from ours only around 5-10 million years ago, making them perhaps your 250,000th cousins (generations probably get longer in this lineage; say 20 years). During instances of cannibalism, of course, you're likely eating a 10th or 20th cousin (assuming the cannibalism occurs where it has in the past).
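The back-of-envelope arithmetic above can be sketched in a few lines of Python. The divergence times and generation lengths are the rough figures assumed in the text, not precise values; the cousin degree is simply divergence time divided by generation length.

```python
def cousin_degree(divergence_years, years_per_generation):
    """Approximate cousin degree: the number of generations back
    to the common ancestor of two diverged lineages."""
    return round(divergence_years / years_per_generation)

# Rough figures from the text above (assumptions, not measurements):
rabbits = cousin_degree(90_000_000, 10)   # Glires/Euarchonta split, ~10 yr generations
monkeys = cousin_degree(5_000_000, 20)    # human/other-primate split, ~20 yr generations

print(f"Rabbits: ~{rabbits:,}th cousins")   # ~9,000,000th
print(f"Monkeys: ~{monkeys:,}th cousins")   # ~250,000th
```

This treats generation length as constant along each lineage, which is obviously a simplification; the point is only the orders of magnitude.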

How many generations would you have to go back before reaching a common ancestor with someone (or something) before you're comfortable eating them, ethically? How far back until you're comfortable mating with them, and, for that matter, how far back until you're uncomfortable again? Tricky business, this.

Flying arms

Don’t tell the TSA or soon we’ll have robocops on every plane. Perhaps it’s time to rethink the pile of rules and regulations and start with something evidence-based? Is profiling based on appearance and race so bad (ethically? bad how?) that it’s not worth the massive gains in efficiency we would see getting through airports? Seems unlikely. PreCheck seems to be a step in the right direction; we should be able to give up some privacy in exchange for not having to get to security 3 hours before our flights.

Mammalian slaughter

A January 2012 study, published in the journal Proceedings of the National Academy of Sciences, estimated that Florida’s raccoon population had fallen by 99.3 percent, opossums by 98.9 percent, and bobcats by 87.5 percent between 2003 and 2011.

I think we need more crocodiles to combat them (the invasive Burmese pythons the study implicates in the declines).

Standing desk

I’ve had a standing desk at home for a few years now, and I can safely say I’m still a fan. My friend Zak switched to one at home recently and mentioned that the first benefit is that it gets you away from the computer. I think this is still one of the primary benefits for me; instead of slouching in front of a screen being the comfortable default, it forces you to reconsider, and often to choose reading a book on the couch instead. I feel it also tends to increase energy levels.

If you spend all day at work sitting in front of a computer and then do the same at home, I would strongly recommend switching one of the two environments to standing. I did mine with a cheap old IKEA desk with adjustable-height shelves. Another key component has been one of those standing mats that cashiers sometimes have. A nice one is relatively expensive (more than the desk, actually, at around $200), but the few times I’ve stood without one my heels were feeling it pretty quickly, and not in a good way.

Habit training

I wonder how much of people’s frustration with aspects of their lives is due to a lack of understanding of habit. It’s a concept that any self-conscious being is probably familiar with, but the depth tends to stop there. How many people take time (or read books!) to understand how habits have shaped their lives, or take on the more difficult task of learning how to break undesirable habits and form good ones? It is a bit of a metacognitive task, but not outside the reach of the majority of people, I would imagine. To the extent that willpower is a useful concept, making and breaking habits certainly requires some of it. But the key is that having the skills to do so efficiently means spending less energy/willpower/whatever to accomplish the same task. This seems powerful, yet we rarely see people explicitly seeking out habit-management skills.

Affecting the conversation

Tyler Cowen’s comment on recent Republican strategy does a good job of summarizing the thoughts I mostly left unelaborated in the previous post. This is a well-understood bargaining technique, and it works particularly well when the opponents have no choice but to address the belligerents in a formal, public setting. The right wing never actually expected to block the debt-ceiling increase or to repeal the ACA, but they did expect to have an impact on future conversations, and they certainly moved the Schelling point in their favor on many issues. This also appears similar, albeit more successfully executed, to U.S. foreign policy during the Vietnam War.