Diet sense

A pretty good overview of paleo nonsense ends with a solid summary of what science can agree on at this point:

Having talked to all of these people and read their work, here is how I walk away from this. Oxidative stress will increasingly be the target of medical treatments and preventive diets. We’ll hear more about the role of blood sugar in Alzheimer’s and continue to focus on moderating intake of refined carbohydrates. The consensus remains that too much LDL is bad for you. We do not have reason to believe that gluten is bad for most people. It does cause reactive symptoms in some people. Peanuts can kill some people, but that does not mean they are bad for everyone. I agree with Katz that the diets consistently shown to have good long-term health outcomes—both mental and physical—include whole grains and fruits, and are not nearly as high in fat as what Perlmutter proposes.

The amount of hype over a major dietary change backed by so little scientific evidence is not surprising; this ain’t the first time.

Computational transcession

A very interesting potential solution to the Fermi paradox which turns the Kardashev scale on its head. It suggests that intelligent lifeforms will tend to evolve and expand inwards rather than outwards. The argument seems to balance on the idea that migrating biological lifeforms onto a denser substrate (computational, on silicon initially?), and then accelerating energy-efficiency gains by increasing “computational density”, will happen more quickly than expansion into space in search of more energy (which is very time-consuming without enormous energy to reach high speeds).
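As a toy illustration of why the inward path could win (every constant here is an invented assumption, not a claim about real physics or hardware): exponential efficiency gains compound far faster than the roughly cubic growth in resources reachable by expanding outwards at a fixed speed.

    # Toy model: 'inward' compute-per-joule gains vs 'outward' expansion.
    # All constants are made up purely for illustration.
    DOUBLE_EVERY = 2.0      # assumed years per doubling of ops/joule
    EXPANSION_SPEED = 0.01  # assumed expansion speed as a fraction of c

    for years in (10, 50, 100, 200):
        inward = 2 ** (years / DOUBLE_EVERY)      # efficiency multiplier
        outward = (EXPANSION_SPEED * years) ** 3  # ~ volume of space swept out
        print(f"{years:>4} yr: inward x{inward:.3g} vs outward x{outward:.3g}")

On these (made-up) numbers the inward multiplier dwarfs the outward one within decades, which is the intuition the hypothesis leans on.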

Bayesianism and Ethics

In an Aeon article from last year, David Deutsch seems to be taking a dim view of Bayesianism as a core component of an AGI:

The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished.

Or rather, since the Bayesian AGI just does whatever rewards it most, it will align its ‘values’ to correspond to that. I think the problem lies in the hidden meaning of the word ‘values’. What he is referring to here is probably something like a compressed version of the AGI’s internal decision-making model. In fact, you could argue that this is, in theory, what the personal or ethical values of a human being really are. Of course, in practice a human’s actions frequently depart from his advertised values, probably by design.
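As a concrete (and deliberately crude) illustration of the picture Deutsch is objecting to, here is a minimal sketch of reward-driven value updating, using Thompson sampling over Beta-Bernoulli beliefs as a stand-in for ‘experience reinforcing values’; the value names and reward rates are invented for illustration:

    import random

    # Toy Bayesian agent: each candidate 'value' (policy) carries a
    # Beta(alpha, beta) belief over how often acting on it gets rewarded.
    values = {"honesty": [1, 1], "flattery": [1, 1]}  # [alpha, beta] priors

    def choose(values):
        # Thompson sampling: act on whichever value currently looks best.
        return max(values, key=lambda v: random.betavariate(*values[v]))

    def update(values, v, rewarded):
        # 'Experience' reinforces or extinguishes the value, which is
        # exactly the behaviouristic dynamic the quote describes.
        values[v][0 if rewarded else 1] += 1

    for _ in range(1000):
        v = choose(values)
        # Hypothetical environment that happens to reward flattery more often.
        update(values, v, rewarded=random.random() < (0.8 if v == "flattery" else 0.4))

    print(values)  # 'flattery' ends up dominating behaviour

After enough experience the agent’s ‘values’ simply track whatever the environment happens to reward, which is the crux of the alignment worry below.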

The core problem here is not with Bayesianism, but rather that an unfortunately designed AGI could find itself with a fitness function (and corresponding set of values, ethics, etc.) which is not necessarily friendly to humans. Or perhaps he’s suggesting it’s a problem with any AGI that has a goal system. What is the alternative, in that case?

Eating your cousin

The mammalian superorder Euarchontoglires split into its two sister groups, Glires (which includes rodents and rabbits) and Euarchonta (which includes primates, and therefore us), about 90 million years ago. So when you eat a nice juicy rabbit steak, you’re eating one of your furry ~9-millionth cousins (assuming about 10 years per generation on average). Of course, if you live in an African jungle and lack food sources, you may be eating chimpanzees or other apes, which split from humans only around 5–10 million years ago, making them perhaps your 250,000th cousins (generations probably get longer in this lineage; say 20 years). During instances of cannibalism, of course, you’re likely eating a 10th or 20th cousin (assuming the cannibalism is occurring where it historically has).
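The cousin arithmetic above is just divergence time divided by generation length; a quick sketch (the generation lengths are the same rough guesses as in the text):

    def cousin_degree(divergence_years, generation_years):
        # nth cousins share a common ancestor n+1 generations back, so the
        # degree is generations-to-ancestor minus one (negligible at this scale).
        return divergence_years / generation_years - 1

    print(cousin_degree(90e6, 10))  # rabbits: ~9 million
    print(cousin_degree(5e6, 20))   # chimps: ~250,000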

How many generations back would the common ancestor you share with someone (or something) have to be before you’re comfortable eating them, ethically? How far back until you’re comfortable mating with them, and how far back until you’re uncomfortable again? Tricky business, this.

Flying arms

Don’t tell the TSA, or soon we’ll have robocops on every plane. Perhaps it’s time to rethink the pile of rules and regulations and start over with something evidence-based? Is profiling based on appearance and race so bad (ethically? bad how?) that it isn’t worth the massive efficiency gains we would see getting through airports? Seems unlikely. PreCheck seems like a step in the right direction; we should be able to trade some privacy for not having to arrive at security three hours before our flights.