The man in the article didn’t strike me as particularly brilliant; rather, he helped underscore the difficulty of the problem at hand: creating a game whose expectancy is negative but close enough to zero to keep people playing, while incorporating elements that trick the player into thinking the game is predictable. People are drawn to casino games like slots because the game unfolds over time and they see the potential of winning far more often than the odds actually allow (slots show near-wins a much greater percentage of the time than chance would suggest, and this drives both the addiction and the profitability of the game).
Either that, or the game makers are lazy and stupid. Why not take the techniques they already apply to the visible parts of the game and randomly distribute them over a set of pre-created results cards, rather than making the visible aspect some function of the results?
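A minimal sketch of the decoupling idea, with made-up payouts and symbols: the outcome is drawn first from a fixed weighted table, and only then is a display chosen to match it, so near-misses are pure cosmetics that never touch the odds.

```python
import random

# Hypothetical results-card table; payouts and weights are invented for
# illustration, not taken from any real machine.
RESULTS_CARDS = [
    {"payout": 0,   "weight": 90},  # loss
    {"payout": 5,   "weight": 9},   # small win
    {"payout": 100, "weight": 1},   # jackpot
]

def spin(rng=random):
    # Step 1: pick the financial outcome; the display has no say in it.
    card = rng.choices(RESULTS_CARDS,
                       weights=[c["weight"] for c in RESULTS_CARDS])[0]
    # Step 2: pick visuals consistent with that outcome. Losses are
    # deliberately rendered as near-misses far more often than chance
    # alone would produce.
    if card["payout"] == 0:
        display = ("7", "7", rng.choice(["cherry", "bar", "bell"]))
    elif card["payout"] == 100:
        display = ("7", "7", "7")
    else:
        display = ("bar", "bar", "bar")
    return card["payout"], display
```

Because the visible reels are generated after the result is fixed, the designer can tune the near-miss rate freely without ever changing the expectancy.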
A good lottery could use stock market data (show a historical chart; predict the next move!). This would be foolproof as long as the makers could collect a large enough set of independent, high-resolution samples, perhaps even splicing together samples from different stocks and time periods at random.
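The splicing idea could look something like this toy sketch (the price series here are synthetic, and the function name is my own): stitch short random windows from several independent series, re-basing each piece so the joins are seamless and no player can identify the underlying stock or period.

```python
import random

def make_chart(series_pool, window=20, n_windows=5, rng=random):
    """Splice random windows from different price series into one chart."""
    chart = []
    level = 100.0
    for _ in range(n_windows):
        s = rng.choice(series_pool)
        start = rng.randrange(len(s) - window)
        piece = s[start:start + window]
        # Shift the piece so it starts exactly where the chart left off,
        # hiding the splice point from the player.
        offset = level - piece[0]
        chart.extend(p + offset for p in piece)
        level = chart[-1]
    return chart  # the player bets on whether the next move is up or down
```

Since each window comes from a genuinely independent sample, the "predict the next tick" bet inherits the unpredictability of the underlying markets.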
It’s a good thing that our brain filters out most of the details in our lives and makes them mundane, otherwise we’d be sitting around all day drooling. We need psychoactive drugs or sensory enhancement/artists to focus our brain. This touches on the issue of wireheading as well, something I’ll try to discuss more as I still have not decided my opinion of it.
Simple linear models outperform experts in making predictions. Notice that the examples are mostly from the social sciences; it seems they have yet to learn a lesson the hard sciences absorbed over a hundred years ago. Potential root causes include misguided ethical ideals (a cultural aversion to discrimination by race/gender/income/etc.), the cognitive biases of the experts, and a fear of irrelevance on the part of the experts. Trust the numbers; science!
The best heuristic I’ve ever heard for making better real-world decisions; just watch: http://www.youtube.com/watch?v=RoEEDKwzNBw
A link to Tyler’s blog is on the right…
I first learned about this from the Herzog film Encounters at the End of the World (recommended), where they show how difficult it is to navigate in a blizzard: without visual cues, humans walk in circles.
This doesn’t seem like a big mystery to me. It’s very difficult to move in a straight line; one has to maintain a very delicate balance, whether between the force exerted by the left vs. right foot or in the position of the steering wheel when driving a car. Humans (and, as far as I can tell, any system based on a neural network) have a very hard time with “absolutes” and instead think and act in relative terms. Neurons fire in response to changes in stimulus; a sustained stimulus produces firing rates that die down over time. My hypothesis is that people make one tiny mis-calibration in pressure while trying to move in a straight line, and after a few seconds any sense that the mis-calibration occurred has been discarded by the brain, since the change in force was so small and short-lived. The person then carries on unknowingly at the new level of force (a slightly longer right foot stride, a steering wheel tilted 2 degrees to the left, etc.), producing slow, gentle curvature over time.
A major piece of evidence for this is that while the studies consistently show that humans travel in circles, there is no consistency in how tight the circles are or how quickly they appear. This is exactly what you’d expect if, for each person (and even for each separate trial by the same person), the deviation from perfectly balanced force exertion occurred at a pseudo-random time and by a pseudo-random amount.
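The hypothesis is easy to simulate. In this toy model (all parameters invented), a walker aims straight ahead, but occasionally a tiny uncorrected error shifts a persistent heading bias; because the "brain" adapts to the new level and never corrects it, the path slowly curls into circles whose size varies from run to run.

```python
import math
import random

def walk(steps=5000, slip_prob=0.01, rng=random):
    """Simulate a walker whose heading drifts by small, unnoticed slips."""
    x = y = heading = bias = 0.0
    path = [(x, y)]
    for _ in range(steps):
        if rng.random() < slip_prob:
            # A tiny mis-calibration that is never noticed or corrected:
            # the bias persists at its new level from here on.
            bias += rng.gauss(0.0, 0.01)
        heading += bias          # the persistent bias bends each step
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path
```

Each run produces a different accumulated bias history, so the curls differ in tightness and onset between trials, matching the inconsistency the studies report.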