Animalia · Bio · Performativity · Science

The Real Wisdom of the Crowds

Consider this: shiners have a natural preference for darkness. Plop a shoal of them into a pool of water, and they’ll head for the shadiest bits. This is something that animals do all the time: They track gradients in their environment. A migrating robin might follow the Earth’s magnetic field, a moth might follow the scent of a flower, or an ant might track the pheromones laid by its nest-mates. But single shiners are laughably bad at this.

Andrew Berdahl and Colin Torney from Couzin’s team discovered their ineptitude by projecting shifting patterns of light over a shallow pool and adding the shiners in increasing numbers. Overhead cameras tracked their movements, and the team calculated how good they were at chasing the shadows.

The solo fish did so badly that they were almost swimming randomly. Only larger shoals were good at avoiding the shifting light. Even then, Berdahl and Torney found that the shiners’ movements were far more influenced by what their neighbours were doing than by how bright the environment was.

Excerpt from an article written by Ed Yong at Phenomena. Continue HERE

Human-ities · Social/Politics · Theory

The disadvantage of smarts

What, if any, evolutionary advantage does intelligence give us?

Actually, less intelligent people are better at doing most things. In the ancestral environment general intelligence was helpful only for solving a handful of evolutionarily novel problems.

Suggested reading: “The Bell Curve: Intelligence and Class Structure in American Life” by Richard J. Herrnstein and Charles Murray (1994)

You mean our ancestors did not really have to reason?

Evolution equipped humans with solutions for a whole range of problems of survival and reproduction. All they had to do was to behave in the ways in which evolution had designed them to behave—eat food that tastes good, have sex with the most attractive mates. However, for a few evolutionarily novel problems, evolution equipped us with general intelligence so that our ancestors could reason in order to solve them. These evolutionarily novel problems were few and far between. Basically, dealing with any type of major natural disaster that is very infrequent in occurrence would require general intelligence.

Suggested reading: “Evolutionary Psychology and Intelligence Research” by Satoshi Kanazawa, American Psychologist, 65: 279–289 (2010)

Excerpt of an interview with Satoshi Kanazawa on intelligence. Continue HERE

SATOSHI KANAZAWA is Reader in Management at the London School of Economics and Political Science, and Honorary Research Fellow in the Department of Psychology at Birkbeck College, University of London. He has written over 80 articles across the fields of psychology, sociology, political science, economics, anthropology and biology. One such was his widely reported article “Why Liberals and Atheists Are More Intelligent” (2010). His latest book is “The Intelligence Paradox: Why the Intelligent Choice Isn’t Always the Smart One” (2012).

Human-ities · Philosophy · Technology

The Future of Moral Machines

Illustration: Leif Parsons

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords. Just 22 years later, Isaac Asimov invented the “Three Laws of Robotics” to serve as a hierarchical ethical code for the robots in his stories: first, never harm a human being through action or inaction; second, obey human orders; last, protect oneself. From the first story in which the laws appeared, Asimov explored their inherent contradictions. Great fiction, but unworkable theory.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

Written by COLIN ALLEN, NYT. Continue HERE