Eliezer Yudkowsky is a co-founder and research fellow at the Singularity Institute for Artificial Intelligence, an institute devoted to the study of safe advanced artificial intelligence, and one of the world's foremost researchers on Friendly AI and recursive self-improvement. He is chiefly known for pioneering the study of Friendly AI, which emphasizes the structure of an ethical optimization process and its supergoal, in contrast to the more common approach of seeking a fixed enumeration of ethical rules for a moral agent to follow. In 2001, he published the book-length Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures, the first technical analysis of motivationally stable goal systems. In 2002, he wrote "Levels of Organization in General Intelligence," a paper on the evolutionary psychology of human general intelligence, published in the edited volume Artificial General Intelligence. He also has two papers forthcoming in the edited volume Global Catastrophic Risks: "Cognitive Biases Potentially Affecting Judgment of Global Risks" and "Artificial Intelligence as a Positive and Negative Factor in Global Risk."
A self-described autodidact, Yudkowsky discussed the technological creation of the first smarter-than-human intelligence, which he calls the Singularity.