Tegmark: Life 3.0

How was the book?

This book is exciting, and everyone interested in artificial intelligence should read it. It is a great supplement to Nick Bostrom’s ”Superintelligence”.

I like the way Tegmark summarizes big and important topics such as AI and consciousness. He typically opens a summary by saying ”In summary”… :-) Second, he keeps readers hooked with simple language and examples. Third, the story of Prometheus and the Omegas was a fascinating picture of how the world could change as power shifted from the people to movements backed by companies owned by the Omegas. I think there was a small resemblance to Google, Facebook and Microsoft. Aren’t they doing the same thing?

The essence of the book is ”exploring the origin and fate of intelligence, goals and meaning.” Tegmark also wants to explore how to turn these ideas into action. The book is ”the tale of our own future with AI.” It is also an invitation to join the conversation about AI, as Tegmark states: ”I wrote it in the hope that you, my dear reader, will join this conversation.”

The name of the book comes from the idea that:

”Life 1.0 (biological stage): evolves its hardware and software.
Life 2.0 (cultural stage): evolves its hardware, designs much of its software.
Life 3.0 (technological stage): designs its hardware and software.”

Life 3.0 will obviously be AI, or rather AGI (artificial general intelligence). Learning and accomplishing goals are characteristic of an AGI; by AGI Tegmark means AI that can reach human level and beyond, which will enable Life 3.0.

What are the key learnings of the book? 

Three schools of thought

”The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” (Isaac Asimov)

There are three distinct schools of thought about when (if ever) human-level AGI will happen, and what it will mean for humanity: digital utopians, techno-skeptics and members of the beneficial-AI movement.

I) Digital Utopians. ”Digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good. Most of the utopians think human-level AGI might happen within the next twenty to a hundred years.”

Such as Larry Page from Google: ”Don’t be evil”.

II) Techno-skeptics. ”They think that building superhuman AGI is so hard that it won’t happen for hundreds of years, and therefore view it as silly to worry about it now.”

Such as Andrew Ng: “Fearing a rise of killer robots is like worrying about overpopulation on Mars.”

III) The Beneficial-AI Movement. ”Stuart Russell and many groups around the world are pursuing the sort of AI-safety research that he advocates. Concerns similar to Stuart’s were first articulated over half a century ago by computer pioneer Alan Turing and mathematician Irving J. Good.

The key question is ”how to build beneficial AI.” AI research should be redefined: the goal should be to create not undirected intelligence, but beneficial intelligence.

The questions raised by the success of AI aren’t merely intellectually fascinating; they’re also morally crucial, because our choices can potentially affect the entire future of life.”

We have to write the specifications for AI in such a way that we ourselves are happy with the outcome. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours.

What is intelligence?

It is the ”ability to accomplish complex goals”. That’s why ”there’s no fundamental reason why machines can’t one day be at least as intelligent as us. The ability to learn is arguably the most fascinating aspect of general intelligence.”

”The driving force behind many of the most recent AI breakthroughs has been machine learning. Natural language processing is now one of the most rapidly advancing fields of AI, and I think that further success will have a large impact because language is so central to being human. The better an AI gets at linguistic prediction, the better it can compose reasonable email responses or continue a spoken conversation. Although it doesn’t understand what it’s saying in any meaningful sense.”

AI-safety research

There are four main areas of technical AI-safety research:

I) Verification = Ensuring that software fully satisfies all the expected requirements.

II) Validation = “Did I build the right system?”

III) Security

IV) Control = ”But sometimes good verification and validation aren’t enough to avoid accidents, because we also need good control: ability for a human operator to monitor the system and change its behavior if necessary.”

Tegmark illustrates scenarios you do not want to end up in. For example: ”What if the phishing email appears to come from your credit card company and is followed up by a phone call from a friendly human voice that you can’t tell is AI-generated?”

But robojudges could in principle ensure that, for the first time in history, everyone becomes truly equal under the law: they could be programmed to all be identical and to treat everyone equally, transparently applying the law in a truly unbiased fashion.

Future of Work

Tegmark is a Jobtimist. If a profession lets you answer yes to these questions, it is a good bet for the future of work:

I) Does it require interacting with people and using social intelligence?

II) Does it involve creativity and coming up with clever solutions?

III) Does it require working in an unpredictable environment?

The following professions are a safe bet: teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist. “Work keeps at bay three great evils: boredom, vice and need.” (Voltaire)

Philosophy with a deadline (Nick Bostrom).

Tegmark spends a lot of time exploring how AI could execute a takeover of Earth, ”exploring scenarios with slower takeoffs, multipolar outcomes, cyborgs and uploads”.

On slow takeoff and multipolar scenarios: ”We’ve now explored a range of intelligence explosion scenarios, spanning the spectrum from ones that everyone I know wants to avoid to ones that some of my friends view optimistically. Yet all these scenarios have two features in common: a fast takeoff (the transition from subhuman to vastly superhuman intelligence occurs in a matter of days, not decades) and a unipolar outcome (the result is a single entity controlling Earth).” Globalization is merely the latest example of this multi-billion-year trend of hierarchical growth.

Consciousness = subjective experience. Would an artificial consciousness feel that it had free will? “Yes, any conscious decision maker will subjectively feel that it has free will, regardless of whether it’s biological or artificial.” Decisions fall on a spectrum between two extremes: either you know exactly why you made that particular choice, or you have no idea why you made it and it felt like you chose randomly on a whim.

If some future AI system is conscious, then what will it subjectively experience?

A) First of all, the space of possible AI experiences is huge compared to what we humans can experience.

B) Second, a brain-sized artificial consciousness could have millions of times more experiences than us per second, since electromagnetic signals travel at the speed of light—millions of times faster than neuron signals.

We need to find answers to some of the oldest and toughest problems in philosophy—by the time we need them.

The long-term future of humanity

Tegmark sees the cosmos as a future playground for humanity and superintelligence. ”If we discover an extraterrestrial civilization, it’s likely to already have gone superintelligent. My vote is for embracing technology, and proceeding not with blind faith in what we build, but with caution, foresight and careful planning.”

How should we change according to the book?

”The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (Irving J. Good)

I) Become humble: ”Traditionally, we humans have often founded our self-worth on the idea of human exceptionalism: the conviction that we’re the smartest entities on the planet and therefore unique and superior. The rise of AI will force us to abandon this and become more humble.”

II) Homo sapiens has to do some re-branding: ”From this perspective, we see that although we’ve focused on the future of intelligence in this book, the future of consciousness is even more important, since that’s what enables meaning. Philosophers like to go Latin on this distinction, by contrasting sapience (the ability to think intelligently) with sentience (the ability to subjectively experience qualia). We humans have built our identity on being Homo sapiens, the smartest entities around. As we prepare to be humbled by ever smarter machines, I suggest that we rebrand ourselves as Homo sentiens!”

What should I personally do? 

”If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position … we should, as a species, feel greatly humbled”. (Alan Turing)

Think: how do I want the future of life to be?

Summary

The book in six words: ”Cogito, ergo sum, i.e. ergo sum, cogito?”