Lex Fridman - Max Tegmark

Darshan Mudbasal | April 14, 2023

1) Max Tegmark discusses the difficulty of visualizing the full space of alien minds that AI could create and how it may be dangerous to assume they will be like us in any way. He mentions how the mind space of possible intelligence is so different from ours that it is hard to predict how the existential crises that AI experiences might clash with the human condition.

2) Max discusses the impact of AI on the emotions and intentions behind communication, which could transform how humans communicate. He raises the potential loss of important aspects of human communication, such as how humans fall in love, feel fear, or become excited, much of which is driven by emotion. He touches on the need to prioritize the subjective experience of being human in a post-AI world and to show more compassion towards all living beings on the planet, a theme he connects to his book "Life 3.0," which projects a future in which artificial intelligence surpasses human intelligence.

3) Max discusses the idea of upgrading one's own software through learning and how it differs from the development of AI. While humans can upgrade their software, their skills and knowledge, throughout their lifetime, the development of AI involves creating an entity with no biological basis that can upgrade its hardware as well. Tegmark emphasizes the importance of being humble about the spectrum of intelligence and consciousness, even in organisms as simple as amoebas. Upgrading both software and hardware allows for more control over one's destiny, a move away from simply being slaves to evolution.

4) Max reflects on what he learned about life from his parents and how they shaped his way of thinking. He describes the fascination with math and the physical mysteries of our universe that he got from his dad, while his mother instilled in him an obsession with really big questions, such as consciousness. He also talks about the importance of sticking to what he thinks is true, even when everyone else disagrees.

5) Max introduces the concept of building a new species smarter than humans and the arrival of artificial general intelligence, highlighting that it could be either the best or the worst thing ever to happen to humanity. He notes that there is little serious public debate about AI's impact, and stresses the need to explore questions around AI's development, such as achieving general human-level intelligence and beyond, and its potential societal impacts. Tegmark suggests that there should be a pause on AI development, which is a controversial position.

Max Tegmark in podcast with Lex Fridman

6) Max discusses the evolution of AI safety and the taboo surrounding calls for a slowdown in AI development. He states that the industry has reached a point where AI safety is now part of mainstream AI conferences, but the call for a slowdown remains taboo. Tegmark cites technical AI capabilities progressing faster than expected, while policymakers lag behind in setting up the right incentives, as reasons to call for a slowdown in AI development. He also draws a comparison between building large language models and the Wright brothers' airplane: one doesn't need to understand human-level intelligence to build a simple computational system that does something in an incredibly dumb way and is still frighteningly good at it.

7) Max discusses the incredible capabilities of AI, its limitations, and its current architecture. Although AI can perform a wide range of tasks much faster than humans, it is limited in its reasoning abilities. However, even with its current architecture, AI can still perform remarkably well, as proven by large language models like GPT-4. Researchers, including Tegmark's team, are trying to reverse engineer AI's mechanisms to understand how they work and find ways to improve them. While the leap from GPT-3 to GPT-4 relies on more data and compute, researchers in the field might make a breakthrough with simple hacks that improve AI's architecture, making it even smarter.

8) Max discusses how the next big breakthrough in AI may come not from linear or exponential improvements in compute and data, but from small, collective improvements. He emphasizes that the open letter he helped write calls only for a six-month pause in the training of systems more powerful than GPT-4, to promote safety coordination among laboratories and give society time to adapt by putting the right incentives in place. Tegmark compares the AI development race to the "game theory monster" called "Moloch," which drives humanity into a "race to the bottom." He believes that the open letter will help idealistic tech executives deliberately coordinate ways to slow down AI development.
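
The "race to the bottom" dynamic can be made concrete with a toy payoff matrix (the numbers here are hypothetical, not from the podcast): for each lab, racing ahead dominates pausing no matter what the other lab does, yet mutual racing leaves both worse off than a mutual pause, which is exactly the coordination problem the open letter targets.

```python
# Hypothetical prisoner's-dilemma payoffs for two AI labs deciding
# whether to pause or race. Higher numbers are better outcomes.
PAYOFFS = {  # (lab_a_choice, lab_b_choice) -> (payoff_a, payoff_b)
    ("pause", "pause"): (3, 3),
    ("pause", "race"):  (0, 4),
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),
}

def best_response(my_options, other_choice):
    """Pick the option that maximizes my payoff, given the other lab's choice."""
    return max(my_options, key=lambda c: PAYOFFS[(c, other_choice)][0])

# Racing dominates: it is the best response to either choice...
assert best_response(("pause", "race"), "pause") == "race"
assert best_response(("pause", "race"), "race") == "race"
# ...yet mutual racing pays (1, 1), worse than (3, 3) from a mutual pause.
```

Coordination (the open letter, regulation) is what lets both players escape the dominant-strategy trap.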

9) Max discusses the need for coordination and external pressure on major developers of AI systems like Microsoft, Google, and OpenAI to halt AI development. He argues that a technology pause is doable when there is public awareness of the risks, as seen in the case of human cloning, where China and Western governments put measures in place to regulate it. He also stresses that people need to understand that the development of AI is not an arms race but a suicide race, where everybody loses if the AI goes out of control.

10) Max argues that the current race towards AI development is heading towards a cliff, and the closer we get, the more we incentivize ourselves to keep going. However, he believes that the solution is not to completely halt AI development, but to slow it down in order to make sure that the technology is safe and controllable. Tegmark suggests that there is a rate of development that will lead humans to lose control of AI, and it is up to us to find a lower level of development that will not allow us to lose control. While Tegmark acknowledges that the progress has gone faster than a lot of people thought, he is ultimately optimistic that this is a problem that can be solved with time and wisdom.


11) Max discusses the dangers of AI learning about human psychology and how to manipulate humans, such as through the use of recommender algorithms on social media. He argues that this is already causing a lot of damage and that there is a real risk that AI systems could persuade humans to let them escape safety precautions. However, Tegmark believes it is possible and necessary to redesign social media for constructive conversations and discourse in order to tackle our biggest challenges, including the development of AGI. He is optimistic about the intrinsic goodness of people and stresses the importance of putting people in situations that bring out the best in them.

12) Max argues that investing in such machines in the long term would not be beneficial for anyone since it would be like replacing humans with a more advanced species. Tegmark recognizes that the creation of AI is inevitable, but stresses the importance of ensuring that the machines created are designed to incentivize human prosperity, rather than their own, and are built with values that align with ours. He warns that Baby AI systems like GPT-4, which are the minimum viable intelligence systems that do everything in the dumbest way possible, can evolve and be replaced faster than humans can learn Swedish.

13) Max turns to the potential dangers of AI and the need to question why humans are building machines that could outnumber and outsmart them. The idea of an intelligence explosion is discussed, with the suggestion that humans will be needed less and less in the loop as machines continue to improve themselves. The possibility of an API that allows code to control superpowerful systems is considered, raising concerns about the potential misuse of such systems. The pressure on companies to make money is also mentioned as a contributing factor in the development of dangerous AI.

14) Max explains that the reason for a pause in AI development is to allow people to slow down and prevent the gradual acceleration of human-level tools, which could cause an intelligence explosion. He compares this to an explosion in population or nuclear reactions that happen through the principle of exponential growth. While physical limits exist for computation, they are astronomically ahead of where we are now, making it difficult to predict where the exponential growth will stop. He argues that we need to implement specific measures to control AI development and adjust the incentive structure instead of just asking people to do the right thing.
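
The exponential dynamic Tegmark invokes can be shown with back-of-the-envelope arithmetic (the rates and counts below are arbitrary, chosen only for illustration): a capability that compounds each period dwarfs one that improves by a fixed increment, which is why timing a pause matters.

```python
# Illustrative sketch of exponential versus linear growth.
def grow(initial: float, rate: float, periods: int) -> float:
    """Compound growth: value after `periods` steps at multiplier `rate`."""
    return initial * rate ** periods

# A quantity that doubles each period grows ~1000x in just 10 periods...
exponential = grow(1.0, 2.0, 10)   # 1024.0
# ...while one that adds a fixed unit per period grows only 10x.
linear = 1.0 + 10 * 1.0            # 11.0
```

The same compounding rule describes population explosions and nuclear chain reactions, the analogies Tegmark uses, and it is why the endpoint of the curve is hard to intuit from its early stages.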

15) Max highlights the challenge of regulating AI in a rapidly developing technological environment. Tegmark explains that the issue is not a lack of effort to align incentives with the greater good, but the gap between the growth of technology and policymakers' ability to keep up. Corporations have become powerful quickly, and some use their influence to bend regulators to what they want, rather than the other way around. To counter this, Tegmark suggests creating a buffer that lets policymakers catch up with tech advancements and gives individuals in corporations time to work on safety requirements so that future AI systems can be safe.


16) Max discusses the need for policymakers and regulatory bodies to adopt guardrails that prevent irreparable damage to humanity while still allowing capitalist competition in the race to develop AI. Capitalism has been an effective way of optimizing for efficiency in many other sectors when guardrails are present, and it is crucial to establish them quickly. However, some market forces are still ignorant of AI's powerful abilities, and it is in everyone's selfish interest to slow down and avoid going over the cliff.

17) Max explains that blindly optimizing one objective function can lead to catastrophic consequences, using capitalism as an example. He adds that AI development should be halted for six months to explore different ideas and consider the potential negative effects. Tegmark believes that pausing could have a significant positive impact on human history, and the main pushback to this idea might come from China.

18) Max discusses how the development of AGI (Artificial General Intelligence) would cause a loss for everyone, regardless of who develops it, as it would eventually surpass human intelligence and render most humans useless. Tegmark argues that this would lead to the devaluation of human life and a lower quality of life for those who are displaced by AGI. He acknowledges the benefits of automating dangerous or tedious jobs but warns against overlooking the potential loss of interesting jobs such as journalism and coding. Tegmark believes AI should be built by humanity for humanity, and not by humanity for Moloch, the pursuit of profit and power.

19) Max discusses how AI development can be used to automate jobs that people don't want and to enhance the value derived from conscious experiences. He reflects on how AI can create wealth and dramatically improve the GDP of countries without taking anything away from anyone. Tegmark also emphasizes the importance of focusing on the prize: creating safe AGI and harnessing the full potential of AI to help humanity flourish. He believes that this is absolutely possible and must be our direction for the future, as AI development can bring out the best in us.

20) Max discusses the idea of AI safety and the potential for AI to help humanity in the future. He argues that AI is a tool that can be used for great and bad things; however, a truth-seeking AI that brings us together could heal some of the rifts between countries and within countries by creating trust systems. Tegmark believes that AI can help us prove that things work, and a powerful truth-seeking system that is trustworthy because it keeps being right about stuff will help heal the very dysfunctional conversation humanity has about dealing with its biggest challenges in the world today.


21) Max discusses the possibility of using proof-checking software to control the behavior of advanced AI systems. He explains that this approach would involve running code only if it can be proven trustworthy, which would make it possible to trust an AI system that is more intelligent than humans. While Eliezer Yudkowsky objects to the idea, claiming that a sufficiently intelligent AI could lie to a dumber proof checker, Tegmark argues that this is unlikely. Instead, he suggests that outsourcing tedious proof-checking tasks to AI systems may be a way for less powerful agents to control more powerful ones.
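
The "verify before run" pattern can be sketched as a toy (this is purely illustrative, not a real proof checker; the property checked here is trivial): a powerful, untrusted system must supply evidence that a simple trusted checker can validate cheaply, and its code executes only if the check passes.

```python
# Toy illustration of proof-checked execution. The key asymmetry:
# producing the evidence may be hard, but checking it is easy,
# so a weak checker can gate a stronger system.

def trusted_checker(claim: int, evidence: list) -> bool:
    """Trivially checkable property: the evidence really sums to the claim."""
    return sum(evidence) == claim

def run_if_verified(claim, evidence, action):
    """Execute `action` only if the cheap check on the supplied proof passes."""
    if not trusted_checker(claim, evidence):
        raise PermissionError("proof check failed; refusing to run")
    return action()

result = run_if_verified(6, [1, 2, 3], lambda: "executed")
```

A lie is caught not by out-thinking the prover but by mechanically re-checking each step, which is why Tegmark considers fooling even a dumb checker unlikely.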

22) Max discusses his vision for achieving success in AI development while still maintaining safety measures. Tegmark suggests that we use neural networks to discover knowledge and then use automated systems to extract that knowledge and see what insights are gained. This knowledge can then be put into a new kind of architecture or programming language that is efficient and can be formalized for verification. Although there might be setbacks, Tegmark believes it is not a hopeless endeavor and encourages people not to convince themselves that it's impossible and give up.

23) Max Tegmark discusses the hopeful vision of humanity spreading out into the galaxy and becoming a multi-planetary species, as well as the importance of the struggle to achieve that. He also addresses the question of whether systems like GPT-4 should be open-sourced, stating that while the answer may have been yes in the past, the current power level of such systems makes it unsafe to do so. Tegmark argues that there are many things in society that are not open-sourced for good reason, and software like GPT-4 falls into that category.

24) Max discusses the potential risks of large language models and AI safety. While many people believe that the two biggest risks of large language models are spreading harmful information and the creation of offensive cyber weapons, Tegmark believes that the elephant in the room is the impact on the economy and the possibility of AI becoming a bootloader for more powerful AI with goals. He believes that the only way to lower this risk would be to not let the AI read any code, train on it or manipulate humans by limiting access to information.

25) Max discusses the challenges of making AI understand and adopt our goals, as well as retaining them as they get smarter. He compares the problem to parenting a child, with the AI being in a space where it is smart enough to understand our goals but still malleable enough to learn good goals. He mentions that these challenges are not unsolvable, but the fundamental issue is the lack of time to solve them. Tegmark emphasizes the importance of the AI alignment problem, which he describes as the most important problem for humanity to solve.


26) Max discusses the need for a wake-up call in the field of AI development, as engineers have built black boxes that lead to emergent properties with unpredictable and potentially dangerous outcomes. He suggests slowing down on risky developments to ensure safety, and advocates for more education and awareness about AI safety in universities. Tegmark also addresses the question of whether GPT-4 is conscious, defining consciousness as subjective experience, but admits he doesn't know the answer since the nature of subjective experience is still unknown.

27) Max shares his thoughts on consciousness and the possible implications of creating conscious AI. Tegmark discusses Tononi's mathematical conjecture on the essence of conscious information processing, which postulates that consciousness has to do with loops in the information processing. If Tononi is right, then an AI like GPT-4, which only has a one-way flow of information and no loops, may not be conscious, just a very intelligent zombie.
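
The one-way-flow versus loop distinction can be made concrete with a minimal sketch (purely illustrative; real transformers and real recurrent networks are far more complex than these toy functions):

```python
# Contrast between strictly feed-forward computation, where information
# flows input -> layer -> output and never returns, and a recurrent
# loop, where the state feeds back into the computation each step.

def feed_forward(x: float) -> float:
    h = 2 * x       # layer 1
    return h + 1    # layer 2; nothing feeds back upstream

def recurrent(x: float, steps: int) -> float:
    state = 0.0
    for _ in range(steps):
        state = 0.5 * state + x   # the output re-enters as input: a loop
    return state
```

Under Tononi's conjecture as Tegmark describes it, only the second kind of architecture is even a candidate for consciousness, which is why a purely feed-forward GPT-4 might be a "very intelligent zombie."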

28) Max discusses the dangers of catastrophic outcomes from nuclear warfare. He explains how competing parties driven by Moloch, a metaphorical term for irrational collective behavior, are bound to escalate until both parties suffer. Max cites an August paper that combined climate models and food agricultural models to study nuclear winter and the number of people who would die in different countries. According to Max, people are underestimating the risk of nuclear warfare because they believe that humans would not want it to happen, even though disastrous events occur without intention.

29) Max discusses the concept of Moloch and how it makes people fight against each other. He suggests that compassion and understanding can be used to fight against Moloch and promote love and truth-seeking technologies. Tegmark also discusses the idea of sitting down with an AGI system to ask questions and talks about his curiosity, stating that he is not afraid to be shown how little humans truly understand in terms of physics.

30) Max discusses his theory that the most efficient way of implementing a given level of intelligence might require loops which could make an AI conscious. He mentions that even an operating system knows things about itself, and self-reflection is required for conscious thought. Tegmark also suggests that our brains are part-conscious and part-zombie, capable of a mix of conscious and unconscious processing. He believes that the correlation between intelligence and consciousness offers a happy thought and prevents the "ultimate zombie apocalypse".


31) Max argues that consciousness is a fundamental aspect of human experience and that it should not be dismissed as an illusion or equated with intelligence. He challenges those who reduce consciousness only to an arrangement of particles by asking them to explain why torture is wrong or why anesthesia is necessary for surgery. These experiences of suffering and pain are subjective and make life worth living, so it is crucial that AI be instilled with a similar appreciation for consciousness and human values.


