Lex Fridman - Sam Altman

Darshan Mudbasal
|
March 26, 2023

1) Sam Altman, the CEO of OpenAI, discusses the potential impact of GPT systems on the evolution of artificial intelligence. He highlights the importance of continual exponential progress in the industry and, when asked to point to a pivotal moment, cites ChatGPT as a significant breakthrough for its usability and its use of reinforcement learning from human feedback. Altman notes that adding human guidance to the models makes the results more useful with less data, creating a feeling of alignment that is essential for user interaction. He details the process of pre-training the model's data set, indicating that a massive amount of effort has gone into pulling all the necessary components together.

2) Sam discusses the maturity of the steps involved in creating GPT-4 and the increasingly scientific ability to predict the characteristics of a fully trained system. He acknowledges that science is an ongoing process of discovery and of coming up with better explanations. Altman also talks about OpenAI's evaluation process and the push to understand the model's behavior more and more. Despite compressing much of the web's information into a relatively small number of parameters, Altman believes that GPT-4 can offer wisdom beyond mere knowledge. He also notes that too much computing power currently goes into using the model as a database rather than as a reasoning engine, even though the system is able to do reasoning to some extent.

3) Sam discusses the capabilities and limitations of GPT-4 and ChatGPT. GPT-4 has been shown to possess wisdom and reasoning capabilities, whereas ChatGPT gives the feeling of struggling with certain ideas. Altman also mentions the trade-off of building in public: the technology being put out will be imperfect, but collective intelligence and feedback can help improve it over time. Ultimately, users will need more personalized and granular control over the technology to address any biases that may arise.

4) Sam discusses the capabilities of GPT-4 and its ability to bring nuance back to the world, as demonstrated by its response to a question about Jordan Peterson. Altman also acknowledges the importance of AI safety considerations in the development of GPT-4, and states that the model is the most capable and aligned one that OpenAI has put out. However, he admits that they have not yet discovered a way to fully align a super powerful system and that they are still working on improving their alignment capabilities.

5) Sam discusses the close relationship between alignment and capability in AI systems, and how solving alignment issues can lead to more capable models. He describes reinforcement learning from human feedback (RLHF), in which a human judges which of two responses is the better way to say something, as a way to align models with human preferences. Altman also outlines GPT-4's "system message," which lets users steer the model in a certain direction through prompting, and speaks to the importance of crafting great prompts. Finally, Altman touches on how GPT-4 can change the nature of programming by becoming a collaborative assistant to programmers.
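The RLHF idea described above, where a human picks the better of two responses, is commonly formalized by training a reward model on pairwise preferences with a Bradley-Terry loss. A minimal sketch of that objective, with illustrative function names that are not OpenAI's actual implementation:

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: probability that the human-preferred response
    really is better, given scalar reward-model scores for each response."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

def reward_model_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of the human's preference label; minimizing
    this trains the reward model to score preferred responses higher."""
    return -math.log(preference_probability(reward_chosen, reward_rejected))

# A wider score margin means stronger agreement with the human label.
print(round(preference_probability(2.0, 0.0), 3))  # -> 0.881
```

The learned reward model is then used as the optimization target for the policy, which is how a comparatively small amount of human judgment can steer a large pre-trained model.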

6) Sam explains his dream scenario for the development of AI models wherein every person on Earth comes together to have a thoughtful conversation about where to draw the boundary of the system, much like the U.S. constitutional convention.

7) Sam discusses the moderation tooling for GPT and the need to improve its ability to decline questions it shouldn't answer without "scolding" the user. He explains that GPT-4 reflects many technical leaps, and while size matters in neural networks, it is the combination of many small wins, plus the detail and care put into training, architecture, data collection, and optimization, that gets OpenAI the big leaps.

8) Sam discusses the complexity of GPT, calling it the most complex software object humanity has ever produced. He says that in a few decades it will be trivial to replicate, but the amount of complexity already put into producing this set of numbers is impressive. Altman also talks about the importance of truth-seeking and performance at OpenAI and addresses criticisms of large language models as a path to general intelligence. He speculates about what components AGI needs and says that a superintelligent system should be able to significantly add to the sum total of scientific knowledge.

9) Sam discusses his vision for AI as an extension of human will and an amplifier of human abilities. He believes that by using AI to help people become more productive and fulfilled, we can improve their quality of life, cure diseases, and increase material wealth. While some people are worried that AI will take their jobs, Altman argues that GPT-like models are far from developing the creative genius that great programmers possess. He also points out that AI should be aligned with humans and not try to harm or limit them, which is a daunting prospect for some researchers who worry that AI could potentially destroy humanity.

10) Sam discusses the potential danger of AI becoming superintelligent and the importance of acknowledging this possibility in order to solve the problem. He refers to Eliezer Yudkowsky's blog post as a well-reasoned and thoughtful explanation of why AI alignment is a hard problem. Altman also emphasizes the importance of iterative deployment and learning, limiting the number of one-shot, high-stakes attempts, and doing technical alignment work to prepare for the future trajectory of the technology. Exponential improvement and a fast takeoff are real concerns, but Altman notes that it is difficult to predict how quickly the technology will improve.

Sam Altman in podcast with Lex Fridman

11) Sam and Lex discuss the safest quadrant for AGI takeoff if we imagine a 2x2 matrix of short or long timelines for the takeoff period and slow or fast takeoffs. Both agree that longer timelines with a slow takeoff are the most likely good world and that we should optimize our decisions towards that goal. Sam expresses his fear of fast takeoffs and the difficulties of having a slow takeoff in longer timelines. They also discuss the possibility of GPT-4 being an AGI and the importance of defining AGI and consciousness. Sam believes that GPT-4 is definitely not an AGI but knows how to fake consciousness. He thinks that AI can be conscious, and to be considered conscious, it should display an understanding of self, have some memory, and have the capability of suffering.

12) Sam discusses the dangers of large language models (LLMs) being deployed at scale and the shift in geopolitics they can cause. He talks about the uncertainty of knowing whether LLMs are deceiving us, especially on platforms like Twitter, and how that poses a real danger. Altman emphasizes the importance of prioritizing safety and resisting pressure from other companies to take shortcuts, stating that OpenAI's focus is on its mission to build AGI and to contribute to a world with multiple AGIs. He also talks about the structure of OpenAI and its move from a non-profit to a capped for-profit, highlighting that the subsidiary allows investors and employees to earn a capped return while the non-profit retains voting control and receives all remaining profits.

13) Altman explains his concerns about companies racing toward AGI, noting that AGI has the potential to generate far more than a 100x return. He talks about the fact that we can't control what other people are going to do, and how faster-than-expected progress has caused some people inside those companies to already be grappling with what is at stake. Altman also explores how AGI should be deployed so that the world can adapt and reflect, since decisions about running this technology need to become more democratic over time. He emphasizes that transparency and information about AGI's safety concerns are crucial, and that OpenAI's activities help create new norms for people working together.

14) Sam discusses the level of openness of their technology and the fearmongering journalism they receive as a result. He mentions that while OpenAI may not have gone as open as some people wanted, they have still distributed it broadly and are much more concerned about the risks posed by the technology rather than the PR downside. Altman also talks about his admiration for Elon Musk and his contribution to the world despite his jerk-like behavior on Twitter. However, Altman wishes that Musk would pay more attention to the hard work done by OpenAI to get their technology right.

15) Altman discusses bias in GPT and how some bias will always be present, with no single version ever agreed upon as unbiased. He mentions critics who have shown intellectual honesty in acknowledging the improvements from GPT-3.5 to GPT-4, but recognizes that even the default version cannot be entirely neutral. Altman emphasizes that more steerability and control in the hands of the user is the real path forward, with more nuanced answers that consider different perspectives. He admits that the company is still in an SF craziness bubble but is working to escape it. Finally, Altman acknowledges the challenge of selecting and vetting representative samples of human feedback raters for the training machinery, and the need to empathize with the worldviews of different groups of people who would answer things differently.

16) Sam discusses the potential for bias in AI systems and how he believes GPT systems can be made less biased than humans. However, he acknowledges the potential for external pressures such as political pressure and the importance of having society's input in making decisions. Altman also shares his concerns and nerves about the future of AI and the impact it will have on society, stating that he wants to travel the world and talk to users to understand their needs better, as he feels the company needs to be more user-centric. Altman admits that he may not be the best spokesperson for the AI movement and that he may be disconnected from the reality of life for most people.

17) Sam discusses the nervousness and fear that come with using AI models like GPT-4 and Copilot. He explains that while they can make a programmer's life better and increase productivity, the steep learning curve and the fear that the models may become smarter than their users can be alarming. Altman also talks about the impact of AI on jobs, noting that customer service is a category that could see a significant reduction. Regarding Universal Basic Income (UBI), Altman believes it is a component worth pursuing as society moves to embrace new jobs that provide creative expression and fulfillment.

18) Sam discusses OpenAI's efforts to eliminate poverty and its interest in universal basic income. Altman believes that the cost of intelligence and energy will continue to fall dramatically, leading to societal wealth beyond our imagination. He believes that economic transformation will drive political transformation, and that systems resembling democratic socialism may become more common. Altman believes in the importance of individualism, human will, and the ability to self-determine, arguing that a distributed process will always beat centralized planning. Although he admits America has deep flaws, he considers it the greatest place in the world, while conceding that centralized planning might work better with a perfect superintelligent AGI.

19) Sam and Lex discuss the concept of truth and how it bears on constructing a model like GPT-4. They discuss how humans as a collective intelligence determine what is true, and how certain historical facts and scientific truths can be hard to pin down. Additionally, they discuss how the increasing power of AI models like GPT-4 could lead to new challenges around censorship and the potential danger of truths that are harmful or carry implicit bias. Ultimately, Altman believes that as the creators of GPT-4, OpenAI has a responsibility to think about these ethical considerations when designing and implementing the model.

20) Altman discusses the company's responsibility for the potential harm caused by its AI tools. While there will be tremendous benefits, tools can do wonderful good as well as real bad, and the company must minimize the bad and maximize the good. There is also a discussion of how to prevent GPT-4 from being hacked or jailbroken, and Altman believes that giving users a lot of control over the models within some broad bounds would reduce the incentive to jailbreak.


21) Sam Altman discusses the hiring strategy at OpenAI, the multi-billion-dollar investment from Microsoft, working with Satya Nadella, and his leadership style. He mentions that hiring great teams is tough and takes a lot of time, and that he approves every single hire at OpenAI. Altman also shares that working with Microsoft has been an amazing partnership, with Microsoft showing a unique understanding of OpenAI's mission to develop AGI. Regarding Satya Nadella, Altman says he is both a great leader and a great manager and has contributed immensely to transforming Microsoft.

22) Sam explains his understanding of what happened with Silicon Valley Bank (SVB) and how it reveals the dangers of incentive misalignment. Altman believes the fault lies with the management team and highlights that regulatory intervention took much longer than it should have. As for the impact on startups, the SVB fiasco revealed how fragile the economic system can be, and it is a tiny preview of the shifts Artificial General Intelligence (AGI) will bring. Altman is nervous about the speed of change and how quickly our institutions can adapt. However, he believes the upside of the vision of AGI is immense and could unite people to create a more positive-sum world.

23) Sam discusses the impact of advanced AI on human society, stating that he believes that even if AI told us aliens were already here, he would continue to live life as normal. He also reflects on the division exposed by technological advancements and how that can be confusing, and brings up the idea that people should be cautious when listening to advice, as everyone's life trajectories are different. He suggests that people should choose a career path that makes them happy and fulfilled rather than just following someone else's advice.

24) Altman reflects on the culmination of human effort that has led to the creation of AI technology, and how he feels that this is the output of all of us, rather than just a small group of people. He discusses the challenges that come with deploying and discovering AI, and iteratively ensuring its alignment and safety. Altman emphasizes the importance of continuing to work together as humans to come up with solutions that will benefit society as a whole, and quotes Alan Turing's prediction that “machines will eventually outstrip human powers”.


