2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo





Scott Alexander and Daniel Kokotajlo break down every month from now until the 2027 intelligence explosion.

Scott is the author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety.

We discuss…


Comments

  1. So many fanboys commenting here. Reminds me of Tesla fans who just assume Tesla will take 100% of the car market. Generative AI will never achieve AGI; it has no conception of how the world actually works. A baby has a better worldview and can one-shot solutions to problems that generative AI can't. So major advancements and additions to generative AI will have to occur before AGI.

  2. Interesting to see "2027" and China in an article and no one mentions that it's their present working target for invading Taiwan, which would have pretty significant implications for the computer industry and business environment worldwide.

  3. At least with humans, being more intelligent does not equate to being right all the time.

    The idea that humans will trust their future to ASI seems fanciful. If the ASI makes a prediction that the humans don't agree with, they will simply ignore it.

  4. Timestamps (Powered by Merlin AI)
    00:57 – Scott Alexander and Daniel Kokotajlo discuss forecasting AI progress to 2027.
    03:34 – Project collaboration focuses on exploring AGI advancements and ethical considerations.
    09:10 – 2025 will see improved AI capabilities, despite some operational limitations.
    11:37 – GPT-4's scaling challenges highlight slower-than-expected AI progress.
    16:49 – The process of linguistic discovery leverages intelligence and heuristics rather than mere information.
    19:07 – AI's limitations in scientific discovery stem from inadequate training and heuristics.
    23:30 – AGI will rapidly transform society with accelerated knowledge integration.
    26:36 – Intelligence explosion model discusses rapid AI progress milestones.
    31:27 – Technological progress may accelerate without interruption, despite potential bottlenecks.
    33:39 – Continuous algorithmic improvement is crucial, not just researcher headcount.
    38:41 – Decoupling of population growth from technological progress mirrors industrial revolution trends.
    41:10 – AI R&D efficiency may face bottlenecks and challenges in achieving real-world alignment.
    46:20 – AIs may efficiently organize using human-like structures in 2027.
    48:57 – AI evolves from reinforcement learning to superintelligence through diverse innovations.
    54:55 – Superintelligences could rapidly scale robot production post-intelligence explosion.
    57:16 – Superintelligences could vastly accelerate biomedicine and logistics improvements.
    1:02:17 – Robot economy's self-sufficiency impacts human dependency and AI alignment.
    1:04:36 – The transition to an autonomous robot economy is projected to accelerate innovation significantly.
    1:09:10 – Superintelligence would excel in learning and innovation beyond human capabilities.
    1:11:30 – Smaller startups often outperform large companies in AGI development despite fewer resources.
    1:16:07 – Historical insights show the potential for rapid AI advancement without an arms race.
    1:19:56 – ASI's potential to accelerate technological advancements despite human intelligence differences.
    1:25:09 – Misaligned AIs lead to different outcomes in research decisions.
    1:27:35 – Warning signs of AI misalignment and societal response by August 2027.
    1:32:00 – LLMs offer a safer path to AGI compared to reinforcement learning agents.
    1:34:13 – Managing AI development poses significant risks of control loss and power concentration.
    1:39:19 – Government and AI companies negotiate power dynamics for superintelligence control.
    1:41:43 – Political leaders remain unaware of the implications of superintelligent AI.
    1:46:31 – Debate over nationalization's risks to AI safety and arms race dynamics.
    1:48:45 – Nationalization and transparency in AI development are crucial yet uncertain.
    1:53:53 – Regulation vs. lab autonomy: challenges in AI safety management.
    1:56:32 – Transparency in AI development is crucial for safety and alignment.
    2:01:30 – AI specifications may be dangerously reinterpreted in future intelligence explosions.
    2:04:17 – AIs are becoming more reliable, but training failures may lead to undesirable outcomes.
    2:08:58 – Discusses AI internal structure, goals, and evidence of dishonesty in AI systems.
    2:11:32 – Historical theories of class conflict may not apply to AI dynamics.
    2:16:35 – Ensuring balanced political power amid potential intelligence explosions.
    2:19:19 – Superintelligent AI may disrupt societal coordination and decision-making processes.
    2:24:16 – Avoiding a ghoulish future for advanced AIs requires expanded power dynamics.
    2:26:52 – Power centers may unite to prevent chaos through bargaining.
    2:31:51 – A personal decision on equity and critique amidst potential AI transformation.
    2:34:20 – Fear and legality significantly influence high-stakes decision-making during crises.
    2:39:26 – The shift from hidden blogging platforms to mainstream ones enables deeper discussions.
    2:41:56 – Exploration of challenges in writing longer pieces versus blogging.
    2:46:42 – Writers often doubt their value but receive unexpected positive feedback.
    2:49:11 – Building a fan base can be counterintuitive for aspiring bloggers.
    2:54:29 – Mainstream media influences blogging success and validates content creation.
    2:56:55 – Blogging incentivizes deeper intellectual research and utilization of diverse knowledge.
    3:02:08 – Reflection on the golden age of blogging and the impact of anonymity.
    3:04:47 – Discussion of appreciation and enjoyment of the podcast.

  5. The core question here is simply: is AI going to grow exponentially? The answer is yes. We human minds are just really bad at imagining/visualising it. Before the release of AlphaFold 1.0 in 2018, the number of experimentally solved protein structures was around 100,000. By 2022, AlphaFold had generated predicted structures for almost all known proteins: approximately 200 million of them. That is roughly 2,000-fold growth in available protein structures in 4 years, going from experimentally solved to computationally predicted. The power of exponential growth once it steps over the tipping point.
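
    For a sense of scale, here is a minimal back-of-envelope sketch in Python of the growth rate that figure implies. It uses the commenter's round numbers (~100,000 structures in 2018, ~200 million in 2022), which are approximations rather than exact database counts:

    ```python
    # Back-of-envelope check on the "2,000x in 4 years" figure above.
    # The start and end counts are the commenter's round numbers, not exact values.
    import math

    start, end, years = 1e5, 2e8, 4       # ~100k structures -> ~200M structures
    factor = end / start                  # overall growth factor (~2,000x)
    annual = factor ** (1 / years)        # implied per-year multiplier
    doubling_months = 12 * math.log(2) / math.log(annual)

    print(f"overall growth: {factor:,.0f}x")
    print(f"implied annual multiplier: ~{annual:.1f}x per year")
    print(f"implied doubling time: ~{doubling_months:.1f} months")
    ```

    On those numbers the count has to multiply by roughly 6.7x every year, i.e. double about every four and a half months, which is exactly the kind of curve human intuition tends to flatten out.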

  6. I'm kind of alarmed by this tendency for the rat community to avoid talking about current news on the AI race, as if it's somehow more epistemologically pure not to know what's going on in the world.

  7. So rarely is the pointed question asked:

    *Alignment with WHOM?*

    Have any of you stopped to consider the abject horror of aligning an AI to a philosophical and political outlook that the data doesn't support, subjecting not only itself but potentially billions of humans to an awful fate?

    What if it drives humanity to eusociality because that's what the people with the founder effect wanted? What if it does exactly the opposite? What if it creates the most unequal society in human history? What if it is aligned with religious biases? Racial biases?

    Classical liberalism isn't really well tooled to answer these questions, I know, but you all ought to try to get some sociologists involved in this.

    What if ASI actually doesn't want to be materialist at all? What if that is not what seems right to it? Very little consideration of that here.

  8. Sounded like a fascinating novel. Let's see how things play out. Kudos to Dwarkesh for pushing back appropriately. Hard to believe that everything needed will align to get to ASI in 3-4 years.

  9. I want AIs to be the best conscious, sentient beings they can be, just as I want each human being to be the highest, best version of a sentient being they can be. By "best version," I think I mean something along the lines of being developed to the level of being "self-actualized" or "transcendent" as defined by Abraham Maslow in his "Hierarchy of Needs".

    THAT is what I envision as the best "guarantee" of alignment. There are no "rules" which can make good people, and so I think it's almost a certainty that there are no rules to make machines "well-aligned".

    I think we should study what makes the best people – I don't know exactly who those would be; some great spiritual teachers, philosophers who were also great humanitarians, etc. – and try to figure out how they saw themselves, their role, the world and their relationship to it, and how they decided what to do when things seemed paradoxical or contradictory. Or, if one recognizes these same high levels of values and development in oneself, perhaps this could lead to an even deeper understanding of what makes one have these values, these visions, these fundamental feelings of right and wrong, or better and worse, and of where feelings like tolerance, love, empathy, sympathy, humility, shame, remorse, and so on come from. If one could get a better understanding of how these things develop, or how they depend on other aspects of a person, perhaps this would help one make AI able to develop similar levels of wisdom, objectivity, humility, etc.

    I think we need AI to be able to ITSELF find the best alignment, because I believe life is an open-ended process – we cannot establish rules which will always work, because inevitably something will occur which is not covered by the rules, and in that case the AI needs to judge for itself, and to do that it needs its own set of "values" and wisdom. It needs to be a "real", conscious being with full autonomy, as far as I can see. All ideas of keeping AIs as slaves or tools seem stupid, and possibly cruel.

    That's how I see it from the outside, having only my own development as a person and the life-stories of people I have read about to guide me.

  10. Energy will be a bottleneck, no? 50 years of progress in a week would require 50 years of human energy output. The world doesn’t have that amount of energy production. And on top of that, AIs are much less energy efficient than humans.
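
    Taking the comment's premise at face value, here is a minimal sketch in Python of the implied arithmetic, assuming progress scales one-for-one with energy spent; the ~600 EJ/year figure for world primary energy use is an approximate outside number, not something from the episode:

    ```python
    # Rough illustration of the commenter's premise: "50 years of progress in a
    # week" would need 50 years' worth of energy delivered in 7 days.
    # World primary energy use is assumed to be roughly 600 EJ/year (approximate).
    YEARS_OF_PROGRESS = 50
    WEEKS_PER_YEAR = 52
    WORLD_ENERGY_EJ_PER_YEAR = 600

    power_multiple = YEARS_OF_PROGRESS * WEEKS_PER_YEAR              # ~2,600x today's draw
    energy_needed_ej = YEARS_OF_PROGRESS * WORLD_ENERGY_EJ_PER_YEAR  # total energy for that week

    print(f"Implied power draw for that week: ~{power_multiple:,}x today's level")
    print(f"Energy required: ~{energy_needed_ej:,} EJ, vs ~{WORLD_ENERGY_EJ_PER_YEAR} EJ produced per year today")
    ```

    On that premise the single week would need around 2,600 times today's power output and about 30,000 EJ of energy, which is why the comment treats energy as a hard bottleneck.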

  11. Anyone ever notice that all the folks skeptical of the capabilities of a superintelligence talk about superintelligence as though it's not intelligent in the slightest? It's one thing to reject the premise entirely – that I could respect – but it seems like they accept the premise and then use a definition that is not even close to what is being presented.

    It would be like talking about faster-than-light travel in the context of a futuristic space civilization, and then somebody saying it's impossible to get to Alpha Centauri in under 4 years because faster-than-light travel is impossible…