
Machine Learning Street Talk (MLST)
NOV 23
He Co-Invented the Transformer. Now: Continuous Thought Machines - Llion Jones and Luke Darlow [Sakana AI]

The Transformer architecture (which powers ChatGPT and nearly all modern AI) might be trapping the industry in a local minimum, preventing us from finding truly intelligent reasoning, according to one of its co-inventors. Llion Jones and Luke Darlow, key figures at the research lab Sakana AI, join the show to make this provocative argument and to introduce new research that might lead the way forward.

**SPONSOR MESSAGES START**
— Build your ideas with AI Studio from Google - http://ai.studio/build
— Tufa AI Labs is hiring ML Research Engineers https://tufalabs.ai/
— cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst
**END**

The "Spiral" Problem: Llion uses a striking visual analogy to explain what current AI is missing. If you ask a standard neural network to learn a spiral shape, it solves the task by drawing tiny straight line segments that merely happen to look like a spiral. It "fakes" the shape without grasping the concept of spiraling.

Introducing the Continuous Thought Machine (CTM): Luke Darlow dives deep into their solution, a biology-inspired model that fundamentally changes how AI processes information.

The Maze Analogy: Luke explains that standard AI tries to solve a maze by staring at the whole image and guessing the entire path instantly. Their new machine "walks" through the maze step by step.

Thinking Time: This allows the AI to "ponder". If a problem is hard, the model can naturally spend more time thinking about it before answering, which lets it correct its own mistakes and backtrack - something current language models struggle to do genuinely.
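The "tiny straight lines" claim is a general property of ReLU networks, and easy to demonstrate numerically. A minimal sketch (not Sakana's code; just a random one-hidden-layer network, since the property holds regardless of training):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer ReLU network with random weights.
# Whatever such a network represents (a spiral included), its output
# is piecewise linear: straight segments joined at a few "kinks".
W1 = rng.normal(size=16)   # input weights (one input dimension)
b1 = rng.normal(size=16)   # hidden biases
w2 = rng.normal(size=16)   # output weights

def f(x):
    return np.maximum(W1 * x + b1, 0.0) @ w2

# Sample the function on a fine grid and look for curvature.
xs = np.linspace(-3.0, 3.0, 2001)
ys = np.array([f(x) for x in xs])
curvature = np.abs(np.diff(ys, 2))

# Nonzero curvature shows up only near kink points (at most one hinge
# per hidden unit); everywhere else the function is exactly linear.
kinks = int((curvature > 1e-6).sum())
print(kinks, "kinks out of", len(curvature), "grid points")
```

With 16 hidden units there can be at most 16 hinges, so curvature is detected at only a handful of the 2001 grid points; between them the "spiral" is pure straight line.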
https://sakana.ai/
https://x.com/YesThisIsLion
https://x.com/LearningLukeD

TRANSCRIPT: https://app.rescript.info/public/share/crjzQ-Jo2FQsJc97xsBdfzfOIeMONpg0TFBuCgV2Fu8

TOC:
00:00:00 - Stepping Back from Transformers
00:00:43 - Introduction to Continuous Thought Machines (CTM)
00:01:09 - The Changing Atmosphere of AI Research
00:04:13 - Sakana's Philosophy: Research Freedom
00:07:45 - The Local Minimum of Large Language Models
00:18:30 - Representation Problems: The Spiral Example
00:29:12 - Technical Deep Dive: CTM Architecture
00:36:00 - Adaptive Computation & Maze Solving
00:47:15 - Model Calibration & Uncertainty
01:00:43 - Sudoku Bench: Measuring True Reasoning

REFS:
Why Greatness Cannot Be Planned [Kenneth Stanley] https://www.amazon.co.uk/Why-Greatness-Cannot-Planned-Objective/dp/3319155237 https://www.youtube.com/watch?v=lhYGXYeMq_E
The Hardware Lottery [Sara Hooker] https://arxiv.org/abs/2009.06489 https://www.youtube.com/watch?v=sQFxbQ7ade0
Continuous Thought Machines [Luke Darlow et al / Sakana] https://arxiv.org/abs/2505.05522 https://sakana.ai/ctm/
LSTM: The Comeback Story? [Prof. Sepp Hochreiter] https://www.youtube.com/watch?v=8u2pW2zZLCs
Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis [Kumar/Stanley] https://arxiv.org/pdf/2505.11581
A Spline Theory of Deep Networks [Randall Balestriero] https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf https://www.youtube.com/watch?v=86ib0sfdFtw https://www.youtube.com/watch?v=l3O2J3LMxqI
On the Biology of a Large Language Model [Anthropic, Jack Lindsey et al] https://transformer-circuits.pub/2025/attribution-graphs/biology.html
The ARC Prize 2024 Winning Algorithm [Daniel Franzen and Jan Disselhoff] "The ARChitects" https://www.youtube.com/watch?v=mTX_sAq--zY
Neural Turing Machines [Graves et al] https://arxiv.org/pdf/1410.5401
Adaptive Computation Time for Recurrent Neural Networks [Graves] https://arxiv.org/abs/1603.08983
Sudoku Bench [Sakana] https://pub.sakana.ai/sudoku/

Duration: 1h 13m
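The "thinking time" idea (spending more compute on harder inputs) can be illustrated with a toy halting rule in the spirit of Graves's Adaptive Computation Time, cited in the refs above. This is not the CTM mechanism, just a minimal sketch: iterate until updates become negligible, and count the steps spent "pondering".

```python
def ponder(grad, x0=1.0, lr=0.1, tol=1e-6, max_steps=20_000):
    """Take gradient steps until the update is tiny, i.e. until the
    solver is 'confident'; return the answer and the steps spent."""
    x = x0
    for step in range(1, max_steps + 1):
        update = lr * grad(x)
        x -= update
        if abs(update) < tol:
            return x, step
    return x, max_steps

# An easy (well-conditioned) and a hard (ill-conditioned) problem:
# the same halting rule automatically spends far more steps on the hard one.
_, easy_steps = ponder(lambda x: 1.0 * x)
_, hard_steps = ponder(lambda x: 0.01 * x)
print(easy_steps, hard_steps)
```

The point is that the number of steps is an output of the halting rule, not a fixed hyperparameter: hard inputs simply consume more iterations before the rule fires.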
NOV 3
Why Humans Are Still Powering AI [Sponsored]

Ever wonder where AI models actually get their "intelligence"? We reveal the dirty secret of Silicon Valley: behind every impressive AI system are thousands of real humans providing crucial data, feedback, and expertise.

Guest: Phelim Bradley, CEO and Co-founder of Prolific

Phelim Bradley runs Prolific, a platform that connects AI companies with verified human experts who help train and evaluate their models. Think of it as a sophisticated marketplace matching the right human expertise to the right AI task - whether that's doctors evaluating medical chatbots or coders reviewing AI-generated software.

Prolific: https://prolific.com/?utm_source=mlst
https://uk.linkedin.com/in/phelim-bradley-84300826

The discussion dives into:
**The human data pipeline**: How AI companies rely on human intelligence to train, refine, and validate their models - something rarely discussed openly
**Quality over quantity**: Why paying humans well and treating them as partners (not commodities) produces better AI training data
**The matching challenge**: How Prolific solves the complex problem of finding the right expert for each specific task, similar to matching Uber drivers to riders but with deep expertise requirements
**Future of work**: What it means when human expertise becomes an on-demand service, and why this might actually create more opportunities rather than fewer
**Geopolitical implications**: Why the centralization of AI development in US tech companies should concern Europe and the UK

Duration: 24 min
OCT 25
The Universal Hierarchy of Life - Prof. Chris Kempes [SFI]

"What is life?" asks Chris Kempes, a professor at the Santa Fe Institute. Chris explains that scientists are moving beyond a purely Earth-based, biological view and are searching for a universal theory of life that could apply to anything, anywhere in the universe. He proposes that things we don't normally consider "alive", like human culture, language, or even artificial intelligence, could be seen as life forms existing on different "substrates".

To understand this, Chris presents a fascinating three-level framework:
- Materials: The physical stuff life is made of. He argues this could be incredibly diverse across the universe, and we shouldn't expect alien life to share our biochemistry.
- Constraints: The universal laws of physics (like gravity or diffusion) that all life must obey, regardless of what it's made of. This is where different life forms start to look more similar.
- Principles: At the highest level sit abstract principles like evolution and learning. Chris suggests these computational or "optimization" rules are what truly define a living system.

A key idea is "convergence", illustrated with the example of the eye. It's such a complex organ that you'd think it evolved only once. Yet eyes evolved many separate times across different species. This is because the physics of light provides a clear "target", and evolution found similar solutions to the problem of seeing, even with different starting materials.

**SPONSOR MESSAGES**
— Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
— Check out NotebookLM from Google here - https://notebooklm.google.com/ - it's really good for doing research directly from authoritative source material, minimising hallucinations.
— cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst
—

Prof. Chris Kempes: https://www.santafe.edu/people/profile/chris-kempes

TRANSCRIPT: https://app.rescript.info/public/share/Y2cI1i0nX_-iuZitvlguHvaVLQTwPX1Y_E1EHxV0i9I

TOC:
00:00:00 - Introduction to Chris Kempes and the Santa Fe Institute
00:02:28 - The Three Cultures of Science
00:05:08 - What Makes a Good Scientific Theory?
00:06:50 - The Universal Theory of Life
00:09:40 - The Role of Material in Life
00:12:50 - A Hierarchy for Understanding Life
00:13:55 - How Life Diversifies and Converges
00:17:53 - Adaptive Processes and Defining Life
00:19:28 - Functionalism, Memes, and Phylogenies
00:22:58 - Convergence at Multiple Levels
00:25:45 - The Possibility of Simulating Life
00:28:16 - Intelligence, Parasitism, and Spectrums of Life
00:32:39 - Phase Changes in Evolution
00:36:16 - The Separation of Matter and Logic
00:37:21 - Assembly Theory and Quantifying Complexity

REFS:
Developing a predictive science of the biosphere requires the integration of scientific cultures [Kempes et al] https://www.pnas.org/doi/10.1073/pnas.2209196121
Seeing with an extra sense ("Dangerous prediction") [Rob Phillips] https://www.sciencedirect.com/science/article/pii/S0960982224009035
The Multiple Paths to Multiple Life [Christopher P. Kempes & David C. Krakauer] https://link.springer.com/article/10.1007/s00239-021-10016-2
The Information Theory of Individuality [David Krakauer et al] https://arxiv.org/abs/1412.2447
Minds, Brains and Programs [Searle] https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
The error threshold https://www.sciencedirect.com/science/article/abs/pii/S0168170204003843
Assembly theory and its relationship with computational complexity [Kempes et al] https://arxiv.org/abs/2406.12176

Duration: 41 min
OCT 21
Google Researcher Shows Life "Emerges From Code" - Blaise Agüera y Arcas

Blaise Agüera y Arcas explores some mind-bending ideas about what intelligence and life really are, and why they might be more similar than we think (filmed at the ALIFE conference, 2025 - https://2025.alife.org/). Life and intelligence are both fundamentally computational, he argues. From the very beginning, living things have been running programs. Your DNA? It's literally a computer program, and the ribosomes in your cells are tiny universal computers building you according to those instructions.

**SPONSOR MESSAGES**
— Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
— cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst
—

Blaise argues that there is more to evolution than the random mutations most people think of. The secret to increasing complexity is *merging*, i.e. when different organisms or systems come together and combine their histories and capabilities. Blaise describes his "BFF" experiment, in which random computer code spontaneously evolved into self-replicating programs, showing how purpose and complexity can emerge from pure randomness through computational processes.

https://en.wikipedia.org/wiki/Blaise_Ag%C3%BCera_y_Arcas
https://x.com/blaiseaguera?lang=en

TRANSCRIPT: https://app.rescript.info/public/share/VX7Gktfr3_wIn4Bj7cl9StPBO1MN4R5lcJ11NE99hLg

TOC:
00:00:00 - Introduction - New book "What is Intelligence?"
00:01:45 - Life as computation - Von Neumann's insights
00:12:00 - BFF experiment - How purpose emerges
00:26:00 - Symbiogenesis and evolutionary complexity
00:40:00 - Functionalism and consciousness
00:49:45 - AI as part of collective human intelligence
00:57:00 - Comparing AI and human cognition

REFS:
What Is Intelligence? [Blaise Agüera y Arcas] https://whatisintelligence.antikythera.org/ [read free online, interactive rich media] https://mitpress.mit.edu/9780262049955/what-is-intelligence/ [MIT Press]
Large Language Models and Emergence: A Complex Systems Perspective https://arxiv.org/abs/2506.11135
Our first Noam Chomsky MLST interview https://www.youtube.com/watch?v=axuGfh4UR9Q
Chance and Necessity [Jacques Monod] https://monoskop.org/images/9/99/Monod_Jacques_Chance_and_Necessity.pdf
Wonderful Life: The Burgess Shale and the History of Nature [Stephen Jay Gould] https://www.amazon.co.uk/Wonderful-Life-Burgess-Nature-History/dp/0099273454
The Major Evolutionary Transitions [E. Szathmáry, J. Maynard Smith] https://wiki.santafe.edu/images/0/0e/Szathmary.MaynardSmith_1995_Nature.pdf
Don't Sleep, There Are Snakes: Life and Language in the Amazonian Jungle [Dan Everett] https://www.amazon.com/Dont-Sleep-There-Are-Snakes/dp/0307386120
The Nature of Technology: What It Is and How It Evolves [W. Brian Arthur] https://www.amazon.com/Nature-Technology-What-How-Evolves-ebook/dp/B002RI9W16/
The MANIAC [Benjamín Labatut] https://www.amazon.com/MANIAC-Benjam%C3%ADn-Labatut/dp/1782279814
When We Cease to Understand the World [Benjamín Labatut] https://www.amazon.com/When-We-Cease-Understand-World/dp/1681375664/
The Boys in the Boat [Daniel James Brown] https://www.amazon.com/Boys-Boat-Americans-Berlin-Olympics/dp/0143125478
How Something Can Be Said About Telling More Than We Can Know [Petter Johansson et al] https://www.lucs.lu.se/fileadmin/user_upload/lucs/2011/01/Johansson-et-al.-2006-How-Something-Can-Be-Said-About-Telling-More-Than-We-Can-Know.pdf
If Anyone Builds It, Everyone Dies [Eliezer Yudkowsky, Nate Soares] https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
The science of cycology https://link.springer.com/content/pdf/10.3758/bf03195929.pdf

Duration: 1 hr
OCT 18
The Secret Engine of AI - Prolific [Sponsored] (Sara Saab, Enzo Blindow)

We sat down with Sara Saab (VP of Product at Prolific) and Enzo Blindow (VP of Data and AI at Prolific) to explore the critical role of human evaluation in AI development and the challenges of aligning AI systems with human values. Prolific is a human annotation and orchestration platform for AI used by many of the major AI labs. This is a sponsored show in partnership with Prolific.

**SPONSOR MESSAGES**
— cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst
—

While technologists want to remove humans from the loop for speed and efficiency, these non-deterministic AI systems actually require more human oversight than ever before. Prolific's approach is to put "well-treated, verified, diversely demographic humans behind an API", making human feedback as accessible as any other infrastructure service.

When AI models like Grok 4 achieve top scores on technical benchmarks but feel awkward or problematic to use in practice, it exposes the limitations of our current evaluation methods. The guests argue that optimizing for benchmarks may actually weaken model performance in other crucial areas, like cultural sensitivity or natural conversation.

We also discuss Anthropic's research showing that frontier AI models, when given goals and access to information, independently arrived at solutions involving blackmail, without any prompting toward unethical behavior. Even more concerning, the more sophisticated the model, the more susceptible it was to this "agentic misalignment".

Enzo and Sara present Prolific's "Humane" leaderboard as an alternative to existing benchmarking systems. By stratifying evaluations across diverse demographic groups, they reveal that different populations have vastly different experiences with the same AI models.

Looking ahead, the guests imagine a world where humans take on coaching and teaching roles for AI systems, similar to how we might correct a child or review code. This also raises important questions about working conditions and the evolution of labor in an AI-augmented world. Rather than replacing humans entirely, we may be moving toward more sophisticated forms of human-AI collaboration. As AI technology becomes more powerful and general-purpose, the quality of human evaluation becomes more critical, not less. We need more representative evaluation frameworks that capture the messy reality of human values and cultural diversity.

Visit Prolific: https://www.prolific.com/
Sara Saab (VP Product): https://uk.linkedin.com/in/sarasaab
Enzo Blindow (VP Data & AI): https://uk.linkedin.com/in/enzoblindow

TRANSCRIPT: https://app.rescript.info/public/share/xZ31-0kJJ_xp4zFSC-bunC8-hJNkHpbm7Lg88RFcuLE

TOC:
[00:00:00] Intro & Background
[00:03:16] Human-in-the-Loop Challenges
[00:17:19] Can AIs Understand?
[00:32:02] Benchmarking & Vibes
[00:51:00] Agentic Misalignment Study
[01:03:00] Data Quality vs Quantity
[01:16:00] Future of AI Oversight

REFS:
Agentic Misalignment [Anthropic] https://www.anthropic.com/research/agentic-misalignment
Value Compass https://arxiv.org/pdf/2409.09586
Reasoning Models Don't Always Say What They Think [Anthropic] https://www.anthropic.com/research/reasoning-models-dont-say-think https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf
We Need a Science of Evals [Apollo Research blog] https://www.apolloresearch.ai/blog/we-need-a-science-of-evals
The Leaderboard Illusion [MLST video, 2025] https://www.youtube.com/watch?v=9W_OhS38rIE
The Leaderboard Illusion [Shivalika Singh et al, 2025] https://arxiv.org/abs/2504.20879
(Truncated, full list on YT)

Duration: 1h 20m
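Why stratified evaluation matters can be shown with a toy calculation (illustrative numbers only, not Prolific's method or data): two models can share the same overall mean rating while rater groups disagree sharply about them.

```python
# Toy ratings on a 1-5 scale, split by (hypothetical) rater group.
ratings = {
    "model_a": {"group_1": [5, 5, 5, 5], "group_2": [1, 1, 1, 1]},
    "model_b": {"group_1": [3, 3, 3, 3], "group_2": [3, 3, 3, 3]},
}

def overall_mean(model):
    scores = [s for group in ratings[model].values() for s in group]
    return sum(scores) / len(scores)

def per_group_means(model):
    return {g: sum(s) / len(s) for g, s in ratings[model].items()}

# Identical overall means...
print(overall_mean("model_a"), overall_mean("model_b"))  # both 3.0
# ...but stratification exposes the disagreement hidden by the average.
print(per_group_means("model_a"))
print(per_group_means("model_b"))
```

A single leaderboard number collapses exactly the demographic variation a stratified leaderboard is designed to surface.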
OCT 4
AI Agents Can Code 10,000 Lines of Hacking Tools In Seconds - Dr. Ilia Shumailov (ex-GDM)

Dr. Ilia Shumailov is a former DeepMind AI security researcher, now building security tools for AI agents. Ever wondered what happens when AI agents start talking to each other, or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.

**SPONSOR MESSAGES**
— Check out NotebookLM for your research project, it's really powerful https://notebooklm.google.com/
— Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!
— cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst
—

We're racing toward a world where AI agents will handle our emails, manage our finances, and interact with sensitive data 24/7. But there is a problem. These agents are nothing like human employees. They never sleep, they can touch every endpoint in your system simultaneously, and they can generate sophisticated hacking tools in seconds. Traditional security measures designed for humans simply won't work.

Dr. Ilia Shumailov:
https://x.com/iliaishacked
https://iliaishacked.github.io/
https://sequrity.ai/

TRANSCRIPT: https://app.rescript.info/public/share/dVGsk8dz9_V0J7xMlwguByBq1HXRD6i4uC5z5r7EVGM

TOC:
00:00:00 - Introduction & Trusted Third Parties via ML
00:03:45 - Background & Career Journey
00:06:42 - Safety vs Security Distinction
00:09:45 - Prompt Injection & Model Capability
00:13:00 - Agents as Worst-Case Adversaries
00:15:45 - Personal AI & CaMeL System Defense
00:19:30 - Agents vs Humans: Threat Modeling
00:22:30 - Calculator Analogy & Agent Behavior
00:25:00 - IMO Math Solutions & Agent Thinking
00:28:15 - Diffusion of Responsibility & Insider Threats
00:31:00 - Open Source Security Concerns
00:34:45 - Supply Chain Attacks & Trust Issues
00:39:45 - Architectural Backdoors
00:44:00 - Academic Incentives & Defense Work
00:48:30 - Semantic Censorship & Halting Problem
00:52:00 - Model Collapse: Theory & Criticism
00:59:30 - Career Advice & Ross Anderson Tribute

REFS:
Lessons from Defending Gemini Against Indirect Prompt Injections https://arxiv.org/abs/2505.14534
Defeating Prompt Injections by Design [Debenedetti, E., Shumailov, I., Fan, T., Hayes, J., Carlini, N., Fabian, D., Kern, C., Shi, C., Terzis, A., & Tramèr, F.] https://arxiv.org/pdf/2503.18813
Agentic Misalignment: How LLMs Could Be Insider Threats [Anthropic] https://www.anthropic.com/research/agentic-misalignment
Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces! [Subbarao Kambhampati et al] https://arxiv.org/pdf/2504.09762
Machine Learning Models Have a Supply Chain Problem [Meiklejohn, S., Blauzvern, H., Maruseac, M., Schrock, S., Simon, L., & Shumailov, I., 2025] https://arxiv.org/abs/2505.22778
Supply-Chain Attacks in Machine Learning Frameworks [Gao, Y., Shumailov, I., & Fawaz, K., 2025] https://openreview.net/pdf?id=EH5PZW6aCr
Apache Log4j Vulnerability Guidance https://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance
Architectural Backdoors in Neural Networks [Bober-Irizar, M., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N., 2022] https://arxiv.org/pdf/2206.07840
Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches [David Glukhov, Ilia Shumailov, et al] https://proceedings.mlr.press/v235/glukhov24a.html
AlphaEvolve MLST interview [Matej Balog, Alexander Novikov] https://www.youtube.com/watch?v=vC9nAosXrJw

Duration: 1h 1m
SEP 27
New Top Score on ARC-AGI-2-pub (29.4%) - Jeremy Berman

We need AI systems to synthesise new knowledge, not just compress the data they see. Jeremy Berman is a research scientist at Reflection AI and the recent winner of the ARC-AGI v2 public leaderboard.

**SPONSOR MESSAGES**
— Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!
— cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst
—

Imagine trying to teach an AI to think like a human, i.e. to solve puzzles that are easy for us but stump even the smartest models. Jeremy's evolutionary approach, evolving natural-language descriptions instead of the Python code his last version evolved, landed him at the top with about 30% accuracy on ARC v2.

We discuss why current AIs are like "stochastic parrots" that memorize but struggle to truly reason or innovate, as well as big ideas like building "knowledge trees" for real understanding, the limits of neural networks versus symbolic systems, and whether we can train models to synthesize new ideas without forgetting everything else.

Jeremy Berman: https://x.com/jerber888

TRANSCRIPT: https://app.rescript.info/public/share/qvCioZeZJ4Q_NlR66m-hNUZnh-qWlUJcS15Wc2OGwD0

TOC:
Introduction and Overview [00:00:00]
ARC v1 Solution [00:07:20]
Evolutionary Python Approach [00:08:00]
Trade-offs in Depth vs. Breadth [00:10:33]
ARC v2 Improvements [00:11:45]
Natural Language Shift [00:12:35]
Model Thinking Enhancements [00:13:05]
Neural Networks vs. Symbolism Debate [00:14:24]
Turing Completeness Discussion [00:15:24]
Continual Learning Challenges [00:19:12]
Reasoning and Intelligence [00:29:33]
Knowledge Trees and Synthesis [00:50:15]
Creativity and Invention [00:56:41]
Future Directions and Closing [01:02:30]

REFS:
Jeremy's 2024 article on winning ARC-AGI-1-pub https://jeremyberman.substack.com/p/how-i-got-a-record-536-on-arc-agi
Getting 50% (SoTA) on ARC-AGI with GPT-4o [Greenblatt] https://blog.redwoodresearch.org/p/getting-50-sota-on-arc-agi-with-gpt https://www.youtube.com/watch?v=z9j3wB1RRGA [his MLST interview]
A Thousand Brains: A New Theory of Intelligence [Hawkins] https://www.amazon.com/Thousand-Brains-New-Theory-Intelligence/dp/1541675819 https://www.youtube.com/watch?v=6VQILbDqaI4 [MLST interview]
Francois Chollet + Mike Knoop's lab https://ndea.com/
On the Measure of Intelligence [Chollet] https://arxiv.org/abs/1911.01547
On the Biology of a Large Language Model [Anthropic] https://transformer-circuits.pub/2025/attribution-graphs/biology.html
The ARChitects [won 2024 ARC-AGI-1-private] https://www.youtube.com/watch?v=mTX_sAq--zY
Connectionism and Cognitive Architecture: A Critical Analysis [Fodor/Pylyshyn, 1988] https://uh.edu/~garson/F&P1.PDF
Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis [Kumar/Stanley] https://arxiv.org/pdf/2505.11581
AlphaEvolve interview (also program synthesis) https://www.youtube.com/watch?v=vC9nAosXrJw
ShinkaEvolve: Evolving New Algorithms with LLMs, Orders of Magnitude More Efficiently [Lange et al] https://sakana.ai/shinka-evolve/
Deep Learning with Python, 3rd edition [Chollet] - read chapter 19 now! https://deeplearningwithpython.io/
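Jeremy's system evolves candidate solutions (natural-language descriptions, scored by an LLM) rather than the toy bit-strings below, but the underlying loop is classic evolutionary search: score a population, keep the fittest, mutate them, repeat. A minimal sketch, with every name and the fitness function purely illustrative:

```python
import random

random.seed(0)

TARGET = [1, 2, 3, 4, 5, 6, 7, 8]  # stand-in for a solved ARC grid

def score(candidate):
    # Fraction of cells matching the target (a toy fitness; Jeremy's
    # system instead scores candidates against the puzzle's examples).
    return sum(a == b for a, b in zip(candidate, TARGET)) / len(TARGET)

def mutate(candidate):
    # Change one random cell to a random value.
    child = candidate[:]
    child[random.randrange(len(child))] = random.randint(1, 8)
    return child

def evolve(pop_size=16, generations=200):
    population = [[random.randint(1, 8) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 4]  # elitism: keep the best quarter
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=score)

best = evolve()
print(score(best))
```

The elitism step means fitness never regresses, so even this crude mutate-and-select loop climbs steadily toward the target; swapping the representation from grids to natural-language programs changes the mutate and score operators, not the loop.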