Pretrained
Pierce Freeman & Richard Diehl Martinez
39 episodes
5 days ago
10 years after studying at Stanford, two friends have somehow become AI experts. One builds startups, the other studies at Cambridge - together they break down LLMs and machine learning with zero BS and maximum banter.
Technology
News, Tech News
Episodes (20/39)
Our biggest predictions for 2026

We should really be on Polymarket. Pierce and Richard make their bets on GPT-6, competition between the different letters of FAANG, dynamic websites calibrated to user preferences, and the increasing quality of OSS models.

5 days ago
37 minutes

AI's ten big moments of 2025

It's been a long year in the world of AI. Benchmarks are now almost totally saturated; the financial bubble keeps growing; spending on inference compute keeps climbing; competition from open source models is increasing; agents have finally reached the mainstream; the job market for people just out of school is frankly horrible; and multi-modal models are back and increasingly converging on transformer architectures. We cover them all in our anything-goes holiday recap show. Plus - all the things that didn't happen this year.

1 week ago
1 hour 32 minutes

Looking back on a year of product market fit

Pierce reflects on his own 2025: thoughts on choosing the right buyer persona, scaling an AI business from zero lines in a GitHub repo, the feeling of finally reaching product-market fit, boring versus interesting businesses, and more.

1 week ago
38 minutes

Looking back on three years of an AI PhD

Richard takes the hot seat for the first episode of our 2025 recap series, where we spend the rest of December looking back on what this year meant to us personally and in the world of AI/ML. We cover what it's like to defend a thesis in the UK, the difficulty of training meta-learning models, choosing a well-scoped research topic, how to define small models, and what's needed to make them better.

2 weeks ago
56 minutes

OpenReview got "hacked"

OpenAI is rolling out shopping support to their users and plotting an ads rollout to challenge Google's ad business; we get a peek behind the curtain on SOTA image generation models with the release of Alibaba's Z-Image (and speculate this might be how Nano Banana gets its great text performance); and OpenReview exposes the identities behind double-blind reviews.

3 weeks ago
1 hour 7 minutes

Pretraining is back in vogue with Gemini 3

Pierce and Richard cover OpenAI's new long-range model compression in Codex, initial takeaways from Gemini 3.0 and Nano Banana Pro, Nvidia chip exports to the UAE and Saudi Arabia, and Cloudflare's global outage. Plus - why Pierce prefers chicken to turkey.

3 weeks ago
1 hour 1 minute

Teaching cars about traffic lights

Richard and Pierce break down the 5 levels of autonomy, whether Elon has a point about RGB cameras vs. lidar systems, sensor fusion algorithms, end-to-end learning in driving simulations, and more.

1 month ago
1 hour 16 minutes

Pretty pretty please can you hack this

Pierce and Richard cover the news that Yann LeCun is planning to depart Meta to focus on world models, Cursor 2.0 and its new home-trained Composer coding model, Kimi K2's strong generalization performance for an open model despite lagging on code, Microsoft's new super data center spanning sites 700 miles apart, and Anthropic's report of the first hacking campaign orchestrated by AI.

Further reading:
https://arxiv.org/abs/2509.14252
https://www.digit.in/features/general/meta-chief-ai-scientist-yann-lecun-thinks-llms-are-a-waste-of-time.html
https://cursor.com/blog/composer
https://arxiv.org/abs/2507.20534
https://www.anthropic.com/news/disrupting-AI-espionage

1 month ago
1 hour 6 minutes

How AI research actually gets published

Richard and Pierce talk about the major AI conferences, walk through the history of NeurIPS/ICML/ICLR, and discuss how to retrofit the peer review system.

1 month ago
1 hour 7 minutes

A deep dive on OpenAI Atlas

Richard and Pierce break down all the new AI web browser entrants, with a particular focus on OpenAI's new Atlas, the tradeoffs between vision models and text-based DOM parsing, potential security vulnerabilities, and more.

1 month ago
1 hour 13 minutes

The browser wars are just getting started

OpenAI releases their long-awaited browser Atlas, PyTorch releases their distributed computation framework Monarch, the SALT reinforcement learning addition to GRPO, the HAL benchmark for agent evaluation, and adapting the KV cache for text diffusion models.

Further reading:
https://openai.com/index/introducing-chatgpt-atlas/
https://pytorch.org/blog/introducing-pytorch-monarch/
https://arxiv.org/pdf/2510.20022
https://arxiv.org/abs/2510.11977
https://arxiv.org/abs/2510.14973

1 month ago
57 minutes

Are we in an AI bubble?

Richard and Pierce take the bull case on whether we're in an AI bubble. They cover circular financing deals, energy build-outs, AI representing 92% of GDP growth in H1 2025, and a comparison with the early-2000s hype around meaningless dot-com companies.

2 months ago
1 hour 15 minutes

LLMs can get brain rot too

Articles written by LLMs have stabilized at exactly 50% of the internet (at least, as far as our classifiers can discriminate); why embedding models are so cheap; OpenAI announces a new job board and certification programs for applied AI; Amazon makes Bedrock AgentCore generally available; and how pre-training on low-quality data affects post-training capability.

Further reading:
https://arxiv.org/abs/2510.13928
https://openai.com/index/expanding-economic-opportunity-with-ai/
https://www.tensoreconomics.com/p/why-are-embeddings-so-cheap
https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans

2 months ago
1 hour

AMD is back in the AI chipset race

OpenAI diversifies their chip suppliers through partnerships with AMD and Broadcom, Google starts a new AI bug bounty program (but only for security vulnerabilities, not LLM hallucinations), Nvidia ships their first prosumer computer, DeepMind has a new complexity-theory proof solver, and Anthropic writes their own gibberish poison pill that works across model sizes.

Further reading:
https://openai.com/index/openai-amd-strategic-partnership/
https://investor.nvidia.com/news/press-release-details/2024/NVIDIA-Announces-Financial-Results-for-Second-Quarter-Fiscal-2025/default.aspx
https://bughunters.google.com/blog/6116887259840512/announcing-google-s-new-ai-vulnerability-reward-program
https://marketplace.nvidia.com/en-us/developer/dgx-spark/
https://arxiv.org/abs/2509.18057
https://www.anthropic.com/research/small-samples-poison

2 months ago
58 minutes

The inaugural listener mailbag

You asked, we answered! Rich and Pierce do their first listener mailbag: explaining RLHF, our current development stack, whether model competition is making things better for the people using them, and more.

2 months ago
55 minutes

California legislators come for LLMs

Breaking down California's recently passed SB 53, which regulates frontier model development, ISO standards in startups, and why this bill passed where the earlier SB 1047 failed.

2 months ago
1 hour 7 minutes

Move over TikTok - a new feed's in town

Building a modern AI app and architecting Sora 2, first impressions of Sonnet 4.5, and the frontier labs going after n8n and Zapier.

Further reading:
https://openai.com/index/sora-2/
https://openai.com/index/sora-is-here/
https://www.lesswrong.com/posts/4yn8B8p2YiouxLABy/claude-sonnet-4-5-system-card-and-alignment
https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf
https://www.testingcatalog.com/openai-prepares-to-release-agent-builder-during-devday-on-october-6/

2 months ago
1 hour 2 minutes

Gen Z struggles to find coding jobs fr no cap

Richard and Pierce respond to the Times podcast about the scarcity of junior engineering jobs. They talk through the academic difference between Computer Science and Engineering, AI as a new engineering primitive, talent arbitrage through intern programs, and more.

Further reading:
https://www.nytimes.com/2025/09/29/podcasts/the-daily/big-tech-told-kids-to-code-the-jobs-didnt-follow.html

2 months ago
51 minutes

The power of ten million deadlifters

OpenAI & NVIDIA's 10GW partnership, GDPval as a new human-curated benchmark dataset, Gemini Robotics-ER 1.5, and Apple's distillation of AlphaFold.

Additional reading:
https://nvidianews.nvidia.com/news/openai-and-nvidia-announce-strategic-partnership-to-deploy-10gw-of-nvidia-systems
https://openai.com/index/gdpval/
https://deepmind.google/discover/blog/gemini-robotics-15-brings-ai-agents-into-the-physical-world/
https://arxiv.org/pdf/2509.18480

2 months ago
1 hour 6 minutes

How countries are actually using AI

Pierce and Richard recap Anthropic's Economic Index: differences in how countries use AI, autonomy versus augmentation, and the real business use cases that Anthropic is seeing so far.

Further reading:
https://www.anthropic.com/research/anthropic-economic-index-september-2025-report

3 months ago
51 minutes
