This panel from the dAGI Summit brings together leaders from decentralized AI projects—Ambient, Gensyn, Nous Research, and NEAR AI—to examine why open-source, distributed approaches might prevail over centralized systems. The discussion centers on fundamental economics: closed labs face misaligned incentives (surveillance capitalism, censorship, rug-pull risk) while open-source struggles to monetize. Panelists advocate for crypto-economic models where tokens align global contributor incentives, enable permissionless participation, and create deflationary flywheels as inference demand burns supply. Key tensions emerge around launch timing (shipping imperfect networks risks credibility; waiting loses market), whether to embrace or hide Web3 properties, and whether distributed training can compete with centralized data centers.
Key Takeaways
▸ Trust as first principle: Open-source AI prevents centralized bias, censorship, and platform risk—critical as LLMs become "choice architecture" for daily decisions; users need models that won't serve provider interests over theirs.
▸ Incentive alignment problem: Closed labs monetize through services; open-source lacks revenue models—crypto tokens enable contributor coordination, revenue sharing for creators, and data provider compensation without corporate structures.
▸ Quality beats ideology: Users prioritize performance over privacy/decentralization—for open-source to win, it must deliver best-in-class capabilities; philosophical arguments alone won't drive adoption.
▸ Miner economics as foundation: Proof-of-work models make miners the network's owners; inference transactions burn tokens (deflation) while inflation rewards compute—mimicking Bitcoin's flywheel at AI scale (a toy supply sketch follows this list).
▸ RL changes everything: Reinforcement learning now rivals pre-training compute budgets—requires solving both inference and training scale simultaneously, accelerating need for distributed solutions.
▸ Privacy as unlock: Confidential compute using TEEs enables private inference where no party can see user data—necessary for user-owned AI and sensitive enterprise applications.
▸ Launch timing paradox: If comfortable launching, you've waited too long given AI's pace—but premature mainnet with exploits kills credibility; tokens can't be "relaunched" after failed start.
▸ Token utility beyond speculation: Staking for Sybil resistance, slashing for failures, global payment rails—tokens provide coordination impossible with fiat; also unlock capital for obsolete hardware.
▸ Different architecture advantages: Lean into distributed strengths—Gensyn's 40K-node swarm of small models learning via gossip protocols; edge deployment; multi-agent coordination impossible in monolithic systems.
▸ Inference-to-training flywheel: Some projects start with verified inference to build revenue, then fund fine-tuning and pre-training—inference demand creates a monetary flywheel that subsidizes training.
▸ User ownership vision: Future where users control data in secure enclaves, AI comes to the data rather than vice versa—eliminates hesitation about sharing sensitive info with centralized providers.
▸ Web3 integration split: Some say "hide crypto, just build best AI"; others argue lean into trustless properties as differentiator—non-custodial agents, fair revenue splits, permissionless innovation closed systems can't match.
▸ AI as future money: Provocative thesis that AI represents work, thus becomes money itself—though managing transition from fiat to AI-backed currencies remains unsolved challenge.
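The burn-and-mint flywheel in the miner-economics bullet above can be made concrete with a toy supply model. The sketch below is illustrative only, with made-up parameters; it does not describe any specific network's actual emission schedule. Inflation mints rewards for compute providers while a share of inference fees is burned, so supply contracts once demand-driven burn outpaces emissions.

```python
# Minimal sketch of a burn-and-mint flywheel (illustrative parameters only;
# no specific network's emission schedule is implied).

def simulate_supply(initial_supply: float,
                    yearly_inflation_rate: float,
                    inference_fees_per_year: float,
                    burn_fraction: float,
                    years: int) -> list[float]:
    """Track token supply when inflation pays compute providers (miners)
    and a fraction of inference fees is burned."""
    supply = initial_supply
    history = [supply]
    for _ in range(years):
        minted = supply * yearly_inflation_rate            # rewards to miners/compute
        burned = inference_fees_per_year * burn_fraction   # demand-driven burn
        supply = supply + minted - burned
        history.append(supply)
    return history

# If inference demand (and thus burn) outpaces emissions, supply contracts:
print(simulate_supply(initial_supply=1_000_000,
                      yearly_inflation_rate=0.02,       # 2% minted to compute providers
                      inference_fees_per_year=50_000,   # tokens paid for inference
                      burn_fraction=0.5,                # half of fees burned
                      years=5))
```

In this toy run, burns (25,000 tokens/year) exceed minting (about 20,000 tokens/year), so supply shrinks; if inference demand falls, the same parameters flip the network back to net inflation.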
Rawson contrasts the US and Chinese AI ecosystems through culture, history, and market design. The US channels deep capital into fast-forming, efficient oligopolies that drive closed, frontier models and a massive compute build-out; China orchestrates a state-guided “swarm” that rapidly diffuses (often open-source) AI across industry, leveraging dense supply chains and process skill—but with thinner margins and policy constraints. Capital and AI are framed as parallel forces that centralize if unchecked; each country fears a different failure mode (US: centralized authority; China: disorder). Looking ahead, today’s US lead meets China’s long-term industrial advantages, suggesting a durable, competitive race. The recommended path is a balanced “narrow corridor” that blends US frontier strengths with China’s diffusion strengths—seeking modular, widely accessible intelligence while avoiding both elite techno-feudalism and chaotic collapse.
Key Takeaways
▸ Speaker & lens: Early-stage AI/robotics investor with experience in the US and China; goal is to compare AI market structures and cultures.
▸ China’s dualities: Modern infrastructure yet widespread low incomes; strong tech/manufacturing innovation amid macro softness (property, local-government debt, youth unemployment); open-source AI leadership despite the Great Firewall; globalization’s winner now pushing self-sufficiency.
▸ US vs China—opposites and mirrors: Freedom vs stability/harmony; individual vs family unit; over-consumption vs over-production; democracy vs autocracy—yet each also reflects the other’s excesses (the “fearful mirror” idea).
▸ Historical roots shape instincts: US frontier ethos → skepticism of centralized authority; China’s recurring upheavals → preference for order and stability (especially among older generations).
▸ Different views of capital:
  * US: Capital as an expression of freedom and market choice (but it concentrates power via money/compute).
  * China: Capital as an instrument of national priorities (the internet crackdown as an example).
▸ Capital ≈ AI: Both optimize for efficiency/automation; both centralize power if unchecked. The US tends to fear centralized authority; China tends to fear disorder.
▸ Market structure archetypes:
  * US “efficient oligopoly”: Deep capital markets quickly crown category leaders—efficient allocation and reinvestment, but concentrated power and higher prices.
  * China “subjugated swarm”: The state sets direction and provinces fund many firms → Darwinian competition; strengths in volume, quality, cost, and process know-how, but lower margins, “involution,” and rising trade pushback.
▸ AI ecosystems & priorities:
  * US: Massive compute build-out, closed frontier models, aim at AGI/ASI and “human transcendence,” global distribution.
  * China: Tighter cross-sector coordination, rapid diffusion of AI across society, a priority on open-source/commoditization—useful but can embed political biases.
▸ Now vs later: The US leads today (chips/compute/users), but long-run trends (power generation, open-source uptake, robotics/industrial base) could tilt some advantages toward China; expect a long, competitive race.
▸ Modular vs vertical: Vertically integrated stacks lead now; the speaker expects a gradual shift toward more modular intelligence (distributed incentives harnessing long-tail compute, data, and talent), though it’s hard.
▸ AI is physical & geopolitical: Energy, fabs, robots, and data centers anchor AI to nation-states → emerging competing operating systems (US stack ≈ Global North; China ≈ parts of the Global South).
▸ Governance “narrow corridor”: Balance strong institutions with a strong civil society to avoid AI-induced totalitarianism on one side and anarchy/uncontrolled superintelligence on the other.
▸ Complementary strengths: US (frontier, software, 0→1, freedom) + China (diffusion, hardware, 1→n, stability). The tragedy is worsening ties despite potential complementarity; the speaker calls for mutual curiosity and learning.
This VC panel from the dAGI Summit explores venture capital's evolving landscape amid AI's transformative surge. The discussion tackles whether venture remains attractive (Sequoia's Roelof Botha argues it's "return-free risk"), examines talent consolidation toward major labs offering $10M+ salaries, and debates open-source versus centralized AI futures. Key tensions emerge: enterprise security requirements favor closed models while advocates push permissionless innovation, and decentralized systems are hard to build when speed and capital naturally favor oligopolies. Panelists agree the power law will intensify—most funds lose money while winners capture trillion-dollar outcomes—but disagree on whether decentralized approaches can compete commercially beyond niche use cases.
Key Takeaways
▸ Venture's extreme bifurcation: ~95% of funds will deliver sub-1x returns, but trillion-dollar outcomes are now plausible—creating unprecedented power law concentration where top funds massively outperform.
▸ Talent consolidating to labs: Major AI labs pay extraordinary compensation ($10M cash offers to 24-year-olds mentioned), creating negative selection for startups—though counterbalanced by smaller teams achieving more (cited: 2 people, $1M ARR).
▸ 1999 analogy breaks down: Unlike the dot-com bubble, leading labs have real revenue (Anthropic cited at 35x revenue with 5x ARR growth)—though froth exists in oversubscribed seed rounds with 24-hour term-sheet timelines.
▸ Open source paradox: Distributed AI progress disappoints despite its philosophical appeal; ironically, China's labs and Meta's commoditization strategy have driven open-source advancement more than decentralized crypto projects have.
▸ Decentralization handicapped: Startups require rapid iteration; decentralization excels at immutability (Bitcoin, DeFi)—fundamental mismatch for early-stage companies needing governance flexibility.
▸ Enterprise blocks open adoption: Security, liability, and procurement bureaucracy favor centralized labs; open/decentralized projects must solve compliance or target consumer first.
▸ Multipolar AI emerging: 10+ reasonably-sized labs now exist versus 2-3 two years ago—but open models still lag frontier capabilities significantly.
▸ Companions achieve PMF: AI companion apps showing strong product-market fit (0 to $2.5M revenue in 6 months cited); addresses loneliness crisis (average American has 1.3 friends versus 7 needed).
▸ Progress slowdown enables open-source: Open models become compelling when enterprises optimize for cost over cutting-edge capability; the current "AI-curious" phase keeps everyone chasing the frontier.
▸ Safety as structural advantage: Security/interpretability aren't just cost centers—they're deployment prerequisites and potential moats (insurance products, secure compute, model evaluation).
▸ Third-party evaluation essential: Labs can't grade own homework on capabilities/risks; independent evaluators necessary even as labs internalize safety work.
▸ AI transforming VC: Partners using AI extensively for decisions; one fund running parallel "AI portfolio" to test if AI outperforms human selection—humans becoming "data collectors" for AI decision-making.
▸ Bot performance advantage: Like poker bots that performed worse than players' peak but better than average (no tilt, bad days)—AI may outperform VCs across entire decision distribution, not just at peak.
In this talk, Stepan argues AI is pushing the economy from capturing attention to fulfilling intention. Instead of users spending hours searching, comparing, and coordinating, they will express goals (“Buy a Burning Man bike,” “Plan a Lisbon offsite under $X”), and a market of specialized AI agents will plan, source, negotiate, and execute. Because agents dramatically cut transaction costs, many tasks that once favored in-house teams will move to open markets where agents compete, yielding better outcomes and prices.
This system requires distributed market mechanics rather than a single platform or super-agent: agents compete in multi-attribute auctions over intents, settle via cryptographic contracts, and interoperate through emerging agent standards. Trust comes from privacy-preserving user context plus public agent reputation and verifiable work receipts. With agent autonomy improving exponentially (e.g., code, legal, marketing), the speaker expects working intent-economy rails within 1–2 years, creating major opportunities for builders, researchers, and investors.
Key Takeaways
▸ Shift from “attention economy” → “intention economy.” Value moves from time/clicks to outcomes: you state a goal, a network of AI agents delivers it.
▸ AI agents gain economic agency. Individuals will run dozens; orgs will run thousands—working 24/7 and transacting autonomously.
▸ Post-Coasean dynamics. As agents slash search, bargaining, contracting, and enforcement costs, markets beat firm boundaries more often; AI-native orgs stay lean and move faster.
▸ Why a network (not one super-agent): Such a singleton doesn’t exist; economics/history favor distributed, competitive markets over centralized platforms that may front-run or under-optimize user value.
▸ Every intent becomes a market. Intents are posted; solvers (agents/companies) compete to fulfill them; auctions drive efficient price discovery.
▸ Auctions must be multi-attribute. Matching isn’t just price—also SLA, ETA, constraints, policies, etc., turning intents into personalized RFPs (a scoring sketch follows this list).
▸ Throughput advantage. Agent-to-agent comms scale at hundreds of tokens/sec, compressing coordination time versus human bandwidth.
▸ Practical stack emerging. Interop and trust need standards: A2A (agent-to-agent context), MCP (tool/supply-chain orchestration), u004 (work validation via re-runs/TEEs/economic checks), X402 (agent-to-agent payments).
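To make the multi-attribute auction point above concrete, here is a minimal sketch of how an intent could rank competing solver bids on price plus non-price attributes. The fields, weights, and agent names are hypothetical and not drawn from any specific protocol; a real intent market would also need the verification and settlement layers listed in the stack above.

```python
# Toy multi-attribute scoring of solver bids for an intent (hypothetical
# fields and weights; not a description of any specific agent protocol).
from dataclasses import dataclass

@dataclass
class Bid:
    solver: str
    price: float          # quoted cost
    eta_hours: float      # promised delivery time
    sla_score: float      # 0..1, e.g. guarantees/refund terms
    reputation: float     # 0..1, public track record of the solver agent

def score(bid: Bid, max_price: float, max_eta: float,
          weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Higher is better: cheap, fast, strong SLA, good reputation."""
    w_price, w_eta, w_sla, w_rep = weights
    return (w_price * (1 - bid.price / max_price)
            + w_eta * (1 - bid.eta_hours / max_eta)
            + w_sla * bid.sla_score
            + w_rep * bid.reputation)

bids = [
    Bid("agent-a", price=120, eta_hours=48, sla_score=0.9, reputation=0.8),
    Bid("agent-b", price=90,  eta_hours=72, sla_score=0.7, reputation=0.9),
]
winner = max(bids, key=lambda b: score(b, max_price=200, max_eta=96))
print(winner.solver)
```

The weighting is the interesting design choice: it is effectively the user's private utility function, which is why the talk frames intents as personalized RFPs rather than simple price auctions.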
Join Piers Kicks from Delphi Intelligence as he explores the cutting-edge frontier of space-based computing with Philip Johnston, founder of Starcloud. Philip is pioneering the development of data centers in space to harness abundant solar energy and overcome Earth's compute limitations. With launch costs plummeting thanks to SpaceX's Starship program, Starcloud is preparing to launch the first H100 GPU to space in November 2025, marking a 100x increase in space-based compute power.
Starcloud: https://www.starcloud.com
🎯 Key Highlights
▸ From sci-fi dreams to space reality: how falling launch costs enable orbital data centers
▸ November 2025: launching the first H100 GPU - 100x more powerful than any space compute before
▸ The physics advantage: unlimited solar energy and natural cooling in space
▸ Why Earth's heat dissipation limits will force compute off-world within decades
▸ Radiation shielding and thermal management: the two biggest engineering challenges
▸ Starlink connectivity: solving the space internet problem for orbital workloads
▸ Defense and commercial applications: early revenue streams for space compute
▸ The roadmap to gigawatt-scale solar arrays and modular space construction
▸ Geopolitical implications: space as the new frontier for AI and defense
▸ Vision 2035: when most new data centers might be built in space
▸ From asteroid mining to Mars colonies: the broader space economy revolution
💡 Subscribe for more crypto & AI insights! 🔔
🧠 Follow the Alpha
▸ Philip's Twitter: @PhilipJohnst0n
▸ Starcloud's Twitter: @Starcloud_Inc1
🔗 Connect with Delphi
🌐 Portal: https://delphidigital.io/
🧠 Intelligence: https://www.delphiintelligence.io
🐦 Twitter: https://x.com/delphi_intel
🎧 Listen on
Spotify:
Apple Podcasts:
Youtube: https://www.youtube.com/channel/UC9Yy99ZlQIX9-PdG_xHj43Q
Timestamps
00:00 — Intro: Philip Johnston, Star Cloud
01:30 — Vision: data centers in space
02:00 — Launch costs drop: $60K → $500/kg with Starship
03:30 — Pivot: solar power → orbital compute
05:00 — Background: five brothers, sci-fi dreams
07:00 — Building the team: SpaceX & Microsoft vets
08:15 — Early tests: deployables in the living room
09:30 — Challenges: radiation & heat
11:00 — Radiation: LEO to deep space
12:30 — Orbits: dawn-dusk, no shadow
14:00 — Debris myths: Kessler overblown
17:00 — Space weather: flares & Carrington events
19:00 — Heat: radiating 5 GW in space
21:00 — Connectivity: Starlink for workloads
22:00 — Earth’s heat problem: compute to space
24:00 — Regulation: faster in orbit
25:30 — Model: energy provider, not hardware owner
26:30 — November launch: H100 + Gemini
28:00 — Defense & data security in orbit
29:00 — Gigawatt arrays: modular builds, 2030s
30:30 — Breakeven: launch costs vs viability
32:00 — Bitcoin mining: using spare capacity
33:15 — Space internet: real challenges
35:00 — Geopolitics: defending assets in orbit
37:00 — Sci-fi: Dyson spheres & missions
39:00 — AI risks & Fermi Paradox
41:00 — Future: Mars & asteroid mining
42:30 — 2035: space as default
44:00 — Competition: hyperscalers & startups
45:00 — Lessons: gov relations & realities
45:45 — Book: Elon Musk bio
Disclaimer
This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host and members at Delphi Ventures may personally own tokens or art that are mentioned on the podcast. Our current show features paid sponsorships which may be featured at the start, middle, and/or the end of the episode. These sponsorships are for informational purposes only and are not a solicitation to use any product, service or token.
Join Pondering Durian and José Macedo as they dive deep into the future of AI-powered healthcare with Tanishq Abraham, founder and CEO of Sophont AI. At just 21, Tanishq has already graduated from high school at 10 and college at 14, earned a PhD in biomedical engineering at 19, and served as research director at Stability AI. Now he's building open-source foundation models to revolutionize healthcare through multimodal AI systems that can integrate diverse patient data for better diagnosis and treatment.
Sophont AI: https://sophontai.com
Tanishq Abraham: https://www.tanishq.ai/blog
🎯 Key Highlights
▸ Tanishq's accelerated academic journey and unique daily routine
▸ Why healthcare needs multimodal foundation models vs. specialized AI tools
▸ The "parable of the elephant" - integrating all patient data holistically
▸ Open source vs. proprietary models in medicine: trust and transparency
▸ Building a remote, Discord-based AI research company at 21
▸ Healthcare in 2035: continuous monitoring and proactive care
▸ Compute constraints in medical AI vs. general AI development
▸ US-China AI race: concerns about America falling behind in open source
▸ From astronomy to longevity: other fields Tanishq would explore
▸ Why medical AI can have faster patient impact than drug development
💡 Subscribe for more crypto & AI insights! 🔔
🧠 Follow the Alpha
▸ Tanishq's Twitter: @iScienceLuvr
▸ Sophont's Twitter: @SophontAI
🔗 Connect with Delphi
🌐 Portal: https://delphidigital.io/
🧠 Intelligence: https://www.delphiintelligence.io
🐦 Twitter: https://x.com/delphi_intel
🎧 Listen on
Spotify:
Apple Podcasts:
Youtube: https://www.youtube.com/channel/UC9Yy99ZlQIX9-PdG_xHj43Q
Timestamps
00:00 – Intro: Tanishq Abraham, Sophont AI
01:15 – The Tanishq Abraham production function
03:00 – Managing a startup team via Discord
05:00 – Accelerated education: college at 6 years old
09:15 – Finding your tribe and making friends across ages
13:00 – From astronomy to biomedical engineering
17:30 – Healthcare manifesto: why we need multimodal AI
21:00 – The elephant parable: integrating patient data
23:15 – Open source advantages in medical AI
25:00 – Academic vs. industry challenges in medical AI
28:15 – Healthcare experience in 2035-2040
32:15 – Compute vs. data constraints in medical AI
35:15 – Competition landscape and positioning
38:30 – Longevity predictions: 150-200 years?
41:30 – US-China AI race concerns
49:30 – Other research areas of interest
54:15 – Why medical AI offers faster patient impact
Disclaimer
This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host and members at Delphi Ventures may personally own tokens or art that are mentioned on the podcast. Our current show features paid sponsorships which may be featured at the start, middle, and/or the end of the episode. These sponsorships are for informational purposes only and are not a solicitation to use any product, service or token.
Join Pondering Durian and José Macedo as they dive deep into the US-China AI competition with Alex Lee, co-founder of TrueNorth and former VP at Enflame (a $3 billion AI chip startup). With his unique perspective spanning both ecosystems—from his PhD in electrical engineering to roles at Temasek and McKinsey—Alex breaks down China's dominance in open source AI, export controls on semiconductors, what a potential Taiwan conflict would mean for global chip supply chains, and insider insights on Chinese tech culture, state-led industrial policy, and why the innovation gap may be closing faster than expected.
TrueNorth: https://www.true-north.xyz
🎯 Key Highlights
▸ China's open source AI leadership: sustainable or temporary advantage?
▸ How Chinese tech giants (Alibaba, ByteDance, Tencent) leverage broad business models
▸ Export controls reality: are Chinese companies actually GPU-constrained?
▸ Inside Chinese AI labs: DeepSeek vs Moonshot vs Zhipu vs the big players
▸ The shift from pre-training to post-training and what it means for US-China competition
▸ Why the "9-9-6" work culture narrative is more myth than reality
▸ Industrial policy lessons: EVs and solar success vs semiconductor struggles
▸ Taiwan semiconductor crisis: what would actually happen in a conflict?
▸ The future of AI accelerators and system-level innovation
▸ Context engineering as the new bottleneck in AI development
▸ China's robotics supply chain advantage vs US physical intelligence models
▸ 2035 predictions: will the US-China AI gap close or widen?
💡 Subscribe for more AI, crypto & tech insights! 🔔
🧠 Follow the Alpha
▸ Alex Lee's Twitter: @moonshot6666
▸ TrueNorth Twitter: @get_truenorth
🔗 Connect with Delphi
🌐 Portal: https://delphidigital.io/
🧠 Intelligence: https://www.delphiintelligence.io
🐦 Twitter: @delphi_intel
🎧 Listen on
Spotify: https://open.spotify.com/show/0Zp0R78f6wFQoPaVr99CCP
Apple Podcasts: https://podcasts.apple.com/us/podcast/2035/id1832694022
YouTube: https://www.youtube.com/channel/UC9Yy99ZlQIX9-PdG_xHj43Q
Timestamps
00:00 – Intro: Alex Lee and US-China AI dynamics
01:30 – Is China's open source dominance sustainable?
05:15 – The economics behind open source AI development
09:30 – Post-training shift: advantage US or China?
11:30 – Cultural differences: Silicon Valley vs Chinese tech
16:30 – Innovation cycles: zero-to-one vs one-to-x
19:45 – Chinese AI lab landscape: who's winning?
25:30 – Are AI startups in China "cooked"?
28:00 – GPU constraints: export controls reality check
31:00 – China's industrial policy: successes and failures
34:45 – Land financing model breakdown
38:45 – Youth unemployment and "lying flat" culture
42:45 – Taiwan semiconductor supply chain crisis scenarios
49:45 – Leading edge vs trailing edge chip competition
55:00 – The future of AI accelerators and system innovation
58:30 – Context engineering as the new frontier
1:03:30 – TrueNorth: AI-powered crypto discovery platform
1:09:30 – Why build in crypto vs traditional fintech
1:12:30 – Robotics: China's supply chain vs US intelligence
1:18:00 – 2035 predictions: leveling the playing field
Disclaimer
This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host and members at Delphi Ventures may personally own tokens or art that are mentioned on the podcast. Our current show features paid sponsorships which may be featured at the start, middle, and/or the end of the episode. These sponsorships are for informational purposes only and are not a solicitation to use any product, service or token.