Key Topics
- What Nvidia actually bought from Groq and why it is not a traditional acquisition
- Why the deal triggered claims that GPUs and HBM are obsolete
- Architectural trade-offs between GPUs, TPUs, XPUs, and LPUs
- SRAM vs HBM: speed, capacity, cost, and supply chain realities
- Groq LPU fundamentals: VLIW, compiler-scheduled execution, determinism, ultra-low latency
- Why LPUs struggle with large models and where they excel instead
- Practical use cases for hyper-low-latency inference: Ad copy personaliza...
Semi Doped is hosted by Vikram Sekar and Austin Lyons.