Charles Frye teaches people about building products that use d̶a̶t̶a̶ ̶s̶c̶i̶e̶n̶c̶e̶ m̶a̶c̶h̶i̶n̶e̶ ̶l̶e̶a̶r̶n̶i̶n̶g̶ artificial intelligence. He completed a PhD on neural network optimization at UC Berkeley in 2020 before working at the MLOps startup Weights & Biases and on the popular online courses Full Stack Deep Learning and Full Stack LLM Bootcamp. He's now working on something new.
Parallel Processors: Past & Future Connections Between LLMs and OS Kernels
In this talk, we'll travel back to the future, examining some of the early innovations in computer systems and showing how they were reinvented over the last year as cutting-edge optimizations for LLMs. Then we'll step back and consider what it really means for LLMs to be the "new kernels" for "Software 3.0". We'll pay special attention to what's missing on the LLM side of this analogy, like interrupts, the kernel-land/userland split, open standards, and more.