It is the seventh year for Scale By the Bay. Lots of growth and lots of changes this year! We’re housewarming an amazing new venue — the Oakland Scottish Rite Center — a historical building with a grand ballroom; a theater featuring a full stage (with 92 curtains), an organ, and amphitheater seating; a balcony overlooking Lake Merritt; marble lobbies and lounges; and the numerous hallway tracks for which we are famous.
This year's keynote addresses are presented by Joe Beda, cofounder of Kubernetes, and Heather Miller, professor of distributed systems at Carnegie Mellon University, and previously Director of the Scala Center at EPFL.
The theme this year is building scalable distributed systems. More often than not, these systems now power Machine Learning and AI applications, and your AI is only as good as the data you can feed it.
Big Analytics is a term that covers data engineering and ML together, as reflected in the name of one of our leading partner meetups, SF Big Analytics. Here’s a selection of Big Analytics talks covering its various facets.
In addition to that eternal computer science maxim — garbage in, garbage out, and vice versa — running ML in production means that your business is only as good as how fast and how much data you can feed your AI, and how well you can use its results to affect customer outcomes — in real time. Netflix, Spotify, Google, and IBM show how to do it.
Moreover, the system is a living, breathing organism — not a monolith of yore, but distributed across services itself. The teams building it are often distributed globally, and the processes they rely on for developer effectiveness are crucial for uptime, scalability, and correctness. Twitter, JetBrains, Databricks, Domino Data Labs, and others show how their foundations make it possible to scale safely.
The end-to-end data pipelines track shows OSS stacks from soup to nuts — from cloud clusters to infrastructure as code to data buses to compute to storage to ML/AI and big analytics — leading to the execution of business objectives in real time. It aligns software engineers, data scientists, and business owners around ML models, and there’s an emerging area of model deployment and MLOps — a new name for what we’ve been focused on all along.
Our three iconic tracks feel especially relevant now, as each of them serves a function in building the best such systems:
Functional programming and thoughtful software engineering
Microservices and reactive systems
Data pipelines, often feeding ML/AI
Reserve your seat soon to join us on November 13 for a bespoke Serverless workshop with Google, and for the main conference on November 14–15!