From zero to prod with dplyr
With dplyr, the same code can scale from a few thousand rows to tens of thousands, to millions, with very few changes. Learn about lazy tables and backend-agnostic coding.
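As a minimal sketch of what backend-agnostic dplyr code looks like, assuming the dplyr, DBI, dbplyr, and duckdb packages are installed (the `summarise_mpg` function name and the use of the built-in `mtcars` data set are illustrative choices, not from the course):

```r
library(dplyr)

# One pipeline definition, usable against any dplyr backend
summarise_mpg <- function(tbl) {
  tbl |>
    group_by(cyl) |>
    summarise(avg_mpg = mean(mpg, na.rm = TRUE)) |>
    arrange(cyl)
}

# Eager: runs immediately on an in-memory data frame
summarise_mpg(mtcars)

# Lazy: the same function, unchanged, against a DuckDB table
library(duckdb)
con <- DBI::dbConnect(duckdb::duckdb())
DBI::dbWriteTable(con, "mtcars", mtcars)

lazy_result <- summarise_mpg(tbl(con, "mtcars"))  # builds SQL, computes nothing yet
collected <- collect(lazy_result)                  # executes inside DuckDB

DBI::dbDisconnect(con, shutdown = TRUE)
```

The function body never mentions which backend it runs on; only the object you pass in decides whether the work happens in R or is translated to SQL.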
Don't know how to start? Don't worry, we've got your back.
We're still working out the kinks. Want to help us get ready for our release?
Learn to create production-ready RESTful APIs, ETL workflows, and applications. Be able to integrate your analytics products into existing frameworks, ensuring your solutions are interoperable and scalable. Together, you will flourish as a data scientist.
Write code directly in your browser. No installation required. Start coding immediately, without the hassle of setup or downloads.
With conceptual deep dives into the modern data stack, you will learn in depth about the technology that is shaping the future. We emphasize adopting the Composable Codex so you can break out of siloed data infrastructure.
At the end of each course, you'll build out a real project. Use the projects as templates to build a portfolio to showcase your skills.
With our conceptual deep dives and project-based learning, you will learn how to build and deploy your own projects with confidence.
DuckDB is a fast, zero-dependency, in-process database made for data scientists. Lean on DuckDB's tight R integration to scale to larger-than-memory workloads.
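To illustrate the larger-than-memory idea, here is a sketch assuming the duckdb, DBI, dbplyr, and dplyr packages; the temporary CSV written below stands in for a file too large to load into an R data frame:

```r
library(dplyr)
library(duckdb)

con <- DBI::dbConnect(duckdb::duckdb())

# A CSV on disk stands in for a dataset too big for R's memory
path <- tempfile(fileext = ".csv")
write.csv(mtcars, path, row.names = FALSE)

# DuckDB scans the file itself; rows stream through the engine
# instead of being loaded into an R data frame all at once
cars <- tbl(con, dbplyr::sql(sprintf("SELECT * FROM read_csv_auto('%s')", path)))

big_engines <- cars |>
  filter(hp > 100) |>
  count() |>
  collect()

DBI::dbDisconnect(con, shutdown = TRUE)
```

Only the small, aggregated result crosses back into R; the filtering and counting happen inside DuckDB.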
Docker containers standardize deployment, ensuring consistent environments across different systems. You will know how to encapsulate your production code, making it reproducible, scalable, and easier to automate.
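A minimal Dockerfile sketch of what that encapsulation can look like for an R batch job; rocker/r-ver is a community-maintained base image with a pinned R version, and run_etl.R is a hypothetical script name used here for illustration:

```dockerfile
# Pin the base image so every build uses the same R version
FROM rocker/r-ver:4.3.2

# Install the packages the pipeline needs at build time
RUN R -e "install.packages(c('dplyr', 'duckdb'))"

# Copy the production code into the image
WORKDIR /app
COPY run_etl.R /app/run_etl.R

# The container runs the same entrypoint on any host
CMD ["Rscript", "run_etl.R"]
```

Because the R version and package installs are baked into the image, the job runs identically on a laptop, a CI runner, or a production scheduler.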
Courses under active development on GitHub