Machine Learning School
Many people know how to train Machine Learning models.
Unfortunately, this is around 5% of the work required to build an end-to-end system.
This program will show you the other 95%.
What Do You Get From Joining?
When you join the program, you get access to the following:
- Pre-recorded lessons. A group of video lessons focusing on the fundamental aspects of Machine Learning in Production.
- A 9-hour live cohort. Every month, there are 6 live sessions of 90 minutes each (9 hours total) where we will build an end-to-end Machine Learning system from scratch. You can attend live or watch a recording of the sessions.
- Assignments. This is a program for builders, and you will have plenty to do. Every session will give you a list of assignments to practice what you learned.
- A Community. You’ll join a community of professionals from every corner of the world with something in common: They are all building Machine Learning systems for a living.
Three things will happen when you finish this program:
- You’ll have a solid understanding of most theoretical aspects concerning Machine Learning systems.
- You’ll have experience building an end-to-end system using AWS SageMaker. You’ll understand how to process data, train, tune, evaluate, deploy, and monitor models in a production environment. You’ll know a few tricks from somebody who spent many nights trying to figure these things out.
- You’ll build connections with like-minded professionals working in the industry.
Who Is This Program For?
- This is a hands-on, technical program. Anyone who wants to use Machine Learning to build solutions for real-world problems will benefit from it.
- This program focuses on designing Machine Learning systems and doesn’t cover Machine Learning theory. You will not learn about the differences between Decision Trees and Neural Networks or how a larger learning rate will change your predictions.
- To get the most out of the program, you should have experience writing software. We use Python, but those who know a different language shouldn’t worry too much.
- Ideally, you have a basic grasp of Machine Learning terminology. You don’t need experience building models but should be familiar with the field. For example, you don’t need to understand the architecture of a Deep Learning model, but you should understand what “training” a model means.
Schedule
The time of the cohort changes every month. We will meet 3 times a week for 2 consecutive weeks. Every session will be recorded, so you can attend live or watch the recorded version later.
Here are the upcoming cohorts:
- Cohort #6: Sep 18 – Sep 29. 10 am EST. (Monday, Wednesday, and Friday)
- Cohort #7: Oct 16 – Oct 27. 2 pm EST. (Monday, Wednesday, and Friday)
We will start the program with a simple problem and build an entire end-to-end system over the six sessions. Every session is packed with information and code. It will be intense but fun.
Session 1 – Building a Pipeline
This session will introduce the program and start building the production pipeline. We’ll cover the following topics:
- Introduction to the program.
- The Penguins application we'll build throughout the program.
- Introduction to Machine Learning Pipelines.
- Designing a production pipeline.
- SageMaker Processing Jobs and the Processing Step (see the code sketch after this list).
- Transforming and splitting the Penguins dataset.
- Configuration and caching of pipelines.
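To give you a sense of what this looks like in practice, here is a minimal sketch of a Processing Step built with the SageMaker Python SDK. The bucket, script name, and instance sizes below are illustrative placeholders, not the exact code we write in the session.

```python
# A minimal sketch of a preprocessing step, assuming the penguins data lives in
# S3 and a preprocess.py script handles the transform/split logic.
import sagemaker
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import CacheConfig, ProcessingStep

role = sagemaker.get_execution_role()

# A managed scikit-learn container runs the preprocessing script.
processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

preprocess_step = ProcessingStep(
    name="preprocess-data",
    processor=processor,
    code="preprocess.py",
    inputs=[
        ProcessingInput(
            source="s3://my-bucket/penguins/data.csv",
            destination="/opt/ml/processing/input",
        )
    ],
    outputs=[
        ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
        ProcessingOutput(output_name="validation", source="/opt/ml/processing/validation"),
        ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
    ],
    # Caching lets repeated pipeline runs reuse this step's results when nothing changed.
    cache_config=CacheConfig(enable_caching=True, expire_after="P30D"),
)

pipeline = Pipeline(name="penguins-pipeline", steps=[preprocess_step])
```

In the session, we'll spend time on how the configuration and caching settings change what happens when you run the pipeline repeatedly.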
Session 2 – Training and Tuning
This session will extend the pipeline with a step for training a model. We’ll cover the following topics:
- Training and tuning in production systems.
- SageMaker Training Jobs and the Training Step (see the code sketch after this list).
- SageMaker Hyperparameter Tuning Jobs and the Tuning Step.
- A multi-class classification network to predict species of penguins.
- Implicit and explicit dependencies between pipeline steps.
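Here is a minimal sketch of the kind of Training and Tuning Steps we'll add. The framework version, hyperparameter names, ranges, and metric regex are illustrative; in the session, the training inputs come from the Processing Step's output properties, which is what creates the explicit dependency between the steps.

```python
# A minimal sketch of a training step and a tuning step, assuming a train.py
# script that builds the multi-class penguins classifier.
import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.tensorflow import TensorFlow
from sagemaker.tuner import HyperparameterTuner, IntegerParameter
from sagemaker.workflow.steps import TrainingStep, TuningStep

role = sagemaker.get_execution_role()

estimator = TensorFlow(
    entry_point="train.py",
    framework_version="2.11",
    py_version="py39",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
    hyperparameters={"epochs": 50},
)

# In the real pipeline, these S3 locations are the Processing Step's output
# properties, so SageMaker knows the training step depends on it.
train_inputs = {
    "train": TrainingInput(s3_data="s3://my-bucket/penguins/train"),
    "validation": TrainingInput(s3_data="s3://my-bucket/penguins/validation"),
}

train_step = TrainingStep(name="train-model", estimator=estimator, inputs=train_inputs)

# The tuning step launches several training jobs and keeps the best one
# according to the objective metric logged by the training script.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="val_accuracy",
    objective_type="Maximize",
    metric_definitions=[{"Name": "val_accuracy", "Regex": "val_accuracy: ([0-9\\.]+)"}],
    hyperparameter_ranges={"epochs": IntegerParameter(10, 100)},
    max_jobs=6,
    max_parallel_jobs=3,
)

tune_step = TuningStep(name="tune-model", tuner=tuner, inputs=train_inputs)
```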
Session 3 – Evaluation and Registration
This session will extend the pipeline with a step for evaluating the model and another for registering it in the Model Registry. We’ll cover the following topics:
- Model versioning in production systems.
- Evaluating the Penguins model.
- Introduction to the Model Registry.
- The SageMaker Model Step.
- The SageMaker Condition Step and Fail Step (see the code sketch after this list).
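Here is a minimal sketch of the registration branch: a Condition Step checks the accuracy reported by the evaluation step and either registers the model in the Model Registry or fails the pipeline. The evaluation step name, report path, threshold, and S3 locations are illustrative.

```python
# A minimal sketch of conditional model registration, assuming the evaluation
# step already wrote an evaluation.json report with the model's accuracy.
import sagemaker
from sagemaker.image_uris import retrieve
from sagemaker.model import Model
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.fail_step import FailStep
from sagemaker.workflow.functions import JsonGet
from sagemaker.workflow.model_step import ModelStep
from sagemaker.workflow.pipeline_context import PipelineSession
from sagemaker.workflow.properties import PropertyFile

session = PipelineSession()
role = sagemaker.get_execution_role()

# The evaluation step exposes its metrics to the pipeline through a PropertyFile.
evaluation_report = PropertyFile(
    name="evaluation-report",
    output_name="evaluation",
    path="evaluation.json",
)

model = Model(
    image_uri=retrieve(framework="tensorflow", region="us-east-1", version="2.11",
                       image_scope="inference", instance_type="ml.m5.xlarge"),
    model_data="s3://my-bucket/penguins/model/model.tar.gz",
    role=role,
    sagemaker_session=session,
)

# Registering the model creates a new version inside a Model Registry package group.
register_step = ModelStep(
    name="register-model",
    step_args=model.register(
        model_package_group_name="penguins",
        content_types=["text/csv"],
        response_types=["text/csv"],
        approval_status="Approved",
    ),
)

fail_step = FailStep(name="fail", error_message="Model accuracy is below the threshold")

# Only register the model if its accuracy clears the threshold; otherwise fail.
condition_step = ConditionStep(
    name="check-accuracy",
    conditions=[
        ConditionGreaterThanOrEqualTo(
            left=JsonGet(
                step_name="evaluate-model",
                property_file=evaluation_report,
                json_path="metrics.accuracy.value",
            ),
            right=0.70,
        )
    ],
    if_steps=[register_step],
    else_steps=[fail_step],
)
```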
Session 4 – Deploying the Model
This session will extend the pipeline with a step for deploying the model to an endpoint. We’ll cover the following topics:
- Deploying directly from the Model Registry.
- Custom inference code.
- Introduction to model repacking in SageMaker.
- Automatically capturing live traffic.
- The SageMaker Lambda Step (see the code sketch after this list).
- Extending the Pipeline to deploy the model.
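Here is a minimal sketch of the deployment step, assuming a small Lambda function (deploy_fn.py is an illustrative name) that takes the latest approved model from the registry and creates or updates the endpoint. The keys in the inputs dictionary are made up for this example; they are simply what the function receives.

```python
# A minimal sketch of a deployment step driven by a Lambda function.
import sagemaker
from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.lambda_step import LambdaStep

role = sagemaker.get_execution_role()

# The Lambda function receives the model package ARN and the endpoint name and
# handles the actual create/update endpoint calls.
deploy_lambda = Lambda(
    function_name="deploy-penguins-model",
    execution_role_arn=role,
    script="deploy_fn.py",
    handler="deploy_fn.lambda_handler",
)

deploy_step = LambdaStep(
    name="deploy-model",
    lambda_func=deploy_lambda,
    inputs={
        "model_package_arn": "<latest-approved-model-package-arn>",
        "endpoint_name": "penguins-endpoint",
        # Where the endpoint stores captured live traffic in S3, so the
        # monitoring sessions later in the program have data to work with.
        "data_capture_destination": "s3://my-bucket/penguins/data-capture",
    },
)
```

Keeping the deployment logic in a Lambda function means the pipeline can point at different endpoints or capture settings without changing the steps themselves.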
Session 5 – Data Monitoring
This session extends the pipeline to compute a data baseline and sets up a Data Monitoring Job to detect anomalies in live traffic data. We'll cover the following topics:
- Identifying data drift from first principles.
- Computing a data baseline to detect data drift.
- The SageMaker QualityCheck Step (see the code sketch after this list).
- Setting up a Data Monitoring Schedule.
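Here is a minimal sketch of the two pieces involved: a QualityCheck Step that computes the data baseline inside the pipeline, and a monitoring schedule that compares the endpoint's captured traffic against it. Bucket names, the endpoint name, and the cadence are illustrative.

```python
# A minimal sketch of a data baseline plus a data-quality monitoring schedule.
import sagemaker
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat
from sagemaker.workflow.check_job_config import CheckJobConfig
from sagemaker.workflow.quality_check_step import DataQualityCheckConfig, QualityCheckStep

role = sagemaker.get_execution_role()

# The QualityCheck step profiles the training data and produces the baseline
# statistics and constraints that live traffic will be compared against.
data_quality_step = QualityCheckStep(
    name="data-quality-baseline",
    check_job_config=CheckJobConfig(role=role, instance_type="ml.m5.xlarge", instance_count=1),
    quality_check_config=DataQualityCheckConfig(
        baseline_dataset="s3://my-bucket/penguins/train/train.csv",
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri="s3://my-bucket/penguins/data-quality",
    ),
    # Register the computed statistics as the new baseline instead of
    # checking them against a previous one.
    skip_check=True,
    register_new_baseline=True,
)

# Outside the pipeline, a monitoring schedule runs the comparison on a cadence
# against the traffic captured by the endpoint.
monitor = DefaultModelMonitor(role=role, instance_type="ml.m5.xlarge",
                              instance_count=1, volume_size_in_gb=20)

monitor.create_monitoring_schedule(
    monitor_schedule_name="penguins-data-monitoring",
    endpoint_input="penguins-endpoint",
    statistics="s3://my-bucket/penguins/data-quality/statistics.json",
    constraints="s3://my-bucket/penguins/data-quality/constraints.json",
    output_s3_uri="s3://my-bucket/penguins/data-monitoring",
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```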
Session 6 – Model Monitoring
This session extends the pipeline to compute a performance baseline and sets up a Model Monitoring Job to detect drift or anomalies in the model's predictions. We'll cover the following topics:
- Identifying model drift from first principles.
- Computing a performance baseline to detect model drift.
- SageMaker Batch Transform Jobs and the Transform Step.
- Generating ground-truth data.
- Computing performance metrics.
- Setting up a Model Monitoring Schedule (see the code sketch after this list).
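Here is a minimal sketch of the model-quality side, assuming the endpoint is already capturing traffic and ground-truth labels are uploaded to S3 on a regular basis. The attribute names, paths, and threshold of what counts as drift are illustrative; the multi-class problem type matches the penguins example.

```python
# A minimal sketch of a model-quality monitoring schedule.
import sagemaker
from sagemaker.model_monitor import CronExpressionGenerator, EndpointInput, ModelQualityMonitor

role = sagemaker.get_execution_role()

monitor = ModelQualityMonitor(
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    volume_size_in_gb=20,
)

# The schedule joins the captured predictions with the ground-truth labels and
# compares the resulting metrics against the performance baseline.
monitor.create_monitoring_schedule(
    monitor_schedule_name="penguins-model-monitoring",
    endpoint_input=EndpointInput(
        endpoint_name="penguins-endpoint",
        destination="/opt/ml/processing/input",
        # The attribute in the captured payload that holds the predicted class.
        inference_attribute="prediction",
    ),
    ground_truth_input="s3://my-bucket/penguins/ground-truth",
    problem_type="MulticlassClassification",
    constraints="s3://my-bucket/penguins/model-quality/constraints.json",
    output_s3_uri="s3://my-bucket/penguins/model-monitoring",
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```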
An important note about joining the program: You pay once to join and get lifetime access to every class, session, lesson, and resource in the community. No recurring payments. Ever.
What students are saying
- (…) buying access to the community and courses is one of my best purchases. The value-for-money ratio is fantastic, and some of the additional work you have done on top of the SageMaker course is great. I was not expecting that much value other than a SageMaker course, and you have gone above and beyond that, so thank you very much! — A student who asked to remain anonymous.
Sales Page: https://svpino.gumroad.com/l/mlp