This roadmap outlines a phased journey from classic machine learning foundations to modern deep learning systems and production operations. Each phase highlights anchor lessons from the Coding for MBA series and recommends follow-on topics so you can keep advancing after Day 57.
📐 Looking for mathematical foundations? Check out the comprehensive ML Theory & Mathematics guide, which covers linear algebra, calculus, probability, neural networks, deep learning architectures, and all the mathematical concepts underlying this curriculum.
Start here if you are following the Day 40–57 sequence.
The goal of Phase 1 is to master supervised learning workflows, evaluation techniques, and model selection before layering on deep learning. A minimal end-to-end sketch follows the table below.
Day | Lesson | Key takeaway |
---|---|---|
Day 40 | Introduction to Machine Learning | Frame ML problems, manage the train/validate/test split, and measure performance with cross-validation. |
Day 41 | Supervised Learning – Regression | Fit linear models, tune regularisation, and interpret coefficients for business insights. |
Day 42 | Supervised Learning – Classification Part 1 | Compare logistic regression and decision trees while diagnosing accuracy, precision, and recall. |
Day 43 | Supervised Learning – Classification Part 2 | Apply ensemble methods, analyse ROC curves, and choose classification thresholds. |
Day 44 | Unsupervised Learning | Cluster customer segments and reduce dimensionality with PCA. |
Day 45 | Feature Engineering & Evaluation | Build repeatable feature pipelines and validate models with more nuanced metrics. |
Day 46 | Intro to Neural Networks | Understand perceptrons, activation functions, and gradient descent. |
Day 47 | Convolutional Neural Networks | Apply convolutional filters for image classification. |
Day 48 | Recurrent Neural Networks | Model sequential data with RNNs, LSTMs, and GRUs. |
Day 49 | Natural Language Processing | Build text classification pipelines with tokenisation and embeddings. |
Day 50 | MLOps | Package, persist, and monitor models for reliable deployment. |
Day 51 | Regularised Models | Contrast ridge, lasso, elastic net, and Poisson GLMs while measuring coefficient shrinkage. |
Day 52 | Ensemble Methods | Blend bagging, boosting, and stacking ensembles with calibrated probability estimates. |
Day 53 | Model Tuning & Feature Selection | Optimise hyperparameters with grid/Bayesian search and validate feature subsets via permutation importance and RFE. |
Day 54 | Probabilistic Modeling | Master Gaussian mixtures, Bayesian classifiers, EM, and HMM log-likelihoods to reason about uncertainty. |
Day 55 | Advanced Unsupervised Learning | Deploy DBSCAN, hierarchical clustering, t-SNE/UMAP-style embeddings, and autoencoder-driven anomaly detection. |
Day 56 | Time Series & Forecasting | Forecast seasonal demand with ARIMA/SARIMAX, Holt-Winters, and Prophet-style decompositions plus robust evaluation. |
Day 57 | Recommender Systems | Build collaborative filtering and matrix factorisation recommenders with implicit-feedback aware ranking metrics. |
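To make the Phase 1 workflow concrete, here is a minimal scikit-learn sketch (not part of the lessons themselves) that ties together the hold-out split from Day 40, a repeatable preprocessing pipeline from Day 45, ridge regularisation from Days 41 and 51, and cross-validated hyperparameter search from Day 53. The synthetic dataset and parameter grid are illustrative assumptions.

```python
# Minimal Phase 1 sketch: pipeline + regularised regression + cross-validated tuning.
# The synthetic dataset and the alpha grid are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data stands in for a real business dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=15.0, random_state=0)

# Hold out a test set before any tuning (Day 40).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Feature scaling + ridge regression in one repeatable pipeline (Days 41, 45, 51).
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", Ridge()),
])

# Grid search with 5-fold cross-validation over the regularisation strength (Days 40, 53).
search = GridSearchCV(pipeline, {"model__alpha": [0.1, 1.0, 10.0]}, cv=5, scoring="r2")
search.fit(X_train, y_train)

print("Best alpha:", search.best_params_["model__alpha"])
print("Held-out R^2:", search.best_estimator_.score(X_test, y_test))
```

The same pipeline-plus-search pattern extends directly to the classifiers and ensembles of Days 42, 43, and 52 by swapping the estimator and the scoring metric.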
Deep learning expands the model families available in Phase 1. Focus on building intuition for architectures, transfer learning, and optimisation. A short attention sketch follows the table below.
Day | Lesson | Key takeaway |
---|---|---|
Day 58 | Transformers and Attention | Assemble encoder–decoder stacks, fine-tune pretrained checkpoints, and interpret attention heatmaps with deterministic demos. |
Day 59 | Generative Models | Compare autoencoders, VAEs, GANs, and diffusion denoisers while monitoring reconstruction loss curves on synthetic data. |
Day 60 | Graph and Geometric Learning | Prototype GraphSAGE and GAT-style message passing networks for toy node classification tasks with interpretable attention scores. |
Day 61 | Reinforcement and Offline Learning | Explore policy/value methods, contextual bandits, and offline evaluation baselines that converge to reproducible reward thresholds. |
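As a taste of the attention mechanism behind Day 58, the sketch below computes scaled dot-product attention in plain NumPy; the random query, key, and value matrices and their shapes are placeholder assumptions for illustration only.

```python
# Minimal scaled dot-product attention sketch (Day 58); inputs are random placeholders.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention outputs and weights for query/key/value matrices."""
    d_k = Q.shape[-1]
    # Similarity of each query to every key, scaled to stabilise the softmax.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys turns scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, 8-dimensional embeddings
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # one value vector per key

outputs, weights = scaled_dot_product_attention(Q, K, V)
print(outputs.shape, weights.shape)  # (4, 8) (4, 6)
```

Transformer layers stack this operation across multiple heads and add learned projections, feed-forward blocks, and residual connections, which is where the Day 58 lesson picks up.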
Ensure that your models meet ethical, legal, and organisational standards before moving them into production.
Turn prototypes into production systems by investing in reliable infrastructure and collaboration workflows. A minimal model-serving sketch follows the table below.
Day | Lesson | Key takeaway |
---|---|---|
Day 65 | MLOps Pipelines and CI/CD | Orchestrate feature stores, registries, and GitHub Actions workflows with DAG-style automation. |
Day 66 | Model Deployment and Serving | Compare REST, gRPC, batch, streaming, and edge serving patterns with load-tested adapters. |
Day 67 | Model Monitoring and Reliability | Detect data drift, evaluate canaries, and export observability metrics for retraining triggers. |
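As one illustration of the serving patterns in Days 50 and 66, the sketch below assumes a scikit-learn model persisted with joblib to a hypothetical model.joblib file and wraps it in a FastAPI REST endpoint; the two-feature schema, the file name, and the framework choice are assumptions rather than the lessons' prescribed stack.

```python
# Minimal REST serving sketch (Days 50, 66); the model file and feature schema are assumptions.
# Run with: uvicorn serve:app --reload  (assuming this file is saved as serve.py)
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# Model persisted earlier in training code with joblib.dump(model, "model.joblib").
model = joblib.load("model.joblib")

class Features(BaseModel):
    feature_a: float
    feature_b: float

@app.post("/predict")
def predict(payload: Features):
    # Reshape the single observation into the 2-D array scikit-learn expects.
    prediction = model.predict([[payload.feature_a, payload.feature_b]])
    return {"prediction": float(prediction[0])}
```

Logging the incoming payloads and predictions from an endpoint like this is the raw material for the drift detection and retraining triggers covered in Day 67.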
Progressing through these phases transforms the Day 40–57 lessons into a comprehensive ML competency path. Loop back to earlier phases whenever you encounter new domains or stakeholders—revisiting the fundamentals will keep each new system grounded in sound methodology.