
Master AI from Zero → Production, in 8 Weeks.

Build a RAG system. Train an AI agent. Ship to production. A weekly live session with Dr. Arjun Jain (IISc Faculty, ex-Yann LeCun lab). Your own TA, start to finish.

* ₹5,000 refundable deposit

  • IIT Bombay: Faculty at CSE Dept
  • IISc Bangalore: Faculty at CDS Dept
  • Max Planck: PhD
  • NYU: PostDoc with Yann LeCun
  • 8000+ citations
  • Apple: Research Scientist
  • Fast Code AI: Scaled to 48 employees and 20+ clients
  • Mercedes-Benz: Sold previous company to Mercedes-Benz

Syllabus

Setup & Warm-Up

Get your machine battle-ready. Python refresher, NumPy drills, PyTorch installation. No surprises later. Complete at your own pace before Day 1, or skip if you are already set.

Topics: Environment setup (Cursor/Windsurf, CUDA), Python refresh (functions, OOP), NumPy operations & tensor basics, plotting fundamentals, PyTorch installation & sanity check.
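For a flavour of the Week 0 sanity check, here is a minimal sketch (not the course's official setup script) that confirms PyTorch imports cleanly and reports whether a CUDA device is visible:

```python
# Minimal PyTorch sanity check: confirm the install works and report CUDA availability.
import torch

def sanity_check():
    x = torch.rand(3, 3)          # random tensor on the CPU
    y = x @ x.T                   # a simple matmul exercises the core ops
    print("PyTorch version:", torch.__version__)
    print("Matmul OK, result shape:", tuple(y.shape))
    if torch.cuda.is_available():
        print("CUDA device:", torch.cuda.get_device_name(0))
    else:
        print("No CUDA device found; CPU or Colab is fine for the early weeks.")

if __name__ == "__main__":
    sanity_check()
```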

Intro & Classic ML

Stop writing rules; start learning from data. We transition from hand-coded algorithms to the Data-Driven Paradigm. You will start by analyzing the "lazy" Nearest Neighbor approach, then move to the industry-standard parametric approach (linear classifiers). Finally, you will demystify the "learning" process by breaking down loss functions (cross-entropy) and the engine of all deep learning: gradient descent.

Topics: Data-driven vs. hand-coded, Nearest Neighbor (L1 distance), linear classifiers & matrix math, softmax & cross-entropy loss, optimization via gradient descent.

Lab 0: Implement k-NN from scratch + intro to linear scores using NumPy.
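To preview Lab 0, here is a minimal NumPy sketch of a 1-nearest-neighbor classifier with L1 distance; the lab's version extends this to k > 1 and adds the linear-score part:

```python
# Tiny 1-nearest-neighbor classifier with L1 (Manhattan) distance, NumPy only.
import numpy as np

def nn_predict(X_train, y_train, X_test):
    """Label each test point with the label of its closest training point (L1 distance)."""
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, x in enumerate(X_test):
        dists = np.abs(X_train - x).sum(axis=1)  # L1 distance to every training point
        preds[i] = y_train[np.argmin(dists)]     # take the label of the nearest one
    return preds

# Toy usage on made-up 2D points
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array([0, 0, 1])
print(nn_predict(X_train, y_train, np.array([[0.2, 0.1], [4.5, 5.2]])))  # -> [0 1]
```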

Forward/Backward from Scratch

Rip the neural net open. Compute forward passes. Derive gradients by hand. Verify with gradient checks. After this week, autograd is convenience, not magic. You'll never be confused by a PyTorch error again.

Topics: Computational graphs explained, FC/ReLU/Conv forward passes, backpropagation derivations, gradient checking methodology.

Lab 1: Manual backprop in NumPy with gradient verification.
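As a taste of the gradient-checking methodology, here is a minimal sketch that compares an analytic gradient against a centred finite-difference estimate (the test function and tolerances are illustrative, not the lab's exact harness):

```python
# Numerical gradient check: analytic gradient vs. centred finite differences.
import numpy as np

def numerical_grad(f, x, h=1e-5):
    """Estimate df/dx elementwise with (f(x+h) - f(x-h)) / (2h)."""
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        orig = x[idx]
        x[idx] = orig + h
        fp = f(x)
        x[idx] = orig - h
        fm = f(x)
        x[idx] = orig               # restore the original value
        grad[idx] = (fp - fm) / (2 * h)
    return grad

# Example: f(x) = sum(x**2), whose analytic gradient is 2x.
x = np.random.randn(4)
analytic = 2 * x
numeric = numerical_grad(lambda v: np.sum(v ** 2), x.copy())
rel_err = np.abs(analytic - numeric).max() / np.abs(analytic).max()
print("max relative error:", rel_err)  # should be tiny (~1e-10 or smaller)
```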

Optimization & Regularization

SGD, momentum, Adam: know when to use what. Learning rate schedules that actually work. Weight decay, BatchNorm, LayerNorm. This week separates "ran a notebook" from "trained a model properly."

Topics: Gradient descent / SGD fundamentals, momentum & Adam optimizers, weight decay & L2 regularization, learning rate schedules, initialization strategies, BatchNorm vs. LayerNorm intuition.

Lab 2: Optimizer shoot-out: compare training curves.
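To make the update rules concrete, here is a minimal sketch of the SGD-with-momentum and Adam steps for a single parameter array; Lab 2 compares these on real training curves:

```python
# Hand-rolled parameter updates: SGD with momentum vs. Adam (single parameter tensor).
import numpy as np

def sgd_momentum_step(w, grad, v, lr=1e-2, mu=0.9):
    v = mu * v - lr * grad                 # accumulate a velocity term
    return w + v, v

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)              # bias correction for step t (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```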

PyTorch: Autograd → Training Loops

Tensors, autograd, nn.Module: mastered. Clean training loops you'd ship to production. DataLoaders, hooks, checkpoints. Professional patterns. After this, your code looks like it came from a real ML team.

Topics: Tensors & autograd deep dive, nn.Module architecture, Dataset & DataLoader patterns, hooks for debugging/inspection, clean train/eval loop structure.

Lab 3: Train a small ConvNet end-to-end in PyTorch.
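As a preview of the train/eval loop pattern, here is a stripped-down sketch; the model, loaders, and hyperparameters are placeholders rather than the lab's exact code:

```python
# Skeleton of a clean PyTorch training loop: explicit train/eval phases and checkpointing.
import torch
from torch import nn

def train(model, train_loader, val_loader, epochs=5, lr=1e-3, device="cpu"):
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        model.train()                              # enable dropout / BatchNorm updates
        for xb, yb in train_loader:
            xb, yb = xb.to(device), yb.to(device)
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
        model.eval()                               # freeze norm stats, no grads needed
        correct = total = 0
        with torch.no_grad():
            for xb, yb in val_loader:
                xb, yb = xb.to(device), yb.to(device)
                correct += (model(xb).argmax(dim=1) == yb).sum().item()
                total += yb.numel()
        print(f"epoch {epoch}: val acc {correct / total:.3f}")
        torch.save(model.state_dict(), f"checkpoint_epoch{epoch}.pt")  # simple checkpoint
```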

History of DL Architectures

Trace the evolution. LeNet → AlexNet → ResNet. RNN → LSTM → Attention. Know why each architecture was invented and what it solved. History = pattern recognition for building what's next.

Topics: LeNet → AlexNet → VGG → ResNet → Inception, RNN → LSTM → GRU progression, the rise of attention mechanisms, model zoo timelines.

Mini-reading: Choose one landmark paper to skim. Quiz on architecture evolution.

Transformers (with Practical)

The architecture behind GPT, BERT, and everything modern. Self-attention from first principles. Multi-head attention. Positional encodings. Scaling laws. Then you build a character-level Transformer yourself. Small model, massive insight.

Topics: Self-attention mechanism unpacked, Multi-Head Attention (MHA), positional encodings (sinusoidal, learned), scaling laws & inference tricks.

Lab 4: Build a tiny character-level Transformer on a toy corpus.
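To make self-attention concrete before Lab 4, here is a minimal single-head scaled dot-product attention sketch in PyTorch; the character-level Transformer wraps this in multi-head form with masking and positional encodings:

```python
# Single-head scaled dot-product self-attention over a batch of sequences.
import math
import torch
from torch import nn

class SelfAttention(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))  # (batch, seq, seq)
        weights = scores.softmax(dim=-1)       # attention weights over the sequence
        return weights @ v                     # weighted sum of value vectors

x = torch.randn(2, 5, 16)                      # toy batch: 2 sequences, length 5, dim 16
print(SelfAttention(16)(x).shape)              # -> torch.Size([2, 5, 16])
```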

Generative Models I

Generate images from noise. GANs: the adversarial game, failure modes, fixes. VAEs: ELBO, latent spaces, reconstruction. Understand why training is unstable, and how to stabilize it. You'll generate MNIST/CIFAR images.

Topics: GANs (minimax game, failure modes, stabilization tricks), VAEs (ELBO derivation, KL divergence intuition), RNN seq2seq vs. attention-based generation.

Lab 5: DCGAN or VAE on MNIST/CIFAR (demo + code).
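For the VAE side of Lab 5, here is a minimal sketch of the loss implied by the ELBO (reconstruction term plus the analytic KL to a unit Gaussian) together with the reparameterization trick; the lab builds the full encoder/decoder around it:

```python
# VAE loss: reconstruction term + KL divergence between q(z|x) and a standard normal.
import torch
import torch.nn.functional as F

def vae_loss(x_recon, x, mu, logvar):
    """Negated ELBO: binary cross-entropy reconstruction + analytic Gaussian KL."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so gradients flow through mu and logvar."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```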

Generative Models II: Diffusion → LLM Post-Training

Where it all converges. Diffusion models: noise-to-image pipelines, U-Net, schedulers. LLMs: tokenization (BPE), embeddings, post-training techniques. SFT, RLHF, GRPO: the methods that turn base models into useful assistants. Capstone begins.

Topics: Diffusion intuition (forward/reverse process), U-Net architecture & schedulers, Classifier-Free Guidance (CFG), LLM fundamentals (tokenizers, embeddings, logits → tokens), post-training (SFT, RLHF intuition, GRPO intuition), evaluation & safety.

Lab 6: Tokenizer exploration + SFT-style finetune stub. Capstone kickoff.
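To make the diffusion forward process concrete, here is a minimal sketch of the closed-form noising step with a simple linear beta schedule; schedule values and shapes are illustrative, not the course's exact settings:

```python
# Forward (noising) process of a DDPM-style diffusion model, in closed form.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)         # cumulative product of alphas

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) directly, without iterating through the steps."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over image dims
    return torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise, noise

x0 = torch.rand(4, 1, 28, 28)                    # a toy batch of "images"
xt, eps = q_sample(x0, t=torch.tensor([10, 100, 500, 999]))
print(xt.shape)                                  # -> torch.Size([4, 1, 28, 28])
```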

Course details

Duration: 8 weeks, plus Week 0 (free, self-paced)

Schedule: 5 January 2026 to 28 February 2026

Price: ₹44,999 (India) / $499 (Global). Risk-free trial included (details below).

Who can join?

  • You know Python beyond print("Hello World")
  • You have (or are pursuing) any undergraduate degree
  • You're serious about transitioning to AI

We review every application personally. Your passion counts.

  • Self-paced videos with weekly code review
  • 6–10 short videos/week (~90–120 min total)
  • 1 hands-on lab per week
  • 5–10 question quiz
  • Discussion forum + 2× office hours/week
  • Deadline: Sundays 23:59 IST
  • Week 0 and Week 1 (available directly upon acceptance)

  • A working RAG system
  • A task-oriented AI agent
  • A deployable template
  • Certificate from AI Masterclass
  • Access to private community
  • Guest lectures from industry experts
  • Career mentorship

Testimonials

Your Computer Vision course, covering everything from core ideas to current approaches, was what made the field click for me. The way you connected theory to applications helped things fall into place. I'd tell anyone learning from you that they'll walk away with genuine understanding, not just formulas.

Sahil Shah

Quant Trader at Tower Research Capital

It was a great pleasure to be mentored by Arjun while working on our research paper on 3D human pose estimation from monocular images. One thing that stuck with me from his teaching was his advice to fail fast, which is something applicable even beyond research. Given the deluge of information available online now about AI topics, it's essential to find a Guru that can help to cut through the noise and sharpen one's intuition for the subject. For those looking to level up in the path towards mastering AI, I can say with absolute certainty that you're in good hands with Arjun.

Saurabh Sharma

Doctoral Researcher at UC Santa Barbara

Prof. Arjun was my instructor for the computer vision course at IIT Bombay while I was an undergrad there in 2018. Some of the visually intuitive concepts around convolutional networks' forward pass and backpropagation have stuck with me all these years. He also helped me grasp math-heavy traditional CV concepts like Structure from Motion (SfM), which has come in handy many times in my grad research and work after that.

Uddeshya Upadhyay

Founding Autonomy Scientist at Droneforge

I have been a PhD student working with Dr. Arjun Jain. I also had the opportunity to be a teaching assistant for the computer vision course he taught at IIT Bombay. The one thing that stood out in his teaching is his focus on the end objectives while explaining a concept. This made the lectures much more helpful from an application point of view. It also helps one follow the chain of thought without getting overwhelmed by a large volume of words.

Rahul Mitra

Staff Engineer at Qualcomm

What stood out to me in Prof. Arjun’s Computer Vision class was the principled way he built ideas, starting from classical vision to modern deep learning architectures, making concepts connect naturally and grounded in theory. One moment that really stuck was when he explained that “everything is a convolution,” even linear layers — it completely changed how I thought about neural networks. I would highly recommend learning from him!

Divyansh Srivastava

Graduate Student Researcher, UC San Diego

I’ve had the privilege of being Arjun’s PhD student and seeing his knowledge and passion up close for about a decade. I particularly remember his lecture titled, ‘Babysitting the Learning Process’, which presented practical insights that are relevant even today. The new AI age is like rocket science, where details matter, and must be mastered! In this course, you’ll not only learn from a teacher, but also a mentor.

Rishabh Dabral

Research Group Leader at Max-Planck Institute for Informatics

FAQs

I'm not from a CS background. Will I survive?

Yes. We have mechanical engineers, ECE grads, even a few doctors. You need Python basics and hunger to learn. Your TA ensures you don't fall behind.

Will this actually get me a job?

We teach you to BUILD, not just understand. You'll have 3 portfolio projects and real implementations. That's what gets jobs. Your projects and practical skills speak louder than certificates.

How is this different from YouTube/Coursera?

YouTube doesn't review your code. Coursera doesn't assign you a human who's invested in your success. We do both. You get a personal TA who reviews your work, answers doubts, and ensures you actually build things.

Do I need to pay for AI tools like ChatGPT, Claude, or Cursor?

No. We provide individual accounts for Gemini Pro, Windsurf and Google Colab. You get full access to premium AI tools throughout the course.

What if I can't dedicate 6-10 hours some weeks?

Life happens. You get recordings forever. Your TA helps you catch up. But consistency beats intensity - better to do 1 hour daily than 10 hours on Sunday.

Why ₹5,000 upfront if I might get rejected?

It's fully refundable if we don't select you. This filters serious applicants and ensures committed cohort members. You get 100% back within 7 days if rejected.

Can I get a refund after starting Week 1?

Yes. Try Week 1 with your personal TA. Not convinced? Get the full ₹5,000 back. If you continue, pay the remaining ₹30,000.

What exactly does "Personal TA" mean?

One human assigned to you. They review your code, answer doubts on WhatsApp, and do weekly check-ins. Response time: under 24 hours. Usually under 2 hours.

Do I need GPU/powerful laptop?

No. Any basic laptop that runs Python will do. We provide cloud credits for heavy compute. You'll learn on Google Colab first anyway.

What roles can I target after this?

ML Engineer, AI Developer, Data Scientist (entry-level), AI Product Engineer. We're not promising FAANG immediately, but these are definitely 2–3x better roles than pure software development.

Do you provide placement support?

The top 2 performers get a Fast Code AI internship or job opportunity. Everyone gets a LinkedIn optimization guide and access to the referral network. Your projects speak louder than certificates.

Can I build my own AI product after this?

Yes. You'll know enough to prototype any AI idea. Week 8 specifically covers deployment. Many students have launched MVPs during the course itself.

What if I'm not ready?

Apply anyway. Worst case: we refund ₹5,000 and tell you what to prepare. Best case: you discover you're more ready than you thought.

Can I join the next cohort instead?

Stay ahead in AI.

Monthly digest of top podcasts, industry news, practical resources, and masterclass insights.