Cohort 1.0
Ongoing: 5th January 2026 – 28th February 2026
Cohort Information
Start Date
January 5, 2026
End Date
February 28, 2026
Duration
8 weeks (plus Week 0)
Status
Ongoing
Investment
₹5,000 refundable deposit
Full course: ₹44,999 (pay after Week 1)
Updates & Announcements
Cohort 1: Slot 1 Recap, Matrix Magic, and Finding Your "Why"
Hi Everyone,
First off, a huge thank you to everyone who showed up at 8:00 AM today. The energy was fantastic, and it was great to see so many of you having already crushed Week 0.
For those who couldn't make it, you missed a lively session where we went far beyond the standard curriculum. Here is a quick summary and some key takeaways so you can stay in the loop.
Session Recap: From Basics to Deep Tech
We officially kicked off Week 1, moving from the basics of Python/Math into the Data-Driven Paradigm.
We started with a fundamental discussion of Linear Algebra: specifically, moving past the "how" of matrix multiplication to the intuition behind it. Raushan and Nitish helped us visualize matrices not just as grids of numbers, but as linear transformations that rotate, scale, and skew vectors in high-dimensional space.
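If you want to play with that intuition in code, here is a minimal NumPy sketch (my own illustration, not part of the session materials): each 2×2 matrix below is one of the transformations we talked about, and applying it is just a matrix-vector product.

```python
import numpy as np

# A 2x2 matrix viewed as a linear transformation of the plane.
theta = np.pi / 4  # 45-degree rotation
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scale = np.array([[2.0, 0.0],
                  [0.0, 0.5]])   # stretch x, squash y
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])   # skew along x

v = np.array([1.0, 0.0])        # a unit vector on the x-axis
print(rotate @ v)               # rotated 45 degrees: [0.707..., 0.707...]
print(scale @ v)                # stretched to [2.0, 0.0]
print((shear @ rotate) @ v)     # composing transformations = multiplying matrices
```

Note the last line: chaining transformations is exactly matrix multiplication, which is why the order of the product matters.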
We also covered:
- The Rotation Matrix Challenge: I left you with a favorite interview question of mine regarding the special properties of rotation matrices (columns, dot products, norms).
- Logistics: We are moving to local development using IDEs (Windsurf/Cursor) and submitting via GitHub Classroom. This simulates a real-world engineering workflow.
- Project Showcase: I walked through some of our internal projects at fastcode.ai, including our legal tech work, HR portals, and our research into "Needle in the Haystack" retrieval problems.
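If you want to poke at the rotation-matrix challenge numerically before writing up your answer, a quick sanity check might look like this (an illustrative sketch of the properties hinted at above; the full reasoning is yours to work out):

```python
import numpy as np

# A 2D rotation matrix for an arbitrary angle.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

c0, c1 = R[:, 0], R[:, 1]
print(np.dot(c0, c1))          # columns are orthogonal: dot product ~ 0
print(np.linalg.norm(c0))      # each column has unit norm: 1.0
print(np.linalg.det(R))        # determinant is +1 (pure rotation, no flip)
print(np.allclose(R.T @ R, np.eye(2)))  # so R's transpose is its inverse
```

Try a few values of `theta` and convince yourself these properties hold for all of them, then ask why.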
Key Insights & "Tangents"
The real magic of these live sessions often happens in the Q&A tangents. Today, Simrat, Prashant, and Gowrishankar led us into a deep dive on:
- Deployment Strategy: When should you actually deploy your own model vs. using an API? (Hint: It's usually about privacy or specific fine-tuning needs.)
- Speech-to-Speech: We touched on the cutting edge of Full Duplex vs. Half Duplex models and latency challenges.
- Fine-Tuning Realities: Why fine-tuning an LLM often leads to "catastrophic forgetting" and why standard RAG or API calls are often the superior initial choice.
Why You Should Join Live
The recorded content is there to teach you the concepts. The live sessions are there to teach you the context.
As we discussed today, learning happens in stages. It happens when you struggle with a problem, and it happens subconsciously when you sleep on it. If you wait until the weekend to start, you rob your brain of the time it needs to integrate these concepts.
I encourage you to attend as many live sessions as possible. This is your time to:
- Pick my brain: Ask about industry trends, deployment nightmares, or research papers.
- Unblock yourself: Don't get stuck on logistics; let us help you move fast.
- Network: Your peers are asking brilliant questions; learn from them.
Next Steps
- Complete Week 1 early. Don't let the TAs chase you!
- Submit via GitHub. Get comfortable with the PR workflow.
- Find your "Why." As I mentioned at the end: If you don't have a strong reason for why you are putting yourself through this discomfort, it won't stick. Find that reason and use it as fuel.
See you in the next session!
Best,
Dr. Arjun Jain
Founder, AIMasterClass.in
Why do we need non-linear activation functions?
Years ago, Arjun answered a fundamental question that still stumps many beginners: "Why do we need non-linear activation functions?"
It's easy to say "to learn complex patterns," but that's not an explanation. It's a soundbite. At AIMasterClass.in, we teach the geometric reality.
The Linear Trap
If we stack layers in a neural network without non-linearities, even with 100 layers, mathematically we are just composing linear functions with linear functions: f(x) = W₂(W₁x) = (W₂W₁)x is still a single linear map.
And that straight line fails. It creates a "linear" model that is too rigid for the real world.
The Power of the "Bend"
This is where the magic happens.
We introduced a simple ReLU (Rectified Linear Unit) activation function.
Suddenly, the network isn't drawing a straight line anymore.
By simply saying "zero out the negatives," we allowed the function to bend. That bend allows the model to wrap around the red dots and successfully classify the green ones. This is how Deep Learning actually worksโby folding space, not just drawing lines through it.
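To make the "linear trap" concrete, here is a tiny NumPy sketch (an illustration of the argument above, with made-up random weights): two stacked linear layers collapse into one, and inserting a ReLU between them is what breaks the collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 2))  # hypothetical layer-1 weights
W2 = rng.standard_normal((1, 4))  # hypothetical layer-2 weights
x = rng.standard_normal(2)

# Without a non-linearity, two layers collapse into one linear map.
deep_linear = W2 @ (W1 @ x)
single_layer = (W2 @ W1) @ x
print(np.allclose(deep_linear, single_layer))  # True: depth bought us nothing

# Insert a ReLU ("zero out the negatives") and the collapse no longer holds.
relu = lambda z: np.maximum(z, 0.0)
deep_nonlinear = W2 @ relu(W1 @ x)
print(np.allclose(deep_nonlinear, single_layer))  # generally False: the "bend" matters
```

The first `allclose` is the linear trap in one line: no single matrix can replace the ReLU'd network, which is exactly the extra expressive power the "bend" buys you.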
The Lesson
If you can't visualize why the math works, you aren't an AI engineer yet; you're just an API user.
At AI Masterclass, we obsess over these first principles. We don't just teach you syntax; we teach you the intuition that lets you debug, design, and innovate. If you want to stop just running code and start understanding the mechanics of intelligence, come join us.
Weekly sessions with Arjun - poll
Cohort 1 Orientation: Recording, Slides & The Road Ahead
Hello @everyone,
It was incredible putting faces to names today. The energy in the room was palpable: we have 37 builders, founders, engineers, and researchers here, and I am genuinely excited to see what we build together.
If you missed the session or want to re-watch the Q&A, the recording and slides are below.
Session Recording:
Click here to watch the Meeting Recording
Passcode: ryi2=ZwH
Slides:
(See attached PDF: Orientation AI Masterclass.pdf)
Session Recap & Key Takeaways
For those catching up, here is the essence of what we discussed:
- The Philosophy: Chaotic Action Beats Orderly Inaction
We are here to build, break things, and fix them. You will get uncomfortable; that is a symptom of learning. If you fall behind, finish it first and make it perfect later. Just make it exist.
- The Structure
Month 1 (Weeks 1-4): The heavy lifting. We build the foundation (Classic ML, Neural Networks from scratch, CNNs, Optimization).
Month 2 (Weeks 5-8): The advanced stack (ResNets, Transformers, Diffusion Models) and the Capstone Project.
- The Support System
You are not doing this alone.
Circle: Your first point of contact. Don't be invisible: ask, engage, and share your errors here.
Your Personal TA: You have a direct line via WhatsApp. Use it.
Head TAs: Siddharth, Mukilan, and Sanjay are here to ensure you succeed.
Me: My number is in the slides. If you are truly stuck, I am here.
- The "Don't Be Invisible" Rule
The only way you fail this course is if you disappear. Update your profile picture on Circle, introduce yourself, and find your tribe. This is how you find your future Capstone teammates.
Action Items before January 5th
Complete Week 0 Setup: Ensure your environment is ready.
Connect with your TA: Send them a message.
Circle Profile: Upload a photo and introduce yourself if you haven't.
Show Up: Week 1 content drops soon. Be ready.
The difference isn't talent; it's showing up. You are the founding members of this Masterclass. Let's make it count.
See you on the leaderboard.
Best, Arjun
Cohort 1 Orientation Dec 30, 7:00 PM IST
What we'll cover tomorrow:
- Welcome & context
- Course structure: how the next 8 weeks will unfold
- Meet your TAs: how the support system works
- Assignments & submissions: the workflow + feedback loop
- AI tools & subscriptions: what you'll get access to
- AMA: ask anything before we begin
Drop questions in advance on Circle:
https://ai-masterclass.circle.so/c/aim-cohort-1/#message_2059388557
Zoom Details
Time: Tuesday, Dec 30, 2025 | 7:00 PM IST
Join Zoom Meeting
https://us06web.zoom.us/j/83086324165?pwd=Hu68kEaaL1EAvYxiDeuSs3xN9rddHM.1
Meeting ID: 830 8632 4165