LEARN THE STATE-OF-THE-ART DEEP LEARNING SKILLS YOU NEED TO...

MAKE YOUR END-TO-END REVOLUTION!

Learn Cutting-Edge Deep Learning Skills to Build and Train End-To-End Systems

Dear friend,

If you've been following the field of self-driving cars lately, you've probably noticed a "shift" in the algorithms being used... Indeed, most engineers were focused on topics like Perception, Localization, Planning, Control, V2V, and others...

And just a few years later, it seems like...

Everybody has suddenly shifted towards End-To-End!

I mean, look at these recent videos from the biggest self-driving car companies:

COMMA.AI, E2E PLANNER

WAYMO, EMMA

TESLA, FSD13

NVIDIA, HYDRA-MDP

MINUS ZERO, FOUND. E2E

WAYVE, EMBODIED E2E

Not only is every big player on End-To-End, but even 'legacy' carmakers are

Let me tell you a quick story:

Last year, I was preparing a Deep Learning SOTA 2024 webinar, which I was going to hold live with the Autonomous Driving team of a big carmaker. 400+ self-driving car engineers came to the webinar, and for 50 minutes, I presented how Deep Learning is used in the 4 pillars of autonomous driving: Perception, Localization, Planning, and Control.

Then came the Q&A. I was expecting a few questions about Deep Learning vs traditional approaches, or about the best model for a specific task, but it turned out that...

Almost every single question was about End-To-End Learning!

It felt crazy. My presentation only had 2 or 3 slides on it, and yet, I was overwhelmed with questions on End-To-End Parking, visualizing the inside of E2E models, passing ISO norms with End-To-End, handling underrepresented events and edge cases, scaling to new cities, and more...

What started as a "Modular Deep Learning in Self-Driving Cars" conference ended up being an End-To-End seminar!

And it wasn't just this one time.

Over the next months, I kept meeting companies, and the bigger the company, the more likely they were to have questions on End-To-End Learning. Self-driving car companies wanted to know more about End-To-End. Some of them HAD to, because catching up with Tesla, and being able to "judge" it, was a matter of survival.

It's the same for engineers:

Have you looked at job offers lately? How often do you see words like "End-To-End", or "Foundational Model", or "Imitation Learning" today versus 3 years ago?

Things are shifting to what the British autonomous driving startup Wayve calls "AV 2.0", which can be visualized like this:

This is what has been going on... Now comes a question:

How can you catch up on End-To-End now, with everything else you have to do?

What is there to learn? How long is this going to take? Which papers do you have to read? And what would be the outcome? Some partial knowledge of some bricks here and there?

Things are moving really fast, and if you're ever asked to "catch up on End-To-End", chances are you'll only get a few bricks of knowledge here and there, and might never know enough to feel confident when making a decision related to it.

The last thing we want is to be that investor who can't tell whether a startup has a BS product or a genius product. In both cases, the stakes are high.

It's the same thing for End-To-End: you have to learn enough to be able to develop some "judgment" about it. Because, unlike what this course might suggest...

I don't necessarily recommend going full "End-to-End"!
For many companies, it makes absolutely zero sense. But for others, it makes TONS of sense. Either way, I would recommend EVERYONE take a close look at it before either learning it or disregarding it... and this is why I'm introducing...

END-TO-END REVOLUTION: Prophetic Skills to Master the Endgame of Self-Driving Cars

The first ever End-To-End Learning for Self-Driving Cars course, which contains:

  • Advanced Knowledge: How End-To-End Learning is studied, tested, implemented, and engineered in real self-driving cars
  • Cutting-Edge Projects: Build & Train your own foundational End-To-End Architecture
  • Real-World Connection: Study Tesla's E2E Patents, Understand Nvidia's architectures, and dive into the real world of E2E 

This course is made of 3 modules; let's take a look at what's inside each of them...

MODULE I: Imitation Learning Foundations

In module 1, we're going to learn the foundations of Imitation Learning, then dive into simulators, "self-driving car nuketowns", and the environments used to experiment with E2E systems. Finally, you'll learn how to prepare a dataset to train E2E agents.

What's included:

  • My official apology video explaining why I didn't buy into the End-To-End concept in the first place, and the 4 major changes that made End-To-End Learning possible today
  • The 3 types of End-To-End Learning algorithms, and the difference between Reinforcement Learning, Inverse Reinforcement Learning, and Imitation Learning
  • The little known secret of 'Self-Driving Car Nuketowns', and where to find 6 of them in the world (full address provided)
  • Showcase of an End-To-End autonomous parking algorithm (and where to find the code and ROS bags to make it work)
  • A complete introduction to the world of simulators, and the difference between Open vs Closed Loop Simulators (many engineers work with datasets alone, but building an understanding of simulators can unlock many new projects to work on)
  • The 4 major parts of any imitation-based simulator (when you think about it, a self-driving car simulator must contain a world, traffic, and two other things)
  • A deep dive into Procedural Road Generation, with code examples showing how to create a highway or roundabout (many engineers use simulators like CARLA without really wondering how they're built, this part will introduce you to it and give a good understanding of the pros and cons of using simulation)
  • T-SNE Visualization reveals why simulators can't be trusted for self-driving car data collection, and what to use instead
  • 3 mathematical models we use to simulate traffic, and why they create problems for self-driving car realism (we'll also see other related topics like vehicle dynamics or sensor simulation, and dive into techniques to make it work)
  • Why the "End-To-End via GTA V" simulator was a lie from the beginning, and the #1 reason why we can't use Grand Theft Auto to train self-driving car agents
  • Why it's technically feasible to train End-To-End algorithms on datasets like Waymo, NuPlan, and others, and how to leverage Open-Loop datasets...
  • Possibly the best Self-Driving car simulator out there (much better than CARLA), and a complete overview of the market of simulation (while simulators aren't the only way to train end-to-end, they contribute a lot and having a solid understanding of them can help build a more complete expertise)
  • PROJECT 🔥: A complete walkthrough of the data needed for an End-To-End algorithm to work, including sensor data (LiDAR, Camera, ...), Waypoints/Trajectory, and even Auxiliary Tasks (Depth, Objects, Segmentation, HD Maps, ...)
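To give you a first taste of the Procedural Road Generation part above, here is a minimal sketch (my own illustration, not course code; the radius and point count are purely illustrative) of how a roundabout centerline can be generated procedurally as a list of waypoints:

```python
import math

def roundabout_centerline(radius_m=20.0, n_points=36):
    """Generate (x, y) waypoints along a circular roundabout centerline.

    A parametric generator like this is the simplest building block of
    procedural road synthesis: a highway segment would use a line instead
    of a circle, and lanes would be offset copies of the centerline.
    """
    return [(radius_m * math.cos(2 * math.pi * i / n_points),
             radius_m * math.sin(2 * math.pi * i / n_points))
            for i in range(n_points)]

# 36 evenly spaced waypoints on a 20 m radius circle
pts = roundabout_centerline()
```

Real simulators like CARLA layer lane widths, markings, and junction logic on top of exactly this kind of parametric skeleton, which is what we'll unpack in the course.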

Let's take a break here:​

In the course, you'll have a "global" project made of 3 parts (one per module). In the first part, you'll build practical skills in things like Image Encoding/Decoding (8-bit, 24-bit, ...) and develop a good understanding of datasets. As you probably noticed, this first module is about simulators and test environments. Starting with the end in mind is extremely important in End-To-End.
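As a hint of what the Image Encoding/Decoding part covers, here is a tiny sketch (my illustration, not course code) of the difference between a 24-bit RGB pixel and an 8-bit grayscale one, collapsed using the standard Rec. 601 luma weights:

```python
def rgb24_to_gray8(r, g, b):
    """Collapse a 24-bit RGB pixel (three 8-bit channels) into one
    8-bit luminance value using the Rec. 601 luma weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# A pure-red pixel keeps about 30% of full brightness once collapsed
gray = rgb24_to_gray8(255, 0, 0)  # → 76
```

Understanding bit depths like this matters when you feed sensor data to a network: the encoding decides how much information each channel actually carries.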

MODULE II: End-To-End Architectures

In module 2, we'll dive into the most advanced End-To-End algorithms. You'll learn how to connect Perception to Planning, how to plan a trajectory using Deep Learning, and how to train E2E agents for waypoint prediction.

What's included:

  • The secrets of Deep Motion Planning, and examples of 4 algorithms that use Deep Learning to generate waypoints (although there isn't yet a complete 'classification' of Deep Planning algorithms, we'll use our intuition to build our own)
  • A Deep Dive into the thrilling TransFuser Architecture, and how it's used for End-To-End Autonomous Driving using LiDARs and Cameras (the magic part is that it's not just a camera-based E2E algorithm; it also takes LiDAR, which will have us look into Deep Fusion)
  • Why the best end-to-end algorithms are "modular end-to-end", and 2 simple techniques to enable "Gradient Flowing" in self-driving cars
  • The number 1 rule to follow when plugging the output of Perception to the input of Planning (and some advanced variations of the simple sequential flowing)
  • What Andrej Karpathy meant when he said Tesla is using "Fixed Queries" in Occupancy Networks, and an eye-popping demonstration of how to train queries for specific tasks like tracking or segmentation
  • A detailed overview of modular motion planning, both high and low level (if you are going to be the "Deep Planning guy" of your company, it'll make sense to study Deep Planning right after studying traditional Planning, which is what we'll do here)
  • Deep Planning: How to use LSTMs/GRUs to predict a waypoint sequence that makes sense and follows a realistic trajectory (in fact, it's not just "how to use", but we'll see a concrete end-to-end example of Gated Recurrent Units in action for waypoint prediction)

Wait, pause:

If you're wondering what we'll do here: we are going to study a point-by-point generation of the trajectory. If a trajectory is a set of points, these points must follow each other, so we must remember past information. However, there are moments when the road takes a sudden curve, or the car needs to brake, and in those cases the past waypoints must be totally forgotten. This is what we'll see in this module on Deep Planning.
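To make the remember/forget idea concrete, here is a toy, pure-Python sketch of one GRU step with scalar state and hand-picked weights (purely illustrative; the course trains real recurrent networks on real data). The update gate z decides how much of the new candidate replaces the past state, and the reset gate r decides how much past state is exposed when building that candidate:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    """One GRU step with scalar state/input (toy weights, for intuition).

    z (update gate): how much of the new candidate replaces the past state.
    r (reset gate): how much past state feeds the candidate — pushing r
    toward 0 is how the cell "forgets" old waypoints on a sudden curve.
    """
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))
    return (1.0 - z) * h_prev + z * h_cand

# Hand-picked toy weights (hypothetical, for illustration only)
w = {"wz": 1.0, "uz": 0.0, "wr": 1.0, "ur": 0.0, "wh": 1.0, "uh": 1.0}

# Roll the cell over a short feature sequence, as a waypoint decoder would
h = 0.0
for x in [0.2, 0.1, -3.0]:  # the last step mimics a sudden change of context
    h = gru_step(h, x, w)
```

In the course, the same mechanism runs on vector-valued hidden states and outputs one waypoint per step.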

Let's resume:

  • An exclusive look into 3 extremely advanced Neural Planners (we'll see algorithms like NEAT, NIMP, and others, built by self-driving car giants and research groups like Uber, Geiger's lab, and others)
  • How to be the sole trusted "Go-To" person regarding Deep Learning and End-To-End Learning in your company
  • UniAD: How the CVPR23 winning algorithm works, block by block (we'll study it all: the inputs, the blocks themselves (MapFormer, OccFormer, TrackFormer, ...), the loss function, and even the most important element: the "Attention Flowing")
  • A line-by-line explanation of how to build a Self-Attention based Transformer that fuses LiDAR and Camera data both spatially and temporally (and a clever trick to use CNNs on LiDAR point clouds)
  • Hydra-E2E: How to fuse HydraNets and End-To-End Learning to train an E2E network on multiple tasks (you'll also see the impact of HydraNets in E2E, and why it's almost impossible to build E2E algorithms without HydraNets)
  • The often-overlooked motion model equations we can input in Deep Planners to make them more realistic in training
  • End-To-End Project 🔥: Inside a "from scratch" End-To-End architecture that fuses camera and LiDAR and plugs into Deep Planning to drive autonomously on simulation data
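To preview the kind of mechanism we'll dissect line by line, here is a dependency-free sketch of scaled dot-product attention, where a single camera query attends over a set of LiDAR key/value tokens (my simplification, with plain Python lists; the course builds the full spatio-temporal Transformer):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query over N key/value tokens.

    query: a feature vector (e.g. from the camera branch).
    keys/values: token vectors (e.g. from the LiDAR branch).
    Returns the fused feature: a softmax-weighted sum of the values.
    """
    d = len(query)
    # Similarity between the query and every key, scaled by sqrt(d)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Numerically stable softmax over the scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of the value vectors = fused feature
    return [sum(wgt * v[i] for wgt, v in zip(weights, values))
            for i in range(len(values[0]))]
```

The same three steps (score, softmax, weighted sum) are what runs inside every attention block we'll study, from TransFuser to UniAD, just with learned projections and many queries in parallel.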

MODULE III: End-To-End in Production

In module 3, we'll zoom out and try to answer critical questions regarding End-To-End use in autonomous driving, including the "black box" problem, ISO norms, and more...

What's included:

  • Inside the big lie of End-To-End Learning, and why it's impossible to have the simplicity of "data in, trajectory out" (imitation learning comes with millions of complexities, from the quality of your data, to the algorithm, to the way you train them...)
  • What Causal Confusion is, and a sure way to confuse almost any End-To-End algorithm using temporal sequences
  • 5 "Black Box Openers" you can use to visualize the inside of an End-To-End algorithm and make them as explainable as Modular autonomous driving
  • Why your End-To-End algorithms don't generalize well, and the "good practices" to adopt when scaling an algorithm
  • What happens when you learn from bad drivers, and 3 ways to solve the "imitation problem" (earlier this year, a video of a Tesla in FSD mode caught cutting past a long line of cars to get to the front went viral; we'll see solutions to this problem)
  • Online vs Offline Glue Code: Mobileye explains the tradeoff of removing 300,000 lines of code (and what to do instead)
  • E2E Visualization Project 🔥: Visualize the Inside of an End-To-End algorithm, including 3D Bounding Boxes, Attention Maps, Depth Maps, Trajectory, and more...

If we pause on this last one: you will often be taught that End-To-End is a "black box". Yet, when you understand Deep Learning, you know that's not true, and that you can visualize everything a model contains.

For example, in the final project, you'll learn techniques to open the black box via Attention Map Visualization. Notice in this short sample how the Attention is first focused on the background, but shifts to the cyclist as soon as it enters our lane:
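To illustrate how such an overlay works under the hood, here is a minimal sketch (my own illustration; the project uses real attention maps, bilinear upsampling, and proper color maps) that blends a coarse attention grid over a grayscale image with nearest-neighbour upsampling:

```python
def overlay_attention(image, attn, alpha=0.5):
    """Blend a low-resolution attention map (values in [0, 1]) over a
    grayscale image — the most basic "black box opener".

    image: 2D list of pixel intensities in [0, 255].
    attn:  2D list, much smaller than the image, upsampled here with
           nearest-neighbour lookup for simplicity.
    """
    h, w = len(image), len(image[0])
    ah, aw = len(attn), len(attn[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            a = attn[y * ah // h][x * aw // w]  # nearest-neighbour lookup
            row.append((1 - alpha) * image[y][x] + alpha * a * 255)
        out.append(row)
    return out
```

Swap the grayscale blend for a heatmap colormap and you get exactly the kind of visualization shown in the sample above.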

End-To-End Learning is a challenging task, possibly one of the most difficult, but when you understand it well enough, you can develop intuition, and eventually a mastery of Deep Learning.

BONUS (Value €99)

Tesla Patent Study Vault 🔓

In this bonus, you'll dive into the real-world use of End-To-End with a complete breakdown of Tesla's work.

First, you will access my Tesla FSD Masterclass, a 30-minute deep dive explaining Tesla's HydraNets, Occupancy Networks, and transition to Deep Planning in FSD 12. Then, you'll get access to 3 patent studies on HydraNets, End-To-End Learning, and Trigger Classifiers.

This is the ONLY place where you can find such advanced teachings on Tesla FSD.

Frequently Asked Questions

Who is this course for and not for?

This course is not only advanced, but it's also not for everyone. So let me help you decide.

  • If you don't validate any of the mandatory prerequisites, please don't join
  • Even if you do validate them, but don't validate the recommended prerequisites, make sure you're motivated before you join
  • If you have never heard about End-To-End before, you're probably too early to join
  • If you wish for a 50+ hour course showing you every possible thing in every possible topic, this is not the right course, or platform (all my courses are short)
  • If you are looking for a job, just be aware this course is primarily built for people already in the industry. You're far better off learning Kalman Filters and Stereo Vision if you need a first job.

Sounds good? Okay, so let's now see who I built the course for:

  • You're already in, or close to being in, the AI/Self-Driving Car industry, and you want to be in control of the latest technologies
  • You've been pressured by your boss to learn everything you can about end-to-end for yesterday, and still don't know if you have all the keys
  • You'd like to learn End-To-End but don't know where to start
  • (Recommended) You validate the prerequisites and already have some strong Deep Learning skills

What is the format?

This is a self-study online course, which contains videos, articles, drawings, paper analysis, code, projects, and more...

How long is the course?

The course is estimated between ~5-10 hours, depending on whether you just want to watch the content or do the projects as well. If you really want to explore the field completely, then we also included many ways to explore on your own after taking the course; and I made sure you could go on 20+ hours if you want to.

What are the prerequisites?

At Think Autonomous, our courses are ADVANCED, and you can't take them as a complete beginner. This is one of the reasons why you can complete them in just 5-10 hours (we don't need to re-explain all the foundations). So here are the prerequisites we expect you to have before taking it:

  • Required: Coding in Python
  • Required: Deep Learning Basics with PyTorch (Backpropagation, CNNs, MLPs, ...)
  • Required: High-School Maths (derivatives, sin/cos, ...)
  • Required: Intermediate Computer Vision (Depth Estimation, Segmentation, Object Detection, ...)
  • Required: Modular Self-Driving Car Theory (you know what the words Perception, Localization, Planning, Control mean)
  • Optional: Transformers (we'll see many Transformer based architectures)
  • Optional: Bird-Eye View (BEV Perception, BEV Fusion, BEV HD Maps, BEV Planning, ...)
  • Optional: HydraNets (most working End-To-End architectures are HydraNets)

Do I need a simulator?

No! For the purpose of this course, we have extracted data (images, points, trajectory, ...) from the CARLA simulator and will run it "open-loop". The good part is that the networks will still be trained on CARLA data, which means that if you later want to take your algorithms into CARLA, you can do so without retraining!

Do I get support if I'm stuck?

Yes, the course is the first to be hosted on the Think Autonomous 2.0 platform, optimized for collaboration, chat, support, answers, and community learning.

This course is covered by the End-To-End Guarantee

The first time we launched this course, we tried something we had never done before: we allowed End-To-End Revolution, one of our most premium courses, to be covered by a guarantee. This time, we are doing it again!

Here's how it works: Within 14 days, even if you have watched the course from one end to the other and aren't satisfied with it, you can cancel your purchase and get your money back.

This removes all risk on your end: the End-To-End Guarantee has your back! (If you purchase a bundle below, the refund only applies to the End-To-End course, since it's the only one in my catalogue covered by this guarantee)

THE FINISH LINE

STUDENTS OF THINK AUTONOMOUS SAY

"Very Very Well Done"

"The segmentation + segformers courses are very very well done, I'm actually finishing up the latter!

The pricing was originally an obstacle, but despite that, when the quality is high I don't mind paying more.

Now, as a result of buying the course, I got an actual understanding of the Attention mechanism and an overview of the Transformers world, including Segformers.

What I liked the most out of the course was the workshops! And then the drawings, you going over the different operations inside of the architectures.

I believe it's necessary for an actual understanding.

I highly recommend it if someone can afford it, and already has hands-on and theoretical knowledge/ experience in Deep Learning."

Alessandro Lamberti, Machine Learning Engineer @ NTT DATA Italia

"I cannot even begin to express the number of positive reviews we've gotten from the course"

"Last Week, Jeremy Cohen launched Visual Fusion for Autonomous Cars 101 inside PyImageSearch University. I cannot even begin to express the number of positive reviews we've gotten from the course.

Jeremy is an incredible teacher and the best person to learn autonomous cars from."

Adrian Rosebrock, Founder of PyImageSearch.com

"This is a magnificent practical course"

"Before joining, my biggest obstacle was the time required to complete this course. But I enrolled, and I discovered a network architecture for multitasking that I did not know how to implement. I loved the quality of the material and the practical content, and I liked learning about possible practical application fields of multitasking.

This is a magnificent practical course for discovering real areas for the application of multitasking architectures. The didactic quality of the course and its material were great."

Xose Ramon Fernandez Vidal, HydraNet Engineer

Why is this course Unique? ☄️

This course is unique because it's an incredibly advanced dive into End-To-End Deep Learning for self-driving cars.

While other generic solutions may simply overlook End-To-End, or describe a 2017 architecture, this one shows the 2025 approaches, and introduces ideas like foundational models, simulators & digital twins, deep fusion, deep planning, and more...

For an engineer or a company, knowing exactly what's going on today matters: it gives you awareness, intelligence, and mastery of the field you work in.


END-TO-END REVOLUTION

Here is what you get in the main offer:

  • END-TO-END REVOLUTION

[495€ value]

  • BONUS: TESLA PATENT STUDY

[99€ value]

Total value: 594€

Enroll For: 495€

*Or 1 payment of 249€ today, and 1 additional payment of 249€.

END-TO-END PROPHECY

Here is what you get in the Premium offer:

  • END-TO-END REVOLUTION

[495€ value]

  • BONUS: TESLA PATENT STUDY

[99€ value]

  • LEARN TRANSFORMERS

[249€ value]

  • BIRD EYE VIEW

[249€ value]

  • HYDRANETS & DLC

[498€ value]

  • SELF-SUPERVISED LEARNING DOJO

[129€ value]

Total value: 1,719€

Enroll for Just: 997€

*Or 1 payment of 349€ today, then 2 additional payments of 349€.


© Copyright 2025 Think Autonomous™. All Rights Reserved.