
InCabin.Sensing USA 2025: What OEMs Are Struggling with — And Why Data Is the Bottleneck

By: SKY ENGINE AI

InCabin.Sensing USA 2025 was the fourth in-cabin event SKY ENGINE AI attended this year. Across the US, Europe, and China, one message has been consistent: OEMs and Tier 1s see driver and occupant monitoring systems (DMS/OMS) as a top priority, but training and validating these systems remains a challenge.

That’s why such in-cabin monitoring events matter. They reveal what still doesn’t work and what needs to change before in-cabin systems can be fully trusted in production.

Here’s our wrap-up from the conference, along with some insights from Krzysztof Smyl, Software Engineering Manager at SKY ENGINE AI, and Dr. Malcolm Souter, our Sales Engineer, who attended the US event.

OEM Reality Check: Questions That Dominated the Room

Both the main conference sessions and the talks we joined in Dearborn focused on OEMs’ current pain points. We’d suggest grouping them into three main categories.

Driver Readiness and Shared Control

When participants talked about autonomy, most referenced the familiar SAE scale, from Level 0 to Level 5.

SAE levels of driving automation

Up to Level 2, things feel relatively straightforward. The driver stays alert, the system assists, and responsibilities are clear.

Things get complicated at Level 3. Here, the system handles the entire driving task under defined conditions, but the driver still needs to step in when the system asks. And that's the catch.

If the driver doesn’t need to pay attention most of the time, how can we expect an instant, focused takeover?

“There were many discussions that Level 3 may not be feasible at all, because getting the driver to intervene when the driver doesn't have to be attentive all the time is not very realistic. So some people said that they need to go straight from Level 2 to Level 4.”
Krzysztof Smyl, Software Engineering Manager at SKY ENGINE AI
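
For orientation, here's a minimal sketch of the SAE J3016 scale in code. The one-line summaries are our paraphrase of the standard, and the helper function is illustrative, not part of any OEM toolchain:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (summaries paraphrased)."""
    L0 = 0  # No automation: the driver does everything
    L1 = 1  # Driver assistance: steering OR speed support
    L2 = 2  # Partial automation: steering AND speed; driver monitors
    L3 = 3  # Conditional automation: system drives; driver must take over on request
    L4 = 4  # High automation: no driver takeover needed within the design domain
    L5 = 5  # Full automation: no driver needed anywhere

def driver_must_monitor(level: SAELevel) -> bool:
    """Up to Level 2 the human continuously supervises.
    From Level 3 on, the system monitors -- which is exactly where
    the takeover problem discussed above starts."""
    return level <= SAELevel.L2
```

The boundary that `driver_must_monitor` draws between L2 and L3 is the crux of the debate: it's the point where responsibility for monitoring shifts from the human to the system, while responsibility for recovery still sits with the human.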

Accuracy, Trust, and Edge Cases

Accuracy came up again and again at the conference. OEMs need DMS models that perform off the “happy path.” Night drives, shifting light, drivers with different faces, postures, and attention levels — these everyday situations put DMS and OMS models to the real test.

Edge cases were another hot topic. It’s almost impossible to capture all the variety of seating positions, occlusions, or unexpected driver behavior in real-world datasets. Yet, these are often the moments that decide if a system is trusted in production. Miss them, and models might pass tests but fail on the road.

OEMs kept emphasizing the same points: they need repeatable validation, full ground truth, and datasets that capture the full complexity of real-world driving.

Privacy vs. Personalization

Personalization is a big goal for OEMs. They want DMS systems that can adapt in real time to a driver’s distraction, drowsiness, or cognitive load.

To do that reliably, models need diverse training data: different faces, postures, gazes, and emotional states. Getting this data in all its variety is hard because it’s highly personal. Collecting or storing it can quickly run into privacy laws like GDPR, CCPA/CPRA, and other regulations, which treat these signals as sensitive.

So the challenge is: how do you train and validate models for personalization without touching real driver data and risking legal issues?

What OEMs and Tier 1s Are Doing About It

Here are some of the ideas we noticed at InCabin.Sensing USA that offer answers to these challenges.

More Modalities, Not Just More Images

For DMS, visible-light data alone has never been enough. Night driving is the classic example. A system trained only on RGB camera data will struggle once the sun goes down. 

That’s why OEMs are increasingly looking at multimodal data. We heard a lot about combining inputs from cameras, radar, UWB, voice, and various biometric sensors. Thermal sensing also came up in a few conversations. 

For OEMs, it’s clear that a better DMS needs a broader view of the cabin, not a single lens. And as systems evolve, the number and variety of data sources will only continue to grow.
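
As a rough illustration of what "a broader view of the cabin" means at the data level, here's a hypothetical record type for one synchronized in-cabin sample. The field names and sensor set are our assumptions for the sketch, not a standard or any specific OEM's schema:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class CabinSample:
    """One synchronized in-cabin observation across several modalities.

    Every field beyond the RGB frame is optional: a robust DMS pipeline
    should degrade gracefully when a sensor is absent or temporarily
    unreliable (e.g., the RGB camera at night).
    """
    timestamp_us: int
    rgb_frame: np.ndarray                      # HxWx3 visible-light image
    nir_frame: Optional[np.ndarray] = None     # HxW near-infrared image
    radar_points: Optional[np.ndarray] = None  # Nx4 (x, y, z, velocity)
    uwb_ranges: Optional[dict] = None          # anchor_id -> distance (m)
    audio_chunk: Optional[np.ndarray] = None   # mono PCM samples
    heart_rate_bpm: Optional[float] = None     # from a biometric sensor

    def available_modalities(self) -> list[str]:
        """List which sensor streams are present in this sample."""
        optional = {
            "nir": self.nir_frame, "radar": self.radar_points,
            "uwb": self.uwb_ranges, "audio": self.audio_chunk,
            "biometric": self.heart_rate_bpm,
        }
        return ["rgb"] + [name for name, v in optional.items() if v is not None]
```

Making the non-RGB fields optional reflects the point raised at the conference: the system should keep working when one modality degrades, and each new modality added to the record multiplies the data that must be collected, synchronized, and labeled.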

Face Problem (Quiet but Critical)

Body posture and head position are relatively easy to capture, and many OEMs already use these signals. But posture alone doesn’t tell the full story.

Faces and expressions reveal emotional states that directly impact driving behavior. But facial cues remain underrepresented in training data because high-quality, lifelike synthetic faces are hard to generate. Yet, they’re essential for DMS systems to truly understand driver behavior and edge cases.

Faces matter — maybe more than we admit — and investing in realistic facial data is critical for safer, more reliable in-cabin monitoring.

Training and Validation Are Splitting

For a while, many OEMs planned to train DMS models in-house. That thinking now seems to be shifting.

More OEMs are leaning toward having Tier 1s handle model training while they themselves focus on validation. Their priority is shifting to testing, stress-testing, and verifying the models they receive.

This change reshapes expectations for dataset providers, especially when it comes to edge cases, rare behaviors, and extreme conditions to stress-test models before deployment.

Hidden Bottleneck: Why Real-World Data Can’t Keep Up

Training and validating a DMS with real-world data alone is becoming increasingly inefficient. The limits of that data make models stumble both in tests and on the road.

  • Regulations demand full coverage. Rules like the EU General Safety Regulation and guidelines such as Euro NCAP 2026 and 2030 require OEMs to account for an extremely wide range of scenarios to pass compliance tests or earn high safety scores. Models must be tested across all meaningful combinations: occupants, behaviors, environments, lighting, weather, and more.
  • Real-world data can't scale that far. DMS teams cannot collect enough diverse, real-life data across all the scenarios needed. It's not only inefficient in terms of cost and time; it's practically impossible, and data scientists frequently point to this as the major bottleneck (the sketch after this list shows how quickly the combinations multiply).
  • Edge cases are rare but critical. Collecting them is restricted ethically (e.g., children in cars), legally, and pragmatically (some dangerous situations can’t be staged safely). And without edge cases, your DMS might miss risky behaviors and unusual driver reactions that could put safety at risk.
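
To make the scale problem concrete, here's a back-of-the-envelope sketch. The dimension names and counts are illustrative assumptions, not figures from any regulation or from the conference:

```python
from itertools import product

# Illustrative scenario dimensions; the categories and counts are
# assumptions chosen for the arithmetic, not official requirements.
dimensions = {
    "occupant_type": ["adult", "child", "elderly", "empty_seat"],
    "behavior":      ["attentive", "drowsy", "phone_use", "eating", "reaching_back"],
    "environment":   ["highway", "urban", "rural", "tunnel"],
    "lighting":      ["day", "dusk", "night", "low_sun_glare"],
    "weather":       ["clear", "rain", "snow", "fog"],
}

# Enumerate every combination of the dimensions above.
combinations = list(product(*dimensions.values()))
print(f"Distinct scenario combinations: {len(combinations)}")  # 4*5*4*4*4 = 1280

# Even a modest 50 recorded clips per combination implies tens of
# thousands of real-world capture sessions.
print(f"Clips needed at 50 per combination: {len(combinations) * 50}")  # 64000
```

Even with these deliberately coarse categories, the count lands in the thousands; real requirement matrices are far finer-grained, which is why collection-based coverage breaks down.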

As a result, getting controllable, verifiable, and diverse data in the real world is no longer feasible, given the complexity of scenarios modern DMS must handle.

How SKY ENGINE AI Aligns With What OEMs Expect from Their DMS/OMS

For our team, visiting in-cabin events is about more than pitching our product or networking. By talking directly with OEMs and Tier 1s, we learn their real business needs and explore how we, as data scientists, can help solve them. These insights guide the strategic development of our Platform and help us fully address the challenges our clients face today.

SKY ENGINE AI’s Synthetic Data Cloud is built to provide the training and validation datasets needed to build reliable in-cabin monitoring.

  • Full control over the training environment. The Platform provides 124 key parameters and hundreds of secondary ones to generate training data for all types of drivers and occupants, cabin geometry, behaviors, environment, and more. This way, OEMs get data for regulatory scenarios they can’t collect in real life.

Setting up scenarios for detecting seatbelt use
  • Faces, emotions, and cognitive signals. We deliver realistic faces in 30 emotional states based on the Facial Action Coding System (FACS), complete with 216 facial landmarks, gaze vectors, and gesture data. DMS models learn to read these signals to see if a driver is tired, distracted, or overloaded.

Detecting different types of driver behavior
  • Multimodal synthetic data. Today, we provide static and animated scenes across RGB and NIR. And this is just the start. Our roadmap includes many more modalities coming soon.

Training DMS to perform in daylight and darkness
  • Ground truth for validation. Every render from the Platform comes with full 2D and 3D ground truth: segmentation masks, bounding boxes, gaze vectors, normal maps, depth maps, keypoints, and more. No manual labeling is needed: all data is generated automatically and plugs straight into OEM validation workflows (see the sketch below).

Generating full ground truth for every render
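
To show how per-render ground truth might plug into a validation workflow, here's a hedged sketch. The file layout, key names, and helper functions below are hypothetical placeholders we've invented for illustration, not the Platform's actual export format:

```python
import json
import math
from pathlib import Path

def load_render_annotations(render_dir: Path) -> dict:
    """Load a hypothetical per-render annotation bundle.

    Assumes each render directory holds an `annotations.json` plus
    image-space maps (segmentation, depth, normals) as separate files.
    All key names here are illustrative only.
    """
    meta = json.loads((render_dir / "annotations.json").read_text())
    return {
        "boxes_2d": meta["bounding_boxes"],     # per-object [x, y, w, h]
        "keypoints": meta["facial_landmarks"],  # e.g., 216 points per face
        "gaze": meta["gaze_vector"],            # unit 3-vector per occupant
        "segmentation": render_dir / "segmentation.png",
        "depth": render_dir / "depth.exr",
        "normals": render_dir / "normals.exr",
    }

def validate_gaze(pred: list[float], gt: list[float], tol_deg: float = 5.0) -> bool:
    """Compare a model's predicted gaze direction (unit vector) against
    synthetic ground truth, passing if the angular error is within tolerance."""
    dot = sum(p * g for p, g in zip(pred, gt))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= tol_deg
```

Because the labels are generated together with the render, the ground-truth side of a comparison like `validate_gaze` is exact rather than human-annotated, which is what makes the validation repeatable.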

What This Means Going Into 2026

As the transition to full L4 autonomy becomes the defining challenge in automotive, the demand for data to train and validate these systems will only keep growing. Given the limitations of real-world data, synthetic data is moving from “nice to try” to critical infrastructure for safe, reliable autonomy.

If you’re an OEM or Tier 1 and want to see how high-quality synthetic data can improve your training or validation workflows, reach out. Our team is happy to discuss your challenges and explore solutions together.

