What if your industrial equipment could detect its own faults before they happen—no cloud required? ROHM’s latest AI-enhanced microcontrollers (MCUs) may just make that future a reality. By embedding machine learning directly into devices, they’re cutting the cord on network reliance—and setting a bold new standard for predictive maintenance.
What’s the News?
Japanese semiconductor manufacturer ROHM has launched a new line of AI-equipped microcontrollers—the ML63Q253x-NNNxx and ML63Q255x-NNNxx—that perform both learning and inference directly on-device. ROHM positions them as the industry's first AI MCUs capable of running entirely offline, a significant step forward for endpoint AI.
Traditional AI deployments often require cloud connectivity and heavy-duty CPUs, which can be expensive, slow, and vulnerable to network issues. ROHM’s solution? Eliminate the need for connectivity altogether. Using its proprietary ‘Solist-AI’ technology, the company has embedded a three-layer neural network into its MCUs, making them capable of real-time fault prediction and anomaly detection at the hardware level.
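ROHM has not published the internals of Solist-AI, but the general "learn normal behavior on-device, then flag deviations" loop behind this kind of fault prediction can be sketched with a toy three-layer autoencoder. Everything below (the simulated sensor features, network sizes, and threshold rule) is illustrative, not ROHM's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "healthy" sensor features (hypothetical values, e.g. vibration
# statistics from a motor); real endpoint AI would ingest live sensor data.
center = np.array([1.0, 0.5, -0.3, 2.0])
normal = rng.normal(loc=center, scale=0.1, size=(300, 4))

# Tiny three-layer autoencoder: 4 inputs -> 3 hidden (tanh) -> 4 outputs.
W1 = rng.normal(0, 0.1, (4, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.1, (3, 4)); b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for _ in range(2000):            # plain batch gradient descent ("learning")
    h, out = forward(normal)
    err = out - normal           # reconstruction error on healthy data
    dh = (err @ W2.T) * (1 - h**2)          # backprop through tanh layer
    W2 -= lr * (h.T @ err) / len(normal);      b2 -= lr * err.mean(axis=0)
    W1 -= lr * (normal.T @ dh) / len(normal);  b1 -= lr * dh.mean(axis=0)

def score(x):
    """Anomaly score: mean squared reconstruction error ("inference")."""
    x = np.atleast_2d(x)
    _, out = forward(x)
    return float(((out - x) ** 2).mean())

# Threshold from the worst score seen on healthy training data, plus margin.
threshold = 1.5 * max(score(x) for x in normal)

healthy = rng.normal(center, 0.1)            # a fresh healthy reading
faulty = np.array([3.0, 2.5, 1.5, -1.0])     # a drifted, fault-like reading
print(score(healthy) < threshold, score(faulty) > threshold)
```

Both training and scoring here use only small matrix multiplications, which is the kind of arithmetic a hardware accelerator can speed up dramatically relative to a software-only loop on the same core.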
These chips are powered by ROHM’s new AI accelerator, ‘AxlCORE-ODL’, which lets them run AI processing roughly 1,000 times faster than conventional software-only processing on an MCU, while clocked at just 12 MHz. That means faster detection of equipment abnormalities, lower maintenance costs, and fewer unexpected breakdowns.
The MCUs use a 32-bit Arm Cortex-M0+ core and consume only 40 mW of power. They also support CAN FD communication, three-phase motor control, and dual A/D converters, making them ideal for everything from industrial motors to home appliances.
ROHM began mass production of eight models in February 2025. The company has also launched Solist-AI Sim, a simulation tool that lets engineers test and train the AI model before implementation—streamlining development and increasing confidence in deployment.
Why It Matters
This marks a major win for industries where uptime is mission-critical. From factory robotics to smart home devices, MCUs that can learn and adapt on-site reduce both technical friction and operational costs. No cloud latency. No sensor data leaving the device. Just intelligent, localized control.
It also unlocks real-time diagnostics in places with poor connectivity—like offshore facilities or remote power grids—giving manufacturers a smarter edge in managing complex equipment.
💡Expert Insight
According to ROHM’s internal benchmarks, the MCUs’ AI processing is approximately 1,000 times faster than equivalent software-based processing on a conventional MCU, thanks to hardware-level acceleration. Industry analysts have long predicted that endpoint AI would see explosive growth once inference and learning could be done entirely offline. With ROHM’s entry, that vision now feels within reach.
GazeOn’s Take
This could spark a new wave of AI integration at the chip level—especially in embedded systems where connectivity isn’t always a given. Expect other semiconductor firms to follow ROHM’s lead in the race for edge-ready AI. And for developers? These chips offer a faster path to smarter devices with less engineering overhead.
💬 Reader Question
Could on-device AI be the key to safer, smarter machines in every home and factory? Let us know what you think.