Founded in 2017 by former Google engineers and product managers, Deepen AI provides artificial intelligence and annotation tools for autonomous systems. It has now launched its latest development: 4-D semantic segmentation for LiDAR and fused sensor data.
The company claimed that the ability to generate accurate, scalable 4-D segmentation data (in other words, 3-D frames progressing through time) is an industry first.
LiDAR sensor readings have typically been semantically segmented on a non-sequential, frame-by-frame basis to train autonomous applications. To semantically segment a scene means to correctly divide up its objects and surfaces and identify what they are, the company explained.
"The process is tedious and costly, as every frame has hundreds of thousands of points that need to be scrubbed individually to accurately identify objects in a given frame," it added.
"For this reason, many autonomous manufacturers have had to rely on bounding boxes, which are easier to produce but are far less effective at discerning edges between objects, like a road and a sidewalk. "
Deepen believes its LiDAR technology can provide more precise readings for identifying and semantically segmenting objects in four dimensions.
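One way to picture what the fourth dimension adds is to treat the output as an ordered sequence of labelled 3-D frames in which object identities persist from frame to frame. The layout below is an assumed sketch for illustration only, not Deepen's published format; the SegmentedFrame class and its field names are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SegmentedFrame:
    timestamp: float          # seconds since the start of the sequence
    points: np.ndarray        # (N, 3) LiDAR points for this frame
    class_ids: np.ndarray     # (N,) semantic class per point
    instance_ids: np.ndarray  # (N,) object identity, kept consistent across frames

# A 4-D segmentation is then a time-ordered list of segmented 3-D frames:
# the same physical object keeps the same instance_id as it moves through time.
sequence: list[SegmentedFrame] = []
for t in range(10):
    n = 100_000
    sequence.append(SegmentedFrame(
        timestamp=0.1 * t,
        points=np.random.rand(n, 3) * 50.0,           # placeholder point cloud
        class_ids=np.zeros(n, dtype=np.int32),        # placeholder class labels
        instance_ids=np.full(n, -1, dtype=np.int32),  # -1 = not yet assigned
    ))
```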
The technology can also be used in mining, where dangerous tasks are increasingly being performed remotely or fully automated.
Mohammad Musa, founder and CEO of Deepen, said: "This is a massive step for developing AVs [autonomous vehicles] safe enough to deploy on an enterprise scale.
"Furthermore, a massive door has opened that will give innovators the opportunity to unleash their creativity to leverage 4-D data segmentation - perhaps in ways that we haven't even been thought of before."
The new technology is now available to Deepen's customers as a limited-access beta.