Announcing the Open-Source Moetsi Sensor Stream Pipe

September 30, 2019

TL;DR:

The world is made of rocks and shops. Restaurants and highways. Beaches, parks, and bedrooms. Or it was. The ‘real’ world has a growing virtual counterpart. A digital twin. The environments around us are being overlaid with metadata to create augmented realities that deliver entirely new experiences. A new universal infrastructure upon which anyone can build anything.

You’ve probably already flirted with Augmented Reality (AR). You may have come across interactive advertising screens in your local mall. You might have been one of the 45 million who downloaded Pokemon Go. You definitely know someone who did. But mapping and intertwining the physical with the digital has implications that go way beyond gaming and gimmickry.

Coupled with 5G connectivity, innovations in sensor data-gathering will facilitate a whole global network of self-driving cars, instantly sharing information with one another. Emergency responders will be able to visualize real-time incident data as it evolves around them. Surgeons will wear headsets that overlay crucial data onto a patient’s body during procedures.

Anything you can think of that relates to the interaction and understanding of your environment (recognizing faces, surfaces, materials, distances, locations) can and will be amplified by digital elements.

At Moetsi our mission is to help digitize reality. To build an AR Cloud that enables the persistence of digital information in the physical realm, creating a seamless interface between the world as we know it, and a world augmented, improved and expressed through new technologies.

But there’s a big problem.

Creating something as unfathomably huge as an AR Cloud will require a frankly disturbing amount of computational power backed up by next-level math and searingly smart algorithms. And this means we’re gonna need far bigger and better hardware that can actually run that stuff: hardware that can’t be put in cars or strapped onto a HoloLens.

Here’s the long and the short of it. Unless someone invents an insane super chip with a hundred times more computational power than current systems, the sheer volume of different real-time data types gathered by local sensors is way too much for any local device to handle safely. Even if your battery lasted beyond a few minutes, the device may well explode in your face. Sure, you can physically plug your device into a “system on a chip” (SoC) with a cable (like Nreal) and take the processing off-device, but tethered processing will always lag way behind the power of a purpose-built workstation.

Take self-driving cars, for example. In order to actually navigate the roads, they need to build a digital copy of their immediate environment on the fly via continuous point cloud captures. Here’s a great video that explains how that works. This super-demanding spec is the reason companies like Tesla are now designing their own silicon: issues like power allocation and fault tolerance, in both hardware and software, simply can’t be handled by run-of-the-mill chips. In fact, Tesla’s latest release boasts a 21x performance gain for an increase in wattage of just 26% (72 W vs. 57 W) compared to the Nvidia chips they were using previously. That trade-off isn’t to be sniffed at.

Future implementations of self-driving cars will not only have to fuse multiple concurrent sensor data streams into a single 3D model but also share it with other vehicles across a complex network to create a shared digitized reality. That’s a lot more watts. Even a car, with its huge power supply and super-juiced GPUs, would have a meltdown. Of course, an engineer won’t ever let that happen, but you can overload these things.

All of which means that future computations simply shouldn’t be done on-device, no matter how large that device may be; they should be processed on a remote server instead. But transmitting multiple sensor streams in real time requires compressing that sensor data, because the bandwidth of any network is finite too.

And that’s where we come in. We have developed the Moetsi Sensor Stream Pipe to compress and stream sensor data to a remote server at low latency and without throttling your bandwidth. You are no longer confined to the computational limits of your local device, and you don’t have to make a massive trade-off on time-to-computation, because our pipeline is super fast. It sounds simple when you put it like that, doesn’t it?

Here’s how it works:

The on-device Sensor Stream Server takes in raw frame data and compresses it to reduce bandwidth. It’s then streamed over 5G or LTE to the Sensor Stream Client in the cloud, which is able to decompress the data at low latency for further processing.
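To make that flow concrete, here’s a minimal sketch of the pattern in Python. This is not the Sensor Stream Pipe itself (the real pipe is written in C++ and uses proper video codecs to reach the compression ratios below); every name here is invented, and zlib stands in for the real encoder purely to show the shape of the server-to-client loop.

```python
# Illustrative sketch only: compress raw frames on-device, ship them over the
# network, decompress them on a remote machine. The real Sensor Stream Pipe is
# C++ and uses video codecs instead of zlib; all names below are made up.
import socket
import struct
import zlib


def stream_frames(host, port, frames):
    """'Sensor Stream Server' side: compress each raw frame and send it."""
    with socket.create_connection((host, port)) as sock:
        for raw in frames:
            payload = zlib.compress(raw, level=1)  # fast, cheap stand-in encoder
            # Length-prefix each frame so the client knows where it ends.
            sock.sendall(struct.pack("!I", len(payload)) + payload)


def _recv_exact(conn, n):
    """Read exactly n bytes from a socket (recv may return partial chunks)."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf


def receive_frames(port):
    """'Sensor Stream Client' side: accept one stream, decompress, yield frames."""
    with socket.create_server(("", port)) as server:
        conn, _addr = server.accept()
        with conn:
            while True:
                try:
                    header = _recv_exact(conn, 4)
                except ConnectionError:
                    return
                (size,) = struct.unpack("!I", header)
                yield zlib.decompress(_recv_exact(conn, size))  # raw frame again
```

A sensor reader or dataset replayer would simply feed stream_frames with raw byte buffers, and whatever runs in the cloud would iterate over receive_frames for further processing.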

Moetsi can currently stream existing RGB-D datasets such as BundleFusion or Microsoft’s RGB-D Dataset 7-Scenes at high compression rates and very low latencies. And we are able to stream Kinect color, depth, and infrared data at 20 Mbps (20x smaller than the original data) with minimal overhead (~30 ms of added latency).
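For a sense of what those figures mean, here’s a quick back-of-the-envelope calculation using only the numbers above plus an assumed 30 fps frame rate (the frame rate is our assumption, not a quoted spec):

```python
# Rough arithmetic from the figures quoted above; the 30 fps rate is assumed.
compressed_mbps = 20        # streamed bitrate for color + depth + IR
compression_ratio = 20      # "20x smaller than the original data"
fps = 30                    # assumed sensor frame rate

raw_mbps = compressed_mbps * compression_ratio   # ~400 Mbit/s of raw sensor data
raw_mbytes_per_s = raw_mbps / 8                  # ~50 MB/s
raw_mbytes_per_frame = raw_mbytes_per_s / fps    # ~1.7 MB of raw data per frame

print(f"raw: ~{raw_mbps} Mbit/s (~{raw_mbytes_per_s:.0f} MB/s), "
      f"i.e. ~{raw_mbytes_per_frame:.1f} MB per frame across all three streams")
```

In other words, the raw feed would saturate most real-world uplinks on its own, while the compressed 20 Mbps stream fits comfortably inside typical LTE or 5G bandwidth.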

Right now our focus is on RGB/D/IR data, but we know that to digitize reality we need to do a lot more. We’re planning to build in the capacity to compress and stream all sorts of sensor data, including LIDAR and other sensor types.

We’ve made our pipeline completely open source. Why? Because we believe the only way reality can be digitized is by working together. We’ve already pieced together the first part of the pipe by interfacing with datasets (for testing downstream processing) as well as the new Azure Kinect (for real-time use), but there are a lot of sensors out there and, as much as we’d like to, we simply can’t build interfaces with them all ourselves.

So we’ve put it out there in the hope that the dev community will take what we are building and run with it. That inquisitive minds will evolve our pipeline to work with more sensor types, devices, and compression schemes, and create some killer applications. That way we can fast-track the creation of a digitized reality and benefit from a whole new digital infrastructure.
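To give a feel for what “building an interface” for a new sensor might involve, here’s a purely hypothetical sketch; none of these names come from the actual repository, it just illustrates the idea of a pluggable frame source that either replays a dataset or wraps a live device:

```python
# Hypothetical frame-source shape, invented for illustration only; check the
# repository for the real interfaces.
from typing import Iterator, List


class DatasetSource:
    """Replays frames from files on disk, e.g. an RGB-D dataset used for testing."""

    def __init__(self, frame_paths: List[str]):
        self.frame_paths = frame_paths

    def frames(self) -> Iterator[bytes]:
        for path in self.frame_paths:
            with open(path, "rb") as f:
                yield f.read()           # raw frame bytes, ready for the encoder


class LiveSensorSource:
    """Wraps a live device driver; anything exposing a frames() iterator plugs in."""

    def __init__(self, device):
        self.device = device             # e.g. a vendor SDK handle

    def frames(self) -> Iterator[bytes]:
        while True:
            yield self.device.capture()  # hypothetical capture call
```

Either source could then be handed straight to something like the stream_frames sketch above.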

Our mission to digitize reality doesn’t end with the creation of our sensor stream pipe. Not by a long shot. We are currently working on algorithms that can process the raw depth data sent through our pipeline and send distilled, pertinent information back to the user. Here’s our not-so-secret plan:

  • We give developers a much-needed pipeline free of charge
  • They pour their sensor streams into our servers
  • We process a huge amount of sensor data
  • Our algorithms learn from this data and constantly improve the service

Then, once our computations are on point, we could charge users on a pay-per-computation basis to decipher their depth data like no one else and return it, in parallel, in a useful form. Pretty neat.

Let’s digitize reality together!

We would love to support more sensor types and allow more configurations. So if the idea of digitizing reality is something that you like the sound of, get in touch!

Originally published at https://medium.com on September 30, 2019.

