software
12 November 2020

When the glove is released to consumers, we will ensure that it integrates seamlessly into a number of existing applications. We're targeting applications that rely on 3D interactions: CAD programs, video games, VR apps, etc. Users will be able to put on the glove and immediately gain the sensation of touch in their favorite applications, without having to write any code. Things like collision detection and surface modeling will be handled behind the scenes, and users can wander their virtual realms in blissful ignorance.

However, we also plan to open up the glove to third-party developers. As far as these people are concerned, the software stack looks like this:

[Figure: Simplified software stack]

In short, the hand sends tracking data to the host, the host detects collisions between the user's hand and some region of virtual space, and the host responds with a specified force and tactile response, which the glove then executes. Had I been rigorous in making this graphic, I would have drawn arrows going in both directions between the glove's modules because, in reality, the data flow is bidirectional: each module's designated driver monitors that module and adjusts its parameters to keep performance as high as possible. The inner workings of the low-level drivers are thrilling to embedded engineers but pretty mundane to everyone else, so I'm not going to talk about them here. To someone whose primary concern is programming an application for the glove, such knowledge is not important.
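To make that round trip concrete, here's a minimal sketch of one iteration of the host-side loop in Python. Everything in it is hypothetical: the message types (`TrackingFrame`, `HapticResponse`), the toy collision rule, and the function names are stand-ins for an interface we haven't designed yet. It's only meant to illustrate the tracking-in, haptics-out cycle described above.

```python
from dataclasses import dataclass

# Hypothetical message types; the real glove protocol is still undecided.

@dataclass
class TrackingFrame:
    # Fingertip positions in the host's world coordinates, one (x, y, z) per finger.
    fingertips: list[tuple[float, float, float]]

@dataclass
class HapticResponse:
    # Per-finger resistive force (newtons) and tactile intensity (0.0 to 1.0).
    forces: list[float]
    tactile: list[float]

def detect_collisions(frame: TrackingFrame) -> list[bool]:
    """Stand-in for the host's collision detection: flags each fingertip
    that has entered some region of virtual space."""
    # Toy rule: anything at or past the z = 0 plane counts as a collision.
    return [z <= 0.0 for (_, _, z) in frame.fingertips]

def host_loop(read_frame, send_response):
    """One iteration of the cycle: hand -> host (tracking data),
    host -> glove (force + tactile response)."""
    frame = read_frame()                 # the hand sends tracking data
    hits = detect_collisions(frame)      # the host checks virtual space
    response = HapticResponse(
        forces=[5.0 if hit else 0.0 for hit in hits],
        tactile=[1.0 if hit else 0.0 for hit in hits],
    )
    send_response(response)              # the glove executes the response

# Example with stubbed I/O: one frame with every fingertip just past the plane.
frame = TrackingFrame(fingertips=[(0.1, 0.2, -0.01)] * 5)
host_loop(lambda: frame, print)
```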

We don't yet have much information on what the glove programming interface will look like. It's unlikely that we'll allow access to the low-level drivers, but the degree to which third-party developers can access the embedded control stack has yet to be decided. There are a few possibilities as to what the final system will look like; any or all of them might be implemented: