Microsoft Build is happening this week. One of the announcements was for Project Kinect for Azure. This announcement caught my attention because this past week I was trying to use 3D Scan in Microsoft Windows 10. This app requires a Microsoft Kinect to do a scan. Unfortunately, Kinect for Windows was discontinued last fall. Because I already had a Kinect 2.0 for my Xbox One, I just needed to get the adapter. Sadly, the adapter was also discontinued.
As the push to use virtual reality and augmented reality in real-world settings increases, it makes sense that being able to scan and visualize existing spaces will become more important. One of the uses of Kinect was to scan a room to gather the spatial data for use in apps. This type of functionality can be used to build a gaming environment, but it also can be used in business applications to do things such as monitor warehouses. By using the spatial data along with cognitive services, you can gain insights that go beyond your standard cameras or motion detectors.
Although the Kinect for the Xbox is still dead, Microsoft has announced a new Kinect device, Project Kinect for Azure. This new device is not the rectangular bar you have likely seen in the past, but rather a small device that contains sensors, a next-generation depth camera, and additional computing technology. This new device was built to be connected to cloud services (Azure), but more specifically it was built to run on the edge. Project Kinect for Azure is planned to work with Azure AI (artificial intelligence) and the other Azure services.
Figure 1: The Kinect unit, with descriptions
Depending on pricing, this new device could be used to build solutions that need rich tracking of gestures and movement, location tracking within an environment, or other spatial-mapping scenarios that demand a high level of precision.
Even though Project Kinect for Azure has been announced, there are still a lot of questions to be answered. The technology is based on the previous Kinect technology: it will use Microsoft's Time of Flight sensor along with additional sensors, all in a relatively small package. Whether it will use the previous Kinect SDK or require new code is still an open question, as is whether it will require a connection to Azure to collect the data. Regardless, it will be interesting to see how the device gets incorporated into other hardware solutions, as well as how data from these devices is used going forward.
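For readers unfamiliar with how a Time of Flight sensor works: it emits light, times how long the light takes to bounce off a surface and return, and converts that round-trip time into distance. The sketch below illustrates just that underlying math; the numbers are illustrative and are not Kinect specifications.

```python
# Illustrative sketch of the time-of-flight depth principle.
# Not Kinect SDK code -- just the physics a ToF camera relies on.

SPEED_OF_LIGHT = 299_792_458  # meters per second

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters.

    The light travels to the surface and back, so the one-way
    distance is half of what the light covers in the round trip.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A surface 1.5 m away reflects light back in roughly 10 nanoseconds.
round_trip = 2 * 1.5 / SPEED_OF_LIGHT
print(f"{depth_from_round_trip(round_trip):.2f} m")  # prints "1.50 m"
```

The tiny time scales involved (nanoseconds per meter) are why these sensors need fast, specialized hardware to achieve high precision.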
As for my Kinect project this past week, I was able to borrow an Xbox One adapter and hook my Kinect 2.0 up to my PC. My next step is to use Windows 3D Scan to start creating 3D scans of a few items. Although this isn't the high-tech solution Microsoft is envisioning with Project Kinect for Azure, it is a very practical use of the technology!
Hopefully, the new Kinect version will have a longer life than its predecessors. If you have ideas for how a device like Project Kinect for Azure could be used, post them in the comments or on the forum!