This has been on my mind since the unveiling of HoloLens back in January. The following stack of augmented reality services, composed of Microsoft solutions, is now achievable:
Arrows represent the flow of data between a device and the cloud. The cloud as an intermediate point seems logical, since we would like to somehow get the data from the Kinect to the HoloLens. Of course, the flow of data from the Kinect to the cloud can only go one way, as it is a stand-alone sensor that does not receive any data. The same is not true for the HoloLens: as we know, it will be a fully untethered, see-through holographic computer, which means it will also be able to receive data from the Internet.
In this case, we are evaluating the added value that combining the Kinect with the HoloLens could bring. The HoloLens already contains an advanced version of the Kinect v2 sensor to map the environment from the user's perspective. But it obviously cannot see the users themselves, apart from their arms and lower body (when looking down). This could be solved by using a Kinect to monitor the user from a third-person perspective and feed this data to the HoloLens via the cloud. I think this opens the door to a lot of interesting applications, such as displaying the other player and their moves beside you in real time.
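To make the Kinect → cloud → HoloLens data path concrete, here is a minimal sketch of how one tracked body frame could be serialized before being pushed to a cloud relay. All names here (`Joint`, `encode_body_frame`, the JSON shape) are my own illustrative assumptions, not part of any Microsoft SDK; the real Kinect v2 SDK is C#/C++ and tracks 25 joints per body.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical joint reading from a Kinect v2 body frame.
# Coordinates are in metres, in the Kinect camera's coordinate space.
@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float

def encode_body_frame(user_id, joints):
    """Pack one tracked body into a JSON message suitable for pushing
    to a cloud relay, which the HoloLens app would then subscribe to."""
    return json.dumps({
        "user": user_id,
        "joints": [asdict(j) for j in joints],
    })

# Simulated frame with two of the 25 joints the sensor would report.
frame = encode_body_frame("player-1", [
    Joint("Head", 0.02, 0.61, 2.10),
    Joint("HandRight", 0.35, 0.10, 1.85),
])
decoded = json.loads(frame)
print(decoded["joints"][0]["name"])  # → Head
```

In practice the relay would need to keep latency low (the Kinect delivers body frames at 30 fps), so a persistent connection from both ends to the cloud service would likely beat per-frame HTTP requests; the JSON above is just the payload shape.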