The HoloLens looks spectacular, but is still far away: watch Microsoft's augmented reality technology in action • HWzone


Microsoft brought hundreds of the goggles to the Build conference for the first time: the augmented reality technology leaves a strong impression, but also reveals its limitations


Microsoft impressed pretty much everyone at the Build conference with an up-to-date demonstration of its augmented reality technology, the HoloLens: those who experienced the prototype of the glasses, which since CES in January has evolved into a self-contained unit no longer tethered to a nearby computer, reported an innovative and convincing experience, even if it is currently subject to some significant limitations.

Unlike the virtual reality technologies whose development for the home market has come back into fashion in recent years, and unlike Google Glass, which has returned to the labs for another round of research and development, the HoloLens does not set out to surround users with a virtual world, but to add virtual objects to our picture of reality. This is done by projection onto its lenses that is powerful enough to compete with the light reaching us from well-lit surroundings. But that is only a small part of the trick: the significant part is scanning and mapping the 3D environment using cameras with capabilities similar to those of the company's Kinect, so that the built-in computer can calculate exactly at what angle and distance each projected object should appear.
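The idea of anchoring a virtual object in the room can be illustrated with a toy calculation. The following is a minimal sketch, not Microsoft's actual pipeline: once the head's position is known from environment mapping, a world-anchored point is projected onto the display with a simple pinhole camera model (rotation is omitted for brevity; the viewer is assumed to look straight down the +z axis, and all names and constants are illustrative).

```python
def project_point(world_pt, head_pos, focal_px, center_px):
    """Project a world-space point onto the display's image plane."""
    # Head-relative coordinates (rotation omitted: the viewer is
    # assumed to look straight down the +z axis).
    x = world_pt[0] - head_pos[0]
    y = world_pt[1] - head_pos[1]
    z = world_pt[2] - head_pos[2]
    if z <= 0:            # behind the viewer: nothing to draw
        return None
    # Perspective divide: farther objects land closer to the center
    # and render smaller, which is what anchors them in the room.
    u = center_px[0] + focal_px * x / z
    v = center_px[1] + focal_px * y / z
    return (u, v)

# A castle vertex 2 m straight ahead projects to the image center.
print(project_point((0.0, 0.0, 2.0), (0.0, 0.0, 0.0),
                    800.0, (640.0, 360.0)))  # (640.0, 360.0)
```

Because the head pose is re-estimated continuously, re-running this projection every frame is what keeps the castle "sitting" on the table as the user walks around it.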

The result? Among the demos experienced by reviewers at the conference was a three-dimensional model of a castle that appeared on a table in the demonstration room: the user could walk around it and view it from all sides. Another demonstration showed two balls floating in the air in front of the user; on command, the balls dropped down while interacting with the physical environment, rolling off sloping surfaces and bouncing away from furniture legs. The illusions, some of which already include a 3D audio component, were described as convincing, with virtual objects maintaining an impressive presence even when approached, and even when a hand was placed into the space they are supposed to fill.
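The falling-balls demo combines ordinary game physics with surfaces recovered from the room scan. Here is a deliberately simplified sketch, with assumed constants and names that are not from any HoloLens SDK: a ball under gravity that bounces off a horizontal surface whose height came from the environment map.

```python
GRAVITY = -9.8          # m/s^2
RESTITUTION = 0.5       # fraction of speed kept after a bounce

def step(y, vy, floor_y, dt=0.01):
    """Advance the ball one time step; bounce off the scanned floor."""
    vy += GRAVITY * dt
    y += vy * dt
    if y < floor_y:                 # penetrated the real surface
        y = floor_y                 # clamp back onto it
        vy = -vy * RESTITUTION      # reflect with energy loss
    return y, vy

# Drop a ball from 1 m above a scanned tabletop at y = 0.
y, vy = 1.0, 0.0
for _ in range(200):                # simulate 2 seconds
    y, vy = step(y, vy, floor_y=0.0)
print(y >= 0.0)  # True: the ball never sinks below the real table
```

The real system does the same collision test against an arbitrary scanned mesh rather than a flat plane, which is what lets the balls roll off slopes and deflect around furniture legs.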

Some of the HoloLens demonstrations were shown only on video, but they illustrate part of the very broad spectrum of uses Microsoft imagines for the product, a spectrum that will probably extend to almost any area of human interest and endeavor once the glasses begin to reach the market. One of them, produced in collaboration with Case Western Reserve University's School of Medicine, showed how a lecturer could manipulate a full-size three-dimensional human figure and strip away layers of skin and flesh to reveal the desired layers of the detailed anatomical model beneath. It does not take much of a leap to move from this demo to a learning scenario that uses a virtual patient and allows students to move around freely and take part in a surgical procedure.


Alongside the fact that the technology, which still seems almost too fictional to be true, works as reported, the HoloLens's appearance at Build also clarified some of its current limitations. First, all the demos took place in relatively small, static and well-controlled environments: you probably would not want to count on street use of the HoloLens just yet, where the cameras and computer would have a much harder time tracking in real time in an open space full of moving objects.

Microsoft's demonstrations focused mainly on professional uses, in targeted work environments that are probably the best candidates for early adoption of the technology, but the most prominent demo turned to the home market and showed how video screens and other Windows 10 applications can be dragged and placed throughout the living room, essentially transforming the entire living environment into the desktop of the future. It looks good, but to be really useful it requires a world where people walk around with a HoloLens on their head a great deal of the time. For that, the sensors, projector and computer will need to be miniaturized to dimensions close to those of ordinary glasses, and we are still far from that.

Finally, despite the strong sense of reality of the projected objects, which can be moved around naturally without noticeable lag, the testers report that the illusion the glasses provide is currently broken too easily by the limited field of view they cover. The HoloLens currently reaches only the central part of the eye's field of view, so the virtual part of the augmented reality picture disappears as soon as it reaches the rather wide margins of peripheral vision. With small objects this is still tolerable, but when a large screen that is supposed to cover an entire wall reaches these margins, it is simply cut off in a way that undermines the coherence of the picture the HoloLens works so hard to create.
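The clipping effect described above is a plain consequence of frustum geometry. A minimal sketch, using an assumed placeholder field-of-view value rather than any official specification: content is only drawn while its direction stays within the display's half field of view, so a wall-sized screen whose edges sit far off-center simply has those edges cut away.

```python
def visible(angle_from_center_deg, fov_deg):
    """Can a direction this far off-center still be drawn?"""
    return abs(angle_from_center_deg) <= fov_deg / 2

FOV = 30.0  # assumed narrow horizontal field of view, in degrees

# A small object near the center of vision stays whole...
print(visible(10.0, FOV))   # True
# ...but the edge of a wall-sized screen, 40 degrees off-center,
# falls outside the frustum and simply vanishes.
print(visible(40.0, FOV))   # False
```

Widening the field of view raises the angle at which content gets cut, which is exactly the engineering challenge the article points to.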

The bomb Microsoft dropped at Build, in the form of Windows 10 support for Android and iOS applications, is perhaps important mainly to Microsoft itself and to application developers; the HoloLens, on the other hand, could be one of the first steps in a revolution that will change the way people experience the world. The technical challenge of extending the HoloLens's foundations to the entire field of vision will certainly not stop progress in the field, even though Microsoft currently has no launch date, not even for a developer kit, in the Windows 10 era that has only just begun.



7 comments

  1. I noticed that no real-world object occluded any of the virtual objects, even though occlusion is one of the most telling features of AR.

  2. That problem no longer exists in the Intel RealSense depth camera development kit.
    I helped develop the UNITY3D plug-in at the Haifa labs where the technology was developed.
    It was a challenge, but it was not complicated.

  3. I know it can be made to work, as you say. My point is that they deliberately did not show any occlusion in the HOLOLENS video, because the KINECT 2 attached to the head is not accurate enough, and its resolution is low enough, that it would look as ugly as in the video you linked.

  4. Are you saying the project is ugly?
    I'm insulted
    :cry:

    If you mean the "cut-off" at the seam with the real world, that is because the camera I worked with was experimental and had low depth-data resolution.
    I assume it has been solved in the more advanced versions.

    Although in near-field experiments I dropped virtual snow onto my hand and it piled up impressively along the contours.

  5. Oh no, sorry. I liked your UNITY part. I was referring only to the interaction of the three-dimensional content with what is being filmed, which is not so much under your control as the camera's developer.
    Saying "I assume it can be fixed" is like saying that with a little will and effort, what the XBOX ONE renders today at 720p and 30FPS could be rendered on the same Xbox at 4K and 144FPS.
    The difference in quality is the problem here; the AR algorithms are not necessarily the problem, but rather the camera itself and the acquisition of 3D from the real world.

  6. Text recognition, facial expressions, scanning real-world objects into a textured three-dimensional model, taking a photo and refocusing it later.
    People are working on all of it.
    It is no secret that the Kinect's depth-data accuracy, across its generations, is nothing to write home about, to say the least.
    It is little more than a gestures camera.

    What Intel is developing in Haifa is on a completely different level. Identifying objects in space, for example.
  7. When you have an impressive demo, then we'll talk. Lol
    I think I know what Intel's camera is... because I may have worked on its first prototype at the Technion.
    KINECT 2 improved a lot compared to KINECT 1; don't think that only "you" are developing the next generation, and you can be sure they are already working on KINECT 3.
    And I will not be impressed by Intel's level until I see videos of the depth data and demos.
