Virtual Reality

    For me it's a series of small and medium-sized compromises wrapped up in one really big compromise. It's obvious that it's a kind of simulation of what Apple wants to ultimately build but isn't currently able to - i.e. a pair of normal spectacles which can project windows and graphics around you. And it sounds like they've built that simulation about as well as you can with existing technology.

    But I don't think that that simulation will ever be good enough for what Apple are trying to achieve. Filtering the real world through cameras and onto screens - the 'really big compromise' - seems like a technological dead end. No matter how good the cameras, how good the screens, and how good the video processing, it's just not ever going to be as good as looking at the world with your actual eyes.

    And all of the technological hardware required to achieve the effect as it stands represents one of the other problematic compromises - the headset is big, heavy, and gets warm and sweaty, meaning it's not feasible for wearing over the course of a workday (I'm thinking of my own workday here, which is at home alone, in a chair at my desk all day).

    Other issues - like the quality of the hand and eye tracking - I can actually envisage improving quite quickly. But I think even if those things are one hundred percent locked-on and rock solid, the bigger compromise of strapping a giant thing to your head to do a day's work isn't going to be meaningfully resolved by the current design.

    Unless Apple can advance quickly to a point where screens and cameras can be eliminated in favour of transparent glass, I think today's Vision Pro and its future iterations will ultimately be relegated to curio status.
    Last edited by wakka; 31-01-2024, 13:02.



      Originally posted by wakka View Post
      Unless Apple can advance quickly to a point where screens and cameras can be eliminated in favour of transparent glass, I think today's Vision Pro and its future iterations will ultimately be relegated to curio status.
      You just have to look at the array of sensors and cameras on the front of this to realise that fitting a modern computer and all that tech into a small pair of glasses is way, way beyond what we're capable of doing. The way this tries to make it all look more natural to other users has failed here: the screen on the front is reportedly super dim, very reflective, and doesn't work well in bright environments - so pretty useless. It's adding more weight to the device too, with all that glass and the weight of an extra screen.

      They can't improve the problems with hand tracking on this via software. It wasn't that it didn't recognise what was being asked of it, but more the limitation of inside-out camera solutions. With inside-out tracking you're always at the mercy of the cameras and their blind spots: if your hands are out of view they don't get tracked, and nothing's going to fix this bar cameras that can view every inch of you and track your full reach with no blind spots.
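
      To make the blind-spot point concrete, here's a minimal sketch - assuming a single forward-facing camera with a simple cone of view, and using made-up illustrative numbers rather than any real headset's specs:

      Code:
      import math

      # Inside-out tracking can only see hands inside the cameras' field of
      # view. Positions are head-relative, in metres, +z pointing straight ahead.
      def in_camera_fov(hand_pos, fov_half_angle_deg=75.0):  # angle is illustrative
          x, y, z = hand_pos
          if z <= 0:                                 # behind the head: unseeable
              return False
          dist = math.sqrt(x * x + y * y + z * z)
          angle = math.degrees(math.acos(z / dist))  # angle off the forward axis
          return angle <= fov_half_angle_deg

      print(in_camera_fov((0.1, -0.3, 0.4)))   # hand out in front: tracked
      print(in_camera_fov((0.6, -0.1, 0.05)))  # hand far out to the side: lost
      print(in_camera_fov((0.0, -0.2, -0.3)))  # hand behind your back: lost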



        Originally posted by Lebowski
        You just have to look at the array of sensors and cameras on the front of this to realise that fitting a modern computer and all that tech into a small pair of glasses is way, way beyond what we're capable of doing
        This is where I always fall down in conversations about this stuff, because for me the part I'm interested in is, how does what this does now indicate where we might go in the future?

        So yeah, of course, we're way off what I'm describing. If we weren't, the Vision Pro (or another project from another vendor - Magic Leap tried exactly this) would already be that thing.

        But I do personally think that is where we're headed. And I recognise that is a very unpopular opinion, generally and especially on this forum.

        Originally posted by Lebowski
        The way this tries to make it all look more natural to other users has failed here: the screen on the front is reportedly super dim, very reflective, and doesn't work well in bright environments - so pretty useless. It's adding more weight to the device too, with all that glass and the weight of an extra screen.
        Yeah, this feature does not do what it's meant to do at all. It's another piece of over-engineering in an effort to overcome the 'really big compromise': that this is effectively an attempt to simulate optical, glass-based AR.

        Originally posted by Lebowski
        They can't improve the problems with hand tracking on this via software. It wasn't that it didn't recognise what was being asked of it, but more the limitation of inside-out camera solutions. With inside-out tracking you're always at the mercy of the cameras and their blind spots: if your hands are out of view they don't get tracked, and nothing's going to fix this bar cameras that can view every inch of you and track your full reach with no blind spots.
        To be clear, when I talk about this being improved, I mean in the Vision Pro 2, 3, 4, future products from other companies, etc. Not software updates. I think there probably is potential to improve some of this through software - not by seeing more of your hands and eyes, but by enhancing how that data is interpreted - but meaningful change will come through future hardware upgrades in tandem with software enhancements. I do think that this is the area that is likely to improve most quickly, much more quickly than the challenge of replacing our eyes with cameras and screens (which I think is probably insurmountable).
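
        As a minimal sketch of the kind of software-side improvement I mean - steadying the data the cameras do capture, rather than seeing more of your hands (the smoothing constant is an arbitrary, illustrative choice):

        Code:
        # Exponential smoothing of noisy per-frame hand positions: software
        # can't recover hands the cameras never saw, but it can steady the
        # data it does get.
        def smooth_positions(samples, alpha=0.3):  # alpha chosen arbitrarily
            estimate = samples[0]
            smoothed = []
            for raw in samples:
                # Blend each new raw sample with the running estimate.
                estimate = tuple(alpha * r + (1 - alpha) * e
                                 for r, e in zip(raw, estimate))
                smoothed.append(estimate)
            return smoothed

        noisy = [(0.10, 0.00, 0.40), (0.16, 0.01, 0.38), (0.09, -0.02, 0.41)]
        print(smooth_positions(noisy))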



          I'm semi of the view that companies think VR's moment has already passed, and that's why we're seeing movement in AR. The trouble there, though, is that AR is also an old, reheated idea and, to be honest, isn't remotely as impressive as they wish it was.

          They're basically trying to sell this as the future now:

          [image]

          While it's still mostly like this:

          [image]

          Yet people envisage it being like this:

          [image]

          And yet even if it was I suspect it'd be a hot minute before they returned to keyboard and mouse





            If we're talking about the headset of the future, it will be nothing more than a fancy receiver. We need 1:1 cloud processing, see-through screens and wireless power to make any of this a reality, though.
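
            As a rough sketch of why 1:1 cloud processing is the sticking point, here's a back-of-the-envelope latency budget - every figure is a ballpark assumption, not a measurement:

            Code:
            # Rough, illustrative motion-to-photon budget for cloud-rendered AR/VR.
            MOTION_TO_PHOTON_BUDGET_MS = 20   # commonly cited comfort threshold

            network_round_trip_ms = 10        # optimistic edge-server round trip
            encode_decode_ms      = 8         # encode on server + decode on device
            server_render_ms      = 5         # server-side frame render
            display_scanout_ms    = 4         # panel refresh / scanout

            total_ms = (network_round_trip_ms + encode_decode_ms
                        + server_render_ms + display_scanout_ms)
            print(f"{total_ms} ms against a {MOTION_TO_PHOTON_BUDGET_MS} ms budget")
            # 27 ms against a 20 ms budget - over, even with generous numbers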



              Originally posted by wakka View Post
              But I do personally think that is where we're headed. And I recognise that is a very unpopular opinion, generally and especially on this forum.
              I'm sure it's where they'd like to get to, but the problem is always going to be one of input - getting displays in contact lenses is somewhere I think we can get to, but then you need a way of putting something on that screen and being able to interact with it. You're not going to see miniaturisation that small for it all to be self-contained.

              There's a reason we're still using keyboards - they work very well - and whilst touch screen input has gotten loads better, it still cannot beat a physical keyboard. And voice input? Alexa and Hey Google make me want to beat them with a stick. Two sticks. With nails. It's great when it works, but when it fails, then you have to correct it and you don't have a backspace button. It's a real PITA.

              Anything else really is the realm of science fiction - having some kind of force feedback so you can type in the air as if you really do have a keyboard on a desk.

              And having to have something on your face, or in your eye all the time just so you can use a computer? That seems like more of a hassle than simply having a screen on in the background you can walk away from.

              I think VR is great (and why I have eight different VR headsets), but I don't want something in my eye 24 hours a day that some unscrupulous corporation is going to want to push adverts to.



                Originally posted by MartyG View Post
                whilst touch screen input has gotten loads better, it still cannot beat a physical keyboard.
                Yep. And maybe there is a tech solution somewhere - haptic feedback or something - but I don't see a situation in which a touch screen would be better than physical buttons.



                  Originally posted by MartyG View Post

                  Anything else really is the realm of science fiction - having some kind of force feedback so you can type in the air as if you really do have a keyboard on a desk.

                  And having to have something on your face, or in your eye all the time just so you can use a computer? That seems like more of a hassle than simply having a screen on in the background you can walk away from.
                  I think Musk's Neuralink brain implants are more likely to be the future of this type of stuff than a small contact lens or a screen. He's pushing research into brain implants as a disability aid, and has done his first implant this week.

                  Imagine an implant with the ability to alter your perception of reality - it would be far more immersive than any headset. It would also have far wider reach, with the ability to treat and cure the brain: directly stimulating or suppressing parts of our brain chemistry to cure things like depression and fibromyalgia, removing the need for certain medicines.

                  I can't see any downsides to the above, as I believe Musk would use it responsibly and wouldn't use it to reprogram us into Mindless Musk Super Soldiers that don't feel pain and don't stop until they're completely incapacitated.
                  Last edited by Lebowski; 31-01-2024, 15:33.



                    You guys point out that touchscreens aren't better than physical keyboards. That's true! And yet, how often do you use a touchscreen keyboard? All the time - because sometimes it's the right tool for the job.

                    I see hand and eye tracking the same way. If we envisage a point where we take a similar path with it that we did from the rudimentary touchscreens of the 80s and 90s to the iPhone in 2007, I think there are going to be situations where it's going to be the right tool for the job. It's going to be faster and more efficient to use it, even if a mouse is still much better suited for precise graphics work.



                      Originally posted by Lebowski View Post

                      I think Musk's Neuralink brain implants are more likely to be the future of this type of stuff than a small contact lens or a screen. He's pushing research into brain implants as a disability aid, and has done his first implant this week.
                      The last thing I want is Elon Musk inside my head



                        Originally posted by wakka View Post
                        You guys point out that touchscreens aren't better than physical keyboards. That's true! And yet, how often do you use a touchscreen keyboard? All the time - because sometimes it's the right tool for the job.
                        But if the idea is to replace a computer, then it needs functional keyboard input, because the keyboard is the right tool for that job - one that hand/eye tracking isn't going to properly replace, and that a touchscreen keyboard also doesn't replace for anything other than rudimentary input. It's the difference between telling the computer what you want it to do, and the computer interpreting what you're trying to get it to do through gestures/voice - and fixing the errors it interprets incorrectly is time-consuming and irritating.

                        I wouldn't want to use either in my day job over a physical keyboard - I touch type, and you need to be able to feel keys to do that.
                        Last edited by MartyG; 31-01-2024, 16:19.



                          Originally posted by MartyG
                          But if the idea is to replace a computer, then it needs functional keyboard input, because the keyboard is the right tool for that job - one that hand/eye tracking isn't going to properly replace, and that a touchscreen keyboard also doesn't replace for anything other than rudimentary input.

                          I wouldn't want to use either in my day job over a physical keyboard - I touch type, and you need to be able to feel keys to do that.
                          I think you're misunderstanding me. My example of the touchscreen keyboard's usefulness despite its inherent drawbacks was intended to demonstrate the possibility that eye and hand tracking can be useful, even if more precise options exist.



                            I'm sure there are situations where it might be useful, but it isn't a replacement. And the messaging with Apple Vision Pro seems to be this is a replacement for your MacBook ("Spatial Computing") - and I can't see that being a thing if you want to get work done.

                            It's a more expensive Meta Quest Pro and has the same issues. I think people are mostly going to end up using this as a media consumption device - a very expensive media consumption device.
                            Last edited by MartyG; 31-01-2024, 16:27.



                              Originally posted by wakka View Post
                              If we envisage a point where we take a similar path with it that we did from the rudimentary touchscreens of the 80s and 90s to the iPhone in 2007, I think there are going to be situations where it's going to be the right tool for the job. It's going to be faster and more efficient to use it.
                              I'm not sure I agree. Is there anyone in the world who can type regularly on their phone or tablet screen without typos everywhere or autocorrect making them look like an idiot? I don't think there is. I think the reason we use them is not because they are faster or even really more efficient, but simply because they are built into the devices we use all the time - devices that have already proven their use in many other ways. Whereas with the Vision Pro, a lot of how it is being sold rests on things that our other devices already do better.

                              Now if they can bring us into that ecosystem and make it essential, then yes I think we'll end up using those gesture and touch interfaces but only because they are in the device that has proven its use in other ways. That's what I think anyway.

                              Edit: maybe we're kind of saying the same thing but coming at it in different ways.

                              Another edit: something I don't see talked about a lot (or haven't seen anywhere, actually) is that I can see the eye tracking, gestures and vocal input being applied to a phone or tablet, and probably ending up more useful there than in a VR device.
                              Last edited by Dogg Thang; 31-01-2024, 16:28.



                                Originally posted by MartyG View Post
                                I'm sure there are situations where it might be useful, but it isn't a replacement. And the messaging with Apple Vision Pro seems to be this is a replacement for your MacBook ("Spatial Computing") - and I can't see that being a thing if you want to get work done.
                                Sure, that's fair. If you go back to my original post about hand and eye tracking that you originally replied to, I wasn't really talking about today's Vision Pro - I said that if we think about today's hand and eye tracking as the rudimentary touchscreen technology of the 80s and 90s, and accept that a future version could approximate the kind of quality increase seen in the iPhone's 2007 touchscreen, there's potential for the usability and usefulness of it to grow to the point where there are jobs that it becomes 'the right tool' for.

                                That's really what I'm getting at - not the idea that you would sit and do a day's work in today's Vision Pro and not use a mouse and keyboard.

                                I also think that even if the hand and eye tracking gets much better in the way that I think it's possible it will, you will still want a keyboard for a day's work. I don't see that being overcome. Mouse and trackpad usage, though, I could see possibly falling by the wayside for more casual navigation.

                                Originally posted by Dogg Thang
                                I think the reason we use them is not because they are faster or even really more efficient, but simply because they are built into the devices we use all the time - devices that have already proven their use in many other ways. Whereas with the Vision Pro, a lot of how it is being sold rests on things that our other devices already do better.

                                Now if they can bring us into that ecosystem and make it essential, then yes I think we'll end up using those gesture and touch interfaces but only because they are in the device that has proven its use in other ways. That's what I think anyway.
                                Yes - exactly! We use touchscreen keyboards because they are convenient! That's exactly how I think we could feel about hand and eye tracking in the future.

                                There are so many problems to be overcome for head-mounted virtual displays to supersede what we have now, but I think this is a relatively minor issue, because hand and eye tracking will get better - and then we will use it, because having a million virtual displays we can throw in front of our eyes is convenient, and the most convenient way to interact with them is the hands and eyes already attached to our bodies.
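
                                As a minimal sketch of the interaction model I mean - eyes do the pointing, a hand gesture does the clicking; the names and structure here are hypothetical, not any real API:

                                Code:
                                # Gaze-plus-pinch selection: the gaze ray picks
                                # the target, a pinch confirms it. Illustrative only.
                                def pick_target(gaze_ray_hits, pinch_detected):
                                    # gaze_ray_hits: UI elements under the gaze
                                    # ray, nearest first; empty if none.
                                    if not gaze_ray_hits or not pinch_detected:
                                        return None
                                    return gaze_ray_hits[0]

                                print(pick_target(["close button", "window"], True))
                                print(pick_target(["window"], False))  # no pinch
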
                                Last edited by wakka; 31-01-2024, 16:39.
