Matthew Jarvis finally got a chance to experience the long-awaited wearable tech eyepiece from Google – but will he give Glass a pass?
Despite months and months of writing about the highly anticipated and much-discussed Google Glass eyewear, I have to admit – I’d never actually tried it on myself.
As such, I had remained largely on the fence about whether Glass would herald a new era of (as non-believers predict) privacy concerns, disconnected conversations and overly expensive copycat devices, or instead (as Glass lovers champion) usher in the next major technology revolution of connectivity, interactivity and convenience.
All this changed when I recently attended the FLUX Innovation Lounge in London (see a full write-up here), at which I was given the chance to try the divisive tech.
As someone who doesn’t wear glasses, I immediately noticed that the minimalist band, arms and nose pads didn’t feel nearly as comfortable as the scores of grinning Google Glass users in press images might suggest.
Perhaps it was my unfamiliarity with wearing glasses, but my discomfort was compounded by the device’s nose pads feeling flimsy – particularly given the high price of the tech.
Once I had finally arranged the eyewear into a form approaching comfortable, I began to try out the miniature display now occupying a portion of my right eye.
The screen switches off when not in use, presumably to avoid distraction during other tasks, but I still found the rectangle of glass continually bobbing in my vision hard to ignore.
The software itself is quite intuitive. The touch-sensitive right-hand arm of Glass acts as your control, with directional swipes and taps used to browse through clear and simple menu options.
The standard display, once activated via a single tap, is a simple digital clock, with the voice-activation phrase ‘OK, Glass’ written below.
The swipes and taps registered quickly, and aside from a few slow menu changes, the Android-based operating system was easy to pick up and use.
Voice activation was also impressive, with my ‘OK, Glass, translate this’ command registering on my first attempt, despite a reasonably loud room.
The command opened WordLens, an app previously found on smartphones and tablets which translates foreign text to English (and vice-versa) in real-time.
The camera feed that opened stuttered as I attempted to line it up with a demonstration warning sign written in Spanish – likely down to the early stage of the Glass version of the app – but even so, it was impressive to see, and a clear sign of the benefits Glass could offer in the future for communicating and interacting with new environments.
I tried out a few more of the features of Glass, including the directions service, which quickly displayed a GPS-style route back home – a feature I can see being a massive boon, especially given the hassle of having to break out a smartphone or locate a map when walking around large unfamiliar cities.
Overall, following my hands-on experience, Glass remains a novelty concept to me. While many of its features and apps are indeed very useful, most, if not all, are available on smartphones – which for the majority of consumers require no additional investment.
For me, the convenience of Glass failed to outweigh its discomfort and air of triviality. However, it’s early days for wearable tech, and I can easily see Glass evolving into something essential in the future. The promise is there – only time will tell if that promise can eventually justify the cost.