
Augmented Reality – “smart glasses” December 9, 2012

Posted by davevankampen in Mobility.

Outline

  1. Introduction
  2. Challenges
    • Reduction in Technological Capabilities
    • Battery Life
  3. Opportunities
    • Reduction in Technological Capabilities
    • Android Development Environment
  4. Conclusion

Introduction

Augmented reality is all about adding to and enhancing the environment the user is already in. It is about being mostly or totally unobtrusive, to the point of being invisible, until it is needed, and then adding value by displaying information about what the user is seeing, or handling, or perhaps thinking about. It is about adding value, and not taking any away. Granted, this is quite the tall task.

In a way, the smartphones that have proliferated today are already augmenting our reality, by the very meaning of the word. Dictionary.com defines augment as an increase in size, number, strength, or extent. I feel it is the latter two that really fit this type of augmentation. Consider this somewhat trivial example: out to dinner with friends, you are discussing your favorite movies. Someone mentions “Air Force One” and says, “wasn’t the bad guy the same guy that was in the Harry Potter movies?” Five or ten years ago, that question would have had to go unanswered if no one knew. Today, with smartphones and a reliable network connection, we have effectively strengthened and extended our intelligence and “memory.” You can very quickly retrieve an answer to a query like this, and the conversation can be resolved right then and there. This is a good (though somewhat silly) example of how smartphones augment our reality.

So, naturally, my first thought when considering glasses that augment one’s literal view of reality was that the technology, both hardware and software, in smartphones could easily be adapted for this application. And in fact, that is where some current projects are already headed. The first main reference for this post is Google’s Project Glass. Google, the developer of the Android smartphone operating system, is already hard at work making an augmented reality device built around that system. They are obviously a powerful player in the market already, and have many skilled engineers on their side.

Challenges

However, this does bring us to what I consider a couple of the key challenges in the “smart glasses” market. I think it is important to resist the urge to directly translate the smartphone market into this field. That would be too easy, and there are a few differences that make it impractical.

For one, as pointed out during a feedback session by one of my classmates, smartphones are heavily optimized for the communication media available to them: the touchscreen, haptic feedback, high-res displays, and so on. None of these carry over to a smart-glasses application. The device needs to be see-through, and should preferably require interaction only from the user’s eyes; the more physical interaction you require, the more obtrusive the device becomes. So if the smartphone operating system you are using is built entirely around the assumption of a big, powerful, high-res screen, there will be significant challenges migrating that architecture to the smart-glasses realm. If nothing else, a significant paring-down of features and functionality is necessary in order to refactor the UI so it can be seen through, instead of just looked at. It needs to be in the field of view, but not the focus of the view.

Another significant challenge for smart glasses is battery life. Smartphones, a similar type of device with respect to functionality, have large batteries but still only operate for a few short hours, and that is with the screen off most of the time. Depending on how it is implemented, I see smart glasses as operating in some form of “always on” mode. To offer real-time augmenting capabilities, they should always be there, ready to provide information on whatever the wearer is viewing. In fact, this sort of “real-time” response is precisely what Microsoft recently filed a patent for. In that application, Microsoft discusses possible ways a visual device can augment views: registering attendance at an event, detecting the field of view, determining the user’s geographic position at the event, and providing information about what is near them or within their field of view. These functions are wonderful, but of course very difficult to implement, especially if the device is simply a refactored smartphone. It demands fast network connections, long-lasting battery life, rapid processing power, and likely large memory stores to hold information about both the user and the operating environment. And that is all without mentioning the cost of creating such a device.
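
As a rough illustration of the kind of work such a device would have to repeat continuously, here is a minimal sketch, in plain Java, of how a wearer’s position and compass heading could be used to pick out which nearby points of interest fall inside their field of view. The PointOfInterest class, the view-cone angle, and the lookup radius are assumptions made purely for illustration; none of this comes from Microsoft’s filing.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: given the wearer's position and compass heading,
// keep only the points of interest that sit inside their view cone.
public class FieldOfViewFilter {

    // Simple container for an annotated location (made up for this example).
    static class PointOfInterest {
        final String label;
        final double lat, lon;
        PointOfInterest(String label, double lat, double lon) {
            this.label = label; this.lat = lat; this.lon = lon;
        }
    }

    static final double VIEW_CONE_DEGREES = 60.0;   // assumed horizontal field of view
    static final double MAX_RANGE_METERS = 200.0;   // assumed lookup radius

    // Returns the points of interest the wearer is currently "looking at".
    static List<PointOfInterest> visible(double userLat, double userLon,
                                         double headingDegrees,
                                         List<PointOfInterest> nearby) {
        List<PointOfInterest> result = new ArrayList<>();
        for (PointOfInterest poi : nearby) {
            double distance = distanceMeters(userLat, userLon, poi.lat, poi.lon);
            double offAxis = Math.abs(normalize(
                    bearingDegrees(userLat, userLon, poi.lat, poi.lon) - headingDegrees));
            if (distance <= MAX_RANGE_METERS && offAxis <= VIEW_CONE_DEGREES / 2) {
                result.add(poi);
            }
        }
        return result;
    }

    // Haversine distance between two latitude/longitude points, in meters.
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * 6371000.0 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Initial bearing from the wearer to the point of interest, in degrees from north.
    static double bearingDegrees(double lat1, double lon1, double lat2, double lon2) {
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(Math.toRadians(lat2));
        double x = Math.cos(Math.toRadians(lat1)) * Math.sin(Math.toRadians(lat2))
                 - Math.sin(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2)) * Math.cos(dLon);
        return Math.toDegrees(Math.atan2(y, x));
    }

    // Normalize an angle difference into the range [-180, 180].
    static double normalize(double degrees) {
        double d = degrees % 360;
        if (d > 180) d -= 360;
        if (d < -180) d += 360;
        return d;
    }
}

On a real device the position and heading would come from GPS and the compass or gyroscope, and the list of nearby points would come from a network query, which is exactly where the battery, processing, and connectivity demands described above come from.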

So, there are significant challenges. However, if there were no challenges in a given area, then there would be no opportunity for business advancement or profit, because anyone would be able to develop a competitive product. Market areas that provide challenges also provide opportunities, and those will be discussed next.

Opportunities

One significant area of opportunity for the smart-glasses application falls in line with one of the aforementioned challenges. I think the display technology for smart glasses will make or break a project’s success, more than almost any other factor (aside from cost). A great deal of the battery power, processing power, and cost of a smartphone is invested in the screen. This is because the screen is the endpoint of the user’s field of view: the more vibrant, bright, and responsive it is, the more satisfied the user is with the experience.

Fortunately, due to the nature of the implementation, this is not necessary on smart glasses. In fact, it would be a significant hindrance. Smart glasses need to be seen through, not looked at. To be safe, they should never block the user’s field of view. Of course, the user should not wear them when driving or doing similar activities that require their full attention. But even during an activity where it is safe to wear them, they should be as visually “out of the way” as possible.

This is a good thing, though, because it provides the opportunity for lower-power displays: lower power, lower resolution, and lower response granularity (a smartphone needs to know precisely where the user touched; smart glasses likely do not need the same granularity, since the viewing angle changes so rapidly anyway). A low-res, mostly transparent display with a low-res camera acting as an eye sensor should suffice. This requires less battery and less processing power, and thus lower cost. So though it is different from the smartphone market, it provides an opportunity.

Though the Android operating system has many shortcomings in this application, it is still a viable candidate for being the OS for smart glasses. One reason is the developer network and environment. Google has done an excellent job creating a development environment and application deployment network that encourage many developers to get involved. In addition, Android does a good job of abstracting the specifics of hardware sensors away from the software developer; it simply provides easy-to-use, well-documented APIs that let software access hardware sensor data. This developer base will likely get behind a new and different platform quickly, as long as the development suite and toolchain remain consistent with what they have grown to expect. This is a valuable asset. The project doesn’t have to be Project Glass (which will likely cost in excess of $1K); it can be any device running Android.
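
As an example of how little hardware-specific code that abstraction leaves to the application developer, here is a minimal sketch of reading one sensor through Android’s standard SensorManager API. The activity name and the choice of the accelerometer are just for illustration; this is not code from Project Glass or any shipping device.

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

// Minimal sketch: the same SensorManager API works regardless of which
// vendor's accelerometer chip is actually inside the device.
public class OrientationDemoActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;
    private Sensor accelerometer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Listen only while the activity is visible, to conserve battery.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds acceleration (m/s^2) on the x, y, and z axes.
        Log.d("OrientationDemo", "x=" + event.values[0]
                + " y=" + event.values[1] + " z=" + event.values[2]);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // No-op for this sketch.
    }
}

The same listener pattern applies to the compass, gyroscope, and light sensors, which is part of what would make reusing this toolchain on glasses hardware attractive to existing Android developers.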

Conclusion

There are some obvious challenges associated with working on augmented-reality smart glasses. A battery that can last as long as the user typically wears glasses and goes about their day is atypical even for smartphones today, and those have the luxury of being able to house (relatively) large batteries internally. This is not an option for smart glasses, and it will need to be worked out as battery technology no doubt advances. The other key challenge (the reduction in technological capabilities) is also an area of benefit and growth, as it allows for simpler hardware.

Overall, I think there is a lot of potential in the coming years for smartphone developers to expand their offerings into augmented reality smart-glasses. Google is trying already, but I see them paring “Project Glass” down to make it more affordable and usable for the everyday customer.
