
Bluetooth Technology: the Unsung Hero February 12, 2013

Posted by danthomas3 in Mobility.

Each new year inflates anticipation of new consumer electronics and technology at annual events like the Consumer Electronics Show and at private, auditorium-filled venues hosted by technology giants like Microsoft, Apple, and Sony. A considerable number of freelance engineers attend the larger technology conventions hoping their wizardry outshines the bigger fish sharing the pond. Free-market consumers and technology bloggers alike salivate for the latest gadget that pushes the envelope well beyond the previous year’s now-stale offerings. What could possibly come next from the theorized fruits of Moore’s Law a mere year later?

Predictably, engineers continue to unveil more energy-efficient and sophisticated general-purpose and gaming processors, along with networking devices of increased agility and throughput. Form factors have gotten significantly slimmer and conspicuously more appealing. Much is made of the science of fitting more pixels, and thus more clarity, into displays and cameras. Larger companies manage to offer these latest gadgets at prices matching similar mainstream products, sometimes with little price inflation at all. Yet during each year’s unveiling, the latest developments in Bluetooth’s potential, aside from audio transmission, seem understated.

I would compare many of the aforementioned engineering feats to inventions like the automobile, considering that consumers can do more, faster, and with greater portability. But what about accessibility and universality, with the latter provoking reluctance among profit-driven vendors?

Bluetooth wireless technology aims to serve as the universal low-cost, user-friendly, omnidirectional air interface that will replace the plethora of proprietary cables people carry to connect their personal devices [1]. The latest protocol stack supports up to 128 kb/s audio transmission, so transmitting packets for other uses is certainly feasible, perhaps by exploiting the control path of the Bluetooth protocol stack. Basic layered identification and addressing are already present. The push is for developers to leverage this infrastructure-less technology in applications supporting a non-exhaustive list of features like P2P gaming, file sharing, and resource discovery. Perhaps such reawakened investment from developers would amplify Bluetooth’s presence at electronics shows as its host machinery continues to evolve.

[1] C. Bisdikian, “An Overview of the Bluetooth Wireless Technology,” IBM Corporation.


Wearable Computing in Baseball December 17, 2012

Posted by downeyjm in Mobility.

Wearable computing holds the potential to change sports, and the way we think about them, for the better. Embedding sensors into athletes’ uniforms, cleats, and pads could be a game changer: for medicine, in treating and preventing injuries; for game analysis, in providing coaches, statisticians, referees/umpires, and fans with more complete information about each game; and for the athletes themselves, who gain valuable feedback to aid in improving their skills.
This applies specifically to baseball, and baseball-specific wearable computing applications already exist. For example, a group of students from Northeastern University has developed a compression shirt equipped with motion sensors and conductive threads that detects the movement and acceleration of the pitching arm. The shirt is linked to software that records the information, giving coaches real-time data on a pitcher’s mechanics without the need for a lab [1].
The design proposed here attempts to monitor a baseball player’s movements as well as the wear and tear their body sustains during the course of a game and season.

Baseball consists of hitting, fielding, pitching, and base running, so the application needs to provide good sensing data for each of those actions. To do this, there are three important aspects of the player the application needs to sense: the player’s location, motion, and the forces/pressure their body takes.
This data can then be used in a variety of ways. The location data can give a more accurate reading of a player’s true defensive value by showing the player’s range (how much area they can cover) when used in conjunction with traditional stats such as errors. Injuries can be prevented or treated more effectively with motion sensors that monitor a pitcher’s mechanics and provide feedback based on them.
The general idea of the application is for the player to have certain sensors embedded into their uniform which wirelessly communicate, over an 802.11n network, with software that performs any necessary pre-processing prior to putting the data into a database to be analyzed as needed. A summary of the various sensors and their implementation is below.

Location Sensors
To measure a player’s location, a Bluetooth local positioning system is used. Bluetooth is chosen over technologies such as Wi-Fi and RFID due to its combination of security, accuracy, and coverage area [3]. To implement this, a set of beacons is placed around the stadium at fixed locations (these act like the satellites in GPS), and the player carries a transponder that communicates wirelessly with those beacons. The system uses triangulation to determine the player’s location. The location sensor sits on the player’s hat (or helmet), as this is easy to implement, unobtrusive to the player, provides the clearest path to the beacons (the highest point on the player), and adds to the sensor’s durability, since hats generally don’t take a beating from diving or from being hit by balls.
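A minimal sketch of how the beacon geometry could resolve a position: given three fixed beacons and distance estimates derived from the radio signal, the player's 2-D coordinates fall out of a pair of linearized circle equations. (Strictly this is trilateration; the function, coordinates, and beacon layout below are illustrative, not taken from the cited system.)

```python
def trilaterate(b1, b2, b3, r1, r2, r3):
    """Solve for (x, y) given three fixed beacon positions b1..b3
    and estimated distances r1..r3 to each beacon."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # Subtracting pairs of circle equations removes the x^2/y^2 terms,
    # leaving two linear equations A*x + B*y = C and D*x + E*y = F.
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2); E = 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = A * E - B * D  # zero if the beacons are collinear
    return (C * E - B * F) / det, (A * F - C * D) / det
```

In practice the distance estimates are noisy, so a real system would use more beacons and a least-squares fit, but the geometry is the same.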

Motion Sensors
To measure a player’s mechanics, a set of motion sensors could be embedded in the player’s undershirt, as developed by Northeastern University [1]. A pants version of the shirt could also be created to aid running form, but it is unlikely every player would want to wear them during games, and many applications (as well as knowledgeable trainers) already exist to teach proper running form, so that will not be part of the application.

Force Sensors
Lastly, it is important to know what stresses are being put on a player’s body in terms of force. These are especially important for catchers and pitchers: the acceleration of a pitcher’s arm can be indicative of future arm injuries, and a catcher takes a beating catching and blocking 90 mph pitches, on top of the stress all the squatting puts on their knees. Sensing how much pressure the catcher’s knees take could help advise coaches on when to give days off, keeping legs fresher for later in the season and preventing career-threatening injuries.
To measure the force (or wear and tear) a player sustains, pressure sensors can be used. This is most beneficial for catchers, as they take the biggest beating on the field. Sensors could be placed in a player’s cleats to determine where their weight falls on their feet, which has a couple of uses. The foot sensors could estimate the pressure put on a catcher’s knees when squatting by showing how the player’s weight is distributed (along with known variables such as height and weight). Other force sensors could be placed on a catcher’s helmet to measure the impact of a foul tip to the head, helping prevent and detect concussions.
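As a sketch of how the cleat data might drive rest recommendations: accumulate whatever pressure exceeds a squat threshold and flag the player once the total passes a trainer-set limit. The function, thresholds, and units here are invented for illustration only.

```python
def rest_recommended(pressure_readings, squat_threshold, weekly_limit):
    """Sum the excess pressure (above a squat threshold) seen by
    in-cleat sensors; recommend rest once it passes a weekly limit.
    All numbers are placeholders a trainer would tune."""
    load = sum(p - squat_threshold
               for p in pressure_readings if p > squat_threshold)
    return load >= weekly_limit
```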

Potential Issues
As with all applications there are some potential issues involved with this one.
Cost: The first is that the cost must be low enough to make this a viable option for players. This does not seem like an issue for professionals, who would gladly pay a few hundred or even a few thousand dollars if the application can save a team from overpaying a player by millions (through improved analysis) or prevent an injury. It may be a bit pricey, however, for most non-professional players, which limits its market.
Size/Weight/Unobtrusiveness: The weight and size of the application shouldn’t be much of an issue given today’s technology; the added weight is a minuscule fraction of a uniform’s total, making for an overall unobtrusive product.
Power: Providing power to these devices seems to be a fairly decent obstacle but one already overcome (at least to a degree) by the Northeastern University development team [1].
Durability: Durability could be an issue given the sensors’ potential for sustaining forceful impacts, but that is hopefully minimized by their placement. The other durability issue is weather, which means the sensors and power source need to be somewhat weather resistant (i.e., waterproof).
Wireless Infrastructure: The last major issue is that the wireless infrastructure handling all the data transactions needs to be reliable and secure.

1. “Wearable Computers for Pitchers Could Come To Major League Baseball.” Popsci. 2012.
2. “Wearable Computing: Sports.” ETH. 2012.
3. “Real Time Location Systems.” Clarinox Technology Pty Ltd. Nov 2009.

Nano-sized GPS Tracker December 11, 2012

Posted by louloizides in Mobility.


Impoverished and war-torn countries suffer from high kidnapping rates [1]. Secret GPS tracking devices that could help recover kidnapping victims are a staple of science fiction plots, spy dramas, and cop shows, but a completely concealed GPS tracker really could aid in the recovery of kidnap victims.



The current state of the art for a GPS tracking device is a small box that can be worn on an armband or kept in a pocket [2]. The problem with using such a device in abduction situations is that it would be found and confiscated very quickly. For a GPS tracking device to be completely concealed, it would have to be disguised as another object or hidden inside the user’s body.

Hidden inside the body would be the preferred implementation of this device as it would be impossible to detect by the naked eye. There are two routes for accomplishing this – an implanted device and a device that could be swallowed. A device that can be swallowed is ideal for privacy concerns as the user could stay separated from the tracker and activate it as necessary.

For this implementation the device couldn’t be larger than a large medicine capsule, size 00, which is approximately 0.8 inches long and 0.3 inches in diameter [3]. Several other design considerations are required. Any device would need to allow the user to activate or deactivate it as necessary. For a tracker in pill form, the device might be disposable and only activated once it’s swallowed. If a tracker were hidden in another object, however, the design would have to balance privacy considerations against the ability to activate it when the abductee might not be able to reach the device.

The type of networks available for a GPS tracker must also be considered. Cellular GSM, CDMA and LTE networks are obvious choices for such a device. But in kidnap-prone areas such as Afghanistan and Somalia, these networks might not always be available. So a worthwhile device must either be capable of ad-hoc networking or of sending out some type of locator beacon.

For a GPS tracker to be useful location accuracy is an important consideration. A tracker’s location would have to be accurate enough for an abducted person to be located, and GPS positioning devices can lose accuracy indoors due to signal loss and multi-path propagation. The GPS tracker, therefore, would likely have to supplement its GPS position broadcast with another form of locator, such as the beacon previously described.

Data delay tolerance is extremely important for this type of device. The tracker won’t always be able to be connected to a network. It should buffer a reasonable amount of location data in case reliable connections aren’t available. In addition, location data should be able to be supplemented by movement data when necessary, such as a person’s change in speed and direction based on accelerometer readings. The buffer also allows data to be stored and sent to the server more efficiently in bursts rather than being sent instantly.
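One way to sketch the buffering described above, with a bounded capacity so memory stays predictable (class and field names are hypothetical):

```python
from collections import deque

class LocationBuffer:
    """Buffer GPS/accelerometer fixes while offline; flush them in one
    burst when a connection becomes available."""
    def __init__(self, capacity=1000):
        self.fixes = deque(maxlen=capacity)  # oldest fixes dropped first

    def record(self, timestamp, lat, lon, speed=None, heading=None):
        """Store a fix; speed/heading come from the accelerometer."""
        self.fixes.append((timestamp, lat, lon, speed, heading))

    def flush(self, send):
        """Try to send all buffered fixes as one burst via send(batch);
        keep them if the send fails. Returns the number attempted."""
        batch = list(self.fixes)
        if batch and send(batch):
            self.fixes.clear()
        return len(batch)
```

Sending in bursts like this also lets the radio sleep between flushes, which matters given the power budget discussed below.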

Of course, for all of the technical requirements, increasing the knowledge that such a device exists and is in use might actually make the most significant impact to safety by lowering abduction rates.


Privacy Concerns

Carrying a tracking device is a significant privacy risk itself. And the user of the device must have a reliable way to activate or deactivate it, while ensuring that the device couldn’t be activated or deactivated unintentionally.

The interface for obtaining tracking data should be extremely secure so that a person’s location isn’t used against them. Furthermore, having a buffer on the device also creates a privacy concern: someone could potentially read a device’s buffer and find out where the user has been. Both the tracking data and the buffer, therefore, should be encrypted whenever possible. The designer of the tracking device must balance encryption, privacy requirements, hardware requirements, battery life requirements and size constraints.


Current Art

Most GPS trackers today are worn in hidden clothing pockets or on armbands, where they can be easily found. Interest in creating a device small enough to be swallowed does exist. One inventor has already patented an ingestible GPS tracking device (patent no. US7554452 [4]). The device, however, does not appear to exist and the inventor is likely a patent troll. Additionally, a company in Mexico, Xega, produces implantable tracking devices to help combat kidnappings. But these devices are RFID tags and would become useless if separated from an external GPS tracker [5].


Ideal Design

At the least this device would have to have the following components:

1) A cellular radio
2) A GPS chip
3) A GPS antenna
4) A CPU to process the data
5) WIFI to supplement the cellular connection, provide location data and act as a beacon (optional)
6) A memory buffer for delay tolerance (optional)

Producing this device in pill form is still outside of the realm of current technology. In order to build this tracker two key technological improvements have to happen. The size of the necessary components has to shrink significantly. Furthermore, long-term power sources need to be available to power these components.

Using power from a person’s body would be possible, but impractical. Pulling power from body heat would not work, as the thermal converter needs both a cold and a hot side to generate a current [6]. Another idea from the field of nanorobotics is to create a fuel cell using blood glucose and oxygen. Unfortunately, this combination yields only very small amounts of power, currently estimated in tens of picowatts [7]. Modern GPS chips require milliwatts of power – roughly eight orders of magnitude greater than what glucose fuel cells can currently provide.

Batteries used in hearing aids, however, have excellent energy densities. A size 13 hearing aid battery is 7.9 mm in diameter (slightly larger than the size 00 pill), 5.4 mm tall and provides 300 mAh at 1.4 V. This is enough capacity to provide a 50 mW 1.6 V GPS chip with close to 36 hours of power if it updates its location for 10 minutes out of every hour [8].
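The arithmetic behind that figure can be checked. The ~70% boost-converter efficiency below is my own assumption (the source states none, but the chip runs at 1.6 V off a 1.4 V cell, so some conversion loss is unavoidable); with it, the estimate lands near the quoted 36 hours.

```python
battery_capacity_ah = 0.300   # size 13 zinc-air cell, 300 mAh
battery_voltage = 1.4         # volts
chip_power_w = 0.050          # 50 mW GPS chip while active
duty_cycle = 10 / 60          # location updates 10 minutes per hour
converter_efficiency = 0.70   # ASSUMED 1.4 V -> 1.6 V boost loss

energy_wh = battery_capacity_ah * battery_voltage            # 0.42 Wh
avg_draw_w = chip_power_w * duty_cycle / converter_efficiency
runtime_hours = energy_wh / avg_draw_w                       # about 35 h
```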



A clear need exists for completely concealable GPS tracking devices to counteract abductions and kidnappings. The existence of companies like Xega suggests a market for combating abduction in countries like Mexico, where much of the population is poor but there are affluent citizens who can afford tracking.

It’s questionable whether the technology would be developed for this application alone, but existing technology is on the verge of making a pill-based GPS tracker a reality, and current developments will likely produce the required components naturally.



  1. Dickenson, Elizabeth, “Kidnap Capital”, Foreign Policy Magazine Online, http://www.foreignpolicy.com/articles/2011/07/05/kidnap_capital
  2. Meitrack Press Release, “The Latest and Greatest in GPS Trackers-Introducing the MT90,” http://www.meitrack.net/about-meitrack/news/279-the-latest-and-greatest-in-gps-trackers-introducing-the-mt90
  3. Capsule Connection Capsule Sizing Information, http://www.capsuleconnection.com/capsules
  4. Cole, Gary, USPTO Patent US7554452, http://www.google.com/patents/US20050228268
  5. Purvis, Carlton, “GPS Implants May Be More Fiction Than Science,” Security Management, http://www.securitymanagement.com/news/implanted-gps-tracking-may-be-more-fiction-science-008954
  6. Leonov, V., Torfs, T., Fiorini, P., Van Hoof, C., “Thermoelectric Converters of Human Warmth for Self-Powered Wireless Sensor Nodes,” IEEE Sensors Journal, Vol. 7, No. 5, pp. 650-657, May 2007.
  7. Hogg, Tad, “Chemical power for microscopic robots in capillaries,” Nanomedicine, 2 Oct 2009, http://www.nanomedicine.com/Papers/NanoPowerModel2010.pdf
  8. Duracell Size 13 Battery Specifications Sheet, http://www.duracell.com/media/en-US/pdf/gtcl/Product_Data_Sheet/NA_DATASHEETS/13.pdf

Mobile Hunting Application December 11, 2012

Posted by brltkd in Mobility.


It is important while hunting to be aware of your location as well as the applicable rules and regulations. Hunters often go to different locations, and regulations may differ between them. They also need to know the land where they are hunting to ensure they are on the property legally and are not trespassing. Mobile technology can make this information easily accessible to hunters.

Example Scenario

You are set to walk from the car to your hunting location and open the application. It displays an alert reminding you that you are in a shotgun-only management unit, so you are sure to use the correct weapon. This is a new area of public land that you have scouted, but you are not very familiar with the topography. You follow the directions to the previously marked spot where you will hunt for the day. A few minutes after arriving, your phone vibrates to alert you that the legal opening time has passed.

After a while you decide to take a walk and explore a new area. Your phone vibrates to indicate that you are approaching the property line. You check the map to verify your location and change direction to avoid crossing into private property. About an hour before the end of hunting hours you are alerted to start heading back, since the application knows it should take about 45 minutes to return to your vehicle. As you approach the road, your phone vibrates to indicate you are too close to the road to take a safe shot.

Technology Overview

There are several technological aspects to this application which I will address in the context of the example scenario.

Network Availability

The primary concern with this application is the presence of a data connection. The majority of the population is covered by cellular data services, but hunting often occurs in unpopulated areas with unreliable or no coverage [1].

The application must be able to cache information retrieved when the network is available and use that data when no coverage is present. Even where coverage exists, it may not provide enough bandwidth to transfer the information in real time. This requires sufficient storage space on the mobile device, as well as analysis to know what data may be needed during offline operation.

Higher-speed data connections are often available near roadways and degrade as you move into more obstructed areas. Cached data is still used when only a slower network, like a 1X connection, is available; volatile data such as the current weather forecast will still update over the slower link. Data requiring higher bandwidth, such as maps, is updated and cached only when a higher-speed connection like a 3G network is present.

In the example, the destination is already known before setting out. The application should retrieve the relevant data for the area within a user specified radius of this location. This will enable the application to function using the cached information for the area if no data connection is present.

The actual caching can be accomplished with the web storage functionality in HTML5. The localStorage attribute allows the creation of local storage objects which hold the information [2]. The application first checks whether the necessary data is available in a local storage object before retrieving it; local data is used if available, and new data may be fetched and stored if a fast enough connection exists.
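A model of that cache-first lookup, combined with the bandwidth gating described above (in the real app this would be JavaScript against window.localStorage; the class, threshold, and fetch hook here are illustrative):

```python
class OfflineCache:
    """Cache-first lookup: serve stored data when present, and refresh
    it only when the link is fast enough. Models the localStorage
    check-then-fetch pattern from the text."""
    def __init__(self, fetch, min_kbps_to_refresh=500):
        self.store = {}               # stands in for localStorage
        self.fetch = fetch            # e.g. an HTTP GET of map tiles
        self.min_kbps = min_kbps_to_refresh

    def get(self, key, link_kbps=0):
        """Refresh over a fast link; otherwise fall back to the cache."""
        if link_kbps >= self.min_kbps:
            self.store[key] = self.fetch(key)
        return self.store.get(key)    # None if never cached
```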

Additionally, the HTML5 application cache allows the specification of a manifest file, which lets the application specify how and when different portions of the data should be downloaded [3]. Specific pages and resources can be listed in this file so they are cached immediately after they are first downloaded; this is ideal for static resources such as JavaScript, CSS, and image files. The manifest can also mark resources that are only accessible when the network is available, along with a fallback for when it is not, so the application gracefully handles offline operation.

Location Determination

Both GPS and cellular-tower-based positioning can be used together to determine the user’s location. Cellular phone coverage is often available in rural areas where data coverage is not, and GPS alone still provides a sufficiently accurate location even where cellular coverage is absent.

This information is retrieved through the Geolocation API, which provides a transparent method for obtaining the current location. This standard API is available on most current mobile devices and is implemented by many applications. It uses multiple factors, including but not limited to those listed above, to determine the user’s current location [4]. The application does not need to calculate the location itself; however, it does need to trust that the location provided by the API is accurate.

Mapping Information

The map functionality is a primary aspect of this application. General map data is readily available through services such as the Google Maps API, which is used to display the map and provide interaction. This creates a familiar interface for the user without the need to develop and maintain a large amount of native code.

The maps are loaded and managed through a JavaScript API with many methods and properties for controlling the map’s appearance and operation. Providing this API with the latitude and longitude retrieved from the Geolocation API allows it to display the user’s current location [5].

Geographical Information Systems (GIS)

Utilizing public GIS information makes it possible to alert the user about public land and property boundaries. The Wisconsin Department of Natural Resources (DNR) provides GIS files covering state-owned land, managed forest crop land, and other land accessible to the general public [6]. They are available through FTP, which is sufficient for this application since the files would be cached and only updated when a newer file is available. The information can be displayed on the map using Keyhole Markup Language (KML) layers in the Google Maps API [5].

Other GIS information such as property line locations and owner data is maintained at the county level. General availability of this data varies by county. Ideally each county would publish their GIS files to a single location for the state where the information can be easily retrieved.

Even without the county level information, the application would still be able to alert the user about their location relative to the public land.
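A sketch of how a parcel boundary, once parsed out of a GIS layer, could drive the property-line alert: the standard ray-casting test tells the app whether the hunter's coordinates are still inside the polygon. The function name and coordinates are illustrative.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test. polygon is a list of (x, y)
    vertices, e.g. a parcel boundary parsed from a KML layer."""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges that a ray cast to the right of the point crosses.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
    return hit
```

The approach alert would simply run this test against a slightly shrunken copy of the boundary, warning before the hunter actually crosses the line.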

Hunting Regulations

The hunting regulations are published and distributed by the DNR. They are currently only available as PDF documents, which the application can download and cache so the user always has them available [6]. This format provides a version that the hunter can easily read and search.

However, these documents are difficult to process directly to generate alerts. Converting the files to a structured format that is easily parsed is necessary; one possible solution is to publish them in an XML format. The application can download and cache these files on the device, making them readily accessible regardless of network availability, and use them to generate alerts automatically for the opening and closing hunting hours.
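As a sketch of such a structured format, here is a hypothetical XML schema and parser. The element and attribute names are invented for illustration; the DNR publishes no such file today.

```python
import xml.etree.ElementTree as ET

# Invented schema: one <season> per species, with legal shooting hours.
REGS_XML = """
<regulations unit="shotgun-only">
  <season species="deer" opens="2012-11-17" closes="2012-11-25">
    <hours open="06:30" close="17:15"/>
  </season>
</regulations>
"""

def legal_hours(xml_text, species):
    """Return the (open, close) shooting hours for one species,
    or None if the regulations don't list it."""
    root = ET.fromstring(xml_text)
    for season in root.iter("season"):
        if season.get("species") == species:
            hours = season.find("hours")
            return hours.get("open"), hours.get("close")
    return None
```

With the file cached locally, the app can schedule the opening- and closing-time alerts entirely offline.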

Future Work

This application could be expanded to other outdoor sports. Many of the aspects it addresses would also apply to fishing and snowmobiling, provided their rules and regulations were available in a manner similar to the hunting regulations. Other data, such as snowmobile trail conditions and lake-specific conditions, could also be beneficial.


[1] Verizon, “Network Facts,” Verizon, October 2012. [Online]. Available: http://aboutus.verizonwireless.com/bestnetwork/network_facts.html. [Accessed 13 November 2012].
[2] I. Hickson, “Web Storage,” W3C, 8 December 2011. [Online]. Available: http://www.w3.org/TR/webstorage/#the-localstorage-attribute.
[3] W3Schools, “HTML5 Application Cache,” 2012. [Online]. Available: http://www.w3schools.com/html/html5_app_cache.asp.
[4] A. Popescu, “Geolocation API Specification,” W3C, May 2012. [Online]. Available: http://dev.w3.org/geo/api/spec-source.html.
[5] Google Inc., “Google Maps Javascript API V3 Reference,” Google Inc., 18 November 2012. [Online]. Available: https://developers.google.com/maps/documentation/javascript/reference.
[6] Wisconsin Department of Natural Resources, “Hunting Regulations,” 7 December 2012. [Online]. Available: http://dnr.wi.gov/topic/hunt/regulations.html.

GluMon: A Diabetic Monitor Incorporating Environmental Factors December 11, 2012

Posted by Drew Williams in Mobility.

Diabetes is one of the most prevalent chronic diseases, with over 25 million people in the United States alone living with some form of it [1].  The illness comes in several forms; however, it consistently renders a person unable to make, or unable to process, proper amounts of insulin.  Although great efforts have been made to find a cure for this chronic illness, there is not yet a solid solution – leaving these millions to find ways of managing diabetes throughout their lives.

The most common method of living with diabetes involves a diabetic monitor, which uses a blood sample to assess current blood sugar levels.  Levels that are too high are balanced with a shot of insulin; levels that are too low require the diabetic to eat a quick snack to boost their blood sugar.  Although strides have been made toward monitors that do not require painful finger pricks to obtain blood samples [2], most current monitors still require them.  This method of assessment, in addition to the dosing that must take place, can place stress on the diabetic – perhaps one of the reasons adolescent diabetics are watched for depression [3].  Furthermore, external conditions, such as recent exercise and current trends, are not taken into account by finger-prick monitors, or even by most continuous glucose monitors.  Sudden drops or spikes can only be recognized by the diabetic feeling ill and taking another reading – there is no method of assessing, by either context or constant readings, whether a diabetic is approaching a dangerous sugar level.

I propose the creation of a smarter, smaller, more efficient monitor, which I shall tentatively refer to as GluMon.  GluMon is a diabetic dosing and monitoring system that takes advantage of current forays into minimally invasive monitoring methods while tracking a diabetic’s current activity level, the current time, blood glucose readings, and any prominent trends.  The cornerstone of GluMon is a modified Glucowatch – a minimally invasive monitor that works by using a “low electric current to pull glucose through the skin” [4].  Stripped of its visual display and fitted with an accelerometer for determining activity levels and a Bluetooth module, the GluMon Glucowatch is coupled with a modified dosing pen carrying a Bluetooth sensor, plus an application on the user’s phone that pairs with the watch.  As most people bring their phone with them everywhere, pairing a monitor with the processing power of a smartphone is a perfect fit.

Such a monitor would not only eliminate the need for constant pricking, thanks to noninvasive readings, but also let the user receive alerts (or have alerts sent to friends or family members) if levels trend toward or reach dangerous highs or lows.  Exercise and time of day are considered along with current levels, as exercise especially can have sudden effects on blood glucose [5], and stable levels after a workout may not remain stable.  Furthermore, with a small sensor active on the paired dosing pen, if the monitor reads a series of doses and/or trends indicating the user may have forgotten a dose or overdosed, it can again alert the user.  My hope is that such a device will help prevent hypoglycemia and hyperglycemia, and allow diabetics to live longer, happier lives while we continue to search for a cure for the disease.
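The trend logic such a monitor might run can be sketched simply: check the latest reading against range limits, then the slope of recent readings against a rate limit. Every threshold below is a placeholder for illustration, not clinical guidance.

```python
def trend_alert(readings, low=70, high=180, slope_limit=2.0):
    """Classify a series of timestamped glucose readings
    [(minutes, mg_dl), ...]. Thresholds are illustrative only."""
    t0, g0 = readings[0]
    t1, g1 = readings[-1]
    if g1 < low:
        return "low"
    if g1 > high:
        return "high"
    slope = (g1 - g0) / (t1 - t0)   # mg/dL per minute
    if slope <= -slope_limit:
        return "falling fast"        # e.g. shortly after exercise
    if slope >= slope_limit:
        return "rising fast"
    return "ok"
```

A "falling fast" result after an accelerometer-detected workout is exactly the case the text warns about: levels that look stable now but won't stay that way.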


Easy Shopping – A Smart Shopping Assistant December 10, 2012

Posted by Sapna Sumanth in Mobility.


Shopping, which used to be a fun activity, has become a hassle: time consuming and stressful, both physically and mentally. People need to time their trips to avoid long lines and chaos at the supermarkets. Even though these stores have plenty of checkout counters, fewer than half are typically active at any given time, which makes checkout a painful activity.
The application proposed here is called “Easy Shopping”. Easy Shopping will be available on all smart devices and offers a variety of features making shopping a very pleasant experience. It will include a list of stores like Walmart, Target, Sendiks, Pick & Save, etc., and the user can choose which store he/she would like to shop at and use its features.

Application Features

When the user starts the Easy Shopping application, he/she is authenticated and directed to their account. Once authenticated, the user can access the following set of features. There is also a settings view, accessible through the smartphone settings, where the user can save their credentials, credit card information and other profile information.
1. Shopping lists
Users can create shopping lists for the retail chains with which Easy Shopping is integrated, and assign friendly names to identify those lists. A list essentially contains all the items the user wants to buy. Users can search for products, filter them by category, and view product info, manufacturer info, product reviews and availability before actually adding items to the list.
2. Store navigation using store maps
Once the user is at a store, they can open the app. The app recognizes the user's location and brings up the appropriate shopping list. If the user has multiple lists for the same retail chain, the app shows them all and lets the user select one. To locate a product, the user simply taps the “navigate to” button, which visually guides them to the correct aisle. This feature will be especially useful for physically challenged shoppers and also speeds up shopping by eliminating search time and effort.
Whenever a store's layout changes, the supermarket will need to send the updated information to the app. For locating a particular item, the app can also use crowd-sourced information, where other users report item locations.
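The crowd-sourced fallback could work roughly like the sketch below: the retailer's layout feed takes priority, and a simple majority vote settles conflicting user reports. All names and data shapes here are assumptions for illustration.

```python
# Hypothetical sketch of crowd-sourced product location: prefer the
# retailer's layout feed, else take the aisle most shoppers reported.

from collections import Counter

def locate_product(product_id, store_map, user_reports):
    """store_map: dict product_id -> aisle from the retailer's layout feed.
    user_reports: list of (product_id, aisle) tuples from other shoppers."""
    if product_id in store_map:
        return store_map[product_id]
    aisles = [aisle for pid, aisle in user_reports if pid == product_id]
    if not aisles:
        return None  # nobody has located this item yet
    # Majority vote over shopper reports.
    return Counter(aisles).most_common(1)[0][0]
```

A production version would also age out old reports after a layout change, but the priority order (official map first, crowd second) is the core idea.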
3. Self-checkout using your smart device
This feature allows users to scan items with their smartphone cameras before placing them in the cart. The user can modify (update/delete) the cart any time before checkout.
When they have gathered all their items, they are ready to check out. Tapping the pay button in the Easy Shopping app pays for everything in the cart using the credit card information configured earlier. This is very similar to the self-checkout counters found in most stores today.
4. Electronic Receipts
Once the user checks out and taps the pay button, the app generates an electronic receipt. The receipt is stored permanently on the user's smart device and on the server, and can be viewed or deleted at any time through the Electronic Receipts feature.
This feature is also very useful if the user wants to return items or request a price adjustment (i.e., if the price of a purchased item drops, the customer can get a price match within a certain period from the date of purchase) at the store; they can easily pull up the receipt. Eliminating paper receipts also reduces operational costs for retail chains.
5. Coupon passbook
The coupon passbook is a collection of all the user's coupons for the integrated stores. The user can also add coupons to the passbook, either by taking a photo of a coupon or by manually entering its code for the relevant retail chain. The app automatically purges a coupon from the passbook when it expires or is used.
As the user scans an item, any matching coupon in the passbook is automatically applied; invalid (expired or used) coupons are not. This way users never miss a coupon, never have to manage them by hand, and save more money.
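The scan-time coupon check might look like the following sketch; the `Coupon` fields and the flat dollar discount are illustrative assumptions, not the app's actual data model.

```python
# Illustrative sketch of the scan-time coupon check described above.

from dataclasses import dataclass
from datetime import date

@dataclass
class Coupon:
    product_id: str
    discount: float          # flat discount in dollars (an assumption)
    expires: date
    used: bool = False

def apply_coupons(product_id, price, passbook, today):
    """Apply the first valid coupon for the scanned item and mark it used.
    Expired or already-used coupons are skipped (and could be purged)."""
    for coupon in passbook:
        if (coupon.product_id == product_id
                and not coupon.used
                and coupon.expires >= today):
            coupon.used = True
            return max(price - coupon.discount, 0.0)
    return price
```

Marking the coupon as used at scan time is what guarantees it cannot be applied twice, which matches the purge-on-use behavior described above.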
6. Product reviews integrated into app
This feature allows the user to take advantage of product reviews integrated into the app. If the user wants to try a product, he/she can tap the reviews button in the shopping list and read other users' experiences. They can also share their own experience with the product.
7. Voice assistant
The voice assistant allows users to control the app through spoken commands and hear spoken responses. This will be particularly useful for visually impaired users.
8. Expenditure analytics
Expenditure analytics will track users' spending and provide a visual representation of how much they are spending, broken down by category. This will help users budget their finances.
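The category breakdown behind such a chart is a simple aggregation, sketched here under an assumed `(category, amount)` line-item shape pulled from the electronic receipts:

```python
# Sketch of the expenditure-analytics aggregation: group receipt line
# items by category and total them. The data shape is an assumption.

from collections import defaultdict

def spend_by_category(line_items):
    """line_items: iterable of (category, amount) pairs from e-receipts.
    Returns a dict of category -> total, ready to feed a chart."""
    totals = defaultdict(float)
    for category, amount in line_items:
        totals[category] += amount
    return dict(totals)
```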


Benefits

1. It increases the number of customers at the stores listed in the Easy Shopping application, thanks to the convenience it offers their shoppers, thereby increasing revenue for these stores.
2. It makes shopping more efficient in both time and effort.

Business Model

The Easy Shopping app will be backed by a SaaS e-commerce platform that integrates (via XML, EDI, flat files, web services, etc.) with big retail chains like Walmart, Target, Sendiks, and Pick & Save for data such as product information, store information, and promotions. This data will be stored or pulled depending on the type of integration and delivered to customers. The retail chains should see increased volume and sales as well as reduced operational costs from features like self-checkout and electronic receipts; in return, they will pay Easy Shopping a percentage (x%) per transaction. The app itself will be a free download on all major platforms, including iOS, Android, and Windows Phone.


From a technology perspective, all features will be exposed as RESTful web services, implemented with Java web services or .NET WCF. These services can be hosted on a cloud platform such as Amazon EC2 or Microsoft Azure to achieve scalability, performance, reliability, and security.
The apps for the different platforms will be built with their native tools: Xcode for iOS, Java for Android, .NET for Windows Phone, and so on. Each app consumes the web services to retrieve data for its users.

System Design

As per the figure, the user interacts with the smart device, and the smart device in turn interacts with the application (web service) hosted on a cloud provider such as Amazon EC2 by sending a service request; the application responds with the requested information. PODs are the building blocks of a cloud data center: each pod contains an array of servers, such as app servers, file servers, database servers, batch servers, and search servers. Application code runs on the app servers, files live on the file servers, and the database resides on the database servers. Batch processing, such as nightly product loads and maintenance jobs, runs on the batch servers, while the search servers host the search engine that powers the searches supported by the Easy Shopping app.


Autonomous Driving Vehicle Network December 10, 2012

Posted by rzmuedu in Mobility.
add a comment

I live about 60 minutes away from work and commute on the highway every day. I have often told friends that I wish cars could drive themselves safely and efficiently. I'm taking this opportunity to design my version of an autonomous driving system.

Autonomous driving really includes two big pieces: the car itself and the driving network. The reason the system involves both is that a single autonomous car doesn't solve the problem; too many variables remain in the environment and overcomplicate the driving system. For example, suppose your self-driving car is cruising on the highway and another car suddenly cuts you off. Even a human behind the wheel has a hard time coping with a situation like that, and I believe a well-trained human nervous system reacts faster than a complicated self-driving system. In such a situation the self-driving system sees sudden changes across many inputs, such as range sensors and camera views; it will brake hard based on the speed and range combination, and it won't be a happy ride for the passenger. However, if all cars on the road are self-driving and can talk to each other, not only does the sudden cut-off behavior disappear, but traffic runs much more smoothly, just like in the movie “I, Robot.” We can all agree that Will Smith's driving behavior in that film is dangerous and unacceptable.

Extensive experiments have been done on autonomous driving systems; Google, for example, recently received a permit for self-driving car road tests in Nevada. A self-driving car is therefore a proven concept, not a dream. From what I have learned, though, these cars usually run at lower speeds to be safe, because higher speeds mean all computation and reaction must happen in less time. Given the complexity of the system, that is very challenging.

At a high level, the autonomous vehicle should include the following subsystems: range sensing, steering control, brake control, acceleration control, visual sensing, and a “brain.” The design mimics a human being. The range and visual sensing play the role of our eyes, which see what is going on around the vehicle and judge how far away objects are; eyes and brain work together to derive further information such as speed and time to impact. Our arms and hands take care of steering, and our feet handle the brake and accelerator. The self-driving car's subsystems will control the car in just the same way.

The range-sensing system needs to cover the front, back, both sides, and the underside of the car, and constantly feed this information to the control system. Front and back range sensing determines whether the car is about to run into something while moving forward or backward. Back range sensing can also act as rear-end collision avoidance: if a car is about to hit you from behind and the rear sensor picks this up, the control system can steer out of the way to avoid a collision that might otherwise cause fatal injury to the passengers. Side range sensing guides lane changes. The underside sensor helps avoid driving over obstacles high enough to damage the drivetrain.
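The time-to-impact figure the “brain” derives from successive range readings can be illustrated with a toy calculation; the sample interval and braking threshold below are assumptions, not real control-system parameters.

```python
# Toy sketch: estimate time to impact from two successive range readings.

def time_to_impact(prev_range_m, curr_range_m, dt_s):
    """Seconds until collision, from two range samples dt_s apart.
    Returns None when the gap is constant or growing (no closing speed)."""
    closing_speed = (prev_range_m - curr_range_m) / dt_s  # m/s toward us
    if closing_speed <= 0:
        return None
    return curr_range_m / closing_speed

def should_brake(prev_range_m, curr_range_m, dt_s, threshold_s=2.0):
    tti = time_to_impact(prev_range_m, curr_range_m, dt_s)
    return tti is not None and tti < threshold_s
```

A real controller would filter noisy readings and fuse them with camera data, but this shows why higher speeds compress the available reaction time: the same range shrinks the time-to-impact proportionally to closing speed.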

The steering, braking, and acceleration control subsystems execute commands sent from the control system. These three systems work together to avoid dangerous driving behavior, such as making a sharp turn at high speed or anything else that could damage the car or harm the passengers; such rules and restrictions are implemented in the control system.

The visual subsystem is very important. It can contain not only optical sensors but also IR sensors to capture visual information when lighting conditions are poor, such as at night or in fog or rain. Together these sensors provide a 360-degree view to the control system. Combined with good image-processing and recognition software, the control system can extract useful visual information such as road signs and lane markings. However, image processing is computationally heavy and costs both time and power, so the software must be balanced carefully. With image recognition, the system can also pick up landmarks to locate itself more precisely alongside GPS. This is especially important in big cities, where tall buildings block or reflect GPS signals, leaving them very unreliable.

Once the self-driving system is complete and proven to work, the network between cars can be established. This requires adding a few more components to the car.

One network is GPS. Using the GPS satellites, the car gets a very good idea of where it is in open terrain; combined with digital map data such as Google Maps, it can place itself and plan a route to the destination. The cellular network can then supply additional traffic and weather information. With modern cellular networks it is easy to tell the control system about a traffic jam on the route, road construction, or bad weather ahead, and the control system can plan a better route based on the user's preference for faster time or shorter distance. Next is the ad-hoc network between self-driving cars. Good wireless communication between nearby cars forms an ad-hoc network over which they can share useful information such as traveling speed, destination, current state, and next move. This eliminates much of the uncertainty in the environment, since nearby neighbors have shared all their information with you. For example, the car on your left needs to take the next exit and must move into your lane. Your car, which has been driving at constant speed while maintaining a safe distance from the car ahead, can slow down slightly to create enough space for the exiting car to complete its lane change. Since everything is known, there are no surprises like sudden cut-offs; passengers experience a smoother ride, and traffic jams are less likely to form.
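The cooperative gap-making step could be sketched as follows. The message format and the two-second-gap rule are assumptions for illustration, not a real V2V protocol:

```python
# Toy sketch of the cooperative lane-change idea: a car receives a
# neighbor's broadcast intent and decides whether to ease off to open a
# gap. Message fields and the 2-second-gap rule are assumptions.

def adjust_speed(my_speed, gap_to_leader_m, neighbor_msg,
                 min_gap_s=2.0, ease_off=2.0):
    """Return the new target speed (m/s). If a neighbor announced a merge
    into our lane and the current gap is under min_gap_s seconds of
    travel, slow by ease_off m/s to create space; otherwise hold speed."""
    if neighbor_msg.get("next_move") == "merge_into_your_lane":
        if gap_to_leader_m < my_speed * min_gap_s:
            return my_speed - ease_off
    return my_speed
```

The point of the sketch is that the decision is made before the merging car moves, because its intent was broadcast over the ad-hoc network; without that broadcast, the same maneuver looks like a sudden cut-off.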

The benefits of the system are obvious: less human involvement means less chance of human error. How many times have we heard of someone stepping on the gas when they meant to brake? Missed highway exits would never happen, reckless driving would disappear from the road, and we would see higher fuel efficiency and fewer traffic jams. The benefits are countless.

Of course, testing such a system must be comprehensive and continuous. Besides cars, there are pedestrians, cyclists, and skaters on the street. They don't carry the deterministic equipment installed in cars and can't share their information with autonomous vehicles, so they remain the variables in the system; any harm done to them by autonomous cars could be fatal and would put the whole system in jeopardy. Regulations would need to change for this vision to come true, possibly including a mandate that only autonomous cars may use the streets.



Mobility Challenges and Opportunities December 10, 2012

Posted by bkrugman in Mobility.

Application: Intranet Bidding System

Looking at the topic of Mobility Challenges and Opportunities, I can think of many different areas and applications that fall under that umbrella. Health care applications, smart cars, and other internet-related applications rely on a mobile device communicating with a server at an unknown location on the web. I decided to focus instead on a system that leverages mobility to achieve its goals, but does so securely and at short range. The application I built the concept for is one I could see benefiting organizations that do not have a large amount of capital to spend but would like to maximize fund-raising auctions at an organized event.


This Mobile Bidding System will provide a way to accept bids for items being auctioned at a live auction. The system will allow anyone who has the mobile application, or has been provided a mobile device configured for it, to monitor and place bids on items at the same time as the participants attending the live auction in person.


Opportunities

  • Local secure auction management
  • Low cost setup and management
  • Stronger management of fund-raising auctions (ensuring minimum bids are met)
  • Ability to expand the potential bidder base to attendees who might be moving around the event
  • Potential increase in bids and item prices through the larger bidder base
  • More data to help make decisions about auction items in the future

Challenges

  • Deployment to mobile devices
  • Authenticating users and devices
  • Management of devices
  • Syncing and receiving bids
  • Management of live auction attendees and online attendees

Technology Needed

  • Software
    • Internet Information Services
    • Web services
  • Hardware
    • Personal computer (run as local web server)
    • Mobile devices
      • Android compatible tablets
      • Windows 8 tablets
      • Windows 7 tablets
    • Wireless router


Technology Available

The opportunities listed above come from attending numerous live-auction fund-raising events, at which I noticed two main issues. First, even though the organization had a minimum bid set, the auctioneer did not always notice it and would sometimes sell an item for less than the organization needed, forcing the organization to make up the difference between the auction price and the required minimum. Rather than making money, the organization would actually lose money. Second, many attendees were not always able to make it to the live auction: exhibitors needed to stay at their booths, and some attendees would be negotiating a deal at a booth. With these potential bidders absent, an item's price might end up lower than it could have been, because fewer people were bidding on it. I have seen instances where someone working a booth offered to buy an auction item from the winner after the live auction because they wanted it and were willing to pay more than the item sold for; that is another way the organization loses potential funds. All of these opportunities give the organization a reason to implement a system that increases potential revenue with minimal up-front cost.

Thinking about the challenges a system like this presents, I see two main issues that need to be resolved. The first is application distribution and mobile device management. This is a large issue because creating a locked-down system increases the implementation cost, while an open one creates user-management issues, since the organization only wants approved bidders placing bids. The second main challenge is placing bids and receiving updates. With an auction conducted live and online at the same time, there must be a way to deliver live updates and inform attendees when a price increases. From a network perspective, I think this issue is easier to resolve: by ensuring that all the mobile devices and the auctioneer are joined to a Local Area Network running the auction software, the different users can receive real-time updates. The diagram below shows the type of network that would be created to handle this. By running a local network, the organization ensures there are no lengthy round trips between the web service and the mobile devices.
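The server-side rule that keeps the synced devices consistent while enforcing the organization's minimum bid might look like this sketch; the dict-based item store and field names are assumptions for illustration.

```python
# Sketch of the bid-acceptance rule on the local auction server: it
# enforces the organization's minimum and rejects stale bids so that all
# synced devices converge on the same high bid.

def place_bid(item, bidder_id, amount):
    """item: dict with 'minimum', 'high_bid', 'high_bidder'.
    Returns True and updates the item if the bid is accepted."""
    if amount < item["minimum"]:
        return False            # never sell below the organization's floor
    if item["high_bid"] is not None and amount <= item["high_bid"]:
        return False            # stale bid: someone already bid higher
    item["high_bid"] = amount
    item["high_bidder"] = bidder_id
    return True
```

Because every device submits through this one rule on the LAN server, the auctioneer can never accidentally close an item below the minimum, which addresses the first issue observed at live events.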

Network Image

The network described here can be created at low cost because it leverages hardware most organizations already own. To keep hardware costs down further, at some increase in operational cost, the organization could let attendees use mobile devices they bring themselves. This could prove very beneficial, since more and more people carry smartphones or tablets.


The system discussed above offers a different way to think about mobile computing and how to leverage it. Rather than focusing on how mobile computing can be used by large organizations with complex infrastructure, I have provided an example of how an organization can achieve positive results using mobile computing on a small, localized scale.

Mobility and the VANET Opportunity December 10, 2012

Posted by kristinamensch in Mobility.

Today people are more connected than ever. You may not realize it, but “mobile devices have reached more people in many developing countries than power grids, road systems, water works, or fiber optic networks.” [1] This, in my opinion, is an amazing fact. Technology reaches far beyond what most people realize; it connects people more remote than roads can reach. Mobile computing is not limited to smartphone technology, but encompasses a wide range of ideas and incorporates many fields.

Mobile and ubiquitous computing are concerned with the integration of technology into the world around us, the standards and protocols that should be used to accomplish this, and the applications that can be developed. Technologies in these computing spaces include augmented reality, tangible interfaces, wearable computers, and vehicular networks, to name just a few. The future possibilities of Vehicular Ad Hoc Networks, or VANETs, are particularly intriguing to me.

VANET Overview

Vehicular Ad Hoc Networks, or VANETs, are networks in which vehicles and roadside infrastructure nodes can send, receive, and route communications. VANET-capable vehicles contain on-board units (OBUs) that enable both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. VANET roadside units (RSUs) provide internet access and message forwarding to the vehicles within their range.

There are three general classifications of VANET applications: active road safety, traffic efficiency and management, and infotainment. Active road safety applications seek to decrease traffic accidents and injuries by providing collision avoidance and hazardous condition information to drivers. Traffic efficiency and management applications provide drivers with updated road and traffic maps and messages. Infotainment applications can provide drivers with information about local points of interest, internet connectivity, and other services.

The nature of VANET requires that all vehicles and infrastructure follow certain standards and protocols to ensure that all makes and models of vehicles can communicate. In the USA the networking standards for VANETs have been defined by the IEEE 802.11p and IEEE 1609 protocols. The FCC has also set aside the 5.9GHz band for VANET communications.

Application Opportunity

When my stay-at-home parenting days ended last spring, my family was ushered into a new world in a variety of ways. One of the most frustrating aspects of two full-time working parents was coordinating daycare pick-up. The difference in our workday hours and the lengths of our commutes often meant getting home at nearly the same time, and figuring out who was closer to home and could pick up the children first was an exercise that usually ended with us passing each other in the daycare parking lot. To solve this problem, I have decided to design an application that leverages a previously implemented and moderately saturated VANET.

Example Scenario:

John and Jane are a married couple with children who work in opposite directions from the family home. While at work their children attend a daycare center close to their home. John and Jane work similar hours and because John’s commute is shorter than Jane’s he usually picks up the children on his way home. Today, while preparing to leave the office, John was called into his supervisor’s office to answer a ‘quick question’ and was detained for an extra half hour. Jane ends her work day and begins her commute home as normal. When she passes the final overpass before exiting the freeway for home her vehicle computer breaks into the music she is listening to and tells her that John has not yet passed this point in his afternoon commute and she will be the first parent to pass the daycare center on the way home.

‘Would you like to pick up the children?’ the system asks.

‘Yes,’ she responds.

She has accepted the pickup task from the application. When the children are safely picked up a notification is sent to John’s vehicle’s application to let him know there is no need to stop. Jane will also have the option of delaying the pickup task if she decides to make a quick stop at the grocery store or gas station before picking up the children. Jane can manage this stop and add this task to her route by interacting with the application’s user interface that runs on her in-vehicle computer.
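Underneath the scenario, the application's core decision is an estimated-time-of-arrival comparison. Here is a deliberately simplified sketch, assuming straight-line ETAs from VANET-reported distance and speed; real traffic-aware routing would be far richer.

```python
# Simplified sketch of the hypothetical Pick-Up Manager's core decision:
# estimate each parent's time to the daycare from VANET-derived position
# and speed, and offer the task to whoever will pass it first.

def eta_minutes(distance_km, avg_speed_kmh):
    """Naive straight-line ETA; real routing would use traffic data."""
    return distance_km / avg_speed_kmh * 60

def choose_parent(parents):
    """parents: dict name -> (km to daycare, current avg speed in km/h).
    Returns the name with the smallest estimated arrival time."""
    return min(parents, key=lambda p: eta_minutes(*parents[p]))
```

In the scenario above, John being detained at work simply means his VANET trace never reaches the overpass checkpoint, so Jane's ETA wins and the prompt goes to her.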

System Requirements:

  • VANET-equipped vehicles
  • Area-wide, moderately saturated VANET infrastructure
  • Private sector application
  • Cloud-based application server that provides
    • Location aggregation
    • Traffic monitoring
    • Time-to-destination calculations
    • Notifications to participants


The feasibility of this application depends on the infrastructure and widespread adoption of VANETs, so it is pertinent to discuss the general challenges facing the successful implementation of these networks. Many of these challenges are outlined in [2], along with potential remedies. The biggest challenge in deploying a vehicular network is privacy and security: we must balance the need to authenticate messages for security within the network against the necessity of personal anonymity. If we can accurately authenticate users, messages, and locations while maintaining driver anonymity, we can mitigate potential weaknesses and points of attack on vehicular networks and the applications that use them. For vehicle authentication, the anonymization service described in [2] presents a constantly changing public identification key that preserves driver and vehicle anonymity. The vehicle ‘entanglement’ solution outlined there is also a potential way to obtain reliable and accurate positioning data, which can be used for message and location authentication.

Another challenge VANET systems face is how best to manage the routing and delivery of data packets [3]. Managing and tracking the IP addresses of highly mobile vehicles, and locating those addresses within the network at any given time, is a complex process. [3] suggests that address management be done by geographical region. Position-based routing, or geo-routing, is a potential answer to the problem of locating the intended receiver of a packet: the protocol routes a data packet based on the last known location of the destination vehicle's IP address within the network. Another data challenge vehicular network applications face is congestion of the communication channels as private infotainment usage increases. The added latency and packet dropping caused by congestion cannot be tolerated on an overcrowded VANET that is also responsible for safety-critical applications and services such as collision avoidance; infotainment traffic must not impede the communications used by VANET safety applications. Potential solutions include reserving a specific portion of the allocated bandwidth for safety-critical communications, or equipping vehicles with two transponders, one to receive safety transmissions and the other for all other communications [4].
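The greedy step of position-based routing can be illustrated in a few lines. Real geo-routing protocols add recovery modes for the "stuck" case that this sketch merely detects; coordinates and node names are assumptions.

```python
# Toy sketch of greedy position-based (geo-) routing: forward the packet
# to the neighbor closest to the destination's last known position.

import math

def next_hop(my_pos, neighbors, dest_pos):
    """neighbors: dict node_id -> (x, y). Returns the neighbor id that
    makes progress toward dest_pos, or None if no neighbor is closer
    than we are (a local maximum where greedy forwarding gets stuck)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(neighbors[n], dest_pos),
               default=None)
    if best is None or dist(neighbors[best], dest_pos) >= dist(my_pos, dest_pos):
        return None
    return best
```

The appeal for VANETs is that this needs only the last known positions, not a full route table that would go stale the moment vehicles move.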

The proposed application also faces unique challenges. The Pick-Up Manager application requires a solid VANET infrastructure base and a moderate level of network adoption to provide good coverage within its local operating area. Without infrastructure resources and constant vehicle-to-vehicle communication, the application will not be able to determine location, speed, or traffic conditions with the granularity required to accurately estimate users' arrival times. Application data must be transported securely over the network to maintain user privacy, which requires encrypting the packets routed between the user vehicles and the application server. Further, data stored on the server must be secured to ensure the privacy of the users and the trustworthiness of the information.


VANET technologies are continuing to develop and evolve. Researchers in government, academia, and industry are working together to develop better communication protocols, networking architectures, and standards to move closer to the widespread deployment of vehicular networks. When the time comes for widespread deployment of vehicular networks I believe that infotainment applications, like the Day Care Pick-Up Manager, and the services that they provide will be the driving force behind adoption of this technology.  Not only can this application be utilized by parents to coordinate pick up in a daycare situation, it could potentially be expanded to be used by anybody trying to coordinate errands or meet-ups with another person or people. In my opinion infotainment applications and the public’s desire to be connected with them will drive the adoption and necessitate the nationwide creation and deployment of a successful vehicular network.

[1]  C. Z. Qiang, M. Yamamichi, V. Hausman and D. Altman, “Mobile Applications for the Health Sector,” December 2011. [Online]. Available: http://siteresources.worldbank.org/INFORMATIONANDCOMMUNICATIONANDTECHNOLOGIES/Resources/mHealth_report.pdf. [Accessed 10 December 2012].

[2]  B. Parno and A. Perrig, “Challenges in Securing Vehicular Networks,” 17 July 2008. [Online]. Available: http://conferences.sigcomm.org/hotnets/2005/papers/parno.pdf. [Accessed 28 November 2012].

[3]  G. Karagiannis, O. Altintas, E. Ekici, G. Heijenk, B. Jarupan, K. Lin and T. Weil, “Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions,” IEEE Communications Surveys & Tutorials, pp. 584-616, 2011.

[4]  H. T. Cheng et al., “Infotainment and road safety service support in vehicular networking: From a communication perspective,” Mechanical Systems and Signal Processing, 2010. doi:10.1016/j.ymssp.2010.11.009

Wearable Computing December 10, 2012

Posted by pvidosa in Mobility.


Project Glass from Google is a project focused on developing a wearable computer built into a pair of glasses.[1]  The idea behind it is that we can augment a user's reality by giving him context about the environment he is in.  The glasses would also provide much of the functionality already available through today's smartphones.

Other wearable computers are available that aim to provide augmented reality or simply serve as hands-free computers.  EyeTap is a device similar in appearance to Project Glass that provides augmented reality to the user.[2]  Olympus's MEG4.0 is a wearable display that shows screen content from your smartphone.[3]  Interest in this field keeps growing now that the technology is finally available to make such devices reasonably priced and usable.


I think there is a huge opportunity right now for someone to jump into the consumer wearable-computer market.  Users would have a camera at the ready whenever they wear the glasses, which means fewer lost special moments.  The user can also operate the device even when his hands are occupied.  It would provide more privacy than a phone: others can't simply look over your shoulder to see what you are doing.  The greatest aspect of the device, however, is that your smartphone experience would be available at all times; everything your phone can do, this can do too, and it is always in front of you.  There will be many new applications for a wearable device that just wouldn't be possible, or wouldn't make sense, on a smartphone.


There are many challenges in creating a device like Google Glass.  One of them is fitting a battery large enough to power the device through a normal day of use without making it extremely bulky.

Battery Power

How long the device can run on battery is a huge concern.  The device looks fairly compact and unobtrusive, but that doesn't fit well with long battery life.  It needs to power a camera, an LCD or OLED display, and a Bluetooth radio for communication.  A small display draws much less power than a smartphone screen simply because there are fewer pixels to drive; still, most smartphone users don't keep their screens on all day, because that would quickly drain the battery.

Google Glass is said to use Bluetooth as its primary data connection.  It will need to be paired with another device that has an active internet connection, such as a smartphone or personal computer.  The advantage is that Bluetooth uses much less power than Wi-Fi or a 4G data connection, because it is intended only for short-range communication.  However, a low-power connection doesn’t mean it won’t consume a lot of the device’s battery.  To provide context about the user’s surroundings, the device needs an active internet connection, and the device is also shown making phone calls and participating in video chat; video chat uses quite a bit of bandwidth and could quickly drain the battery.


Another major challenge for this project is how to actually control the device.  The first obvious solution is voice commands.  Voice commands are becoming fairly popular in smartphone operating systems, but there are many situations where you may not want to, or cannot, use them: you may be typing a message you want to keep private, or you may be in a location, such as a library, where you shouldn’t be talking.


Even with all those challenges, I still think Google Glass is a viable product; the advantages will outweigh the disadvantages at launch.  I believe I have solutions to some of the challenges the device will face, and I will discuss them here.

The battery challenge will be a very difficult one to overcome, but a few solutions can be considered.  The first option is to include a larger battery, which is unlikely because it would make the glasses too bulky.  Another is better power-management software that turns off sensors and other components when they are not in use, or when we can predict they will not be needed; this should be part of the solution because it adds little cost to the device.  A third option is to let the user buy an external battery that connects to the glasses and sits in a pocket.  That gives heavy users a way to extend battery life, but it definitely shouldn’t be necessary for average users.
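The power-management idea, turning components off once they have sat idle, can be sketched as a tiny policy layer.  This is purely illustrative: the component names and timeout values are my assumptions, not anything from a real Glass API.

```python
class PowerManager:
    """Toy power-management policy: switch off any component that has been
    idle longer than its timeout.  Names and timeouts are illustrative."""

    def __init__(self, idle_timeouts):
        self.idle_timeouts = idle_timeouts          # component -> allowed idle seconds
        self.last_used = {c: 0.0 for c in idle_timeouts}
        self.powered = {c: True for c in idle_timeouts}

    def touch(self, component, now):
        """Record that a component was just used (waking it if needed)."""
        self.last_used[component] = now
        self.powered[component] = True

    def tick(self, now):
        """Power down anything idle past its timeout; return the set still on."""
        for c, timeout in self.idle_timeouts.items():
            if self.powered[c] and now - self.last_used[c] > timeout:
                self.powered[c] = False
        return {c for c, on in self.powered.items() if on}

pm = PowerManager({"camera": 5, "display": 30, "bluetooth": 120})
pm.touch("camera", now=0)
pm.touch("display", now=0)
pm.touch("bluetooth", now=0)
print(pm.tick(now=60))   # camera and display time out -> {'bluetooth'}
```

The point of the sketch is that the policy itself is cheap software; the savings come from choosing sensible per-component timeouts.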

The display will likely only be a challenge for a short time.  Pixel densities have increased dramatically in the past few years; there are already phones with five-inch, 1920-by-1080 displays.[4]  There is not much to do in this area besides wait for higher-density panels, and the experience with these glasses will improve once they arrive.  A pixel density of around 430 ppi should be sufficient for a first iteration.  In that case, I would make the screen about one inch wide by half an inch tall, giving a resolution of about 430 by 215 pixels.
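The pixel-count arithmetic above is just density times physical size; a quick sketch:

```python
def resolution(ppi, width_in, height_in):
    """Pixel resolution of a display with the given density and physical size."""
    return round(ppi * width_in), round(ppi * height_in)

# The ~430 ppi, 1" x 0.5" panel suggested in the text:
print(resolution(430, 1.0, 0.5))   # (430, 215)
```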

The input device has a possibility for a very simple solution.  Since these glasses will likely be used with a data connection provided by a smartphone, I propose using the smartphone as a Bluetooth keyboard.  This would give the user extra control while in a situation where he shouldn’t be talking out loud or wants to use his device privately.  Also, there is another option that may be more viable in the future which is a glove that has a keyboard built into it.[5]  This has the advantage of not requiring the user to be looking at their phone to be using the glasses or to type a message.

The rest of the device is much more standard, as it will likely share many components with current smartphones.  The CPU and GPU do not need to be extremely high end; the focus should be low power consumption.  Since the display will be fairly low resolution, the GPU doesn’t need to be as powerful as those in today’s smartphones.

Since the device will be used in conjunction with a smartphone, it doesn’t need 3G or 4G modems.  Those components consume quite a bit of power in smartphones, so omitting them is another way to reduce power draw.  The glasses will have Wi-Fi and Bluetooth radios so they can communicate with the smartphone and make use of its data connection: Wi-Fi when the device needs higher bandwidth, and Bluetooth the rest of the time, since it consumes less power.[6]
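The radio-selection rule described here, Bluetooth by default and Wi-Fi only when bandwidth demands it, amounts to a one-line policy.  The cutoff below is an illustrative assumption, not a real specification:

```python
def pick_radio(required_kbps, bluetooth_limit_kbps=1000):
    """Prefer low-power Bluetooth; fall back to Wi-Fi only when the task
    needs more bandwidth than Bluetooth can comfortably carry.
    The 1000 kb/s cutoff is an illustrative assumption."""
    return "bluetooth" if required_kbps <= bluetooth_limit_kbps else "wifi"

print(pick_radio(64))      # voice call   -> bluetooth
print(pick_radio(4000))    # video chat   -> wifi
```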

The camera will be an integral part of the device, mounted somewhere on the front of the frame, probably near the display.  The reason is that the display sits in front of the dominant eye,[7] so when taking a picture, the image you want to capture is the one you see with that eye.  The camera will also come with an LED flash right next to it to illuminate low-light scenes somewhat.  I can also imagine this being a very useful head-mounted flashlight.

For hearing audio with the device, there are a few possibilities.  I believe built-in speakers somewhere near the ear would be very useful: you could hear the audio without the volume being loud enough to disturb others.  Alternatively, you could use one or two Bluetooth earbuds.  For voice commands, I would expect multiple microphones on the device so that it can cancel out environmental noise and better understand your commands.


From what is described here, I think many consumers will be excited about a device like this.  It will further integrate technology into our everyday lives and make our lives easier.  I believe this new platform will create many opportunities for new applications that are not possible or feasible on smartphones.  This is the next step in mobile computing.


[1] http://en.wikipedia.org/wiki/Project_Glass

[2] http://en.wikipedia.org/wiki/EyeTap

[3] http://www.wired.com/gadgetlab/2012/07/olympus-resurrects-wearable-display-initiative/

[4] http://crave.cnet.co.uk/mobiles/htc-butterfly-is-a-5-inch-1080p-phone-causing-a-storm-50009926/

[5] http://gauntletkeyboard.com/

[6] http://en.wikipedia.org/wiki/Bluetooth

[7] http://phandroid.com/2012/10/16/why-knowing-your-dominant-eye-will-be-important-for-project-glass/


Identification Documents—Mobile Technology Opportunities and Challenges December 10, 2012

Posted by kirbyr in Mobility.
add a comment


A trope in science fiction plots is that characters’ personal information is consolidated into one master identification document, which can be a physical document or an electronic device embedded in a person’s body.  This master ID has information on a person’s finances, medical history, and citizenship, and also tracks the person’s movements.  Our society is moving closer to the idea of a master ID—smartphones can now be used to check in for flights or act as a credit card.  However, the concept of location or movement tracking is absent.  College campuses present a great opportunity to enhance identification documents with mobile and location-tracking technology.

Imagine a system that uses RFID tags to track users’ locations as they move around a university campus.  Each “user” of a college campus (students, faculty, and staff) would have an RFID tag embedded in their campus ID.  Similar to the master IDs found in science fiction, campus ID badges already perform many functions: storing meal program information, acting as a debit card for campus purchases, checking out library books, and providing access to campus buildings.  Campus ID badges are routinely needed throughout the day, so most members of a campus community carry their badge at all times.  This makes ID badges the ideal place to embed an RFID tag in a campus environment.

RFID technology is at the point where the tags are small enough to embed easily into a standard ID badge the size of a state driver’s license.  The user may not even notice that the tag is incorporated into the badge.  In addition to the ID hardware, RFID readers and other sensors would be located in buildings, classrooms, and outdoor locations across the campus, such as near security phones.  These sensors would collect users’ location information and forward it to a master control unit.  The RFID readers could also be programmed to perform smart-building functions, such as opening a door or turning on a classroom light.


The primary concern for a system that tracks individuals’ geographical movements is privacy.  Many users will feel creeped out by such a system and will need to be reassured that their information will be kept private.  A recent example from the news is the Saudi Arabian government using tracking technology in passports to send automatic alerts to male guardians when female family members leave the country [4].

An identification and location tracking system will have to strive to keep individual users’ identities as private as possible.  Individual identity will be divulged to differing extents for different applications.  For example, a class roll call populated by reading students’ RFID tags in their ID badges would be available only to the teacher of the class, and the information would be encrypted in transit to protect against unauthorized access.  For other applications, individual identities would be protected, and that information could be accessed only by select university officials within specified parameters.  For example, during an emergency, officials could access the names of individuals within a building or room.  As always, federal student privacy rules (FERPA) will have to be followed.
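One common way to keep identities out of reader logs, sketched here with hypothetical names (the key, tag IDs, and roster are all made up), is to replace raw tag IDs with keyed hashes and let only authorized views resolve them back to names:

```python
import hashlib
import hmac

SECRET = b"campus-master-key"   # held only by the master control program (illustrative)

def pseudonym(tag_id):
    """Replace a raw RFID tag ID with a keyed hash so readers and logs
    never see the real badge number."""
    return hmac.new(SECRET, tag_id.encode(), hashlib.sha256).hexdigest()[:16]

def roll_call(reads, roster):
    """roster maps pseudonym -> student name; only the instructor's view
    holds this mapping and can resolve pseudonyms back to names."""
    return sorted(roster[p] for p in {pseudonym(t) for t in reads} if p in roster)

roster = {pseudonym("TAG-0001"): "Alice", pseudonym("TAG-0002"): "Bob"}
print(roll_call(["TAG-0001", "TAG-0001", "TAG-0002"], roster))   # ['Alice', 'Bob']
```

A design like this limits the skimming risk the next paragraph describes: a reader that only ever handles pseudonyms has nothing identifying to leak.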

Another privacy concern is that of unauthorized individuals intercepting the RFID signal to either gain information embedded in the RFID tag (skimming) or to track the individual (tracking) [3].  Skimming is not a huge issue, as the RFID tags will contain extremely limited information on the individual user; most information describing the user will be contained in the master control program and not on the RFID tags.  However, tracking is a valid privacy concern that will need to be addressed.

A second concern is cost.  Although individual RFID tags are cheap (about $0.10 apiece), installing RFID readers across an entire college campus is an expensive proposition: estimated costs are $9K per mounted building sensor and $75K for the master control program hardware and software [2].  Incorporating smart-building features into the system adds further cost.
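Using the per-unit figures quoted above, a back-of-the-envelope total is easy to script.  The building, sensor, and badge counts in the example call are hypothetical:

```python
def campus_rfid_cost(buildings, sensors_per_building, badges,
                     sensor_cost=9_000, control_cost=75_000, tag_cost=0.10):
    """Rough installation cost from the per-unit figures in the text [2]."""
    return (buildings * sensors_per_building * sensor_cost
            + control_cost + badges * tag_cost)

# e.g. 40 buildings, 4 mounted sensors each, 15,000 badge holders (assumed numbers)
print(campus_rfid_cost(40, 4, 15_000))   # 1516500.0
```

Even a mid-sized campus lands well over a million dollars, which is why the readers, not the tags, dominate the budget.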


Tracking location information with RFID for the population of a college campus can serve many purposes.  Instructors would no longer need to take classroom attendance, freeing more time for learning.  The system could incorporate intelligent-building features, with benefits including energy savings and security.  Emergency response would be improved, especially in high-threat situations such as a campus shooting: location information can help campus security plan a response by providing the number of individuals in an area, and their identities.  Finally, aggregated location information can be used to identify additional needs on campus, such as underused study areas or high demand for the gym.  This information can be used to reallocate space or to plan for future needs.


[1] An Introduction to RFID Technology, R. Want, Pervasive Computing, Jan-March 2006, Vol 5, Issue 1, pages 25-33, IEEE CS and ComSec

[2] What it Costs to do RFID Asset Tracking the right way the first time, P. Sweeney, Insider’s Blog from the RFID Experts, June 2011: http://blog.odintechnologies.com/bid/64996/What-it-costs-to-do-RFID-Asset-Tracking-the-right-way-the-first-time

[3] The U.S. Electronic Passport FAQ, accessed 11/2012, http://travel.state.gov/passport/passport_2788.html

[4] Uproar over Saudi Women’s ‘SMS Tracking,’ accessed 12/2012, http://www.bbc.co.uk/news/world-middle-east-20469486

Healthcare Mobility December 10, 2012

Posted by cgreigmu06 in Mobility.
add a comment

I would like to propose an opportunity for mobile computing in the current healthcare industry.  From a recent visit to the hospital, I noticed a few things that could be improved with the introduction of a few simple mobile computing devices.  For those who may not know, there are many levels of hospital staff (nurses, doctors, therapists, and technicians) who all work together toward one goal: making sure the patient is cared for and hopefully leaves the hospital in better shape than when they arrived.  Using mobile technology, we can improve the care the hospital staff provides.

As of right now, technology is a main part of hospital care, but it is also vastly underused [1].  Each area of the hospital is divided into units, and patients are assigned to them depending on the type of care they need.  A typical room in a unit (each unit has 5 to 25 rooms) is equipped with a desktop PC, plus another PC outside the room that can be used for monitoring while the patient is sleeping or family is visiting.  Monitors outside the room may also cover multiple rooms.  Staff can view their patients from anywhere in the unit as long as they have access to one of the PCs placed strategically around it.  A final set of PCs at the nurses’ station also helps in monitoring patients, but none of these devices talk to one another as the nurses move around the unit.

With all of these workstations available, it would be very beneficial to integrate them with sentient [2] and context-aware [3] computing.  The following examples describe in more detail how the system would work in the hospital environment.


When caring for a patient, many departments work together to treat the different ailments the patient may have.  With the help of a mobile computing device, nurses will have a better chance of staying on top of any situation.

  •  Situation – When a patient enters the hospital, labs are drawn to help determine the cause of the illness.  Occasionally, because of a communication breakdown, a lab result is lost, or a problem is detected but the information does not reach the nurse when it is needed.
  • Benefit – Each lab order should carry an RFID tag that tracks it and notifies the nurse or other staff members when a result is available or something has gone wrong.  These warnings can pop up on any mobile device or desktop PC, depending on where the nurse is.  Staff should also be able to track pending orders and send notifications to the lab department if a particular lab is not moving along properly.
  •  Outcome – Clearer lines of communication between the two areas should reduce the chance of problems anywhere in the workflow and help provide the best care to each patient who enters the hospital.
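The routing behaviour in the Benefit bullet, delivering a lab alert to whichever screen is near the assigned nurse, can be sketched in a few lines.  All identifiers here (terminal IDs, room labels, the order fields) are hypothetical:

```python
def route_alert(order, nurse_location, terminals):
    """Deliver a lab-order alert to a terminal at the assigned nurse's
    current location, falling back to the nurses' station.
    Locations are simple room/area labels; all names are illustrative."""
    message = f"Lab {order['lab_id']} for {order['patient']}: {order['status']}"
    for t in terminals:
        if t["location"] == nurse_location:
            return t["id"], message
    return "nurses-station", message

terminals = [{"id": "room-12-pc", "location": "room-12"},
             {"id": "hall-pc", "location": "hallway"}]
order = {"lab_id": "L-77", "patient": "Doe", "status": "result lost - redraw needed"}
print(route_alert(order, "room-12", terminals))
```

In a real deployment the nurse’s location would come from the same RFID infrastructure rather than being passed in by hand.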


As we know from our own experience, each situation is different, but there is normally an underlying factor that relates them in some way.  The mobile environment introduced here will also benefit from incorporating context-aware pervasive systems.  One of the hardest and most critical steps in dealing with a patient is diagnosing what could be causing the problem.  With context awareness, we can search through millions of rows of data to help determine the root of the problem, and the system can also help locate particular things or people.

  •  Situation – For example, a patient has a rare form of lung tissue scarring that is causing shortness of breath.  The device searches through other known diseases, identifies the problem, and notifies the respiratory therapist to come check on the patient right away.  Not every situation will look like this, but it shows how the devices will communicate with each other to solve a particular problem.
  • Goal – Looking at the system more closely, every device will be able to talk to every other device, using a messaging layer that bridges the different platforms and programming languages.  Context-aware sensors will monitor patients and control environmental factors (temperature, air pressure, etc.) that could affect a patient’s health.  Each unit will have its own dedicated server to handle its requests, with backup servers in place to shift load from overloaded units to others in the hospital.
  • Outcome – This will help contact and locate the right staff member, alleviating additional stress on the nurse and other hospital staff.


As with any technology, we like to push the boundaries and provide new state-of-the-art innovation.

  • Situation – Each patient is normally hooked up to a monitor that tracks vitals, currently viewed as line graphs.  Using a tangible user interface [4], we could instead view a 3D representation of the patient’s current heartbeat, respiratory rate, and blood pressure on a mobile device (a mobile eyepiece).
  • Goal – A visual of the patient may help determine a particular problem, or make detecting one a little clearer.  The mobile eyepiece provides the most mobility, giving caregivers a hands-free device that will not interfere with their work.
  •  Outcome – This technology would be helpful, but it could not possibly show a true representation of the human anatomy.  It would render a 3D representation only as well as its predictive models and data sources allow, based on how the software interprets the information being fed to the device.  X-rays will still be needed for a real look inside, but the mobile device would provide a quick approximation.


As with every new technology, the risk of losing personal data is the most talked-about topic.

  • Situation – To keep this as secure as possible, the hospital will run its own secured network that staff members are privileged to use.  Each staff member will be assigned a personal identification, and each device, piece of equipment, patient, etc. will also be tracked with an RFID tag.
  • Goal – If an item is moved to the wrong location or used in the wrong way, a notification is sent to the security department.  There will also be a way for staff members to quickly and securely notify a supervisor or security of a potentially harmful situation.
  • Outcome – This system could be problematic if any sensors are faulty, so each tracking sensor will need to communicate with the other sensors to verify that they are working correctly.


The main costs of the system will be its components (sensors, eyepieces, software, etc.), training, and ongoing maintenance.

  • Situation – The PCs, monitors, and current RFID tracking devices will need to be integrated with the new mobile computing devices.
  • Goal – Sensors can cost anywhere from a few dollars to hundreds of dollars depending on their use.  Each room will have a handful of sensors and a unit will have hundreds, so the larger the area, the more expensive the installation.  Mobile eyepiece devices will cost anywhere from a few hundred dollars to a couple of thousand, though prices will gradually decrease over time; costs also rise with the number of employees in a unit.  Devices can be handed off between shifts, but it would be more beneficial for employees to have their own devices configured to their own specifications.  Software will also cost from a few hundred dollars to thousands, depending on the number of licenses and applications.
  •  Outcome – Bulk purchases will help reduce the cost.  Implementing a fully sentient and context-aware computing system will be expensive, but it will be essential for hospitals to stay current with present technologies.


Patient care is the top priority in a hospital setting, but if staff members are not using their resources to their full potential, care suffers.  Mobile computing will help staff diagnose, care for, and treat patients faster, leading to a better experience and improved wellbeing.  Life gets a little easier with each technological advancement.




[1] http://www.ehow.com/facts_7164156_use-technology-lacking-united-states_.html

[2] Sentient Computing. Andy Hopper. Philosophical Transactions of the Royal Society of London, vol. 358 (Aug. 2000), pp. 2349-2358. DOI=10.1098/rsta.2000.0652.

[3] A Survey of Context-Aware Mobile Computing Research. G. Chen and D. Kotz. 2000. Tech. Rep. TR2000-381, Department of Computer Science, Dartmouth College (Nov. 2000)

[4] Tangible User Interfaces: Past, Present and Future Directions. Eva Hornecker and Orit Shaer. Foundations and Trends® in Human-Computer Interaction, vol. 3, issue 1-2 (2009), pp. 1-137. DOI=10.1561/1100000026. Also available at http://strath.academia.edu/EvaHornecker/Papers/167491/Tangible_User_Interfaces_Past_Present_and_Future_Directions.

Mobility Challenges and Opportunities- ARAC System December 10, 2012

Posted by polyakd in Mobility.
add a comment

1.  Introduction

Every cycling enthusiast has a cycle computer; it may even have GPS and turn-by-turn navigation support.  The problem is that every time you look at your current speed, heart rate, or location, you take your eyes off what is in front of you, which is potentially dangerous because you are focusing on something other than the road.  Augmented reality is a technology that can be used to overlay information on the world we see.  Imagine it integrated into your cycling glasses: important information would always be displayed in your current line of sight.  The information that could be overlaid onto your view of the physical world is limitless.  It could project a line on the road showing a selected route, help you find the appropriate gear ratio for the terrain gradient, and display the current wind speed, your current cadence, heart rate, and power output to help you optimize your ride and performance.

Such an Augmented Reality Assistance for Cycling (ARAC) system could provide tremendous assistance to cyclists, whether recreational, amateur, or professional.  To function properly, the ARAC system needs to be connected to all the instruments on the bike and to the Internet.  Cellular data connections would be used to consume web services providing navigation, weather, wind speed, and other valuable information.  With cycling gaining popularity, this device could be revolutionary.

2.  Example Scenario

Imagine you are in a race or out for a training ride in the country.  You are constantly looking down at your cycling computer to see your current speed, heart rate, power output, and cadence.  Then you look up and realize you have been so focused on the computer that you have no idea where you are.  You have two choices: backtrack to try to get back on route, or pull out your iPhone to find your way back.  Now imagine if the glasses you are wearing could prevent this entire situation.  Instead of constantly looking down at your computer, all you have to do is pay attention to the road, because the information is overlaid onto your view by your glasses, letting you see it and the road at once.  Not only could your ARAC glasses display this information in real time, they could also overlay a line on the road ahead showing exactly where you need to go.

Now imagine you are riding along another day.  It’s hot and you’ve been riding for hours; you are getting tired but continue your ride.  Unbeknownst to you, a drowsy driver is a quarter mile behind you.  When the car passes, you are nearly hit, and the near miss causes you to lose control and end up in the ditch.  Cyclists always worry about the one driver who isn’t paying attention.  What if the same glasses that provide navigation and other important cycling information could also point out and warn you of hazards in front of you or coming up behind you?

3.  Requirements

The Augmented Reality Assistance for Cycling (ARAC) system needs to provide a cyclist with all the important information normally shown on a cycling computer.  This includes at least the following:

  • Location
  • Navigation (turn-by-turn with route overlaid on the road ahead)
  • Current weather information
  • Current wind speed
  • Future weather information based on the rider’s expected destination
  • Current speed, max speed, average speed
  • Timer
  • Current cadence, max cadence, average cadence
  • Current power, peak power, average power
  • Current heart rate, heart rate zone, time in heart rate zone
  • Camera and hazard recognition algorithm, including car recognition algorithm
  • Access to remote resources via smartphone
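Several items in the list above are the same pattern: a current reading plus a running max and average per sensor.  A minimal aggregator sketch (sensor names and units are illustrative, e.g. speed in km/h, cadence in rpm, power in W):

```python
class RideMetrics:
    """Accumulate the current/max/average statistics from the requirements
    list.  Sensor names and units are illustrative."""

    def __init__(self):
        self.samples = {"speed": [], "cadence": [], "power": [], "heart_rate": []}

    def update(self, **reading):
        """Record one sample per named sensor, e.g. update(speed=30.0)."""
        for name, value in reading.items():
            self.samples[name].append(value)

    def summary(self, name):
        """Current, max, and average for one sensor stream."""
        vals = self.samples[name]
        return {"current": vals[-1], "max": max(vals),
                "average": round(sum(vals) / len(vals), 1)}

m = RideMetrics()
m.update(speed=30.0, cadence=85, power=210, heart_rate=142)
m.update(speed=34.0, cadence=95, power=250, heart_rate=150)
print(m.summary("speed"))   # {'current': 34.0, 'max': 34.0, 'average': 32.0}
```

On the glasses, the `summary` values would feed the heads-up overlay each frame rather than being printed.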

4.  Current Technologies

The ARAC system behaves similarly to Google’s Project Glass[i], an augmented reality head-mounted display that overlays digital information onto the lens of the glasses, as seen in Figure 1 and Figure 2.  “Augmented reality is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input”[ii], such as the vital cycling information and location data in the ARAC system.  ARAC makes use of the current cellular data infrastructure by tethering to a smartphone to share its data connection, communicating with a 3G- or 4G-enabled smartphone over Wi-Fi and Bluetooth.  The system also uses the ANT+ protocol and an adapter so the smartphone can communicate with speedometers, cadence sensors, heart rate monitors, and power meters.
Figure 1: Google Project Glass map augmentation.[iii]
Figure 2: Google Project Glass weather augmentation.[iv]

5.  Architectural Approach

5.1.  Information Models

The ARAC system uses the current 3G and 4G cellular data infrastructure for Internet access by tethering to a 3G/4G-enabled smartphone.  An Internet connection is necessary for consuming web services that provide navigation, weather, wind speed, and other information presented to the user.  The glasses themselves are relatively simple and do not require complex, expensive hardware; instead the system uses the smartphone’s shared resources, such as its cellular network, CPU, and flash storage.

The ARAC system requires a connection to bike-mounted sensors in order to access and augment power, heart rate, speed, and cadence data.  The common protocol for communicating with sports sensors is ANT+, a wireless protocol that allows sports monitoring devices to reliably communicate sports, wellness, and home health data.[vii], [viii]
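To make the sensor link concrete, here is a simplified decoder for an ANT+ heart-rate monitor broadcast.  The field offsets follow the public ANT+ HRM profile as I understand it (8-byte payload, last byte is the computed heart rate), but this sketch glosses over page toggling and multi-page details, so treat the layout as an assumption:

```python
def decode_hrm(payload):
    """Decode common fields of an ANT+ heart-rate monitor broadcast (simplified).
    Assumed layout: 8-byte payload, byte 0 = page number (with toggle bit),
    bytes 4-5 = beat event time in 1/1024 s (little-endian),
    byte 6 = beat count, byte 7 = computed heart rate in bpm."""
    if len(payload) != 8:
        raise ValueError("ANT+ broadcast payload is 8 bytes")
    event_time = payload[4] | (payload[5] << 8)
    return {"page": payload[0] & 0x7F,          # mask off the page-toggle bit
            "beat_time_s": event_time / 1024,
            "beat_count": payload[6],
            "heart_rate_bpm": payload[7]}

sample = bytes([0x00, 0xFF, 0xFF, 0xFF, 0x00, 0x04, 0x12, 0x8E])
print(decode_hrm(sample)["heart_rate_bpm"])   # 142
```

On the phone side, payloads like this would arrive through the ANT+ adapter and feed straight into the metrics shown on the glasses.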

6.  ARAC System

To work, the ARAC system requires a connection between the augmented reality glasses and a smartphone.  The smartphone provides resources such as cellular data, GPS, processing, and storage behind what the glasses display to the user.  To display useful information, the system also requires connections to ANT+ sports monitoring devices (power meter, cadence sensor, speed sensor, heart rate monitor, etc.).  For a smartphone to communicate with ANT+ devices, it needs an ANT+ adapter such as the Garmin ANT+ adapter for iPhone.[ix]

7.  Conclusion

With cycling gaining popularity, an Augmented Reality Assistance for Cycling (ARAC) system could provide tremendous assistance to all cyclists, whether recreational, amateur, or professional.  To work, ARAC needs to be connected to all the instruments on the bike and to the Internet.  Cellular data connections are used to query web services providing navigation, weather, wind speed, and other valuable information.  The system helps keep riders safe, alert, and informed of their surroundings because they no longer need to keep looking down at a handlebar-mounted computer.

[i] http://en.wikipedia.org/wiki/Project_Glass

[ii] http://en.wikipedia.org/wiki/Augmented_reality

[iii] http://www.washingtonpost.com/rf/image_606w/2010-2019/WashingtonPost/2012/04/04/Business/Videos/04042012-58v/04042012-58v.jpg?uuid=AcGm2H6JEeG_hEz71OdQ3Q

[iv] http://i.dailymail.co.uk/i/pix/2012/04/04/article-2125139-1277E78E000005DC-706_306x300.jpg

[v] http://www.oakley.com/images/catalog/generated/750×350/15/4f43dfdfe8200.jpg

[vi] http://images.apple.com/iphone/home/images/gallery_design.jpg

[vii] http://www.thisisant.com/consumer/ant-101/what-is-ant/

[viii] http://en.wikipedia.org/wiki/ANT%2B

[ix] https://buy.garmin.com/shop/shop.do?pID=103887&ra=true

[x] www.trekbikes.com/us/en/bikes/road/race_performance/madone_7_series/madone_7_9

Mobility Challenges and Opportunities – “Smart Cars” December 9, 2012

Posted by daleklein in Mobility.
add a comment

What was only a product of man’s imagination ten to fifteen years ago, a vision of what tomorrow’s technology should yield, has quickly advanced into the first steps of transforming dreams into reality and releasing them to the general public.

We have seen rapid advances in cell phone technology, to the point where phones are no longer just wireless devices for voice communication.  The advances have morphed the cell phone into the smartphone: literally a small portable computer that uses GPS for location-based services, links to the internet, has integrated web cams for video chat, sometimes offers a voice-recognition interface, and runs numerous applications.  These same capabilities have been extended to the automotive world as an infrastructure on which to build.  This transfer has given rise to a new era of “smart cars”; below we look at the technology that exists, the directions and future opportunities researchers are exploring, and the challenges to achieving these grand visions.

Computers have become an integral part of our everyday lives, but the man-machine interface still relies on human interaction, and on the human’s understanding, or lack of understanding, of the data being input.  Overcoming this complete reliance on human perception gave rise to the field of sentient computing.  Sentient computing is a form of ubiquitous computing that uses sensors to perceive its environment, allowing applications to be more responsive and react to sensor data.  Many current automotive technologies revolve around sentient computing to assist us in our daily routines.  One such technology, employed by Toyota and Ford, is Active Park Assist.  Ford’s system is quicker and less costly, and works as follows: the driver activates it by pressing a switch; ultrasonic sensors then measure and identify an empty space of reasonable size; the system asks the driver to accept its assistance; once accepted, the computer takes over the steering and parks the car hands free, while the driver retains control of the gas and brake pedals.  Audible and visual notifications alert the driver to the car’s proximity to other vehicles or objects.  The driver is still in control, but the sensors allow quicker and more accurate decisions than the driver could make alone. [1] I have never been a fan of parallel parking, since it’s a guessing game that can take multiple attempts to fit your vehicle into a space.
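The space-finding step can be sketched as a scan over side-facing ultrasonic range readings taken while driving past parked cars.  All thresholds here are illustrative assumptions, not Ford’s actual parameters:

```python
def find_parking_gap(distances, sample_spacing_m=0.25,
                     clear_threshold_m=1.5, needed_length_m=6.0):
    """Scan side-facing ultrasonic range samples taken at regular intervals
    while driving past parked cars; report whether a clear gap long enough
    to park in was seen, and the longest gap found.
    Thresholds and spacing are illustrative, not real Active Park Assist values."""
    gap = best = 0.0
    for d in distances:
        # A "clear" reading extends the current gap; a close reading resets it.
        gap = gap + sample_spacing_m if d >= clear_threshold_m else 0.0
        best = max(best, gap)
    return best >= needed_length_m, best

# 0.6 m readings = parked car close alongside; 3.0 m readings = open curb
scan = [0.6] * 8 + [3.0] * 26 + [0.6] * 8
print(find_parking_gap(scan))   # (True, 6.5)
```

A real system would also fuse wheel-odometry for the spacing and check curb alignment, but the core decision is this kind of run-length test.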

According to the National Highway Traffic Safety Administration, there are more than 100,000 crashes annually related to drowsy driving, linked to 40,000 injuries and over 1,500 fatalities.  To help combat those statistics, automakers have been implementing crash-avoidance technologies, now offered in a growing number of vehicles. [2] This is no substitute for common sense, but it provides a bit of a guardian-angel effect when you do drive.

The term “crash-avoidance technology” covers a variety of applications.  Adaptive headlights are designed to improve night vision around corners and curves and are available from many luxury automakers such as BMW, Mercedes, and Audi.  They work off of sensors that measure speed, steering angle, and the degree of rotation around the vertical axis, known as yaw.  The sensors send signals to small electric motors that adjust the beams to the left or right to keep them on the road ahead.  Another application is forward collision warning, which uses sensors such as cameras, radar, or light detection and ranging (LIDAR) to detect vehicles ahead.  Some systems also include automatic braking, which is applied when a collision is imminent. [2]  One car commercial shows this applied to a vehicle with a back-up camera: the driver did not immediately see a child dart behind the vehicle, but the sensors detected the child and applied the brakes before the driver could react.
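To make the adaptive-headlight idea concrete, the swivel computation can be sketched as a blend of steering angle and yaw rate, clamped to a mechanical limit. The gains, the 15-degree limit, and the low-speed cutoff below are illustrative assumptions, not real calibration values.

```python
# Hedged sketch of adaptive-headlight swivel: point the beams into the
# curve using steering angle and yaw rate. Gains/limits are made up.

def beam_angle_deg(steer_deg, yaw_rate_dps, speed_kph,
                   k_steer=0.25, k_yaw=0.5, limit_deg=15.0):
    if speed_kph < 5:  # don't swivel while creeping or parked
        return 0.0
    # A real controller would filter these signals and model dynamics.
    raw = k_steer * steer_deg + k_yaw * yaw_rate_dps
    return max(-limit_deg, min(limit_deg, raw))

print(beam_angle_deg(30.0, 10.0, 60.0))  # 12.5
print(beam_angle_deg(90.0, 40.0, 60.0))  # clamped to 15.0
```

The clamp matters: small electric motors can only swing the beam so far, so a sharp hairpin saturates the adjustment rather than over-rotating it.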

The following are two examples of stay-alert systems currently deployed.  Ford has a Lane Keeping System that uses a small forward-facing camera to identify the lane markings on both sides of the road.  While the vehicle is in motion, the system scans ahead and predicts where the vehicle should be, measured against the actual markings.  If there is a discrepancy, a warning chime sounds and a symbol lights up on the dashboard; failure to respond triggers an additional chime.  Mercedes-Benz has an Attention Assist system that gathers data about a driver during the first minutes of driving, creates a profile, and sounds an alert chime when the driver deviates from that profile. [2]
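The predicted-versus-actual comparison at the heart of a lane-keeping check can be sketched in a few lines. The 0.3 m threshold and the escalation rule are illustrative assumptions on my part, not Ford's actual tuning.

```python
# Sketch of the lane-keeping check: compare the predicted lateral
# offset against the camera-measured one and escalate warnings.

def lane_warning(predicted_offset_m, measured_offset_m,
                 warn_m=0.3, prior_warnings=0):
    """Return a warning level: 'none', 'chime', or 'chime_repeat'."""
    discrepancy = abs(predicted_offset_m - measured_offset_m)
    if discrepancy < warn_m:
        return "none"
    # Driver didn't respond to an earlier chime -> escalate.
    return "chime_repeat" if prior_warnings > 0 else "chime"

print(lane_warning(0.1, 0.15))                    # none
print(lane_warning(0.1, 0.6))                     # chime
print(lane_warning(0.1, 0.6, prior_warnings=1))   # chime_repeat
```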

We’re already starting to see separate threads of an autonomous-car future being woven into real-world tests.  Within a few short years, autonomous vehicles will be able to brake and change lanes on their own to avoid collisions, or alter routes to avoid adding to developing traffic congestion.  These systems should be able to communicate with one another, so that when actions are taken, nearby vehicles can be alerted.  The U.S. Transportation Department is promoting the development of an advanced form of Wi-Fi known as dedicated short-range communications, or DSRC.  DSRC could provide a universal electronic toll system as well as coordinate signal lights in real time for improved traffic flow. [3]

Toyota Motor Corporation is currently testing a car safety system predicated on vehicles being able to communicate with each other.  Using a newly completed test facility the size of three baseball stadiums, it is evaluating its Intelligent Transport System, in which cars receive information from sensors and transmitters installed on the streets.  The objective is to minimize the risk of accidents from a vehicle missing a red traffic light, cars advancing from your blind spot, or pedestrians crossing a street.  The system also allows cars to transmit information to one another. “Toyota has also developed sonar sensors that help drivers avoid crashing in parking lots. One system even knows when the driver pushes on the gas pedal by mistake instead of the brakes, and will stop automatically.” [4]

A more recent form of vehicle-to-vehicle (V2V) communication is currently being tested in Ann Arbor.  Vehicles share situational data to avoid crashing into each other; V2V communication would also allow vehicles to share their position, destination, and intended route with a central station. [5]
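The kind of situational message a V2V vehicle might share, and a proximity check a receiver might run on it, can be sketched as below. The field names and the 50 m "too close" rule are my own illustrative assumptions; they are not the actual DSRC message format.

```python
# Hedged sketch of a V2V situational message and a proximity check.
import math

def make_v2v_message(vehicle_id, lat, lon, heading_deg, route):
    return {"id": vehicle_id, "lat": lat, "lon": lon,
            "heading": heading_deg, "route": route}

def too_close(msg_a, msg_b, min_separation_m=50.0):
    # Flat-earth approximation; fine over tens of meters.
    m_per_deg = 111_320.0
    dx = ((msg_a["lon"] - msg_b["lon"]) * m_per_deg
          * math.cos(math.radians(msg_a["lat"])))
    dy = (msg_a["lat"] - msg_b["lat"]) * m_per_deg
    return math.hypot(dx, dy) < min_separation_m

a = make_v2v_message("car-1", 42.2808, -83.7430, 90.0, ["Main St"])
b = make_v2v_message("car-2", 42.2808, -83.7434, 90.0, ["Main St"])
print(too_close(a, b))  # True: roughly 33 m apart
```

Broadcasting a handful of such fields several times a second is cheap; the hard problems are the security and latency issues discussed later in this post.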

In Europe, Volvo is testing the concept of “road trains” to allow for more efficient driving.  A line of vehicles traveling close together, much like in a NASCAR race, would allow for higher throughput and improved fuel savings through drafting.  The lead vehicle acts as the master unit, and the follower units parrot the movements and actions of the lead vehicle. [5]
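The master/follower behavior can be sketched as followers echoing the lead vehicle's commands with a short delay. This is purely illustrative; a real platoon controller needs gap regulation, fault handling, and safety fallbacks.

```python
# Minimal sketch of a road-train follower: parrot the lead vehicle's
# speed and steering, a couple of control steps behind.
from collections import deque

class Follower:
    def __init__(self, delay_steps=2):
        # Queue of (speed, steering) commands echoed from the lead car.
        self.pending = deque([(0.0, 0.0)] * delay_steps)
        self.speed = 0.0
        self.steering = 0.0

    def receive_from_lead(self, speed, steering):
        self.pending.append((speed, steering))
        self.speed, self.steering = self.pending.popleft()

f = Follower(delay_steps=2)
f.receive_from_lead(25.0, 0.0)
f.receive_from_lead(25.0, 2.0)
f.receive_from_lead(25.0, 2.0)
print(f.speed, f.steering)  # 25.0 0.0 (two steps behind the lead)
```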

Many of the challenges faced by “smart cars” are the same as those faced by smartphone communication networks.  Vehicle networks are unique, however, in that the information conveyed over them may affect life-or-death decisions; security is therefore of the highest importance.  In an early work entitled “Challenges in Securing Vehicular Networks,” the authors identified several such challenges.  The first is authentication versus privacy: binding a driver to a single identity reduces the probability of various spoofing attacks, but at a cost to privacy.  Another challenge is availability: vehicle networks rely on near real-time responses, which can make applications vulnerable to denial-of-service attacks.  A third is a low tolerance for errors: because decisions can involve life-or-death situations, the error rate needs to be nearly zero. [6]

These are just a few of the technical challenges.  Given that there are multiple automakers, all selling their vehicles globally, we must consider that they may not all agree on the same solutions, and not all solutions may fit all infrastructures.  A system in the US may not be the same as one in Europe.

I believe the end goal is to move closer and closer to a fully autonomous system, and driver-assist systems are the baby steps along the way.  They are not baby steps from the perspective of the technology, but rather from the psychological perspective of acceptance.  By having the driver retain primary control while relinquishing some of it to the computing system, the comfort zone of end users is gradually shifted; as each step becomes the norm for the masses, a new baseline is established from which to start the process all over again.  Widespread acceptance is probably the greatest challenge in this area.


[1] Active Park Assist – Ford, http://media.ford.com/images/10031/APA_Toyota.pdf

[2] Crash-Avoidance Technology Can Help, but Won’t Replace Rest, Jan 1, 2012,    http://www.edmunds.com/car-safety/technology-aimed-at-helping-drowsy-drivers-stay-awake.html

[3] Ford, GM, Tech Firms Driving To ‘Smart’ Car Era, April 7, 2011, http://news.investors.com/technology/040711-568445-ford-gm-tech-firms-driving-to-smart-car-era.htm

[4] Toyota Smart-Car Technology Let Autos ‘Talk’ To Each Other, Sense Pedestrians and Red Lights, Nov. 12,2012,http://www.huffingtonpost.com/2012/11/12/toyota-smart-car-technology_n_2115570.html?utm_hp_ref=technology

[5] You won’t need a driver’s license by 2040, Nov. 18, 2012, Wired magazine.

[6] Challenges in Securing Vehicular Networks, B. Parno, A. Perrig

Augmented Reality – “smart glasses” December 9, 2012

Posted by davevankampen in Mobility.
add a comment


  1. Introduction
  2. Challenges
    • Reduction in Technological Capabilities
    • Battery Life
  3. Opportunities
    • Reduction in Technological Capabilities
    • Android Development Environment
  4. Conclusion


Augmented reality is all about adding to and enhancing the environment the user is already in. It is about being mostly or totally unobtrusive, to the point of being invisible, until it is needed, and then adding value by displaying information about what the user is seeing, or handling, or perhaps thinking about. It is about adding value, and not taking any away. Granted, this is quite the tall task.

In a way, the smartphones that have proliferated today are already augmenting our reality, by the very meaning of the word. Dictionary.com defines augment as an increase in size, number, strength, or extent, and I feel it is the latter two that really fit this type of augmentation. Consider this somewhat trivial example: out to dinner with friends, you are discussing your favorite movies. Someone mentions “Air Force One” and asks, “wasn’t the bad guy the same guy that was in the Harry Potter movies?” Five or ten years ago, that would have gone unanswered if no one knew. Today, with smartphones and a reliable network connection, we have effectively strengthened and extended our intelligence and “memory”: you can very quickly retrieve an answer to queries like this, and the conversation can be resolved right then and there. This is a good (though somewhat silly) example of how smartphones augment our reality.

So, naturally, my first thought when considering glasses that augment one’s literal view of reality was that the technology, both hardware and software, in smartphones could easily be adapted for this application. And in fact, that is where some current projects are already headed. The first main reference for this post is Google’s Project Glass. Google, the developer of the Android smartphone operating system, is already hard at work making an augmented reality device built around that system. They are obviously a powerful player in the market already, and have many skilled engineers on their side.


However, this does bring us to what I consider a couple of the key challenges in the “smart glasses” market. I think it is important to resist the urge to directly translate the smartphone market into this field. That would be too easy, and there are a few differences that make it impractical.

For one, as a classmate pointed out during a feedback session, smartphones are heavily optimized for the communication media available to them: the touchscreen, haptic feedback, high-res displays, and so on. None of these carries over to a smart-glasses application. The display needs to be see-through, and should preferably require interaction only from the user’s eyes; the more physical interaction you require, the more obtrusive the device becomes. So if the smartphone operating system you are using is built around the assumption of a big, powerful, high-res screen, there will be significant challenges in migrating that architecture to the smart-glasses realm. If nothing else, some significant paring-down of features and functionality is necessary in order to re-factor the UI so it can be seen through, instead of just looked at. It needs to be in the field of view, but not the focus of the view.

Another significant challenge for smart glasses is battery life. Smartphones, a similar type of device with respect to functionality, have large batteries but still only operate for a few short hours, and that is with the screen off most of the time. Depending on how it is implemented, I see smart glasses as operating in some form of “always on” mode: to offer real-time augmenting capabilities, they should always be there, ready to provide information on whatever the wearer is viewing. In fact, this sort of real-time response is precisely what Microsoft recently filed a patent for. In that application, Microsoft discusses possible ways a visual device can augment views: registering attendance at an event, detecting field of view and the user’s geographic position at the event, and providing information about what is nearby or within the field of view. These functions are wonderful, but of course very difficult to implement, especially if the device is simply a re-factored smartphone. They demand fast network connections, long-lasting battery life, rapid processing power, and likely large memory stores to hold information about both the user and the operating environment. And that’s all not to mention the cost of creating such a device.

So, there are significant challenges. However, if there were no challenges in a given area, then there would be no opportunity for business advancement or profit, because anyone would be able to develop a competitive product. Market areas that provide challenges also provide opportunities, and those will be discussed next.


One significant area of opportunity for the smart-glasses application falls in line with one of the aforementioned challenges. I think the display technology for smart glasses will make-or-break a project’s success, more than almost any other factor (aside from cost). So much battery power, processing power, and cost on a smartphone device is invested in the screen. This is because the screen is the end of the user’s field of view. The more vibrant, bright, and responsive it is, the more satisfied the user is with the experience of it.

Fortunately, due to the nature of the implementation, this is not necessary on smart glasses. In fact, it would be a significant hindrance. Smart glasses need to be seen through, not looked at. To be safe, they should never block the user’s field of view. Of course, the user should not wear them when driving or doing similar activities that require full attention. But even during an activity where it is safe to wear them, they should be as visually “out of the way” as possible.

This is a good thing, though, because it provides the opportunity for lower-power displays: lower power, lower resolution, lower response granularity (smartphones need to know precisely where the user touched; smart glasses likely do not need the same granularity, since the vision angle changes so rapidly anyway). A low-res, mostly transparent display with a low-res camera (as an eye sensor) should suffice. This requires less battery and less processing power, and thus lower cost. So though this market is different from the smartphone market, it provides an opportunity.

Though the Android operating system has shortcomings in this application, it is still a viable candidate to be the OS for smart glasses. One reason is the developer network and environment. Google has done an excellent job creating a development environment and application deployment network that encourage many developers to get involved. In addition, Android does a good job of abstracting hardware sensor specifics away from the software developer, providing easy-to-use and well-documented APIs that let software access hardware sensor data. This developer base will likely get behind a new and different platform rapidly, as long as the development suite and toolchain remain consistent with what they have grown to expect. This is a valuable asset. The project doesn’t have to be Project Glass (which will likely cost in excess of $1K); it can be any device running Android.


There are some obvious challenges associated with working on augmented-reality smart glasses. A battery that can last as long as the user typically wears glasses throughout the day is atypical for most smartphones today, and those have the luxury of housing (relatively) large batteries internally. This is not an option for smart glasses, and it will need to be worked out as battery technology advances. The other key challenge (reduction in technology) is also an area of benefit and growth, as it allows for simpler hardware.

Overall, I think there is a lot of potential in the coming years for smartphone developers to expand their offerings into augmented reality smart-glasses. Google is trying already, but I see them paring “Project Glass” down to make it more affordable and usable for the everyday customer.

Mobile Emergency Response System (MERS) December 9, 2012

Posted by mattpassini in Mobility.
add a comment

The Mobile Emergency Response System (MERS) is a combination of sub-systems that work together to provide emergency responders with the most immediate and detailed information available, both before and during a response.  The system consists of an online application acting as the central hub for displaying all information available for a given address or response type.  A secondary distributed system will be deployed on mobile devices carried by emergency responders; it will primarily use a cellular internet connection when available, falling back to a wireless ad-hoc peer-to-peer network that hops its way back to the main system.  The system is also designed to interface with third-party systems, such as computer-aided dispatch (CAD), other agencies’ systems, and hospitals.

The following is a synopsis of the overall system.

Common elements between modules – Intercommunication with Dispatch
This system will interface with the emergency dispatching systems.  When a department is called to respond, the Computer Aided Dispatch (CAD) system will automatically send all available information, such as the exact location of the emergency (GPS coordinates) as well as full address, and any manually added notes and information.  The online application will also display a ‘live’ (interactive) Google Map of the location.

Fire Module
The main system page will be the Fire Module screen, which will show the following information:
    • User-entered pictures related to the address
    • List of key holder/main contact information
    • List of relevant notes about the premises
    • List of hazardous materials kept on site

EMS Module
    • Ability to choose a Medical or Trauma response, or read it directly from the dispatch system.
    • Direct access to the Wisconsin Ambulance Run Data System (WARDS) for immediate reporting.
    • Interface with hospitals to allow for instant, potentially two-way, communication.
    • When medical device sensors become more widely available, the ability to wirelessly read a multitude of biological markers would be added.

Vehicle Rescue
The vehicle response module would have the ability to choose the make, model, and year of each vehicle involved in the emergency, or have the information sent automatically via dispatch’s CAD if available.  The system would integrate with a third-party database holding diagrams for each vehicle, including vital information such as battery location, airbag locations, special metals used in the engine and structural reinforcement, and any unique electrical systems, which are becoming ever more common in all-electric or hybrid vehicles and which pose a great risk, specifically of fire.

Search (for people/animals/etc)
Display Standard Operating Procedures (SOPs) for creating and organizing a small- to large-scale search-and-rescue mission based on location, number of patients, and time of day.  The system would allow input of latitude and longitude, which updates the Google map to that specific location.  With E911, dispatch centers with the necessary equipment are able to retrieve the GPS coordinates of the cell phone used to call 911, if the phone is capable of providing them.  This allows for a real-time overview of the geographical area, allowing for extremely efficient execution of a large-scale distributed search effort.

Other Rescue (Special Rescue)
Ability to choose the type: Water, Ice, High Angle (ropes), or other, with the potential to read it from the dispatch system if available, and display the proper SOP based on that information.  The system would also display other helpful hints, such as how to tie specific knots or rig special pulley hoisting systems.

On-Scene Command Module and the Secondary Distributed Mobile Application
The On-Scene Command Module is hosted within the online application and communicates with the distributed native mobile applications via the internet or by falling back to ad-hoc networks.  The key to this application is complete integration requiring no user interaction.

Regardless of the technology used to transmit data to and from the various devices, there are a large number of small data items that can be continuously transmitted.  The system becomes even more valuable when external sensors are interfaced with the mobile devices.  For instance, biological sensors could record vital signs such as pulse and body temperature.  A sensor on the Self-Contained Breathing Apparatus (SCBA) could transmit the amount of oxygen left, along with trends of oxygen intake, and thus estimate the amount of time the responder can stay in the building.  An accelerometer could be used to sense movement, or more specifically, the lack thereof.  Lastly, the exact location of each responder could be reported, which aids both in real-time management of resources and in retrieving a downed responder in an emergency.
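The SCBA time-remaining estimate described above can be sketched with simple arithmetic over recent consumption readings. The units and sample numbers here are illustrative assumptions; real SCBA telemetry would be calibrated to the specific cylinder and regulator.

```python
# Sketch of an SCBA time-remaining estimate from a usage trend.

def minutes_remaining(air_liters, recent_usage_lpm):
    """Estimate minutes of air left from recent usage samples (L/min)."""
    if not recent_usage_lpm:
        return None  # no trend data yet
    avg_lpm = sum(recent_usage_lpm) / len(recent_usage_lpm)
    if avg_lpm <= 0:
        return None
    return air_liters / avg_lpm

# Hypothetical: 600 L left, breathing 40-50 L/min under heavy exertion.
est = minutes_remaining(600.0, [40.0, 45.0, 50.0])
print(round(est, 1))  # 13.3
```

Because the estimate uses the responder's recent trend rather than a fixed rate, it automatically tightens when exertion (and therefore air consumption) spikes.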

A second use of the mobile devices could be enhanced by adding a Heads Up Display (HUD), that would ideally be integrated into the facemask for the SCBA.  Not only could the HUD display important information about the responder themselves (vitals, air and time remaining), but it could also display the status of their partner(s).  It could also read information from equipment and apparatus that is on scene.  For example, a fire engine could provide a firefighter with the current level of water left in the truck, as well as the current gallons per minute (GPM), or flow, to each hose line.  Other information could be sent directly to responders, such as location of potential victims, location of hurt firefighters, updates on the stability of areas of the structure, or any other commands.

Outside, the command module on the cloud-hosted system will aggregate the data received from responders and provide meaningful information, such as each responder’s biological status, air remaining, or current activity.  This module would also allow for direct communication to a responder, such as sending commands about what to do next, which would be displayed on the responder’s HUD.

With the various amounts of data being logged, extremely detailed and accurate reporting could be automated.  This would greatly decrease the time needed post-incident and relieve much of the frustration of attempting to remember every detail of what can be extremely complex and long responses.

Home Automation December 7, 2012

Posted by 3562pittena in Mobility.
add a comment


The term automation is used to describe tasks typically done by humans that are done automatically. Applying this to homes means taking tasks done around the home and automating them. This includes things like temperature control and security. A few examples of uses for home automation would be to turn on and off lights on a schedule. This would make it appear as if you are home, even when you are not. Also, you could check if doors or windows are open remotely. This would be nice to ensure your home is safe when on vacation or at the office. It could also be used to turn up or down the thermostat when you are away from home. [1]

There are a few benefits to automating things at home. The user can control things from remote locations like turning the temperature down on the thermostat after leaving for vacation. Also, automating tasks like this allows for some tasks to be scheduled (like a programmable thermostat).

It is also nice to centralize all of this control through your home computer.

Home Automation Tasks

Typical home automation tasks include things like temperature control, light control and general home security. [2] These types of home automation tasks can be done through security companies like ADT and typically have a monthly fee associated with them. [3]

There are many home automation tasks that don’t currently seem to be offered that I feel could be included in a simple package like this. Proximity sensors for windows and doors (which may or may not be included with the “security” package) could be added, as could sensors like smoke detectors or gas/carbon monoxide detectors, which would prove useful in detecting when bad things are happening. You could also include controls for electronic devices like an audio/visual system, electronic shades, and an electronic sprinkler system, all tied into the same system and given the same benefits.

Design of System

In order for this system to be successful, it has to have an architecture that is modular, has a central base station and is easy to integrate (See Figure 1).

Figure 1 – System Architecture

The system needs to be modular in order to accommodate the specific needs of each person. One family may not need any window sensors because they live on the second floor of an apartment building and aren’t worried about whether windows are open or closed. They also may not need a sprinkler system controller. Each component must be modular in order to ensure the success of the system.

There must be a central base station in order to accommodate for the modular design. There must be one central point of contact that all the modular components communicate with. This device could fairly easily be a home PC. A home PC is a good central base station because it could be easily configured for an application like this. Also, most people already have one that they leave on all the time. This central base station would need to be used like a server in order to allow for external communication (remote control).
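The base station's core job, collecting reports from modular sensors into one state it can serve out for remote control, can be sketched as a small function that applies each sensor's JSON payload. The payload fields here are hypothetical; in a real deployment this would sit behind an authenticated web server on the home PC.

```python
# Sketch of the base station's state update: each modular sensor posts
# a small JSON report, and the base station folds it into its view.
import json

def record_reading(store, payload_json):
    """Apply one sensor report (JSON string) to the base station state."""
    data = json.loads(payload_json)
    store[data["sensor"]] = data["value"]
    return store

state = {}
record_reading(state, '{"sensor": "thermostat_f", "value": 68}')
record_reading(state, '{"sensor": "door_front", "value": "closed"}')
print(state)  # {'thermostat_f': 68, 'door_front': 'closed'}
```

Keeping the update logic this simple is what makes the design modular: a sensor the homeowner never installed simply never reports, and the base station never shows it.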

If this system is going to have a modular design with a central base station, it must be easy to integrate. If the average Joe cannot unbox the modular sensors and easily connect them to the central base station, users will get frustrated and not use the product. Having the home computer as the base station allows for easily customizable descriptions of sensors. It also only displays information from reporting sensors and leaves out non-existent information (sensors not connected).

Designing the system in this way provides a few perks. Having the central base station allows the user to easily schedule any control, and it creates a simplified, common interface for the scheduling system. It would also allow for features like sensor alert levels, letting the user set a level at which they would like to be alerted. After smoke is detected or the gas/CO levels go above a certain level, an email could be sent to the user. Alerts could also be created for any other sensors (windows, doors, etc.). The user could then decide what to do with these alerts once they receive them (call the authorities, or do nothing).
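The alert-level idea described above amounts to comparing each reading against a user-set threshold. The sensor names and threshold values below are hypothetical examples, not part of any real product.

```python
# Sketch of user-configurable alert levels on the base station.

def check_alerts(readings, thresholds):
    """Return the sensors whose reading exceeds the user's alert level."""
    return [name for name, value in readings.items()
            if name in thresholds and value > thresholds[name]]

# Hypothetical user configuration and current sensor state:
thresholds = {"smoke_ppm": 100, "co_ppm": 35}
readings = {"smoke_ppm": 20, "co_ppm": 50, "window_livingroom": 0}

for sensor in check_alerts(readings, thresholds):
    print(f"ALERT: {sensor} above user-set level; emailing homeowner")
```

Sensors without a configured threshold (like the window contact above) are simply skipped, which fits the modular design: alerts exist only for the sensors and levels the user has set up.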

Benefits of new system

Some of the benefits of this type of system would be its price and ease of use. Most companies that currently offer systems like this charge a monthly service fee; this system would have no recurring cost, as the initial cost of the sensors and base station is the total cost of the system. A system that is easy to use and install means users will not be driven away by its interface. The modular design is also a benefit of the system.


In conclusion, a system architected like Figure 1 would allow for an easily integrated, modular design. This would be beneficial because the systems companies currently sell are service-oriented and often include a monthly fee. This modular design also creates a consistent interface to all the sensors and controls.


[1] Smart Schedules. Alarm.com. Online. http://www.alarm.com/productservices/homeautomation/smart-schedules.aspx

[2] Home Automation. Vivint.com. Online. http://www.vivint.com/en/solutions/packages/home-automation

[3] ADT Pulse. ADT. Online. http://www.adtpulse.com/

Mobility Opportunities and Challenges October 24, 2012

Posted by Marquette MS Computing in Mobility.
add a comment

This is the area where opinions about Mobility Opportunities and Challenges will be presented.