About Online Matters

Archive for the ‘mobile web’ Category

PostHeaderIcon A Primer on Geolocation and Location-Based Services: Geolocation from IP Address


We now take a slight turn from more exact geolocation technologies to one that is more basic – IP address-based geolocation.  IP address-based geolocation (IPG) was the earliest online geolocation technique and has been around since 1999.  It determines a user’s geographic latitude, longitude and, by inference, city, region and nation by comparing the user’s public Internet IP address with the known locations of electronically neighboring servers and routers.  While IPG is not specific to mobile, it is used by more complex algorithms in the geolocation of mobile devices.  It is thus worth taking time to understand what it is, how it works, and how accurate it is.

What we will find is that, as a stand-alone technology, IPG is not very accurate for locating a device with any reasonable degree of precision.  Moreover, there is no magic to linking an IP address to a location – the link must come from some type of third-party service that has manually (or semi-manually) mapped IP addresses to geolocations.   Even then, without the help of your ISP in providing more information about a device, the best you can do is the location of your ISP’s host server. However, in a later entry, we will discover that, when combined with other forms of geolocation, the IP address can be used as an extra signal to confirm location.


Every device connected to the public Internet is assigned a unique number known as an Internet Protocol (IP) address. IP addresses consist of four numbers separated by periods (also called a ‘dotted-quad’) and look something like 192.168.0.1.

Since these numbers are usually assigned to Internet service providers within region-based blocks, an IP address can often be used to identify the region or country from which a computer is connecting to the Internet, and sometimes the user’s general location.  At one time, ISPs issued one fixed IP address to each user; these are called static IP addresses. Because Internet usage exploded far beyond what was envisioned in the early design of the IP standard (known as IPv4), and the number of IP addresses is limited, ISPs moved toward allocating IP addresses dynamically out of a pool using a technology called Dynamic Host Configuration Protocol (DHCP).  This dynamic allocation makes physically locating a device using an IP address tougher.
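To make the region-block idea concrete, here is a toy sketch in Python of how a lookup against a block table works. The block ranges and region labels below are invented purely for illustration; real IP-geolocation databases hold millions of hand-maintained rows.

```python
import ipaddress
from bisect import bisect_right

# Toy table of block starting addresses (as integers) and their regions,
# sorted by starting address. These ranges and labels are made up for
# illustration only.
BLOCK_STARTS = [
    int(ipaddress.IPv4Address("24.0.0.0")),
    int(ipaddress.IPv4Address("81.0.0.0")),
    int(ipaddress.IPv4Address("126.0.0.0")),
]
BLOCK_REGIONS = ["North America", "Europe", "Asia-Pacific"]

def region_for_ip(ip: str) -> str:
    """Return the region of the last block starting at or below the IP."""
    n = int(ipaddress.IPv4Address(ip))
    i = bisect_right(BLOCK_STARTS, n) - 1
    return BLOCK_REGIONS[i] if i >= 0 else "unknown"

print(region_for_ip("81.2.3.4"))  # falls inside the second toy block
```

The principle is the same as the real services described later in this post: someone has to build and maintain the table; the lookup itself is trivial.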

As we move forward in this discussion, an example will help us understand what is required to convert an IP address to a physical location.  Below are two different services that provide geolocation information from an IP address.

example of how two different services - Google and whatismyip - use IP address to determine a location


The first service is Google search.  If you type “what is my ip address” into the Google search box, a set of results is returned.  The IP address of the device from which the search was made appears at the top of those results.  On the left-hand side, Google shows it has auto-detected my location as Carmel Valley Village (actually, I am about a mile away and a few hundred feet above Carmel Valley Village).

The second service is WhatIsMyIPAddress.com, which is the first organic listing in the result set returned from the “what is my ip address” search.  In this case, the service shows “my device” as being in Salinas, California, about 20 miles away as the crow flies.  Or actually it doesn’t show my device as being in Salinas.  It shows that my ISP is Comcast Cable and that my ISP is in Salinas.

Same query.  Two different services.  Two very different results.  The reason for the difference is that Google is using multiple sources to geolocate my device (IP address, wifi-based latitude and longitude) whereas WhatIsMyIPAddress is only using DNS-based information including traceroute mapping for this particular page.  Once I approve the use of geolocation services, whatismyipaddress yields similar results to Google because it also triangulates across multiple sources.

The rest of this post will delve into why this has occurred, which involves understanding the technology used to perform IP-based address geolocation.

It’s a Ping Thing

To understand what is going on, we have to start at the most raw form of the technology underlying IP addresses, which is the TCP/IP model itself.  Our most basic tool for determining the host behind an IP address is the ping command.  We won’t go into how ping operates in detail here, but you can find a great overview of this at the GalaxyVisions website. By definition, the ping command, which is an application and sits in the application layer of the TCP/IP model, reaches directly down into the internetworking layer of the TCP/IP model with an ICMP Echo Request message and receives back an IP address in an Echo Reply message.  Here is an example of what a ping looks like:


What the image shows is that through the echo request/reply, Ping is able to retrieve information about the hostname of the particular IP address, which in this case is a server at comcast.net.   Note the IP address is supposedly the IP address of my computer in my house.  But it isn’t.  Instead what is returned is the location of the server of my ISP to which my account is attached.
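The hostname that ping displays comes from a reverse DNS (PTR) lookup on the IP address. As a rough sketch, the same lookup can be done directly from Python’s standard library:

```python
import socket

def hostname_for_ip(ip: str) -> str:
    """Reverse-resolve an IP address to its DNS hostname (a PTR lookup),
    the same hostname ping reports. Falls back to the raw IP string if
    no reverse record exists."""
    try:
        name, _aliases, _addresses = socket.gethostbyaddr(ip)
        return name
    except OSError:
        return ip

# The loopback address reverse-resolves locally, no network required.
print(hostname_for_ip("127.0.0.1"))
```

Running this against your own public IP will, as the image above shows, typically return your ISP’s server name rather than anything identifying your device.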

From DNS Hostname to Location

So in the prior step, we were able to use Ping to get to a server/domain name.  The next step is to get from the server name to its location.  This is where it took me some time to understand the options available and how they work.  In this section I am going to discuss four:

  • DNS LOC Records
  • Whois Data
  • W3C Geolocation Services in Web browsers
  • Third-Party Service Providers that Map IP Address to a Physical Location

DNS LOC Records

In the Domain Name System there is an experimental standard called RFC 1876, which is named “A Means for Expressing Location Information in the Domain Name System.”  This standard defines a format for a location record (LOC record) that can be used to geolocate hosts, networks, and subnets using a DNS lookup.  You can read the standard to get all the details, but the format of the record looks like this:

DNS Location record for RFC 1876

Sample format of a DNS LOC record

The size is the diameter of a sphere enclosing the described host, and the horiz pre and vert pre fields give the horizontal and vertical precision of the estimate.  Latitude, longitude and altitude are pretty obvious as to what they mean.
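To make the fields concrete, here is a small Python sketch that converts the textual form of a LOC record into decimal degrees. It assumes the fully specified form (degrees, minutes, and seconds all present); real LOC records are allowed to omit the minutes and seconds fields, which this sketch does not handle.

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float,
                   hemisphere: str) -> float:
    """Convert degrees/minutes/seconds plus hemisphere to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

def parse_loc(record: str):
    """Parse the lat/lon/altitude portion of a LOC record string such as
    '52 22 23.000 N 4 53 32.000 E -2.00m 0.00m 10000m 10m'."""
    f = record.split()
    lat = dms_to_decimal(int(f[0]), int(f[1]), float(f[2]), f[3])
    lon = dms_to_decimal(int(f[4]), int(f[5]), float(f[6]), f[7])
    alt = float(f[8].rstrip("m"))  # altitude in meters
    return lat, lon, alt

print(parse_loc("52 22 23.000 N 4 53 32.000 E -2.00m 0.00m 10000m 10m"))
```

The sample record here is one of the well-known published LOC examples; the parsing itself is just arithmetic on the degree/minute/second fields.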

DNS LOC has two problems.  First, it has only been defined for a few sites around the world.  “Defined” means that an ISP has manually created a LOC record for its hosts (adding the record to its DNS servers).  Second, it once again only gives the location of the host server – in this case hsd1.ca.comcast.net – not the location of my device.

So this doesn’t help us.


Whois Data

Anyone who has used the Internet extensively knows about the whois service.  This service describes the owner of a particular domain.  It is possible to gain some geolocation information about a domain from it – but that data is usually the headquarters of the owner of the domain and has almost no relation to the location of my domain host, much less my computer.  The example from Comcast:

comcast's whois entry for ip address-based geolocation

Example of Comcast's whois Entry


Note that there is no entry for my specific hosting server hsd1.ca.comcast.net (left image) and the information about the top-level domain shows it in Wilmington, DE (right image).  Not much help at all.

W3C Geolocation Services in Web Browsers

The W3C Geolocation Services specification describes an API that provides scripted access to geographical location information, such as latitude and longitude, associated with a specific device via the device’s browser.   Geolocation Services are agnostic of the underlying location information sources, which can include the Global Positioning System (GPS) and location inferred from network signals such as IP address, RFID, WiFi and Bluetooth MAC addresses, and GSM/CDMA cell IDs, as well as user input.

The API allows developers to make both “one-shot” position requests and repeated position updates.  It also provides the ability to cache historic positions and use the data to add geolocation functionality to their applications. For the geeks in the room, and so newbies won’t be confused when they find more geolocation-based acronyms, Geolocation Services builds upon earlier work in the industry, including [AZALOC], [GEARSLOC], and [LOCATIONAWARE].

Note that Geolocation Services draws on third-party services – it does not do any geolocation itself from the device.  Thus, all W3C geolocation services – and this is not to minimize their value in developing online and mobile location-aware applications – are simply an aggregation tool to allow developers to draw on whatever third party sources are available to geolocate a device and feed that information into their applications.

Also note that there is nothing explicitly tying these services to the DNS record.  How do you make that connection?

Well, as the next section shows, W3C geolocation services can draw upon a service like hostip.info to get the geographic location of the host.

Third-Party Service Providers

At the end of the day, there is no “magic bullet” technology that links an IP address to a geolocation.  The only way this occurs is through a third-party service that has used numerous, usually labor-intensive and semi-manual, techniques to acquire a geolocation for an IP address.

A part of me wants to chat about Netgeo here – one of the earliest attempts to geolocate devices by their IP address.  However, Netgeo has not been maintained, and frankly I’ve covered pretty much everything they discuss in the prior sections.  But if you are interested in this bit of IP address-based geolocation history, click on the link.

Having used a number of these services, I can tell you that the majority do not do a particularly good job of geolocating a host server using an IP address, much less a specific device.  I’ll use hostip.info, as it is the most transparent.  hostip.info uses crowdsourcing to geolocate a host: developers and ISPs can enter the location of their servers into the hostip database, which is then freely available to anyone who wishes to use it.  Here is an example of how my location fared:
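As a sketch of what querying such a service looks like, here is a small Python example. The endpoint path and the plain-text “Key: Value” response format shown below are assumptions based on how hostip historically worked and may have changed; the sample IP and response values are purely illustrative.

```python
from urllib.parse import urlencode

def lookup_url(ip: str) -> str:
    """Build the query URL for a hostip-style plain-text lookup.
    The endpoint path is an assumption and may have changed."""
    return "http://api.hostip.info/get_html.php?" + urlencode({"ip": ip})

def parse_response(text: str) -> dict:
    """Parse the 'Key: Value' lines of a hostip-style response."""
    out = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            out[key.strip()] = value.strip()
    return out

# Illustrative response only, echoing the (wrong) result described below.
sample = "Country: UNITED STATES (US)\nCity: Tustin, CA\nIP: 192.0.2.1"
print(parse_response(sample)["City"])  # prints "Tustin, CA"
```

The point is that the heavy lifting is entirely in the database behind the service; the client side is a trivial HTTP call and string parse.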

Example of hostip.info service

Example of hostip.info IP address-based geolocation


Tustin is several hundred miles to the south of my location. So as you can see, not very accurate at all.

Circling Back to Our Example

So how do these two services handle geolocating my computer?  First, they are both using the W3C geolocation API.  What differs are the sources they use to identify a location.

Obviously Google is relatively accurate in this example, although I do not consider a one mile radius to be particularly useful for those of us who are trying to deliver fine-grained location-based services.  Google manages this through a combination of sources:

If you allow Google Chrome to share your location with a site, the browser will send local network information to Google Location Services to get an estimate of your location. The browser can then share your location with the requesting site. The local network information used by Google Location Services to estimate your location includes information about visible WiFi access points, including their signal strength; information about your local router; your computer’s IP address. The accuracy and coverage of Google Location Services will vary by location.

Google Chrome saves your location information so that it can be easily retrieved. This information is periodically updated; the frequency of updates depends on changes to your local network information.

I should add that when it comes to Android and Android Location Services, Google also uses GPS and Assisted GPS technologies for geolocation.

Android Location Services periodically checks your location using GPS, Cell-ID, and Wi-Fi to locate your device. When it does this, your Android phone sends back publicly broadcast Wi-Fi access points’ Service Set Identifier (SSID) and Media Access Control (MAC) data.

And this isn’t just how Google does it; it’s how everyone does it. It’s standard industry practice for location database vendors.

whatismyipaddress, on the other hand, is only using the IP address from a third-party service.  This is a choice on their part because I haven’t opted in to use location-based services.  Once I do, I get the map below, and my location by one source is correctly shown as Carmel Valley (it is interesting to note, as well, the different results depending on which third-party provider you use).  But this is because we are no longer relying on IP address-based geolocation alone: whatismyipaddress.com is also using wifi-based geolocation and cell-tower triangulation via the W3C geolocation API to get a more accurate read.


Basically, after exploring IP address-based geolocation, the conclusion is that it is a non-starter for any application except those that can live with the broadest of location estimates.

Next up: Assisted GPS.



PostHeaderIcon A Primer on Geolocation and Location-Based Services: The Global Positioning System

Well, here we are, how many months later, and it’s my first post? My friends have been ribbing me – an online guy active in social media – for not updating his blog for almost a year.

Sadly, I haven’t been all that active in social media, either, until the last couple of weeks. Checked out completely. Nowhere to be found in real-time. Gone from the immediate consciousness of those I hold near and dear (or at least, in the case of Twitter, the great founts and filters of information). No longer part of the noise. Like a black hole, I might be there but no one could detect me. Tsk, tsk.

The reality is, however, that for the last year I have been heads down, moving deeply into machine learning, audience segmentation, behavioral targeting and recommendation engines for mobile advertising. You try pulling all that down and back into your tool set after 25 years away from a masters in robotics (which involved what was then called adaptive learning), even as you deliver product specs, run deep analytics, and build product. I dare you. I ain’t done yet, but I’ve finally gotten to a point where I can take a breath, look up and see what’s going on in the world. However, it’s not like it used to be when it comes to my participation in social media. Deep analysis requires intense concentration (at least for me), and all the interruptions from Tweetdeck just kill my train of thought. So it’s beginning of day or end of day mostly, with an occasional day where I can just relax and range through lots of immediate information.

Sigh. The price I pay for working in an area that is intensely mathematical and has become my passion. The price I pay for reveling in the ability to build incredible products on huge data. A price, but well worth it.

So now that the mea culpa is past, what’s on the agenda for today? Given I have been working in the mobile space, it seems appropriate to start with the issue of geolocation. Geolocation data represents a relatively new input to data mining, but one that can provide a host of opportunities to identify and segment audiences. Admittedly, there are a number of services like Loopt, Foursquare, and others that make their bones on using geolocation to understand where you are and what might be of interest to you.  But believe it or not, for the majority of businesses and even many technology companies, the whole idea of using geolocation data as elements of customer profiles is completely new. Many are still trying to wrap their heads around how to best set up a mobile website and integrate it into their overall marketing programs. The science of geolocation? Not even on the radar yet.

It has certainly been an eye opener for me to learn this field – it is deep, rich and complex. So I thought I would build a primer for those who, like me, had to start from scratch and understand how geolocation works and how you might use it to enhance your customer offerings. There is a lot to cover, so this will be another multipart series.

The Basics of Geolocation

Most people know that anyone with a mobile device can be geolocated.  But what many people do not know is that a device can also be geolocated if you are online through information transmitted by the device’s browser (especially Google Chrome and FireFox).  The combination of these technologies provides a powerful set of tools for tightly locating a device (meaning a radius of under 200 feet) even when GPS, the most fine-grained way of locating a device,  is not turned on.

The core methods by which a device can be geolocated and which we will discuss in the next sections are:

  • The Global Positioning System or GPS
  • IP Address
  • Assisted GPS
  • Network Base Station Data
  • Network (or Cell Tower) Triangulation

The Global Positioning System

For those few who have never been to a James Bond flick, watched Law and Order, or seen  a TomTom commercial, GPS stands for Global Positioning System. But even if you know the term “GPS” you may not know how it works.  So let’s start there.

GPS is a space-based satellite navigation system that provides location and time information in all weather, anywhere on or near the Earth, where there is an unobstructed line of sight to four or more GPS satellites. It consists of 24 satellites, is maintained by the United States government and is freely accessible to anyone with a GPS receiver.

GPS is accurate to a very tight radius – current technologies can achieve a horizontal accuracy of ~1 meter (3 feet) and a vertical accuracy of ~1.5 meters.  But GPS accuracy for most mobile phones and tablets is probably on the order of a 30-50 foot radius.  Garmin, a maker of navigation systems, says its devices are accurate to 15 meters, for example.

Most mobile devices have a GPS receiver built in, although it is not turned on by default because it drains the battery very quickly.  This default is, in fact, the biggest hurdle to accurately geolocating a device, since GPS is by far the most accurate mechanism available.

GPS satellites transmit two low power radio signals, which travel by line of sight. As a result, they can pass through clouds, glass and plastic but will not go through most solid objects such as buildings and mountains.

A GPS signal contains three different bits of information – a pseudorandom code, ephemeris data and almanac data.

The pseudorandom code is simply an I.D. code that identifies which satellite is transmitting information.

Ephemeris data is information GPS satellites transmit about their location (current and predicted), timing and ‘health’. This data is used by GPS receivers to enable them to estimate location relative to the satellites and thus position on earth.

Almanac data tells the GPS receiver where each GPS satellite should be at any time throughout the day. Each satellite transmits almanac data showing the orbital information for that satellite and for every other satellite in the system.

Each GPS satellite is located ~12,000 miles above the Earth and makes two complete orbits every day. GPS receivers in mobile devices attempt to locate four or more of these satellites, calculate the distance to each, and then use the information to geolocate a 3D position (latitude, longitude, altitude).  Once the user’s position is determined, the GPS receiver can calculate other information, such as speed, bearing, track, trip distance and much more.

The calculation is based on trilateration, a mathematical method for determining the absolute or relative position of points using the geometry of circles, spheres, and triangles.  Unlike triangulation, which is what most people think GPS uses to fix a location, it does not involve the measurement of angles.  To emphasize this, I have chosen a slightly more technical diagram to represent the concept.  Note that this calculation does not just involve finding the intersection of the three radii (point B, which is what we are geolocating) – there are also components that relate to the relative positions of the centers of the three circles.

trilateration image for GPS

Geometric Representation of Trilateration Model
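To make the idea concrete, here is a minimal two-dimensional analogue in Python: subtracting the circle equations pairwise yields two linear equations whose solution is the receiver’s position. Real GPS solves the 3-D, four-satellite version of this (the fourth satellite lets the receiver also solve for its own clock error); this sketch only illustrates the geometry.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D trilateration: find the point whose distances to three known
    centers equal the three measured radii."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle equations pairwise removes the quadratic terms,
    # leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    # Solve the 2x2 linear system by Cramer's rule.
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

# A receiver at (1, 2), measured from three "satellites":
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
radii = [math.dist(s, (1.0, 2.0)) for s in sats]
print(trilaterate(sats[0], radii[0], sats[1], radii[1], sats[2], radii[2]))
```

Note there is no angle measurement anywhere in this computation, which is exactly the distinction from triangulation made above.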

Sources of GPS Signal Errors

As we start talking about GPS accuracy and the accuracy of other geolocation technologies, we need to understand what types of errors can enter into each system.  For GPS, there are six types of signal errors that can occur.  Fortunately, even with these errors, GPS is incredibly accurate. The table below summarizes the size of the potential effect of five of them (the sixth, too few visible satellites, has no characteristic size); each is then described in more detail.


Source of Error        Size of Error
Multipath Effect       +/- 1 meter
Atmospheric Effects    +/- 5 meters
Receiver Clock Errors  +/- 2 meters
Geometry Shading       +/- 2.5 meters
Ephemeris Errors       +/- 1 meter


Signal Multipath Errors.  Signal multipath errors are caused by the GPS signal reflecting off objects such as tall buildings or other large, highly reflective surfaces before it reaches the receiver.  This increases the travel time of the signal, thus introducing errors into the calculation.   The resulting error typically lies in the range of a few meters.

Atmospheric Delays.  Atmospheric delays represent the largest potential source of GPS signal error. Satellite signals slow as they pass through the ionosphere and troposphere. While radio signals travel at the velocity of light in outer space, their propagation through the ionosphere and troposphere is slower. In the ionosphere, a large number of electrons and positively charged ions are formed by the ionizing force of the sun. These charged particles refract the electromagnetic waves from the satellites, elongating the runtime of the signals. In the troposphere, varying concentrations of water vapor further elongate the runtime of signals. These errors are mostly corrected by calculations in the GPS receivers, since typical variations of the velocity while passing through the atmosphere are well known for standard conditions.

Receiver Clock Errors. A receiver’s built-in clock is not as accurate as the atomic clocks onboard the GPS satellites. Therefore, it may have very slight timing errors.

Ephemeris Errors. Ephemeris errors occur when a satellite incorrectly reports its position.

Too Few Visible Satellites. The more satellites a GPS receiver can “see,” the better the accuracy. Buildings, terrain, electronic interference, or sometimes even dense foliage can block signal reception.

Geometry Shading. Another factor influencing the accuracy of the reported position is “satellite geometry”.  Satellite geometry describes the position of the satellites relative to each other from the view of the receiver. Ideal satellite geometry exists when the satellites are located at wide angles relative to each other. Poor geometry results when the satellites are located in a line or in a tight grouping. For example, if a receiver sees 4 satellites and all are arranged in the northwest, this leads to a “bad” geometry. In the worst case, no position determination is possible at all, when all distance determinations point to the same direction. Even if a position is determined, the error of the positions may be significant, although in practice it is usually no more than 2.5 meters. If, on the other hand, the 4 satellites are well distributed over the whole firmament the determined position will be much more accurate.
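As a back-of-the-envelope illustration of how the error sources in the table above might add up: if the errors were independent, a standard engineering estimate combines them by root-sum-of-squares. This is just a sketch of that arithmetic, not how any particular receiver actually reports its accuracy.

```python
import math

# Error sizes from the table above, in meters. Combining independent
# error sources by root-sum-of-squares is a common engineering estimate;
# it is an illustration only.
errors = {
    "multipath": 1.0,
    "atmospheric": 5.0,
    "receiver clock": 2.0,
    "geometry shading": 2.5,
    "ephemeris": 1.0,
}

total = math.sqrt(sum(e ** 2 for e in errors.values()))
print(f"combined error estimate: +/- {total:.1f} m")  # prints "... +/- 6.1 m"
```

The atmospheric term dominates the sum, which is consistent with the text’s claim that atmospheric delay is the largest potential error source.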

 Next Installment: Geolocation using IP Address


PostHeaderIcon Will HTML5 Replace Native Apps Any Time Soon?

I’m back.

It has been well over nine months since I last posted, and I am nothing short of disgusted with myself – a guy who promotes blogging regularly, even if it is just a short post. In a classic case of “do as I say, not as I do,” I got so lost in a major project – unexpectedly so, may I add – that time for blogging disappeared faster than newly released iPads do from store shelves.

The project took me back to my online product development roots, and allowed me to build an incredible analytics system for mobile devices. I figured I knew how to build algorithms and real-time search engines, so it might be good for me to know how to build the analytics system that measures the results and feeds them back into the behavior of the engine.

I am now officially hooked on mobile as part of the repertoire. Mobile today reminds me of the web in 1996 – 1998 – it is the wild west of technology markets. The rise of smartphones and tablets has changed the entire dynamic of how people interact with information. Everyone is experimenting, and there are thousands of small firms making bets on various approaches to mobile. Evolution is fast and furious, with winners and losers coming and going almost overnight as market changes and technology enhancements create disruptive change in the ecosystem.

One ongoing technology argument is HTML5 versus native apps and whether HTML5’s advantages of cross-platform compatibility, channel freedom (not being dependent on the app stores), enhanced discoverability due to better search engines, large base of developers, better analytics tools, and substantially lower cost of development will cause it to overtake native app development as the platform of choice for next-generation mobile apps.

I was at the AppNation conference last week and Trip Hawkins – someone who knows about as much about mobile gaming as anyone – weighed in.

“The browser will beat the app store,” he argued. “It’s more convenient, it’s driven by search and it’s more viral for consumers. It doesn’t matter what device your friend has… All technologies start as silos, but get 100 times bigger when they become inter-operable; think about roads or text messaging.”

I agree with him completely – ubiquity trumps functionality every time, and ultimately the ubiquitous platform surpasses the proprietary platform because more people invest in evolving it to do what they need it to do. Think open source and the “community of the commons” which helped evolve platforms like Java, Linux, Twitter, and Facebook. Or even more fundamentally and applicable to the HTML5 discussion, think the original Apple versus Microsoft Windows. Windows ultimately dominated – and still dominates – Apple as a platform because Microsoft early on allowed its OS to be licensed to any hardware manufacturer, extending the franchise and opening the market to an overwhelming level of app development investment that to this day dwarfs what Apple can offer.

This is the economic law that Brian Arthur has described and quantified in his evolving work on Complexity Economics and something we all understand intuitively nowadays. That is, the value of the network (think technology network) increases as it becomes more ubiquitous, thus drawing more investment into it which only further increases its value. It is a virtuous cycle, and one reason becoming the market share leader in a new technology is so critical.

Apple has learned a few things over the years, and with the iPhone and iPad they have quickly moved to gain dominant share and the largest developer community. They are evolving their tools for developers and their OS faster in order to keep their developer community growing and provide more opportunities for developers to make money on the platform. It gives Apple a tremendous lead and advantage at this point against any open platform alternative.

But developing native apps is, relative to HTML5, time consuming and expensive. Unless you are in an industry like gaming where the quality of immersive graphics and performance is critical, HTML5 offers a cost and time to market advantage that provides a lower barrier to entry into the mobile market.

Then there is also an arrogance to Apple that is driving developers to consider ways to bypass the platform. Consider:

  • The iTunes store is a bottleneck. It is a single place to sell, and it gives Apple tremendous power over who succeeds and who does not. And Apple has been more than willing to display its dominance to developers and the community at large.
  • There is a relatively and unpredictably long approval process to get an app live, and often apps are rejected for issues that were not apparent in advance.
  • Apple changes the rules continuously and in many people’s minds, arbitrarily. Take the recent paid download spat with Tapjoy and others. In a single dictum, Apple basically removed a major monetization mechanism for developers.
  • You cannot tell when a push notification actually goes out through Apple. So you pay someone like Urban Airship to send the token to Apple, but then about 1/3 of the time the pushes never arrive. Apple is a “black box” on this and won’t tell you why (although they say they are working to fix this). Now if that isn’t arrogance, I don’t know what is.
  • Apple keeps track of everything developers’ users do on its platform, but does not make the data easily accessible to allow developers to improve their products and optimize their revenues.

All of this gives Google and Windows Phone 7 a potential opening to exploit HTML5’s advantages, and they will. Google is already doing so, although they are to a certain extent hedging their bets right now. And Facebook pushing HTML5 apps gives the platform momentum. Eric Schmidt was at Sun with me when Java came to market, and knows first-hand how to leverage an open platform to disrupt an entrenched proprietary competitor. Voila Android. And very soon, Android devices will have a larger installed base than iOS devices, if they don’t already.

The weakness Android has in the app market is fragmentation. In my limited experience looking at the data, there have got to be at least 30 variations of Android out there. When you look at the ROI for developing on Android, it only pays to develop for one or two versions – and this sentiment has been documented extensively in magazines like Fortune.

But that argues strongly for using HTML5 as a development platform. It allows developers to undercut Apple’s power position in the industry and develop for all Android phones.

HTML5 thus provides the opportunity to develop faster, cheaper, with a wider audience to sell to. And Apple is more than providing the motivation, just like Microsoft did for Java. As a rule, developers are a libertarian bunch who hate being dictated to by anyone, and they are getting increasingly upset with Apple.

The only question is how fast this transition will occur, and that remains to be seen. I think ABI Research is right when they say that native app development will peak in 2013 – that just feels right to me given my experience with numerous technology adoption cycles. But any prediction like that is fraught with peril. On the one hand, some killer mobile app built on HTML5 can come along tomorrow and change user behavior to the point where the browser becomes a natural first point of entry into a mobile device. On the other, Apple could change its modus operandi and salve the wounds of upset developers.

So while it is a matter of time, developers for now will have to make a bet on when HTML5’s time will come.


PostHeaderIcon Ich Bin Ein Mobile Netizen

Like many I have an iPhone. Admittedly I still have a 2G because I’ve become more thrifty in my old age (funding a national-level gymnastics career and private school tuition will have that effect) even at the risk of having my Silicon Valley friends call me a technology troglodyte. But I am an avid user of all the main apps (I can bump with the best of them), use it for location-based searches (e.g. AroundMe, Google maps), send images to Facebook, Tweet in real-time at events, Ping when I can, check my blog traffic with Google apps for the iPhone, and know how to plan/execute advertising campaigns specifically on mobile phones. No one who knows me would say I am in any way behind on my use of mobile technology.

But until now, I’ve never felt like mobile has really changed the basic way I have experienced the world. I go to trade shows and listen to all the ideas for the latest mobile services or how mobile concepts should change my daily life, but I have never felt that I had crossed a Rubicon with the real-time nature of mobile in the same way I did when I got my first laptop or sent my first Tweet.

That changed yesterday. I had the pleasure to take my family to a performance of The Smuin Ballet at the Sunset Center in Carmel. I am an admitted ballet snob, and Smuin is a wonderful company with talented, disciplined dancers and creative choreography. So whenever they are in Carmel we go to see them. The second act was a performance of Smuin’s Medea, which was first performed in 1997. It is a dramatic ballet that retells the story of Medea, the wife of Jason, the intrepid explorer of Jason and the Argonauts.

The myth of Jason and Medea is a dark and haunting tale of revenge and self-destruction. But in the dark, as the curtain rose on Act 2, despite being a student of mythology, I couldn’t for the life of me remember even the outlines of the tale to tell my wife and eight-year-old daughter, neither of whom knew the first thing about this particular myth.

So what did I do? I pulled out my handy iPhone and while shielding it so as not to disturb others, I did a web search on Medea and pulled up the Wikipedia entry.

Despite the dark and the small print, I was able to glean from Wikipedia the details of the story. Jason meets the sorceress Medea, daughter of King Aeetes of Colchis. Medea falls in love with Jason, and he convinces her to help him acquire the Golden Fleece in return for a pledge of marriage. After acquiring the Fleece, Jason and Medea flee to Corinth and have children. In Corinth, King Creon offers his daughter Glauce to Jason in marriage, and Jason feels he cannot pass up the opportunity to marry a royal princess. Despite explanations and promises of support from Jason, Medea feels betrayed. She gives Glauce a wedding gown covered in poison that kills the bride. Then, to ensure that nothing of Jason’s will outlive him, she kills their two sons.

I quickly related the story to my wife and daughter (who was especially engaged by the dark and intense drama onstage). I then sat back and enjoyed the performance, knowing that they could now understand what they were seeing and, as a result, enjoy it with more insight.

Not a big deal, you might say. I say otherwise. Prior to this, I had used my mobile capabilities to get directions or find a resource nearby. I was using the mobile device as a tool for location-based information. It was a parallel use to a GPS, which while practical, was not a fundamental change in my experience over a map. It was easier than a map and had better information, but the experience was just a replacement of one form of information (paper) for another (digital).

In this case, for the first time I used my iPhone to plug into the wisdom of the commons, into the global village, to enhance and extend the quality and content of an experience. In other words, the phone and the information it provided in real-time became an integral part of the experience – albeit an unintentional part (from the perspective of the choreographer and the dancers) and, because it was unintentional, a somewhat distracting one. But this little thing, this one act, was a fundamental change in how I interacted with the world. This was not a substitution of paper-based data with a digital version. Instead it delivered the true promise of the mobile web. It allowed me to access a completely new set of information that was then added to a real-time, real-world event to enhance the experience. The equation is real-time event + Internet information = a real-time multimedia experience.

Admittedly, the experience was not perfect because it wasn’t intentionally developed by the show’s producers. But what if Smuin developed a mobile video app that, during intermission, allowed audience members with iPhones to see the full story of Medea in an entertaining way that also tied the story to how the ballet attempts to recreate it in dance? That would be a more intentional, and less invasive, way to provide the same integrated (but completely new) experience.

So yesterday for the first time, I experienced the true power of the mobile web. I can now say “Ich Bin Ein Mobile Netizen.”
