About Online Matters


Search Engines: Social Media, Author Rank and SEO

In my previous discussions of social media, channel architectures, and branding, I mentioned that I am manic about locking down my online brand (onlinematters), because there seems to be some relationship in the universal search engines between the number of posts I make, the number of sites I post from under a specific username, and how my posts rank.  It is as if an author is given some measure of trust the more he publishes from different sites and the more people see, read, and link to what he has written.  I am not talking about authority given to the actual content written by the author – that is the core of search.  I am talking instead about using the author's behavior and success as a content producer to change where his content ranks in the results for a specific search term.  It is similar, in many ways, to what happened in the Vince update, where brand became a more important ranking factor.  In this case, the author and the brand are synonymous, and when the brand is highly valued, those results would, under my hypothesis, be given an extra boost in the rankings.

This was an instinct call, and while I believed I had data to support the theory, I had no research showing that an underlying algorithm had actually been considered or created to measure this phenomenon in universal search.

I thus considered myself twice lucky while doing my weekly reading on the latest patents to find one that indicates someone is thinking about the issue of "author rank."  On October 29th, Jaya Kawale and Aditya Pal of Yahoo!  applied for a patent with the name "Method and Apparatus for Rating User Generated Content in Search Results."  The abstract reads as follows:

Generally, a method and apparatus provides for rating user generated content (UGC) with respect to search engine results. The method and apparatus includes recognizing a UGC data field collected from a web document located at a web location. The method and apparatus calculates: a document goodness factor for the web document; an author rank for an author of the UGC data field; and a location rank for web location. The method and apparatus thereby generates a rating factor for the UGC field based on the document goodness factor, the author rank and the location rank. The method and apparatus also outputs a search result that includes the UGC data field positioned in the search results based on the rating factor.

Let's see if we can't put this into English comprehensible to the common search geek.  Kawale and Pal want to collect data on three specific ranking factors and combine them into a single, weighted ranking factor that is then used to influence the rank ordering of what they term "User Generated Content," or UGC.  The authors note that the typical ranking factors in search engines today are not suitable for ranking UGC: UGC items are fairly short, they generally do not have links to or from them (rendering back-link-based analysis unhelpful), and spelling mistakes are quite common.  Thus a new set of factors is needed to adequately index and rank UGC.

The first issue the patent/algorithm has to deal with is defining what the term UGC includes.  The patent specifically mentions "blogs, groups, public mailing lists, Q & A services, product reviews, message boards, forums and podcasts, among other types of content." The patent does not specifically mention social media sites, but those are clearly implied. 

The second issue is to determine which sites should be scoured for UGC.  UGC sites are not always easy to identify.  Consider a directory in which people rank references on a 5-star scale, and that rating is the only user input.  Is this easy to identify as a site with UGC?  Not really, but somehow the search engine must decide whether the site is within its valid universe.  Clearly, some mechanism for categorizing sites with UGC needs to exist, and while Kawale and Pal use the example of blog search as covering a limited universe of sites, their patent gives no indication of how sites are to be chosen for inclusion in the crawl process.

Now we come to the ranking factors.  The three specific ranking factors proposed by Kawale and Pal are:

  • Document Goodness.  The Document Goodness Factor is based on at least one (and possibly more) of the following attributes of the document itself: a user rating; a frequency of posts before and after the document is posted; a document's contextual affinity with a parent document; a page click/view number for the document; assets in the document; document length; length of a thread in which the document lies; and goodness of a child document. 
  • Author Rank.  The Author Rank is a measure of the author's authority in the social media realm on a subject, and is based on one or more of the following attributes:  a number of relevant posted messages; a number of irrelevant posted messages; a total number of root documents posted by the author within a prescribed time period; a total number of replies or comments made by the author; and the number of groups to which the author belongs.
  • Location Rank.  Location Rank is a measure of the authority of the site in the social media realm.  It can be based on one or more of the following attributes: an activity rate in the web location; a number of unique users in the web location; an average document goodness factor of documents in the web location; an average author rank of users in the web location; and an external rank of the web location.

These ranking factors are not used directly as calculated.  They are "normalized" for elements like document length and then combined by some mechanism into a single UGC ranking factor.
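As a thought experiment, here is a minimal Python sketch of how such a combined rating might work.  The attribute values, the normalization, and the weights are all my assumptions for illustration – the patent does not disclose how the factors are normalized or weighted.

```python
# Hypothetical sketch of the patent's combined UGC rating factor.
# Normalization and weights below are assumptions, not the patent's method.

def normalize(value, max_value):
    """Scale a raw attribute into [0, 1]; the patent's actual
    normalization (e.g. for document length) is not specified."""
    return min(value / max_value, 1.0) if max_value else 0.0

def ugc_rating(doc_goodness, author_rank, location_rank,
               weights=(0.4, 0.35, 0.25)):
    """Combine the three normalized factors into one rating.
    A weighted sum is my assumption; the patent leaves the
    combination mechanism unspecified."""
    w_d, w_a, w_l = weights
    return w_d * doc_goodness + w_a * author_rank + w_l * location_rank

# Example: a forum post by a prolific author on an active site.
score = ugc_rating(
    doc_goodness=normalize(12, 20),    # e.g. user ratings, thread length
    author_rank=normalize(340, 400),   # e.g. relevant posts, group count
    location_rank=normalize(7, 10),    # e.g. site activity, unique users
)
```

With these made-up inputs the post would score 0.7125 out of 1, and it would then be positioned in the search results according to that rating.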

The main thing to note – and the item that caught my attention, obviously – is Author Rank.  Note that it has ranking factors that correspond with what I have been hypothesizing exist in the universal search engines.  That is to say, search results are ranked not only by the content on the page, but by the authority of the author who wrote it, as determined by how many posts that author has made, how many sites he has made them on, how many groups he or she belongs to, and so on.

Can I say for certain that any algorithm like this has been implemented?  Absolutely not.  But my next task has to be to design an experiment to see if we can detect a whiff of it in the ether.  I'll keep you informed.


Social Media Channel Architectures – Part 2

We left off last time having defined a conceptual approach to channel architecture with an example of a new type of soap – “Greensoap” which has three unique advantages: it sends fewer harmful chemicals into the water supply, it doesn’t get mushy when stored, and it is 50% cheaper than other leading brands.  How do we leverage a social media channel architecture for this product launch?

Following the model (and, in fact, basic Marketing 101), we first need to know what audiences we are targeting.  In this case there are three, which can be defined based on attitudes and behaviors:

  • Green Consumers.  Green consumers' most important attitude is the belief that the environment must be protected, that current economics doesn't measure the "true cost" of products, and that if the true cost were available we would see that dumping chemicals into the environment (and later having to remediate) is more expensive than just selling a green product in the first place.   They are split evenly between men and women, predominantly 18 – 35, with an average income of $45,000/year.  Their behaviors: they tend to shop at smaller stores with a focus on environmental sensitivity, they are relatively price insensitive up to a 20% premium over non-green products, and they tend to be vocal in online communities around green issues.
  • Vacation Travelers.  These folks tend to bring soap rather than use what is in the hotel room because their stays are longer and they travel with their families of 2.2 kids (whereas business travelers stay short periods, want to travel as light as possible, and so use the in-room soap provided by the hotel).  This audience is mainly women 30 – 50 and is concerned with minimizing the burdens of "household care" – meaning keeping a clean house, clean kids, and an organized environment.  75% have jobs and are incredibly time constrained and stressed.  They tend to shop at one store, usually a major grocery chain outlet between work and home.  If a product makes their life one iota easier, they will consider it.  They are highly swayed by friend and family validation that a product meets its promises.  Once they try a product, they are incredibly loyal up to a price premium of 25% over their current brand.
  • The Thrifty Shopper.   This buyer always worries about money and saving it is their first priority.  Split equally between men and women, the demographic is flat across all age groups, with a slight peak within 60+ years groups due to their fixed incomes (+ life experience during Depression and WWII).  This buyer shops in big box stores and low-cost chain grocery stores like Safeway and Savemart.  No product loyalty whatsoever – they’ll switch brands for as little as a 5 cent savings. 

The next step in the model is to figure out what kinds of media these audiences use and where they are likely to be on the web.  Figure 1 shows the mapping of audience to media, platform, and social media sites.  The sites listed are intended to be category “examples” – meaning that they are only one potential site that could be used.  For example, in column 2, cafemom indicates women-focused social media networks. 

Figure 1: Mapping Prospective Audiences to Social Media Channel Categories

The third step is to define our messages to each audience (if we haven’t already done so).  For purposes of this example, we’re going to keep this to one message per consumer segment:

  • Green Consumers: Greensoap leaves you AND the world a cleaner place.
  • Vacation Travelers: Greensoap keeps your family clean and your life simple.
  • The Thrifty Shopper: Greensoap keeps you clean and green at 50% of the cost of other soap.

In the next post we’ll put together the campaigns and then show how we apply them to the various media channels.  Stay tuned.

 


The Economics of Twitter for Advertisers, Part 2

Let’s continue our discussion of Twitter economics.

The average Twitterer has 549 followers.  This number is skewed by corporate accounts (e.g., our travel sites) and news sites that have very large follower counts.  I have gone through a number of accounts to determine a more realistic average – and I am going to assume 200 followers.  Our experience is that in the first generation of followers, 10% pass along an offer (the theory behind this is also quite enlightening, but I will not cover it here).  For subsequent generations the rate is much lower, usually in the 2–5% range.  We mentioned previously that 15,000 is the average number of followers for the Big 3 sites (Expedia, Orbitz, Travelocity).  The calculation therefore looks something like the following:

15,000 (followers)

+ (15,000 * 200 *.1) = 300,000 (first generation pass along)

+ (300,000 * .02 * 200) = 1,200,000 (second generation pass  along)

= 1,515,000 (total number of individuals)
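The arithmetic above, generalized to any number of pass-along generations, can be sketched as follows (using the post's assumptions: a 15,000-follower base, 200 average followers each, and 10% then 2% pass-along rates):

```python
# Sketch of the follower amplification calculation above.
# Base size, follower average, and pass-along rates are assumptions
# stated in the post, not measured Twitter data.

def amplified_audience(base, avg_followers, pass_rates):
    """Total reach after successive pass-along generations.
    Each generation's reach = previous reach * pass rate * avg followers."""
    total = base
    reach = base
    for rate in pass_rates:
        reach = reach * rate * avg_followers
        total += reach
    return total

total = amplified_audience(15_000, 200, [0.10, 0.02])
print(f"{total:,.0f}")  # 1,515,000
```

Adding a third generation (say, another 2% pass-along) would be a one-element change to the rate list, which is why I stop at two: each generation's contribution depends entirely on how far you trust the pass-along assumptions.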

The number of impressions is then this base of 1,515,000 multiplied by the number of offers "seen."  Expedia seems to be making offers every five minutes, as does Hotwire (they must have set up some kind of automated feed into their Twitter accounts).  Travelocity and Orbitz seem to make offers once a day or less.  The big unknown is how many offers the average follower actually sees.  Followers aren't always online; if online, they are often doing other things and their attention is not focused on Twitter; or they are on Twitter, but the offer doesn't register through the noise of all the other tweets.  Without any really good data, I will assume that each individual "sees" two offers per month – which I hope is a conservative number.

This means that the total number of impressions is: 1,515,000 * 24 = 36,360,000 per year

Given this number of impressions, what is the potential economic impact for Expedia, Orbitz, and Travelocity?  Typical conversion rates on these sites run 3–5%, according to various published data I have seen.  But this is not a situation where someone has typed in a keyword or clicked on an ad that appears when a keyword is typed in.  This is much more of a grazing situation.  Many offers are made, but only a few are relevant to any specific individual.  So the response rates look more like email, and yet they are even smaller.  Why?  Because while the first generation has signed up to receive notifications (the parallel of an email list), the second and third generations have not.  Our first benchmark is therefore an email conversion rate from an initial mailing, which is calculated as follows (I am ignoring losses due to bad addresses, since that is not an issue for online accounts – although see below for the related issue of dormant accounts):

# of impressions * open rate * conversion rate

Typical average open rates for good emailings are 10-12%, and conversion rates vary but let’s assume 2%, which is a number that comes from my experience with emailings.  That would yield the equivalent of a .2% conversion rate for the first generation.  But for the second and third generations, the response would be substantially smaller, maybe .1% or even as low as .05%.  Since the first generation is such a small number of individuals, I will use .1% as the conversion rate for the entire base of impressions.
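Plugging in the assumed numbers (a 10% open rate and a 2% conversion rate, both estimates from my own experience rather than published benchmarks), the effective first-generation rate works out like this:

```python
# Email-benchmark arithmetic from the formula above:
# effective rate = open rate * conversion rate
open_rate = 0.10        # low end of the assumed 10-12% range
conversion_rate = 0.02  # assumed from email campaign experience

effective_rate = open_rate * conversion_rate
print(f"{effective_rate:.1%}")  # 0.2%
```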

The last pieces of data we need are the number of tickets purchased, the number of purchases per individual in a year, and the average revenue to the travel agency from each ticket purchased.  Again, I am going to use data that is fairly well known in the travel business.  These are gross averages and do not take into account a number of variables, such as the type of travel (business vs. personal), destination (domestic vs. international), and type of flier (managed vs. unmanaged).

Number of trips per year: 2

Average number of tickets purchased/trip: 2.2

Avg revenue per ticket to agency: $25

So now let’s do the annual revenue calculation for the economic impact of Twitter for a large online travel agency:

36,360,000 * .001 * 2 * 2.2 * 25 = $3,999,600

 

For a big travel agency, which has around $1B in annual revenue, this is small (0.4% of revenue), but it isn't chump change either.

Before I close, one other issue needs to be explored – and that is the issue of dormant accounts.  The model presented assumes that every individual who is following, or who receives a retweet or direct message, is an "active" Twitter user.  But as many of us know from our own experience, you may set up a Twitter account and then never go back to it.  Or you may visit it only rarely.  I call these dormant accounts.  There has been a lot written on this topic – just type "dormant twitter accounts" into Google.  Nicholas Carlson recently wrote a post for BusinessInsider.com titled "60% Of Twitter Users Quit After A Month". Carlson cites Oprah (@oprah) as an example of someone who has become "bored" with Twitter and reports that Nielsen Online estimates that 60% of Twitter users quit after a month. The post goes on to say that the 60% number may be misleading, as Nielsen only measures Twitter usage on Twitter.com, not from mobile use or apps like TweetDeck.  Given that this data is pretty consistent with other social media sites, and that a lot of tweets happen off of twitter.com, I think we can safely assume a dormancy rate for Twitter of 50%.
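Putting the whole model together – impressions, the blended 0.1% conversion rate, the trip and ticket assumptions, and the 50% dormancy haircut – the full chain looks like this (every input is an assumption stated earlier in this post, not measured data):

```python
# End-to-end sketch of this post's Twitter revenue model.
# All inputs are the post's stated assumptions.

audience = 1_515_000                 # followers + two pass-along generations
impressions = audience * 24          # 2 offers "seen" per month * 12 months
conversion = 0.001                   # blended 0.1% conversion rate
trips_per_year = 2
tickets_per_trip = 2.2
revenue_per_ticket = 25              # dollars to the agency

annual_revenue = (impressions * conversion * trips_per_year
                  * tickets_per_trip * revenue_per_ticket)
print(f"${round(annual_revenue):,}")       # $3,999,600

# Halving for the assumed 50% dormant-account rate:
active_revenue = annual_revenue * 0.5
print(f"${round(active_revenue):,}")       # $1,999,800
```

The structure of the model matters more than the point estimate: halve the pass-along rate or the offers seen per month and the result halves with it.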

In this case, our approximately $4mm in annual revenue has now become $2mm in annual revenue. 

Not huge, but I think we could say that the ROI on the costs of maintaining a corporate Twitter account for this purpose is probably pretty spectacular.

I do not doubt that this post will cause a lot of discussion/controversy (at least I hope it will), and I look forward to all feedback. 


The Economics and ROI of Twitter for Advertisers, Part 1

All I hear about is what the value of Twitter is (hopefully) to investors. What is Twitter's business model?  How will it make money? As a business person, I really don't care how much Twitter's founders and investors will make (no doubt a heck of a lot more than I ever will). I care about my favorite radio station – WIFM – better known as What's In It For Me?  The two questions are not unrelated.  For Twitter to make money, it will almost certainly need a base of advertisers who want access to its audience.  There may be other revenue streams that the creative minds at Twitter will conceive over time, including some form of CPM, CPC, CPA, or CPL.  That advertising opportunity, however, does not exist on today's Twitter.  Yet advertisers are trying to leverage Twitter now to increase sales.

Is there a way to model the ROI from investing in a presence on Twitter as it exists today?  Let me suggest that there is and provide the approach and calculations below. 

First, we need to understand what Twitter is and how its audience uses it.  I view Twitter as multithreaded Internet chat.  It's like being in a coffee house with conversations going on all around you and choosing which ones you want to participate in.  Moreover, the form of communication – 140 characters – lends itself mainly to status updates and quick bursts of timely information.  Twitter is at its best when used to communicate information whose value deteriorates at a rapid rate.  This is especially true because the information is streamed – and the stream flies by so fast that anything much older than a few hours is effectively lost unless you actively search for it in historical tweets (a change in consumer behavior that few have associated with Twitter yet).  Thus, Twitter in its near real-time form is perfect for businesses and business models whose information quality degrades quickly – e.g. stock prices, airline ticket prices and availability, exploding special offers/deals (offers with a specified end date), employment opportunities, and immediate local expiring opportunities (e.g. ticket availability at the stadium just before a big game), among others.

So let’s say you are an online travel agency that sets up and maintains a Twitter account.  Do you care?  Is it worth the effort to put specials out through that mechanism?  Let’s look at some numbers.  Here are the number of followers of various travel agency Twitter accounts:

  • Expedia – 13,281
  • Orbitz – 14,087
  • Travelocity – 16,133
  • Cheapoair – 1,925
  • Vayama – 1,193
  • Travelzoo – 8,020
  • Priceline – 16,212
  • Hotwire – 3,769

Assume that you have a Twitter account with 15,000 followers (roughly the average of the big sites) where you post daily specials on travel.  Your followers are obviously interested in these opportunities – so we have an audience that, relatively speaking, is highly motivated to purchase if they can find the deal they want.  As active participants, they are likely to forward information to friends and family who have a similar interest – so they retweet, or they forward via direct message.   I call this the amplification effect.

The key question is what percentage of people pass the information along and then, subsequently, what percentage of those recipients retweet to the next level.  We actually have good data on this from word-of-mouth marketing campaigns we have run for some of our travel industry clients and from Twitter follower data.  There is also research that aligns well with our experience (see Norman, A. T., and Russell, C. A. (2006). The pass-along effect: Investigating word-of-mouth effects on online survey procedures. Journal of Computer-Mediated Communication, 11(4), article 10. http://jcmc.indiana.edu/vol11/issue4/norman.html).

More in the next post.


What is Influence and Why is It?

I was at the Social Media Club – Silicon Valley last night, where there was an excellent session on "What is Influence?" with Dominique Lahaix (eCairn), Scott Hirsch (getSatisfaction), and Jennifer Leggio (Fortinet, ZDNet), moderated by Ryan Calo (Stanford CIS), among others.  Great topic, great crowd.

The first question asked of the panel was "What is influence?" and I am going to weigh in here, because I don't think anyone got to the core of what influence is online, how you grow it, and how you maintain it.  This is going to be like a Celtic design – I am going to weave together many topics that, to this point, have only been discussed separately but which, to my mind, make up the whole of influence online.

Influence can be considered the power to persuade others to some end.  Now you might say, "Well, that's a definition."  But online, that is a necessary, but not sufficient, condition for the definition.  In many cases, people influence online with no intention to persuade.  In some cases, as with a search algorithm, influence is created almost by default by the items returned from the search and the sort order in which they appear.  I mean, if this blog appears in position 31 in Google, how much influence can I have?  Or, as another example, if the mullahs in Iran wanted to prevent any viewpoints other than their own from being top of mind, they would create a search engine that only returned the results they wanted people to see.

In order to understand what influence is online, we have to understand WHY it is. 

So the first question: why does anyone bother with social media?  Why spend your time on it?  Why actively participate in it?  The simple reason is that social media is based on gift giving – in this case, the gift of information.  In his book Influence: The Psychology of Persuasion, Robert Cialdini discusses six "click-whirr" responses which, when triggered, can get you an automatic and predictable response from most people.  The most powerful of these is reciprocity.  Imagine for a moment that I ask you for a recommendation on LinkedIn about my performance at a place where we both worked.  What's the likelihood you will respond to the request?  If statistics are any guide, about 33%.  If, however, I first give you a recommendation on LinkedIn, the likelihood you will respond to my request is 66%.

Why the difference?  Simple.  We are genetically wired for cooperation, not competition.  It's how we survived as small, hairless, vulnerable proto-humans on the savannah against wild beasts and other threats.  If I do something nice for you, you feel almost obligated to do something nice for me.  If you don't believe this is a genetic trait, then also read Frans de Waal's Our Inner Ape, where you will see that this behavior is prevalent in chimpanzees (our nearest living cousins) as well.

So why am I blogging?  Why do I respond to comments on Facebook?  Why do I tweet?  I mean beyond the obvious fact I may enjoy it and I can keep up with my friends.  Why does social media exist at all – why are humans wired in such a way that social media actually works?  At its most fundamental level the answer is gift giving.  People who write provide the gift of information – think about how many tweets include a link to some web page.  So, by necessity, I feel some pull to return the gift and give information back. 

That’s the fundamental mechanism that makes social media work – enjoyment, need to keep up with people and an infinite pipe of information, are built on that behavioral foundation.  Without the reciprocity rule around information, social media would be nice, but humans wouldn’t respond to the stimulus.

That is the first extension of the concept of influence.  There is a second underlying mechanism at work – and it shows up clearly in the different behavior of people who grew up with the Internet from birth and those who did not.  This behavior relates to the way the generations learn and collect information.  Those who are (approximately) 40+ have a different mindset.  Their brains are wired (literally) for linear learning.  They read books or articles and went relatively deep into the content.  You may recall the quaint notion of "speed reading," where, in order to take in enough information, people learned to read quickly to garner "the gist" of longer articles or books.  Basically, this was the older generation's way of dealing with information overload.  An additional technique, which approached information overload by going deeper into fewer sources, was to scan the table of contents of a magazine or book and read only the articles or chapters that seemed relevant.

The problem is the Internet doesn’t just cause information overload.  It is effectively an infinite source of data.  There is no way any human being could ever in a hundred years find and digest everything they would need to know on the subjects important to them.  Relevant results from search engines help when you want to “go deep” on a single subject, but when it comes to looking widely across all our interests, it is completely impossible to gather a reasonable subset of information by yourself.

So the 20-something generation has learned to use the eyes and ears of their peers to act as search engines for relevant content across the range of their interests.  The stream of information is in multiple parallel threads from numerous sources. We call it multitasking, but in reality it is better called multigrazing.  By staying in touch through social media, the digital generation can consume more information across a broader range of topics than any individual could do alone.  And the relevance is higher because it comes from trusted sources: their friends or people they follow who share their concerns/interests.

So tying this back to the gift of information: the need for better, more efficient means for collecting relevant information combines with the gift-giving nature of social media to create a powerful behavioral motivator for digital learners to participate in social media – and that is the why of influence as it relates to human interactions online.

What about the definition of influence as it relates to machine-based entities like Google?  As mentioned before, search engines are a tool for an individual to cut through the clutter of an infinite multithreaded data stream and find the most relevant "deep links" – published information – on a specific topic.  They can also get relevant information from their social web of contacts (the social web, for short), but that tends to be more random, shallower, and less immediate.  This suggests that the search engine, by its very nature, has influence, since its algorithm determines what is useful.  The latest case of this is the recent change in Google's algorithm to favor big brands (see Google's Vince Update Produces Big Brand Rankings; Google Calls It A Trust "Change"), which I have been ranting about to anyone who will listen.  Basically, Google has determined that big brands are a more authoritative source of information about themselves than third parties.  But is that really true, say, in the case where a company's product doesn't work and bloggers are covering the fallout?  How many times have I been hired by Fortune 500s to push unfavorable comments about them off the first page of the SERPs, even though they were factually accurate?  Now I don't have to work as hard.  Google is doing the work for me.  And I'm sure that the fact that Google is trying to generate more AdWords revenue from big brands has nothing to do with it.

THAT is the influence of an algorithm.  It is not about the power to persuade, per se.  It is about the power to choose what is relevant to a conversation, based on some programmer's (or group of programmers') view of what relevance means.  And no matter how much you look to the research and say you are following good Information Architecture design intended to be value neutral, it is impossible in reality to achieve that.  It's like asking a human to manually generate a random number – it isn't possible.  The bias may be consciously or unconsciously embedded in the algorithm, but it is there.

In other words, the search engine is like a trusted person – you can think of it as an avatar – that you also use to deal with the infinite information of the web.  Only this trusted source goes deep into a subject with immediacy, rather than helping with your multigrazing.  It is the Internet equivalent of a table of contents, whereas multigrazing is the Internet equivalent of speed reading.  And like a trusted friend, you give the gift of information back to the search engine in the form of your click behavior, which is one of the factors the current generation of search engines uses to determine the relevance of specific documents.   What is not the same in this relationship is that you cannot know what bias is built into the algorithm or how it changes over time – which it does many times, without any notification from the major search engines.

So now we can go back and define influence in its online context.  Influence is the ability to share relevant information with others who share a common interest or concern, in the hopes that they will, in turn, give the gift of information back to you. Human or avatar, it is the same definition.  Both humans and search engines have their biases they bring to the conversation.  The only difference is you probably know something about the inherent biases of your friends or trusted human sources, whereas with a search engine you can only infer it, and even then it changes so often that effectively the bias can’t be known.

That’s it.  So what do you think of that logic?  Please tell.
