Around 15 people participated in this discussion, including Lisa Williams from Placeblogger, Ben Terris from Boston.com's Your Town, Adam Weiss of Boston Behind the Scenes, Persephone Miel from Internews Network, and Doc Searls from Harvard's Berkman Center. You can hear the conversation here:
The conversation covers a wide range of topics, including:
- Trends and directions of hyperlocal news, and where the emerging opportunities might be.
- What the user demand might be around hyperlocal news - where the current gaps are in addressing user needs.
- The rising importance of immediacy and speed of hyperlocal solution deployment
- The problem of scale and searchability around hyperlocal sites
- How hyperlocal sites and the online-offline proximity connection might address the human need for social cohesion
On the evening of Thursday, February 5th, WBUR in Boston will be hosting their sixth (seventh?) monthly informal gathering at the station. WBUR regularly convenes the Boston social media community for the purpose of facilitating discussion around social technology and its growing role and impact on local community, news, and public media. All are invited to attend this free and open event. Details here.
At this event, WBUR has agreed to let me lead a discussion on hyperlocal news - in part due to the good discussion that's stemmed from this hyperlocal blog post and my interest in doing a follow-up on hyperlocal's future potential. Won't you join us?
Keep an eye on this blog for a follow-up from the event.
The term "Hyperlocal" generally refers to community-oriented news content typically not found in mainstream media outlets and covering a geographic region too small for a print or broadcast market to address profitably. The information is often produced or aggregated by online, non-traditional (amateur) sources.
Hyperlocal news is conceptually attractive because of its perceived potential to rescue struggling traditional media organizations. Most attempts at hyperlocal news websites have not proven to be entirely successful. Hyperlocal appears attractive to traditional media organizations for the following reasons:
- There is a perceived demand for news at the neighborhood/community level. The costs of print production and distribution have historically made providing this unprofitable, but the lower cost of web distribution could be used to serve this need.
- In an online world, regional media outlets are no longer the gatekeeper of news content and therefore must rely on their geographic relevance to provide unique value. Hyperlocal news leverages geographic relevance.
- The rise of citizen journalism and Web 2.0 seems to suggest that users could provide the majority of local content, thereby reducing or eliminating staffing costs.
- Local online advertising seems like a promising and not yet fully tapped revenue source.
History & Approaches
Hyperlocal seems to have emerged as a popular concept in 2005, even while regional news websites and blogs were already becoming common1. In 2006-2007, the first significantly funded hyperlocal sites and platforms were launched. There were high-profile failures, most notably Backfence.com (2007) and LoudounExtra.com (from the Washington Post in 2008). Many early efforts took the form of online newspaper websites, employing local reporters (or sometimes bloggers), and attempting to source user-generated content by inviting individual submissions or incorporating user discussion functionality. There was much speculation on why this approach often failed. Regardless of the specifics, their universal unprofitability suggests that producing a local newspaper-like presence simply doesn't create enough demand (online readership) to justify the costs (local staff). Of note are a few surviving examples like the Chicago Tribune's Triblocal project, which creates and distributes hyperlocal print editions from its online content, and many hyperlocal blogs which operate on far more modest budgets.
Around the same time, a slightly more promising wave of information-heavy regional news sites (such as pegasusnews.com) emerged. These sites were inspired by the success of regional review sites such as yelp.com and Yahoo! Local, and arose in response to the high costs of local content production. These new efforts focused on incorporating dynamic regional data - such as crime stats, permit applications, real estate listings, and business directories - in lieu of an emphasis on hand-crafted local reporting.
A third and perhaps most promising wave of local news sites emphasized the aggregation of third-party content. These include platforms such as outside.in, topix.com, and everyblock.com – all of which are framework approaches - aggregating content, mostly through RSS feeds, for many geographic locations (in some cases thousands) in order to build enough accumulated traffic to make a local business model work. Some slightly different takes on this model have individuals in specific locations acting as editors and republishing aggregated content (universalhub.com) or aggregator sites focusing on particular types of content (Placeblogger.com).
You can’t serve online users the same way as newspapers or broadcasters serve regional audiences. The news and information demands are wildly different. It is not enough to reduce printing and distribution costs or put content into "bite-sized" pieces. The user-consumer is trying to solve radically different problems from a unique perspective around their online information needs.
Giving participatory tools to users does not make them publishers. Users do not produce material that looks anything like mass media content. Users have an expectation of being involved, and their efforts (such as sharing) can be helpful or even necessary in some contexts. However, assumptions about traditional publishers shifting effort "to the crowd" are misguided. Users are also notoriously fickle in their socially-driven motivations. Our understanding around what motivates people to participate online and in what context is limited.
Manually producing local content is expensive. This isn't a surprise. What surprised people is that there is not enough consumer demand online to justify this cost.
Aggregation is cheap, and if done effectively can create enough demand to be profitable – particularly across many locations. As more sources make their content available through RSS feeds and APIs, this is only going to get better.
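To make the aggregation point concrete, here is a toy sketch of the core step: merging items from several location-tagged RSS feeds into one deduplicated, reverse-chronological stream. The feed contents and field handling below are illustrative, not taken from any particular aggregator mentioned above.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def aggregate(feeds):
    """Merge items from several RSS feed documents into one
    reverse-chronological list, dropping duplicate links."""
    items, seen = [], set()
    for xml_doc in feeds:
        root = ET.fromstring(xml_doc)
        for item in root.iter("item"):
            link = item.findtext("link")
            if link in seen:            # same story syndicated twice
                continue
            seen.add(link)
            items.append({
                "title": item.findtext("title"),
                "link": link,
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # newest first, across all sources
    return sorted(items, key=lambda i: i["date"], reverse=True)
```

The cheapness of this model shows in the sketch: no reporters, no editors, just parsing and sorting whatever sources already publish.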
1To be clear, the hyperlocal hype from traditional media organizations caught fire in 2005, but local sites like Craigslist and H20Town were long-standing successes by this point, thereby playing their part in fueling the excitement.
The world of podcasting is markedly different from that of broadcast radio. Below is a top ten list that highlights what works well in this medium and how podcasting can be different from straight-ahead broadcast. To this end, I have avoided listing downloadable versions of broadcast radio shows, although this eliminates a third or more of the most popular podcasts. The following descriptions focus on what makes each approach noteworthy. If you want to learn more about the podcast itself, I encourage you to give it a listen.
1. The Moth Podcast
Recorded spoken performances such as stand-up comedy and conference presentations have long been a popular podcast format. The Moth Podcast is arguably the granddaddy of them all. A popular and long-running podcast, its short storytelling segments are recorded in front of a live audience without notes.
2. The Sound of Young America (TSOYA)
While most episodes of this show are in fact broadcast, the majority of the fan base seems to listen exclusively to the podcast. TSOYA uses the popular podcast format of a single-host / single-guest talk show. It attracts a strong following in part due to its focus on popular and entertaining guests who are otherwise below the cultural radar - frequently comedians, media professionals, or musicians.
3. iTunes Weekly Rewind
This Apple-produced music review is a unique take on music podcasting, highlighting songs discovered over the course of the week on TV, online, and in the movies. This is a markedly different approach from focusing on new releases, who's on tour, what's popular, or the traditional, curated music show. This is also an Apple Enhanced Podcast, providing users the ability to move back and forth through visually-enhanced song chapters.
4. EconTalk
A hybrid instructional / talk show format, EconTalk is a surprisingly popular podcast that presents often complex economic concepts for non-expert audiences. Most guests are academic experts. The magic here seems to lie in the host of the program directing and clarifying the guests as they attempt to explain and explore complex ideas and opinions. Instructional / explanatory podcasts seem to be rising in popularity, although this specific format for presenting complex ideas is somewhat atypical.
This weekly podcast reviews top stories from digg.com and incorporates several characteristics found in many podcasts. For example, the hosts are conversational but often off-topic with plenty of snarky, insider commentary, and they discuss timely events surrounding an existing, popular website or web community.
Popular and unique, this podcast highlights several growing trends. Hosts are remotely connected via phone conferencing or skype through which the podcast is recorded – often live. The conversation is unscripted and uses many rotating participants, most of whom seem to be under 20 and have likely never met in person. All participants are part of a popular online Harry Potter community. In fact, the participants, listeners, and podcast content are an extension of what is already happening in and around existing vibrant online interaction.
7. You Look Nice Today (YLNT)
The "BS amongst friends" format is perhaps the most popular approach to creating a podcast. There are hundreds if not thousands of podcasts that are purely conversational, using 2-4 hosts who are often friends. Some are themed, many contain explicit content, some deploy more traditional radio show techniques using guests, call-ins, regular bits, drop-ins, etc. YLNT is noteworthy in that it breaks from typical radio morning show tactics (e.g. political leanings, wacky antics, etc.), and is professionally edited down before release.
8. Grammar Girl
It's true - there's a podcast on grammar that's wildly popular. It's short, practical, and handy for bloggers. Unlike much out there, the host seems to be perfectly normal. This is one of the first instructional podcasts to land the big numbers, in part thanks to Oprah.
9. Planet Money
This is an NPR program on the economy, money and global markets, but it qualifies for this list because it's not broadcast as a public radio show, per se. Some of what is produced makes it to air, but the NPR podcast is just that. A podcast. Well, it's also a blog. And videos. And links. OK, it's pretty undefinable, but it's entertaining and instructional and timely and edited to whatever length it is. I suspect "programs" will look a lot more like this in the future.
Here's an odd one. A podcast where each episode is defined and titled by its tempo (BPM), tailored specifically for your workout. Sound weird? Well, it's the #1 music podcast in iTunes right now. Behold the power of the untapped niche.
The following is a basic description for a proposed approach to integrating VRM into an existing software application. I welcome your input on this emerging idea in the comments section here or follow the evolution on the Project VRM Wiki.
Update (5/19) - Audio slideshow and presentation video on ListenLog now posted.
The VRM ListenLog is a proposed method for integrating simple user-driven functionality into an online audio player device or application. The ListenLog concept was devised in part for the Public Radio Tuner iPhone project, where it will likely be first introduced. The ListenLog is a consolidated and documented history of an individual's online listening activity. It is simply a recorded activity log, in a standard and open format, documenting an individual's listening actions through one (or more) online devices. The ListenLog is unique in that its aim is to give the user complete control over what to do with their listener activity data, including where the data lives, who to share it with, and how it can be used.
While tracking listener behavior is not a new concept, the ListenLog is a novel user-driven approach to deploying early VRM functionality. While a simple activity log might not be the killer app, it succeeds by placing a small piece of user-driven infrastructure within a larger application - one with the promise of relatively wide distribution. Since this infrastructure component will write, store, and share listener activity in an open and standard format, we hope that such a log will become significantly more useful as other devices and tools leverage the standard to increase what an individual can do with their ListenLog data. This type of sideways approach holds the promise of planting the seeds of VRM on lots of devices without requiring that the primary application functionality (i.e. audio listening) be purely user-driven.
A user-driven activity log works well for an application that pulls together audio streams and files from a number of different sources. Online audio providers (vendors in the VRM model) can already track and aggregate listening behavior data, but only for the audio they control. Only when the user acts as the sole point of integration, pulling together audio from multiple sources, does their own consolidated log become unique and powerful.
Here is a working document of some emerging ListenLog Specifics as we flesh them out.
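To illustrate the general shape of such a log, here is a minimal sketch of appending one listening event to a user-held record. The field names here are my own illustrative guesses, not the draft specification in the working document above.

```python
import json
import time

def log_listen(log, stream_url, title, seconds_played, source):
    """Append one listening event to a user-held ListenLog.
    Field names are illustrative, not the draft ListenLog spec."""
    log.append({
        # UTC timestamp so entries from multiple devices can be merged
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "stream_url": stream_url,
        "title": title,
        "seconds_played": seconds_played,
        "source": source,       # the application or device that logged it
    })
    return log

def export(log):
    """Serialize the log in an open, portable format (JSON here),
    so the user can move it, share it, or feed it to other tools."""
    return json.dumps(log, indent=2)
```

The key design point is that the log lives with the user, and the serialized form is something any other tool could read, which is what lets value accrue as more devices adopt a shared format.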
Public Radio Tuner iPhone Application
A collaborative effort that launched a single, free Public Radio iPhone application to support radio streams and on-demand public radio program content from all public radio networks (NPR, PRI, APM, and PRX). The application builds upon APM's Public Radio Tuner application, and the 1.1 release available on 1/6/2009 incorporates over 200 public radio station streams from around the US, a GPS-enabled local stream finder, and station search functionality. We hope to have ListenLog functionality incorporated in V 2.x.
The standard to-do list1, whether created online or scrawled on the back of an envelope, performs three basic functions: storing tasks, recalling tasks, and organizing tasks. Each of these functions, however, causes problems. If, like me, you are the type of person who relies heavily on to-do lists, these problems can ultimately poison your productivity.
Storing Tasks – The Benefits
To-do lists are a great way to squirrel away all the things we must remember to do - stowing tasks so that they can safely be forgotten. This frees you up to concentrate on other things. Aggressive organizers leverage this aspect ruthlessly. We take comfort in knowing that our task is safely saved, to be retrieved when it is ready for reconsideration. Without storage, our worrying nature is invoked, keeping us on edge, mentally juggling outstanding tasks and continually unsure of what it is we've forgotten to do.
Storing Tasks – The Problem
To-do lists are used to stow away tasks indiscriminately. If the task is non-urgent enough to make it to the to-do list, it's fair game. A task might be critically tied to life happiness or it might be picking up milk at the store – doesn't matter, both have now been safely stowed away. Worse yet, since the forget-factor is deployed to alleviate worry, it works much better on big, scary, and potentially more important tasks. This results in to-do lists becoming the miscellanea drawer for our life's most important goals. Just jam them in and forget about them. They'll be safe where we don't have to actually do them.
Recalling Tasks – The Benefits
Time to do some stuff? Just pull out the handy to-do list! No need to search my memory for what to rub some elbow grease on – it's all right there in the list, right?
Recalling Tasks – The Problem
The problem with crossing tasks off the to-do list is that we get to pick and choose which items to tackle. The easy, the quick and the convenient get the attention (oh look! I'm already at the mall!). If we manage to get past the convenient items, we inevitably get stuck at the urgent items. If you're a chronic hider like me (see point #1 above), there's sure to be plenty of items that have reached super urgent critical volcano erupting status. Soon, the to-do list becomes the urgency list and all that gets done is one urgent item after another. The most nefarious problem of all with recalling tasks from the to-do list is the avoidance of looming, big important items. Often, these haven't been carved up into bite-sized tasks, so they just float around the bottom of the list with innocent sounding names like "lose 10 pounds" or "get a better job." Figuring out how and where to get started on these items keeps us from doing them. These important (but maybe not urgent) items generally just wait until there's more time, you know, later.
Organizing Tasks – The Benefits
Organizing tasks means moving stuff around, and moving stuff around means being able to prioritize and plan. Similar items can be grouped together. Items can be ordered and sorted by urgency or inter-dependency. When we're finally ready to roll up our sleeves and do some work, we'll have nice, clean, structured marching orders.
Organizing Tasks – The Problem
Why do any tasks at all when you can just feel good by pushing items around on the list all day? In fact, why not put "reorder list" onto the list? Organizing to-do lists doesn't naturally lead to drilling-down and rolling-up tasks. Drilling-down, or breaking tasks into sub-tasks, only makes the list longer. And rolling-up, using lists to understand how we work and what is truly important, is an abstract activity – generally not included alongside left brain list-making. Which is a shame, because the one thing a to-do list could ultimately do for us is not help us get more done, but help us figure out what's most important, so that we can, in fact, do less.
1 The focus here is on basic to-do lists and doesn't include more advanced planning and scheduling tools. And yes, I know about GTD.
An unusual opportunity was provided by Autumn, a sixth grader, who asked the following question on Yahoo! Answers:
If I am a geek, how do I survive Middle School?
I'm going into the 7th grade. Does anyone have any advice on what to expect?
By "geek" I'm assuming that you mean you are unpopular. If you are unpopular in American secondary schools, this means you are probably smart. Keep in mind that unpopular smart kids don't seem to have any barriers to becoming rich and envied later in life (think Bill Gates was popular in Junior High?)
Smart kids are unpopular most likely because they care less about being popular than dumb kids. It's not that you wouldn't like to be popular (trust me, I know), it's just that the gargantuan amount of effort required to be popular in school is simply not worth it to you. You're smart enough to rationally consider the options - and spending every waking moment of your life trying to please dumb people just seems, well, dumb. Time could be better spent coding IRC bots, designing custom levels in Quake, hacking your neighbor's roomba, or whatever smart kids are doing these days. You see, being smart is just too darn interesting to waste all of your time trying to be popular, and unless you aspire to sell used cars for a living, I recommend you not bother.
My advice: spend your time being smart. Find other smart people. Cultivate your own sense of curiosity. Understand the world is not as cruel and boring as it probably appears from inside the walls of your school.
Frankly, I'm surprised it took so long to happen. Or maybe I just didn't notice it happening much until now. When Google pioneered contextual advertising, I assumed that the rest of the world would follow in droves. We'd be emailed, nudged, served banner ads, and text messaged whenever we displayed online intention or contextual curiosity. There is a world of nuance between blatantly unsolicited email spam and "relevant online communications," and I assumed that businesses would rush in to fill this gap. But I really haven't seen it much. Until now.
Enter the Twitter Hawks. Businesses that hover on top of Twitter search terms and then @ you if you mention something relevant to their business. For example, I just got an @ message from an airport shuttle service when they saw me use the name of an airport in my tweet. Obviously, they're monitoring the public feed using Twitter Search or the Search API and replying publicly to tweets that mention airport travel in their business service range. But is this spam? Well, I suppose it's not the traditional type, but it's definitely unwelcome when my @ stream fills with unsolicited business messages from orgs - no matter how "well intentioned" - hovering over my communications.
Like any good enabling technology, people see opportunity and rush in to explore, address, solve, and experiment. I'm not surprised that people are exploring this gap, I'm just surprised it took so long.
Last week I posted a mini-app that helps find popular twitter users near you. Simply enter a location, and Twitterstars will search regional tweets and return the top five most-followed Twitter users.
I got some good sleuthing and feedback from the genius behind lolcode, and have subsequently made some updates and learned enough to provide some caveats. Tips & Caveats:
- Since this app hits multiple web services, expect a little bit of waiting time as the data is retrieved.
- If the page returns empty, this is likely because Twitter is struggling under server load or is rejecting API requests from Yahoo! Pipes (known issue)
- I've locked the radius of search to 15 miles, which in most cases encircles users who put the city name you've searched for in their profile (twitter search API uses LAT and LONG coordinates). I have discovered some examples where the search API stumbles on stated locations, however
- The Twitter search API returns a maximum of 100 tweets and must analyze users from within that collection. This means that if a popular user has not tweeted within the time window determined by the 100 most recent tweets (sometimes as little as a few minutes in the case of, say, NY, NY), then they will not be included in the search results. Try multiple times during the day to get different results.
- The Twitter Search API is notorious for its latency. If you're trying to catch a very recent tweet in the result set, you generally won't be successful.
- Pipes requests in rapid succession will return cached data, so it's not enough to simply hit refresh on the results page (sorry). Wait a few minutes and try again, or hack the URL to change the search radius or LAT/LONG, etc.
If you find this mini-application useful, please let me know. Suggestions for modifications and improvements are always welcome.
[Note: I've posted a Twitterstars update]
Finding and connecting with local social media 'superstars' can be a valuable short-cut for anyone trying to ramp up quickly in online social environments. These enthusiasts are knowledgeable about social media tools, are highly-connected, and understand well how to succeed in the online social environment.
But how do you find the local social media superstars? Today, many of these individuals use Twitter. The "Local Twitterstars" mini-application below takes any
US geographic search area that you provide and returns a feed of the top five most followed individuals on Twitter who have been recently active in the region. Below is a more detailed explanation of how I built this mini-application. I also posted an update here.
This mini-application uses the Twitter Search API, the Twitter REST API, Yahoo! Pipes, and some simple HTML.
- The simple HTML form above constructs a server GET request through both hidden and user-populated form fields.
- This constructed URL queries a custom-built Yahoo! Pipe that takes the location from the URL and converts it to LAT-LONG coordinates.
- A Twitter search API query is then constructed by the Pipe using the LAT-LONG and radius data, returning the 100 most recent tweets in this region. Depending on your search area, this could include only very recent tweets or could span a much longer time period. Twitter has some internal smarts around matching the coordinates to include a variety of data that users put into the location field of their profile, including towns, zip codes, iPhone GPS coordinates, etc.
- The Pipe then takes all the tweets and constructs a series of queries to the Twitter REST API, pulling back user profile data from each user behind the tweets.
- After removing duplicates, the Pipe selects the top five most followed users in the list and builds an RSS feed presenting the username, a link to their twitter account, and the current number of followers they have.
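The final two steps above (deduplicating authors, then ranking by follower count) can be sketched in a few lines. Here `get_profile` is a stand-in for the Twitter REST API user lookup; the field names mirror what that API returned at the time, but treat them as illustrative.

```python
def top_twitterstars(tweets, get_profile, n=5):
    """From a batch of recent tweets, look up each distinct author's
    profile and return the n most-followed users. `get_profile` stands
    in for a Twitter REST API call (users/show)."""
    profiles = {}
    for tweet in tweets:
        user = tweet["from_user"]
        if user not in profiles:          # skip duplicate authors
            profiles[user] = get_profile(user)
    ranked = sorted(profiles.values(),
                    key=lambda p: p["followers_count"],
                    reverse=True)
    return ranked[:n]
```

Caching profiles by username also matters in practice: the REST API was rate-limited, so one lookup per distinct author (not per tweet) keeps the Pipe within its request budget.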
NOTE: If the feed request is empty, try changing your search criteria. It's also quite possible that Twitter is struggling to handle load and won't fulfill the API requests.
If you find this mini-application useful, please let me know. Suggestions for modifications and improvements are always welcome.
It turns out that no one agrees on which way the swing states may go in this election (gasp!). Here is a clever widget from The Takeaway, outlining 13 news sources and their predictions of how the electoral votes will add up in November.
Each column is a state. Each row is a news source. Click on a cell to get the breakdown. NPR, just for the record, is currently predicting a McCain win.
In the early part of the 20th century, radio programs reached national audiences through newly-constructed radio networks. For the first time, mass media news had a human voice – and later with television networks, a face. This drove networks to develop trust as a human asset, and news anchors cultivated personalities that you welcomed into your home and returned to again and again. Over the ensuing decades, we stopped relying primarily on our friends and neighbors to learn about what was going on in the world and instead looked to a few critical human voices that were trusted without question.
This trust began to unravel in the late part of the 20th century. News media fragmented into biased channels, public opinion of reporting eroded around clashes with the federal government, and shifting advertising revenue and downsized newsrooms led to highly visible gaps and gaffes in a previously trustworthy and consistent news reporting environment.
Meanwhile, the Internet is helping consumers become increasingly savvy about media, and new expectations around participation and transparency in information delivery are emerging. In this new environment, a singular voice of the news ceases to make sense - except perhaps when Jon Stewart mocks the system as a whole. Online tools are enabling collaborative and person-to-person communications as potentially more reliable and trustworthy mechanisms for getting news. Individuals now capture the news on their cell phones, deliver the news on their blogs, and share the news through social networks.
Perhaps news is no longer presented as a single story, cobbled together by a single agency and delivered through the mass media by a single voice. What was once a single story now becomes interpreted and conveyed by a range of voices through different formats, channels, and modes. As humans, we still build trust through human interaction and engage with stories that are delivered with emotion and conviction. Some stories are made more meaningful by our connection to the individuals telling the stories, and others because a fresh authentic human voice speaks to us. I believe we yearn for this raw communication as a method for getting our news and making sense of the information within – something that historically has not been present in mass media.
If the future of news communication is more humanistic and distributed, delivered by an array of authentic storytellers, where does that leave the traditional journalistic reporter? Their importance doesn't suddenly evaporate. What is their place amongst this array of voices? Are we now all journalists or do we expect the ones with the credentials to develop their own authentic voices? Both situations are currently happening in this new environment, as some bloggers are vaulted into mainstream public attention, and some existing journalists now craft their own blogs.
However, I think there's a third, less explored role for journalists. A need arises from an array of unknown, emotion-fueled storytellers who do not necessarily engender trust. The very nature of these raw voices will cast doubt on the validity of the underlying information. Journalists must help make the information in their stories valid and the stories themselves trusted. I believe the secret lies in weaving together these new voices into a more cohesive whole. The time-tested role of editor re-emerges to perform this critical function. Perhaps the contemporary journalist wields new media editing tools the way the traditional journalist wields a typewriter. The news is not delivered through a single human voice, but by collecting together the voices of others. The editor's red pen ensures the facts are preserved, underlying truths are revealed, and opinions are exposed. In this way, we get original voices, rich with information and authenticity, without being led astray by their subtle biases or gaps in reporting. A new voice of the journalist emerges, crafting the news out of independent tellings and turning the traditional piece on its head. Here, truth can be served in a compelling new way, and a variety of voices reveal new insights only possible through the stories of regular people.
Four years ago, I wanted to work towards helping get a president elected, but became discouraged. I thought, what can one person really do to change a whole election?
Now, we likely face an even more critical moment in our nation's history. This got me to thinking. There are probably other discouraged people like me wondering if one person can really help get a president elected. But what if together we identified one thing an individual could do, and then all actually did it?
This is precisely what I propose we do.
With help from friends, I put together a list of suggestions for things an individual could do to help get a candidate elected. Choose the one you think is best or provide a suggestion of your own. In a couple weeks I'll add up the responses and publish the results.
But here's the Catch: By voting, I ask that you make a commitment to actually do the one thing we arrive at together. This way, what's possible is so much more.
As seen on public bulletin board in Medford, MA:
Culinary touchstone relegated to the dung heap in favor of a fast food chain brand name. Yet another sign of the impending Apocalypse.
NPRbackstory is an experimental web mashup that I created to dig through the NPR archives and unearth the public radio backstory on currently trending topics. This "application" is currently running in beta as a Twitter account. To use the application, you need to follow NPRbackstory on Twitter. I welcome any feedback on this idea in the comments section below.
I should note that I built this as a personal project to play with the public version of the NPR API. At the time I was not an NPR employee (I am now), so this experiment doesn't reflect the strategy of NPR or even have their official support. I'm grateful for the coverage that Harvard's Nieman Journalism Lab and others have given this project, and to NPR for not pulling my API key ;-) Follow the NPRbackstory Twitter account
My favorite public radio segments provide thought provoking backstories on current news items. It might be a Terry Gross interview from a few years back of a famous person that just passed away, or a cultural sketch of an unfamiliar country that had a coup d'état this morning.
One of the great things about the backstory approach is that it provides context and lends meaning to a current event. The backstory brings the listener up to date on a trendy news item without wallowing in the sensationalist details often found in mainstream news coverage.
In an attempt to bring this great idea to the web, here is a simple web application that generates an RSS feed of NPR online content. Rather than just a feed of NPR news, the NPRbackstory application tries to intelligently match fast-rising, trendy search terms to existing content on NPR.org. This goes beyond news coverage to include media from NPR blogs, interviews, NPR music, program content, podcasts, and station pieces (all thanks to the NPR API).
Below are the latest few items from the NPRbackstory Twitter feed. The keyword in parentheses is the fast-rising search term. The headline is the story, blog post, audio segment, or media from npr.org.
I'm encouraged by initial results from NPRbackstory. Here are some interesting "backstories" from the first few hours:
(ryan seacrest) Apparently, Ryan was recently bitten by a shark, resulting in a surge of web searches on his name. The backstory? A "Morning Edition" audio piece and write-up from September 2007 on Ryan entitled, "Hosting a TV show, how hard can it be?"
(jerry lee) Jerry Lee Lewis was just detained for allegedly trying to take a gun on a plane. NPRbackstory returns his downloadable NPR Music "Song of the Day" from 2006.
(medical information) This web trend spiked because of a medical record leak affecting up to 200,000 people in Georgia. The backstory turns out to be a bit eerie: a "Morning Edition" segment on the trade-offs of online medical records from April of 2008.

The NPRbackstory "application" was created by Keith Hopper using the NPR API, Dapper, Twitterfeed, Feedburner, and Yahoo! Pipes. If anyone is interested in the details, let me know and I can post them here. And why not follow @khopper on Twitter to see what else I might be up to?
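The actual mashup was wired together with Dapper, Twitterfeed, Feedburner, and Yahoo! Pipes, but the core step of matching a trending term to NPR content can be sketched in a few lines of Python. The endpoint URL, parameter names, and the 140-character truncation below are illustrative assumptions for this sketch, not the actual plumbing:

```python
import urllib.parse

# Hypothetical NPR content-search endpoint; the real mashup called the
# NPR API via Dapper and Yahoo! Pipes, so this URL and its parameters
# are assumptions for illustration.
NPR_QUERY_URL = "https://api.npr.org/query"

def build_query_url(term, api_key, base=NPR_QUERY_URL):
    """Build a search URL asking for the single best content match
    for a fast-rising search term."""
    params = {
        "searchTerm": term,
        "apiKey": api_key,
        "output": "JSON",
        "numResults": 1,
    }
    return base + "?" + urllib.parse.urlencode(params)

def format_backstory(term, headline):
    """Render a matched backstory in the feed's '(term) headline'
    style, truncated to a classic 140-character tweet."""
    return f"({term.lower()}) {headline}"[:140]
```

In the full pipeline, a feed of fast-rising search terms would be polled, each term run through `build_query_url`, and the top result's title passed to `format_backstory` before being pushed out as a tweet.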
Ze Frank recently did an interview with Jesse Thorn as part of Sketchfest NYC where he spoke about how Colorwars 2008 was an experiment in trying to break the dominant metaphor of the web (23:00 mark). This sounds similar to Doc Searls' insight that when we frame the web as a space or a construction, it can limit what we do with it. Ze goes on to suggest that bringing companies and customers together under this new paradigm yields something new and meaningful. Again, this sounds very much like the idea behind Doc's VRM concept. It's likely no coincidence that these two ideas fit together.
Ze's quote from the interview:
What's holding the web back in some sort of way is a metaphor that people use for it, which is as a play space. You go to Flickr to look at your Flickr photos, but the real strength right now is real distribution of media. If you distribute media intensely and fully, then place can't really be the dominant metaphor anymore. This idea of creating play spaces that are mediated by personalities in some sort of way, that you can move fluidly and play a game inside Google street view and then move out to another space and things like that was an opportunity to play with breaking [the metaphor] down a little bit.
I'm really interested in getting brands – companies – involved directly in these kinds of games. To have the proximity of consumers and brands shrink a little bit. This is a capitalist society and we're not going to escape that relationship, and good citizenry comes out of that proximity. [For example] getting Jet Blue to sponsor a contest like this and also judge it and talk to people that are engaged in this kind of thing.
The first ever VRM Workshop conference just wrapped up, filled with smart discussion, great ideas, and forward momentum. One topic quickly gaining traction is the Relbutton, an early-stage VRM tool in the works that allows customers and vendors to visually declare their willingness to relate to one another on equal terms. "Relate" here means a wide variety of potential communications, transactions and intentions that might flow between a customer and vendor. "On equal terms" means that the responsibility for initiating, sharing and storing interactions rests equally on the two parties.
It might seem odd that such a distinction of equality is necessary here, but in our current economic reality, the responsibility for customer-vendor relationships lies almost entirely with the vendor. Historically, this hasn't always been the case. For at least a century, customers have subtly and perhaps unwittingly handed over to vendors increasing responsibility for initiating and supporting their market relationships.
Customers once shopped primarily at marketplaces and bazaars, purchasing directly from individual sellers. Now customers interact with vendors in very different ways, relating with larger and more formally-structured organizations. In a bazaar, you might have set the price along with the seller. Nowadays, this price is generally dictated in advance. You might have located a product in the market by asking around or actively seeking out merchants on your own. Now, vendors take on the responsibility of finding and informing customers through their well-honed marketing, selling, and advertising push machines.
These advancements aren't designed to be nefarious. Customers often benefit from this shift in responsibility. For one, it's less work for us, and many enjoy simply choosing from among the product options marched before them. Additionally, I suspect vendors were initially unwilling to take on this added responsibility. Like all responsibilities, it means more work and increased risk. The first vendors forced to publicly declare their prices in a market probably did so begrudgingly. And I don't know about you, but nothing sounds like more fun than implementing a corporate-wide CRM system. Good times.
This shift from shared responsibility to primarily vendor-owned has occurred for two reasons. First, relationships with vendors have gotten significantly more complex. As sellers morphed into more stable, meaningful and larger organizations, they found themselves with new needs – product development, brand management, enduring customer service, and shareholder accountability – all demanding excellence in order to compete. Addressing these needs required more and deeper relationships with customers, and the emergence of information technologies made managing these relationships possible. Good vendors now touch the customer at virtually every step of their value chain and tools like CRM systems helped feed the flames.
The second reason for vendors taking the relationship responsibility is that customers didn't step up and do it themselves. Individuals have been too distributed and independent to take on any sort of enlightened responsibility. It's hardly surprising that vendors picked up the ball. However, the Cluetrain Manifesto has taught us that the Internet holds the power to change this dynamic. Distributed connectivity has enabled a new set of communications and transaction environments to support individually-driven marketplace activity like customer price setting, product finding, and product reviews.
But these new internet offerings constitute proprietary destinations that are formal vendor organizations in their own right. What VRM is championing with the relbutton is a new type of distributed tool set that does not belong to any one company, that is vendor-agnostic, and that is mutually-beneficial for both customers and vendors. With the relbutton, individuals can begin to once again share in the responsibility of initiating and maintaining relationships, for example, by proactively communicating their needs in a more open and uncontrolled way. Shared responsibility is an important factor in all healthy relationships, and as VRM tools gain adoption, this paves the way for a more natural and authentic way of doing business.
Last month I had the opportunity to play expert on a conference discussion panel along with the likes of Henry Jenkins (well, at least virtually).
Providing comic relief, I jammed a 1/2 hour presentation into 6 minutes. The thrust of my presentation was to ask that people first consider building online traffic and engagement before trying to implement monetization (is it OK to base a presentation on a pet peeve?). Oh, I snuck in some edge comments later that were a little less snarky. It was fun, and a big thanks to the Center for Social Media for giving me the opportunity to speak. Below is the writeup they did on my contribution.
Keith Hopper, Product Manager at Public Interactive, advised media makers to focus on getting more online users and building user interaction—such as product downloads, references in blogs and social networks, and participation in online discussions. "User interaction is the new currency," he said, noting that Google and Yahoo give away most of their content for free in order to build users. "This buys you significant leverage with partners and underwriters," he said, adding that currently, "Most public media doesn’t have enough user interaction to monetize."
Ned Gulley and Karim R. Lakhani presented The Dynamics of Collaborative Innovation (description, audio/video) last week at the Berkman Luncheon series. I had previously recorded and blogged Karim's similar, less detailed Open Innovation presentation back in May for the Berkman@10 Conference. Karim and Ned have been measuring various aspects of collaborative innovation around a programming contest that seems ideally suited for this purpose. I suspect that few real-world environments have such a good built-in mechanism for objectively measuring the strength of innovative contributions.
I found their insights into the differences between novel, game-changing submissions and incremental improvements particularly interesting. Within this programming contest, each individual user submission is objectively measured for performance against a desired outcome (e.g. algorithmic best fit), and the current best performing code submission is highlighted for all to see. This structure may create a problem in that innovative new approaches often do not immediately yield the best result as compared to an incremental code improvement. The social reward of being highlighted as the current best may encourage incremental improvements over novel approaches, potentially leaving the overall innovation outcome stuck in a local maximum.
Assuming that introducing novel ideas increases the chance of an eventual best outcome, then innovation environments like this might benefit from better incentives to reward novelty. Additionally, this contest environment has no inherent mechanism for identifying novel and potentially useful knowledge. Ned and Karim highlighted a specific example where a novel, under-performing programmatic approach introduced early was eventually adopted later in a programming contest and provided the conceptual foundation for the winning and final submission. Had this novel approach been overlooked, it is unlikely that the winning code would have performed as well.
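The local-maximum dynamic described above can be illustrated with a toy hill-climbing sketch. The fitness landscape and step rule here are invented purely for illustration: a search that only ever accepts incremental improvements stalls on a lesser peak, and the global best is reached only from a "novel" starting point elsewhere in the landscape.

```python
def hill_climb(f, x, neighbors):
    """Greedy incremental improvement: repeatedly move to the best
    neighbor, stopping when no neighbor scores higher."""
    while True:
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x
        x = best

# A toy fitness landscape with a local peak (f=3 at x=2) and a
# higher global peak (f=10 at x=8).
def f(x):
    return -(x - 2) ** 2 + 3 if x < 5 else -(x - 8) ** 2 + 10

def neighbors(x):
    return [x - 1, x + 1]
```

Starting near the crowd (x=0), incremental search stalls at the local peak x=2; only a novel jump into the other basin (say, starting at x=6) reaches the global peak at x=8, which mirrors how the under-performing novel submission eventually seeded the contest's winning code.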
I would argue that incentives designed to encourage the introduction and eventual sharing of novel information would prove useful, especially considering our human tendency towards only exchanging shared knowledge and withholding unique (and potentially important) knowledge in many social circumstances. Cass Sunstein explores this hidden profiles phenomenon at length in his book Infotopia.