In my lead-in talk for the Tufts Symposium on Innovation, I attempted to wake up the audience with a shocking tale of world domination.
Well, sort of.
We at the Awesome Foundation were surprised four years ago when our idea to give away money resonated as much with givers as with receivers. The very principles that directed our giving efforts turned out to be powerful facilitators for organizational growth.
Here are a few of them:

Low Barriers to Entry
What began as a desire to let anyone, anywhere with a good idea more easily pull it off translated naturally into anyone, anywhere with a desire to start a chapter of the Awesome Foundation doing the same. No permission required. No formal organization to set up. We think of it as a highly portable brand anyone can own and use, an idea framework to extend, a set of tools to deploy, an excuse to join up with friends and make a difference.

Balanced Power Dynamics
A strict no-strings-attached grant policy, a refusal to narrowly define what awesome is, and a lack of formal leadership or governance structure help create an environment where work is driven by individual contributions, decisions are made locally, and anyone with a good idea doesn't need permission to make it happen.

Constant Experimentation
We fiddle with things. Since no one needs permission, projects just happen (vs. a frenzied vetting of every new suggestion). This lets the best ideas gain traction while the less than stellar initiatives starve. This environment of experimentation works well on a global level as well. Each chapter imagines its own future, launches its own events, and sets its own rules. Then we learn from each other and steal the best ideas. Distributed ownership plus collective learning can be a powerful combination.
Dear Fellow Bostonians,
At its core, The Awesome Foundation is about community. And for Boston, the marathon represents our city's community at its best. It'd be a real shame if violence were all anyone now associated with the Boston Marathon.
So we're collecting awesome stories and memories from Boston Marathons past to showcase this incredible community event. Let's remind the world about the 116 years of amazing memories, not a single sad one.
Read, share, and contribute here: 26milesofawesome.tumblr.com
Sharing a story can be as simple as completing the sentence: "The Boston Marathon is awesome because..."
Here’s to making the 2014 Boston Marathon the most awesome one yet!
The Awesome Foundation, Boston
There are scores of news sources covering the elections, but which provide us with a clear picture of how things are progressing, especially when so many sources these days rely on sensationalism or political bias to build audience? One potentially under-valued way to understand what’s happening in our world is to take a look at online prediction markets.
Online prediction markets are sites where individuals can place bets (usually non-monetary) on particular outcomes of world events. These "bets" take the form of either/or outcomes, such as whether a particular movie will be number one at the box office or whether a political candidate will win or lose an election. Bets are then traded much like stocks on a stock market. If you bet wisely on a particular outcome, you stand to "profit" on your bet. As individuals buy and sell on potential outcomes, the collective prediction fluctuates in real time. As bystanders, we might gain some insight into predicted outcomes and their underlying drivers by watching those fluctuations as they unfold.
But are these predictions accurate? Markets such as the Iowa Electronic Market have been recognized as fairly accurate and effective predictors of past elections. While past performance is no indicator of future outcomes, it can be enlightening to watch the outcome percentages fluctuate over time. Behind shifts in predicted outcomes lies the aggregated knowledge of all the market traders. In other words, when a percentage shifts, there might be new information driving that shift. In the case of an election, this shift might represent the punch of a campaign ad or the release of a potentially damaging piece of candidate history. Because traders stand to benefit if they move quickly on new information, outcomes in exchange markets often represent the very latest information and can serve as a bellwether around particular events as they occur. For example, perceived performance during the course of a live debate might drive real-time market predictions. Additionally, as bystanders we can gain a better understanding of important events without first having to track down and analyze the underlying facts. The underlying information is still critical, of course, but instead of finding it and then determining what it all means, we might do the opposite: look for impacts, then find the information behind them that might prove meaningful.
Monitoring outcome percentages in prediction markets might be useful for the casual news consumer. To explore whether this theory holds, I have pulled prediction numbers from three online prediction markets around the upcoming presidential election. Specifically, I include real-time predictions from Intrade, Foresight Exchange, and the Iowa Electronic Market on whether the incumbent president will secure a second term.[1]
Once per day, I will automatically pull the prediction numbers from the three respective markets and publish them to my Twitter feed. My hope is that these percentages and their fluctuations over time might help the casual news consumer get a slightly better understanding of not only the election outcome, but of meaningful news information that might be happening surrounding the election without having to wade through a sea of potentially conflicting reports. I’m curious whether this experiment might be useful or the predictions accurate. I have no idea – perhaps you can help me understand. I’ll be watching responses on twitter and reporting back if interesting stuff arises.
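The daily publishing step described above can be sketched in a few lines. The market names are from this post, but `format_update` and the sample percentages are my own illustrative placeholders; the real numbers would come from whatever scraping or API call retrieves each market's current prediction.

```python
# Hypothetical sketch: pack the three market percentages into one
# tweet-sized status line. Sample values are invented for illustration.

def format_update(predictions):
    """Build a single status line from a dict of market -> percentage."""
    parts = ["{}: {:.1f}%".format(market, pct)
             for market, pct in predictions.items()]
    return "Obama re-election odds today - " + ", ".join(parts)

sample = {
    "Intrade": 58.0,
    "Foresight Exchange": 61.5,
    "Iowa Electronic Market": 63.2,
}
status = format_update(sample)
print(status)
assert len(status) <= 140  # must fit in a single tweet
```

A scheduled job (e.g. a daily cron task) would fetch the numbers, build the line, and post it to the Twitter feed.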
[1] To be clear, I could have tracked whether Obama will either win or lose the election (% chance of losing = 100 - % chance of winning, for those keeping score at home). The decision to track the chance of reelection, and to track the presidential election more generally, does not reflect my views on the outcome (i.e., whether I prefer either result); it simply reflects my desire to do the least amount of math possible, since all three prediction markets provide data in the affirmative (win) rather than the negative (lose). It's also worth noting that these three markets represent slightly different potential outcomes; for example, it is technically possible for Obama to not secure a second term without losing the election. I feel these outcomes are similar enough for the type of monitoring I propose to treat them as equally representative of a predicted reelection outcome.
Intrade tracks Obama's re-election, Foresight Exchange and Iowa Electronic Market track any Democratic candidate winning the presidency.
If you're familiar with NPRbackstory and want to know how it was put together, you can't do better than with this online tutorial.
The Awesome Foundation is a simple idea. We support people doing awesome things in the world. Every month we give $1,000 of our own money to an idea we think is awesome and should be released upon the world.
Yes, but what do you think is awesome?
Awesomeness is more the product of a creator’s passion than the prospect of audience or profit. Awesome creations are novel and non-obvious, evoking surprise and delight. Invariably, something about them perfectly reflects the essence of the medium, moment, or method of creation. Awesome things inspire and attract.
Here's how we support more awesome:
- You apply by writing a few sentences about your awesome but unrealized idea. There are absolutely zero restrictions on who can apply and what sort of idea could win.
- If we like your idea, we give you $1,000. Possibly in a brown paper bag.
- There are no strings attached or hoops you have to jump through. Of course, we hope you'll execute on your idea, but, you know, whatever.
Lots of people have been asking to find out more about the Awesome Foundation. Here's some background.
Every day brings an avalanche of new ideas and novel creations to the web, from witty t-shirts and viral videos to innovative methods of collaboration and powerful new software. The creation of unique and interesting things is not new, but the current surge in individual creative activity and its subsequent high visibility on the web is unprecedented.
The most compelling of these creative products I have been referring to as The New Awesome, and they represent a tiny portion of the total creative output. Historically, the word "awesome" might have been used to describe the power of a tornado or the grandness of a majestic vista. Today, the word is more often used to qualify the ingenious or impressive products of personal creativity, such as using hairspray to launch a potato 200 yards, hosting a talk show in Halo 2, or mocking the Kansas school board’s ruling with an ingenious take on religion.
But there’s more to the New Awesome than merely creative flair. The most interesting and, well, awesome creative products seem to share some common characteristics:
- It is novel and non-obvious
Nothing quite like it has been done before. Whether a clever approach, an unforeseen bending of the rules, or just a commitment to excellence far beyond the expected, the New Awesome never fails to evoke surprise and delight.
- It emerges from passion, without the prospect of audience or profit
From the first encounter, it's clear that the creator felt compelled to make this. Recognition or revenue is icing on the cake.
- It is initially under the cultural radar
The New Awesome invariably emerges from the depths of the long tail. While the creator might be previously known for their creations, your mom has never heard of them.
- It captures the essence of the medium, moment, or method
For something to truly stand out in the sea of creativity, the creator needs to tap into something true and magical. Don't ask me to define it, because I can't. To paraphrase Justice Potter Stewart: you know it when you see it.
- It evokes passion, community, like-minded behavior, and the insatiable desire to pass along
The New Awesome is meme fodder. From it springs a thousand remixes, knockoffs, spinouts, and analogs. People gather around the hem of awesome.
An interesting result of this creative surge is the rising importance of effective discovery and distribution of the best creative products. In other words, when there is a rising sea of mediocrity, how do we find and highlight the very best? Alas, this will have to be the subject of a future post.
Kickstarter is a new way to fund creative ideas and endeavors through online collective support. Individuals get rewards in exchange for backing a project, and no one is committed unless a project gets all the support it needs.
The Open Business Cards experiment uses Kickstarter to help fund the creation of 100 Creative Commons licensed background images for anyone to use for free. As part of the project, up to 10 packs of mini business cards will be created using the images and distributed as part of an exclusive run.
The idea was inspired in part by MOO MiniCards, which let you put custom photos on the back of high quality mini business cards. Since MiniCards come in packs of 100 and you can upload up to 100 custom images, I thought it would be fun to create a pack with every card you hand out being unique and open.
Before initiating this project, I had begun to create a library of textural background images from my iPhone. This was inspired by the discovery that if you power off your iPhone with the camera app running, you'll get an impromptu close-up shot when you next turn it on. This is usually a shot of a table surface, the ground, your shoes - many of which provide interesting textural backgrounds.
I have shot 53 photos so far (as of this post). I will open up the licensing of this photo set once I have shot 99 acceptable background photos.
The following is a basic description for a proposed approach to integrating VRM into an existing software application. I welcome your input on this emerging idea in the comments section here or follow the evolution on the Project VRM Wiki.
Update (5/19) - Audio slideshow and presentation video on ListenLog now posted.
The VRM ListenLog is a proposed method for integrating simple user-driven functionality into an online audio player device or application. The ListenLog concept was devised in part for the Public Radio Tuner iPhone project, where it will likely be first introduced. The ListenLog is a consolidated and documented history of an individual's online listening activity. It is simply a recorded activity log, in a standard and open format, documenting an individual's listening actions through one (or more) online devices. The ListenLog is unique in that its aim is to give the user complete control over what to do with their listener activity data, including where the data lives, who to share it with, and how it can be used.
While tracking listener behavior is not a new concept, the ListenLog is a novel user-driven approach to deploying early VRM functionality. While a simple activity log might not be the killer app, it succeeds by embedding a small piece of user-driven infrastructure in a larger application - one with the promise of relatively wide distribution. Since this infrastructure component will write, store, and share listener activity in an open and standard format, we hope the log will become significantly more useful as other devices and tools leverage the standard to increase what an individual can do with their ListenLog data. This sideways approach holds the promise of planting the seeds of VRM on lots of devices without requiring that the primary application functionality (i.e. audio listening) be purely user-driven.
A user-driven activity log works well for an application that pulls together audio streams and files from a number of different sources. Of course, online audio providers (vendors in the VRM model) can already track and aggregate listening behavior data, but only for the audio they control. When the user acts as the sole point of integration, pulling together audio from multiple sources, their own consolidated log becomes unique and powerful. Only when the listener is the point of integration does such an approach yield unique value.
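To make the idea concrete, here is a guess at what a single log entry might contain, serialized to JSON as one possible open format. Every field name below is illustrative only; the actual schema is being worked out in the ListenLog specifics document.

```python
import json

# Hypothetical ListenLog entry. The field names and values here are
# placeholders of my own, not the spec under discussion on the wiki.
entry = {
    "stream_url": "http://example.org/streams/wbur",  # invented source
    "title": "WBUR Boston",
    "action": "play",                 # e.g. play / pause / stop
    "started_at": "2009-05-19T14:03:00Z",
    "duration_seconds": 1260,
    "device": "Public Radio Tuner (iPhone)",
}

# Stored in an open, standard format, the log stays portable: the
# listener decides where it lives, who may read it, and how it is used.
serialized = json.dumps(entry)
assert json.loads(serialized)["action"] == "play"
```

Because the format is self-describing and vendor-neutral, any other player or analysis tool could append to or read the same log, which is what makes the listener the point of integration.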
Here is a working document of some emerging ListenLog Specifics as we flesh them out.
Public Radio Tuner iPhone Application
A collaborative effort that launched a single, free Public Radio iPhone application to support radio streams and on-demand public radio program content from all public radio networks (NPR, PRI, APM, and PRX). The application builds upon APM's Public Radio Tuner application, and the 1.1 release available on 1/6/2009 incorporates over 200 public radio station streams from around the US, a GPS-enabled local stream finder, and station search functionality. We hope to have ListenLog functionality incorporated in V 2.x.
Last week I posted a mini-app that helps find popular twitter users near you. Simply enter a location, and Twitterstars will search regional tweets and return the top five most-followed Twitter users.
I got some good sleuthing and feedback from the genius behind lolcode, and have subsequently made some updates and learned enough to provide some caveats. Tips & Caveats:
- Since this app hits multiple web services, expect a little bit of waiting time as the data is retrieved.
- If the page returns empty, this is likely because Twitter is struggling under server load or is rejecting API requests from Yahoo! Pipes (known issue)
- I've locked the search radius to 15 miles, which in most cases encircles users who put the city name you've searched for in their profile (the Twitter search API uses LAT and LONG coordinates). I have discovered some cases where the search API stumbles on stated locations, however
- The Twitter search API returns a maximum of 100 tweets and must analyze users from within that collection. This means that if a popular user has not tweeted within the time window determined by the 100 most recent tweets (sometimes as little as a few minutes in the case of, say, NY, NY), then they will not be included in the search results. Try multiple times during the day to get different results.
- The Twitter Search API is notorious for its latency. If you're trying to catch a very recent tweet in the result set, you generally won't be successful.
- Pipes requests in rapid succession will return cached data, so it's not enough to simply hit refresh on the results page (sorry). Wait a few minutes and try again, or hack the URL to change the search radius or LAT/LONG, etc.
If you find this mini-application useful, please let me know. Suggestions for modifications and improvements are always welcome.
[Note: I've posted a Twitterstars update]
Finding and connecting with local social media 'superstars' can be a valuable short-cut for anyone trying to ramp up quickly in online social environments. These enthusiasts are knowledgeable about social media tools, are highly-connected, and understand well how to succeed in the online social environment.
But how do you find the local social media superstars? Today, many of these individuals use Twitter. The "Local Twitterstars" mini-application below takes any
US geographic search area that you provide and returns a feed of the top five most followed individuals on Twitter who have been recently active in the region. Below is a more detailed explanation of how I built this mini-application. I also posted an update here.
This mini-application uses the Twitter Search API, the Twitter REST API, Yahoo! Pipes, and some simple HTML.
- The simple HTML form above constructs a server GET request through both hidden and user-populated form fields.
- This constructed URL queries a custom-built Yahoo! Pipe that takes the location from the URL and converts it to LAT-LONG coordinates.
- A Twitter search API query is then constructed by the Pipe using the LAT-LONG and radius data, returning the 100 most recent tweets in this region. Depending on your search area, this could include only very recent tweets or could span a much longer time period. Twitter has some internal smarts around matching the coordinates to include a variety of data that users put into the location field of their profile, including towns, zip codes, iPhone GPS coordinates, etc.
- The Pipe then takes all the tweets and constructs a series of queries to the Twitter REST API, pulling back user profile data from each user behind the tweets.
- After removing duplicates, the Pipe selects the top five most followed users in the list and builds an RSS feed presenting the username, a link to their twitter account, and the current number of followers they have.
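The Pipe's final two steps (dedupe the tweet authors, then keep the most-followed users) can be sketched in plain Python. The function name and the canned tweet data below are mine, standing in for the live Twitter REST API responses.

```python
# Sketch of the dedupe-and-rank stage, using canned data in place of
# the per-user profile lookups the Pipe performs against the REST API.

def top_twitterstars(tweets, n=5):
    """tweets: list of dicts with 'user' and 'followers' keys.
    Returns the n most-followed distinct users."""
    seen = {}
    for t in tweets:
        seen[t["user"]] = t["followers"]  # dedupe on username
    ranked = sorted(seen.items(), key=lambda kv: kv[1], reverse=True)
    return [{"user": u, "followers": f} for u, f in ranked[:n]]

sample = [
    {"user": "alice", "followers": 1200},
    {"user": "bob", "followers": 300},
    {"user": "alice", "followers": 1200},  # duplicate, dropped
    {"user": "carol", "followers": 9800},
]
print(top_twitterstars(sample, n=2))
# [{'user': 'carol', 'followers': 9800}, {'user': 'alice', 'followers': 1200}]
```

The real Pipe then renders this ranked list as an RSS feed rather than a Python list.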
NOTE: If the feed request is empty, try changing your search criteria. It's also quite possible that Twitter is struggling to handle load and won't fulfill the API requests.
If you find this mini-application useful, please let me know. Suggestions for modifications and improvements are always welcome.
NPRbackstory is an experimental web mashup that I created to dig through the NPR archives and unearth the public radio backstory on currently trending topics. This "application" is currently running in beta as a Twitter account; to use it, you need to follow NPRbackstory on Twitter. I welcome any feedback on this idea in the comments section below.

I should note that I built this as a personal project to play with the public version of the NPR API. At the time I was not an NPR employee (I am now), so this experiment doesn't reflect the strategy of NPR or even have their official support. I'm grateful for the coverage that Harvard's Nieman Journalism Lab and others have given this project, and to NPR for not pulling my API key ;-)

Follow the NPRbackstory Twitter account
My favorite public radio segments provide thought provoking backstories on current news items. It might be a Terry Gross interview from a few years back of a famous person that just passed away, or a cultural sketch of an unfamiliar country that had a coup d'état this morning.
One of great things about the backstory approach is that it provides context and lends meaning to a current event. The backstory brings the listener up to date on a trendy news item without wallowing in the sensationalist details often found in mainstream news coverage.
In an attempt to bring this great idea to the web, here is a simple web application that generates an RSS feed of NPR online content. Rather than just a feed of NPR news, the NPRbackstory application tries to intelligently match fast-rising, trendy search terms to existing content on NPR.org. This goes beyond news coverage to include media from NPR blogs, interviews, NPR music, program content, podcasts, and station pieces (all thanks to the NPR API).
Below are the latest few items from the NPRbackstory Twitter feed. The keyword in parentheses is the fast-rising search term; the headline is the story, blog post, audio segment, or media from npr.org.
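Assembling each feed item in that "(search term) Headline" format is simple string work; the function below is my own illustrative sketch (the term and headline come from the examples in this post, and the story URL is elided).

```python
# Hypothetical formatter for one feed item, matching the
# "(search term) Headline" convention described above.

def format_backstory(term, headline, url):
    return "({}) {} {}".format(term, headline, url)

item = format_backstory(
    "ryan seacrest",
    "Hosting a TV show, how hard can it be?",
    "http://npr.org/...",  # link elided; real items carry the story URL
)
print(item)
```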
I'm encouraged by initial results from NPRbackstory. Here are some interesting "backstories" from the first few hours:
(ryan seacrest) Apparently, Ryan was recently bitten by a shark, resulting in a surge of web searches on his name. The backstory? A "Morning Edition" audio piece and write-up from September 2007 on Ryan entitled, "Hosting a TV show, how hard can it be?"
(jerry lee) Jerry Lee Lewis just detained for allegedly trying to take a gun on a plane. NPRbackstory returns his downloadable NPR Music "Song of the Day" from 2006.
(medical information) This web trend spiked because of a medical record leak affecting up to 200,000 people in Georgia. The backstory turns out to be a bit eerie: a "Morning Edition" segment on the trade-offs of online medical records from April 2008.

The NPRbackstory "application" was created by Keith Hopper using the NPR API, Dapper, Twitterfeed, Feedburner, and Yahoo! Pipes. If anyone is interested in the details, let me know and I can post them here. And why not follow @khopper on Twitter to see what else I might be up to?
What's the smallest portion of a person's name that can be typed into Google before the Suggest feature returns their full name? To allow for meaningful analysis and comparison, I propose the Google Infamy Coefficient (GIC).
GIC = (# successive characters entered into Google Suggest before the full name is revealed) / (# characters in the full name)
The results on the way to my own name are graphed below (the GIC of Keith Hopper = .727). The most infamous Keiths appear to be Keith Richards and Keith Olbermann, tied at GICs of .23. Anyone know what name might have the lowest GIC? Blog props and bragging rights to anyone who figures it out and lets me know.
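Once you've counted how many characters Suggest needs, the coefficient is a single division. One assumption I'm making to reproduce the figures quoted here: spaces are excluded from the name length (3/13 ≈ .23 for Keith Richards, 8/11 ≈ .727 for Keith Hopper).

```python
def gic(chars_typed, full_name):
    """Google Infamy Coefficient: characters typed into Google Suggest
    before the full name appears, divided by the name's length.
    Spaces are excluded from the length - an assumption on my part
    that matches the figures quoted in this post."""
    return chars_typed / len(full_name.replace(" ", ""))

print(round(gic(3, "Keith Richards"), 2))   # 0.23
print(round(gic(8, "Keith Hopper"), 3))     # 0.727
```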
Hat tip to uber-geek Randall Munroe (.385) for the inspiration.