Cognitive Cooking and #IBMFoodTruck: My Review of #Food at #SXSW 2014

Social activation to drive the menu choice at SXSW2014

One of the most impressive displays of data-driven innovation – and equally one of the best stories of social media campaign hacking – at SXSW 2014 came from IBM’s Cognitive Cooking efforts.

IBM created a small, rather understated pop-up space just steps from the Austin Convention Center.  Inside, there was a small food truck serving one of six select menus, including #chili, #dumplings, and #burritos. [hearty southern food!]

Serving up #Poutine at SXSW 2014

This was a menu created with the help of Watson – IBM’s renowned, self-aware computational genius of a computer.  IBM input the chemical structures of over 10K foods (or so I was told at the food truck).  Watson then matched foods, coming up with new combinations based on flavour profiles.  Now add a partner like New York’s Institute of Culinary Education – which took the new flavourful combinations and made a menu.

Peruvian Potato Poutine – an IBM Food Truck creation in collaboration with the Institute of Culinary Education and Watson

On the day that I was sampling the #IBM Food Truck fare, I interviewed the chef to understand if Watson’s involvement took away any of his enjoyment.  The chef revealed that while Watson recommended the food combinations, the computer did not give a recipe, the relative amounts of ingredients, or any information on how to prepare the food.  And so he felt there was a lot of territory to explore as a chef.  With that, he gave me a sample of Peruvian potato #poutine.

Well, the poutine was a fine combination of potato, roasted cauliflower, spicy tomato sauce and goat or feta cheese on top.  It was amazing!  Who thought to add roasted cauliflower to poutine?  Watson.  As I wandered around the IBM Food Truck, I noticed an extraordinary number of Quebeckers also enjoying poutine.  So I interviewed one to see if the poutine lived up to her expectations. [coincidentally the wife of someone I really enjoyed working with in the JWT Montreal office]

Full credit to @TP1 and @nvanderv – who Twitter-hacked IBM’s cognitive cooking contest by inserting a ‘fake’ entry and winning.  Absolutely brilliant.

Talking to a few IBMers at the booth, I learned the onslaught of Montrealers was no accident.  Having #poutine on the menu was a beautiful hack of IBM’s cognitive cooking campaign.  You see, to add a layer of social activation for SXSW, IBM marketed the six menu choices, each with its own poster, and encouraged South by Southwesterners to vote by hashtag on what menu they wanted each day.

That’s where @TP1 and @NVanderv enter.  They created their own ‘fake’ poster, entered it into the socialsphere and voilà! They gained so many ‘votes’ that IBM agreed to make #poutine.  If you can read French, this is explained much better in the Minimal blog post “Informatique cognitive et fromage en grain”.  Anyhow, I just love how you can create a campaign and the audience might take over – in a hack that is so much more than IBM could have ever planned.

How to write a strategy deck… Bullshit, Prove It, So What

When I was a junior consultant at IBM – working with an ex-Kraft marketing VP and an ex-Campbell’s brand director – I learned the ‘Bullshit, Prove It, So What’ design for strategy presentations.  I’ve never forgotten it.  ‘Bullshit’ is the hypothesis line.  ‘Prove it’ is the chart, research, etc. that proves or supports the bullshit.  Then ‘so what’ is the implications for the brand.  Sounds casual, but it is actually a fine model for strategy presentations.

Today, I work with many individuals who help make sense of big data for agency clients.

Social listening – which is truly about finding patterns among copious amounts of data – is something that I rely on as one key input to digital strategy.  In doing so, I find myself training many individuals – not in how to gather social listening data, for we have tools that do that, but in how to become suspicious of what is being offered and then package insights.

“Big data is more than simply a matter of size; it is an opportunity to find insights in new and emerging types of data and content, to make your business more agile, and to answer questions that were previously considered beyond your reach.” – IBM website.

Caption: An interesting look at the usage of ‘big data’ in Google searches.  We can see it emerging in the last two years.  Image taken from Stephane Hamel’s blog post explaining big data.

First, I truly encourage everyone to understand how data is collected.  It’s a little bit like understanding the Google algorithms.

For instance, let’s consider the key sources of mentions in the forums category of social listening platforms.  If a niche Canadian forum does not pass country information in its API to the social listening platform, is it still considered a Canadian forum?  The answer is no.  It is considered to be a US forum.  In which case, you can have gross misrepresentation when doing forum analysis.  This is the case in the automotive sector, which is host to many niche forums down to the nameplate or model of a car.
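That silent fallback can be sketched in a few lines of Python – a hypothetical illustration only, not any real platform’s API; the field names and the forum record are invented:

```python
# Hypothetical sketch of how a listening platform might attribute country
# when a forum's API omits location metadata. Not a real platform's logic.

def forum_country(api_record: dict, default: str = "US") -> str:
    """Return the country tag a platform would assign to a mention.

    If the forum does not pass country information, the platform
    silently falls back to a default -- here, 'US'.
    """
    return api_record.get("country") or default

# A niche Canadian automotive forum that doesn't expose its country:
record = {"forum": "canadian-nameplate-forum", "country": None}
print(forum_country(record))  # falls back to "US" -- misclassified
```

The point of the sketch is that the misclassification never raises an error; it only shows up when someone questions how the country field was populated.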

The same goes for automated sentiment, which is, for reasons unknown to me, accepted and presented as de facto accurate by many.
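One way to justify that suspicion is to spot-check the tool against a hand-coded sample before trusting it.  A minimal sketch, with invented labels standing in for real mentions:

```python
# Spot-check automated sentiment against a human-coded sample.
# All labels below are invented for illustration.

def agreement_rate(auto_labels, manual_labels):
    """Fraction of mentions where the tool agrees with a human coder."""
    matches = sum(a == m for a, m in zip(auto_labels, manual_labels))
    return matches / len(manual_labels)

auto = ["pos", "neg", "neu", "pos", "neg"]    # what the tool said
manual = ["pos", "neg", "pos", "neu", "neg"]  # what a human coder said
print(f"Agreement: {agreement_rate(auto, manual):.0%}")  # 60%
```

If a few hundred hand-coded mentions show the tool agreeing with a human only six times out of ten, that number – not the vendor’s claim – is what belongs in the deck.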

But beyond questioning where the data comes from or how it is collected, I insist that folks demonstrate more than just ‘fact gathering’ (however qualitative this ‘fact gathering’ actually is).

Many people who do social listening just regurgitate what a tool presents to them.  So much so that reports become just a presentation of what is seen in the social web.  What I am demanding is that the researcher first consider the issues and form a hypothesis.  What are you trying to demonstrate?  This is fundamental to ‘issues-based consulting’ – something I attended at IBM University in NYC.

After hypotheses are formulated, we collect data that may prove *or* disprove them.  For instance, perhaps you consider Canadians to be well informed about a major retailing event called Black Friday.  But in gathering SEO activity and social listening, you can see that Canadians are not knowledgeable about a retailing holiday that is based on an American holiday.

With issues, hypotheses and ‘facts’ (or, as I prefer to call them, findings), we can move to the ‘so what’ stage.  It sounds easier than it is – while doing social listening, you might go back and forth testing hypotheses three, four, five times.

The holy grail then is coming up with the brand implications from the data found.  That is the fun ‘so what’ part.  For instance, if Canadians do not understand Black Friday – when do they start looking for their answers compared to when retailers start offering answers?  There is a gap.
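That gap can be made concrete with a toy calculation – all dates here are invented for illustration, standing in for what SEO trend data and a retailer content audit would actually show:

```python
# Toy 'so what' calculation: week consumers start searching for Black
# Friday answers vs. when retailers start publishing them.
# Both dates are invented for illustration.

from datetime import date

consumer_search_start = date(2013, 10, 28)   # e.g. from search-trend data
retailer_content_start = date(2013, 11, 18)  # e.g. from a content audit

gap = retailer_content_start - consumer_search_start
print(f"Retailers answer {gap.days} days after consumers start asking.")
```

A three-week window like that is the brand implication in one number: publish the Black Friday content when the searching starts, not when the sale starts.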

It sounds so incredibly simple… and yet, few use it.