Meaningful human insights: Ask a machine?
Is it becoming harder to find powerful insights to inspire persuasive, innovative ideas?
It’s a highly subjective question which, no doubt, would generate a number of insightful answers in itself.
But even though there are dozens – hundreds, thousands even – of different ways of arriving at consumer insights to hang a creative thought on, from focus groups and research panels to good old brainstorm-fuelled serendipitous gut instinct, it’s always the same thing we’re after. And, generally speaking, we know it when we see it…
That is, the truth. Only, the truth from a different angle. Something that strikes a chord, rings true, and offers a new perspective to approach things from.
I’ve always thought the most useful insights are the ones that have an element of the unexpected about them. Especially if they’re going to have any chance of driving a thought that would cut through the advertising clutter. Anything too obvious feels… well, too obvious.
The unexpected ones are the kinds of notions that you give a second glance to, the angles that challenge and require a little thought.
Now, I’m no planner and I don’t pretend to be, but it has always seemed to me that the most interesting points are the ones where disparate data or streams of information clash together. These are the places that can take you somewhere new.
So it’s interesting to wonder whether the process of unearthing insights could ever be automated.
After all, isn’t it basically a process of feeding-in different variables (the data) and trying to apply a bit of vision and sense to spot the patterns and gaps within it?
Well, it’s hardly that simple. Anyone will tell you that it could never be done by a machine, in exactly the same way that copywriting couldn’t. (But efficiency-driven processes are certainly beginning to dip their toes into that particular pool – just look at localised retailer advertising by some large European brands.)
No, bringing a vision to raw information or data so as to make sense of it requires experience, intuition, perceptiveness and the ability to think both logically and tangentially. It’s a uniquely human skill.
That’s all true. But it’s also a bit of a given. And in this business, when something is ‘a given’, isn’t it just a little tempting to break it? Just to see what happens?
Earlier this year, I was fortunate to hear the creator of Mathematica, Stephen Wolfram, talk about his knowledge engine, Wolfram Alpha. The ambitious project, released a few years earlier, had, he said, found a way to manipulate the data in users’ questions and display the results in new, creative ways.
By its own admission it uses the largest amount of computable knowledge ever assembled, consisting of ‘many trillions of elements’. It’s also not a search engine. Instead, it deals only with ‘facts, not opinions’. And, crucially, it’s an unfinished project – and will never be completed. It is evolving all the time.
Wolfram Alpha works because it applies algorithms and analytical software to curated data, identifying the patterns within it.
An article in The Guardian describes it best: ‘Wolfram Alpha is not just intended to be a massive database of random facts – the facts need to be able to ‘talk’ to each other, so they can be used in computations. For example, if you enter the question, “What was the weather in London like on the day Prince William was born?” the site needs to be able to link weather information with biographical information. If you ask, “What is the distance to the moon divided by the length of the Amazon river?” it needs to link astronomical with geographical data. (The answers, by the way, are rainy and overcast with an average temperature of 16C and, currently, a ratio of 63.07.)’
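Once the two facts have been linked, the moon/Amazon answer is just ordinary arithmetic. A minimal sketch, using a mean Earth–Moon distance and a commonly cited Amazon length (both assumed round figures for illustration – the article’s 63.07 reflects the Moon’s actual distance at the moment of the query):

```python
# Rough re-creation of the moon/Amazon computation described above.
# Both values are assumptions: the Moon's distance varies between
# roughly 356,500 and 406,700 km, and estimates of the Amazon's
# length differ by source.
MEAN_MOON_DISTANCE_KM = 384_400  # mean Earth-Moon distance
AMAZON_LENGTH_KM = 6_400         # commonly cited Amazon length

ratio = MEAN_MOON_DISTANCE_KM / AMAZON_LENGTH_KM
print(f"distance to the Moon / length of the Amazon = {ratio:.2f}")
```

The division itself is trivial; the hard part Wolfram Alpha solves is linking an astronomical fact to a geographical one before the computation ever runs.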
So, provided the right combination of queries is entered, could it be used to help users arrive – albeit relatively serendipitously – at useful insights about audiences? Well, maybe not quite yet. But perhaps eventually? And by that point, would AI have evolved to a level where it is capable of intuitive thinking too? And what then?
After all, that elusive insight – the indisputable truth that’s been hiding in the research and evading detection for days – is something humans are particularly adept at recognising in those ‘Eureka moments’, but it can always use a little coaxing out, right?
And humans can sometimes get weighed down by the burden of knowledge, realism, experience and lack of time. We’re always on the lookout for the quickest route to the best possible answer.
Also, all too often the sticky mud of risk aversion and commercial reality can play havoc with gut instinct, especially when it comes to isolating insights and interesting jumping-off points into a brief. Admittedly, it may be a while before Wolfram Alpha learns how to deal with cultural context, which can lead to many fertile insights, but there’s probably no reason why it couldn’t given enough time.
It’s a matter for discussion. It’s also highly subjective, and peppered with angles, entry points and facts. But with an insightful, unexpected answer lurking in there somewhere.
Bit like Wolfram Alpha, really.