Steve Ross: Big data still needs the human touch

Business is increasingly reliant on 'big data', but with so much digital information to process, is there still a place for human input?


Big data analytics failed to recognise 'fake' news during the race for the White House. Picture: Ty Wright/Getty Images

Big data refers to the vast amounts of information modern businesses receive, store and analyse to enhance performance and inform future decision-making. That data, arriving in great volume from multiple channels, includes detailed information on products, transactions, social media interactions, website visits and more.

With so much data to handle, interpretation becomes crucial and, by necessity, draws on significant IT resources. Complex algorithms sort through terabytes of information quickly, and the results are used to anticipate upcoming trends, cycles and patterns across the business landscape.

As data collection platforms become more efficient, businesses are becoming increasingly reliant on automated analysis – but is taking interpretation completely out of human hands necessarily a good thing?

Election errors

The 2016 US election provides a timely example of how automated algorithms can struggle with big data analytics and produce aberrant results.

As the November vote drew closer, Facebook’s news feeds were inundated with thousands of stories – many of which were, at best, inaccurate or, at worst, downright false. The automated processes meant to recognise those “fake” news articles failed to catch them and flag them as such for readers. Without the benefit of human interpretive skill, false information was propagated to millions of users across the social network.

To address the problem, Facebook is tweaking the automated algorithms it uses to detect these stories and flag them as “false” to its users. Interestingly, Facebook is also introducing a strong human element to the verification process: giving users options to report “fake” stories, changing advertising policies to cut off revenue to “fake” sites, and asking journalists to help detect “fake” stories.
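
By way of illustration only, the sketch below shows how such a human-in-the-loop workflow might be structured in code. The field names, thresholds and scoring logic are hypothetical and are not drawn from Facebook's actual systems.

```python
# Hypothetical sketch of a human-in-the-loop verification workflow.
# Thresholds, field names and scoring logic are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Story:
    url: str
    automated_score: float          # 0.0 (likely genuine) to 1.0 (likely fake)
    user_reports: int = 0           # number of readers who reported the story
    reviewer_verdict: Optional[str] = None  # set by a human fact-checker

def needs_human_review(story: Story,
                       score_threshold: float = 0.7,
                       report_threshold: int = 10) -> bool:
    """Route a story to human reviewers when either the automated
    classifier or the volume of user reports raises suspicion."""
    return (story.automated_score >= score_threshold
            or story.user_reports >= report_threshold)

def final_label(story: Story) -> str:
    """A human verdict, where one exists, overrides the automated score."""
    if story.reviewer_verdict is not None:
        return story.reviewer_verdict
    return "flagged" if story.automated_score >= 0.9 else "unreviewed"

stories = [
    Story("example.com/a", automated_score=0.95),
    Story("example.com/b", automated_score=0.30, user_reports=25),
]
review_queue = [s for s in stories if needs_human_review(s)]
print(len(review_queue), "stories queued for human review")
```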

While the success of these new approaches is yet to be seen, it’s clear Facebook is working to understand how a human perspective can enhance its big data infrastructure.

The human advantage

The key to successful data interpretation lies in an organisation’s ability to find the “sweet spot” – the necessary balance between IT capability and human experience.

Automated IT platforms offer a variety of powerful, innovative tools – including database and statistical software, language detection and text-mining – and allow observers to drill down deeply into the raw information hidden within terabytes of collected data.
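
For illustration only, the short sketch below reduces that kind of drill-down to a simple term-frequency count; the sample documents and the terms they contain are hypothetical.

```python
# Hypothetical sketch: a tiny term-frequency drill-down over raw text,
# the kind of task that database, statistical and text-mining tools automate at scale.
import re
from collections import Counter

documents = [
    "Customer praised quick delivery but flagged a billing error.",
    "Billing query resolved; customer happy with support response.",
    "Delivery delayed two days; customer requested a refund.",
]

def term_frequencies(texts):
    """Lower-case, tokenise and count terms across a batch of documents."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

# Surface the most common terms so a human analyst can decide what actually matters.
for term, count in term_frequencies(documents).most_common(5):
    print(f"{term}: {count}")
```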

But those platforms can only ever go as far as their programming allows: from that point on, human skill comes into its own, offering subjectivity, insight and inspiration as a foundation for astute, informed decision making.

That human element is crucial to business. Of course, the sheer volume of data modern businesses must deal with means automated IT processes remain an indispensable part of the analytical infrastructure.

However, to make the data you collect meaningful, there is no replacement for old-fashioned human expertise. The trick is learning when and where the algorithms should stop and your salaried experts should take control.

• Steve Ross is managing director of Shackleton Technologies
