In modern society, data pours into organisations from a wide variety of sources, in such volumes, at such speed and with such complexity that traditional database tools cannot make sense of it or manage it. The term given to this phenomenon is “Big Data”, which can be neatly characterised by three Vs: increasing variety, velocity and volume.
Big Data has become a conspicuous part of our daily lives. Whether it is Google monitoring and analysing search terms to help it predict the next major flu outbreak, businesses monitoring their Twitter mentions so they can stage timely publicity stunts, or the highly data-driven election campaigns run by President Obama and his advisers, it is clear Big Data is becoming more prevalent in all aspects of our lives.
To give just a small number of practical examples of how Big Data is both used and collected on a daily basis, think: crime prevention (fraud detection); medical advancements (risk screening); disaster relief work (real-time disaster maps); epidemic prevention (epidemic spread mapping); digital maps (Google Earth); communication (WhatsApp); energy (smart meters); transport (Uber); retail (targeted advertising); and fitness (Nike FuelBand).
Given Big Data’s infiltration of almost every element of our lives, it is unlikely that the trend of collecting and analysing it will diminish any time soon.
A practical example will help emphasise the increasing importance of understanding and managing Big Data. Take a small business employing 150 people. Despite its small stature, this business has a tremendous capacity, through its reliance on e-mails, file-sharing systems, back-ups, mobiles, laptops and desktops, to generate and store vast amounts of electronic information. This could total almost 20 terabytes – the equivalent of 1,250 16GB iPhones, or two copies of the printed collection of the US Library of Congress. Volumes on this scale present a daunting challenge for even smaller organisations: how to contextualise and make the best use of this valuable data flow.
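The storage comparison above can be verified with simple back-of-the-envelope arithmetic. The sketch below assumes decimal units (1 TB = 1,000 GB) and the commonly cited estimate of roughly 10 TB for one copy of the printed Library of Congress collection; both figures are assumptions for illustration, not part of the original example.

```python
# Back-of-the-envelope check of the 20-terabyte comparison.
# Assumptions: decimal units (1 TB = 1,000 GB) and ~10 TB per
# printed Library of Congress copy (a commonly cited estimate).

total_tb = 20      # electronic information held by the small firm
iphone_gb = 16     # capacity of one 16GB iPhone
loc_copy_tb = 10   # assumed size of one printed Library of Congress copy

iphones = (total_tb * 1_000) // iphone_gb   # how many 16GB iPhones
loc_copies = total_tb // loc_copy_tb        # how many Library copies

print(iphones)     # 1250
print(loc_copies)  # 2
```

Both results match the figures quoted in the article: 1,250 iPhones and two copies of the collection.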
There are numerous advantages to the collection of Big Data. One of the most valuable opportunities presented by effective analysis of Big Data lies in the historical data that can be harvested. This older data can be used to generate predictive analytics for shaping and evaluating what-if scenarios, which are key to informed decision-making in any field.
There is a further advantage that is of more relevance, perhaps, to the legal profession than any other. Big Data can often, through the methodologies and judgment of experts, yield valuable information during investigations, compliance, governance and regulatory reviews, as well as during disputes. This is where a further platform comes into play: e-discovery. In the course of a dispute or investigation, huge amounts of data will routinely have to be sifted, evaluated and recorded. To do so effectively, those reviewing the data, particularly lawyers, must implement and fully understand the e-discovery process and have the tools at their disposal to make light work of this metaphorical heavy lifting. Furthermore, they must be able to work closely with the client’s IT department and external forensic experts.
The value of experienced lawyers able to understand and master the data landscape cannot be overstated. Without such expertise, results pulled from a given set of Big Data may be, at best, erratic and, at worst, entirely wrong. The expertise of solicitors, working in conjunction with the providers of e-discovery tools, can bring about a defensible and effective solution.
• Hayley Pizzey is a solicitor and e-discovery expert with Shepherd and Wedderburn LLP www.shepwedd.co.uk