I have spent the last two days at the Innovation Enterprise Big Data conference. At first I felt overwhelmed by the barrage of new terminology and acronyms – Hadoop, Hive, Storm, MapReduce, ETL, distributed nodes, polyglot infrastructure… I could go on.
But after taking a few deep breaths and listening to what people were doing with their Big Data, it dawned on me that, firstly, I will never be a database infrastructure expert and, secondly, that the ‘big’ part of ‘big data’ is really more of a technical challenge than anything else.
The term ‘Big Data’ has been around for some time now, and the most consistent definition I can find describes data of high volume, variety, and/or velocity. Basically, loads of data. The technology and protocols that allow companies to explore it are becoming well established, but the challenge remains the same as with a dataset of any size: how do we extract meaning from it?
The really clever, and frankly creative, part of big data analysis lies in people’s ability to find novel ways of looking at the data. Data scientists are beginning to emerge as the new gods of the data-dependent world we live in. As market researchers and research technologists with sometimes decades (not mentioning any names) of data analysis experience behind us, we are perfectly placed to be pioneers in the interrogation of big datasets.
Big data could represent a golden age for market researchers, offering us unprecedented insight into people’s lives and behaviours – as long as we embrace it and use the skills we already have to offer our clients the algorithms and insights that transform raw data into business intelligence.