Big Data, Small Business – Seeing the Picture

Big Data. It’s certainly a trendy term, but how can a small to medium-sized enterprise (SME) ‘do’ it?

We, the good people of Earth, create the equivalent of over 200 billion two-hour HD movies each and every day. It’s all generated through our interactions with electronic machines – whether for business or pleasure. Everything ranging from the minutes of the last Board Meeting through to your Friday night beer-goggled selfie is stored and indexed somewhere. [1]

To borrow from Neil Young, it seems that data, like rust, never sleeps.

Yet, in a business context, unless we’re a bank or supermarket squirrelling away growing mountains of rapid-fire transactional data, what we and our work colleagues create is usually more manageable – even if it is still amassing every day.

Big Data for Small Business

Big Data has been said to be characterised by volume, velocity and variety. [2] In an SME Big Data is probably best characterised by customers, commerce and capital. Every Australian business, whether selling legal services or lipsticks, healthcare or hotel rooms, collects data related to its customers, sales of products or services, and financial (cash-related) movements.

To my reckoning, then, Big Data at an SME level is about drawing insight from what you have on customers, commerce and capital. And augmenting this with other data, including what you don’t have.

Insight can be derived from any data set, especially when combined with something else that you can beg, borrow, or steal.

Crucially, Big Data for SMEs is therefore more a way of thinking than it is spending wheelbarrows of cash on some snazzy software that you hope will automagically leave you breathless with clarity.

Here’s a story.

Well over ten years ago, when I had a bit more hair, I was involved with a project to determine the underlying cause of premature failures in water and sewerage pipes for a Local Government Authority.

We had textual data about the pipes, their material and age, coded with unique identifiers. We had textual data on the types of failures, recorded by house number and street name. And we had a Geographic Information System (GIS) map with a layer that shared the unique pipe identifiers.

So, we geocoded the failure data to a new map layer then used a spatial join to identify the closest pipe. Naturally there were exceptions and these were tidied up manually with minimal fuss. We then obtained soil information from a surveyor as well as temperature and rainfall averages from the Bureau of Meteorology. Conflating all this data together provided the Eureka moment that related cause and effect.
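The matching step above can be sketched in miniature. This is not the Council’s actual system – the pipe IDs, coordinates, and addresses below are invented, and a real GIS spatial join would work on full pipe geometries rather than a single representative point per pipe – but it shows the nearest-feature idea behind joining geocoded failures to pipe identifiers:

```python
import math

# Hypothetical data: pipe segments reduced to a representative
# midpoint each (unique pipe ID -> easting, northing), and failure
# reports already geocoded from house number and street name.
pipes = {
    "P-001": (330250.0, 6250100.0),
    "P-002": (330410.0, 6250220.0),
    "P-003": (330580.0, 6250050.0),
}

failures = [
    {"address": "12 Elm St", "coords": (330260.0, 6250110.0)},
    {"address": "47 Oak Ave", "coords": (330575.0, 6250040.0)},
]

def nearest_pipe(point, pipes):
    """Return the ID of the pipe whose point lies closest to `point`."""
    return min(pipes, key=lambda pid: math.dist(point, pipes[pid]))

# The "spatial join": attach the closest pipe ID to each failure record.
for f in failures:
    f["pipe_id"] = nearest_pipe(f["coords"], pipes)
```

As in the project, a handful of ambiguous matches (two pipes nearly equidistant, say) would still need tidying up by hand.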

Once an elm tree had penetrated a vitrified clay sewer pipe, any treatment – including physical root removal or herbicide – would last almost exactly one year before the roots returned. Burst water mains were the result of fitting failures on plastic pipes that occurred seasonally and could be attributed to a particular contractor who worked in one area of the city.

None of this would have been apparent had we not coalesced data sources within the Council together with relevant and readily available data living outside of it.

So, how do you reach an epiphany moment staring at your disparate facts and figures? Sure, you can buy an application that purports to be an automatic seer of insights. But I reckon it comes down to someone thinking about:

  • What you have;
  • What sorts of business questions you have;
  • How to join all the dots; and
  • How to lean on tried and true statistical techniques to expose hidden insights.
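By way of illustration, one of the most tried and true techniques of all is a simple correlation between two columns an SME already holds – say, monthly advertising spend against monthly sales. The figures below are invented for the example:

```python
# Hypothetical SME data: six months of ad spend and sales, in dollars.
monthly_ad_spend = [1200, 900, 1500, 1100, 1800, 1600]
monthly_sales = [14000, 11500, 17200, 13800, 19500, 18100]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(monthly_ad_spend, monthly_sales)
# An r near 1 says spend and sales move together -- a prompt for a
# business question, not proof that one causes the other.
```

The point is not the arithmetic but the habit: pick two things you measure, ask whether they move together, and let the answer drive the next question.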

If you work through the above first, you will be embarking on a journey that lets you build models to unlock the insights hidden within your data. Your goal, then, is to manage that journey so that you can periodically refresh those models with the latest data yourself, even if you need some external help getting to that point first.

What do you think? Have I got it all wrong? Is Big Data really about letting Arnold Schwarzenegger’s T-800 Neural Net “Learning Computer” wreak havoc on our data and spit something out? Perhaps my experience of all those years ago was a Rise of the Machines and it’s apt we’re now seeing a Genisys.

Next Steps

  • Share this article – who else might find this of interest?
  • Start a chat at work – how is your company drawing insights from data?
  • Leave a comment below – what do you think?

Contact Us or phone 1300 LOFTUS (1300 563 887) to discover how the Loftus team can help you make sound decisions by revealing the secrets locked within your company’s data.



[1] IBM. 2015. What is big data? [ONLINE] Available at: http://goo.gl/zfLhEP. [Accessed 23 April 2015].

[2] IBM. 2012. Demystifying Big Data. [ONLINE] Available at: http://goo.gl/lHdypz. [Accessed 23 April 2015].
