Big Data is the catchword of the day. But at what point is big data actually “big”? When is data analysis by conventional methods sufficient, and when are big data methods really called for? A company that ships a million packages a month could collect compressed package data for over 400 years before accumulating 500 GB of data. If this company, with such an impressive volume of shipments, were to shorten the data-collection period to a more realistic 5 years, the resulting volume of data could still easily be analysed on a simple personal computer running standard software. So do supply chains really generate “big data” that needs to be analysed?
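The figures above are easy to verify with a back-of-envelope calculation. The record size is not stated in the article, so the sketch below assumes roughly 100 bytes per compressed package record, which is what makes the 400-year figure come out:

```python
# Back-of-envelope check of the shipment-data volumes quoted above.
# Assumption (not stated in the text): ~100 bytes per compressed package record.
BYTES_PER_RECORD = 100
PACKAGES_PER_MONTH = 1_000_000

bytes_per_year = BYTES_PER_RECORD * PACKAGES_PER_MONTH * 12  # 1.2 GB per year
years_to_500_gb = 500e9 / bytes_per_year                     # time to reach 500 GB
five_year_volume_gb = bytes_per_year * 5 / 1e9               # realistic 5-year volume

print(f"Years to reach 500 GB: {years_to_500_gb:.0f}")   # ≈ 417 years
print(f"Five-year volume: {five_year_volume_gb:.0f} GB") # 6 GB
```

At roughly 6 GB for five years of shipment records, the data fits comfortably on any personal computer, which is the article's point.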
The answer to this question is clear: It depends. For projects where the objective is to analyse the performance, costs, and vulnerabilities of logistics operations and assess the effects of optimisations, traditional data analysis (“business intelligence”) is adequate. Here, the goal is simply to obtain a view based primarily on past data, though in some cases this data can certainly allow extrapolations into the future. For many companies, even such relatively simple analyses harbour great potential for improving workflows and saving costs within and across supply chains.
‘Real’ big data in the supply chain makes sense when you want such analyses to include dynamic factors outside your own sphere of influence. An example specific to the supply chain would be advance knowledge of supply chain risks (e.g. industrial action). At this point, the boundaries between strict logistics and the planning, control, and implementation of procurement, production, sales, and after-sales services already begin to blur. Take for example real-time control of supply based on the purchasing process on e-commerce portals. We’ve all heard about the vision of moving goods before an order is actually placed based on expected customer behaviour. But the significance of such knowledge would diminish in direct proportion to the distance to the end customer.
If it is possible to predict sudden surges in demand, natural disasters, or strikes with sufficient accuracy, what good does this knowledge do if you lack the capacity to respond appropriately? To benefit from big data over the long term, you also need to build up an agile and flexible supply chain with the ability to react in real time and support proactive measures. Big data has the potential to introduce a new generation of risk management.
The question that remains, of course, is where one can obtain the required data of sufficient quality. It is likely that we will see further market developments in the area of service providers that deliver “bite-size” data sets on the political or meteorological climate, areas of turmoil, commodity prices, trends, etc. And we can continue to hope that the technology will produce increasingly intelligent data processing systems that are more resistant to one-off misinterpretations. Such developments will surely help enable better real-time data analysis throughout global supply chains.
Advance knowledge of these types of business-relevant developments promises an invaluable competitive advantage. It is precisely this vision that feeds the hype surrounding big data – also beyond supply chains, of course, as its potential seems nearly endless. And that’s exactly why the idea behind big data will indeed materialise. In the end, big data is a tool and not a neatly packaged solution: We all need to decide for ourselves if, when, and how we use it. For many businesses, broader-based conventional analysis of the data now directly available would already offer great benefits, and would provide the perfect basis for leveraging big data in future.