Supply Chain Assessment: (Sample) size matters

For years, corporations have invested considerable effort in investigating their supply chains, going all the way back to the farm, the mine or the boat: understanding who the suppliers of their suppliers are, and collecting compliance information for products and factories at every tier.

So how come media and NGOs still report clear cases of child labour, occupational hazards or food fraud?

One of the explanations is sample size. Supply chain investigation programs are cost- and time-constrained. As a consequence, usually only a small proportion of suppliers are inspected, audited or surveyed, and this small group may not reflect the true proportion of non-compliant factories in the supply chain. It's a classic in statistics: the smaller the sample, the higher the probability of missing "outliers" (and liars). It's like trying to find out whether there are whales in the ocean by scooping up a glass of water.
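
To put a number on that intuition (a back-of-the-envelope sketch of my own, not a figure from any audit program): if a fixed share of suppliers is non-compliant and you audit a simple random sample, the chance of catching at least one problem case is 1 - (1 - p)^n, which collapses quickly as the sample shrinks. The 2% prevalence below is an arbitrary assumption.

```python
def chance_of_catching_at_least_one(prevalence, sample_size):
    """Probability that a random sample contains at least one non-compliant supplier,
    assuming each sampled supplier is non-compliant independently with the given prevalence."""
    return 1 - (1 - prevalence) ** sample_size

# assuming 2% of suppliers are non-compliant:
for n in (10, 50, 200):
    print(n, round(chance_of_catching_at_least_one(0.02, n), 2))
# -> 0.18 for n=10, 0.64 for n=50, 0.98 for n=200
```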

The emergence of industry standards and initiatives is a very good thing, as it has promoted formalised, comparable ways of assessing performance on key topics such as forest stewardship, workplace rights or sustainable agriculture practices. However, to make sure that participation in such programs is a clear winner, a relevant sample size has to be taken into account.

Statistical science offers various ways to define an optimal sample size, using what are called confidence intervals. They answer questions like: "How many suppliers (or products) do we need to analyse if we want a clear idea of the proportion of suppliers complying with criterion X, with an error margin of 5%?"
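
As a rough illustration of how such a calculation works (the function name, the 95% default and the worst-case assumption p = 0.5 are mine, not from the article), the classic Cochran formula gives the minimum sample size for estimating a proportion within a chosen margin of error:

```python
import math
from statistics import NormalDist

def cochran_sample_size(confidence=0.95, margin=0.05, p=0.5):
    """Minimum sample size to estimate a proportion within +/- margin
    at the given two-sided confidence level (very large population assumed)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # critical value, ~1.96 for 95%
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# "How many suppliers do we need to analyse ... with an error margin of 5%?"
print(cochran_sample_size(confidence=0.95, margin=0.05))  # -> 385
```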

When the size of the population is known, there are very handy tables showing the minimum number of individuals to "survey". But when it comes to assessing complex, international supply chains, it may be difficult to know whether there are 1,000, 3,000 or 10,000 suppliers in your business network. One of the rules of thumb used by some professionals is called "the iceberg proxy": 8/10 of your supply chain is not visible and stays beyond your horizon.
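
Those handy tables are essentially the formula above plus the standard finite population correction; the sketch below (again my own illustration, not a table from the article) shows what the 385-supplier figure becomes for supplier bases of 1,000, 3,000 and 10,000.

```python
import math

def finite_population_sample(n0, population):
    """Shrink the 'large population' sample size n0 with the standard finite population correction."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = 385  # 95% confidence, 5% margin, from the previous sketch
for N in (1_000, 3_000, 10_000):
    print(N, finite_population_sample(n0, N))
# -> 279, 342 and 371 suppliers respectively
```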

In that case, there are other mathematical methods to define the number of suppliers to investigate. Depending on the confidence level you need, the sample size may jump from 100 to 10,000, with 99% of observations within range.
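
To make that jump concrete, here is the same Cochran-style calculation run with tighter requirements; the specific confidence/margin pairs are my own picks, but they show how quickly the required sample inflates.

```python
import math
from statistics import NormalDist

def sample_size(confidence, margin, p=0.5):
    """Same Cochran formula as in the earlier sketch."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

for confidence, margin in [(0.95, 0.05), (0.99, 0.05), (0.99, 0.01)]:
    print(f"{confidence:.0%} confidence, ±{margin:.0%} margin -> {sample_size(confidence, margin)} suppliers")
# -> 385, 664 and 16,588: tightening both confidence and margin drives the sample into the thousands
```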

Increasing the sample size implies higher data collection and verification costs, so a trade-off has to be found between data accuracy and $$. The stronger the product claims are (GMO-free, gluten-free, organic fibers, vegan, ...), the higher the cost of non-compliance will be. Consumers are more and more sensitive to product claims and labels, and this creates a new incentive to raise the bar in terms of confidence levels. A classic example is the number of controls/samples (200+) carried out every day in mineral water bottling operations, mineral water carrying more health-related claims than any other type of beverage.

We are living in a time when every business is talking about, and interested in, #bigdata, machine learning and predictive analytics. But big data needs to be big enough if you want a reliable vision of your supply chain, and that can only be achieved by investigating not dozens but hundreds of facilities. How large is your own sampling of suppliers today?
