
Cost of Quality Review

1. CoQ Literature Review

1.1. Introduction

 

The origins of the CoQ concept can be traced back to the American quality gurus Juran, Feigenbaum and Masser in the 1950s, and it became embedded in mainstream quality management thinking through its promotion in American standards (ASQC 1971, MIL-Q-9858A) and British Standard BS 6143 Part 2 (1990). Along the way the concept was also promoted by Philip Crosby in the 1970s and early 1980s, and became something of a cornerstone of the emerging Total Quality Management (TQM) theory of this period.

 

Its popular position within TQM theory has therefore led to numerous published articles and papers on the subject, which provide a wealth of review and critique of the concept for practitioners to consider when developing a CoQ approach. Despite the age of the concept, and its enshrinement in TQM theory through textbooks, articles and national standards, it still appears to be rarely used in practice in organisations (Shah and Fitzroy: 1998, Sjoblom: 1998, Oliver and Qu: 1999), and there has been little or no theoretical innovation since Feigenbaum despite evolutions in TQM theory and practice (Williams et al: 1999).

 

1.2. PAF and Optimum Cost

 

From a review of the literature it can be seen that the CoQ concept has become synonymous with what is known as the Prevention, Appraisal, Failure (PAF) model (Schiffauerova and Thomson: 2006); some writers consider this a preoccupation with the PAF model (Williams et al: 1999). The PAF model provides a method of categorising quality costs, and emphasises the trade-off between expenditure on prevention activities and the cost of failures to an organisation, following the well-worn cliché that prevention is better than cure. The original PAF theory, however, also warned of diminishing returns on money spent on prevention activities, suggesting that a point will be reached at which further expenditure on prevention no longer yields a reduction in failure costs sufficient to warrant the outlay. In effect, the PAF model was designed to enable organisations to quantify their CoQ within these standard categories and search for an optimum trade-off between prevention costs and failure costs. Standard diagrams of PAF costs indicate an optimum CoQ well below the perfection level of 100% conformance; that is, an organisation must accept a certain level of defects and the resultant failure costs.

 


Fig 5: Standard Cost of Quality Model (Schiffauerova and Thomson: 2006).
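
The trade-off depicted in this standard model can be made concrete with a few lines of code. The following is a minimal illustrative sketch, with cost functions and parameter values that are assumptions for demonstration rather than figures from the literature: prevention and appraisal spend is modelled as rising steeply as conformance approaches 100%, failure costs as falling in proportion to the defect rate, and the 'optimum' is simply the conformance level that minimises their sum.

```python
import numpy as np

# Illustrative sketch of the traditional PAF trade-off. The cost functions
# and parameters below are assumptions for demonstration, not empirical data.
q = np.linspace(0.50, 0.999, 500)            # conformance level (fraction good)
prevention_appraisal = 20.0 * q / (1.0 - q)  # rises steeply as q approaches 1
failure = 2000.0 * (1.0 - q)                 # falls in proportion to defect rate
total = prevention_appraisal + failure

optimum = q[np.argmin(total)]
print(f"Optimum conformance under these assumptions: {optimum:.1%}")
# With these arbitrary parameters the minimum falls at around 90% conformance,
# reproducing the traditional claim of an 'acceptable' level of defects.
```

Shifting the assumed parameters shifts the optimum, which is precisely the point the model's critics make: the location of the minimum is an artefact of the assumed cost curves, not an empirical fact.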

 

It is this central concept, that there is an optimum CoQ, that has attracted most of the criticism of the traditional PAF model attributed to Feigenbaum et al. This criticism is nothing new: although research papers over the last 20 years have continually illustrated the point, the concept was not without its detractors even when it was first published and developed in the 1950s. W. E. Deming, a contemporary of Juran and Feigenbaum and a hugely influential American quality guru credited with much of the success of post-war Japanese economic recovery, dismissed outright the concepts put forward by the CoQ theorists of the time, arguing that most costs associated with defective products delivered to customers, such as lost sales due to lost goodwill, were unknown and unknowable (Deming: 1986). Oliver and Qu (1999, p236) state that Deming's view was that 'cost analysis for quality is a misguided waste of time'. Theorists and researchers who investigate the argument regarding hidden and unknown costs often cite Deming directly in their papers (e.g. Albright and Roth: 1992, Dahlgaard et al: 1992), and argue that the only optimum point for the CoQ is at, or near, the perfection end of the spectrum of quality conformance (Porter and Rayner: 1992).

 

Indeed, despite Crosby's high-profile advocacy of a CoQ approach, he is also credited with popularising the 'zero defects' approach to quality, which has likewise become a cornerstone of TQM ideology. CoQ and zero defects therefore make strange bedfellows within TQM: on the one hand we propose accepting a level of defects, yet on the other we tolerate no defects at all. Yasin et al (1999) indicate that the concept of an optimum CoQ at less than 100% conformance is now largely consigned to history, as technological advances such as factory automation have made 100% conformance, or zero defects, an economic possibility (Juran and Gryna: 1993 cited in Yasin et al: 1999). Srivastava (2008, p194) also states that 'there seems to be a consensus that perfect outgoing quality can be achieved at a finite cost because of the rapidly developing technologies in automation, robotics, etc'.

 

Freiesleben (2004) contrasts the original PAF model, and its associated optimum quality point, with modern concepts of quality, in particular the Six Sigma approach and its near-perfection target of 3.4 defective parts per million, a conformance target significantly higher than the traditional model's approximate 80% optimum. Freiesleben (2004) theorises that the early PAF model was representative of an organisation first embarking on quality improvement, and therefore lacking the skills or resources to determine and fix all root causes of defects and failures. He also points out that the allocation of overheads to units of production changes as quality improves, since more good products share the cost burden, thereby lowering overall unit costs. Freiesleben (2004) draws together many earlier writers' theories regarding the stochastic nature of the original PAF model, taken from an era of manufacturing inspection quality control, and its failure to capture hidden costs, to illustrate what he terms the 'New CoQ Model', which places the optimum CoQ at the 100% conformance mark once these arguments are considered. In a later paper, Freiesleben (2005) utilises an 'opportunity cost' approach to uncover further hidden costs, for example the lost sales income from scrapped products that could have been sold had they been produced correctly.
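
Freiesleben's overhead-allocation point can be shown with a simple worked figure; the numbers here are assumptions chosen purely for illustration, not taken from his paper. If a fixed overhead O is absorbed only by the good units among N units started, the overhead carried per good unit falls as the conformance rate q rises:

```latex
\text{overhead per good unit} = \frac{O}{qN},
\qquad \frac{100\,000}{0.80 \times 1\,000} = 125
\quad\text{vs.}\quad
\frac{100\,000}{0.99 \times 1\,000} \approx 101
```

So improving conformance from 80% to 99% cuts the overhead burden on each saleable unit by roughly a fifth, a gain the traditional PAF curves do not capture.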

 

Although some of the writers mentioned have consigned the concept of optimising quality costs to history in favour of what is considered a more modern approach to quality management and improvement, the concept of optimisation can still be seen to persist in some articles; for example, Ball (2006) re-delivers this original concept in approaching the subject from an accountant's perspective. This may simply be a legacy issue, with accountants continuing to grapple with the original CoQ concepts while the quality profession has refined its thinking and theories. Although Ball (2006) draws on the original PAF categorisation and the concept of optimum cost, he expands on the subject by costing various scenarios to determine which course of action would reduce the overall cost burden of an organisation; his arguments on optimisation are therefore more about trading off alternative actions than about searching for an overall optimum CoQ for an organisation, which is what the original theory proposed. Other contemporary writers such as Srivastava (2008) and Crandall and Julien (2010) also continue to discuss CoQ in relation to the PAF model, continuing the enduring legacy of Feigenbaum's early work.

 

1.3. Effect of Increased Prevention

 

Claims have also been made against the traditional PAF model that organisations have achieved reductions in failures and associated costs without increasing their prevention expenditure (Porter and Rayner: 1992, Ittner: 1996, Shank and Govindarajan: 1993 cited by Sjoblom: 1998). Indeed, an investment in prevention activity may yield benefits in failure cost reduction that are only realised some time after the initial investment, and that may sustain ongoing improvement and failure cost reduction with no additional prevention investment (Hwang and Aspinwall: 1996, Ittner: 1996, Freiesleben: 2005). Equally, the positive effect of quality improvement on sales, for example, is not accounted for in the traditional PAF model. The positive impact of improved quality may come through increased sales volume, or through the ability to charge a premium price for superior goods (Porter and Rayner: 1992), and quality improvement could therefore be considered an area for positive investment rather than a cost control exercise (Williams et al: 1999).

 

1.4. Technical Validity

 

In exploring the original concept of the optimum CoQ in the PAF model, many researchers have called into question the technical validity of the original theorists' claims, and the data and facts upon which the original PAF diagram was based (Plunkett and Dale: 1988, Hwang and Aspinwall: 1996, Carr and Ponemon: 1994 cited in Yasin et al: 1999, Williams et al: 1999). Furthermore, many researchers note that there is in fact no clear definition of CoQ (Hwang and Aspinwall: 1996, Williams et al: 1999, Machowski and Dale: 1998 cited in Schiffauerova and Thomson: 2006), and many writers drawing on the original theory have used illustrative arguments and diagrams that vary so significantly from one another that the concept has become confused, and as a result confusing to any potential practitioner of the approach (Plunkett and Dale: 1988). Oliver and Qu (1999) make the interesting observation that the lack of a definition of 'quality' in the first instance is potentially problematic for any definition of CoQ. Yang (2008), however, relates the various definitions of quality costs to one another and concludes that 'quality costs' is synonymous with 'cost of poor quality', since many writers define the CoQ as the difference between an ideal operating cost (all processes being perfect) and the actual costs incurred by an organisation.
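
Expressed as a simple equation (the notation is mine, though it follows the definitions Yang relates), this consensus definition reads:

```latex
\mathrm{CoQ} = C_{\text{actual}} - C_{\text{ideal}}
```

where the ideal cost is the operating cost the organisation would incur if every process ran perfectly, right first time.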

 

Other writers draw back to the original arguments for the adoption of a CoQ approach, in particular the potential size of the CoQ and therefore its potential to capture the interest of top management. Many papers point to the wide range of values attributed to the potential magnitude of the CoQ within organisations, ranging from a moderate 5% of turnover (or sales value), through an often-quoted range of 25-35%, to extremes upwards of 45%. It appears, however, that in many cases these quoted figures have not been determined through research of any academic rigour, and they are further confused by the denominator used in such calculations, from sales value to manufacturing costs or other quantifications of overall business cost used to normalise, or put into perspective, the derived CoQ. Some writers go so far as to suggest that these figures appear speculative or even fictitious, and on that basis call into question the strategic significance of adopting a CoQ approach (Plunkett and Dale: 1988, Williams et al: 1999).
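
The denominator problem is easily illustrated with assumed figures: the same absolute CoQ produces very different headline percentages depending on the base chosen.

```latex
\frac{\text{CoQ}}{\text{sales}} = \frac{2\text{M}}{20\text{M}} = 10\%
\qquad\text{but}\qquad
\frac{\text{CoQ}}{\text{manufacturing cost}} = \frac{2\text{M}}{8\text{M}} = 25\%
```

Unless a paper states its denominator, a quoted CoQ percentage is therefore close to meaningless for comparison.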

 

Although Williams et al (1999) point to the issues mentioned above regarding the validity of published CoQ data, they do provide an excellent summary of that data. However, the data only appears to justify the adoption of quality improvement, or TQM, not necessarily the adoption of CoQ. The cost savings quoted do nevertheless appear to indicate the significance of the savings available to an organisation, and fall within the broad ranges quoted in many articles and papers.

 

Oliver and Qu (1999), however, point out that many organisations that have apparently adopted CoQ, and therefore reported some measures using this framework, have not involved their accounts departments in the collation and analysis of the cost data; they therefore question the reliability and accuracy of this data.

 

1.5. PAF Categorisation

 

The PAF model also draws criticism for its categorisation, with writers arguing that it is often difficult to attribute costs to these categories (Porter and Rayner: 1992, Dahlgaard et al: 1992, Hwang and Aspinwall: 1996, Chiadamrong: 2003); for example, a design review may be preventive on a new product development, or may be a failure cost if the review is the result of a reported defect or non-conformance in the product (Porter and Rayner: 1992). Oakland (1993, cited in Srivastava: 2008) notes the difficulty in determining prevention costs, arguing that everything done in a well-managed organisation could be considered an effort to prevent quality problems. In addition, these cost categories do not align with normal accounting practices, although many writers point to the inadequacy of standard accounting practices when it comes to capturing and reporting quality-related costs, rather than the PAF categories themselves being at fault (Ross and Wegman: 1990). Yang (2008) also comments on the inadequacy of traditional accounting methods for capturing CoQ, but further suggests an unwillingness among quality practitioners to adopt a CoQ approach.

 

It is interesting to note that CoQ has not been widely adopted in Japan (Ito: 1995). This may be a legacy of Deming's influence in Japan and his aversion to the CoQ concept, but writers attribute it to the Japanese approach to TQM, in which quality activities are totally embedded in the working operations of all departments and all personnel (Ito: 1995). The western model of quality management, or TQM, has been built more around the specialisation of the quality function into professional quality personnel, and the separation of activities such as inspection and auditing into quality departments, so that there is at least some ability to ring-fence what could be considered quality costs. The Japanese integrated model of TQM, by contrast, dilutes these costs across the organisation, making the PAF categorisation even more difficult to achieve (Porter and Rayner: 1992, Ito: 1995). Ito (1995), Anderson and Sedatole (1998, cited in Sjoblom: 1998), and Wu (2010) also make the point that the traditional PAF model presumes that quality is defined by conformance to specification, and does not consider quality of design. When quality of design is also considered in CoQ, the PAF model is found further wanting as an adequate methodology for capturing or defining these costs. Williams et al (1999) describe the PAF categorisation as a post-collection exercise, undertaken only to fit the PAF convention and serving no real purpose at all.

 

The apparent disinterest of Japanese organisations in CoQ throws up another strange contradiction in the adoption of this approach: it is Japanese economic success that has largely influenced the west to adopt TQM, yet western organisations looking to emulate that success by drawing on the Japanese TQM approach would not find CoQ as a core activity in Japanese TQM programmes.

 

1.6. Hidden Costs – Taguchi

 

There have been developments of, and alternatives offered to, the traditional PAF model following the critiques outlined above, which attempt to address some of the issues associated with the approach. In terms of the PAF model's failure to address hidden costs, some writers draw on the teachings of the Japanese quality guru Taguchi and his 'loss function' concept (Albright and Roth: 1992, Schvaneveldt and Enkawa: 1992, Kim and Liao: 1994, Ortiz: 2002). Taguchi argues that there is a loss to society for any deviation from the nominal value of a product specification, and that this cost is not incurred only when the product crosses the specification limits and thereby enters the traditional categorisation of non-conformance. Taguchi illustrates his arguments with parabolic curves plotted against the specification limits, together with statistical calculations to define the curve function and the resultant loss at any position along it. Despite the interest in Taguchi's methods from theorists, practitioners of the loss function appear few and far between, due no doubt to the ethereal nature of his arguments regarding loss to society, the complexity of the calculations involved, the assumptions behind the loss curve, and its focus on specification limits as opposed to less 'specified' measures of quality, such as customer satisfaction.
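
At the heart of Taguchi's argument is the quadratic loss function, which is simpler than its reputation suggests. The sketch below implements it in Python; the nominal value, specification width and cost at the limit are assumed figures for illustration only. The constant k is calibrated so that the loss equals a known cost A at the specification limit m ± Δ, giving L(y) = k(y − m)² with k = A/Δ².

```python
def taguchi_loss(y: float, nominal: float, half_width: float, cost_at_limit: float) -> float:
    """Quadratic (Taguchi) loss: zero at the nominal value, rising to
    cost_at_limit at the specification limits nominal +/- half_width."""
    k = cost_at_limit / half_width ** 2  # calibrate the curve to the limit cost
    return k * (y - nominal) ** 2

# Assumed example: nominal 10.0 mm, spec limits +/- 0.5 mm, and a cost of 20.00
# for a part sitting exactly on the limit.
for measured in (10.0, 10.25, 10.5):
    print(f"y = {measured}: loss = {taguchi_loss(measured, 10.0, 0.5, 20.0):.2f}")
# y = 10.0: 0.00, y = 10.25: 5.00, y = 10.5: 20.00 -- loss accrues well inside
# the specification limits, the hidden cost that conformance-based PAF ignores.
```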

 

Yang (2008) takes the traditional PAF model and adds two further categories to capture 'extra resultant costs' and 'estimated hidden costs', providing an extensive table of examples of where these costs might be incurred throughout an organisation's departments and processes, including those outside the traditional 'quality' function. He goes on to use a matrix approach to identify, quantify and apportion these costs. Unfortunately, Yang (2008) fails to provide a sound methodology for obtaining the hidden costs, relying on estimates while providing no estimation process.

 

1.7. Hidden Costs – Time

 

Nandakumar et al (1993) take a more time-based view in their strategy for considering quality costs, in an attempt to uncover hidden costs relating to production bottlenecks and the changes to production schedules that result from reworking or remaking products, and the subsequent impact on customer demand when deliveries are late. They argue that a time-based strategic view is more important than a purely traditional CoQ view, since a product with apparently low costs under the traditional model may actually have a large impact on timeliness, yet would not be identified for improvement under the traditional CoQ method.

 

1.8. Hidden Costs – Process Cost Model

 

Other writers who dismiss the traditional PAF model turn to process theory and the concept of a process cost model (Porter and Rayner: 1992), describing this approach in theoretical terms or deriving a mathematical model based on these principles (Chiadamrong: 2003). Many writers (e.g. Williams et al: 1999) associate this model with Crosby's simplification of quality costs into the Price of Conformance (POC) and the Price of Non-conformance (PONC), where the CoQ can be considered as the costs incurred because things were not done right first time. This concept simplifies the task to deriving what the costs should be in theory if everything were done to perfection, right first time; conceptually, these costs could be gathered through traditional accounting approaches. The process cost model also uses process mapping techniques to consider the flow of activities and products through an organisation, breaking these activities down into discrete steps. These steps can then be costed, along with appraisals of the yield of each process step and the resultant cost of non-conformance, as in the sketch below. The process mapping approach can uncover some of the hidden costs that the PAF model could not illuminate, for example wasteful rework loops in a manufacturing process that have over time become part of the process and been costed into the production overheads for a product. The process cost model has received significant attention, and has been formalised in British Standard BS 6143-1:1992 as an alternative to the original PAF model for assessing CoQ; however, it has not seen widespread usage (Goulden and Rawlins: 1995, cited in Schiffauerova and Thomson: 2006).
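
The arithmetic behind the process cost model is straightforward to sketch. In the minimal example below the step names, unit costs, yields and failure costs are invented for illustration: each mapped step carries a price of conformance (the cost of doing it right), while its first-pass yield determines the price of non-conformance it generates in rework or scrap.

```python
# Each tuple: (step, cost per unit done right, first-pass yield,
#              cost per failed unit, e.g. rework or scrap). Figures are assumed.
process_steps = [
    ("machining", 12.00, 0.95, 8.00),
    ("assembly",   7.50, 0.90, 5.00),
    ("finishing",  4.00, 0.98, 3.00),
]

units = 1000
poc = sum(cost * units for _, cost, _, _ in process_steps)
ponc = sum((1.0 - y) * units * fail for _, _, y, fail in process_steps)

print(f"POC = {poc:.2f}, PONC = {ponc:.2f}, "
      f"PONC share of total = {ponc / (poc + ponc):.1%}")
```

Because every step is mapped and costed explicitly, a rework loop that has quietly become 'part of the process' shows up as a non-conformance cost rather than disappearing into overheads.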

 

Concepts of process mapping can also be seen to have carried forward into current continuous improvement methodologies, such as Lean Manufacturing and Six Sigma, which rely on this process analysis methodology to identify areas for improvement.

 

1.9. Why Use Cost Data Anyway?

 

The above criticisms, coupled with the fact that very few organisations have successfully implemented a CoQ approach based on the PAF model or any of the alternative models offered (Sjoblom: 1998, Williams et al: 1999), seriously question whether any organisation embarking on a TQM programme, or looking for a new initiative within its quality management, should consider CoQ at all. Not only does CoQ lack any clear, proven, and widely accepted methodology; its usefulness in a contemporary business environment, where it is now largely accepted that traditional financial measures are unsuitable for driving business improvement, puts a further nail into the CoQ coffin (Kaplan: 1996). Sjoblom (1998) summarises the Kaplanesque shortcomings of financial data in providing accurate information quickly enough to facilitate decision making, and notes that other leading indicators, such as defect rates, can be useful proxies for this kind of data. As this proposition is now widely accepted, it would appear ludicrous to attempt to shoehorn other quality-related indices, such as numbers of customer complaints, into financial metrics. The original argument championed by the likes of Crosby, that CoQ lets the quality professional talk the language of business, may have been superseded by the likes of Kaplan's efforts to get senior management to understand the language of quality. Despite this, Sjoblom's survey (1998) indicates that there is still a desire to implement financial measures within quality management practices, mainly in order to gain top management commitment; however, as the survey was undertaken with quality practitioners who may well have been influenced by the prevailing wisdom of Crosby et al, it is dangerous to draw firm conclusions from it.

 

If we take the comment from Texas Instruments (Ittner and Kaplan: 1988 cited by Sjoblom: 1998), we can see some industry recognition of the positive effect of CoQ in gaining top management commitment, but also of its failure, because it uses financial data, to provide direction or analysis to middle management and below in addressing, diagnosing and solving quality-related problems. Texas Instruments went on to refine its opinion of CoQ (Ittner and Kaplan: 1989 cited by Sjoblom: 1998) as being little more than an awareness tool, and perhaps not the most effective awareness tool in an age where quality management has matured. This again points to the lost relevance, in a contemporary business environment, of CoQ and its original 1950s manufacturing roots.

 

Ball (2006) also supports the view that CoQ has been used as a high-level estimate of costs but has not provided useful data for continuous improvement, the latter requiring a 'bottom-up' approach to CoQ. Ball (2006) goes on to say that the traditional CoQ methodology only makes sense where top management is not aware of the magnitude of quality costs.

 

Williams et al (1999) echo the comment made in many of the papers in their review of CoQ: the activity can serve to grab management attention and commitment, but once that is achieved the purpose of CoQ is largely done. This would again suggest that in organisations with mature TQM systems, and acceptance at the highest levels of the importance of pursuing quality and quality improvement, there is no need to formulate CoQ measures.

 

Edmonds, Tsay and Lin (1989, cited in Oliver and Qu: 1999) further suggest that continuous quality improvement requires long-term commitment, and that the use of cost data in CoQ measures does not support this due to the 'short-termism' of its reporting, focusing on costs and cost reduction within a narrow time frame. It is notable also that Oliver and Qu (1999), in their survey of Australian firms, report that a common justification for not implementing CoQ was an existing and established continuous improvement culture.
