Abstract
Tolerancing began with the notion of limits imposed on the dimensions of realized parts, both to maintain functional geometric dimensionality and to enable cost-effective part fabrication and inspection. Increasingly, however, component fabrication depends on more than part geometry, as many parts are fabricated according to a “recipe” rather than dimensional instructions for material addition or removal. Tolerancing of such products, referred to as process tolerancing, applies, for example, to IC chips. In tolerance optimization, a typical objective is cost minimization while achieving required functionality or “quality.” This article takes a different look at tolerances, suggesting that, rather than ensuring merely that parts achieve a desired functionality at minimum cost, a typical underlying goal of product design is to make money, more is better, and tolerances comprise additional design variables amenable to optimization in a decision theoretic framework. We further recognize that tolerances introduce additional product attributes that relate to product characteristics such as consistency, quality, reliability, and durability. These important attributes complicate the computation of the expected utility of candidate designs, requiring additional computational steps for their determination. The resulting theory of tolerancing illuminates the assumptions and limitations inherent in Taguchi’s loss function. We illustrate the theory using the example of tolerancing for an apple pie, which conveniently demands consideration of tolerances on both quantities and processes, and of the interaction among these tolerances.
1 Introduction
It is likely that the modern concepts of tolerancing have their origins in the notion of interchangeability of parts [1,2]. Such concepts date back over half a millennium, as Gutenberg’s press (1450s) relied on interchangeable letters. Over the ensuing years, it became clear that making parts interchangeable is not as easy as one might expect. Nonetheless, the emergence of steam power in the 1780s demanded that parts be made with challenging accuracy. In 1785, Benjamin Franklin reported that a French gunsmith was making muskets with interchangeable parts. And, 100 years later, with the emergence of the Industrial Age, mass manufacturing on an assembly line required part interchangeability.
Parker [3,4], working at the Royal Torpedo Factory in Scotland, is credited by Liggett [5] with being the first to formally address “position tolerance theory.” Since that time, tolerance theory has emerged as a major subdiscipline of engineering design and manufacturing. In the earlier years, tolerances were mainly associated with part geometry resulting in the discipline of geometric tolerancing. The need to properly interpret part specifications led to standards for dimensional tolerancing [6] and, with the emergence of computers, Requicha and Voelcker [7–10] developed a theory of geometric modeling that enabled computer-aided design.
A key problem in the setting of tolerances is referred to as the problem of “stack-up” [11]. This problem occurs when a series of parts must fit or work together within an overall tolerance. Problems of this sort led to the notion of optimizing the allocation of the individual part tolerances to achieve the overall desired tolerance at the minimum cost [12–14].
A major contributor to a theory of tolerancing is Taguchi [15]. His philosophy may be summarized in four statements: “It is better to be precise and inaccurate than being accurate and imprecise; Quality should be designed into the product and not inspected; Quality is achieved by minimizing the deviation from the target; [and] The cost of quality should be measured as a function of the deviation from target.” [16] This philosophy resulted in the concept of the Taguchi loss function.
More recently, it has been noted that many products are described not so much by their dimensions as by a “recipe” according to which they are manufactured. This is the case for integrated circuit chips and for food products such as an apple pie. In these cases, tolerances largely determine the quality, lifetime, or reliability of the product. These important attributes are not often captured by product descriptions, yet they can significantly impact the proclivity of consumers to purchase a product. Again, recognizing that demanding narrower tolerances results in higher costs, several researchers have sought to meet a set of performance requirements at minimum cost [17,18].
The problem with minimizing manufacturing cost is that this objective results in the trivial solution of manufacturing none of the product: if the manufacturer produces nothing, the manufacturing cost is $0.00. This, obviously, is not a helpful solution. To render the solution helpful, it is then necessary either to impose constraints on the optimization problem or to change the objective. Constraints typically take the form of a set of product requirements, whereas an alternative objective may seek minimum cost per item produced. Hazelrigg and Saari [19] note that constraints only remove alternatives from the allowable set of design choices and, if they remove the optimal point, that is, if the constraints are active, they always penalize performance. Thus, for optimal design, constraints should be avoided to the extent possible. One way to avoid constraints is to change the objective function to one that more accurately reflects the preference of the responsible decision maker. Noting that the underlying objective of a profit-making organization is to make money, Hazelrigg [20] presents a framework for product design optimization with this objective that also accounts for uncertainty.1
Tolerances introduce the opportunity to intentionally allow variability in a product, the incentive being to reduce the cost of manufacture. As noted, this variability results in attributes of concern to customers of the product that obtain directly from the product-to-product variation. Product variability introduces risk into a purchase decision that is not present in a deterministic product. Optimization of tolerances must take this risk into account. Thus, the purpose of this article is to show that the basic logic of Hazelrigg’s framework, with minor modification, can be applied to the optimization of both geometric and process tolerances separately or concurrently with the product design, with the objective being the maximization of a measure of net revenue or profit. The medium used to illustrate this application is the tolerancing of an apple pie.2 Although the optimization framework is designed to an objective of profit maximization, it is conveniently adaptable to other valid preferences.
2 A General Framework for Tolerancing
Hazelrigg and Saari [19] show that optimal system design, including tolerances, demands that all design decisions be made using an overall system preference. Thus, the underlying tenet of this article is that the purpose of tolerancing is to increase the value, measured as expected utility, E{u}, of a product to the producer of the product. This is a sensible tenet for a number of reasons. First, it is the producer of the product who decides what the tolerances should be and, for rationality, this choice must be based on a preference of the decision maker. Second, for a product that has multiple consumers, it is, in general, not possible to express a joint consumer preference that would enable rational choice of tolerances [21,22]. Third, for most products, the consumers are too far removed from the technical aspects of a product to care about tolerances or even understand them. Fourth, vendors or parts suppliers have conflicting interests with the producer and, for this and other reasons, cannot be left to select the tolerances on the parts they produce.
With this in mind, the product design optimization framework that shall be used here is a modification of Hazelrigg’s framework as shown in Fig. 1. The purpose of this framework is to enable computation of an objective function, namely, expected utility, E{u}, which is based on a logical and defensible preference of the relevant decision maker and that enables product design optimization under uncertainty. There are three entry points to this framework: the description of a baseline design, the specification of a set of beliefs defined as “exogenous” variables that characterize the extant uncertainty, and the expression of a preference from which we will be able to determine a utility measure.

A design is described fully by its configuration, M, its dimensions, x, and the tolerances, T, on the variables x. These are deterministic design decision variables subject to optimization. Typically, M consists of a set of statements that describe the system in detail,3 and the x are continuous real numbers that may include weights, voltages, volumes, and other such variables in addition to dimensions. The values of the variables x are toleranced, whereas the statements that comprise M are not. While instantiations of the product or system achieve the design descriptors, M, the achieved values of x, denoted x̂, will vary from the design whenever T ≠ 0. The variables T comprise the tolerances applied to x,4 while τ(M, x, T) are the attributes of the system that are a function of the variability in x̂ as determined by T. The values of τ would typically, but not necessarily, be determined by a Monte Carlo simulation, taking into account the degree to which the tolerance is not held with precision. τ is an aggregate parameter accounting for the variability in the achieved values x̂. a(M, x) are the as-designed system attributes taking M and x at their nominal values, that is, with T = 0. We might refer to the attributes a as performance attributes, such as maximum speed, acceleration, and gas mileage for a car, and the attributes τ as quality attributes, such as reliability and lifetime.

C(M, x, T) is the cost of producing the system, and q(τ, a, P, t) is the demand for the system as a function of its attributes, τ and a, its price, P, and possibly time, t.5 P is a design variable chosen to maximize E{u}. a and C are differentiable functions of x. R is the gross revenue derived from the system. y are exogenous variables that specify all uncertainties related to the system performance, cost, demand, and other factors such as the weather. u, utility, is a risk-adjusted measure of system performance in a specific simulation case, obtained by optimizing price. u is determined by the overall system preference, for example, to make money, more is better. The final measure of system performance is expected utility, E{u}, again typically obtained via a Monte Carlo simulation. Optimization loops are used to optimize the tolerances, T, and the system design variables, x. With this simple overview, we now look at the elements of this framework in more detail.
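To make the flow of Fig. 1 concrete, the sketch below walks one candidate design through the framework in Python. Every functional form here (the demand model, cost model, quality aggregation, and log utility) is our own illustrative assumption; the framework prescribes only the structure, not these models.

```python
# A minimal sketch of the Fig. 1 loop; all functional forms are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def attributes(x):
    # a(M, x): as-designed performance attributes (placeholder mapping)
    return x

def quality(x, T, n=10_000):
    # tau(M, x, T): inner Monte Carlo over achieved values x-hat within +/-T
    xhat = x + T * (2.0 * rng.beta(4, 4, size=(n, x.size)) - 1.0)
    return xhat.std(axis=0).sum()   # assumed aggregate variability measure

def cost(x, T):
    # C(M, x, T): cost grows as tolerances tighten (assumed form)
    return 1.0 + 0.1 / T.min()

def demand(a, tau, P):
    # q(tau, a, P): assumed log-linear demand, decreasing in tau and price
    return np.exp(5.0 + 0.5 * a.sum() - 2.0 * tau - 0.8 * np.log(P))

def utility(x, T, prices=np.linspace(1.0, 20.0, 200)):
    # u: price P is itself a design variable, chosen to maximize utility;
    # log profit encodes "make money, more is better" with risk aversion
    a, tau = attributes(x), quality(x, T)
    profit = max(demand(a, tau, P) * (P - cost(x, T)) for P in prices)
    return np.log(profit)

# One evaluation of a candidate design; an outer Monte Carlo over the
# exogenous variables y would average this quantity to estimate E{u}.
print(utility(np.array([1.0, 2.0]), np.array([0.05, 0.10])))
```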
The quantity of product sold at each point in time depends on the demand for the product, which is a function of its attributes and its price. The attributes of the product are a result of its design and its tolerances. Variability in products is the result of nonzero tolerances. The more nearly identical each individual product is to the nominal product, the more predictable it will be, and predictability of a product may itself be an attribute of concern to customers, frequently referred to as the product’s “quality” [28–30]. For example, customers are often concerned about getting a “lemon,” particularly in the purchase of a car, and they show this preference by paying more for cars that have good reliability reports.
3 The Mathematics of Tolerances
We now see that the Taguchi loss function relies on several underlying assumptions that were not obvious in the absence of the aforementioned derivation. First, it requires that the basic system design, namely, the variables x, be chosen to maximize E{u}, thus taking uncertainty and risk into account. Second, it requires that δT be small. But a small δT does not guarantee that nonzero tolerances have a small impact on E{u}. Indeed, tolerance variables can have associated attributes, τ, such as reliability or safety, that have a profound impact on E{u}. In these cases, it is important that the decision maker’s risk preference be taken into account. The Taguchi loss function does not do this.
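For reference, the quadratic loss under discussion can be stated compactly. The LaTeX below restates the standard Taylor-series form (consistent with the summary in the Conclusions); it is a reminder of the textbook result, not a reproduction of this paper’s numbered equations.

```latex
% Quadratic (Taguchi) loss about a target value m, with k a cost-scaling
% constant; mu and sigma^2 are the mean and variance of the achieved value.
% The truncation is valid only when the target is itself optimal, so that
% the first-order term L'(m) vanishes, and when the derivatives exist and
% are finite near the optimum.
L(\hat{x}) \;\approx\; L(m) + L'(m)\,(\hat{x}-m)
            + \tfrac{1}{2}L''(m)\,(\hat{x}-m)^2
         \;=\; k\,(\hat{x}-m)^2 ,
\qquad
E\{L\} \;=\; k\left[\sigma^2 + (\mu - m)^2\right].
```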
4 Computational Procedure
The computational framework follows the logic flow shown in Fig. 1, which outlines a procedure for the optimization of the product design, including tolerances, as a unified process. Unfortunately, for most products, this can lead to a highly complex and time-consuming set of computations. The complexity of the problem makes it desirable to resort to Monte Carlo methods, which sacrifice computational efficiency to achieve a simpler and less error-prone mathematical formulation. However, even this may leave the problem intractable. As a result, it is desirable to separate the dimensional optimization of the product from the tolerance optimization. The assumptions leading to Eq. (5) enable this separation. Thus, in practice, it is convenient to apply the framework in two steps: first, the optimization of the “dimensions” (target values of x) of the product and then, based on these optimized values, the optimization of the tolerances placed on the target values.
We shall begin our outline of the computational procedure under the assumption that the basic product design has already been optimized. Keep in mind that the validity of Eq. (5) depends on this being the case. Tolerances place “constraints” on the variability of the outcomes, x̂, of the decisions, x, with a concomitant cost. The goal of the computational procedure is to enable a selection of these constraints such that they maximize the value, measured as the expected utility, of the product to the producer. Under the condition that the basic product design, assuming all x values achieve their nominal values, is optimized to achieve maximum expected utility, Eqs. (8)–(10) afford some degree of independence from the basic design in the consideration of tolerances. Indeed, for a decision maker who is risk neutral, that is, for whom utility equals profit, minimizing the expected loss is a solution. However, minimization of the loss does not assure maximization of expected utility for decision makers who are not risk neutral. Because of this, we are forced to compute a utility difference in the context of the expected utility of the basic design. This requires evaluation of the expected utility of the basic design and evaluation of the total loss function as a deviation from that expected utility.
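A toy computation makes the risk-neutrality point explicit. The numbers below are invented for illustration: two tolerance choices tie on expected loss, yet a log-utility (risk-averse) decision maker strictly prefers the one with less spread.

```python
# Invented numbers: two tolerance choices with identical expected loss.
import numpy as np

base_profit = 10.0
p = np.array([0.5, 0.5])                  # two equally likely outcomes
loss_a = np.array([1.0, 1.0])             # choice A: tight spread
loss_b = np.array([0.0, 2.0])             # choice B: same mean loss, riskier

for name, loss in (("A", loss_a), ("B", loss_b)):
    profit = base_profit - loss
    print(name,
          "E{loss} =", p @ loss,                      # 1.0 for both choices
          "E{u} =", round(p @ np.log(profit), 4))

# A risk-neutral decision maker (utility = profit) is indifferent; under
# log utility, A is strictly preferred, so minimizing expected loss alone
# does not maximize E{u} for decision makers who are not risk neutral.
```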
The first issue, which would appear to be overlooked in many applications of the Taguchi loss function, is the need to take product price into account as a variable of choice to the manufacturer that also must be optimized. What makes the determination of the optimal price shift tricky is that the quality loss is not realized product by product as units come off the production line, but rather through consumer perceptions based on a history of many products produced under the design variables and tolerances of the product. Thus, in order to simulate the demand shift, for each product outcome (achieved values x̂ on a product-by-product basis), we must compute the product loss function. This requires the inner Monte Carlo simulation shown in Fig. 1 between the selection of tolerances, T, and their resulting quality measure, τ. This nesting of Monte Carlo loops can result in substantial increases in computational time. One approach to this problem is to assume that there is no significant variability in the outcome of x on a product-by-product basis, that is, no change large enough to alter the attributes, a, and to analyze only the impact of tolerance on one particular product instantiation. This provides an approximate result that can later be checked against a limited number of full simulations around the optimal tolerance design point.
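The nesting, and the shortcut of freezing the product outcome at its nominal value, can be sketched as follows. The profit model, distributions, and sample sizes are placeholders of our own choosing; the structure, not the numbers, is the point.

```python
# Nested Monte Carlo of Fig. 1 with the approximation discussed above.
import numpy as np

rng = np.random.default_rng(1)

def tau_of_T(x, T, n_inner=5_000):
    # Inner Monte Carlo: product-to-product variation under tolerances T
    xhat = rng.uniform(x - T, x + T, size=(n_inner, x.size))
    return (xhat.std(axis=0) / x).mean()   # assumed normalized variability

def expected_u(x, T, n_outer=2_000):
    # Approximation: compute tau once at the nominal design rather than
    # re-running the inner loop for every simulated product instantiation.
    tau = tau_of_T(x, T)
    y = rng.normal(1.0, 0.1, n_outer)       # exogenous (y) demand shock
    profit = y * (10.0 - 50.0 * tau)        # assumed profit model
    return np.log(np.maximum(profit, 1e-9)).mean()

# Nominal bake of 375 deg and 50 min, with +/-5 deg and +/-5 min tolerances
print(expected_u(np.array([375.0, 50.0]), np.array([5.0, 5.0])))
```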
The next problem we encounter is the appropriate expression for the cost of achieving a specific tolerance level. One way to achieve a given tolerance is to test to assure that all tolerances are met and to discard any parts or products that fail to meet the tolerance. This results in wastage costs, namely, the costs of manufacturing unsalable product. It is obviously desirable to keep wastage small. But this means maintaining tolerances with a high per-product probability, and this often demands more sophisticated and concurrently more expensive manufacturing equipment. Ergo, as a tolerance is reduced, the manufacturer must consider the purchase of more expensive manufacturing equipment. The tolerance cost model must reflect these costs.
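A minimal cost model of this kind, under our own assumptions (Gaussian process variation with standard deviation sigma, a hard band of ±T with out-of-band product discarded, and a 1/T equipment-cost term), might look like this; the paper does not prescribe a specific functional form.

```python
# Assumed tolerance-cost model; all parameter values are illustrative.
import math

def cost_per_salable_unit(T, sigma=1.0, unit_cost=3.0, equip_k=2.0):
    yield_frac = math.erf(T / (sigma * math.sqrt(2.0)))  # P(|error| < T)
    wastage = unit_cost / yield_frac    # discarded units raise unit cost
    equipment = equip_k / T             # tighter tolerance, pricier gear
    return wastage + equipment

for T in (0.5, 1.0, 2.0, 4.0):
    print(f"T = {T}: cost per salable unit = {cost_per_salable_unit(T):.3f}")

# With sigma fixed, loosening T lowers this cost; the countervailing
# pressure is the demand/quality loss from added variability, which the
# framework captures through tau rather than through this cost term.
```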
While tolerances denote the limits of acceptable outcomes of x, they do not describe the distribution of these outcomes. Yet, this distribution is needed in order to compute the loss function. While it might seem natural to describe the distributions of outcomes as Gaussian with a mean and standard deviation, the Gaussian distribution extends infinitely in both the positive and negative directions. This causes problems for two key reasons: first, actual parts do not have negative dimensions, and second, actual parts do not get infinitely large. One might think that, for a Gaussian distribution, these are extremely rare occurrences that can be neglected. The problem, however, is that a Monte Carlo analysis will interrogate the distribution hundreds of millions, perhaps even billions, of times, and rare events that cause errors are bound to occur. Thus, we have chosen to represent toleranced outcomes in the analyses presented here with beta distributions, although other distributions can be used within the context of the theory presented here. Beta distributions have finite limits, can be skewed, and can be shaped through the distribution parameters α and β.
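A short sketch of this choice: achieved values x̂ are sampled from a beta distribution rescaled onto the finite band [x − T, x + T]. The shape parameters below are arbitrary choices, shown only to illustrate the skewing.

```python
# Rescaled beta sampling on a finite tolerance band.
import numpy as np

rng = np.random.default_rng(2)

def sample_within_tolerance(x, T, alpha=4.0, beta=6.0, n=100_000):
    u = rng.beta(alpha, beta, size=n)   # support (0, 1): hard, finite limits
    return x - T + 2.0 * T * u          # rescale to the band (x - T, x + T)

s = sample_within_tolerance(x=375.0, T=5.0)
print(s.min(), s.max(), s.mean())       # never escapes the band; skewed low
```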
5 Apple Pie
As an illustration of the decision theoretic formulation of the tolerancing problem, we have chosen the tolerancing of an apple pie. The detailed geometric tolerancing of an apple pie would be a formidable task and, in the end, rather futile. No two pies are exactly alike, nor would anyone want them to be. So, geometric tolerancing is not appropriate for a pie, save for, perhaps, the diameter of the pie, as it has to fit in a box for marketing purposes. Instead of describing an apple pie by its detailed dimensions, which would involve volumes of numbers, we describe and tolerance an apple pie by its recipe. Accordingly, tolerances are placed on the measurable parameters of the recipe, including amounts of ingredients and processing parameters such as baking time and temperature. The apple pie recipe used in this example is given below.
Apple Pie
Ingredients
1/2 cup sugar, more to taste
1/2 cup packed brown sugar
3 tablespoons all-purpose flour
1 teaspoon ground cinnamon
1/4 teaspoon ground ginger
1/4 teaspoon ground nutmeg
6–7 cups peeled and sliced tart apples
1 tablespoon lemon juice
dough for double-crust pie
1 tablespoon butter
1 large egg white
Process
Preheat oven, 375 deg.
Toss apples with lemon juice, add sugar, toss to coat
Combine sugars, flour, and spices
Roll half of dough to 1/8-in.-thick circle,
transfer to 9-in. pie plate,
trim even with rim
Add filling, dot with butter
Roll remaining dough to 1/8-in.-thick circle
Place over filling, trim even with rim, seal, and flute edge
Cut slits in top
Beat egg white until foamy, brush over crust
Sprinkle with sugar
Cover with foil, bake 25 min
Remove foil and bake another 25 min
Cool on wire rack
This recipe is divided into two parts, the first part specifying amounts of each ingredient, the variables of which are measurable, continuous real numbers as required by Eq. (6). Tolerances would typically be placed on these variables. The second part specifies the processing steps. Some of these steps are amenable to tolerancing, such as the baking temperature and time. But others defy tolerancing or even a clear definition. Indeed, we become rather philosophical at this point, invoking Gödel’s theorem. Gödel’s theorem deals with the limits of rationality in reflexive systems.7 Language is a reflexive system, that is, we define words with words. Hence, all definitions rely on knowing the definitions of other words, which are only known by knowing the definitions of still other words, and so on. Thus, words describing the aforementioned process steps such as “toss,” “combine,” “beat,” “roll,” and “sprinkle” can never be defined with complete precision. What this means is that the process steps of a typical recipe can never be transferred without some loss of clarity and, as a result, there will always be some element of art in the manufacture of any product that involves process steps.
Nonetheless, we assume that these ingredient amounts and process variables have been duly optimized (∂E{u}/∂x = 0ᵀ at the nominal design x₀, Eq. (3)) and will now examine tolerances on the baking time and temperature. Note that these variables are measurable, continuous real numbers. To begin, we construct an elliptical penalty function, taking into account the correlation between these variables. Obviously, if the oven temperature is a bit low, a longer cooking time will at least partly compensate for this deviation. Figure 3 shows an elliptical loss function with zero loss occurring at a baking temperature of 375 F and a baking time of 50 min.8 The figure indicates that an increase in baking time of approximately 6 min will compensate optimally for a decrease in baking temperature of 10 deg. The dashed-line box in Fig. 3 denotes example tolerance limits of ±5 deg on temperature and ±5 min on baking time. If these tolerances were held, pies baked in conditions that exceed them would be discarded as a loss. However, pies baked in conditions just outside the upper-left and lower-right corners of this box would be classified as waste even though they are considerably more acceptable than pies baked in conditions corresponding to the lower-left and upper-right corners inside the box, which would be sold. Intuitively, a multiparameter tolerance criterion could enable the tolerances to be relaxed while reducing waste and maintaining or even improving quality. Multitolerance criteria can be easily implemented in the context of this tolerance-evaluation framework; however, our example problem sets tolerances on the variables independently.
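The corner effect can be checked numerically with an assumed elliptical loss; the coefficients below are illustrative stand-ins, not the values behind Fig. 3.

```python
# Elliptical loss over (temperature, time); coefficients are illustrative.
import numpy as np

T0, t0 = 375.0, 50.0                  # zero-loss point: 375 F, 50 min
H = np.array([[0.020, 0.012],         # assumed positive-definite quadratic
              [0.012, 0.050]])        # with a temperature-time cross term

def loss(temp, time):
    d = np.array([temp - T0, time - t0])
    return d @ H @ d                  # elliptical contours around (T0, t0)

for dT, dt in [(-5, +5), (+5, -5), (-5, -5), (+5, +5)]:
    print((dT, dt), round(loss(T0 + dT, t0 + dt), 2))

# The compensating corners (-5, +5) and (+5, -5) score markedly lower loss
# than the reinforcing corners, yet a box tolerance treats all four alike.
```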
Figure 4 shows a commercially manufactured apple pie that was baked with time and temperature parameters that exceeded appropriate tolerance limits. The crust is rather burnt and bears the taste of burnt pastry. Clearly, were this the norm, demand for this producer’s pie would be significantly reduced. One might be inclined to think that we have been a bit facetious in choosing to go into so much detail to analyze production tolerances on an apple pie. Be assured, however, that this is taken quite seriously in the apple pie baking industry [32–35]. Indeed, detailed studies have been conducted to identify the attributes of importance to apple pie customers and to estimate how variations in these attributes might affect demand for the product. However, we did not choose variables for our example from the literature. Rather, we chose values that, while not entirely unreasonable, emphasize aspects of tolerance optimization that one might encounter in typical manufacturing situations.
Finally, we took producer utility to be the log of the net revenue per production period. A simulation was coded that can take into account uncertainties in all major variables associated with the determination of performance (profit) as a function of tolerances. However, simulating enough cases to map out expected utilities for even two tolerances, including all uncertainties, can be quite time consuming, requiring days of run time or longer. Thus, for the example provided here, we chose to assume that the demand, demand elasticity, and production costs are known deterministically. With these assumptions, Fig. 7 was produced by computing results for every combination of baking time and temperature corresponding to the tick marks of this plot. This comprised a total of 600 time–temperature tolerance cases, with 1 million simulations per case. The run time for this was about 4 h.
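The structure of that sweep, in miniature: the grid bounds, loss coefficients, and cost terms below are our own stand-ins, and the per-case sample count is reduced so the demo runs quickly (the run described above used 1 million samples per case).

```python
# Grid sweep over 20 x 30 = 600 tolerance pairs; all models are stand-ins.
import itertools
import numpy as np

temp_tols = np.linspace(1.0, 20.0, 20)   # +/- deg on baking temperature
time_tols = np.linspace(0.5, 15.0, 30)   # +/- min on baking time

def eval_case(tT, tt, n=10_000):
    rng = np.random.default_rng(hash((tT, tt)) % 2**32)
    dtemp = tT * (2.0 * rng.beta(4, 4, n) - 1.0)   # achieved deviations
    dtime = tt * (2.0 * rng.beta(4, 4, n) - 1.0)
    quality_loss = 0.02 * dtemp**2 + 0.024 * dtemp * dtime + 0.05 * dtime**2
    tolerance_cost = 2.0 / tT + 2.0 / tt           # assumed cost of tightness
    return quality_loss.mean() + tolerance_cost    # objective to minimize

grid = {(tT, tt): eval_case(tT, tt)
        for tT, tt in itertools.product(temp_tols, time_tols)}
best = min(grid, key=grid.get)
print("best (temp tol, time tol):", best, "objective:", round(grid[best], 3))
```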
Clearly, computer run times for cases that seek to optimize multiple tolerances with full consideration of uncertainties can be an impediment to application of this approach. Nonetheless, the approach can be used in a “deterministic” mode to locate the regions of optimal solutions, and these can be verified with limited computations in the vicinity of the optimal solutions taking uncertainties into account. The key factor driving high computing time is the need for the solution of a nested Monte Carlo simulation, which could require a total of a billion or more simulations to achieve adequate accuracy.
6 Conclusions
The objective of this research is to cast the problem of tolerancing in the framework of decision theory. It was found that Hazelrigg’s design framework [20] could provide a mathematically rigorous basis for a theory of tolerancing with modification to enable the analysis of the so-called quality attributes emerging from product-to-product variability. The resulting analysis provides insights into the validity of the Taguchi loss function.
Taguchi defines a loss function that can be derived from a Taylor series around “target values” of design variables, with the argument that the first-order term of the series is zero because the loss is minimized at the target value. But this argument holds only if the design target values are themselves optimized with respect to an overall system or product value function, and only where all derivatives of this value function with respect to the design variables x and T exist and are finite in the vicinity of these optima, thus validating the Taylor series. Otherwise, the first-order terms do not vanish and, in fact, they undermine the concept of a tolerance by allowing larger tolerances to improve certain samples of the product. Although Taguchi recognizes the need for an optimization criterion, this requirement does not appear to be clearly recognized in applications of the loss function for tolerance optimization.
Second, the Taguchi method is most commonly applied assuming that the tolerances themselves are independent of each other. The decision theoretic formulation makes clear that this is not the case. Namely, the total value loss resulting from nonzero tolerances is not the sum of the Taguchi losses for each tolerance as determined independently.
Third, while the Taguchi loss function treats the cost of tolerancing to the manufacturer and the loss of value to the customers, the decision theoretic formulation makes clear that the important factor is profit or net benefit to the designer/manufacturer. It is this entity that decides what the tolerances should be, reaps the benefits of production, and owns the loss. This entity would likely prefer to maintain a profitable level of demand for the product, whereas nonzero tolerances reduce demand. Through consideration of demand, the decision theoretic approach takes consumer preferences into account, without the need to assess a group preference [21].
Fourth, the loss attributed to diminished demand resulting from nonzero tolerances can be mitigated by re-optimization of the price at which the product is sold. Although this is required for the optimization of tolerances, we see no evidence that it has been considered in applications of the Taguchi loss function.
Fifth, the determination of tolerances in a decision theoretic framework enables consideration of uncertainties affecting the optimal design of the entire product or system, and it accounts for the risk preference of the design decision maker. In this regard, it should be noted that, although the variation in the product resulting from nonzero tolerances may be small, it still has the potential to result in large losses in product value, thus invalidating the approximation of risk neutrality.
Lastly, we believe that the decision theoretic formulation of the tolerancing problem provides significant new insight into the mathematics of tolerancing and appears to encompass a range of tolerancing problems that span geometric dimensional tolerancing through process tolerancing.
Funding Data
The National Science Foundation (Award No. CMMI-1923164).
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The authors attest that all data for this study are included in the paper.
Nomenclature
- q = demand for a product
- r = discount rate
- t = time
- u = utility
- a = a set of attributes that determine the demand for a product
- v = eigenvectors of the Hessian matrix
- x = a set of measurable, continuous real variables, such as dimensions, that determine a basic design
- y = a set of statements that describe uncertainties on other variables
- H = Hessian matrix
- M = a set of statements describing a particular design configuration
- C = costs associated with the production of a product
- L = loss incurred because variables x do not achieve their target values
- P = price at which a product is sold
- R = revenue generated by the sale of a product
- V = net present value of profit
- T = a set of real numbers describing tolerances on the variables x
- ᵀ = superscript denoting transpose
- E{u} = expected utility
- τ = a set of attributes related to tolerances that affect demand for a product
- ν = eigenvalues of the Hessian matrix
Footnotes
That this example demands a self-imposed constraint highlights that minimization of cost is not a valid preference. It is important that a theory of tolerancing enable the use of a valid preference that does not require self-imposed constraints.
Approximately 50 million apple pies are manufactured annually for sale in grocery stores, generating an annual revenue of about a quarter of a billion dollars in 2020 (data from Information Resources, Inc.). The tolerancing of an apple pie is very much an engineering problem taken quite seriously by this industry, and it illustrates issues of tolerancing not obvious in more complex examples.
For example, such a statement might be, “The car has four doors.” These statements may include descriptions of manufacturing processes, operations, and maintenance, and even distribution, sales, and disposal/recycling.
Tolerances may be expressed in any convenient form that describes the variability in the achieved values, x̂, of x. For example, this may take the form of probability distributions on x̂.
This framework enables C, R, and q to be functions of time, t.
Ltot may contain additional terms relating to expenses such as warranty costs and liability costs. We include these in the loss function to emphasize that they are associated with tolerances.
A reflexive statement that illustrates the limit of rationality in language is, “This statement is a lie.” If the statement is true, it must be a lie. And, if it is a lie, it must be true.
The derivatives that determine the loss function (Eq. (6)) may be obtained by measuring demand as a function of the allowed variability of the product. For the example presented, we chose an illustrative loss function.