Extreme Programming (XP)
Extreme Programming (XP) is a software engineering methodology, the most prominent of several agile software development methodologies. Like other agile methodologies, Extreme Programming differs from traditional methodologies primarily in placing a higher value on adaptability than on predictability. Proponents of XP regard ongoing changes to requirements as an often natural and often inescapable aspect of software development projects; they believe that being able to adapt to changing requirements at any point during the project life is a more realistic and better approach than attempting to define all requirements at the beginning of a project and then expending effort to control changes to the requirements.
XP prescribes a set of day-to-day practices for managers and developers; the practices are meant to embody and encourage particular values. Proponents believe that the exercise of these practices—which are traditional software engineering practices taken to so-called "extreme" levels—leads to a development process that is more responsive to customer needs ("agile") than traditional methods, while creating software of similar or better quality.
Origins
Software development in the 1990s was shaped by two major influences: internally, object-oriented programming replaced procedural programming as the paradigm favored by some in the industry; externally, the rise of the Internet and the dot-com boom put a premium on speed to market and company growth as competitive business factors. Rapidly changing requirements demanded shorter product life cycles and were often incompatible with traditional methods of software development.
Information about the principles and practices behind XP was disseminated to the wider world through discussions on the WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). XP concepts have also been explained, for several years, using a hypertext system map on the XP website.
Current state
XP created quite a buzz in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins.
The high discipline required by the original practices often went by the wayside, causing certain practices to be deprecated or left undone on individual sites. Agile development practices have not stood still, and XP is still evolving, assimilating more lessons from experiences in the field.
Goal of XP
Extreme Programming is described as being:
- An attempt to reconcile humanity and productivity
- A mechanism for social change
- A path to improvement
- A style of development
- A software development discipline
The main aim of XP is to lower the cost of change. In traditional system development methods (like SSADM) the requirements for the system are determined at the beginning of the development project and often fixed from that point on. This means that the cost of changing the requirements at a later stage will be high.
XP sets out to lower the cost of change by introducing basic values, principles and practices. By applying XP, a system development project should be more flexible with respect to changes.
XP values
Extreme Programming initially recognized four values. A new value was added in the second edition of Extreme Programming Explained. The five values are:
- Communication
- Simplicity
- Feedback
- Courage
- Respect
Building software systems requires communicating system requirements to the developers of the system. In formal software development methodologies, this task is accomplished through documentation. Extreme Programming techniques can be viewed as methods for rapidly building and disseminating institutional knowledge among members of a development team. The goal is to give all developers a shared view of the system which matches the view held by the users of the system. To this end, Extreme Programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback.
Extreme Programming encourages starting with the simplest solution and refactoring to better ones. The difference between this approach and more conventional system development methods is the focus on designing and coding for the needs of today instead of those of tomorrow, next week, or next month. Proponents of XP acknowledge the disadvantage that this can sometimes entail more effort tomorrow to change the system; their claim is that this is more than compensated for by the advantage of not investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements implies the risk of spending resources on something that might not be needed. Related to the "communication" value, simplicity in design and coding should improve the (quality of) communication. A simple design with very simple code could be easily understood by most programmers in the team.
Within Extreme Programming, feedback relates to different dimensions of the system development:
- Feedback from the system: by writing unit tests or running periodic integration tests, the programmers get direct feedback on the state of the system after implementing changes.
- Feedback from the customer: the functional tests (also known as acceptance tests) are written by the customer and the testers, who get concrete feedback about the current state of the system. This review is planned once every two or three weeks so the customer can easily steer the development.
- Feedback from the team: when customers come up with new requirements in the planning game, the team directly gives an estimate of the time it will take to implement them.
Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break. The direct feedback from the system tells programmers to recode this part. A customer is able to test the system periodically according to the functional requirements (aka user stories). To quote Kent Beck, "Optimism is an occupational hazard of programming, feedback is the treatment."
Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow. This is an effort to avoid getting bogged down in design and requiring a lot of effort to implement anything else. Courage enables developers to feel comfortable with refactoring their code when necessary. This means reviewing the existing system and modifying it so that future changes can be implemented more easily. Another example of courage is knowing when to throw code away: courage to remove source code that is obsolete, no matter how much effort was used to create that source code. Also, courage means persistence: A programmer might be stuck on a complex problem for an entire day, then solve the problem quickly the next day, if only he or she is persistent.
The respect value manifests in several ways. In Extreme Programming, team members respect each other because programmers should never commit changes that break compilation, that make existing unit-tests fail, or that otherwise delay the work of their peers. Members respect their work by always striving for high quality and seeking for the best design for the solution at hand through refactoring.
Principles
The principles that form the basis of XP are based on the values just described and are intended to guide decision-making in a system development project. The principles are meant to be more concrete than the values and more easily translated into guidance for practical situations.
Feedback is most useful if it is done rapidly. The time between an action and its feedback is critical to learning and making changes. In Extreme Programming, unlike traditional system development methods, contact with the customer occurs in small iterations. The customer has clear insight into the system that is being developed. He or she can give feedback and steer the development as needed.
Unit tests also contribute to the rapid feedback principle. When writing code, the unit test provides direct feedback on how the system reacts to the changes one has made. Without such tests, if the changes affect a part of the system that is outside the scope of the programmer who made them, that programmer will not notice the flaw, and there is a large chance that the bug will appear only when the system is in production.
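For illustration, here is a minimal sketch of the kind of unit test XP relies on for this feedback; the function under test and the test names are hypothetical, not taken from any particular project.

```python
import unittest

def apply_discount(price, rate):
    """Return the price after applying a discount rate between 0 and 1."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # A 20% discount on 100.00 should yield 80.00.
        self.assertEqual(apply_discount(100.00, 0.20), 80.00)

    def test_no_discount(self):
        # A rate of zero must leave the price unchanged.
        self.assertEqual(apply_discount(59.99, 0.0), 59.99)

    def test_invalid_rate_is_rejected(self):
        # Rates outside [0, 1] should fail fast rather than corrupt data.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 1.5)

if __name__ == "__main__":
    unittest.main()
```

If a later change breaks any of these expectations, the failing test points directly at the regression instead of letting the flaw surface in production.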
Simplicity is about treating every problem as if its solution were extremely simple. Traditional system development methods say to plan for the future and to code for reusability. Extreme programming rejects these ideas.
The advocates of Extreme Programming say that making big changes all at once does not work. Extreme Programming applies incremental changes: for example, a system might have small releases every three weeks. By making many little steps the customer has more control over the development process and the system that is being developed.
The principle of embracing change is about not working against changes but embracing them. For instance, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration.
Activities
XP describes four basic activities that are performed within the software development process:
- Coding: The advocates of XP argue that the only truly important product of the system development process is code; without code you have nothing. Coding can be drawing diagrams that will generate code, scripting a web-based system, or coding a program that needs to be compiled. Coding can also be used to figure out the most suitable solution; for instance, faced with several alternatives for a programming problem, XP would advocate simply coding all of the solutions and determining with automated tests (see below) which one is most suitable. Coding can also help to communicate thoughts about programming problems: a programmer dealing with a complex problem and finding it hard to explain the solution to fellow programmers might code it and use the code to demonstrate what he or she means.
- Testing: One cannot be certain of anything unless one has tested it. Testing is not a perceived, primary need for the customer; a lot of software is shipped without proper testing and still works (more or less). In software development, XP says this means that one cannot be certain that a function works unless one tests it. This raises the question of defining what one can be uncertain about. You can be uncertain whether what you coded is what you meant; to test this uncertainty, XP uses unit tests, automated tests that exercise the code. The programmer writes as many tests as he or she can think of that might break the code being written; if all tests run successfully, then the coding is complete. You can also be uncertain whether what you meant is what you should have meant; to test this uncertainty, XP uses acceptance tests based on the requirements given by the customer in the exploration phase of release planning.
- Listening: Programmers do not necessarily know anything about the business side of the system under development; the function of the system is determined by the business side. For the programmers to find out what the functionality of the system should be, they have to listen to the business. Programmers have to listen to what the customer needs, try to understand the business problem, and give the customer feedback about his or her problem so as to improve the customer's own understanding of it. Communication between the customer and programmer is further addressed in the Planning Game.
- Designing: From the point of view of simplicity, one could say that system development doesn't need more than coding, testing, and listening; if those activities are performed well, the result should always be a system that works. In practice, this does not hold: one can come a long way without designing, but at some point one gets stuck because the system becomes too complex and the dependencies within it cease to be clear. This can be avoided by creating a design structure that organizes the logic of the system. Good design avoids many dependencies within a system, so that changing one part of the system does not affect other parts.
Practices
Extreme Programming has 12 practices, grouped into four areas, derived from the best practices of software engineering:
- Fine scale feedback
  - Pair Programming
  - Planning Game
  - Test Driven Development
  - Whole Team
- Continuous process
  - Continuous Integration
  - Design Improvement
  - Small Releases
- Shared understanding
  - Coding Standards
  - Collective Code Ownership
  - Simple Design
  - System Metaphor
- Programmer welfare
  - Sustainable Pace
Application of Extreme Programming
Extreme Programming remains a sensible choice for some projects. Projects suited to Extreme Programming are those that:
- Involve new or prototype technology, where the requirements change rapidly, or some development is required to discover unforeseen implementation problems
- Are research projects, where the resulting work is not the software product itself, but domain knowledge
- Are small and more easily managed through informal methods
Projects suited for more traditional methodologies are those that:
- Involve stable technology and have fixed requirements, where it is known that few changes will occur
- Involve mission critical or safety critical systems, where formal methods must be employed for safety or insurance reasons
- Are large projects which may overwhelm informal communication mechanisms
- Have complex products which continue beyond the project scope to require frequent and significant alterations, where a recorded knowledge base, or documentation set, becomes a fundamental necessity to support the maintenance
Project Managers must weigh project aspects against available methodologies to make an appropriate selection.
Lean Agile Process
The core idea of lean is to eliminate or reduce non-value-added activities (termed "wastes") and thus increase customer value. The Agile process itself is a lean method for the software development life cycle, and here I share a couple of Agile best practices adopted by many teams to make the Agile process extra lean.
My team follows an Agile method that serves needs at the enterprise level, producing frequent IRs (interim releases). (An IR consists of four sprints; each sprint runs on a two-week cycle.) At the end of an IR, the product is ready to ship if required. By adopting some of these best Agile practices, our process became leaner, improving both process efficiency and productivity.
There was no change in the Agile framework my team followed before and after; rather, a few GAAPs (generally accepted Agile practices), practiced religiously, made our Agile software development approach an even leaner process. I summarize them here in the context of how these practices help make Agile even more lean.
1. Backlog grooming
Backlog grooming is one of the GAAPs in which the Scrum team meets regularly to keep the product backlog items clean and up to date. Like other Agile events, the grooming meeting can be a timeboxed event.
Grooming of stories
The grooming meeting of stories generally happens once per sprint. The Scrum team meets to carry out the following activities:
- Creating new stories for prioritized epics so they can be completed in the current interim release
- Moving the prioritized stories to the top of the backlog
- Adding/updating the acceptance criteria for each story from the backlog list
- Estimating the story based on the acceptance criteria
The meeting is timeboxed to two hours for a sprint of two weeks.
Grooming epics
The grooming meeting of epics/features happens once per interim release (which consists of four sprints). The PO will prioritize the list of the features that he would like to see for the product based on customer requirements and business priorities. The team spends time in grooming the epics/features as prioritized by the PO. The epic grooming meeting can be a timeboxed activity of eight hours per IR.
Following are the activities performed by the Scrum team during the epic grooming meeting:
- Identifying the activities needed to develop a feature and, by splitting the feature vertically, dividing them so that they can be completed in one IR
- Creating the stories for the identified activities of a feature/epic for the next IR
- Adding the high-level acceptance criteria to the stories so that they can be estimated
- Estimating the epic
2. Acceptance Criteria-Driven Development
Acceptance Criteria-Driven Development (ACDD) is an Agile approach practiced by integrating TDD with acceptance criteria. It reduces defect rates because all test scenarios have already been covered as part of the acceptance criteria, and it thus increases the quality and value of the product, making the Agile process more lean. This approach to practicing Agile is influenced by Test-Driven Development.
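As a sketch of how ACDD can look in practice (the story, the acceptance criteria, and every name below are invented for illustration), each criterion is captured as an automated test before the feature is implemented:

```python
# Hypothetical story: "As a registered user I can reset my password."
# Acceptance criteria agreed with the PO before implementation:
#   1. A reset link is issued for a known e-mail address.
#   2. An unknown e-mail address is not revealed as missing.
#   3. A reset link expires after 24 hours.
import datetime

class PasswordResetService:
    LINK_TTL = datetime.timedelta(hours=24)

    def __init__(self, known_emails):
        self.known_emails = set(known_emails)
        self.issued = {}  # token -> expiry time

    def request_reset(self, email, now):
        if email not in self.known_emails:
            return "If the address exists, a link has been sent."
        token = f"token-{len(self.issued) + 1}"
        self.issued[token] = now + self.LINK_TTL
        return token

    def is_link_valid(self, token, now):
        return token in self.issued and now < self.issued[token]

# One test per acceptance criterion; the feature is done when all pass.
NOW = datetime.datetime(2024, 1, 1, 12, 0)

def test_known_email_gets_reset_link():
    svc = PasswordResetService(["user@example.com"])
    assert svc.request_reset("user@example.com", NOW).startswith("token-")

def test_unknown_email_is_not_revealed():
    svc = PasswordResetService(["user@example.com"])
    assert "link has been sent" in svc.request_reset("stranger@example.com", NOW)

def test_link_expires_after_24_hours():
    svc = PasswordResetService(["user@example.com"])
    token = svc.request_reset("user@example.com", NOW)
    assert not svc.is_link_valid(token, NOW + datetime.timedelta(hours=25))
```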
3. Code refactoring
Refactoring is the technical approach of restructuring the internal logic of code without affecting its externally visible behavior. Usually refactoring is performed to improve NFRs (nonfunctional requirements) or to make the code more extensible. If it is not addressed at the right time, deferred refactoring becomes technical debt and, by lean principles, a non-value-added activity. Hence code refactoring is important and needs to be performed regularly. During the following development events, think of reaching a state of "continuous refactoring":
- Add/update a feature: The code is updated while adding a new feature. After completing the implementation, think of refactoring; doing so every time a feature is added or updated brings the rigor of continuous refactoring.
- Fixing a defect: Follow three steps when fixing defects: red, green, and refactor. While the issue is present, the state is red. Fix the defect first, which makes the state green, as the feature now has no defect. Then think of refactoring around the code that was updated as part of the fix (a minimal sketch follows this list).
- Code review: When experienced professionals review the code, they will have plenty of opinions about its design based on their experience in the domain and technology. Such reviews help senior developers pass their knowledge on to junior developers. While incorporating the review comments, the developer should also refactor, guided by the reviewers' suggestions. Done consistently, this becomes a habit that leads to continuous refactoring.
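To make the red-green-refactor loop from the defect-fixing bullet concrete, here is a minimal, hypothetical sketch; the function and the defect are invented for illustration.

```python
# RED: a failing test exposes the defect (the original version ignored
# every item after the first one).
def total_price(quantities, unit_prices):
    # Defective version:   return quantities[0] * unit_prices[0]
    # GREEN (minimal fix): loop over all items and sum them.
    # REFACTOR:            same behavior, expressed more clearly.
    return sum(q * p for q, p in zip(quantities, unit_prices))

def test_total_price_counts_every_item():
    # 2 items at 10.0 plus 3 items at 1.0 should total 23.0.
    assert total_price([2, 3], [10.0, 1.0]) == 23.0
```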
Conclusion
The main purpose of this article is to share how GAAPs could help move Agile into a more lean process.
Backlog grooming
By conducting regular backlog grooming sessions, the team is well prepared for sprint/IR planning meetings, attaining the "Definition of Ready" before they begin. Grooming the stories in earlier sprints helps address required dependencies and design discussions/decisions ahead of the planning meetings, thus reducing waiting time.
Acceptance Criteria-Driven Development
Having acceptance criteria for every story ensures that the developer implements the feature to satisfy the requirements of the PO. The acceptance criteria cover multiple sections with all possible scenarios, which reduces the probability of defects and contributes to making the process still more lean, since addressing defects is a non-value-added activity.
Code refactoring
If refactoring is not addressed at the right time, it becomes technical debt, and today's technical debt becomes tomorrow's waste, as it requires effort to address. Technical debt needs to be addressed before the product is shipped as part of the next incremental release. Such refactoring efforts move the team toward a state of continuous refactoring.
Wheel and Spoke Model
The Wheel And Spoke Model is a sequentially parallel software development model. It is essentially a modification of the spiral model that is designed to work with smaller initial teams, which then scale upwards and build value faster. It is best used during the design and prototyping stages of development. It is a bottom-up methodology.
The wheel and spoke model retains most of the elements of the spiral model, on which it is based. As in the spiral model, it consists of multiple iterations of repeating activities:
- New system requirements are defined in as much detail as possible from several different programs.
- A preliminary common API is generated that is the greatest common denominator across all the projects.
- Implementation stage of a first prototype.
- The prototype is given to the first program, where it is integrated to meet that program's needs. This forms the first spoke of the wheel and spoke model.
- Feedback is gathered from the first program and changes propagated back to the prototype.
- The next program can now use the common prototype, with the additional changes and added value from the first integration effort. Another spoke is formed.
- The final system is the amalgamation of common features used by the different programs (forming the wheel) and the testing and bug fixes fed back into the code base (forming the spokes).
Routine changes and additions are eventually seen by every program that uses the common code, and the experience gained by developing the prototype for the first program is shared by each successive program using the prototype.
Applications
The wheel and spoke model is best used in an environment where several projects share a common architecture or feature set that can be abstracted behind an API.
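As a purely illustrative sketch (the interface, classes, and program roles below are hypothetical), the shared prototype can be thought of as a common interface that each program integrates in its own spoke:

```python
from abc import ABC, abstractmethod

class TelemetryStore(ABC):
    """Hypothetical common API: the shared denominator of storage needs
    across several programs."""

    @abstractmethod
    def write(self, channel: str, value: float) -> None: ...

    @abstractmethod
    def read_latest(self, channel: str) -> float: ...

class InMemoryTelemetryStore(TelemetryStore):
    """Prototype implementation handed to the first program (the first spoke)."""

    def __init__(self):
        self._data = {}

    def write(self, channel, value):
        self._data[channel] = value

    def read_latest(self, channel):
        return self._data[channel]

# Each program codes only against TelemetryStore; fixes and extensions made
# for one program's spoke flow back into the shared hub for the next program.
store = InMemoryTelemetryStore()
store.write("engine_temp", 91.5)
print(store.read_latest("engine_temp"))
```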
Advantages
- Low initial risk. Since one is developing a small-scale prototype instead of mounting a full-blown development effort, far fewer programmers are needed initially. If the effort is deemed successful, the model scales very well by adding new people as the scope of the prototype is expanded.
- Gained expertise applicable across different programs. The core team developing the prototype gains experience from each successful program that adapts the prototype and sees an increasing number of bug fixes and a general rise in code quality. This knowledge is directly transferable to the next program since the core code remains mostly similar.
Constructionist Design Methodology (CDM)
We present a methodology for designing and implementing interactive intelligences. The constructionist design methodology (CDM), so called because it advocates modular building blocks and incorporation of prior work, addresses factors that we see as key to future advances in AI, including support for interdisciplinary collaboration, coordination of teams, and large-scale systems integration. We test the methodology by building an interactive multifunctional system with a real-time perception-action loop. The system, whose construction relied entirely on the methodology, consists of an embodied virtual agent that can perceive both real and virtual objects in an augmented-reality room and interact with a user through coordinated gestures and speech. Wireless tracking technologies give the agent awareness of the environment and the user's speech and communicative acts. User and agent can communicate about things in the environment, their placement, and their function, as well as about more abstract topics, such as current news, through situated multimodal dialogue. The results demonstrate the CDM's strength in simplifying the modeling of complex, multifunctional systems that require architectural experimentation and exploration of unclear subsystem boundaries, undefined variables, and tangled data flow and control hierarchies.
Taguchi Philosophy
There has been a great deal of controversy about Genichi Taguchi's methodology since it was first introduced in the United States. This controversy has lessened considerably in recent years due to modifications and extensions of his methodology. The main controversy, however, is still about Taguchi's statistical methods, not about his philosophical concepts concerning quality or robust design. Furthermore, it is generally accepted that Taguchi's philosophy has promoted, on a worldwide scale, the design of experiments for quality improvement upstream, or at the product and process design stage.
Taguchi's philosophy and methods support, and are consistent with, the Japanese quality control approach that asserts that higher quality generally results in lower cost. This is in contrast to the widely prevailing view in the United States that asserts that quality improvement is associated with higher cost. Furthermore, Taguchi's philosophy and methods support the Japanese approach to move quality improvement upstream. Taguchi's methods help design engineers build quality into products and processes. As George Box, Soren Bisgaard, and Conrad Fung observed: "Today the ultimate goal of quality improvement is to design quality into every product and process and to follow up at every stage from design to final manufacture and sale. An important element is the extensive and innovative use of statistically designed experiments."
TAGUCHI'S DEFINITION OF QUALITY
The old, traditional definition of quality states that quality is conformance to specifications. This definition was expanded by Joseph M. Juran (1904-) in 1974 and then by the American Society for Quality Control (ASQC) in 1983. Juran observed that "quality is fitness for use." The ASQC defined quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs."
Taguchi presented another definition of quality. His definition stressed the losses associated with a product. Taguchi stated that "quality is the loss a product causes to society after being shipped, other than losses caused by its intrinsic functions." Taguchi asserted that losses in his definition "should be restricted to two categories: (1) loss caused by variability of function, and (2) loss caused by harmful side effects." Taguchi is saying that a product or service has good quality if it "performs its intended functions without variability, and causes little loss through harmful side effects, including the cost of using it."
It must be kept in mind here that "society" includes both the manufacturer and the customer. Loss associated with function variability includes, for example, energy and time (problem fixing) and money (replacement cost of parts). Losses associated with harmful side effects could include lost market share for the manufacturer and/or physical effects, such as those of the drug thalidomide, for the consumer.
Consequently, a company should provide products and services such that possible losses to society are minimized, or, "the purpose of quality improvement … is to discover innovative ways of designing products and processes that will save society more than they cost in the long run." The concept of reliability is appropriate here. The next section will clearly show that Taguchi's loss function yields an operational definition of the term "loss to society" in his definition of quality.
TAGUCHI'S LOSS FUNCTION
We have seen that Taguchi's quality philosophy strongly emphasizes losses or costs. W. H. Moore asserted that this is an "enlightened approach" that embodies "three important premises: for every product quality characteristic there is a target value which results in the smallest loss; deviations from target value always results in increased loss to society; [and] loss should be measured in monetary units (dollars, pesos, francs, etc.)."
Figure 1 depicts Taguchi's typical loss function. The figure also contrasts Taguchi's function with the traditional view, which holds that there are no losses as long as specifications are met.
Figure 1 Taguchi's Loss Function
It can be seen that small deviations from the target value result in small losses. These losses, however, increase in a nonlinear fashion as deviations from the target value increase. The function shown in the figure is a simple quadratic that compares the measured value of a unit of output, Y, to the target, T:

L(Y) = k(Y - T)^2

where L(Y) is the loss associated with that specific value of Y and k is a constant.

Essentially, this equation states that the loss is proportional to the square of the deviation of the measured value, Y, from the target value, T. This implies that any deviation from the target (based on customers' desires and needs) will diminish customer satisfaction. This is in contrast to the traditional definition of quality, which states that quality is conformance to specifications. The constant k can be determined if the loss L(Y) associated with some particular value of Y is known. Of course, under many circumstances a quadratic function is only an approximation.
Since Taguchi's loss function is expressed in monetary terms, it provides a common language for all the departments or components within a company. Finally, the loss function can be used to define performance measures of a quality characteristic of a product or service. This property of Taguchi's loss function will be taken up in the next section. But to anticipate the discussion of this property, Taguchi's quadratic function can be converted to the expected (average) loss:

E[L(Y)] = k[σ^2 + (μ - T)^2]

This can be accomplished by assuming Y has some probability distribution with mean μ and variance σ^2. This second expression states that the average or expected loss is due either to process variation (σ^2) or to being off target (the bias term (μ - T)^2), or both.
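As a worked example with assumed numbers (the repair cost, target, and measurements are invented for illustration), both forms of the loss function can be computed directly:

```python
import statistics

# Assume a deviation of 0.5 mm from target is known to cost $50 to repair,
# so k = L / (Y - T)^2 = 50 / 0.5**2 = 200 dollars per mm^2.
target = 10.0           # target value T (mm)
k = 50 / 0.5 ** 2       # loss coefficient

def loss(y):
    """Taguchi loss for a single measured value: L(Y) = k * (Y - T)^2."""
    return k * (y - target) ** 2

# Hypothetical sample of measured values.
sample = [9.8, 10.1, 10.3, 9.9, 10.2]
mu = statistics.mean(sample)
var = statistics.pvariance(sample)

# Expected (average) loss: E[L(Y)] = k * (sigma^2 + (mu - T)^2),
# i.e., loss from process variation plus loss from being off target (bias).
expected_loss = k * (var + (mu - target) ** 2)

print(f"loss at Y = 10.3 mm:          ${loss(10.3):.2f}")
print(f"average loss over the sample: ${expected_loss:.2f}")
```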
TAGUCHI, ROBUST DESIGN, AND THE DESIGN OF EXPERIMENTS
Taguchi asserted that the development of his methods of experimental design started in Japan about 1948. These methods were then refined over the next several decades and were introduced in the United States around 1980. Although Taguchi's approach was built on traditional concepts of design of experiments (DOE), such as factorial and fractional factorial designs and orthogonal arrays, he created and promoted some new DOE techniques, such as signal-to-noise ratios, robust designs, and parameter and tolerance designs. Some experts in the field have shown that some of these techniques, especially signal-to-noise ratios, are not optimal under certain conditions. Nonetheless, Taguchi's ideas concerning robust design and the design of experiments will now be discussed.
DOE is a body of statistical techniques for the effective and efficient collection of data for a number of purposes. Two significant ones are the investigation of research hypotheses and the accurate determination of the relative effects of the many different factors that influence the quality of a product or process. DOE can be employed in both the product design phase and production phase.
A crucial component of quality is a product's ability to perform its tasks under a variety of conditions. Furthermore, the operating environmental conditions are usually beyond the control of the product designers, and, therefore robust designs are essential. Robust designs are based on the use of DOE techniques for finding product parameter settings (e.g., temperature settings or drill speeds), which enable products to be resilient to changes and variations in working environments.
It is generally recognized that Taguchi deserves much of the credit for introducing the statistical study of robust design. We have seen how Taguchi's loss function sets variation reduction as a primary goal for quality improvement. Taguchi's DOE techniques employ the loss function concept to investigate both product parameters and key environmental factors. His DOE techniques are part of his philosophy of achieving economical quality design.
To achieve economical product quality design, Taguchi proposed three phases: system design, parameter design, and tolerance design. In the first phase, system design, design engineers use their practical experience, along with scientific and engineering principles, to create a viably functional design. To elaborate, system design uses current technology, processes, materials, and engineering methods to define and construct a new "system." The system can be a new product or process, or an improved modification of an existing product or process.
The parameter design phase determines the optimal settings for the product or process parameters. These parameters have been identified during the system design phase. DOE methods are applied here to determine the optimal parameter settings. Taguchi constructed a limited number of experimental designs, from which U.S. engineers have found it easy to select and apply in their manufacturing environments.
The goal of the parameter design is to design a robust product or process, which, as a result of minimizing performance variation, minimizes manufacturing and product lifetime costs. Robust design means that the performance of the product or process is insensitive to noise factors such as variation in environmental conditions, machine wear, or product-to-product variation due to raw material differences. Taguchi's DOE parameter design techniques are used to determine which controllable factors and which noise factors are the significant variables. The aim is to set the controllable factors at those levels that will result in a product or process being robust with respect to the noise factors.
In our previous discussion of Taguchi's loss function, two equations were discussed. It was observed that the second equation could be used to establish quality performance measures that permit the optimization of a given product's quality characteristic. In improving quality, both the average response of a quality and its variation are important. The second equation suggests that it may be advantageous to combine both the average response and variation into a single measure. And Taguchi did this with his signal-to-noise ratios (S/N). Consequently, Taguchi's approach is to select design parameter levels that will maximize the appropriate S/N ratio.
These S/N ratios can be used to get closer to a given target value (such as tensile strength or baked tile dimensions), or to reduce variation in the product's quality characteristic(s). For example, one S/N ratio corresponds to what Taguchi called "nominal is best." Such a ratio is selected when a specific target value, such as tensile strength, is the design goal.
For the "nominal is best" case, Taguchi recommended finding an adjustment factor (some parameter setting) that will eliminate the bias discussed in the second equation. Sometimes a factor can be found that will control the average response without affecting the variance. If this is the case, our second equation tells us that the expected loss becomes:
Consequently, the aim now is to reduce the variation. Therefore, Taguchi's S/N ratio is:
where S 2 is the sample's standard deviation.
In this formula, by minimizing S 2 , − 10 log 10 S 2 , is maximized. Recall that all of Taguchi's S/N ratios are to be maximized.
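A small numeric illustration of the nominal-is-best case (the sample values are invented): once an adjustment factor has removed the bias, the S/N ratio simply rewards the parameter setting with the smaller sample variance.

```python
import math
import statistics

def sn_nominal_is_best(measurements):
    """Taguchi S/N ratio as used above: -10 * log10(S^2), where S^2 is the
    sample variance; larger (less negative) values are better."""
    s2 = statistics.variance(measurements)  # sample variance
    return -10 * math.log10(s2)

# Two hypothetical parameter settings measured around the same target.
setting_a = [10.02, 9.97, 10.05, 9.96, 10.00]
setting_b = [10.20, 9.75, 10.15, 9.85, 10.05]

print(f"S/N for setting A: {sn_nominal_is_best(setting_a):.1f} dB")
print(f"S/N for setting B: {sn_nominal_is_best(setting_b):.1f} dB")
# Setting A has the smaller variance, hence the larger S/N ratio,
# so it is the more robust choice.
```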
Finally, a few brief comments concerning the tolerance design phase. This phase establishes tolerances, or specification limits, for either the product or process parameters that have been identified as critical during the second phase, the parameter design phase. The goal here is to establish tolerances wide enough to reduce manufacturing costs, while at the same time assuring that the product or process characteristics are within certain bounds.
EXAMPLES AND CONCLUSIONS
As Thomas P. Ryan has stated, Taguchi, at the very least, has focused "our attention on new objectives in achieving quality improvement. The statistical tools for accomplishing these objectives will likely continue to be developed." Quality management "gurus," such as W. Edwards Deming (1900-1993) and Kaoru Ishikawa (1915-1989), have stressed the importance of continuous quality improvement by concentrating on processes upstream. This is a fundamental break with the traditional practice of relying on inspection downstream. Taguchi emphasized the importance of DOE in improving the quality of the engineering design of products and processes. As previously mentioned, however, "his methods are frequently statistically inefficient and cumbersome." Nonetheless, Taguchi's designs of experiments have been widely applied and theoretically refined and extended. Two application cases and one refinement example will now be discussed.
K. N. Anand, in an article in Quality Engineering, discussed a welding problem. Welding was performed to repair cracks and blown holes on the cast-iron housing of an assembled electrical machine. Customers wanted a defect-free weld; however, the welding process had been producing a fairly high percentage of welding defects. Management and welders identified five variables and two interactions considered key to improving quality. A Taguchi orthogonal design was performed, resulting in the identification of two highly significant interactions and a defect-free welding process.
The second application, presented by M. W. Sonius and B. W. Tew in a Quality Engineering article, involved reducing stress components in the connection between a composite component and a metallic end fitting for a composite structure. The connections were traditionally made by bonding, pinning, or riveting the fitting in place. Nine variables that could affect the performance of the entrapped-fiber connections were identified, and a Taguchi experimental design was performed. The experiment identified two of the nine factors as significant, along with their respective optimal settings; as a result, stress levels were significantly reduced.
The theoretical refinement example involves Taguchi robust designs. We have seen how such a design can result in products and processes that are insensitive to noise factors. Using Taguchi's quadratic loss function, however, may provide a poor approximation of true loss and hence suboptimal product or process quality. John F. Kros and Christina M. Mastrangelo established relationships between nonquadratic loss functions and Taguchi's signal-to-noise ratios. Applying these relationships in an experimental design can change the recommended settings of the key parameters and result in smaller losses.
Added size (A): A = BA + PA
Estimated Proxy Size (E): E = BA + PA + M
PROBE estimating basis used: (A, B, C, or D)
Correlation (R^2):
Regression Parameter: β_0 (size and time)
Regression Parameter: β_1 (size and time)
Projected Added and Modified Size (P): P = β_0,size + β_1,size * E
Estimated Total Size (T): T = P + B - D - M + R
Estimated Total New Reusable (NR): sum of * items
Estimated Total Development Time: Time = β_0,time + β_1,time * E
Prediction Range: Range
Upper Prediction Interval: UPI = P + Range
Lower Prediction Interval: LPI = P - Range
Prediction Interval Percent:
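A minimal sketch of these PROBE calculations with invented inputs (the sizes, regression parameters, and prediction range below are hypothetical, not drawn from any real project):

```python
# Hypothetical PROBE inputs (all sizes in lines of code).
BA = 120      # base additions
PA = 340      # parts additions
M = 60        # modified size
B = 800       # base size
D = 50        # deleted size
R = 200       # reused size

# Regression parameters from historical data (assumed values).
beta0_size, beta1_size = 62.0, 1.12
beta0_time, beta1_time = 108.0, 0.35   # minutes
prediction_range = 150                 # from the historical t-distribution

A = BA + PA            # added size
E = BA + PA + M        # estimated proxy size

P = beta0_size + beta1_size * E               # projected added and modified size
T = P + B - D - M + R                         # estimated total size
time_estimate = beta0_time + beta1_time * E   # estimated development time

UPI = P + prediction_range             # upper prediction interval
LPI = P - prediction_range             # lower prediction interval

print(f"A={A}, E={E}, P={P:.0f}, T={T:.0f}, time={time_estimate:.0f} min")
print(f"prediction interval: [{LPI:.0f}, {UPI:.0f}]")
```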