7.0 The Overarching Disciplines
Establishing and maintaining an integrated and successful implementation of a project or program requires some specific disciplines, first to get everyone on the same page, and then to keep them together as the project progresses. These disciplines, which are critical but often not appreciated by the participants, can be described as follows: 1) program definition and integration; 2) program evaluation; 3) product definition and integration; and 4) product evaluation. I will describe what I mean by each of these, and then look at the methods for assuring their effective implementation, including communication media, processes, tools, and data interpretation.
7.1 Program Definition and Integration
In Chapter 3, we discussed the program plan as the basic foundation of program definition and implementation, and also discussed the planning process. Chapter 4 described the management and organizational relationships involved. Here, we are concerned with implementing that plan, keeping it coordinated, and tracking progress with respect to it. If the plan is to be implemented well, it must be defined in sufficient detail that responsibilities and expectations are clear.
7.1.1 The Statement of Work
The requirements and objectives of any project are invoked in a statement of work. Whether the project is a contracted activity for a customer outside the Enterprise, or funded by the Enterprise, some description of the work to be done must be created. This statement of work may be very formal or very informal, as long as a work breakdown structure and the requirements for that work can be derived from it. It is the project manager's job to provide that derivation.
7.1.2 The Work Breakdown Structure (WBS)
As part of defining the work to be done, it is helpful to create a diagram of the total job. This chart, called a Work Breakdown Structure, is a convenient way to display and sum up all the activities and resources needed to do each part of the job, and to break them down into the lowest work package level. A project work breakdown structure looks like an organization chart, but organizes work and resources rather than people. The work breakdown structure with its dictionary (which is akin to the charter that goes with each block on a table of organization) defines all the work required. Every element of work deemed necessary to implement the project must be accounted for. Deliverable products and services generally provide the organizing principle for the WBS, with activities needed to create them as sub-elements under those products.
The work breakdown structure serves four related purposes. The first is to keep track of all elements of the work to be done in the project during both the planning and execution phases. It is a framework for assuring that nothing has been forgotten, and therefore defines the project statement of work. Second, it provides a structure for costing each element of work for the contract. Time-phased resources are priced and budgeted at every level of the WBS. Third, the same structure should be used during the project implementation to accumulate the actual costs as they are incurred; these will be compared to the time-phased budgeted resources for assessing progress and cost performance. Fourth, it is a structure for determining what disciplines are required and relating organizational responsibilities to the work to be done.
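The cost roll-up the WBS supports (purposes two and three above) can be sketched in a few lines: each leaf is a priced work package, and every higher-level element is simply the sum of its children. The element names and dollar figures below are invented for illustration, not taken from any real program.

```python
# Minimal sketch of WBS cost roll-up: a leaf element carries its own
# budgeted work-package cost; every parent sums its children.
# All names and numbers are illustrative only.

def rollup(element):
    """Return the total budgeted cost of a WBS element and its children."""
    children = element.get("children", [])
    if not children:                      # work-package level: priced directly
        return element["cost"]
    return sum(rollup(child) for child in children)

wbs = {
    "name": "Hybrid Vehicle Development",
    "children": [
        {"name": "Battery Subsystem", "children": [
            {"name": "Design", "cost": 400_000},
            {"name": "Prototype Fabrication", "cost": 250_000},
            {"name": "Qualification Test", "cost": 150_000},
        ]},
        {"name": "Program Management", "cost": 200_000},
    ],
}

print(rollup(wbs))   # 1000000
```

The same tree, with actual costs accumulated in place of budgets, supports the comparison of actuals to time-phased budgets described above.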
Figure 7-1 shows a simplified partial work breakdown structure example for the development of a fictitious new hybrid automobile. Only one twig of one branch of the actual WBS is shown. Note that while not shown, each end-item product element of the WBS occurs twice: once for the non-recurring development tasks, and again under vehicle production for the recurring costs, where only the cost of repetitively producing the vehicle is gathered. The importance of this is that if the product has a target unit production cost, that cost must be allocated and budgeted among all its parts, and the cost accumulation system imposed to gather actuals against the estimates under these recurring cost elements. This will be discussed further under design-to-unit production cost methodology in section 7.4.4.
Figure 7-1 A Simplified Work Breakdown Structure Example
7.1.3 Developing the Program Anatomy
One of the most important elements of the program planning and implementation process is the creation and maintenance of the master schedule or anatomy of the project or program. In order for the personnel working on each part of the project to do the detailed task planning, scheduling, and costing consistently and compatibly with other parts of the project, the overall anatomy and ground rules must be developed and adjusted as the elements of the project develop their responsive plans. This section discusses the general steps to get from an overall rough plan to detailed time-phased task definition and time-phased resource plans.
7.1.4 The Anatomy of a Typical Development Project
Figure 7-2 illustrates the rhythm of design evolution, test, and revision that is required in a sample missile development project.
While the time scales and quantities may differ for different types of products, the rhythm is similar and generally determined by the time to acquire parts. Given a preliminary set of functional requirements, this schedule provides for a flight 36 months from go-ahead. (EMD or engineering manufacturing development is Pentagon-ese for development for production, and LRIP stands for Low Rate Initial Production.)
Testing the interactions of a missile with its environment requires flying and expending it, and many complete flight tests are needed to evaluate the design over the full range of operations. It takes about 6 months to create a design and breadboard it to verify that it performs as anticipated. This is true whether it be electronics, electromechanical equipment, ordnance, or purely mechanical structure. Figure 7-3 shows a missile subsystem development plan that supports the prior missile development plan of figure 7-2. When breadboards are complete and brass boards are being completed and the mechanical packaging has been designed, preliminary drawings can be released to build the prototypes to be used for engineering evaluation (EDT). When environmental testing has been completed, any fixes needed can be incorporated by Engineering Change Order to the supplier or fabricating organization during the fabrication of initial flight and design verification (DVT) units.
Figure 7-3 Typical Subsystem Plan
The Ground Test Units (GTU) are flight configuration vehicles that never fly but are used for various integration tests. In this plan the first GTU will be built up using engineering development units (EDU) built for that purpose. The second GTU will use design verification test (DVT) hardware, that is, flight-quality hardware which will be tested to verify that the design is ready for flight. The third GTU may use DVT units initially and can then be upgraded with flight hardware if required. In any case, hardware will be designated for all GTUs and the simulation lab on a non-conflict basis.
In general, custom chips produced by foundries will be the pacing item in electronics packages during early development. Mechanical design and printed wiring boards can be designed and ordered during this time, with first use in brass boards to verify proper heat transfer, noise compatibility, and acceptable operation. Flight software is to be developed with the hardware by the same integrated product team (IPT), and is to have all tactical mission functionality in it from build 1 onward. There will be no incomplete software builds for EMD flights. Updates will be made as required during the EMD program, but only for findings from flight or to meet new requirements, which will be changes in scope. You can now begin to see how the planning process defines the internal contracts among the various members of the project.
With the level of detail in the subsystem plan above, 3T charts (task, talent, time) can be developed to establish the time-phased resources for each task needed to accomplish the work defined. This is a negotiated plan and considers the risk involved in doing the job for the resources negotiated, as well as the project milestones supported by each task against which accomplishment will be measured. Examples of such spreadsheets created in Excel are shown in chapter 8.
7.1.5 Program Integration Media
I personally like to use what I call a Program Requirements Manual (PRM) to provide a road map to the entire project. It is a living document. By that I mean that it is a continuously evolving loose-leaf bound document with frequent page changes and additions, and a change notice page that accompanies each change. Every page has a signature block for the Program Manager. Copies are maintained in the office of every activity leader involved in the program. Updates can be sent by e-mail, but the books must be hard copy. If someone new to the activity were to pick up this book at any point in time, it would lead that person to any detail of the program. It comes into being at the time the project implementation begins, incorporating and providing the means of updating the program plan.
The PRM has an introduction containing the purpose and objectives of the project. Presumably, these rarely change. It then lists all requirements documents that govern what is being done. These include contractual documents and master specifications, as well as any internal contracts or ground rules that were commitments for the program during its formulation. These are listed in order of precedence -- that is, which governs in case of conflict. As specific decisions are made over time that either flesh out details of implementation or modify existing details, these are coordinated and issued over the program manager's signature as program management memos.
These memos, which can be numbered according to the elements of the work breakdown structure, become page additions in the program requirements manual. Anyone working on the program can initiate a program management memo, but it must be coordinated with every potentially affected organization as shown by their authorized signature, and can only be issued when signed by the Project Manager or his or her designee. In all cases, in addition to describing an agreed-upon action, there must be a budget statement, which says either "This will be accomplished within existing budget" or "Budget for this task will be provided by ___". You can see that each memo is a negotiation and contract among the affected parties. If the program manager ever signs and issues a unilateral program management memo, it is like issuing a blank check.
The Program Requirements Manual contains as references the current versions of key product implementation and design guidance documents, which are issued and controlled by the organizations that create them. Note that the program manager does not control those implementing documents, but includes reference to the latest version as part of the road map into the product design and implementation.
7.1.6 Program Configuration Control - Block Change and Coordination
Periodically, the program plan is updated by block change, usually when some significant change or event occurs. The change is promulgated by a Program Management Memo with a narrative describing the changes to the program, and the master schedule baseline is redesignated by a change letter. Typically, budget revisions are made at this time as well, reflecting agreements or decisions based on the progress to date and problems encountered. Changes may be made to reduce cost, or in some cases the program manager may budget more work out of his or her management reserve.
7.1.7 Product Manufacturing Scheduling and Programming
Product manufacturing scheduling and programming must be undertaken soon after the project starts. While the details of the design of the various product parts may not be known, it is crucial to create the picture of what has to be built, bought, and stocked, and how and when it will be done. Usually the first product articles to be built will be used in design evaluation testing at various levels, and these hardware requirements are an output of the integrated test plan described in section 7.2.2.
Figure 7-4 Production Planning Example
Figure 7-4 shows a typical program planning chart taken from an actual program plan. In this production planning example, "low rate initial production" (LRIP) is a commitment to the first limited number of production missiles prior to completion of evaluation testing. Their purpose is to ramp up manufacturing rate at all suppliers and contractor production facilities to assure that the implementation of those increased rates does not introduce undesired changes in the production article and that affordability targets are in fact being met, before full production is implemented.
This Gantt-type chart from a real program purports to show a logical project phasing into low rate initial production, but a slightly different display method on the next chart will show it to be wrong.
Figure 7-5 A More Useful Display
Figure 7-5 shows an approach I would recommend remembering when you start to lay out production programs.
The changes to the prior chart display the production schedule in a more meaningful way, and we can now see the fallacy of the plan. The parallelograms are a technique I like to use to show the production "waterfall" when you plan to build a significant number of units. If each unit build were a horizontal line broken into the two phases shown, the loci of placing orders, start of fab and assembly, and delivery would form the parallelograms shown. The vertical scale is 10 product units per division, and the horizontal scale is time, where in this case the divisions are quarters of a year. While in reality products are typically bought and built in lots, the parallelogram shows the "just in time" dates for each unit. The milestone FUE at the upper right stands for "First Unit Equipped" and refers to the full complement of systems available to the first operational combat battalion for this missile program example.
The first thing we see is that there is an 18-month gap in operations between the start of the last development unit and the start of the first low rate initial production units, both at suppliers and in fab and assembly. What are all those manufacturing people at suppliers and in our own facility going to do during the 18-month stand-down? They are going to other jobs, and may never be back. The result? You don't buy what you tested, because manufacturing continuity has been lost. Moreover, quadrupling the production rate after that gap is likely to be problematic.
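The arithmetic behind the parallelograms is simple back-scheduling: each unit's order date and fab/assembly start follow from its delivery date and the procurement and build spans. The sketch below exposes the stand-down numerically; all dates, spans, rates, and quantities are invented for illustration and are not taken from the figures.

```python
# Sketch of the production "waterfall": for each unit, back-schedule the
# order date and fab/assembly start from its delivery date and the spans.
# All spans, rates, and dates (in months from go-ahead) are illustrative.

def waterfall(first_delivery, rate_per_month, count, fab_span, order_lead):
    """Yield (unit, order_date, fab_start, delivery) for each unit."""
    for unit in range(count):
        delivery = first_delivery + unit / rate_per_month
        fab_start = delivery - fab_span          # start of fab and assembly
        order = fab_start - order_lead           # supplier order placement
        yield unit + 1, order, fab_start, delivery

# Development phase, then LRIP at four times the rate:
dev = list(waterfall(first_delivery=36, rate_per_month=1, count=12,
                     fab_span=6, order_lead=9))
lrip = list(waterfall(first_delivery=65, rate_per_month=4, count=40,
                      fab_span=6, order_lead=9))

gap = lrip[0][2] - dev[-1][2]    # fab-start gap between the two phases
print(f"fab/assembly stand-down: {gap:.0f} months")   # 18 months
```

Laid out this way, the gap between the last development start and the first LRIP start falls out of the data immediately, which is exactly what the parallelogram display makes visible on the chart.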
A Better Plan
Figure 7-6 is a much more rational production plan that meets the same goals with continuity of manufacture from development into production.
Figure 7-6 A More Logical Plan
We have added pre-production missiles built during EMD to demonstrate that when missiles are built to the intended product disclosure, those missiles can be built and will perform as anticipated. Proofing of production tools, processes, and facilities as well as the paper are accomplished with these vehicles. Problems are addressed and corrected before low rate initial production missiles are procured. Some of these vehicles can fulfill the requirements for customer operational evaluation as well.
There is other useful information derivable from this chart. We can see what the production rates have to be, and how production lots can be planned as well as when funding must be available for each lot. By displaying the plan in this way, you can quickly see where problems lie and how to plan major decision milestones consistent with a plan that provides continuity of operations.
In order to establish the lead times and spans depicted in the parallelograms of Figure 7-6, evaluation of the purchasing, fabrication, and assembly sequence for each part of the product must be undertaken.
This process, typically called programming the job, generates a whole "tree" of lower-level documentation for every segment of the product. This data set includes the operations planning, manufacturing methods and processes; tooling to be used; determination of need dates for start of next assembly; fabrication spans and start dates; purchasing lead times, order dates for buying and kitting of the bill of materials; and job instructions.
All subassemblies and assemblies of the end product are identified by drawing number, and a manufacturing / assembly sequence and job programming chart is created. This is another form of work breakdown structure. In one respect this chart is a subset of the recurring part of the overall project WBS. It identifies what work has to be done at each level of product manufacture and assembly along with the applicable configuration management information. Eventually, for every part number, it becomes the top document of a set that captures the detailed job definitions and shop paper for every manufacturing operation, assembly operation and job instruction.
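The need-date logic that the manufacturing/assembly sequence chart supports can be sketched as a walk down the assembly tree: a parent's start date sets the need date for each of its children, and each child's span then sets its own start or order date. The part names and spans below are illustrative only, not drawn from any real programming set.

```python
# Sketch of back-scheduling need dates down an assembly tree: the parent's
# start date is the need date for every child; each child's span then
# determines its own start (or order placement) date.
# Names and spans (in weeks) are illustrative only.

def schedule(node, need_date, out=None):
    """Walk the assembly tree, assigning a start date to every element."""
    if out is None:
        out = {}
    start = need_date - node["span"]
    out[node["name"]] = start
    for child in node.get("children", []):
        schedule(child, need_date=start, out=out)   # child due at parent start
    return out

tree = {
    "name": "Battery Pack Assembly", "span": 4, "children": [
        {"name": "Cell Module Fab", "span": 6, "children": [
            {"name": "Cell Purchase Order", "span": 16},  # supplier lead time
        ]},
        {"name": "Enclosure Fab", "span": 8},
    ],
}

starts = schedule(tree, need_date=52)   # pack needed at week 52
print(starts)
```

Running this shows the cell purchase order must be placed at week 26 to support a week-52 need date, which is how long-lead items surface from the programming process.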
A partial and very simplified example for a hybrid automobile might look like Figure 7-7. The actual chart is, of course, much larger. For each segment or subassembly, the various functional organizations that add value to that segment commit to a plan for their part of the activity, using this work breakdown as the framework for job package planning and authorization.
Figure 7-7 A Simplified Manufacturing/Assembly Sequence Chart
For the manufacturing planning and scheduling process required to prepare to build a run of product, the operations organizations need detailed design disclosure data, which, while not yet released, must be pretty mature. They are going to generate an entire production documentation set, job instructions, tooling, and order plans based on that data and the commitment of when final design data will be available for that particular segment. Consequently, they don't want to even start the product master scheduling activity until the engineering data is mature.
Engineering, on the other hand, will not want to commit until it can confidently predict that the design is mature. On the third hand, we have the project manager and the customer service folks, who have to make commitments about when the product deliveries will occur, and in many cases plan and reserve outside services and facilities. They cannot afford to find out that the plan cannot be achieved after the fact. This conflict in equally valid objectives is fundamental and one of the most difficult to manage.
Since nothing can proceed without the design data, the responsible engineering organization must commit to when preliminary design data and "OK to buy to" lists of materials will be available. They must also commit to when the design will be released for manufacture, based on completion of development prototype testing and correction of any problems found during those tests. Agreements must be made on what kind of change control will be used during the transition from building experimental product hardware to design release.
On the basis of this preliminary planning data, Purchasing must estimate order placement dates, and delivery spans for all components, raw materials, and outside supplied sub-assemblies. Manufacturing and Quality Assurance must begin creating skeleton work instructions and manufacturing planning for the creation of each and every segment including design of tooling. Estimates are created for the manufacturing spans needed assuming all raw materials and purchased parts are on hand when needed. On the basis of the resulting data, shop loading and tooling requirements can be determined and hiring of any required additional talent planned.
When I ran large programs, I would insist that Engineering and Operations had to begin this manufacturing programming process within six months of starting the project, and complete it by the end of the first year. They, of course, thought I was insane. But it was imperative to validate the project master schedule assumptions about lead times, order dates, etc. which, until validated, were based purely on prior program experience. That should be sufficient, but as we have said several times, until you have buy-in and commitment from those who are going to do the job, you have nothing real. And there is an axiom that seems to apply to every project: No matter who signed up to what in the proposal and initial plan, there will be a host of individuals and groups who will tell you that it can't possibly be done once you have committed to do it. So, I wasn't insane -- just scared witless.
Once we got past the point of talking past each other over this dilemma, we arrived at an uneasy but workable solution -- to add a preliminary product manufacturing scheduling phase ahead of the formal programming. While this entailed added work, it created a preliminary skeleton plan which, together with "producibility teams" (forerunners of what are now called integrated product teams) working together from the onset, provided continuing visibility to the Operations organizations as the design evolved. This in turn allowed Operations to do much of the detailed planning, influence the design, and identify design constraints. Thus they were able to proceed on key tasks such as tooling design and, more importantly, to make commitments on manufacturing time spans. This whole concept of concurrency and integrated product development teams is sometimes referred to as concurrent engineering.
A formal chairman or czar is needed to keep this process on track and to keep all the necessary feet to the fire. This must be a respected person who has the knowledge and authority to incentivize people to find solutions when they reach an impasse. The end result of this preliminary product manufacturing scheduling effort is a negotiated baseline schedule for the creation of both the design data and the product; this activity will either validate the overall project master schedule or modify it. The result is a set of commitments that every activity literally signs up to based on the commitments made by all the others. Experience has shown this approach to yield about 90% fidelity to what actually results. Moreover, if things change from these preliminary programming assumptions, the implications of those changes are better understood.
7.1.8 Internal Contracts -- Change Control
The key principle in all that we have discussed so far is the establishment and maintenance of an integrated, running contract with all participating parties. This is a contract that reflects program decisions made, and advises everyone involved that those decisions have been made. Again, note that we are not talking about design decisions, but about program implementation decisions. The same thing is true of manufacturing processes to be used. In each case, the result is a set of commitments against which accomplishment will be measured by the responsible management. The product manufacturing scheduling process generates contracts among the activities that depend on each other. Unilateral legislation of these plans by program or senior management does not result in commitments or ownership. The process described here does.
7.1.9 Design and Configuration Change Control
One of the important features of the internal contract set is a rigorous system for managing changes in the product configuration. This discipline is often referred to as configuration management. It is a formal process for evaluating proposed changes for their impact on the project as a whole, including the necessity, what the cost and schedule implications are, and when they will be introduced into the process.
The change control process thus ensures coordination with all potentially affected parties before the change is approved, indicates the changes in the internal contracts that will be required, and finally provides the paper and hardware trail for the introduction of that change. The reason for including it under project integration rather than product integration is that design changes usually have far-reaching project implications that may not be evident to the originator.
7.2 Program Evaluation (Program Status and Control)
Whether a project's or program's end-products are hardware, software, or services, there are three basic measures of accomplishment: technical performance, cost performance and schedule performance. Technical performance is the accomplishment of the requirements of the task, as outlined in the statement of work. Cost performance is the cost of accomplishment of the work when compared to the bid or budgeted cost. If production unit price is an important attribute (and it usually is), cost performance would include real measures of that cost prior to, and early in, actual production. Schedule performance is when the work is accomplished compared to the master schedule.
Program evaluation is the process of determining whether the requirements of the project are being met, and if they are being accomplished on schedule and at the budgeted cost. And when targets are not being met, it is program evaluation's job to recommend corrective actions.
7.2.1 Technical Performance Assessment
As the project progresses, the program manager and his or her management team must have visibility into how well the evolving product will meet the requirements of the project. This requires some form of ongoing assessment. It is often called technical performance measurement, and is best accomplished by creating a formal process for periodic review of every part of the end product during its development. The chief project engineer, who is responsible for the product design, should be the source of this evaluation, using the framework of a requirements breakdown structure. This third form of product WBS, rather than defining the work and resources necessary to create the product, defines the product's function or use. It is used to allocate requirements as discussed in section 7.3.1 B. A proven technique for technical performance assessment is discussed in section 7.4.
7.2.2 Cost and Schedule Status and Control
Part of the process of initiating project work is the authorizing of budgets into the accounting system of the Enterprise for all organizations involved in the project. These budgets will be structured in cost accounts keyed to the work breakdown structure and the organizations implementing those elements of work. During the planning phase of the project, these budgets should have been planned on a time-phased basis by resource type for the entire project duration. If done properly, on day one of the project these resource plans are activated in the accounting system and are the budget baseline that goes with the current baseline program plan. All the schedule milestones are based on the program master schedule and will be used as the yardstick for measuring accomplishment.
7.2.3 Accomplishment Measures -- Earned Value Systems and Milestones
It is important to use cost and schedule plans and actual costs and accomplishment to assess progress on the project. One of the most effective means of measuring progress in a large project is the use of an earned value system. This type of system has been required in some form on most Department of Defense cost-reimbursable contracts.
The system uses the budgeted cost to accomplish each component of work by the scheduled time as the measure of planned accomplishment. The budgeted cost of the work actually performed (the "earned value") is then compared with the budgeted cost of the work scheduled and with the actual cost of the work performed; the differences are the schedule and cost variances from plan that signal potential problems. A description of a typical earned value measurement system is discussed in chapter 8.
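The standard earned value comparisons reduce to a few formulas, shown below with the conventional abbreviations: BCWS (budgeted cost of work scheduled), BCWP (budgeted cost of work performed, the earned value), and ACWP (actual cost of work performed). The dollar figures are invented for illustration.

```python
# Earned value variance sketch, using the conventional abbreviations:
# BCWS = budgeted cost of work scheduled (planned accomplishment)
# BCWP = budgeted cost of work performed (earned value)
# ACWP = actual cost of work performed
# The figures below are illustrative only.

def variances(bcws, bcwp, acwp):
    """Return schedule and cost variances and the matching indices."""
    sv = bcwp - bcws          # negative: behind schedule
    cv = bcwp - acwp          # negative: over cost
    spi = bcwp / bcws         # schedule performance index
    cpi = bcwp / acwp         # cost performance index
    return sv, cv, spi, cpi

sv, cv, spi, cpi = variances(bcws=500_000, bcwp=450_000, acwp=520_000)
print(sv, cv)                         # -50000 -70000
print(round(spi, 2), round(cpi, 2))   # 0.9 0.87
```

In this illustration the project is both behind schedule (it earned $450K of the $500K of work scheduled) and over cost (that earned work actually cost $520K), which is exactly the early-warning signal the system is designed to give.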
Sometimes maligned as an unnecessary burden, an earned value system used as the primary cost management system is a very powerful tool for measuring accomplishment and foreseeing cost and schedule problems before they become evident by other means. I found it helpful to use this type of system even on company-funded projects, without the excessive government-mandated variance report preparation that most users find burdensome. The disadvantage of earned value systems is that this capability must be integrated with the Enterprise accounting system in order for the data to be reliable and effectively used.
7.3 Product Definition
Figure 7-8 illustrates the disciplines and their relationships needed to create an integrated product definition.
Figure 7-8 Product Definition Disciplines
7.3.1. Systems Engineering Requirements Definition and Allocation
Earlier in the book, we noted that engineering tends to be organized along the lines of the product elements. Systems engineering is therefore a crucial overarching discipline in any complex project where several product elements are involved. As a result, regardless of where the systems engineers report in the organizational structure, they must recognize that they are, among other things, the program manager's technical staff. Design authority is delegated to the appropriate product segment design teams along with the responsibility for meeting their allocated requirements. This section presents some of the important functions that the project manager depends on the systems engineers to perform.
A. Standard Characteristics for Analysis and Performance Assessment
In products that involve complex systems and interactions, it is important to use the same system description for all the analytical work that is being done to either set requirements or evaluate how the product design will perform. One way to do this is to have a configuration for analysis that is controlled and only changed as a block so that everyone stays on the same page, so to speak.
A tool that we used for this purpose was called the Standard Characteristics for Analysis. This document is controlled by the systems engineering organization and is revised periodically when substantial physical configuration and/or functional changes have accumulated in the product that could affect simulations, software or previous analysis.
B. Functional Analysis and Requirements Definition and Allocation
One of the first things that must be done as a project begins the design synthesis process is to allocate all the required attributes of the product among all of its component parts. A tool for assisting in allocating requirements and later evaluating the design compliance with them is the requirements breakdown structure that defines the allocated baseline. Let's take our hybrid automobile as an example. Some of the attributes are: battery size and capacity, acceleration, fuel mileage, service life, maintenance requirements, weight allocations, and especially unit cost. Dimensions, stability, cabin volume, and seating capacity are also important. Figure 7-9 shows a partial and simplified requirements breakdown for this vehicle. As with other examples in this chapter, I have only expanded part of the tree; if complete, it would take several pages. Also, what is shown is largely qualitative, as it might be early in design synthesis, rather than quantitative as it would need to be at completion.
Figure 7-9 A Sample Requirements Breakdown Structure
7.3.2 Design Integration
One of the most important design disciplines is the design integration function. Given a set of design requirements and a design concept, those requirements must be parceled out among the elements of the design, and the interfaces between the elements defined, controlled and validated. Also included in this discipline is the definition and control of product external interfaces for handling, receiving power, operating, monitoring, and testing if required.
A. Design Integration and Data
Several tools can help to perform this integration. One is a product design data book, with a section devoted to each segment of the product as a design guide. This is distributed to each subsystem or segment design team. It is used to document and disseminate the allocations of attributes and derived requirements, such as physical and functional interfaces, and induced environments that must be met by each segment based on the evolving design. Typically, this Data Book is, like the Program Requirements Manual, a living document that can be updated and augmented with additional pages as the design progresses. All applicable design requirements are included or referenced in this data book. Key subsystem requirements documents are referenced but not controlled in the Data Book as they are developed, in the same way that the Data Book is referenced but not controlled in the Program Requirements Manual.
B. Key Attributes Allocation (Functions, Unit Cost, Product Life and Reliability, Tolerances, etc.)
This allocation process is initiated with targets based on comparable experience from past work, while holding a reserve to relieve problems where the cost of achieving a target proves too high. It must then be an iterative process with the design teams to evaluate the difficulty and cost of meeting the targets in each area. Tolerance allocations should normally be combined statistically rather than stacked worst-on-worst, to avoid unreasonable cost.
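The difference between worst-on-worst stacking and statistical (root-sum-square) combination can be shown with a short calculation. The tolerance values below are invented for illustration; the root-sum-square form assumes independent, roughly centered part distributions.

```python
import math


def worst_case_stack(tolerances):
    """Worst-on-worst: every tolerance at its limit simultaneously."""
    return sum(tolerances)


def statistical_stack(tolerances):
    """Root-sum-square: assumes independent, centered distributions."""
    return math.sqrt(sum(t * t for t in tolerances))


# Five parts in a stack, each toleranced at +/-0.10 mm (illustrative).
tols = [0.10] * 5
print(worst_case_stack(tols))   # ~0.50 mm
print(statistical_stack(tols))  # ~0.224 mm
```

The statistical stack is less than half the worst-case stack here, which is why holding every part to a worst-on-worst budget drives unnecessary precision, and cost, into the design.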
C. Interface Definitions Internal and External
Interface definitions between segments of the product, and external to the product, are documented in Interface Control Documents. These include tolerances on critical dimensions established by tolerance analysis, and the characteristics and tolerances of functional interface signals, power, etc.
D. Configuration Change Control
An important systems engineering role, in its dual capacity as design integrator and project technical staff, is to chair the change control process described earlier in Section 7.1.6. This process, while not explicitly shown in Figure 7-8, can be thought of as an overlay to that schematic, assuring that proposed design changes, whatever their source, are evaluated for their effect on every part of the process shown there.
E. Participation in Test Planning
While many test requirements are generated by each product segment, it is important that systems engineering integrate those test requirements to assure test completeness and that assembly-level tests are perceptive in verifying the function of the product system for its intended use. This can be done by creating a total product test requirements specification that is the top spec for all test requirements and invokes all segment test specs for both development and in-process production tests.
One can see from the foregoing discussion that the overarching disciplines generally grouped under systems engineering span across all other activities, keeping them coordinated as the product design evolves, and providing configuration control discipline for the integrated design. Systems Engineering looks at the interactions between product segments and between the product and the environment in which it will be used. The individual product teams must recognize the importance of this function and rely on the systems engineering member of their team to tie back to other teams.
The use of integrated product teams creates the connection between the design task expertise and the producing and user expertise from the beginning of the design activity. As a result, when a problem is encountered, no matter where in the anticipated life cycle or segment of the product, the integrating disciplines, properly used, will register it and cause a decision to be made about its disposition. The next section shows how that part of the process works.
7.4 Product Evaluation
7.4.1 Design Performance Evaluation
A. Technical Performance Measurement - Use of the Requirements Work Breakdown Structure
We will evaluate technical performance against the allocated baseline. As previously mentioned, one way of organizing and tracking these allocated requirements is through the use of a requirements breakdown structure.
An effective method for implementing technical performance is depicted in Figure 7-10. This system is maintained in the project systems engineering organization by personnel knowledgeable about the attributes that have been allocated, the design process, and system interactions. In an integrated product team environment, these could be the systems engineering members on each product segment team. The technique requires self-reporting and allows a continuous objective evaluation of the status of the product design in achieving the project requirements defined in the allocated baseline requirements breakdown structure.
Figure 7-10 A Technical Performance Measurement System
The scoring system we have employed uses ratings of 1 to 5 with the following definitions (sometimes red, orange, yellow, green, and blue are used):
1 or red - Major problem requiring management attention and help needed. Current design will not meet requirement. No corrective action identified.
2 or orange - Significant problem, but no immediate management action needed; no plan for corrective action yet, but evaluation continuing. Watch this space.
3 or yellow - Potential problem in meeting requirement, limited data available, corrective action plan exists, results will determine rating.
4 or green - Current design appears satisfactory based on results thus far. Minor problems, if any, have been worked out; solutions exist and are in progress. Evaluation tests not complete. No corrective action needed.
5 or blue - Current design is in good shape. Design evaluation tests completed and no problems remain.
Additionally, a circle around the rating signifies that a corrective action plan exists but has not yet been implemented or completed. A subscript number is displayed with the three lower ratings to remind viewers of the number of weeks the rating has existed.
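The rating notation above, including the circle for an existing corrective action plan and the weeks subscript on the lower ratings, lends itself to a small status record. The sketch below is an illustrative rendering of that notation and of the exception-basis reporting described next; the requirement names and ratings are invented.

```python
from dataclasses import dataclass

RATING_COLORS = {1: "red", 2: "orange", 3: "yellow", 4: "green", 5: "blue"}


@dataclass
class TpmStatus:
    """One allocated requirement tracked by technical performance measurement."""
    requirement: str
    rating: int                 # 1 (worst) .. 5 (best), per the definitions above
    plan_exists: bool = False   # rendered as a circle around the rating
    weeks_at_rating: int = 0    # subscript shown with the three lower ratings

    def label(self) -> str:
        text = f"{self.rating} ({RATING_COLORS[self.rating]})"
        if self.plan_exists:
            text = f"({text})"          # parentheses stand in for the circle
        if self.rating <= 3:
            text += f" wk{self.weeks_at_rating}"
        return text


def exception_report(items):
    """Present status on an exception basis: only problem areas (ratings 1-3)."""
    return [s for s in items if s.rating <= 3]


statuses = [
    TpmStatus("Fuel mileage", 3, plan_exists=True, weeks_at_rating=4),
    TpmStatus("Unit production cost", 4),
]
for s in exception_report(statuses):
    print(s.requirement, s.label())
```

Only the fuel mileage item appears in the exception report; the satisfactory (green) item is tracked but not presented.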
This assessment, when taken together with cost and schedule status, gives an overall picture of program health at any time. All attributes, including unit production cost, are included in this evaluation process.
The presentation of status using this approach is done on an exception basis. Problem areas are tracked using an indentured presentation chart approach. While gathered and evaluated by the systems engineering personnel, the system is fed by the ongoing design, analysis, and test activities. This information can be presented to the chief project engineer and all responsible engineering managers at a weekly project engineering status meeting with every product segment or element covered at least once per month. While it is a self-evaluation system, its integrity is assured by the fact that the engineering management and program management, who should be aware of most events on the program, are in a position to evaluate the evaluation itself, as it is given. If the designers have not 'fessed up to a problem to the evaluator, or if the evaluator has not done an adequate assessment, it soon becomes obvious.
B. Product Requirements Compliance
As the technical staff of the project manager, the systems engineering requirements group has a responsibility for devising an evaluation plan for assuring that the product as designed will comply with the requirements. This may require a combination of testing, analysis, and simulation. As part of this charge, the systems engineers who will do this evaluation should get together with the independent test and evaluation group to create a plan taking full advantage of the test team's expertise as well as their own.
7.4.2. Integrated Test Program Planning
For any project, whether a large building, a large software program, or the development of a hybrid car, perceptive testing is an important part of the design evaluation. It is important to establish criteria for properly testing the various elements of the product so that confidence in each of the parts that make up the whole is uniformly high.
To assure that the proper tests are planned to evaluate the product design, and that sufficient hardware, software, and test equipment are provided for all testing, including the element development tests, it is very desirable to have an "Integrated Test Program Plan". Regardless of who has the responsibility or need for a test, it must be accounted for in this plan. In addition, after development has been completed and the product design is released for production, the in-process tests must be provided for as well.
In our model, we charge the Test and Evaluation organization, the independent test agency during development, with the creation and maintenance of the integrated test plan. This includes the common development test strategy to be used. The quality assurance organization will be the independent testing agency during production.
A. Development Test Consistency
A common set of defined test objectives tailored to the unique nature of each component or process provides guidance to the design and test teams so that oversights are minimized. The following division of responsibilities has been found to work well for complex projects.
B. Subsystem or Segment Development Test Requirements
The cognizant product subsystem or element engineering group is responsible for defining the development tests needed to create their element of the product with confidence. This includes informal experimental tests and prototyping. They also define what tests are needed for design verification: that is, those tests run on the first article built to the released design disclosure. However, commencing with design verification testing, the test organization is responsible for conduct of the test to the engineering requirements.
C. Integration or Combined System Testing
For combined systems-level testing, systems engineering defines the test requirements in a "Test Requirements Specification" and the Test Organization is responsible for designing and conducting these tests.
D. In-Process Test Consistency
The same philosophy applies to in-process testing. When the product is ready to produce, QA has the responsibility for in-process testing at all levels, building on the methods used during the development phase of the project.
E. Test Hardware Requirements
The T and E organization should create and maintain an up-to-date test hardware requirements list as part of the Integrated Test Program Plan. This list establishes all the various units of subsystem and total system hardware needed to complete the integrated test program. This plan, of course, requires input from all groups working on the project, and allows multiple usage of hardware and test equipment where feasible to save the cost of duplicating hardware. On the other hand, one of the most common mistakes in projects is not recognizing that failures occur during development, and not providing enough hardware to support the necessary test and evaluation activities. Penny-wise and pound-foolish. The list should provide for appropriate spares to assure that losses during test do not hamper the overall program with hardware shortages.
7.4.3 Product Liability Evaluation
In today's litigious environment, product liability has become a major area of concern. Product safety must be a design consideration, and an independent assessment of the resulting design is an important discipline. It is only a slight exaggeration to say that the product will only occasionally be used in the manner intended by the developer. Properly prepared users manuals (with appropriate warnings on misuse) help, but the product liability evaluation should take to heart the consequences of blatant misuse and mishandling.
7.4.4 Product Unit Cost Evaluation
If you want to achieve a production unit cost target for your product, it must be designed to achieve that cost. To do this requires three things:
First, you must have a means for all of the activities that add cost to the creation of the product to be actively involved in the design process from the onset. Each of these activities must commit to targets for its cost contribution. Producibility teams, or Integrated Product Teams as described in Chapter 5, are that mechanism.
Second, you must have a mechanism for allocating cost targets, creating cost estimates, evaluating those estimates by the IPTs, and finally accumulating actual costs against those targets when building samples of the product.
Third, you must have representative samples of the product built early enough to see problems, fix them in the product design disclosure, and prove them out before production begins in earnest.
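The second mechanism, allocating targets and comparing them against the IPTs' running estimates, amounts to a simple rollup with exception flagging. The sketch below is illustrative; the segment names, dollar figures, and the reserve line are all invented.

```python
def cost_status(targets: dict, estimates: dict):
    """Compare current IPT unit-cost estimates against allocated targets.

    Returns (total_target, total_estimate, overrun_items) so management
    can see both overall health and which allocations are in trouble.
    """
    total_target = sum(targets.values())
    total_estimate = sum(estimates.get(k, 0.0) for k in targets)
    overruns = {k: estimates[k] - targets[k]
                for k in targets if estimates.get(k, 0.0) > targets[k]}
    return total_target, total_estimate, overruns


# Illustrative unit-cost allocations for the hybrid car (all figures invented).
targets = {"battery": 3000, "drivetrain": 4500, "body": 5000, "reserve": 500}
estimates = {"battery": 3400, "drivetrain": 4300, "body": 5000, "reserve": 0}

total_t, total_e, over = cost_status(targets, estimates)
print(total_t, total_e, over)  # the reserve absorbs the battery overrun
```

Run periodically against actual costs of early samples as well as estimates, this kind of rollup supplies the "meaningful measurements along the way" that keep design-to-cost promises credible.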
Figure 7-11 shows an effective process for designing to unit cost and evaluating progress in achieving those targets. This particular example is from a low-cost precision guided missile program.
Figure 7-11 Effective Design to Unit Production Cost
It is easy to make promises for something that will not be measured for several years. Many design-to-cost programs do just that. As a result they are not credible, and do not achieve their goals. There must be meaningful measurements along the way that validate those promises or they will never be met. An example of the process we instituted on one major defense program is included as Appendix 1. It is the program management memo that implemented the methodology to achieve unit production costs, and it was successful, because it had all the elements mentioned above.
7.5. Other Development Disciplines
7.5.1. Design Notebooks
Design notebooks should be required for all work done on any project. These provide a record of the reasoning behind the designer's decisions and are available for others to see in the event that person gets disabled or departs the project for one reason or another.
7.5.2. Discipline for Design Review against All Requirements
Design review is a very important check and balance to help assure that there are no oversights in the design process. It is not a dog and pony show, but rather a time when the design should be subjected to tough scrutiny. In this chapter we have talked about requirements that are imposed on the various parts of the project by the integration functions. Collecting all the requirements applicable to each segment or subsystem of the product, as evaluated in the technical performance measurement system, yields the allocated requirements on that segment. This is the checklist that should be used for formal design reviews. Design reviews should be performed for every segment of the product at least twice during its development.
The first review addresses the design concept that has been selected, and examines the requirements check-list against the planned approach, and any feasibility tests, similar product applications, and analysis that support the planned approach. The second review takes place after the completion of development tests and shortly before the design package is to be released to manufacturing and procurement. Here the test results are reviewed and the issue is design confidence. Participants in these reviews include members of the design team, senior representatives from their functional organizations, and representatives of all the overarching disciplines mentioned previously in this chapter. Minutes of the proceedings should be kept and action items recorded for disposition.
7.5.3. Design Configuration Control, Release, and Change Control
Today, most design data resides in a CAD/CAM computer database rather than on drawing paper at the time of its release for procurement or manufacture. While the principles that apply to the discipline required for this data are the same whether on a drawing or a computer screen, it is important that the mechanisms unique to each medium be in place.
As a rule, the design of a product element is fluid and changing rapidly until the completion of prototype hardware testing. Once that point is reached, the design organization is ready to freeze the design. It will therefore be released from the unilateral control of the design function and come under project change control. At that point, the data goes through a rigorous check process by the configuration management function to look for any errors, and assure that parts lists and materiel lists are properly included; it is then "programmed" into the procurement and manufacturing process. The change status is maintained in this central database from then on. The design data package is now programmed for manufacture, and work is authorized with budget and manufacturing instructions.
In the meantime, the same (but unreleased) design data can be used by engineering to authorize the build of design verification test samples identified in the integrated test program plan. It is desirable, but not always possible, to release the design package before these samples are built. In any case, they should be built in the same manner that later articles will be. If nothing else, it is desirable to accept this hardware to the released design data package.
Any changes that are identified as a result of design verification must be programmed through the configuration control function as a formal change, since other hardware has been programmed (if not already built) to the released documentation package. If the change affects form, fit, or function, (interchangeability) then the design must be re-identified to prevent the old design from showing up in the market distribution system. You can see that the subject of configuration control is complex, requiring careful and detailed coordination and management. However, failure to do it well is far less pleasant in the long run.
7.6. Production Disciplines
7.6.1 Product Definition for Planning Purposes
Design release is the formal publication of the design data for manufacture. After this point, any changes must be formally evaluated for impact and effectivity because work is in progress.
7.6.2 Production First Article Master Scheduling
In Section 7.1 we discussed this process in the context of development when the design was still evolving. At this juncture, when we are ready to start production, the product design disclosure is mature and released. It is presumed that the configuration is stable and the only changes that will be introduced are those arising out of manufacturing or processing problems encountered during the "Product Disclosure Demonstration", sometimes called "Proofing".
The end result of this product manufacturing scheduling effort is a negotiated baseline schedule for the creation of product that meets the overall project master schedule or modifies it. Problems usually arise and must be worked out during this process. In large projects, this process can take months and is done segment by segment.
Concurrent engineering with the use of integrated product teams makes this process easier, because the activity representatives have already been involved in every segment of the design. But IPT's are not a substitute for formal programming -- a mistake many people make.
7.6.3 Product Disclosure Demonstration
During the product development phase of a project, the product samples that are built are usually manufactured in a low-volume flexible manufacturing environment. Often, the workers in those facilities are more experienced artisans who can relate problems encountered with the design data in the shop, along with solutions they worked out. This is in fact part of the validation process.
When the product design has been tested and validated, and the processes and tooling for volume production and quality assurance are in place, it is wise to "proof" the production process. Proofing seeks to demonstrate that the rate production processes, tools, and environment yield the same product that was tested to validate the design disclosure data during development. The validated design disclosure, together with the rate production process, form what I call the product disclosure -- that is, all the data required to set up and create products that will meet all the product requirements on a continuing basis.
Proofing is the process by which we can validate the entire product disclosure, before actually building up to full rate. The samples of product are called pilot production, or low-rate initial production in the current jargon of the Department of Defense. Regardless of what it is called, the intent is to make sure that the process works as anticipated, and that what comes out of the process is what was desired. Until this proofing has been completed, we cannot be sure that development is complete. In Chapter 12, on contracting, we discuss incentive contracts that use this pilot production as the measurement sample for performance against objectives.
7.7. Checks and Balances by Hand-off of Responsibility
In applying the overarching disciplines discussed in this chapter, it is useful to think about the way the gestalt works in successful projects. In Chapter 3, we talked about checks and balances that organizational hand-offs can provide. Now, with the picture of overarching disciplines fresh in our minds we can tie the two ideas together. A key theme in this book is building trust, but not blind trust. Checks and balances are an important part of integral management.
7.7.1. Project Systems Engineering Hand-off to Design Engineering
In Section 7.3, we saw that Systems Engineering provides two kinds of integration. One is the definition and allocation of the performance requirements baseline, and the evaluation of the resulting product design against that baseline, discussed in 7.3.1. As such, these people constitute the project manager's technical staff. For that reason I lean toward having this function answer directly to the project manager, or at least be independent of design engineering management.
The second part of the systems engineering responsibility we called the design integration function (section 7.3.2). This function ties all parts of the design together and looks for underlaps in responsibilities and ways to close those gaps. This should be part of the design engineering function but separate from any subsystem management.
7.7.2. Design Hand-off to Testers
At some point in the development process, as described in Section 7.4.2, the designers should reach a point where they are reasonably happy with what they have done and are ready for design verification. It is desirable at this point to step back and let the separate test and evaluation experts put product built to that design through its paces against the requirements. Design engineering should play a supporting role in this activity, but the T and E folks would be responsible for reporting the results against a formal test plan. This design verification is really akin to a preliminary qualification test, but does not really address the production part of the process until later.
7.7.3 Development Engineering Hand-off to Manufacturing
Many businesses use a prototype shop that is part of engineering to build the development specimens to be tested. This could include not only the breadboards, brass boards, and prototypes needed to hone the design, but also the design verification samples. It is my experience that this is a fine approach for everything through the prototypes. But the design verification units should be built, if possible, by the people who will produce the end products, even though it may be in a different production environment.
There are two reasons for this. One is that it keeps the operations people involved in the design process as part of concurrent engineering. The second reason is that it helps guard against the release of a sloppy design data package. If an incomplete or poorly defined design package goes into the programming process discussed in Section 7.1, we are going to soon hear some noise in the system about it. That tends to keep folks on their toes.
7.7.4. Manufacturing Hand-off to Product Assurance
It has long been thought that a separate inspection function guards against manufacturing errors or poor quality ending up in finished products. In recent years, this paradigm has been challenged by the concept that, with appropriate training, the manufacturing operator is capable and motivated to prevent and/or correct any mistakes or deviations from an acceptable product. In this model, the quality assurance function is primarily one of auditing to make sure that the process is working.
I have watched this done both ways, and am frankly undecided on this subject. I have no doubt that the new paradigm thesis is correct, and that manufacturing personnel can be motivated to assure quality. My current feeling is that in the early stages of a project manufacturing phase, the more traditional hand-off is desirable because personnel are learning the processes as production rate ramps up. This is true of QA as well. The self-inspection model seems to work best when things have settled into a stable and mature mode, but I have seen problems on projects during their production start-up phase.
Copyright © 2001 L. David Montague. All rights reserved.