I teach lean and Six Sigma techniques together, and there seem to be a lot of questions about the differences between Process Mapping (common in Six Sigma and quality system techniques) and Value Stream Mapping (VSM) (associated with a lean approach). More questions revolve around “which is better?” and “which should I use in a given situation?” The first question is pretty easy to answer: both tools are completely valid and have their place in your toolbox for documenting processes and, more importantly, identifying ways to improve them. Before I attempt to answer the second question, let’s take a closer look at the two tools.
Process Mapping, as taught under the Six Sigma umbrella today, creates visualizations of processes to understand the interactions of complex processes, both technical and organizational. It is heavily used in Six Sigma and in other contexts, such as ISO documentation and standard operating procedures. These maps identify all the inputs and factors that can have an effect on the process or the problem we may be working on. Think elegant. Process mapping is very effective for complex, interrelated processes that cross over many functions, both physical and virtual. There are many variations in the symbology used, but there are some basic conventions to help with visualization: squares or rectangles for major task steps, diamonds for decision points and other varied shapes to standardize the view of what is happening in the process, all connected by directional lines. If you do a web search on “SIPOC” or “Process Mapping,” you will find many examples and fairly detailed explanations of how they are used. Figure 1 below is an example of a simple process map:
Figure 1: Simple Process Map–Write an APICS Article
The SIPOC tool is very useful because it prompts you to examine the key elements of a business process and break out a process map to add visualization. These key elements are suppliers, inputs, process, outputs, customers, and requirements. Typically, the process element is broken out into a process map as described above. By having all of the important aspects of the overall process on a single page with the process map, it is much easier to understand everything that needs to be considered before we make any changes, which is a good thing. Then, if we are looking at making changes, we can redraw the process map and have an easy way to compare the before and after.
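To make the before-and-after comparison concrete, here is a minimal sketch of a SIPOC captured as a simple record, loosely following the “Write an APICS Article” example in Figure 1. All of the entries are hypothetical illustrations, not content from the article:

```python
# A SIPOC captured as a plain dictionary: one key per element, so a
# before/after comparison is just two of these records side by side.
# Entries are invented for illustration.
sipoc = {
    "Suppliers":    ["Subject-matter expert", "Editor"],
    "Inputs":       ["Topic", "Style guide"],
    "Process":      ["Draft", "Review", "Revise", "Publish"],
    "Outputs":      ["Published article"],
    "Customers":    ["APICS members"],
    "Requirements": ["On time", "Within word count"],
}

# Print the one-page summary view.
for element, items in sipoc.items():
    print(f"{element}: {', '.join(items)}")
```

The "Process" entry is the piece that would be broken out into the full process map described above.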
Process maps are a wonderful way to provide a clear understanding of all the interrelationships in complex processes, such as software coding, call processing in a toll-free operation, product and process design cycles, chemical and pharmaceutical processing and many others in which there are many interactions and cause-and-effect relationships. Six Sigma as a technique heavily uses process mapping and SIPOC together to attack variation in processes and improve quality.
Value Stream Mapping, also known as “Information and Material Flow Mapping,” is credited to Toyota in its best-known forms, and its most popular treatment is in Learning to See by Rother and Shook. I have examined and practiced over 25 methods for VSM, and I believe the permutations are limitless. Value Stream Mapping is a scalable approach to creating a visual representation of what is happening in a process, including a great deal of detailed information about each step. Because of this, VSMs help us to identify at what point changes are needed to improve system performance.
The two major things that VSMs focus on are material and information flow. If all you do in your company is information (e.g., processing insurance claims), you have to imagine the claim as your “material” in your value stream map, just as if it were a widget. By doing this, it is possible to use a VSM approach successfully in a service or office process. Information flows always exist and often are independent of the thing we are mapping in the material flow. Using a process-mapping approach to the information flow may make sense, coupled with material flow data boxes as illustrated in the data box below depicting an insurance claim:
Figure 2: Value Stream Map Data Box–Insurance Claim
In our example, we have a great deal of information on this task as decided by the VSM team at the onset of the work. At the top are the working schedule and staffing for this step in the process, as these will probably not be consistent across the value stream; knowing this is critical for later work-balancing efforts. Next, we have the process step name and the Cycle Time (C/T) to perform the work at this step. Cycle time should not be confused with Takt time, which is the pace at which the thing needs to be produced in order to keep up with demand. Next, we have the Full-Time Equivalent (FTE) calculation that indicates if there is a disconnect between the staffing that is actually in place and the calculated need. An easy way to calculate your FTE requirement is to divide the C/T by the Takt time. For this example, if Takt time is 36 seconds, the formula would be 190 seconds C/T divided by 36 seconds Takt time. This gives us a rounded result of 5.3 FTEs.
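For readers who like to see the arithmetic spelled out, here is a minimal sketch of the FTE calculation, using the numbers from the data-box example. The function names are my own for illustration, not part of any standard:

```python
def required_fte(cycle_time_s, takt_time_s):
    """FTEs needed at a step: Cycle Time (C/T) divided by Takt time."""
    return cycle_time_s / takt_time_s

# From the data-box example: 190 seconds C/T, 36 seconds Takt time.
fte = required_fte(cycle_time_s=190, takt_time_s=36)
print(round(fte, 1))  # 5.3
```

Comparing this 5.3 against the staffing actually posted in the data box is what exposes the disconnect discussed below.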
In examining the next pieces of data, we find additional information, including the rework percentage, uptime percentage (probably for the computer systems we use in this step), overtime hours worked on average per week and our absenteeism percentage. If you study Toyota’s approach to VSM, the data boxes usually have data that lines up with the key Toyota metrics of SQDCPM (Safety, Quality, Delivery, Cost, Productivity, and Morale), the last five of which are directly or indirectly addressed in this data box.
In this data box example, we can already see two major issues: first, there is a disconnect between planned staffing and the calculated FTE; second, 200 hours per week of overtime are being worked, which is likely related. Does this tell us the whole story? Hardly, but this data does give us an idea of what is going on and an easy way to compare this data box to the others in the VSM. By doing this, we can quickly spot the vital process steps that are constraining the overall value stream and see where we need to focus scarce time and resources on process improvement efforts.
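The comparison across data boxes can itself be sketched out. The step names, staffing levels and cycle times below are hypothetical; only the idea of scanning for the largest gap between planned staffing and calculated FTE comes from the discussion above:

```python
TAKT_S = 36  # seconds per claim, set by demand (from the example above)

# Hypothetical data boxes: one record per process step.
steps = [
    {"name": "Intake",     "planned_fte": 4, "cycle_time_s": 120},
    {"name": "Adjudicate", "planned_fte": 4, "cycle_time_s": 190},
    {"name": "Payment",    "planned_fte": 3, "cycle_time_s": 90},
]

# For each step, compute the required FTE (C/T divided by Takt) and the
# gap between what is needed and what is actually staffed.
for s in steps:
    s["required_fte"] = s["cycle_time_s"] / TAKT_S
    s["gap"] = s["required_fte"] - s["planned_fte"]

# The step with the largest staffing shortfall is the likely constraint.
constraint = max(steps, key=lambda s: s["gap"])
print(constraint["name"])
```

In this invented data set, the understaffed 190-second step surfaces immediately, which is exactly the kind of quick triage the side-by-side data boxes enable on paper.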
For the purposes of this article, there is one more critical thing that is typically included in a VSM roll-up: a calculation of the total time the “thing” is in the value stream compared to the amount of time it is actually being worked on. By adding all the delays and inventory waiting time to the working time, and then dividing the working time by that total, you get the “value-add time” percentage for your value stream. Often, this number is very tiny, less than 1%. It is an indicator of system velocity and a useful metric for comparing your results to benchmarks. For example, I believe I heard somewhere that if you can approach 10% value-add time, you would be approaching a world-class level of performance on this measure for most processes.
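The roll-up arithmetic can be sketched in a few lines. The step and delay times below are invented for illustration; only the formula (working time divided by total time in the value stream) comes from the description above:

```python
# Hypothetical insurance-claim value stream.
work_seconds = [190, 120, 240]  # time the claim is actually being worked on
wait_seconds = [8 * 3600,       # overnight queue before intake
                2 * 24 * 3600,  # two days waiting for adjudication
                4 * 3600]       # queue before payment

touch_time = sum(work_seconds)               # total value-add time
lead_time = touch_time + sum(wait_seconds)   # total time in the value stream
value_add_pct = 100 * touch_time / lead_time

print(f"{value_add_pct:.2f}%")  # well under the ~10% world-class benchmark
```

Even with fairly modest invented delays, the ratio lands under 1%, which is consistent with the typical result the article describes.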
Which to use in your situation?
There seems to be an aura of mystery and black magic around deciding which tool should be used in a given situation. I don’t feel it has to be that way. Let me provide a couple of scenarios, and we will see if you can arrive at the right conclusion. Read each of them and form your own opinion about which tool should be applied, based on what you have learned here or from your prior experience.
Scenario #1 involves this situation: You have been charged with improving a process in your business that is underperforming in profitability, and the biggest concerns are the lead time and the space this process is consuming today. The process is fairly well understood, and it seems most of the issues are driven by communications problems and a high incidence of human error.
Scenario #2 involves a process in which the quality of the results is the single biggest concern. The process is very difficult to understand, and the reasons we are having problems are hard to pinpoint. The problem seems highly technical, and the “people element” does not seem to be a big driver. You have to decide how to get started on attacking the problem.
The clues I have provided here include the nature of what we are trying to fix–with one scenario (#1 above) focusing on velocity (lead time) and waste reduction (profitability) in the presence of a process that seems easy to understand. This scenario points at VSM as the likely choice. Scenario #2 specifies that the process is complex, and it is hard to understand all the probable inputs, outputs, and factors that may be impacting it. Another clue is the focus on quality improvement. Process mapping is the probable best choice for scenario #2 because we need to drill down to the most likely sources of variation and build an elegant information-based hypothesis to address them.
That being said, for most people I have worked with and for typical business problems, I have found a variation on VSM far more effective than Process Mapping. I also would encourage you to think even more creatively; there are cases in which process mapping and VSM should happen concurrently on the same piece of paper. For example, there may be a good case to use a process-mapping approach for the IT and information flow portion of the visualization and VSM for the thing being processed, taking advantage of the best attributes of both tools at exactly the same time.
Ron Crabtree, CPIM, CIRM, is a director-at-large for the Greater Detroit Chapter of APICS and president of MetaOps®, a consulting firm that aligns inside realities with outside perceptions. He is an active speaker and instructs professional audiences on lean, Six Sigma and many other best-in-class business process improvement methodologies. He may be reached at (248) 568-6484 or firstname.lastname@example.org.