Frequently we encounter a logistical network with a diamond structure. This happens when a task in the primary sequence provides inputs to two successors, one of which is in the primary sequence while the other is in a component sequence. This is illustrated in the next figure, which shows the primary sequence in red and the component sequence in blue.
The diamond structure doesn’t appear to be a problem, until we calculate the component tolerance and insert it into our model of the network. When we do this, we end up with a significant gap in the primary sequence of the network. This is illustrated in the next figure.
The gap is created by two factors. First, the size of the component tolerance, which is based on the variation associated with the entire component sequence, is large. Second, the precedence dependency between the last task in the component sequence and its predecessor in the primary sequence prevents the component sequence from moving to the left. Consequently, when we include the component tolerances in our model of the project, we end up with what appears to be a most discomforting gap in the primary sequence.
The knee-jerk response of most project managers today is to force the gap to vanish. Unfortunately, the resulting model completely ignores the strong interaction between variation and the parallel structure. As such, the resulting model is overwhelmingly wrong; it grossly underestimates the duration of the project; and it misleads project managers and decision-makers into making commitments that cannot be met by the resources of the enterprise. But the resulting picture is strikingly comforting for those who lack any understanding of variation, even though its accuracy with respect to duration is destroyed.
There is a solution, of course. Rather than eliminating the gap arbitrarily, we can move the earlier part of the component sequence to the left. Specifically, we move early the portion of the component sequence that precedes the problem dependency. By doing so, we uncouple most of the component sequence from the diamond-shaped feature of the network, and we diminish the magnitude of the interaction effect. This is shown in the next figure.
However, this tactic does not allow us to eliminate the gap entirely. If we did so, we too would be ignoring the interaction between variation and the parallel structure. Instead, our robust project design tactic lets us reduce the magnitude of the interaction, which we model with a smaller but finite gap. The smaller gap (shown in the next figure) provides a correction factor that at this time we can only estimate.
How should we estimate the magnitude of the correction factor? At this writing, the most practical way to estimate it is simply by calculating a component tolerance for the parallel segments that are involved directly in the diamond-shaped structure. This gives us smaller component tolerances, which in turn create a smaller gap. But we can do this only in cases where we can uncouple the earlier segment of the longer component sequence.
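To make this concrete, here is a minimal Monte Carlo sketch, with several assumptions on my part: task durations are taken as log-normal with a mean of 10 days and a standard deviation of 5 days (the values from our small example project), the tolerance is defined as the 95th-percentile simulated duration minus the deterministic duration, and the 2-task diamond segment is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 200_000

# Assumed task distribution: log-normal, mean 10 days, sd 5 days,
# converted to numpy's log-space (mu, sigma) parameters.
sigma = np.sqrt(np.log(1 + (5 / 10) ** 2))
mu = np.log(10) - sigma ** 2 / 2

def tolerance(n_tasks, confidence=0.95):
    """Component tolerance for an n-task sequence: the 95th-percentile
    simulated duration minus the deterministic (mean-based) duration."""
    durations = rng.lognormal(mu, sigma, size=(N, n_tasks)).sum(axis=1)
    return np.percentile(durations, 100 * confidence) - 10.0 * n_tasks

tol_full = tolerance(4)     # full 4-task component sequence
tol_segment = tolerance(2)  # only the 2-task segment inside the diamond

print(f"tolerance, full component sequence: {tol_full:.0f} days")
print(f"tolerance, diamond segment only:    {tol_segment:.0f} days")
```

The segment-level tolerance comes out several days smaller than the full-sequence tolerance, which is exactly why restricting the calculation to the parallel segments of the diamond yields a smaller gap.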
[So that others also might subscribe to The Project Management Soap Box, please share the following link with friends and colleagues. Subscribe!]
Thursday, October 21, 2004
Tuesday, October 19, 2004
As shown previously [Variation And The Parallel Structure], the interaction between variation and the parallel structure of our project models is quite strong. The prediction error caused by ignoring this interaction is significant. We have only two courses of action available to us, if we want useful predictive models. First, we can simply estimate the magnitude of the interaction, thereby including its effect in our predictive models. Second, we can take measures to diminish the magnitude of the interaction effect.
To diminish the magnitude of the interaction effect, component sequences can and should be started earlier. Doing so greatly increases our level of confidence that the outputs of the component sequences will be available for the subsequent assembly task, rather than risking a delay of the assembly task. The degree to which we move early each component sequence is called the component tolerance. This approach is illustrated with our small project, in the next figure.
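One way to estimate a component tolerance is with a small Monte Carlo sketch: simulate the component sequence's duration and take the gap between a high-confidence quantile and the deterministic duration. This quantile-based definition is one plausible formalization, not the only one, and the task durations here are assumed log-normal with a mean of 10 days and a standard deviation of 5 days, as in our small project.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Assumed task distribution: log-normal, mean 10 days, sd 5 days,
# converted to numpy's log-space (mu, sigma) parameters.
sigma = np.sqrt(np.log(1 + (5 / 10) ** 2))
mu = np.log(10) - sigma ** 2 / 2

# A 4-task component sequence: deterministic duration is 4 x 10 = 40 days.
component = rng.lognormal(mu, sigma, size=(N, 4)).sum(axis=1)

deterministic = 40.0
confidence = 0.95  # desired probability that the output is ready on time
component_tolerance = np.percentile(component, 100 * confidence) - deterministic

print(f"component tolerance: {component_tolerance:.0f} days")
```

Starting the component sequence earlier by this amount gives roughly a 95% chance that its output is ready when the assembly task needs it.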
The first task in each path of the project is an entry task. These, necessarily, are scheduled. Component tolerances tell us how to schedule the start of each entry task. However, this method of scheduling entry tasks is useful if and only if we are working exclusively with a single-project system of resources, i.e. a team of resources that is fully dedicated to a single project and whose team members have no significant commitments to other projects. In a multiproject environment, this method is problematic, as it causes us to schedule the starts of entry tasks as late as possible (ALAP), taking into account the component tolerance. For reasons that are beyond the scope of this writing, the ALAP approach creates problems within a multiproject environment.
Finally, there are other situations where we cannot apply this simple technique directly, even when we are working with a single-project model. We discuss one such situation in the next chapter.
[So that others also might subscribe to Shareholder Value, please share the following link with friends, colleagues, and your boss. Subscribe!]
Sunday, October 17, 2004
We begin with a simple example of a project model with a parallel structure. This is shown in the next figure. For the sake of simplicity, every task in the model has an expected duration of 10 days and a standard deviation of 5 days. The primary (longest) sequence consists of 8 tasks, the last of which is an assembly task. The assembly task is shown in light yellow. The model also includes two component sequences, the outputs of which are required at the start of the assembly task. While this model is indeed very simple, it is sufficient to demonstrate the strength of the interaction between variation and concurrent sequences, which create the parallel structure.
The next figure shows the partial numerical results of a Monte Carlo analysis. Section (a) of the figure shows the results of the first seven tasks of the primary sequence alone. Notice that the mean duration is 70 days. The variation about that 70-day mean is significant. Sections (b) and (c) of the figure show the partial results for the two component sequences. Notice that each of the component sequences is modeled as starting on day 30 of the Monte Carlo simulation. Each of the component sequences has an average duration of 40 days and a degree of variation that rivals that of the longer sequence. The mean value for the duration, from the start of the project to the completion of the two component sequences, is also 70 days.
Now let’s ask the difficult question. The partial numerical results show that the mean duration from the start of the project to the completion of the 7 task sequence is 70 days. The partial results also show that the mean duration from the start of the project to the completion of the two component sequences is also 70 days. What might most people expect as the mean duration from the start of the project to the start of task no. 8, the assembly task?
Given the current, extremely widespread practice of selecting a commitment duration that matches what looks like the last scheduled day of work, we have to conclude that most executives, who rely on the models crafted by their project managers (as well as the project managers), would expect day 70 to coincide with the mean start time of the assembly task. Most executives and their project managers would be wrong.
To understand just how wrong, consider an even simpler project, which consists of just two tasks in parallel. The project is shown in the next figure. Each task is modeled with a Log-Normal distribution, with a mean duration of 30 days and a standard deviation of 14 days. The mean duration is indicated by the thick blue lines over the histograms. These coincide with the ends of the task bars.
For this even smaller project, the mean duration is certainly not 30 days, as the current widespread practice suggests. Since the two 30 day tasks are in parallel, the project isn’t finished until both tasks are finished. Consequently, the longer of the two tasks always determines the duration of the little project, and the parallel structure acts as a highest-only-pass filter. The result is a two-factor interaction between variation and the parallel structure of the model. The strength of this two-factor interaction is apparent in the next figure, which shows the histogram of project duration, in yellow.
The yellow histogram is the statistical equivalent of the two tasks in parallel. The mean duration indicated by the statistically equivalent representation is 38 days. This corresponds to the mean start time of the subsequent assembly task.
The interaction between variation and the parallel task structure appears to add 8 days to the mean duration of the little project. The mean start of the subsequent assembly task appears to be delayed by 8 days. Of course, in reality there is no delay. It only appears that there is a delay to us, because our expectations have been shaped by an incorrect model, a model that ignores the interaction effect entirely. The expectations of decision-makers are continually shaped by similarly incorrect models of their projects.
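A minimal Monte Carlo sketch reproduces the effect, assuming two independent log-normal durations with a mean of 30 days and a standard deviation of 14 days, as above. The mean of the maximum comes out close to 38 days, even though each task averages 30.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Log-normal parameters (log-space mu, sigma) for mean 30 days, sd 14 days.
sigma = np.sqrt(np.log(1 + (14 / 30) ** 2))
mu = np.log(30) - sigma ** 2 / 2

a = rng.lognormal(mu, sigma, N)  # durations of task A
b = rng.lognormal(mu, sigma, N)  # durations of task B
finish = np.maximum(a, b)        # the project ends only when BOTH are done

print(f"mean duration of each task: {a.mean():.1f} days")
print(f"mean project duration:      {finish.mean():.1f} days")
```

The `np.maximum` line is the highest-only-pass filter in code form: each simulated run keeps only the longer of the two task durations.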
Now let’s look at a slightly more realistic model. This time we use a model that has 10 parallel tasks, all with a mean duration of 30 days, just like the tasks in the previous case. The results for the 10 task model are shown below. The upper histogram shows the mean duration and the degree of variation associated with each of the ten tasks. The yellow histogram shows the statistically equivalent representation of the entire project.
The mean duration for the entire project is 56 days, nearly twice the 30 day commitment duration that the current, widespread practice causes decision-makers to specify. In fact, the probability of having the entire project completed within a 30 day interval is less than 1%. Further, the mean duration for the entire project comes with a confidence level of less than 60%, entirely too low for any customer whose millions of dollars may be at risk. To achieve a comfortably high confidence level of, say, 95%, we would need to select a commitment duration of 83 days, nearly three times longer than the duration suggested by the current practice.
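Extending the same sketch to ten parallel tasks (same assumed log-normal distribution) shows where these numbers come from. The exact 95th-percentile figure depends on the distribution details, but it lands in the 80s.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Log-normal parameters (log-space mu, sigma) for mean 30 days, sd 14 days.
sigma = np.sqrt(np.log(1 + (14 / 30) ** 2))
mu = np.log(30) - sigma ** 2 / 2

# Ten tasks in parallel: the project takes as long as the slowest task.
project = rng.lognormal(mu, sigma, size=(N, 10)).max(axis=1)

print(f"mean project duration:     {project.mean():.1f} days")
print(f"P(done within 30 days):    {(project <= 30).mean():.2%}")
print(f"95%-confidence commitment: {np.percentile(project, 95):.0f} days")
```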
As a last step, take a look at how the strength of the interaction between variation and the parallel structure increases as the number of parallel tasks increases. The next figure shows a parametric curve of the strength of the interaction as a function of the number of tasks in parallel. Notice that the greatest contribution to the mean duration occurs when going from a single task to two tasks in parallel; the increase in the mean duration is 8 days in this case. Although the subsequent contributions are not as large as that initial contribution, they accumulate: every additional parallel task adds to the mean duration of the project.
Unfortunately, even when the project managers of today might want to correct the models of their projects, by taking the significant effects of variation into account, they find it exceedingly difficult, because the tools available to project managers make absolutely no provision for this. At this writing, the most widely distributed project management tools provide no means of calculating and including a project tolerance in the models of projects. Consequently, today’s project managers and the executives to whom they report would see only the ten tasks in parallel. They would select a commitment duration of 30 days, given the boneheaded misrepresentation provided by today’s tools and today's widespread practice.
Now, let’s return to our original example. Recall: we have a primary sequence of 8 tasks and two component sequences in parallel with the primary sequence. We are striving to estimate the mean duration from the start of the project to the start of the assembly task. Of course, we’re interested in the mean duration to the project’s completion as well.
The first histogram in the next figure shows the true value of the mean duration to the start of the assembly task. Notice that the true mean is 80 days, not the 70-day interval that the deterministic models of today would have us believe. The mean duration to the project’s completion is 90 days.
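These figures can be checked with a Monte Carlo sketch of the full example. The distribution is an assumption on my part (the text specifies only the mean and standard deviation of each task): every task is taken as log-normal with a mean of 10 days and a standard deviation of 5 days, and the two 4-task component sequences start on day 30, as in the partial results shown earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Assumed task distribution: log-normal, mean 10 days, sd 5 days,
# converted to numpy's log-space (mu, sigma) parameters.
sigma = np.sqrt(np.log(1 + (5 / 10) ** 2))
mu = np.log(10) - sigma ** 2 / 2

def sequence(n_tasks):
    """Total duration of n_tasks tasks in series, for N simulation runs."""
    return rng.lognormal(mu, sigma, size=(N, n_tasks)).sum(axis=1)

primary7 = sequence(7)    # the 7 tasks preceding the assembly task
comp1 = 30 + sequence(4)  # component sequence 1, started on day 30
comp2 = 30 + sequence(4)  # component sequence 2, started on day 30

# Assembly starts only when the primary sequence AND both components are done.
start_assembly = np.maximum(primary7, np.maximum(comp1, comp2))
completion = start_assembly + rng.lognormal(mu, sigma, N)  # add the assembly task

print(f"mean start of assembly:  {start_assembly.mean():.0f} days")
print(f"mean project completion: {completion.mean():.0f} days")
print(f"P(done within 80 days):  {(completion <= 80).mean():.0%}")
print(f"95%-confidence duration: {np.percentile(completion, 95):.0f} days")
```

Under these assumptions the mean start of the assembly task comes out near 80 days and the mean completion near 90 days, and the confidence figures for an 80-day commitment can be read directly from the simulated distribution.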
Now let’s explore the implications of what we’ve just discussed. Specifically, let’s see the degree to which your organizations and your careers are exposed to risk by your own deterministic models of projects. The next figure compares the commitment duration based on the deterministic model of our little project with the probabilistic results. Virtually all your peers would propose a commitment duration of only 80 days for this project. By doing so, they would expose their superiors, their customers, and their careers to an inordinate level of risk, as the lower portion of the figure shows clearly.
The 80-day commitment based on the deterministic model comes with an extremely low confidence level. The probability of completing this little project within an 80-day interval is less than 20%; the corresponding risk to customers and to careers is greater than 80%. In fact, the deterministic 80-day duration equals only the mean duration to the start of the assembly task. Further, the mean duration of the project comes with an unacceptably low confidence level of approximately 50%. By contrast, a committed duration that brings with it a comfortably high confidence level (from the perspective of the customer of the project) extends beyond 110 days.
Next, I would like you to consider the following. An error of this magnitude is created by a deterministic model of a very small project. This simple project has but two component sequences in parallel with the primary sequence of tasks. A real project doesn’t have just two or three parallel sequences. It has a dozen or more. Therefore, when we deal with real projects, the effects of variation are far more pronounced than this simple illustration suggests. The deterministic models of real projects are overwhelmingly wrong, all because they ignore the effects of variation. The risks to which project managers, executives, their businesses, and their customers are exposed are correspondingly large.
Clearly, the effects of variation must not be ignored. Instead, variation must be understood and managed, so that its adverse effects might be diminished. In the subsequent chapters I’ll show you techniques for limiting the adverse effects of variation.
In the meantime, please do me a favor. No, two favors! First, let me know that you're reading this. Since I started this blog I've received almost no feedback, despite the number of subscribers. Second, please help me to recruit more subscribers. One day I hope to publish this as a book. Feedback from interested readers would be most useful. Thank you!