Building Analytic Components, Part 2

In the previous article we discussed the big-picture process of developing analytic components and introduced the example we will build on moving forward, drawn from an examination of the data available to review and manage IT portfolios within the Federal government. The example is enormously relevant because of the challenge of developing useful reporting from the existing data, which is focused on helping citizens understand the execution of government spending but may not be as useful to government managers trying to understand their own portfolios. This mirrors the situation most developers of analytic components will face within their own organizations. We are often forced to make a first iteration that “proves the point” using data that was developed for some other purpose or that isn’t a perfect match for our analytic goal. Often, in the pursuit of better organizational performance, we are trying to glue different types of data together to provide some previously uncovered insight that drives improvement. Almost inevitably our effort will run into data issues, because we are developing something for a specific purpose that went unmeasured until we discovered it. Maybe we need additional financial data from the accounting department on technology purchasing to marry to performance information from the IT operations organization. The purchasing information as it stands may not be granular enough for our final purposes, but if we build our mockup and business case well enough, we may be able to persuade all parties that gathering that information would bring real value to our decision making.
In our current example, the challenge will be to develop something more meaningful for government decision makers based on the data that is currently available, as well as to develop a “wish list” of information we will need to better facilitate the decision-making process. Some of this information is completely aspirational, and some may be as simple as querying internal systems and reports that are not publicly available but that we know are tracked. For example, we know that project execution data within individual investments is tracked in detail within the OMB 300b; while this information is not publicly available and therefore not going to make our initial live analytic components, we can assume it is available for future reporting should our IT investment manager require it. To the degree possible, I try to strike a balance between showing enough now to make the data and benefits real for a client and not constraining the total possible value proposition by ruling out data that is not immediately available. In this case we are working to enable our government IT investment manager to better manage the portfolio and improve organizational performance. To keep the size of this article reasonable, we are going to constrain our hypothetical IT investment manager to only three high-level goals:
  1. Look at their IT portfolio and identify investments that may be at risk or that may require intervention. 
  2. Look for opportunities for collaboration and attempt to identify redundancy in the portfolio. 
  3. Identify data quality issues, process issues, or other factors that may be impacting the performance of the portfolio.

Identifying Investments at Risk 

Our IT investment manager may first want to look at the portfolio from the standpoint of understanding which investments might be at risk. To do this we need to understand the factors that may potentially impact investment performance. The following seem particularly useful and are available from the IT spending dashboard (a sketch of how these factors might be captured as a record follows the list):

  • Variance from Schedule: Are we ahead or behind schedule? I would posit that too great a variance in either direction is probably bad. Even if you are ahead of schedule, it usually says something about your planning if you have too much variance. On the IT spending dashboard, variance greater than 30% is red, 10-30% is yellow, and less than 10% is green.

  • Variance from Cost: See above for the most part; variance of any kind isn’t great, but obviously most of the time you are less worried about being under budget than over. 
  • CIO Rating: This is a best-judgment measure that is supposed to roll together quite a few things into one simple rating, including: risk management, requirements management, contractor oversight, historical performance, human capital, and others. 
  • # of Baselines: A baseline is the standard against which the performance of the work (in terms of cost and schedule) is measured. Understanding the baseline history places the variance from cost and schedule in context and provides insight into the planning and management of the IT investments. 
  • Size of the Investment: This is useful for prioritizing deviation from cost and schedule. In terms of identifying where to exert management influence first, the investment manager may want to understand the dollars involved. 
  • Performance Metrics: The specific metrics being used to measure investment performance as well as detailed attainment data. 
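
To make these factors concrete, here is a minimal sketch of how they might be captured as a record. The field names are illustrative only, not the IT Dashboard’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Investment:
    """Hypothetical record for the risk factors available from the IT
    spending dashboard. Field names are illustrative assumptions."""
    name: str
    schedule_variance_pct: float   # % ahead (+) or behind (-) schedule
    cost_variance_pct: float       # % under (+) or over (-) planned cost
    cio_rating: int                # 1 (high risk) through 5 (low risk)
    baseline_count: int            # how many times the investment was rebaselined
    total_dollars: float           # investment size, used to prioritize attention
    performance_metrics: dict = field(default_factory=dict)  # metric -> attainment
```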

Things that might be helpful but which aren’t available from the dashboard:

  • Criticality of the investment: Every investment is important, but some are absolutely critical to the organizational mission. The IT spending dashboard lets us see what business function is being supported (BRM) and what services are being delivered (SRM), but doesn’t help us get to a value judgment on the criticality of the investment to that agency or its relationships with or influence on other investments. 
  • Internal/External Users: As above, there is precious little information that would help me determine how big the impact to organizational performance would be if there were problems with the investment. Understanding the size and scope of the user community may help us understand and evaluate the potential impact of investment risk. 
  • Organizational Desire for the Investment: I am trying to draw a line between criticality and desirability here. Criticality I am defining as the investment’s relationship to achieving the mission. Desirability may be something which takes performance from satisfying a requirement to exceeding a requirement.

As part of our stakeholder analysis we put ourselves in the shoes of the IT Investment Portfolio Manager and came up with the following key criteria:

[Figure: Important Investment Factors]

Purpose of the Analytic Component 

One of the hardest things to do when developing analytic components is writing the statement that encapsulates the component’s purpose. Our first analytic component for the investment portfolio manager (IPM) will be focused on guiding the IPM’s eye to the investments that most require attention. This may be harder than it seems. A constant temptation in developing high-level dashboards is to overcomplicate, or to try to serve a broader audience than is really intended. Our dashboard is intended for the person in charge of managing the entire IT investment portfolio. As such, some detail that will be available from more analyst-oriented dashboards will be abstracted or otherwise wrapped into the presentation layer. The design tension here, between giving enough detail to support decision making and presenting a very complex information set in an accessible manner, is going to be difficult to resolve. Throughout the development of most analytic components we will identify measures and views that are very relevant to other stakeholders. In this example we will find a great deal of information and views that resonate with individual investment managers, project portfolio managers, project managers, and analysts. Staying laser-focused on the objective of our high-level stakeholder will be critical to the eventual success of the dashboard. We may also need to build some of the lower-level analytics in order to understand the components of the high-level analytic well enough to grasp their interplay and relationships.
Stay focused on the right stakeholder group

Finding Value in the As-Is Analysis

I mentioned in our first article that an early step in building analytic components is to start with what is currently in place. For our IPM this means looking at the existing IT Spending dashboard for ideas. Looking at the information currently available, I think it is obvious that we need to address issues with investments that are ahead of or behind schedule/cost, or that have been flagged by the CIO. These are the “Big Three” on the existing IT Spending Dashboard and are represented as shown in the following:

IT Dashboard Big Three

Project-Level Cost and Schedule Variance Rating | Evaluation (by agency CIO)                | Color
≥ 30%                                           | 1 (High Risk) or 2 (Moderately High Risk) | Red
≥ 10% and < 30%                                 | 3 (Medium Risk)                           | Yellow
< 10%                                           | 4 (Moderately Low Risk) or 5 (Low Risk)   | Green
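
The variance half of that mapping is simple to express in code. The thresholds below come straight from the table; the function name and the absolute-value treatment of two-sided variance are my own choices:

```python
def variance_color(variance_pct: float) -> str:
    """Map a project-level cost or schedule variance (in percent) to the
    dashboard's color buckets. Variance in either direction counts, so we
    bucket on the absolute value."""
    v = abs(variance_pct)
    if v >= 30:
        return "Red"
    elif v >= 10:
        return "Yellow"
    return "Green"

# Example: a project 12% behind schedule lands in the Yellow bucket.
assert variance_color(-12.0) == "Yellow"
```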

The dashboard does provide the useful feature of letting you view the variance by investment count or by dollars, but it does not necessarily help me group or grade investments to ensure I give my attention to the ones most in need first. Because of this, I think one of the first things our investment manager needs is a way to combine these measures to get a reasonable sense of their impact on investment performance.

A problem also appears when we dig deeper into the data, because cost variance and schedule variance are driven at the project level, which lives below the investment level. This creates a reporting problem, because any specific project-level schedule or cost variance may have a disproportionate impact on the overall execution of the investment. A critical path delay in a specific project may unleash a ripple effect of consequences that greatly exceeds the specific project impact; the reverse is true as well. We can certainly handle this at the detailed analysis level by providing a project analysis dashboard depicting cost and schedule variance across the project portfolio. Even here, however, some real thought will be required to surface critical path projects that are at risk. For a large department with several hundred projects and billions of dollars in the portfolio, we need an analytic component that provides a holistic view of our investments. The spread and number of the projects present a problem as we look at the project portfolio for clues about our investments. If we use a four-square chart showing project cost variance on the y axis and project schedule variance on the x axis, we can clearly identify projects that are in trouble, but it is hard to visually understand how this relates to the investments, because there is no clear way to group the projects and show a consolidated measurement of what this means for the containing investment. It would be much cleaner if the investment portfolio manager could look at the simpler investment-only view.

[Figure: Cost Variance and Schedule Variance Grid, shown for both the Investment Portfolio and the Project Portfolio]
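
For reference, the four-square project view described above is just a scatter plot with quadrant lines. A minimal sketch using matplotlib, with invented sample data:

```python
import matplotlib.pyplot as plt

# Invented sample data: one point per project.
schedule_var = [-35, -12, 5, 18, 40]   # % variance from schedule (x axis)
cost_var     = [20, -8, 3, -25, 45]    # % variance from cost (y axis)

fig, ax = plt.subplots()
ax.scatter(schedule_var, cost_var)
ax.axhline(0, color="gray", linewidth=1)  # quadrant lines through the origin
ax.axvline(0, color="gray", linewidth=1)
ax.set_xlabel("Schedule variance (%)")
ax.set_ylabel("Cost variance (%)")
ax.set_title("Project portfolio: cost vs. schedule variance")
plt.show()
```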

It is immediately apparent that the investment chart does not present the whole picture and that the project portfolio chart does not necessarily help the investment manager. To help our investment manager, we are going to have to come up with some way of characterizing the project-level data carried within each investment that will help identify trouble spots in the portfolio. It should be understood up front that this analysis will not replace open communication with individual investment managers to ensure that hidden threats do not turn into major problems. The purpose is to help the portfolio manager decide whom to speak with first and to direct attention to the areas of the portfolio where it is most needed. The following are some of the many strategies for enabling the IPM to leverage the project-level characteristics contained within the project portfolio attached to each investment (a code sketch of the last strategy follows the list):

  • Adding total variance from estimated cost and schedule: Characterizing the investment by total variance gives us a sense of how far off the plan we are, but not necessarily how far off track we are. For example, if we are way ahead of schedule and way behind on cost we may be poor planners, but we may not be in big trouble. 
  • Average into a score: Average all of the projects within an investment, score them on that basis, and then characterize the investment in the same manner. This is essentially the strategy used under the investment-wide scoring process in place until last year. The flaw, of course, is that it does not represent the cost and schedule risk in proportion to the dollars in each project. 
  • Percentage of dollars at risk: Create a scoring system or other mechanism to categorize CV and SV into buckets (R, Y, G). Essentially, size the projects by dollars, then decide what percentage of the investment’s total dollars needs to be “at risk” to constitute an overall rating (R, Y, G) for the whole investment. 
  • Combined with Rules: Create a scoring system or other mechanism to categorize CV and SV into the (R, Y, G) buckets. If any project is Red, the investment goes Red. Outside of Red, decide whether the investment is Green or Yellow overall based on a simple majority of the dollars at risk. 
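
Here is a minimal sketch of that last strategy in code. The (color, dollars) tuple layout and the simple-majority threshold are my assumptions:

```python
def investment_color(projects):
    """Roll project (color, dollars) pairs up to one investment rating.
    Rule 1: any Red project makes the whole investment Red.
    Rule 2: otherwise, if a simple majority of dollars sits in Yellow
    projects, the investment is Yellow; else it is Green."""
    if any(color == "Red" for color, _ in projects):
        return "Red"
    total = sum(dollars for _, dollars in projects)
    at_risk = sum(dollars for color, dollars in projects if color == "Yellow")
    return "Yellow" if total and at_risk / total > 0.5 else "Green"

# Example: one small Red project flips a large investment to Red.
print(investment_color([("Green", 9_000_000), ("Red", 250_000)]))  # Red
```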

Depending on the organization, any of the above could work. One nice side effect of a strategy that yields a real score based on project risk per investment is that it would allow us to rank our investments by risk. This might be very helpful if we were to attempt to develop a scorecard and ranking system that incorporated all of the factors above. However, that exercise will have to wait for another day. We want our portfolio manager to be able to view the IT investment portfolio and intuitively grasp which investments need the most attention. To do so, we have narrowed our focus to a few critical categories of information and developed the following rough design:

[Figure: Rough IT Investment Portfolio Design]

We are going to use a treemap, which is useful for “displaying hierarchical (tree-structured) data as a set of nested rectangles. Each branch of the tree is given a rectangle, which is then tiled with smaller rectangles representing sub-branches. A leaf node’s rectangle has an area proportional to a specified dimension on the data. Often the leaf nodes are colored to show a separate dimension of the data.” (Wikipedia, http://en.wikipedia.org/wiki/Treemapping). 
By default, our treemap will depict the following (a code sketch follows the list): 
  • Dollars at Risk: Each investment’s dollar value will be represented by rectangle size, with its associated projects nested within it, also sized by dollars. 
  • CIO Rating: Each investment box will be colored Green, Yellow, or Red, based on CIO Rating. For an explanation of CIO rating, see http://www.itdashboard.gov/faq#faq7. 
  • Project Risk: Each project will be colored Green, Yellow, or Red, based on Cost Performance Index range. 
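
Here is a minimal sketch of that treemap using Plotly Express. The sample data, column names, and color choices are invented for illustration, not drawn from the IT Dashboard:

```python
import pandas as pd
import plotly.express as px

# Invented sample data: each row is a project nested inside an investment.
df = pd.DataFrame({
    "investment": ["Case Mgmt", "Case Mgmt", "Infrastructure"],
    "project":    ["Intake Portal", "Records Migration", "Network Refresh"],
    "dollars":    [4_000_000, 2_500_000, 9_000_000],
    "risk_color": ["Green", "Red", "Yellow"],  # e.g., from CPI buckets
})

# Rectangle size = dollars; nesting = investment -> project; color = risk.
fig = px.treemap(
    df,
    path=["investment", "project"],
    values="dollars",
    color="risk_color",
    color_discrete_map={"Green": "green", "Yellow": "gold", "Red": "red"},
)
fig.show()
```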

CPI essentially indicates how much of the plan you have accomplished for the dollars expended against it. One of the most difficult decisions in developing this design was settling on a method for characterizing and visualizing project-related risk within the portfolio. We chose to use the cost performance index (CPI), which Wikipedia describes as follows: 
“CPI greater than 1 is good (under budget): 
< 1 means that the cost of completing the work is higher than planned (bad); 
= 1 means that the cost of completing the work is right on plan (good); 
> 1 means that the cost of completing the work is less than planned (good or sometimes bad). 
Having a CPI that is very high (in some cases, very high is only 1.2) may mean that the plan was too conservative, and thus a very high number may in fact not be good, as the CPI is being measured against a poor baseline. Management or the customer may be upset with the planners as an overly conservative baseline ties up available funds for other purposes, and the baseline is also used for manpower planning.” (Wikipedia, http://en.wikipedia.org/wiki/Earned_value_management) 
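
For concreteness, CPI is earned value (the budgeted cost of work performed) divided by actual cost. A minimal sketch of the calculation and one possible Red/Yellow/Green bucketing follows; the cut points are my assumptions, not an official standard:

```python
def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost Performance Index: budgeted cost of work performed (earned
    value) divided by the actual cost of that work."""
    return earned_value / actual_cost

def cpi_color(index: float) -> str:
    """Bucket CPI into the treemap's project colors. Cut points are
    illustrative assumptions; note that a very high CPI can signal a
    padded baseline rather than good performance."""
    if index < 0.9:
        return "Red"     # work is costing notably more than planned
    if index < 0.95 or index > 1.2:
        return "Yellow"  # off plan in either direction deserves a look
    return "Green"

print(cpi_color(cpi(earned_value=4_500_000, actual_cost=5_000_000)))  # 0.9 -> Yellow
```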
The downside is that CPI is geared more toward financial performance than toward specific performance against the plan, so our investment portfolio manager will not easily see important schedule lags, for example; but we felt it was a good compromise. No dashboard will replace good communication, and critical path issues, including schedule lag or other issues bubbling out of the project portfolio, will still require excellent communication and further analysis. I’ll post the finished dashboard to the site next week. As always, I appreciate your feedback and look forward to continuing the discussion. Are you interested in learning more about the visual design of informational elements? Read “The Visual Display of Quantitative Information” to get the journey started.

Thanks as always for reading my blog, I hope you will join the conversation by commenting on this post.

If you liked this post, please consider subscribing to this blog and following me on Twitter @jmillsapps. I regularly give talks via webinar and speak at events and other engagements. If you are interested in finding out where to see me next, please look at the events page on this blog. If you would be interested in having me speak at your event, please contact me at events@joshmillsapps.com.

If you are interested in consulting services please go to MB&A Online to learn more.
