Troux: The answer to the new budget reality


I had the very good fortune to be invited to a happy hour with great food and good drinks hosted by Troux after the Enterprise Architecture (EA) conference I attended yesterday. Over the last few years, during which it seems as if every quarter has been their biggest quarter yet, they have also consistently been rated among the leaders in EA by analysts and publications such as Gartner. I really believe that this is going to be their year in the Federal space and that everything will start to click, for several reasons.

The first reason is the idea of Troux On-Demand. Every customer acquiring these types of decision support tools faces the big capital investment needed up front, the resources required to get things going, and the sense that it will take a long while to get to value. These can all be strong deterrents. What has changed this perception in the Federal space is Troux On-Demand, a cloud service within the Amazon GovCloud that can essentially be turned on and, combined with the accelerator programs that Troux has developed, allows organizations to get to value quickly, in 90 or 120 days.

The combination of those things creates a really unique package for Federal customers: it allows organizations to come in, identify areas where they can save money, root out redundancy, and do all the other things organizations are going to have to do to meet budget requirements. The budget climate has reached the point where there is simply no way to continue doing things the way they have always been done. A lot of organizations we're talking to are just looking at what they've got for funding and trying to figure out how they are going to continue to deliver on the mission.

At last night's event there was a real sense that the combination of need and the evolution of the technology is going to create an incredible opportunity in the Federal space for folks who have cracked the nut on how to deliver answers quickly, relatively painlessly, and at a reasonable cost. By addressing the problem as a service, you reduce the huge front-end expenditure that comes with some of these tools, and you can see pretty rapid adoption even in the Federal space, which is sometimes a little more conservative. So I'll be interested to see how it all plays out. We've developed some Federal-specific offerings around the idea of coming in and understanding the portfolio quickly, so we're very excited about what they have put together. I think a lot of people will see this as how they are going to continue to meet the mission given the extraordinary challenges we are facing today.

 

Thanks as always for reading my blog. I hope you will join the conversation by commenting on this post.

If you liked this post, please consider subscribing to this blog and following me on Twitter @jmillsapps. I regularly give talks via webinar and speak at events and other engagements. If you are interested in finding out where to see me next, please take a look at the events page on this blog. If you are interested in having me speak at your event, please contact me at events@joshmillsapps.com.

If you are interested in consulting services please go to MB&A Online to learn more.

Building Analytic Components, Part 2

In the previous article we discussed the big-picture process of developing analytic components and introduced the example we will build on moving forward, which is drawn from an examination of the data available to review and manage IT portfolios within the Federal government. The example is enormously relevant because of the challenge of developing useful reporting based on the existing data, which is focused on helping citizens understand the execution of government spending but may not be as useful for government folks in understanding their own portfolios. This is very similar to the issues most developers of analytic components will face within their own organizations. We are often forced to make a first iteration that “proves the point” with data that was developed for some other purpose or that isn't a perfect match for our analytic purpose. Often, in the pursuit of better organizational performance, we are trying to glue different types of data together to provide some previously uncovered insight that drives improvement. Almost inevitably our effort will run into data issues, because we are developing things for a specific purpose that had gone unmeasured until we discovered it. Maybe we need additional financial data from the accounting department on purchasing of technology to marry to performance information from the IT Operations organization. The purchasing information as it stands may not be granular enough for our final purposes, but if we build our mockup and business case well enough, we may be able to persuade all parties that gathering that information would bring real value to our decision making.
In our current example, the challenge will be to develop something more meaningful for government decision makers based on the data that is currently available, as well as to develop a “wish list” of information we will need to better facilitate the decision-making process. Some of this information is completely aspirational, and some may be as simple as querying internal systems and reports that are not publicly available but that we know are tracked. For example, we know that project execution data within individual investments is tracked in detail within the OMB 300b; while this information is not publicly available, and therefore not going to make our initial live analytic components, we can assume it is available for future reporting should our IT investment manager require it. To the degree possible, I try to find the balance between what I can show now to make the data and benefits real for a client, and not constraining the total possible value proposition too much by ruling out data that is not immediately available. In this case we are working to enable our government IT investment manager to better manage the portfolio and improve organizational performance. To keep the size of this article reasonable, we are going to constrain our hypothetical IT investment manager to only three high-level goals:
  1. Look at their IT portfolio and identify investments that may be at risk or that may require intervention. 
  2. Look for opportunities for collaboration and attempt to identify redundancy in the portfolio. 
  3. Identify data quality issues, process issues, or other factors that may be impacting the performance of the portfolio.

Identifying Investments at Risk 

Our IT investment manager may first want to look at the portfolio from the standpoint of understanding which investments might be at risk. To do this we need to understand the factors that may potentially impact investment performance. The following seem particularly useful and are available from the IT spending dashboard:

  • Variance from Schedule: Are we ahead of or behind schedule? I would posit that too great a variance in either direction is probably bad. Even if you are ahead of schedule, too much variance usually says something about your planning. On the IT spending dashboard, variance greater than 30% is red, 10-30% is yellow, and less than 10% is green.
  • Variance from Cost: See above for the most part; variance of any kind isn't great, but obviously most of the time you are less worried about being under budget than over.
  • CIO Rating: This is a best-judgment measure that is supposed to roll quite a few things into one simple rating, including risk management, requirements management, contractor oversight, historical performance, human capital, and others.
  • # of Baselines: The baseline is the standard against which the performance of the work (in the context of cost and schedule) is measured. Understanding the baseline history places the variance from cost and schedule in context and provides insight into the planning and management of the IT investments.
  • Size of the Investment: This is useful for prioritizing deviation from cost and schedule. In terms of identifying where to exert management influence first, the investment manager may want to understand the dollars involved. 
  • Performance Metrics: The specific metrics being used to measure investment performance as well as detailed attainment data. 

Things that might be helpful but which aren’t available from the dashboard:

  • Criticality of the investment: Every investment is important, but some are absolutely critical to the organizational mission. The IT spending dashboard lets us see what business function is being supported (BRM) and what services are being delivered (SRM), but doesn’t help us get to a value judgment on the criticality of the investment to that agency or its relationships with or influence on other investments. 
  • Internal/External Users: As above, there is precious little information that would help me determine how big the impact to organizational performance would be if there were problems with the investment. Understanding the size and scope of the user community may help us understand and evaluate the potential impact of investment risk. 
  • Organizational Desire for the Investment: I am trying to draw a line between criticality and desirability here. Criticality I am defining as the investment's relationship to achieving the mission. Desirability may be something which takes performance from satisfying a requirement to exceeding a requirement.

As part of our stakeholder analysis we put ourselves in the shoes of the IT Investment Portfolio Manager and came up with the following key criteria:

Important Investment Factors

Purpose of the Analytic Component 

One of the hardest things to do when thinking about developing analytic components is developing the statement that encapsulates the purpose of the component. Our first analytic component for the investment portfolio manager (IPM) will be focused on helping guide the IPM's eye to the investments that most require attention. This may be harder than it seems. One of the biggest challenges I find in developing high-level dashboards is resisting the temptation to overcomplicate, or to try to serve a broader audience than is really intended. Our dashboard is intended for the person in charge of managing the entire IT investment portfolio. As such, some detail that will be available from more analyst-oriented dashboards will either be abstracted or otherwise wrapped into the presentation layer. The design tension here – between giving enough detail to support decision making and presenting a very complex information set in a manner that is accessible – is going to be very difficult to manage. Throughout the development of most analytic components we will identify measures and views that are very relevant to other stakeholders. In this example we are going to find a great deal of information and views that will resonate with individual investment managers, project portfolio managers, project managers, and analysts. Staying laser-focused on the objective of our high-level stakeholder will be critical to the eventual success of the dashboard. We may also need to build some of the lower-level analytics in order to understand the various components of the high-level analytic well enough to understand their interplay and relationships.
Stay focused on the right stakeholder group

Finding Value in the As-Is Analysis

I mentioned in our first article that an early step in the building of analytic components is to start with what is currently in place. For our IPM this means looking at the existing IT Spending dashboard for ideas. In looking at the information currently available to us, I think it is obvious that we need to address issues with investments that are ahead of or behind schedule/cost or that have been flagged by the CIO. These are the “Big Three” on the existing IT Spending Dashboard and are represented as shown in the following:

IT Dashboard Big Three

Project-Level Cost and Schedule Variance Rating | Evaluation (by agency CIO)                | Color
------------------------------------------------|-------------------------------------------|-------
≥ 30%                                           | 1 (High Risk) or 2 (Moderately High Risk) | Red
≥ 10% and < 30%                                 | 3 (Medium Risk)                           | Yellow
< 10%                                           | 4 (Moderately Low Risk) or 5 (Low Risk)   | Green
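To make the banding concrete, here is a minimal Python sketch of how the dashboard's published thresholds could be applied. Only the thresholds come from the table above; the function names and input format are my own illustrative choices, not anything defined by the IT Dashboard.

```python
def variance_color(variance_pct: float) -> str:
    """Band an absolute cost/schedule variance percentage using the
    published thresholds: >= 30% Red, 10-30% Yellow, < 10% Green."""
    v = abs(variance_pct)
    if v >= 30:
        return "Red"
    if v >= 10:
        return "Yellow"
    return "Green"


def cio_color(rating: int) -> str:
    """Band the 1-5 CIO evaluation per the table above."""
    if rating <= 2:       # 1 (High Risk) or 2 (Moderately High Risk)
        return "Red"
    if rating == 3:       # 3 (Medium Risk)
        return "Yellow"
    return "Green"        # 4 (Moderately Low Risk) or 5 (Low Risk)


# A hypothetical project 22% behind schedule, with a CIO rating of 3
print(variance_color(22), cio_color(3))  # -> Yellow Yellow
```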

The dashboard does provide the useful feature of letting you view the variance in terms of investment count or dollars, but it does not necessarily help me group or grade investments to ensure I give my attention to the ones most in need first. Because of this, I think one of the first things our investment manager needs is a way to combine these measures to get a reasonable sense of their impact on investment performance. A problem also exists when we dig deeper into the data, because cost variance and schedule variance are being driven at the project level, which lives below the investment level. This creates a reporting problem because any specific project-level schedule or cost variance may have a disproportionate impact on the overall execution of the investment. A critical path delay in a specific project may unleash a ripple effect of consequences that greatly exceeds the specific project impact; the reverse is true as well.

We can certainly handle this at the detailed analysis level by providing a project analysis dashboard depicting cost and schedule variance across the project portfolio. However, even here some real thought will be required to surface critical path projects that are at risk. For a large department with several hundred projects and billions of dollars in the portfolio, we need to develop an analytic component that provides a holistic view of our investments. The spread and number of the projects present us with a problem as we try to look at our project portfolio for clues to our investments. If we use a four-square chart showing project cost variance on the y axis and project schedule variance on the x axis (sketched below), we can clearly identify projects that are in trouble, but it is hard to visually understand how this relates to our investments because there is no clear way to group the projects and show a consolidated measurement of what this means for the containing investment. It would be much cleaner if the investment portfolio manager could look at the simpler investment-only view.

Figure: Cost Variance and Schedule Variance grids – Investment Portfolio and Project Portfolio
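As a rough illustration of the project-level view, here is a minimal matplotlib sketch of the four-square chart described above. The project data points are invented for illustration; a real version would be fed from the dashboard's project-level variance data.

```python
import matplotlib.pyplot as plt

# Hypothetical projects: (schedule variance %, cost variance %)
projects = [(-5, 3), (12, 18), (35, -8), (-22, 40), (4, -2), (9, 31)]

xs, ys = zip(*projects)
fig, ax = plt.subplots()
ax.scatter(xs, ys)

# Lines through the origin divide the chart into the four squares
ax.axhline(0, color="gray", linewidth=1)
ax.axvline(0, color="gray", linewidth=1)

ax.set_xlabel("Schedule variance (%)")
ax.set_ylabel("Cost variance (%)")
ax.set_title("Project portfolio: cost vs. schedule variance")
plt.show()
```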

It is immediately apparent that the investment chart does not present the whole picture and that the project portfolio chart does not necessarily help the investment manager. To help our investment manager, we are going to have to come up with some way of characterizing the project-related risk carried within each investment that will help the investment manager identify trouble spots within the portfolio. It should be understood up front that this analysis will not replace open communication with individual investment managers to ensure that hidden threats do not turn into major problems. The purpose is to help the portfolio manager decide whom to speak with first and to direct attention to the areas of the portfolio where it is most needed. The following are some of the many strategies for enabling the IPM to leverage the project-related characteristics contained within the project portfolio attached to each investment:

  • Adding total variance from estimated cost and schedule: Characterizing the investment by total variance gives us a sense of how far off the plan we are, but not necessarily how far off track we are. For example, if we are way ahead of schedule and way behind on cost, we may be poor planners, but we may not be in big trouble.
  • Average into a score: Average all of the projects within an investment and score them on that basis, then characterize the investment in the same manner. This is essentially the strategy used under the investment-wide scoring process in place until last year. The flaw, of course, is that it does not represent the cost and schedule risk in proportion to the dollars behind each project.
  • Percentage of dollars at risk: Create a scoring system or other mechanism to categorize CV and SV into buckets (R, Y, G). Essentially, size the projects by dollars, then decide what percentage of the investment's total dollars needs to be “at risk” to constitute an overall rating (R, Y, G) for the whole investment.
  • Combined with Rules: Create a scoring system or other mechanism to categorize CV and SV into the (R, Y, G) buckets. If any project is Red, the investment goes Red. Outside of Red, decide whether the investment is Green or Yellow overall based on a simple majority of the dollars at risk (a sketch of this approach follows the list).
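Here is a minimal sketch of the rules-based roll-up, assuming each project has already been banded R/Y/G (for example, by the banding function sketched earlier) and sized in dollars. The data structure and the implementation of the simple-majority rule are illustrative choices, not an established scoring standard.

```python
def investment_color(projects: list[tuple[str, float]]) -> str:
    """Roll (color, dollars) project pairs up to one investment color.

    Rules: any Red project makes the investment Red; otherwise the
    investment is Yellow if a majority of its dollars sit in Yellow
    projects, and Green if not.
    """
    if any(color == "Red" for color, _ in projects):
        return "Red"
    total = sum(dollars for _, dollars in projects)
    yellow = sum(dollars for color, dollars in projects if color == "Yellow")
    return "Yellow" if yellow > total / 2 else "Green"


# Hypothetical investment holding three banded projects
print(investment_color([("Green", 4.0e6), ("Yellow", 9.0e6), ("Green", 2.0e6)]))
# -> Yellow (9.0M of the 15.0M total dollars sit in Yellow projects)
```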

Depending on the organization, any of the above could work, and one of the nice things about a strategy that enables a real score based on project risk per investment is that it would allow us to rank our investments by risk factors. This might be very helpful if we were to attempt to develop a scorecard and ranking system that included all of the above factors. However, that exercise will have to wait for another day. We want our portfolio manager to be able to view the IT investment portfolio and intuitively grasp which investments need the most attention. To do so we have narrowed our focus to a few critical categories of information and developed the following rough design:

Rough IT Investment Portfolio Design

We are going to use a TreeMap, which is useful for “displaying hierarchical (tree-structured) data as a set of nested rectangles. Each branch of the tree is given a rectangle, which is then tiled with smaller rectangles representing sub-branches. A leaf node’s rectangle has an area proportional to a specified dimension on the data. Often the leaf nodes are colored to show a separate dimension of the data.” (Wikipedia, http://en.wikipedia.org/wiki/Treemapping). 
By default, our treemap will depict the following (a prototype sketch follows the list):
  • Dollars at Risk: Investments will be sized by rectangle area according to their dollar value, with their associated projects nested within them, also sized by dollars.
  • CIO Rating: Each investment box will be colored Green, Yellow, or Red, based on CIO Rating. For an explanation of CIO rating, see http://www.itdashboard.gov/faq#faq7. 
  • Project Risk: Each project will be colored Green, Yellow, or Red, based on Cost Performance Index range. 
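As a sketch of how this design could be prototyped, the snippet below uses Plotly Express, one of several charting libraries with built-in treemap support; the post does not name a specific tool, and the data rows are invented for illustration. A fuller build would color investment boxes by CIO rating and project boxes by CPI band, as described above.

```python
import pandas as pd
import plotly.express as px

# Hypothetical portfolio: one row per project, nested under its investment
df = pd.DataFrame({
    "investment": ["Inv A", "Inv A", "Inv B", "Inv B", "Inv C"],
    "project":    ["Proj 1", "Proj 2", "Proj 3", "Proj 4", "Proj 5"],
    "dollars":    [4.0, 9.0, 6.5, 2.0, 12.0],  # $M, drives rectangle size
    "status":     ["Green", "Yellow", "Red", "Green", "Yellow"],
})

fig = px.treemap(
    df,
    path=["investment", "project"],  # investment boxes tiled with projects
    values="dollars",
    color="status",
    color_discrete_map={"Green": "green", "Yellow": "gold", "Red": "red"},
)
fig.show()
```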

CPI essentially provides an indicator of how much of the plan you have accomplished for the dollars expended against it. One of the most difficult decisions we made in developing this design was trying to develop a method for characterizing and visualizing project-related risk within the portfolio. We chose to use the cost performance index (CPI), which is described by Wikipedia as:
“CPI greater than 1 is good (under budget): 
< 1 means that the cost of completing the work is higher than planned (bad); 
= 1 means that the cost of completing the work is right on plan (good); 
> 1 means that the cost of completing the work is less than planned (good or sometimes bad). 
Having a CPI that is very high (in some cases, very high is only 1.2) may mean that the plan was too conservative, and thus a very high number may in fact not be good, as the CPI is being measured against a poor baseline. Management or the customer may be upset with the planners as an overly conservative baseline ties up available funds for other purposes, and the baseline is also used for manpower planning.” (Wikipedia, http://en.wikipedia.org/wiki/Earned_value_management) 
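Since the design colors projects by CPI range, here is a minimal sketch of the calculation. The CPI formula (earned value divided by actual cost) is standard earned value management; the exact Green/Yellow/Red break-points are not specified in this post, so the thresholds below are purely illustrative.

```python
def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost Performance Index: budgeted cost of work performed divided
    by the actual cost of that work. CPI > 1 means the work is costing
    less than planned."""
    return earned_value / actual_cost


def cpi_color(index: float, low: float = 0.9, high: float = 1.2) -> str:
    """Band a CPI value. Illustrative thresholds: well under 1 is Red,
    near 1 is Green, and a very high CPI is flagged Yellow because it
    may reflect an overly conservative baseline (see the quote above)."""
    if index < low:
        return "Red"
    if index <= high:
        return "Green"
    return "Yellow"


# Example: $4.0M of planned work accomplished for $5.0M actually spent
value = cpi(4.0e6, 5.0e6)
print(round(value, 2), cpi_color(value))  # -> 0.8 Red
```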
The downside of this is that it is geared more toward financial performance than specific performance against the schedule, so our investment portfolio manager will not easily see important schedule lags, for example, but we felt it was a good compromise. No dashboard will replace good communication, and critical path issues – including schedule lag or other issues bubbling out of the project portfolio – will still require excellent communication and further analysis. I'll post the finished dashboard to the site next week. As always, I appreciate your feedback and look forward to continuing the discussion. Are you interested in learning more about the visual design of informational elements? Read “The Visual Display of Quantitative Information” to get the journey started.
Put our team to work improving your organization’s performance. 

Thanks as always for reading my blog. I hope you will join the conversation by commenting on this post.

If you liked this post, please consider subscribing to this blog and following me on Twitter @jmillsapps. I regularly give talks via webinar and speak at events and other engagements. If you are interested in finding out where to see me next, please take a look at the events page on this blog. If you are interested in having me speak at your event, please contact me at events@joshmillsapps.com.

If you are interested in consulting services please go to MB&A Online to learn more.

Building Analytic Components, Part 1

Business Intelligence seems to be on the tips of many IT executives' tongues these days. After the Cloud, Big Data, and Master Data Management, Business Intelligence has a prominent place in the pantheon of IT buzzwords. The intent of business intelligence, to me, is quite simple: leverage the information available to the business to improve performance. This can take many forms, but at its core BI isn't about making dashboards, cool charts, or data warehouses. It is about helping the organization make better decisions. With that said, I am going to walk through the process of designing a single business intelligence analysis component focused on IT Portfolio Management in two articles. The first article will describe the high-level process of developing the analytic, and the second will be a detailed dive into the development of a specific analytic component that will be used to manage investments within the portfolio. I will be using data from the Federal IT Spending dashboard, which is publicly available at http://www.itdashboard.gov/. Just because the data is from the public sector doesn't make this example less relevant to the private sector. IT investment portfolio characteristics like deviation from cost and schedule are broadly applicable across both the public and private sectors, and the complex missions and comprehensive IT portfolios of the public sector can provide a rich point of reference, particularly for very large (Fortune 500) private sector organizations.


Caption: IT Spending Dashboard
I think that with any analysis or reporting component it is important to start by identifying the stakeholder community being served. In this case, the analysis component under development will be used by IT Investment Managers and the senior executive team tasked with managing the organizational IT investment portfolio for performance and risk. In both the public and private sectors, IT portfolio management is an often talked about but often difficult to execute practice. For large organizations, managing competing organizational requirements, disparate technologies, uneven performance data, and complex portfolio composition is simply too difficult, and the portfolio ends up being managed largely on the basis of organizational politics and personalities. To effectively manage the IT investment portfolio of a large and complex organization there needs to be a data-driven approach, with analytics that are available to and agreed upon by the entire organization. Developing these analytics and the comprehensive suite of informational requirements necessary to manage a complex organization requires more space than I have in this article. My intent here is really to focus on the process of tailoring analytics to meet stakeholder requirements rather than to dive too deeply into a particular problem faced by investment managers. However, I think there will be some value for people interested in that particular specialty as well.

Whose needs are being served?

In the development of any serious analytic component, the first thing we need to understand is the stakeholder community we are serving. In our case this community is tasked with managing the IT investment portfolios of large, complex organizations. To better understand their informational requirements, one of the first things I try to do is get a grasp on the informational inputs they are currently using to make decisions. This organizational reporting archeology is revealing because it will often uncover vast differences in the information executives are using to make similar decisions. This is important to surface because one important benefit a BI effort can bring is more standardization in decision-making approaches and a more open evaluation of decision criteria. Another common finding is that similar information is being developed in different reports, for different executives, by different groups, which results in enormous wasted effort. This homework process sets the stage for the first big exercise in developing a better decision support system for our investment managers – the facilitated session.

Caption: Remember that all your stakeholders are different.
As a consultant I believe that these facilitated sessions are critical to the requirements gathering process and enable frank and open discussion of competing requirements. One of the most consistent things I have found is that the information and analytics currently in use by an organization reflect a much smaller subset of the stakeholders than intended, often only the person who made the buy decision. The second is that stakeholders often have very different opinions on what the informational components of the analysis should be. One of the things I often do for my facilitated sessions is to create the giant wall of reporting. Depending on the meeting space and the volume of reports I have on hand, I will create an entire wall dedicated to the information currently being used to drive the decision-making process. In the case of the investment portfolio management team, this will include reports related to individual investments as well as any dashboards, spreadsheets, or analytics being used to support portfolio-wide decisions. I will also have one of our team members develop a “core data” sheet that maps the various informational inputs used across the current reports and analytics, along with associated metadata regarding authoritative information sources, the process that creates the information, data refresh requirements, and any obvious differences in reporting semantics.

Finally, we will set up a whiteboard or flip chart with some basic categories, including columns for information that is relevant in the aggregate, information that is relevant at the discrete level, questions we are trying to answer, and sources of data. Depending on the analysis we are trying to facilitate we may have more spaces available, but those are the basics. This setup process usually takes about two weeks depending on the size of the organization and the complexity of the analysis, with much of that time spent working through the homework associated with the existing decision-making and reporting artifacts. Depending on the scope of the engagement, this activity may be part of a business process re-engineering effort, or, in the specific case of IT Investment Portfolio Management, we may have been asked in to supplement or review the process as a whole, which has implications for this process as well.
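To make the “core data” sheet concrete, here is a small illustrative sketch of what one row-per-input mapping might look like. The field names and example values are my own hypothetical choices, not a prescribed template.

```python
# Illustrative "core data" sheet: one entry per informational input found
# during the reporting-archeology homework. All field names are hypothetical.
core_data_sheet = [
    {
        "input": "Project cost variance",
        "appears_in": ["Monthly portfolio deck", "CIO risk spreadsheet"],
        "authoritative_source": "EVM system",
        "refresh": "monthly",
        "semantic_notes": "One report shows % of baseline, the other raw dollars",
    },
    {
        "input": "Investment CIO rating",
        "appears_in": ["Agency IT Dashboard feed"],
        "authoritative_source": "CIO evaluation process",
        "refresh": "monthly",
        "semantic_notes": "1-5 scale; criteria undocumented in legacy reports",
    },
]

# Flag inputs that are produced in more than one report (duplicated effort)
for row in core_data_sheet:
    if len(row["appears_in"]) > 1:
        print(f"Duplicated input: {row['input']} -> {row['appears_in']}")
```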
Important Factors in Understanding the IT Investment Portfolio
In any case, once the setup is complete and our homework is done, it is time to bring in the stakeholders. I find that one of the first useful steps is to provide some structure and scope to the exercise by focusing on the questions that need to be answered and the use cases for the analysis. One of the great benefits of this is a focus on the results and outcomes we are trying to achieve rather than simply organizing and reporting on the information that is available. One of the great problems I have found with BI initiatives is that they are often driven in large part by the information that is available rather than the information that would be useful for making decisions. I have never sat in a facilitated session that did not uncover some informational gap in the current decision support scenario. The group may also discover that some of the data currently being developed and managed is not truly necessary to support decision-making. It is critical that this be recorded and that the thin layer of information required to support decision making be made explicit. This way, the resources being expended in the as-is state to support the existing decision-making process can be re-allocated to meet other organizational needs.

Too often reports and analysis components are added but nothing is ever removed. This is not just wasteful of resources; it is actually harmful to the quality of the decisions that will be made. The world is a complex place for modern executives. The ability of business intelligence systems not just to provide the decision maker with relevant information but to remove informational clutter is critical to the overall quality of decisions. Many teams are hesitant to remove analytics and reports because “somebody might be using them” or because “we spent so much time developing them.” These are not good reasons to continue maintaining these resources. This is not to say that these exercises should result in an entirely new set of reports and analytics. Maintaining legacy reporting is great as long as there is a real use case; in fact, some of the best reports and analysis that will support your organization are probably sitting in spreadsheets on a few executives' desks right now. These are usually cobbled together from several existing reporting systems by an executive's staff and have been used to drive organizational meetings for years. Unfortunately, these spreadsheets and PowerPoints are rarely the things that survive a new BI initiative. Instead, the reports they were cobbled together from in the old system are re-built in the new system, and the spreadsheet that actually drives decision making still has to be cobbled together every week or month. Identifying the questions and use cases is critical because it enables the development of criteria for what stays and what goes, as well as setting boundaries around data requirements.
Developing the burning questions
Once you have arrived at an initial set of questions and use cases, you can begin to design analytic components to meet those requirements. In our investment manager example, one of the use cases is to identify underperforming components of the investment portfolio. This enables earlier engagement by management to either improve performance or terminate the investment, depending on a more detailed analysis. The task then becomes selecting from the larger portfolio those investments that may be at risk. In next week's article I will provide a detailed step-by-step walkthrough of developing a meaningful analytic component for our IT Investment Manager.

Thanks as always for reading my blog. I hope you will join the conversation by commenting on this post.

If you liked this post, please consider subscribing to this blog and following me on Twitter @jmillsapps. I regularly give talks via webinar and speak at events and other engagements. If you are interested in finding out where to see me next, please take a look at the events page on this blog. If you are interested in having me speak at your event, please contact me at events@joshmillsapps.com.

If you are interested in consulting services please go to MB&A Online to learn more.