Your Planning Platform Is Not Your Data Warehouse

Published on: 12 April 2022
Written by: Michael Taylor

11 reasons why your planning platform will complement, rather than alleviate the need for, a data warehouse

For the Office of Finance, implementing robust budgeting, forecasting, and reconciliation processes and capabilities that reduce reliance on ungoverned spreadsheets and flexible but fallible human input provides immense value to the overall business.

Such systems introduce a fundamental level of rigour by creating a baseline financial plan for success. They help keep an organisation and its key players honest on performance against budgets and targets, and they close an important loop in the reconciliation gap between what was productively earned vs. what was received and realised.

However, such systems can be a victim of their own success. Whilst the title of this piece might seem to be stating the obvious, the insatiable appetite for quality data across an enterprise, and in the Office of Finance in particular, often results in an ever-increasing volume of financial and non-financial data being requested and loaded into such platforms.

The unintended consequence of this is that those systems then get pushed beyond their sweet spot, resulting in dissatisfied users, higher platform costs, overworked source systems, and unwelcome technical debt.

CFOs, CIOs, and other senior executives need to appreciate and acknowledge that Planning systems should not be seen as the centrepiece of an organisation's Data and Analytics ecosystem. Rather, Planning and Reconciliation are just two of a long list of valuable use cases that an organisation can unlock as part of a broader Data and AI strategy. This piece discusses 11 reasons why your planning and reconciliation platforms complement, rather than alleviate the need for, a robust data foundation strategy and approach.

1. Application Performance

There is an old adage that seasoned data practitioners all over the world mutter frequently over the course of their careers: keep the largest volume of data, and the associated query load, close to “where the iron is”, that is, where the processing capacity sits. This helps reduce costs, keeps your front-end, use-case-centric applications lightweight and high-performing, and keeps your architecture ‘loosely coupled’, offering modularity and flexibility to change in future. We’ve seen organisations opt for a level of detail far too granular in their budgeting and forecasting systems, leaving those who need to finalise a budget or forecast railing against the unexpected workload of keeping all that data current.

The advice is to keep your level of detail appropriate for the job at hand, keep the data volume and processing load close to ‘where the iron is’, and think creatively about ways to meet user requirements by leveraging common features such as drill-through-to-detail. Approaches such as this give your users the experience of a well-designed, cohesive data architecture underpinning both the front-end applications and the underlying data that supports them.
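
To make the drill-through pattern concrete, here is a minimal sketch in Python, assuming a hypothetical setup where transaction-level detail stays in the data foundation and only a monthly summary is loaded into the planning model; the table layout, column names, and figures are illustrative, not a prescribed design.

```python
import pandas as pd

# Hypothetical transaction-level detail held in the data foundation,
# not in the planning model (names and values are illustrative).
detail = pd.DataFrame({
    "cost_centre": ["CC100", "CC100", "CC200", "CC200"],
    "account":     ["Travel", "Travel", "Travel", "Salaries"],
    "month":       ["2022-03", "2022-03", "2022-03", "2022-03"],
    "amount":      [1250.00, 430.50, 980.00, 25400.00],
})

# Only the aggregated view is loaded into the planning platform.
summary = (detail
           .groupby(["cost_centre", "account", "month"], as_index=False)["amount"]
           .sum())

def drill_through(cost_centre: str, account: str, month: str) -> pd.DataFrame:
    """Fetch the underlying transactions for one planning cell on demand."""
    mask = ((detail["cost_centre"] == cost_centre)
            & (detail["account"] == account)
            & (detail["month"] == month))
    return detail[mask]

# A user questioning the CC100 Travel figure drills through to the detail.
print(summary)
print(drill_through("CC100", "Travel", "2022-03"))
```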

2. Skills availability

The skills required to implement most of the leading budgeting and forecasting platforms don't come cheap. Skills in advanced TM1 TurboIntegrator, Anaplan Model Building or Master Anaplanners, and good modular architecting more generally, come at a distinct premium compared to skills that anchor around SQL, the query language that underpins most Data Foundation approaches.

Granted, if you’re currently grappling with a Cloudera platform, or even looking for strong skills in Redshift optimisation, such skills have similar premium value due to scarcity. The key takeaway is that if your data strategy centres around your Finance and Planning platforms, you’ll pay a premium for skills now and into the future.

3. Licensing costs

Planning applications designed for the Office of Finance typically cater for a focused set of users, and their licensing frameworks are constructed with this in mind. If you're thinking of loading numerous data sets into your finance systems, and there is a need to disseminate that information to a large number of users, check the licensing constructs that enable this. Sharing data out to hundreds or even thousands of users is likely to be cost-prohibitive, compared to more traditional approaches of landing and staging data in a data foundation technology and exposing this via one of the more traditional Business Intelligence tools.

4. Long term record-keeping of budget/forecast accuracy

Are you measuring the budgeting and/or forecasting effectiveness of your organisation over time? What is the consequence of members of your team continually missing their budgets and forecasts? Is there a trend? Do people know why they are under or over budget?

Since data volume can drive the cost of your planning platform, it’s probably more appropriate to export and archive your historic point-in-time budgets or forecasts so they can be properly assessed against what really happened in the actuals, and so that models can be trimmed of that historic data and in turn kept more lightweight.

Advanced analysis can then be conducted on that exported, archived data, and those learnings utilised in future planning. Data Science and Machine Learning approaches can also be applied against historic data to offer budget and/or forecast recommendations, and human contributors can be asked to justify why they are departing from the data-driven projection (often referred to as a ‘business override’).
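
As a minimal sketch of that kind of analysis, assuming a simple archive layout of point-in-time forecast snapshots joined to actuals (the column names and figures below are invented for illustration), one could compute the absolute percentage error per period and roll it up per snapshot:

```python
import pandas as pd

# Hypothetical archive: each row is a forecast made at 'snapshot' for 'period',
# joined to the actual result once it is known (illustrative numbers only).
archive = pd.DataFrame({
    "snapshot": ["2021-10", "2021-10", "2022-01", "2022-01"],
    "period":   ["2022-01", "2022-02", "2022-01", "2022-02"],
    "forecast": [100_000, 105_000, 98_000, 103_000],
    "actual":   [ 96_500, 110_200, 96_500, 110_200],
})

# Absolute percentage error for each forecasted period.
archive["ape"] = (archive["forecast"] - archive["actual"]).abs() / archive["actual"]

# Mean absolute percentage error per snapshot shows whether forecasts made
# closer to the period were actually more accurate.
accuracy_by_snapshot = archive.groupby("snapshot")["ape"].mean().rename("mape")
print(accuracy_by_snapshot)
```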

This level of transparency and historic recording will put people on notice to think more clearly about their plans and improve the overall planning hygiene of the company.

5. Data loading, data quality, and orchestration

Some planning platforms offer rudimentary data transformation capabilities, whereas others have extremely limited capabilities, necessitating a third-party tool for data integration tasks. Trapping poor-quality data at the ingestion stage is one of the fundamental components of a good data transformation framework.

Without this key quality gate in place, the risk of poor-quality data entering your planning system is extremely high, and therefore confidence and trust in the data is eroded. Garbage in, garbage out, as they say.
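
A minimal sketch of such a quality gate, assuming a simple rule set (no missing keys, known cost centres, plausible amounts); dedicated tooling such as Great Expectations or dbt tests does this far more thoroughly, but the principle is the same, and every name below is hypothetical:

```python
import pandas as pd

VALID_COST_CENTRES = {"CC100", "CC200", "CC300"}  # assumed reference list

def quality_gate(batch: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split an incoming batch into clean rows and rejected rows."""
    problems = pd.Series("", index=batch.index)
    problems[batch["cost_centre"].isna()] += "missing cost centre;"
    problems[~batch["cost_centre"].isin(VALID_COST_CENTRES)] += "unknown cost centre;"
    problems[batch["amount"].isna()] += "missing amount;"
    problems[batch["amount"].abs() > 10_000_000] += "implausible amount;"

    rejected = batch[problems != ""].assign(reject_reason=problems[problems != ""])
    clean = batch[problems == ""]
    return clean, rejected

batch = pd.DataFrame({
    "cost_centre": ["CC100", None, "CC999"],
    "amount": [1200.0, 500.0, 99_000_000.0],
})
clean, rejected = quality_gate(batch)
print(clean)
print(rejected)  # quarantined for review instead of polluting the plan
```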

6. Source system query load considerations

Typical source systems that are in play for a Planning or Reconciliation implementation usually consist of the central Enterprise Resource Planning (ERP) system, General Ledger (GL), Accounts Payable (AP), and Accounts Receivable (AR) systems. Such systems are not generally designed for reporting, given their highly normalised structures; this commonly necessitates multiple joins across distinct tables to get to the data you want in the shape you need (e.g., Microsoft AX will require ingestion and joining of close to 100 tables).

Now, there is a justified case for hitting these systems with live queries, since it's not uncommon for late-arriving journals to show up right at the end of a monthly close process, but the same rules do not apply to all source systems. In most cases, source systems must be protected from material query load, as it can slow down the functioning of critical business systems and processes; for this reason it is better practice to query them once per day and stage that data within a central repository for repeated use.
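
As a hedged sketch of that ‘query once, stage centrally’ pattern: below, an in-memory SQLite table stands in for the operational source, and a small function pulls the data in a single query and lands it as a dated file in the staging area for downstream reuse. Table, column, and file names are placeholders, not a prescribed design.

```python
import sqlite3
from datetime import date
import pandas as pd

# An in-memory SQLite table stands in for the ERP/GL source system here;
# in practice this would be a scheduled query against the real source.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE gl_journal_lines (journal_id INT, account TEXT, amount REAL)")
source.executemany("INSERT INTO gl_journal_lines VALUES (?, ?, ?)",
                   [(1, "Travel", 1250.0), (2, "Salaries", 25400.0)])

def stage_daily_extract(conn, run_date: date) -> str:
    """Query the source once and land the result as a dated staging file."""
    frame = pd.read_sql("SELECT journal_id, account, amount FROM gl_journal_lines", conn)
    target = f"gl_journal_lines_{run_date.isoformat()}.csv"
    frame.to_csv(target, index=False)  # downstream loads and reports read this copy
    return target

print(stage_daily_extract(source, date.today()))
```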

7. Integration of Non-financial data

It might come as no surprise that it's not all about the financials, and depending on the industry you're in, financials might be such a lagging indicator that focusing on them too closely is like driving a car while looking in the rear-view mirror.

Say you're a sporting body: if spectators are attending games, people are tuning in from the comfort of their lounge rooms, and kids are signing up at grassroots level, the financials should largely take care of themselves. Or say you're a manufacturer: if you're striking the right balance between supply-led and demand-led factors, paying fair market rates for quality raw materials, getting your pricing regimes right, planning your workforce allocation well, and keeping customers happy with the product, the financials should again take care of themselves.

For many of these data sets, the planning system is not the right place for them to be maintained in detail. That is the role of your Data Foundation.

8. Data science and Machine Learning capabilities

Many budgeting and forecasting platforms get it about 90% right but fail at the last mile. That is, the platform is put in place, the applications are designed according to good modular practice, but they fall over when they rely on a human to essentially ‘type in’ what they think the budget or forecast could or should be. No doubt there is a role for intuition and gut feel, however some planning application vendors are now moving the game forward.

Some vendors have introduced powerful Machine Learning forecasting technology that can run right down to SKU level, which is useful if your supply chain teams are structured to budget at a low level of detail. Other planning tools integrate nicely with leading Data Science and AI platforms. The problem, however, is that coupling your planning applications with the vendor's Data Science and Machine Learning offerings can be frightfully expensive.

When a robust data foundation is in place, your data sets are open and loosely coupled, leaving you free to use alternatives such as open-source libraries for statistical forecasting and other relevant algorithmic approaches.
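
As one illustration of the open-source route, the sketch below fits a Holt-Winters model from the statsmodels library to a fabricated monthly series and produces a 12-month projection that planners could treat as the data-driven baseline a business override must be justified against. The data and parameters are purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Fabricated monthly revenue series with a trend and yearly seasonality.
idx = pd.date_range("2018-01-01", periods=48, freq="MS")
rng = np.random.default_rng(42)
values = (100 + np.arange(48) * 1.5
          + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
          + rng.normal(0, 2, 48))
series = pd.Series(values, index=idx)

# Holt-Winters (additive trend and seasonality) as the statistical baseline.
model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
baseline = model.forecast(12)  # the data-driven projection for the next year
print(baseline.round(1))
```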

9. Ad-hoc and self-service reporting capability

Budgeting and forecasting platforms are improving their reporting capabilities with each release. However, Business Intelligence (BI) platforms have been designed with self-service and multiple audiences in mind, and ultimately there is no substitute for offering a full suite of ad-hoc reporting capabilities. Planning, budgeting, and forecasting platforms have their quirks in this regard; some prefer to model data across separate time dimensions rather than the classic Year > Quarter > Month > Day drill-down path, which can confound certain users.

Others lay their dimensionality horizontally across the screen rather than in a more logical vertical layout, a real problem where the dimensions of an ad-hoc analysis are broad, requiring nightmarish scrolling left and right to navigate analysis pathways. Others simply don't allow for simple, speed-of-thought analysis such as excluding items within a dimension and automatically recalculating subtotals.

These features are critically important if you’re in a sales, marketing, or supply chain role and are wanting to perform rapid basket or what-if analysis on dynamic sets of data, which can often be when you’re on the phone to a buyer or supplier negotiating the subtleties of a key transaction.
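
To illustrate the exclude-and-recalculate behaviour in the simplest possible terms, the sketch below drops a selected product from a toy sales frame and recomputes category subtotals on the fly; in a BI tool this is a single click, and the point is that the data foundation keeps the detail available to support it. Names and numbers are invented.

```python
import pandas as pd

sales = pd.DataFrame({
    "category": ["Beverages", "Beverages", "Snacks", "Snacks"],
    "product":  ["Cola", "Juice", "Chips", "Nuts"],
    "revenue":  [120.0, 80.0, 60.0, 40.0],
})

def subtotals(frame: pd.DataFrame, exclude: set[str] = frozenset()) -> pd.DataFrame:
    """Recompute category subtotals after excluding selected products."""
    kept = frame[~frame["product"].isin(exclude)]
    return kept.groupby("category", as_index=False)["revenue"].sum()

print(subtotals(sales))                     # full subtotals
print(subtotals(sales, exclude={"Juice"}))  # Beverages subtotal recalculated
```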

10. Data Fabric thinking

The notion of the Data Fabric, and associated concepts such as Data Virtualisation, anchor on the idea that sometimes it's best to leave source data where it is, rather than moving it around in costly batch windows. The Data Fabric builds upon the data foundation concept but takes it one step further by leaving data at its upstream central stores of truth whilst making it appear as if it has all been consolidated into one location.
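
As a loose, small-scale illustration of the query-in-place idea (not a full Data Fabric), the sketch below uses DuckDB to join a Parquet file and a CSV file directly from where they sit, without a batch load into another store. The two files are generated first purely so the example is self-contained; in practice they would be existing upstream extracts, and the file names are placeholders.

```python
import duckdb
import pandas as pd

# Generated only so the example runs end-to-end; in practice these files
# (or tables) already exist in their upstream stores of truth.
pd.DataFrame({"sku": ["A1", "B2"], "units": [10, 4]}).to_parquet("demand.parquet")
pd.DataFrame({"sku": ["A1", "B2"], "unit_cost": [2.5, 7.0]}).to_csv("costs.csv", index=False)

# DuckDB queries the files where they sit; nothing is staged or copied first.
result = duckdb.sql("""
    SELECT d.sku, d.units, c.unit_cost, d.units * c.unit_cost AS total_cost
    FROM 'demand.parquet' AS d
    JOIN read_csv_auto('costs.csv') AS c USING (sku)
""").df()
print(result)
```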

Planning systems rarely offer this feature or functionality, and instead rely on the movement of data into in-memory and other related Online Analytical Processing (OLAP) structures. This can be problematic and is worsened by the fact that some planning platforms ‘lock’ users out of their models while metadata updates take place, often necessitating the splitting of models so data refreshes can occur, which then impacts support and maintenance tasks.

Data Fabric thinking can alleviate data refresh troubles and keep your options open, which will appeal to the CIO and other technical leaders who have a limited overnight batch window at their disposal.

11. Data Sharing

For decades, keeping valuable data assets for internal use only has been the norm. However, the appetite for sharing data beyond an organisation's walls is growing. Suppliers are realising there are numerous advantages to gaining a better understanding of key customer demand, so they can be more proactive in preparing to deliver quality products and services.

At a time when supply chains are stretched to their limit and delivery timelines and costs are blowing out, knowing the ideal order quantities as early as possible can be of substantial benefit. The opportunity for improved data sharing in business-to-business (B2B) scenarios is huge. In fact, many market tenders in the B2B supply chain space now feature data sharing as a mandatory requirement, putting the challenge firmly at the feet of the respondent as to how they plan to meet this growing need in order to maintain competitiveness.

Data Sharing will become a fundamental pillar of most data foundation platform offerings in future; it is one opportunity organisations won't want to miss.
