Closing the TMS Implementation Gap

Why so many TMS implementations succeed on paper and fall short in practice – and the framework that changes that.


Executive Summary

In the corporate treasury landscape, deploying a Treasury Management System is routinely treated as a project with a defined end point. There is a plan, a timeline, a budget, and a go-live date. When those four things align, the project is declared a success and the team moves on.

The problem is that meeting a deadline is not the same as solving a problem. Across the industry, a persistent gap exists between the moment a TMS goes live and the moment it actually delivers the operational improvement it was purchased to create. Systems run in parallel with the spreadsheets they were supposed to replace. Licensed modules sit unused twelve months after deployment. Phase 2 commitments made under pressure during scoping quietly become the permanent configuration.

This paper examines why that gap exists and what a more rigorous approach looks like. The argument is not that TMS technology fails to deliver but that the standard implementation model is structurally ill-suited to realising the value the technology is capable of providing.


1. Redefining What Success Actually Means

The dominant framework for measuring implementation success comes from project management: on time, on budget, in scope. These are useful disciplines. But they are measures of the implementation, not the outcome, and conflating the two is where the industry consistently goes wrong.

A system can satisfy all three constraints while delivering almost none of the operational improvement that justified the investment. The reporting module technically functions but has never been configured to produce outputs the team can use. The ERP integration works for the main entity but breaks down for the subsidiaries that generate the most complexity. The bank connectivity covers the primary relationships but not the counterparties where the reconciliation problem actually lives.

A system can be live, on time, and on budget and still leave the team running the same manual processes they ran before.

The metrics that actually determine whether a TMS has delivered value are conspicuously absent from most implementation contracts: the percentage of licensed modules actively used twelve months after go-live; the measurable reduction in manual reconciliation steps; whether the system has become the genuine single source of truth for cash positions rather than one input among several that still requires manual verification.

Until these become the metrics that define success – agreed in advance, with baselines established before the implementation begins – the incentive structure will continue to reward go-live and ignore what comes after.


2. The Pre-Contract Scope Problem

A significant proportion of implementation problems are not implementation problems at all. They are scoping problems that become visible during implementation because they were never resolved before the contract was signed.

The standard sales and procurement cycle creates a structural incentive toward vagueness at the scoping stage. Vendors want to close deals. Buyers want to show internal stakeholders that they have secured a system within budget. Both parties are motivated to defer difficult conversations about complexity, edge cases, and integration requirements to a later phase, after the commercial commitment has been made.

The consequence is predictable. Detailed requirements are discovered during implementation rather than before it. Scope expands. Timelines slip. Features that were implicitly assumed to be included require additional investment. The client feels misled. The vendor feels they are being asked to deliver beyond what was agreed. Both are right, because the agreement was never clear enough to support a contrary position.

The solution is straightforward in principle and demanding in practice: the Scope of Work must be comprehensively documented and agreed before the contract is signed, not after. This means completing genuine discovery – understanding the client’s existing processes, data architecture, banking relationships, ERP structure, and reporting requirements – before a contract is executed.

When scope is locked before signature, the project gains two things that are otherwise structurally unavailable: price certainty and genuine alignment. The client knows what they are buying. The vendor knows what they are delivering. The features most likely to drive user value are part of the agreed delivery rather than a future negotiation.


3. Installation Quality Versus Deployment Speed

There is consistent pressure in TMS projects to deploy quickly. Some comes from internal stakeholders who want to see results. Some from vendors with revenue recognition tied to go-live milestones. Some from project teams who have been living inside the implementation for a year and want to be done.

When that pressure overrides sound implementation practice, the result is a superficial installation. The software is running. Basic functions work. But the underlying treasury processes have not been genuinely modernised. ERP connectivity is partial. Reporting requires a manual export to be usable, and so on.

A system that takes slightly longer to implement but eliminates every material manual workaround is worth more than a system that goes live on time but leaves the team maintaining parallel processes indefinitely. The cost of a rushed implementation is not paid at go-live. It is paid daily: by the analyst running exports into spreadsheets, by the treasury manager who cannot trust the cash position without checking it manually, and by the organisation that paid for a transformation and received a migration.

A quality-first installation requires three things done properly.

Robust data integration: ERP connectivity, bank feeds, and entity structures fully mapped and tested before the team is asked to rely on them.

Configuration precision: the system tailored to the specific nuances of the organisation’s structure and reporting requirements, not a standard template with the edges left for the team to work around.

User competency: genuine understanding of the system’s logic, not one-off training sessions, so that the team trusts the outputs and can work without a safety net.

None of these are complicated ideas, but all of them are routinely sacrificed to meet go-live targets.


4. Why Spreadsheets Persist

The persistence of spreadsheet processes after a TMS implementation is widely understood as an adoption problem. The conventional response is more training. Both the diagnosis and the treatment are usually wrong.

Treasury teams are not unsophisticated. The analysts who continue to run manual processes alongside a new system are making a rational decision based on direct experience of that system. The spreadsheet offers something the TMS has not yet provided: outputs they can trace, logic they can verify, and results they can defend in a CFO review without needing to explain why the number is different from what they expected.

Teams do not run spreadsheets alongside their TMS because they are resistant to change. They run them because the system has not yet given them a reason to stop.

When the TMS produces a cash position the analyst cannot reconcile to the bank statements, the spreadsheet stays. When the ERP integration drops transactions and nobody is certain which runs are complete, the spreadsheet stays. When the reporting module requires three manual steps before producing something presentable, the spreadsheet stays. These are not attitude problems. They are configuration gaps – and they are fixable.

The right response to persistent parallel processes is an honest audit of where the system is failing to earn trust, followed by targeted fixes to those specific gaps.


5. A Governance Framework That Lasts Beyond Go-Live

The most common failure mode in TMS implementation is not the catastrophic kind. It is the gradual kind: slow erosion of momentum after go-live, configuration that never gets updated, Phase 2 that never ships, post-implementation review that never happens. The project team disperses, the internal champion moves on, and what remains is a system that works well enough that nobody prioritises improving it and not well enough that it has delivered its potential.

Preventing this requires a governance structure designed at the outset, not improvised after the fact. Four elements are essential.

Outcome-based success metrics agreed before contract signature. Define what twelve-month success looks like before the project begins. Establish baselines. Agree on measurement methodology. Make these the formal criteria against which the implementation will be evaluated. Without this, there is no shared definition of success and no basis for accountability.
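To illustrate, the baseline-and-target methodology described above can be captured in a simple structure. This is a minimal sketch only; the metric names, baselines, and targets below are hypothetical examples of what an organisation might agree before signature, not prescriptions:

```python
from dataclasses import dataclass

@dataclass
class OutcomeMetric:
    """One outcome-based success metric, agreed before contract signature."""
    name: str
    baseline: float   # measured before the implementation begins
    target: float     # agreed twelve-month target
    measured: float   # observed at the twelve-month review

    def met(self) -> bool:
        # Judge the measurement against the target in the direction of
        # improvement: some metrics should rise from the baseline (e.g.
        # module usage), others should fall (e.g. manual steps).
        if self.target >= self.baseline:
            return self.measured >= self.target
        return self.measured <= self.target

# Hypothetical twelve-month review against pre-agreed baselines.
metrics = [
    OutcomeMetric("licensed modules actively used (%)", baseline=30, target=80, measured=85),
    OutcomeMetric("manual reconciliation steps per day", baseline=40, target=10, measured=12),
]

for m in metrics:
    print(f"{m.name}: {m.measured} vs target {m.target} "
          f"({'met' if m.met() else 'not met'})")
```

The point of the structure is not the code itself but the discipline it encodes: every metric carries a baseline captured before the project starts, so the twelve-month review is a comparison against evidence rather than recollection.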

Pre-signature scope lock. Complete the initial discovery before the contract is executed. This eliminates the conditions that produce scope creep, deferred features, and post-signature cost increases, and it forces both sides to have the difficult conversations early, when they are still useful.

A named internal owner with real authority and dedicated time. Identify the person who will own the TMS after go-live before the project begins. Not a super-user with a secondary designation but a named individual with ring-fenced time.

A formal post-go-live review with agreed outputs. Schedule a structured review after go-live, attended by both the internal TMS owner and the vendor’s implementation team. Focus the review on measuring performance against the success metrics agreed before signature.


The Standard Is Too Low

The treasury technology industry has become comfortable with an implementation standard that does not serve treasury teams well. Go-live is celebrated. The gap between go-live and genuine value delivery is rarely discussed openly, because doing so is uncomfortable for vendors who sell implementations and clients who approved the budget.

But the gap is real, and in an environment where treasury decisions carry genuine P&L consequences, it matters. A function still running manual workarounds eighteen months after its TMS went live is not a transformed function. It is a function that paid for transformation and did not receive it.

Closing that gap does not require better technology. The platforms available today are genuinely capable. It requires more rigorous scoping, a governance model that outlasts the project team, an implementation methodology that prioritises quality over speed, and a shared commitment to measure what actually matters rather than what is easy to measure.

The distinction between a TMS that is live and a TMS that works is found in the rigour of the implementation. Go-live is not the finish line. It is the end of the beginning.

Treasury teams deserve better than the current standard. The organisations that demand it – that insist on pre-contract scope clarity, outcome-based success metrics, and a 90-day review that asks the hard questions – will find the technology they already own delivers far more than they have been getting from it.