Preparing for the Perfect Storm - Learning from the TSB Crisis

 

On Sunday, April 22, it first became apparent that TSB was suffering a major IT crisis, one that would bring the bank to its knees: millions of UK customers were affected, with many unable to make payments, pay bills or even access their accounts. It's a catastrophic failure, so much so that, 10 days on (at the time of writing), many customers are still unable to access their finances through TSB.

This technical outage is a huge deal and beyond frustrating for the bank's customers and stakeholders – especially those who have yet to receive their full pay cheques or who now have the possibility of negative credit scoring to contend with.

There are, of course, lessons to be learned from the failings of others; so, while we've yet to hear the full story of what precisely went wrong in this TSB IT crisis, there are fundamental lessons for all businesses regarding IT, change management and business continuity.

 

What Can We Learn from TSB’s Mistakes?

Just as with the year's other large-scale business failure, when KFC was forced to shut down 750 stores in the UK due to a chicken logistics issue (Business Insider 2018), one that can be blamed on switching to a supplier with a single depot, and therefore a single point of failure, we can always learn from the mistakes of those around us. So, in an attempt to pull something positive from this snafu, here are the lessons for businesses looking to avoid making the same mistakes:

1-      Never rush technical teams on large projects

As the Chair of the Treasury Committee, Nicky Morgan, raised in a meeting with TSB's CEO this week, the impact of this project on TSB's reputation is massive.

“Do you realise the reputational damage this has done, not just to TSB, but to online banking in this country?”

Projects, especially those that can have a direct impact on customers, should always be planned and executed as carefully as possible. Ambitious plans are fine, and even encouraged, but make sure no one is cutting corners to hit targets on time.

2-      Tests must be comprehensive

This was an incredibly large project that will have cost millions and involved thousands of technical experts, so a huge amount of testing will have taken place. However, it's clear from the failed execution that there must have been some sort of breakdown in the testing process: either the tests didn't cover everything, or they returned misleading results (one basic check worth having is sketched below). Sometimes you can't test for everything, which is why the next lesson is the most important one…
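We don't know what TSB's test suite actually covered, but one class of check that reliably catches botched migrations before customers do is a reconciliation test: compare the migrated data against the source system, record by record, before cut-over. Here's a minimal sketch in Python, with SQLite standing in for the legacy and new platforms; the table and column names are purely hypothetical.

```python
import sqlite3


def reconcile(legacy_db: str, new_db: str, table: str, key: str) -> list[str]:
    """Compare one table between the legacy and migrated databases.

    Returns human-readable discrepancies; an empty list means the
    migration reconciles cleanly for this table.
    """
    issues: list[str] = []
    legacy = sqlite3.connect(legacy_db)
    new = sqlite3.connect(new_db)

    # 1. Row counts must match exactly -- the cheapest, loudest check.
    count_sql = f"SELECT COUNT(*) FROM {table}"
    legacy_count = legacy.execute(count_sql).fetchone()[0]
    new_count = new.execute(count_sql).fetchone()[0]
    if legacy_count != new_count:
        issues.append(f"{table}: {legacy_count} rows became {new_count}")

    # 2. Every legacy record must exist, unchanged, in the new system.
    for row in legacy.execute(f"SELECT {key}, * FROM {table}"):
        key_value, record = row[0], row[1:]
        migrated = new.execute(
            f"SELECT * FROM {table} WHERE {key} = ?", (key_value,)
        ).fetchone()
        if migrated is None:
            issues.append(f"{table}: {key}={key_value} missing after migration")
        elif tuple(migrated) != record:
            issues.append(f"{table}: {key}={key_value} differs after migration")

    legacy.close()
    new.close()
    return issues


if __name__ == "__main__":
    for problem in reconcile("legacy.db", "migrated.db", "accounts", "account_id"):
        print(problem)
```

Run against a full copy of production data before cut-over, a check like this turns "the migration looked fine" into something you can actually verify.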

3-      Have a business continuity plan for when things go wrong

Technology fails. This is a fact that we all must deal with from time to time. However, businesses should always be prepared for whatever failure might come their way, whether it's a botched migration job or simply a case of hardware failure. Solid business continuity planning should mean that a business can keep serving its customers even when something goes awry. As with many things in life: always hope for the best, but plan for the worst.
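We can't know what TSB's continuity plan looked like, but the basic shape of the thinking fits in a few lines. Here's a minimal sketch in Python, with invented endpoints standing in for a primary platform and a pre-provisioned standby: prefer the primary, degrade gracefully to the standby, and fail loudly only when both are gone.

```python
import urllib.request

# Hypothetical endpoints: in reality, the primary platform and a
# pre-provisioned standby (a read-only mode, the previous platform, etc.).
PRIMARY = "https://primary.example.com/health"
STANDBY = "https://standby.example.com/health"


def healthy(url: str, timeout: float = 2.0) -> bool:
    """A service counts as healthy if its health endpoint answers 200 in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError:  # covers connection failures and timeouts alike
        return False


def choose_endpoint() -> str:
    """Prefer the primary; fail over to the standby; otherwise fail loudly."""
    if healthy(PRIMARY):
        return PRIMARY
    if healthy(STANDBY):
        # Degraded service beats no service, but alert operations as well.
        return STANDBY
    raise RuntimeError("Primary and standby both down: invoke the continuity plan")
```

The code itself is trivial; the discipline is everything around it. The standby has to exist, be kept current, and the failover rehearsed long before the day it's needed.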

Exmos offers a range of managed services designed to make Enterprise IT as Pain-Free as possible, including Business Continuity and Disaster Recovery Planning. Click here to learn more.

 

Posted by Jordan Maciver on Friday, May 4, 2018
