A comment I hear repeatedly when talking to C-Level personnel is that
‘our IT department costs so much and delivers so little!’
Whether this is a problem for your business in particular is essentially beside the point. It’s a perception that genuinely does exist in many boardrooms, and it’s this perception that can cause issues, regardless of the reality. It is also not a new problem… and things may be about to get a whole lot worse.
Back in the 1990s some of the IT industry’s horrendous failures were laid bare by an excellent and still occasionally quoted report by The Standish Group. The analysis mixes hard-hitting statistics on the percentages of IT project failures with some much-needed dark comedy from real-life people explaining their project failures: my own favourite is Sid, who delivered some insurance software two years late and, presumably unbeknownst to him, twelve months after the product had actually been discontinued.
Over 80% of IT projects fail in large organisations when judged on whether they met their budget, delivery date or expected functionality – and the stats for smaller organisations are not much better.
This fairly staggering conclusion meant that for quite some time – and even now! – the Chaos Report (as it was officially called) was one of the most referenced documents in the software industry. I personally have been pitched by management consultants, software businesses, offshoring companies and process improvement consultancies, all quoting and claiming to address (or even eliminate!) this 80% failure rate. Unfortunately, I didn’t believe them then and I still don’t believe many of them now.
Let’s say up front that many organisations have fixed some of the problems highlighted in the report, but also that many would only pass by using smoke-and-mirrors tactics. For example, having no up-front fixed scope and/or budget in an Agile project could get you a dubious pass, I suppose. That said, practically all the reasons listed in the report – poor or changing requirements, lack of executive sponsorship and so on – remain huge problems for software delivery teams. For the majority, the same old issues really haven’t gone away.
In an attempt to give us some hope, the Chaos Report uses the evolution of bridge building as an example. Apparently, we have got a lot better at building bridges over the last 3,000 years, and we shouldn’t despair too much: software is still very new, and we’re bound to get better at it too… However, they also make the very pertinent point that today we wouldn’t accept an 80% failure rate in bridges – and I suspect no one was ecstatic about a similar failure rate 2,900 years ago either, if the comparison is to be followed through to its conclusion.
So, whilst many of the points they make are good ones, I think the bridge-building comparison is misleading. IT is much more like a science – as I hope to explain below – than it is like civil engineering, and here’s where we have our first big problem.
A member of my family left the UK for the US over 30 years ago because his area of biophysics was so specialised that it became a case of move across the world or give up on his research. He is now recognised as the world’s leading expert in his field. When I last had a drink with him we started talking about quantum mechanics. He explained, very patiently, a lot of stuff that frankly I just didn’t understand and still to this day don’t – not really. He finished off this impromptu lecture by telling me that he didn’t really understand all that much about the subject himself either – and he really meant that.
It occurred to me some time afterwards that the majority of IT engineers and managers I know would rarely confess to a lack of knowledge in an IT field. This isn’t due to arrogance on their part; it’s because they are simply expected to know and therefore, in the absence of another ‘expert’, will offer an opinion. The problem is compounded by the business chucking literally anything vaguely technical over the fence for IT to deal with, but it’s mainly due to not working in a highly peer-reviewed industry, the way traditional science functions. A techie (an ‘IT person’) would often be made to look silly if they were consistently challenged by worldwide peers, but that rarely, if ever, happens in commercial IT.
In short, IT (or Computer Science) may be called a science but it doesn’t seem to have to play by the same rules.
Now, if you will allow me, let’s make this existing problem described above a whole lot worse. The industry vogue at the moment is for open source software and the cloud. Quite rightly, people have realised they would rather pay nothing – or very little – for their software components or their processing and storage needs. The financial savings are undeniable if you know what you are doing. And therein lies another major problem.
The open source market offers something new to try practically every day, but the complexity is enormous. Whatever they may tell you, technophiles love new stuff to play with, and because open source is often free, the temptation is almost irresistible. The choice on offer now is vast and the branches and specialisms available grow every year. Computer Science has become like traditional science – but without the checks and balances.
It’s getting to the point now that, in reality, you are no more likely to have your internal IT department understand the tech you need than ‘mathematician’ Gill in accounts.
Let me offer a visual example.
Here are some of the tools on offer you may want to consider for your next data project…
Where do you start?
How can you possibly be an expert in all of these?
Which tool will be best for you and what will actually help to solve your problems?
How do you glue each piece of tech together so that you have a complete system that meets your needs?
It can be overwhelming!
So, to continue my doom and gloom assessment, I think the problem is only going to get worse.
So, what’s the answer?
Don’t despair! There are some mechanisms and processes that EVERYONE can put in place to make these problems less of an issue.
Here’s a real-world example to ponder: the company I work for is often asked to build cutting-edge software that has never been built before. Very often this involves us using some new tech of which we have no experience whatsoever.
So, why do we win this business over some of our competitors?
Put simply, it’s because we do the basics very well, which gives us a strong foundation upon which to learn and adapt quickly, and we automate the hell out of everything so that those basics never deviate and always work.
This offers a solid base – a platform if you will – that we can build on. When something goes wrong in development, which it nearly always will, we already know what the problem is not, and that helps us enormously in narrowing and hastening our search. As a result, we can demonstrate to prospective customers that we are likely to have a much higher rate of success than most. In effect, with our platform and processes, we have a ‘scientific constant’ so that experimentation and ultimately delivery with the new tech is much more effective and reliable.
So, is it true to say that nowadays IT departments really can’t do everything?
I would suggest that for most businesses we talk to that is almost certainly the case – but I am often speaking to them precisely because they are aware they need help. Obviously, this is a skewed sample, so some sort of self-assessment of your own company might be useful, and the easiest way to do that is to work out what your current scientific constant is.
Here’s a question we ask customers consistently and maybe you should ask it of yourselves:
Are your development/infrastructure, test and production environments exactly the same and if not, why not?
The logic of this good practice is so irrefutable that when it is explained to even a completely non-technical person, they are in disbelief that it sometimes doesn’t happen.
In fact, frighteningly, it rarely happens. The most common answer we encounter is ‘yes, but it involves Sarah writing that script and Sean hand-cracking that network’, and so on. If they were honest with themselves, the answer is no: they haven’t got a scientific constant they can work with. They can carry on doing the same thing and hoping for different results, but most of us know what Einstein supposedly said about that, and their chances of coping with open source, cloud, AI and the rest are therefore practically zero.
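To make the idea of a ‘scientific constant’ a little more concrete, here is a minimal sketch – in Python, using made-up version data rather than any real company’s tooling – of how environment drift between dev, test and production can be detected automatically, instead of relying on Sarah’s script and everyone’s best intentions:

```python
import hashlib
import json

def environment_fingerprint(env: dict) -> str:
    """Return a stable hash of an environment's configuration.

    `env` is a hypothetical mapping of component -> version/setting,
    e.g. as collected from package managers or infrastructure-as-code state.
    Sorting the keys makes the fingerprint independent of insertion order.
    """
    canonical = json.dumps(env, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical snapshots of three environments
dev = {"os": "ubuntu-22.04", "python": "3.11.4", "postgres": "15.3"}
test = {"os": "ubuntu-22.04", "python": "3.11.4", "postgres": "15.3"}
prod = {"os": "ubuntu-22.04", "python": "3.11.4", "postgres": "14.8"}  # drift!

fingerprints = {name: environment_fingerprint(env)
                for name, env in [("dev", dev), ("test", test), ("prod", prod)]}

# If all environments were identical, there would be exactly one fingerprint
if len(set(fingerprints.values())) > 1:
    print("Environment drift detected:", fingerprints)
```

If a check like this runs in your pipeline and fails the build on any mismatch, the question above gets a genuine ‘yes’ – and when something breaks, environment differences are one thing you already know the problem is not.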
As always, I am happy to have a healthy debate about this article’s content, or indeed, offer help if I am able to do so! I can be reached at [email protected]penta.technology