As the cloud has become mainstream, the scope for contentions and aggravations between new cloud-native services and still-viable legacy applications has started to grow rapidly. Indeed, it has all the potential to become another of those exponential growth areas where IT functions and resources become swamped by errors, failures and possible catastrophes of our own making.
The pace of growth in cloud-native apps and services is alarming enough in its own right, and while they are built using well-known languages and tools, there is still plenty of scope for unforeseen problems to be created when they are set to work collaboratively in bigger cloud services. Add in the practical necessity of having the cloud-native apps work with the established legacy business applications that are still the best solutions for the tasks they are required to do, and those problems can multiply.
Add in the existing problems that come with the increasing use of open source apps and services as integral components of new applications (some of which famously became the cause of the security claims made against Huawei in the run-up to the launch of 5G mobile technology), and CIOs, tasked as they are with realising ever-more critical business objectives using ever-more complex IT resources, are increasingly in need of help.
Getting intelligent about the ‘how’ of software
One solution that is gaining traction here is the development of software intelligence: tools that provide a level of analysis of applications such that functional and code errors can be identified before they are found the hard, and expensive, way, and that can identify and report on how applications collaborate with each other. In particular, they can point to those collaborations that are going to cause operational problems or inhibit performance as a business works through the uncharted waters of its own particular journey to transformation and modernization.
Exploiting these capabilities was the primary subject of the keynote presentation at the Software Intelligence Forum in New York, hosted by one of the leading suppliers of the technology, Paris-based CAST. The company’s founder and CEO, Vincent Delaroche, had managed to corral three senior IT managers, from professional services firm Marsh McLennan, from Broadridge Financial Solutions, and from consulting firm Capgemini, to talk through their experiences of the problems unearthed, the practical implementations to address them, and the results so far identified.
Ravi Khokhar is Executive VP & Head of Cloud at Capgemini and, with clients like major global banking and insurance businesses, is almost permanently confronting the high levels of legacy software debt his clients are sitting on. And in the worlds of banking and insurance, stringent compliance and regulatory regimes mean that proven legacy applications can become all but sacrosanct. He also indicated that the tools go far enough in scope to allow a focus on achieving sustainability and carbon neutrality, by making it possible to accurately predict and forecast the impact of the move to the cloud.
Mark Schlesinger, Senior Technical Fellow at Broadridge, has a slightly different problem, that of making sure that the company’s 600 specialist financial applications meet the needs of its clients, and that the clients’ own processes are getting the best out of the services they are using. In essence, Broadridge hosts a client’s business environment, which is a selection from those 600 specialist applications.
This means not only being involved in running the Broadridge business in terms of resiliency, operations, always-on technology, and risk management, but also working directly with clients to manage how the applications selected from the service interact with other existing applications they continue to use. This is particularly relevant for clients operating, or building, hybrid cloud environments. The ability to identify where contentions will occur, together with the nature of those contentions, helps both to speed the process of re-engineering the flashpoints and, especially important in financial services, to reduce or eliminate the risks to the business.
For Marsh McLennan’s Global CIO, Paul Beswick, the need for good intelligence is driven by the situation he found himself in when joining the company – in charge of a new-to-him IT team at the time when separate IT organisations were being merged together. As he observed, there were hundreds of applications: a wide variety of them custom-built, some developed by vendors but supported by the company, and some supported externally.
Inevitably, this meant there were huge amounts of technical debt built up around significant numbers of legacy applications, all of which made it harder for the team to move with the speed and agility required. So the need was to triage the situation in order to focus the team’s effort, and to have a consistent way of looking at the entirety of the custom software portfolio and understanding how to manage it. This included tracking progress in reducing points of obsolescence and paying down technical debt.
Does the CIO need to know?
CAST’s Delaroche asked what it means to be a CIO these days and reported his anecdotal evidence that many have lately said to him that they no longer need to understand the technology, as that is the job of their staff. He disagreed with that view, but wondered what the trio thought. Beswick’s response broadly spoke for the other two – this is very much a people issue, with technology coming in a poor second:
If you were an organization that had a handful of really high quality teams managing a handful of products, you could imagine that’s a defensible position. In reality, I think most big organizations have a wide range of things developed and supported by a wide range of different people. Some of them aren’t people you’ve handpicked. Some you’ve inherited and some come from third parties.
That means the CIO has to have oversight on the quality of their work, and the skillset to understand what that oversight is revealing, he added:
Everyone starts off with good intentions. But new requirements come in from over here and deadline pressure over there. And there are compromises being made all the time. Good teams make good decisions about what those compromises should be, but you don’t know, in a given situation, you have no idea which of those decisions are going to come back to bite you later. A mantra many of my team will have heard, is that every technical decision we make today, we’re going to regret at some point. Hopefully 10 years out, possibly two years out.
Schlesinger reinforced that line of thought, pointing out the attrition in staff that is inevitable. That attrition may be 5% or 25%, but the real issue is that it is hard to integrate new members into any development team without accurate information about what they are working on and the current issues being targeted. Software intelligence can provide that common ground.
Delaroche also asked them to outline their individual processes of moving multiple, fairly complex systems to the cloud. This, Beswick acknowledged, is a major challenge. Key here, he suggested, is identifying the bundles of applications to move and the dependencies most likely to break. There is a real need to understand how those applications are architected within themselves, and the way they interact with each other. This means logging a wide range of data such as CPU usage, average latency, and which ports are used to connect.
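By way of illustration, and not a description of CAST’s own tooling, the kind of raw data Beswick describes can be gathered with commodity open source components. The sketch below uses the Python psutil library to take a simple snapshot of per-process CPU usage and the ports each process is listening on or talking to – the sort of inventory a dependency-mapping exercise would then build upon (reading connection data may require elevated privileges on some systems).

# A minimal sketch, not CAST's tooling: capture per-process CPU usage and the
# ports each process uses, as raw input for dependency mapping before a cloud move.
import json
import psutil

def snapshot():
    inventory = {}
    # Record CPU usage per process (the first cpu_percent call primes the counters).
    for proc in psutil.process_iter(['pid', 'name']):
        try:
            inventory[proc.info['pid']] = {
                'name': proc.info['name'],
                'cpu_percent': proc.cpu_percent(interval=None),
                'listening_ports': [],
                'remote_endpoints': [],
            }
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    # Record which ports each process listens on and which remote hosts it talks to.
    for conn in psutil.net_connections(kind='inet'):
        if conn.pid not in inventory:
            continue
        if conn.status == psutil.CONN_LISTEN:
            inventory[conn.pid]['listening_ports'].append(conn.laddr.port)
        elif conn.raddr:
            inventory[conn.pid]['remote_endpoints'].append(f"{conn.raddr.ip}:{conn.raddr.port}")

    return inventory

if __name__ == '__main__':
    print(json.dumps(snapshot(), indent=2))

Run repeatedly across the servers hosting an application estate, snapshots like this can be collated to show which applications actually talk to each other, and how heavily, before any migration bundles are decided.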
Using CAST for this allows his team to collect and collate this information and identify problem areas. But in a large environment where third party support can be a great help, it then becomes necessary to select carefully, he said:
As we’re thinking about a much more wide-scale move to the cloud, one of the things that’s really important for us is to figure out who the right partners are as part of that journey. We’ve all heard the ‘Don’t worry, give it to us, it’ll save you 20%’ pitch, but someone who can actually come in and get into the detail of what we have, and be able to tell us something much more substantial, and much more grounded, is a useful part of that.
Open source is everywhere, but still a problem
In addition to specialist sessions diving deeper into such issues as how software intelligence tools can make moving legacy workloads, de-risking such processes and exploiting automation in the new environment more effective and reliable, and achievable in far faster timescales, there was one session that touched on an important, but less often talked about, subject.
This was the proliferation of open source code as an integral element of other applications and services, and the dangers that this can now present to business users. Many readers will, of course, recall the problems with open source code Huawei had just prior to the launch of 5G mobile services, when the Americans accused the Chinese company of being a security risk and having defective code.
The defective code issue stemmed from the fact that well-established and widely used open source applets were found to be component parts of other open source sub-systems that were, in turn, part of Huawei-developed applications. Many of those applets no longer had anyone providing support for the code, making them a potential security risk.
Back in November 2019 the open source code repository service, GitHub, grasped this increasingly important nettle and launched a Code Sponsorship Scheme to encourage individuals or companies to take on official support for these orphaned, but still widely used, applets. According to CAST, this has helped the company’s Highlight crawler to probe code repositories such as GitHub more easily, capturing relevant information about every update to every applet held there. This gets added to the Highlight OSS knowledgebase, which now comprises more than 100 million components.
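To illustrate the underlying risk, rather than how Highlight itself works, a short script against the public GitHub REST API is enough to flag dependencies whose repositories look orphaned. The sketch below assumes a hypothetical dependency list; in practice that list would come from a build manifest or software bill of materials.

# A minimal sketch, unrelated to CAST Highlight: query the public GitHub REST API
# and flag dependencies whose repositories look unmaintained (archived, or with no
# pushes for a long period).
from datetime import datetime, timedelta, timezone
import requests

STALE_AFTER = timedelta(days=2 * 365)  # treat two years without a push as a warning sign

def maintenance_status(owner_repo: str) -> str:
    resp = requests.get(f"https://api.github.com/repos/{owner_repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("archived"):
        return "archived - no further maintenance expected"
    last_push = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))
    if datetime.now(timezone.utc) - last_push > STALE_AFTER:
        return f"possibly orphaned - last push {last_push.date()}"
    return f"active - last push {last_push.date()}"

if __name__ == "__main__":
    # Hypothetical dependency list used for illustration only.
    for dep in ["psf/requests", "someorg/legacy-applet"]:
        try:
            print(dep, "->", maintenance_status(dep))
        except requests.HTTPError as err:
            print(dep, "-> lookup failed:", err)

A commercial knowledgebase obviously goes much further, but even a check this simple makes the point: a component that nobody is maintaining is a component nobody is patching.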
Subhadeep Bhattacharjee, Senior VP and Head of Quality Engineering at US bank Northern Trust, talked delegates through some of these issues, observing that about 70% of the code being used now is open source. The upside of this is easy access to code that is flexible and cheap. But there is a downside:
It is extremely important to manage effectively, not only from a financial services standpoint, but also because of the amount of vulnerabilities that are out there if not managed appropriately.
The second problem is the legal issues that you might potentially run into, where there are several versions of open source that you should potentially not use without the right kind of engagement with firms. It all becomes exceedingly complex to manage those risks when you have thousands of applications and different kinds of platforms and different kinds of scenarios that you deal with every day.