There are many perfectly legitimate reasons why even the most accomplished IT professionals find themselves stumped by legacy systems. We’ll run through a number of scenarios in a short while, but for anyone finding themselves faced with a complex legacy system, their first question has to be “what have we here then?”.
That question forms the basis of this, our latest blog – where we focus on the importance of knowing PRECISELY what is contained within that system, what it does, and how it can be better used.
We will take as an example CA Gen, the computer aided software engineering tool, for which we have global expertise – although this analysis can apply to pretty much any legacy system. Four scenarios initially spring to mind.
The following scenarios are typical of those you may find yourself in when faced with a legacy system challenge.
- Because (in our example) CA Gen is the ‘well behaved child’ of an organisation’s IT architecture, it has probably run quite uneventfully for many years without anyone touching it. However, when there is a need to change the models or upgrade to a newer version of CA Gen, it gets analysed in detail, potentially for the first time in years.
- An organisation may be in the process of changing its outsourced IT supplier. Not only does the outgoing supplier need to fully document the legacy system’s functionality (despite perhaps never having had much to do with it), but the new supplier needs to take the time and make the effort to understand it. With many legacy systems, whether CA Gen or something else, there’s a strong chance that the new outsourcer is not a specialist in that product. That’s where reputable external experts with specific product expertise come in.
- It is highly likely that any large company subject to takeover, merger or acquisition will have a number of legacy systems. The new owner could easily find itself with the headache of trying to unravel years and years of tightly coupled code – potentially without a clear starting point, or the in-house expertise to deal with it.
- When undertaking a planned modernisation, or a componentisation exercise, the first part of the exercise is to look at what you’ve got. You can’t go on that modernisation or transformation journey without knowing where to start from. It’s all well and good knowing where you want to be – but your roadmap is useless if you don’t know where you are now.
We’re talking here about IT professionals not knowing their systems in intimate detail – admittedly, a controversial claim in a blog aimed at IT professionals. No offence meant. The point is that, because of the nature of any legacy system, you can’t necessarily be EXPECTED to know these systems intricately. They are, by their nature, not new. They have probably run quietly in the background for years without needing any nurturing. The personnel who installed them may well have moved on, precisely because the systems require so little attention. Skillsets in general will have advanced.
So, this is not about being deliberately ignorant, or anything that any IT professional should be embarrassed about. It is perfectly reasonable not to know what is in this mass of code. What’s needed here is a way of analysing what exists in these many models, how they interact, what issues they might have – and what potential corruptions lie within them.
But what first?
At Jumar, we’ve seen this scenario again and again, and the scale of the task forced us to adopt an automated approach – hence the development of our Model Analyser tool. Model Analyser allows the user to quickly get a grasp of the whole model, identifying characteristics such as naming conventions, code standards and reusability within the code. Additionally (and this is a subject for more in-depth discussion at a later date), it can flag any potential corruptions which have unknowingly found their way in over the years.
Model Analyser pulls out all the information about a model and converts it into an easily understandable, highly detailed format against which reports can be run – a level of visibility hitherto unavailable in the model’s native form. In particular, we found it hugely beneficial to build in functionality to rank logic units on a number of different complexity indicators, which helps to identify where effort needs to be focused.
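To give a flavour of what such a ranking involves – and to be clear, this is an illustrative sketch only, not Model Analyser’s actual implementation – the idea is to combine a few simple per-unit metrics (size, branching, and fan-out, with weights chosen here purely for illustration) into a single score and sort on it:

```python
from dataclasses import dataclass

@dataclass
class LogicUnit:
    name: str
    lines_of_code: int
    decision_points: int   # branches/loops: a rough cyclomatic-complexity proxy
    fan_out: int           # number of other units this one calls

def complexity_score(unit, weights=(0.2, 1.0, 0.5)):
    """Combine several indicators into one comparable score (weights are arbitrary)."""
    w_loc, w_dec, w_fan = weights
    return (w_loc * unit.lines_of_code
            + w_dec * unit.decision_points
            + w_fan * unit.fan_out)

def rank_units(units):
    """Most complex first, so remediation effort can be focused there."""
    return sorted(units, key=complexity_score, reverse=True)

# Hypothetical logic units, named in a CA Gen-like style
units = [
    LogicUnit("CALC_PREMIUM", 420, 35, 12),
    LogicUnit("FORMAT_DATE", 40, 3, 1),
    LogicUnit("VALIDATE_POLICY", 210, 22, 8),
]
for u in rank_units(units):
    print(f"{u.name}: {complexity_score(u):.1f}")
```

In practice the indicators and weights would be tuned to the project’s driving force – a migration might weight fan-out heavily, while a maintenance handover might care most about branching.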
From this point on, you have options. You can begin to make informed choices about what to do with those models. One immediate potential benefit is that it shows where there is redundant code, which can quickly be removed.
Our experience tends to show that analysing a small number of models initially gives a good indication of how to proceed with the project. We can then analyse further models to begin to form a strategy based on whatever is the driving force behind the project. Such analysis can be carried out on- or off-site.
It means you can reduce risk, because you are in a position to support your portfolio if problems should develop – and you can also achieve compliance, by ensuring that all the appropriate documentation exists. This is something we will look at in more detail in next month’s blog.
Model Analyser was created from real-world project needs and experiences – and therefore has a proven track record. There’s more information in our YouTube video and our datasheet. But if you’d like to talk to someone about legacy modernisation, or anything detailed in this blog, our team of experts is always happy to chat – by phone, Skype, email or in person. Simply drop us a line.