Join us at the watercooler. The CA Gen community is here.

The watercooler – the place to be

Picture the scene.  The watercooler in the corner of our CA Gen development centre bubbles away, as four or five of our team members take a break from their development work – coincidentally all at the same time.

The topic turns from Christmas shopping to last night’s “I’m a celebrity…” (for those outside the UK, this is a TV show you’re probably glad you’ve never heard of).

A few more people join this ‘watercooler moment’ and – as you’d expect – the topic of discussion changes again.  One of the team members reveals they have had a ‘eureka moment’ with a CA Gen modernisation project.  The conversation then becomes frenzied and work-related, with all discussions about Christmas shopping forgotten.  This has become a classic ‘watercooler moment’ and shows just how valuable these informal work get-togethers can be.  Everyone gets back to their desk, encouraged by the fact that someone has achieved something above and beyond their normal remit, and given the customer much more than they’d expected.

More than just water. The watercooler is the hub for office banter.

This happens every day in companies across the world – so why are we obsessing about it here? Well, during our most recent ‘watercooler moment’, the conversation inevitably turned to the topic of the CA Gen community. It’s a phrase used widely – and, as you’d expect with such a long-established legacy system, very affectionately. That got us thinking: if we could get the entire CA Gen community around our watercooler, what would everyone talk about? The need for more cups, obviously – but beyond that, it became the subject of a heated debate.

With no consistent agreement on the big subjects, we decided to find out – in the only obvious way. We’d ask.

So, if CA Gen forms part of your remit (no matter how small), we’d like to invite you to our virtual watercooler for a brief chat. (It’ll only take a few minutes, and it involves nothing more than clicking a few buttons – no typing required!)

_______________________________________________________________________

Get involved…

Join us at the watercooler here 

and take our quick three minute survey.

________________________________________________________________________

We’ll publish the results (but not individual replies) in a forthcoming blog – so we’d really appreciate you taking the time to participate.

Thanks in advance for taking a brief break from the grind of daily life.  There’s nothing like a break at a watercooler for helping you to recharge!

If you’d like more information about our CA Gen modernisation services, please contact us.

CA Gen model corruptions: deal with them now before they halt your project

Jeff Johns: Principal Consultant

This month, Jeff Johns tackles errors and corruptions in CA Gen models, and looks at how they can be identified and fixed.

The vast majority of users will know that CA Gen (also known as COOL:Gen) models can contain potentially harmful errors and inconsistencies. We usually refer to these as model corruptions and they are found in the majority of CA Gen encyclopaedias.

It’s not hard to find them (CA Gen includes an option to run an Encyclopaedia Validation Report) but what do you do once you’ve found them? Do you actually need to do anything? Also, not all types of corruption are detected by this report, so how can you be sure you’ve found them all?

Model corruptions: deal with them before they become a problem.

Most organisations, when they find such errors, frankly tend to ignore them. And that might be fine for a while if the model isn’t actively being maintained. But that all changes when those errors start to rear their heads and cause problems during development (“missing mandatory association” anyone?), code generation, deployment or, worst of all, at runtime. Now, they can’t be ignored. In this blog, we look at how these corruptions – which can have potentially expensive and damaging consequences – can be dealt with once and for all.

To begin with, let’s explore how they originate.

At a very low level, CA Gen has an underlying schema which contains all the objects it needs to define an application – data types, logic statements, UI controls, flows, etc. All of these objects are interrelated, and the schema keeps track of these relationships.

Corruptions happen when, for whatever reason (usually through hardware or software failure), the rules that define the schema are broken. For example, there may be an association missing that the schema expects to be there. It can’t therefore link these objects to one another. There may be an Attribute that’s not associated with an Entity Type, or a Logic Statement that’s not associated with an Action Diagram. Or the corruption may be more complex – perhaps the Attribute is associated with TWO Entity Types when the schema requires that it is associated with one and only one.
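The cardinality rules described above can be sketched in miniature. The Python below is purely illustrative – the object types, association names and rules are invented for the example and are not the real CA Gen schema or its API:

```python
# Illustrative sketch only: a hypothetical object/association model, not the
# real CA Gen schema.
from dataclasses import dataclass, field

@dataclass
class SchemaObject:
    obj_id: int
    obj_type: str                                      # e.g. "Attribute"
    associations: dict = field(default_factory=dict)   # assoc name -> target ids

# Hypothetical cardinality rules: (object type, association) -> (min, max)
RULES = {
    ("Attribute", "owning_entity_type"): (1, 1),    # exactly one Entity Type
    ("Logic Statement", "action_diagram"): (1, 1),  # exactly one Action Diagram
}

def find_corruptions(objects):
    """Return (id, association, count) for objects breaking a cardinality rule."""
    problems = []
    for obj in objects:
        for (otype, assoc), (lo, hi) in RULES.items():
            if obj.obj_type != otype:
                continue
            count = len(obj.associations.get(assoc, []))
            if not lo <= count <= hi:
                problems.append((obj.obj_id, assoc, count))
    return problems

# An Attribute with no owning Entity Type, and one linked to two:
orphan = SchemaObject(1, "Attribute")
doubled = SchemaObject(2, "Attribute", {"owning_entity_type": [10, 11]})
ok = SchemaObject(3, "Attribute", {"owning_entity_type": [10]})
print(find_corruptions([orphan, doubled, ok]))
# [(1, 'owning_entity_type', 0), (2, 'owning_entity_type', 2)]
```

The same principle – a declared rule set checked against the actual associations – is what any validation report is doing under the covers.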

The more a model is worked upon, or the more version upgrades it undergoes, the more scope there is for these errors and corruptions to creep in.

Prevention is – in this case – much easier and quicker than cure

In our experience, users tend not to run Encyclopaedia Validation Reports regularly, and most only discover corruptions when they start causing problems with their models. Most CA Gen users adopt the mentality of ‘it doesn’t seem broken – so why fix it?’. It’s simply not a priority in their day-to-day business.

The problem is that these errors, when they do present themselves, always do so at the most inconvenient time – when you are trying to create a new version of an application, whether as part of on-going maintenance or some modernisation or replatforming exercise. Corruptions can stop a project in its tracks, while everyone wonders what to do. This abrupt halt usually results in, at least, a small panic.

This tends to be the point where we are contacted – and there’s usually, not surprisingly, a sense of urgency.

So what can be done?

Whether being carried out before or after the situation becomes critical, the process is largely the same.

The first thing is to look at the nature of the errors. To do this, we have developed our own Schema Tool, which allows us to look at the ‘nuts and bolts’ of the CA Gen model at a very low level. This allows us to see the vast number of objects, and examine all of their properties and associations, just as the Gen toolset does. However, our tool provides the deepest level of information, presented in a meaningful and structured way, allowing us to capture the inter-relationships within a model and between models.

Jumar’s Schema tool is used to examine the contents of a Gen model at the lowest level, allowing the user to navigate around the model’s objects and display their properties and associations.

We can therefore see where things are missing, and where there are invalid properties or associations that shouldn’t be there.

It is very common to discover that there are associations missing – and we are left with orphaned objects. Usually these can simply be deleted using the Schema tool. Alternatively, if we decide that an orphaned object should be retained and we can find out which other objects it should be related to, we can reinstate those relationships.
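The orphan-detection idea can be illustrated with a toy object graph. The ids and edges here are hypothetical stand-ins – the real Schema tool works against the Gen model itself:

```python
def find_orphans(objects, edges):
    """objects: set of object ids; edges: (parent, child) association pairs.
    An orphan is a non-root object that no other object points to."""
    referenced = {child for _, child in edges}
    roots = {parent for parent, _ in edges} - referenced  # top-level objects
    return sorted(objects - referenced - roots)

# Objects 4 and 5 are attached to nothing – candidates for deletion, or for
# having their lost associations reinstated if we know where they belonged.
objects = {1, 2, 3, 4, 5}
edges = [(1, 2), (2, 3)]
print(find_orphans(objects, edges))   # [4, 5]
```

In practice the decision per orphan is exactly as described above: delete it, or reconnect it to the objects it should be related to.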

If there are a large number of corruptions in a model, we will usually want to apply fixes in bulk, and we have created a dedicated automation tool for this purpose. Additionally, it’s not unusual for us to write one-off pieces of automation to fix unusual or non-standard problems. In either case, we find a pattern within a group of the errors, and let the automation carry out a consistent process of correcting them.
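That pattern-then-bulk-fix process might look something like the sketch below; the error records, pattern keys and fixer functions are all hypothetical stand-ins for what a real tool would use:

```python
from collections import defaultdict

def group_by_pattern(errors):
    """Group validation errors by (object type, error kind) so one scripted
    fix can be applied consistently across each group."""
    groups = defaultdict(list)
    for err in errors:
        groups[(err["obj_type"], err["kind"])].append(err)
    return groups

def bulk_fix(errors, fixers):
    """Apply the registered fixer to each group; anything without a fixer
    is left for manual review."""
    fixed, skipped = [], []
    for pattern, group in group_by_pattern(errors).items():
        fix = fixers.get(pattern)
        for err in group:
            (fixed if fix and fix(err) else skipped).append(err["obj_id"])
    return sorted(fixed), sorted(skipped)

# Hypothetical fixer: delete orphaned attributes; leave anything else alone.
fixers = {("Attribute", "orphan"): lambda err: True}
errors = [
    {"obj_id": 1, "obj_type": "Attribute", "kind": "orphan"},
    {"obj_id": 2, "obj_type": "Attribute", "kind": "orphan"},
    {"obj_id": 3, "obj_type": "Action Diagram", "kind": "missing_assoc"},
]
print(bulk_fix(errors, fixers))   # ([1, 2], [3])
```

The key design point is that each fixer encodes one pattern once, so every matching error is corrected identically – no manual drift.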

It’s also worth asking yourself at this point, ‘am I sure that I have detected all the errors?’ As previously mentioned, the validation report doesn’t necessarily detect all types of corruption. It’s quite possible to have invalid scenarios in models which do not actually break the rules of the schema. Because of this, we have created additional reporting tools to check for some of these other types of corruption.

Identifying, and then removing the corruptions makes for a smoother project.

Looking at the bigger picture, when we carry out any type of modernisation or upgrade activity, we always strongly advise that an error correction process is carried out at the start. It makes sense to fix these errors at the outset to prevent potential project hold-ups – and because, if we’re using automation to modernise a system, we don’t want that automation operating on invalid source information. So we run the validation reports, fix the errors that in our experience actually have an impact (there are some harmless ones that we may leave alone), and then run the validation reports again to satisfy ourselves and the client that the errors we fixed have really gone.

We’re very proud of this capability because there are very few people who can do this. Even the most sophisticated CA Gen users tend not to work at this low level. They’re used to using the toolset and the functionality it offers – whereas we’re 100% familiar with the API and schema where problems like this can be identified and remediated.

Why not try running an Encyclopaedia Validation report on one of your key models and see what you get? You might be surprised. If you’d like to talk to us about the results – or any other aspect of your CA Gen portfolio – please contact us.

Why are you suspicious about automation software?

Andy Scott, Client Services Director, Jumar Solutions Limited

It’s all about speed. Automation can save considerable time and money. There’s no need to be suspicious.

The simple answer to that question is: you have good reason to be. The phrase ‘too good to be true’ springs to mind when you’re told that a complex CA Gen transformation project could be carried out in a fraction of the time required by traditional development methods.

The scepticism tends to manifest itself in two ways: suspicion over whether using automation is viable – and suspicion over whether it is genuinely possible.

In this blog, we’ll attempt to address those questions – but if you want the short answers, they are, respectively, ‘probably’ and ‘yes’.

Viability

With any complex modernisation or transformation task (be it CA Gen or another technology) there is a tendency to think that your particular circumstances are so unique and complex that no automation software could possibly produce the desired results. Over many years we’ve seen dozens of such environments, and can confidently say that there is no such thing as a ‘standard’ set of circumstances; we have, however, seen a diverse range of implementation styles and ‘standards’. Systems have typically evolved over time, they are often business critical, and they may follow standards to greater or lesser degrees. Despite this, highly tailored automation is a perfectly viable option for modernising and/or re-architecting legacy and ‘tightly-coupled’ systems. The best way to ascertain whether this is the case is to look at three basic questions during the preparation / planning phase for the project:

  • What does the system look like now?
  • What do you want it to look like in the future?
  • What steps are necessary to achieve the desired outcome?

Scenarios where an organisation can create a well-defined set of required transitions, based on a full understanding of the existing ‘as is’ configuration, best lend themselves to the application of tailored automation. But that’s not to say that more complex scenarios can’t benefit too. More on that later.

Tightly coupled system? Automation could still save you considerable cost.

Scale is also an important factor to consider here. Much of the investment in tailored automation is made upfront – and if there are, say, thousands of in-scope objects to be processed, then that upfront investment is effectively distributed across that large number of objects. A risk mitigation strategy that approaches the project in a logical, phased manner (which is where much of our expertise lies), together with the resulting economies of scale, should help to allay any fears over the viability of using automation in these circumstances. When projects are done at such a large scale, we are effectively industrialising the process, supporting the required transitions via automation. This provides greater predictability and higher quality, and it is cost-effective for the client, because the time and cost to process each in-scope object reduce dramatically.
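The economics of that upfront investment are easy to sketch. All figures below are hypothetical, for illustration only – they are not real project costs:

```python
def cost_per_object(upfront, marginal, n_objects):
    """Total cost per object once a fixed upfront automation investment is
    spread across n objects."""
    return upfront / n_objects + marginal

# Hypothetical numbers for illustration only.
upfront = 50_000    # build and tailor the automation
auto_marginal = 2   # near-zero cost to process each object automatically
manual_cost = 150   # cost to transform one object by hand

for n in (100, 1_000, 10_000):
    print(n, round(cost_per_object(upfront, auto_marginal, n), 2))
# 100 502.0    -> automation loses at small scale
# 1000 52.0    -> automation wins vs 150 per object manually
# 10000 7.0    -> dramatic saving
```

The shape of the curve is the point: the per-object cost of automation falls towards its marginal cost as scope grows, while manual cost per object stays flat.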

We’ve touched on this before, but understanding what you currently have is vital to assessing the feasibility of automation. In our experience, only a very small number of organisations know EXACTLY how their system is set up (and how it has evolved over the years since original development). (It’s nothing to be embarrassed about – read our previous blog for the reasons why.) Even on occasions where an organisation DOES know exactly what it has, it is still important to carry out a comprehensive model analysis exercise to confirm and validate the ‘as is’ situation, ensuring that the automation, when tailored and executed, absolutely delivers the desired result.

Jumar’s Model Analyser software quickly ascertains the ‘as is’ situation.

The target ‘to be’ scenario also needs to be thoroughly defined, in order to determine which transition steps are candidates for automation. It’s not unusual for organisations to have defined the target architecture and supporting technologies but, without specific knowledge of the capabilities of automation software, there is a danger that opportunities are overlooked, or dismissed because they are not considered viable when implemented using traditional techniques. Our approach encourages clients to stop and take a methodical approach to the process. Only then can the viability of automation be considered in earnest and a cost-benefit justification prepared. As a general rule, though, in our experience the vast majority of large-scope cases are viable.

Possibility (scepticism)

Now, we look at the second major area of scepticism. Is it really automated, or simply outsourced to a low-cost high-capacity development shop perhaps located on a different continent?

Jumar’s Project Phoenix automation software

We can’t speak for other organisations that offer automation solutions but, at Jumar, our automation software does exactly what it says on the tin (read more about our automation tools and Project Phoenix), and we can prove it with strong customer references from complex “real world” CA Gen projects. All automation actions are recorded and documented in a comprehensive execution log report – where the speed of completion of each action is so fast, and the number of objects processed so large, that it is simply not possible for the work to have been done manually. It is only when looking at such a report that the magnitude of the savings made by using automated methods really becomes clear. Imagine how long it would take (and how much it would cost) to do it all manually, or to introduce a change at a late stage in the process – not to mention the very real risk of introducing manual errors and inconsistencies. Highly tailored automation is fast, consistent, predictable and reliable. It operates across the entire implementation as specified, and can subsequently be adapted and re-executed as required.

Degrees of automation

One final thing before we draw to a close on the subject of automation – ‘degrees of automation’ may need to be considered when defining the transformation roadmap and the approach to be adopted. There have been occasions where we have faced a particularly complex situation in which a degree of pragmatism comes into play – for example, when supporting a customer migrating from a very tightly coupled implementation to a new, functionally isolated “n-tier” architecture. On investigation, separating the application into its constituent tiers may show that, in practice, even with automation software, it is only viable (and cost-effective) to automate most – rather than all – of the task in hand. In a recent case study, the tailored automation software effortlessly dealt with 80% of the required transformations, and it was decided that the remaining 20% would be tackled manually. The net effect was still a considerable time and cost saving over 100% manual work. Where automation is not viable and cost-effective, we would never recommend it.
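A rough worked example of such an 80/20 split (all figures hypothetical, chosen only to show the arithmetic):

```python
def blended_cost(n, manual_each, upfront, auto_each, auto_fraction):
    """Cost of automating a fraction of n transformations and doing the rest
    by hand, versus doing everything manually."""
    automated = int(n * auto_fraction)
    mixed = upfront + automated * auto_each + (n - automated) * manual_each
    all_manual = n * manual_each
    return mixed, all_manual

# Hypothetical figures echoing the 80/20 split described above.
mixed, manual = blended_cost(n=5_000, manual_each=150, upfront=50_000,
                             auto_each=2, auto_fraction=0.8)
print(mixed, manual)   # 208000 750000 -> ~72% saving despite 20% manual work
```

Even with a substantial manual remainder, the blended cost can still come in far below a fully manual approach – which is exactly the pragmatic trade-off described above.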

Andy Scott

If you have a CA Gen – or a related legacy technology – transformation project (planned or underway), I’d be more than happy to discuss the benefits that automation could bring to your organisation.

Just drop me a line and we can talk some more.

I’m more than happy to attempt to lay any suspicions to rest once and for all!

Legacy modernisation: You are here. But where is here?

By Jeroen Wolff, Jumar Solutions

There are many perfectly legitimate reasons why even the most accomplished IT professionals find themselves stumped by legacy systems.  We’ll run through a number of scenarios in a short while, but for anyone finding themselves faced with a complex legacy system, their first question has to be “what have we here then?”.

That question forms the basis of this, our latest blog – where we focus on the importance of knowing PRECISELY what is contained within that system, what it does, and how it can be better used.

We will take as an example CA Gen, the computer aided software engineering tool, for which we have global expertise – although this analysis can apply to pretty much any legacy system.  Four scenarios initially spring to mind.

Typical scenarios

The following scenarios are typical of those you may find yourself in when faced with a legacy system challenge.

  1. Because (in our example) CA Gen is the ‘well behaved child’ of an organisation’s IT architecture, it has probably run quite uneventfully for many years without anyone touching it.  However, when there is a need to change the models or upgrade to a newer version of CA Gen, it gets analysed in detail, potentially for the first time in years.
  2. An organisation may be in the process of changing its outsourced IT supplier.  Not only does the outgoing supplier need to fully document the legacy system’s functionality (despite maybe never having had much to do with it), but the new supplier needs to take the time and make the effort to understand it.  With many legacy systems, whether CA Gen or something else, there’s a strong chance that the new outsourcer is not a specialist in that product.  That’s where reputable external specialists come in.
  3. It is highly likely that any large company subject to takeover, merger or acquisition will have a number of legacy systems.  The new owner could easily find itself with the headache of trying to unravel years and years of tightly coupled code – potentially without a clear starting point – or the in-house expertise to deal with it.
  4. When undertaking a planned modernisation, or a componentisation exercise, the first part of the exercise is to look at what you’ve got.  You can’t go on that modernisation or transformation journey without knowing where to start from.  It’s all well and good knowing where you want to be – but your roadmap is useless if you don’t know where you are now.

We’re talking here about IT professionals not knowing their systems in intimate detail – admittedly a controversial claim in a blog aimed at IT professionals.  No offence meant.  The point is that, because of the nature of any legacy system, you can’t necessarily be EXPECTED to know these systems intricately.  They are, by their nature, not new.  They have probably run quietly in the background for years without needing any nurturing.  The personnel who installed them may have moved on, as the systems require so little attention.  Skillsets in general will have advanced.

So, this is not about being deliberately ignorant, or anything that any IT professional should be embarrassed about.  It is perfectly reasonable not to know what is in this mass of code.  What’s needed is a way of analysing what exists in these many models, how they interact, what issues they might have – and what potential corruptions lie within them.

But what first?

The difficulty comes with the question “where do you start?”.  The task is so unfeasibly complex that it almost has to be automated.

At Jumar, we’ve seen this scenario again and again, and the scale of the task forced us to adopt an automated approach – hence the development of our Model Analyser tool.  Model Analyser allows the user to quickly get a grasp of a whole model – identifying characteristics such as naming conventions, code quality and reusability within the code.  It also identifies (and this is a subject for more in-depth discussion at a later date) whether any potential corruptions have unknowingly found their way into the models over the years.

Model Analyser pulls out all the information about the model and converts it into an easily understandable, highly detailed form against which reports can be run – a level of visibility unavailable in the model’s native format.  In particular, we found it hugely beneficial to build in the ability to rank logic units on a number of different complexity indicators, which helps to identify where effort needs to be focused.
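Ranking logic units by weighted complexity indicators can be sketched as follows. The indicator names and weights here are invented for illustration – they are not Model Analyser’s actual metrics:

```python
def rank_logic_units(units, weights):
    """Rank logic units by a weighted sum of complexity indicators, so effort
    can be focused on the hardest ones first."""
    def score(u):
        return sum(weights[k] * u.get(k, 0) for k in weights)
    return sorted(units, key=score, reverse=True)

# Hypothetical indicators and weights for illustration.
weights = {"statements": 1, "flows": 5, "external_calls": 10}
units = [
    {"name": "CALC_PREMIUM", "statements": 400, "flows": 2, "external_calls": 0},
    {"name": "POST_LEDGER",  "statements": 120, "flows": 8, "external_calls": 6},
    {"name": "FMT_DATE",     "statements": 30,  "flows": 0, "external_calls": 0},
]
for u in rank_logic_units(units, weights):
    print(u["name"])
# CALC_PREMIUM, POST_LEDGER, FMT_DATE  (hardest first)
```

The value of a weighted score over any single metric is that a unit can be hard for different reasons – sheer size, heavy flow coupling, or external dependencies – and the ranking surfaces all of them.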

Watch our Model Analyser video here

From this point on, you have options.  You can begin to make informed choices about what to do with those models.  One immediate potential benefit is that it shows where there is redundant code, which can quickly be removed.

Our experience tends to show that analysing a small number of models initially gives a good indication of how to proceed with the project.  We can then analyse further models, to begin to form a strategy based upon whatever is the driving force behind the project.  Such analysis can be carried out on- or off-site.

It means you can reduce risk – the analysis lets you support your portfolio if problems should develop – and achieve compliance, by ensuring that all the appropriate documentation exists.  This is something we will look at in more detail in next month’s blog.

More information

Model Analyser was created from real-world project needs and experiences – and therefore has a proven track record.  There’s more information in our YouTube video and our datasheet.  But, if you’d like to talk to someone about legacy modernisation or anything detailed in this blog, our team of experts is always happy to chat – by phone, Skype, email or in person. Simply drop us a line.