If You Make It, Don't Forsake It

When you build something, you want it to stand the test of time, right? In this post, I'll examine why that often doesn't happen, in the context of software development.


Years and years ago, a (really nice) guy I used to work for described how attribution, accountability (and suing!) work in the world of building houses.

Surveyors identify land ripe for planting properties on, architects come up with blueprints and great ideas, and those then get handed to developers and builders, who put up the houses. In the middle of all of this is likely a local authority planning operation, who take some money, spend some time considering the application and then approve or deny it. Let's assume they approve, cos if the money's right...

The baton then gets handed to someone responsible for filling those houses with people, who in turn make them into homes.

That's where the magic happens: singles become couples, couples become families and so on.

So, as a home maker, a couple or a family, let's assume you saw an advert, drove by a show home or whatever, but ultimately decided to pay tens, if not hundreds of thousands of local currency to go and live in one. You want to pick up the keys, shift your gear in and have nothing at all to worry about until you decide to move on, or you die (in which case you no longer have an opinion). If you're living, though, you expect to be able to trust in your investment.

But what if someone in the 'collaboration' above got something wrong, leaving your home inaccessible because the developers never built a reasonable quality road to it, or worse still, the surveyors missed the fact that the water table is a foot above your front door?

You're left with a day in, day out nightmare, or at least one that is in the making.

In technology, this is no different. And here's why...

In the beginning

In the same way that a housing development has a lengthy list of cast and crew, so does delivering an information technology solution. In this post, I'll use a customer portal web application as the example.

To begin with, someone has an idea. More often than not, it's someone out in 'the business', otherwise known as the Customer:

"So, we have a bunch of new products and services, guys. We need some sort of self-service 'portal' thing, so they can buy them and make changes to them".

And that's where it starts - a vague, shapeless notion of where the Customer wants to get to, insofar as their imagination can carry it.

Who you gonna call?

Well, in most cases it'll be your friendly neighbourhood software development team / department. They do this kind of stuff in their sleep and of course they never seem all that busy...

With "What is it they actually need?" being the first response from the development folks, the idea is then passed on to someone else to care about, usually Business Analysts. Because the idea is no more than that, the BA's primary objective is to take it and break it down into specific requirements: what's in scope and what's not. Basically, they turn that vague, shapeless notion into something reasonably palatable to those coding types. This process is a little like a pendulum, with the requirements passing back and forth between the BAs and the Customer until such time as they make sense.

Eventually, the devs have something they can work with, so they start the uncomfortable process of 'sizing' the work. Sizing is effectively estimating the effort, and ultimately the time, needed to deliver stuff, which to this day makes many of them feel like they're taking some kind of oath that, if broken, will see them marched into the forest, never to be seen again. This is a symptom of blame culture in companies, but even in firms that are quite relaxed about things being late, it's a hard thing to shake off. Anyway...

You have your requirements nailed down and your developers feeling reasonably happy about what's expected of them, so it's time to engage some fresh actors.

Design time

Architects start to chime in.

"How's all this software gonna hang together?" "What existing software is it gonna talk to?" "Who's gonna use it?" Just a small selection of questions that start being asked during this phase of the development. More questions emerge, (sometimes) including:

"What are the security considerations / requirements?"

"Who invited you?"

OK, so that's not necessarily a fair reflection of the scenario everywhere, but it is historically a real thing and does still happen today. Architects care about the practicality and elegance of the design, in the same way BAs care about the requirements being nailed down, and the devs care about whizzing out their product in an efficient and effective manner. None of these actors necessarily puts security front and centre of their thoughts. This isn't a great place to be, and we'll explore why later on.

If no one's thinking about security upfront, then no one's baking it into the architecture or design, so it isn't in the development either, and ultimately the product potentially goes out into the world with unlocked doors, open windows, leaky pipes and so on.

Build time

The devs now have their user stories, all feature driven and mostly based on functional requirements, and off they go. Breaking stories down, sizing, referencing some bloke called Fibonacci and being generally 'agile'. The non-functional requirements are taken into the mix of course, but they aren't always security focussed. Why would they be? No one at design time has laid down those challenges or poked in those considerations / requirements.
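To make that concrete, a security-focussed non-functional requirement can be turned into something the devs can actually build and test against. Here's a minimal sketch in Python; the particular header set and the `missing_security_headers` helper are illustrative assumptions, not a real framework's API:

```python
# Minimal sketch: turning a security NFR ("every response must carry
# baseline security headers") into an automated, testable check.
# The required header set below is an illustrative assumption.

REQUIRED_SECURITY_HEADERS = {
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return the baseline security headers absent from a response."""
    present = {name.title() for name in response_headers}
    return {h for h in REQUIRED_SECURITY_HEADERS if h.title() not in present}

# Example: a response that forgot HSTS fails the check.
headers = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
}
print(missing_security_headers(headers))  # {'Strict-Transport-Security'}
```

The point isn't the specific headers; it's that once a security requirement is written down like any other acceptance criterion, it gets sized, built and tested along with the features instead of being bolted on later.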

Off we go, building things.

Time passes...

Thorin sits down and starts singing about gold.

And then eventually some software emerges - a web interface and some APIs, maybe. Meanwhile, the ops teams have provisioned servers, databases have been created and holes have been poked in firewalls. It's all good and our system is tested and functioning nicely. Acceptance testing has passed and we have the green light to launch our shiny new portal.

And so we launch. Much celebration and all round backslapping ensues.


Well, since security wasn't considered during the design phase, it almost certainly wasn't considered and baked in during the build phase either. On that basis, it should come as no surprise that the system is, at best, going out of the door without any vulnerabilities purely by chance or, at worst, going out of the door riddled with them. Or somewhere in between.

There is a good chance that once a system exists, it appears on a 'list' somewhere, or perhaps it's drawn to other people's attention because of a launch email or something like that. If so, then that's a saving grace, because the recipients of that information might be security people.

It's almost like a zero-day scenario: a system has been put into production on a platform without any security consideration, so it's a race against time to retrospectively assess it for vulnerabilities before someone you'd rather hadn't... has. And then fix them.

What if you don't find and fix the problems before someone's exploited them? Well that's kinda too bad, but there are a bunch of serious questions to be asked about how the problems could possibly exist in the first place.

And so, the analogy of a building project comes into play. It can't have been the developers' fault, because no one discussed security with them; the architects are all about the practicality and elegance of the design; and the 'customer' doesn't take security into consideration, because they know nothing of its importance.

And so on.

The reality is that it's everyone's responsibility to ensure that security is baked into the endeavour, from day one.

Once the system is out there, it's a risk to the enterprise. Even if security had been considered throughout the development process, there's no guarantee that things haven't been missed, or that (as is quite common) components have been brought in during development that are outside the control of the developers / operators and fall into old age, and thus ill-health, really quickly.
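Keeping an eye on those ageing components doesn't have to be complicated. Here's a minimal sketch; the component names, end-of-life dates and the `out_of_support` helper are all made up for illustration; in practice this data would come from a real advisory feed or a software bill of materials:

```python
# Minimal sketch: flagging components that have aged past support.
# The inventory and end-of-life dates below are hypothetical.
from datetime import date

components = {
    "legacy-auth-lib": date(2020, 6, 30),  # hypothetical end-of-life date
    "payments-sdk": date(2030, 1, 1),
}

def out_of_support(inventory: dict, today: date) -> list:
    """Return the components whose end-of-life date has passed."""
    return sorted(name for name, eol in inventory.items() if eol < today)

print(out_of_support(components, date(2024, 1, 1)))  # ['legacy-auth-lib']
```

Run something like this regularly (in a build pipeline, say) and a component quietly falling out of support becomes a visible, actionable event rather than a surprise.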


The Colosseum metaphor is deliberate. When the ancient Romans built the structure, it was to last for the entirety of their empire, which was forever. The problem was that things changed; the Huns came and wrecked their plans, and when Christianity rocked up, it had no interest in these ancient places or monuments. It was busy building places and monuments to represent its needs of the time (and most of those are, incidentally, still very shiny indeed). The old things were neglected and fell into old age and ill-health.

The same applies to software and supporting systems. If you build them with longevity in mind, then you take into consideration all the quality attributes needed to make that a reality, which include security.

Once built, if you care and look after them, they will stand the test of time, or at least stay secure until the 'customer' has a fresh idea of what it needs.

For reference, I've written about getting the application security basics right and why patching is non-negotiable.

I've also written about using components with known vulnerabilities, as this is relevant.

If you're a...

Scrum Master

Speak to your information security people before you do anything else. We're not there to red flag stuff. We're there to help ensure your products are as high quality as they can be.

Security is a cultural challenge, and this has been, and still is being, talked about plenty. Worth a watch (by everyone) is this course by Troy Hunt, who explains the issue and provides useful methods for overcoming it.

Thanks for reading. :)