
Addressing vulnerability management

Anyone working in the information and infrastructure security space will be more than familiar with the non-stop evolution that is vulnerability management. Seemingly on a daily basis, we see new attacks emerging, and old mechanisms you thought were well and truly dead resurface with “Frankenstein”-like capabilities, rendering the defences you designed to combat that particular threat either inefficient or, in some cases, completely ineffective. All too often, previous campaigns resurface with newer destructive capabilities designed to extort victims both financially and through blackmail.

It’s the function of the “Blue Team” to (in many cases) work around the clock to patch security vulnerabilities identified in a system, and to ensure that the technology landscape and estate is as healthy as is feasibly possible. On the flip side, it’s the function of the “Red Team” to identify hidden vulnerabilities in your systems and associated networks, and to provide assistance with remediating the identified threats in a controlled manner.

Depending on your requirements, the industry-accepted minimum testing frequency from the “Red Team” perspective is once per year, and typically covers the traditional “perimeter” (externally facing devices such as firewalls, routers, etc.), websites, public-facing applications, and anything else exposed to the internet. Whilst this satisfies the “tick in the box” requirement on infrastructure that traditionally never changes, is it really sufficient in today’s ever-changing environments? The answer here is no.

With the arrival of flexible computing, virtual data centres, SaaS, IaaS, IoT, and literally every other acronym relating to technology comes a new level of risk. The evolution of system and application capabilities means that these very systems are in most cases self-learning (and, for networks, self-healing). Application algorithms, Machine Learning, and Artificial Intelligence can all introduce an unintended vulnerability at any point in the development lifecycle; therefore, failing to test, address, and validate the security of any new public-facing application or modern infrastructure is a breach waiting to happen. For those “in the industry”, how many times have you been met with this very scenario?

“Blue Team: We fixed the vulnerabilities that the Red Team said they’d found…”

“Red Team: We found the vulnerabilities that the Blue Team said they’d fixed…”

Does this sound familiar?

What I’m alluding to here is that security isn’t “fire and forget”. It’s a multi-faceted, complex process of evolution that, very much like the earth itself, is constantly spinning. Vulnerabilities evolve at an alarming rate, and your security program needs to evolve with them rather than simply “stopping” for even a short period of time. It’s surprising (and in all honesty, worrying) how many businesses do not currently perform an internal vulnerability assessment - and worse, have no plans to. You’ll notice I do not refer to this as a penetration test: you can’t “penetrate” something you are already sitting inside. The purpose of this exercise is to engage a third-party vendor (subject to the usual Non-Disclosure Agreement process) for a couple of days, let them sit directly inside your network, and see what they can discover. Topology maps and subnets help, but in reality this is a discovery “mission”, and it’s up to the tester how they handle the exercise.

The important component here is scope, and there are always boundaries. For example, I typically prefer a proof of concept rather than a tester blundering in with a “capture the flag” approach that could cause significant disruption or damage to existing processes - particularly in-house development. It’s vital that you “set the tone” at the beginning of the engagement: what is acceptable, and what you expect to gain from the exercise. Essentially, the mantra here is that the evolution wheel never stops - it’s why security personnel are always busy, and CISOs never sleep 🙂

These days, a pragmatic approach is essential in order to manage a security framework properly. Gone are the days of annual testing alone, of being dismissive of “low level” threats without fully understanding their capabilities, and of brushing identified vulnerabilities “under the carpet”. Annual testing still holds significant value, but only if undertaken by an independent body, such as one accredited by CREST.

You can reduce your exposure to risk by creating your own security framework and adopting a frequent vulnerability scanning schedule with self-remediation. Not only does this lower the risk to your overall environment, it also reassures clients and vendors who conduct frequent assessments as part of their Due Diligence programs that you take security seriously. Identifying vulnerabilities is one thing; remediating them is another. You essentially need to “find a balance” in terms of deciding which comes first. The obvious route is to target critical, high, and medium risk, whilst leaving the “low risk” items behind, or on the “back burner”.
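As a purely illustrative sketch of that triage step (the finding structure, severity bands, and scan data below are assumptions for demonstration, not a reference to any particular scanner), each scheduled scan could feed something as simple as this:

```python
# Illustrative triage sketch only - substitute the export format of whichever
# scanner you actually run on your schedule.
from dataclasses import dataclass


@dataclass
class Finding:
    host: str
    title: str
    cvss: float  # CVSS base score as reported by the scanner


def severity(score: float) -> str:
    """Map a CVSS v3 base score onto the familiar severity bands."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"


def remediation_queue(findings: list[Finding]) -> list[Finding]:
    """Order findings for remediation, highest scores first - low-risk items
    stay in the queue rather than being dropped entirely."""
    return sorted(findings, key=lambda f: f.cvss, reverse=True)


if __name__ == "__main__":
    # Hypothetical scan output - in practice, parse this from your scanner's
    # CSV/JSON export on every scheduled run.
    scan = [
        Finding("web01", "Outdated TLS configuration", 5.3),
        Finding("web01", "Verbose error messages", 3.1),
        Finding("db02", "Unpatched database engine", 9.8),
        Finding("file01", "SMB signing not enforced", 6.5),
    ]
    for f in remediation_queue(scan):
        print(f"[{severity(f.cvss):>8}] {f.host}: {f.title}")
```

The ordering deliberately keeps the low-scoring findings visible at the bottom of the queue rather than discarding them, which matters for the point that follows.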

The issue with this approach is that it’s perfectly possible to chain together multiple vulnerabilities that on their own would be classed as low risk, and end up with something much more sinister when combined. This is why it’s important to address even low-risk vulnerabilities, and to understand how easily they could be exploited inside your environment. In reality, no Red Team member can tell you exactly how a threat would pan out if a way to exploit it silently existed in your environment without a proof of concept exercise - that, and the “perfect storm” of conditions sometimes needed to make such an exploit possible in a production environment.
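Continuing the hypothetical sketch above (the grouping rule and threshold are again assumptions, not a formal scoring method), one simple way to stop chained low-risk findings slipping through is to flag hosts where several of them co-exist:

```python
# Builds on the Finding and severity() definitions from the earlier sketch.
from collections import defaultdict


def chained_low_risk_hosts(findings: list[Finding], threshold: int = 3) -> list[str]:
    """Flag hosts carrying several 'low' findings at once - individually minor,
    but potentially chainable into something far more serious."""
    low_counts: dict[str, int] = defaultdict(int)
    for f in findings:
        if severity(f.cvss) == "low":
            low_counts[f.host] += 1
    return [host for host, count in low_counts.items() if count >= threshold]
```

Anything this flags is simply a prompt for a proof of concept exercise with your tester, not evidence of an exploitable chain in itself.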

Vulnerability assessments rely on attitude to risk at their core. If the appetite to remediate a genuinely high-risk threat is low, then there needs to be a responsible person capable of making the argument for that particular threat to sit at the top of the remediation list. This is often the challenge: board members will accept a level of risk because the remediation itself may impact a particular process or interfere with a particular development cycle - mainly because they do not understand the implications of accepting weakened security in exchange for desired functionality.

For any security program to be complete (as far as is possible), it also needs to consider the fundamental weakest link in any organisation - the user. Whilst this sounds harsh, the statement below is always true:

“A malicious actor can send 1,000 emails to random users, but only needs one to actually click a link to gain a foothold into an organisation”

For this reason, any internal vulnerability assessment program should also factor in social engineering: phishing simulations, vishing, eavesdropping (water cooler / kitchen chat), unattended documents left on copiers, and USB thumb drives dropped in reception or other “public” (in the sense of the firm) areas.

There’s a lot more to this topic than this article alone can sanely cover. After several years’ experience in Information and Infrastructure Security, I’ve seen my fair share of inadequate processes and programs, and it’s time we addressed the elephant in the room.

Got you thinking about your own security program?

