Why do I need a pentest or vulnerability assessment?


    If you are new to the security industry, then penetration testing may be one of those topics you are starting to look at in more detail. If you’ve been in the industry for a while, then you will already know (I hope) the importance of this particular exercise. Penetration testing and ongoing vulnerability assessments are essential given today’s risk of cyber crime and data breaches, and most clients performing due diligence will probably ask when your most recent test was conducted. If you’ve never completed a test and your client asks the question, don’t be surprised if you detect a raised eyebrow, or even a loss of confidence - and ultimately a loss of business as a result.

    The mitigating factors here are risk and gap analysis. If you have multiple potential entry points into your network, be they intended or unintended, a vulnerability scan should be completed at least once a year (my personal preference is once a quarter, but this depends on budget) to verify the security of endpoints exposed to the public. Such endpoints are typically firewalls, routers, and essentially anything hosting an externally accessible service.

    What does a penetration test involve?

    The main concept is the word “penetration”, meaning to enter. Be it lawful or illegal, the principle remains the same - if a device can be accessed, it needs to be tested. Most public-facing services are designed to be entered, but what if the entity utilising such access decides to violate the intended purpose? A hacker could circumvent your controls and gain access to other areas, or leverage a flaw in the system that negates the intended service. Such a compromise can either provide access to personally identifiable data, or make the affected system vulnerable to another form of attack. Before any testing is performed, the penetration testing company will need to sign an NDA (Non-Disclosure Agreement). This protects both their client, in the form of confidentiality, and their own reputation. After all, if they do find a weakness, you wouldn’t want them telling everyone about it. This may seem glaringly obvious, but you would be unpleasantly surprised to learn how many companies have been duped over the years by fake penetration testing entities that actually turned out to be hackers!

    Any legitimate penetration testing company will be more than aware of the importance of client confidentiality and security, and will normally make the NDA their first discussion point before even asking about your network. If a potential penetration testing company does not raise this point, or attempts to deviate when asked questions around confidentiality, it should immediately be treated as a risk, and certainly never be provided privileged information about your network topology.

    Due diligence

    In all cases, you should check and verify the identity and integrity of any company before entering into any contractual agreement. My suggestion here is to only accept reviews or recommendations from people you can actually meet or speak to over the phone without the penetration testing company facilitating any meetings or phone calls on your behalf. This removes the potential for fraud, and reduces the overall risk.

    Testing scope

    The penetration test scope depends on the predefined criteria - if an external penetration test is conducted, then this will typically be limited to devices or services exposed to the internet. The usual practice is to provide a list of IP address ranges and associated subnets, and allow the penetration testing company to “walk” these ranges looking for services. Additionally, if you are looking to conduct a test of your internal network, then (at least to me) this is not a penetration test, but a vulnerability assessment. Why?

    Because if someone is already inside your network, you aren’t testing the ability to penetrate something they already have access to - you are testing how effective your internal infrastructure is.

    Testing process

    The penetration testing company will begin by scanning all IP addresses and subnets to see what responds. If the tester finds an exposed service (normally one you’d expect, in an ideal world), they will perform an array of tests against the address to determine (but not limited to):

    • The type of device
    • The operating system
    • Device fingerprint
    • What services are running
    • What ports are open
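
    The discovery phase above can be sketched in a few lines of Python - a minimal TCP connect scan. The host and port list are illustrative; real testers use purpose-built tools such as Nmap, and you should only ever scan hosts you are authorised to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds,
            # i.e. something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe a handful of well-known service ports on a host
# you are authorised to test (address is illustrative).
# scan_ports("192.0.2.10", [22, 80, 443, 161])
```

    Each open port then becomes a starting point for the deeper service interrogation described below.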

    Once the penetration tester has this information, further interrogation is possible. This part of the exercise is often automated using custom tools and scripts - most of these are often enhanced by the penetration testing company themselves, which provides a unique testing style (more on why this is important later). For example, if the penetration tester finds SNMP exposed on a device, they will then attempt to exploit known vulnerabilities in the protocol in order to get the device to “cough up” other details it wouldn’t normally divulge. A weak SNMP configuration can expose the running configuration of a router for example, meaning that the attacker then gains intelligence about your network and adjacent devices.
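
    As an illustration of how little effort a weak SNMP configuration requires, the sketch below hand-encodes an SNMPv1 GetRequest for sysDescr.0 using the default “public” community string. Any device that answers it will divulge its system description to an anonymous stranger. The byte layout follows the standard SNMPv1 BER encoding; as ever, only probe devices you are authorised to test:

```python
import socket

# Hand-encoded SNMPv1 GetRequest for sysDescr.0 (OID 1.3.6.1.2.1.1.1.0)
# using the default "public" community string.
PACKET_HEX = (
    "3026"          # SEQUENCE, 38 bytes follow
    "020100"        # INTEGER version = 0 (SNMPv1)
    "0406"          # OCTET STRING, 6 bytes: community string
    "7075626c6963"  # "public"
    "a019"          # GetRequest PDU, 25 bytes follow
    "020101"        # request-id = 1
    "020100"        # error-status = 0
    "020100"        # error-index = 0
    "300e"          # varbind list, 14 bytes
    "300c"          # varbind, 12 bytes
    "06082b06010201010100"  # OID 1.3.6.1.2.1.1.1.0 (sysDescr.0)
    "0500"          # NULL value (the agent fills this in)
)
SNMP_GET_SYSDESCR = bytes.fromhex(PACKET_HEX)

def probe_snmp(host, timeout=2.0):
    """Send the probe to UDP/161; any reply means "public" was accepted."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(SNMP_GET_SYSDESCR, (host, 161))
        try:
            reply, _ = s.recvfrom(4096)
            return reply
        except socket.timeout:
            return None
```

    If `probe_snmp` returns anything at all, the device is answering unauthenticated queries - exactly the kind of “cough up” behaviour described above.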

    Such intelligence allows the penetration tester to leverage other vulnerability checks, and ultimately, they may gain access through a route you did not expect - or even know existed.

    Testing responsibilities

    The remit of a penetration tester is not to hack your network, steal data, or bring the infrastructure to its knees, but to expose and report vulnerabilities to their client in a responsible and professional manner. This is typically in the form of a confidential findings report that is delivered to a predefined and authorised contact within your company.

    Testing findings and exceptions

    If the penetration testing company finds a vulnerability or exploit that is considered high risk, then they are duty bound to inform you of this discovery within a predefined time frame. This varies depending on the severity of the issue or potential exploit. In such cases, the penetration testing company provides documentation on the steps taken to reproduce the vulnerability, along with an example of what they were able to do by leveraging it. This gives the client sufficient knowledge to prepare to rectify the issue before the main findings report is produced. Seeing as penetration testers have other clients, the report can take some time to compile, and they wouldn’t want you to have an unknown exploit in the report when it does finally arrive.

    The implications of high risk vulnerabilities are wide ranging, can cause significant reputation damage for the tester if unreported (although this is at the discretion of the tester to report if they deem it important enough), and worse, could be exploited if left unpatched.

    Testing report

    The report should be protected with a strong password, and delivered in a format that cannot easily be manipulated or altered. A secured PDF with editing disabled is the preferred medium. The method of delivery should be as secure as possible to prevent the report from falling into the wrong hands - an ideal solution is an SFTP server, or encrypted email. Most penetration testers use certificates when sending email, so their messages are encrypted by default.


    Remediation

    This part is up to you. A thorough and respectable penetration testing company will provide all relevant information relating to the risk levels they have identified, and will also provide a means to work towards remediation. Depending on the issues identified, the remediation will obviously differ for each device in scope, or each vulnerability identified. In most cases, penetration testing companies offer a reduced rate for re-testing the same devices and vulnerabilities, provided all remediation steps have been completed within a defined time period - typically 30 days. Look at it this way - if you are presented with a report showing that your infrastructure (external and internal) resembles a block of Swiss cheese and you choose to ignore it, then you deserve to be hacked, in my view. Harsh? Yes. A reality? Very much so. Ignore vulnerabilities at your peril.

    Look at it this way - it’s much easier to avoid spilling milk in the first place than it is to mop it up afterwards

    Should I perform my own testing?

    Yes, but your test results could be seen as one-sided or biased, given that you have conducted the test yourself. My advice here would be:

    • Perform your own interim testing. The frequency really depends on your attitude to risk; I personally consider once a quarter a sane value
    • Complete remediation as far as possible, and perform as much post-testing as you can to identify any further risk
    • Engage a recognised penetration testing company to carry out their own independent testing to validate and confirm your remediation

    The report generated by this entity will carry much more weight, and can be used for client evidence if they request it. There’s much more to this topic, so if you’d like any further information, just ask - I’ll be more than happy to answer questions.


    @DownPW absolutely. Then there’s also the cost of having to replace aging hardware - for both the production site, and the recovery location.


    @小城风雨多 I was a die-hard OnePlus user since the 6T, but my experience with the 9 series left me extremely disappointed, and I probably won’t go back now that I have a Samsung S23+, which works perfectly.


    Just seen this post pop up on Sky News


    He has claimed the devices are so safe he would happily use his children as test subjects.

    Is this guy completely insane? You’d seriously use your kids as guinea pigs in human trials? This guy clearly has more money than sense, and anyone who’d put their children in danger in the name of technology “advances” should seriously question their own ethics - and I’m honestly shocked that nobody else seems to have commented on this.

    This entire “experiment” is dangerous to say the least, in my view, as there is huge potential for error. However, the article below, where a paralysed man was able to walk again thanks to a neural “bridge”, is truly groundbreaking and life-changing for that individual.


    However, this is reputable Swiss technology at its finest - Switzerland’s Lausanne University Hospital, the University of Lausanne, and the Swiss Federal Institute of Technology Lausanne were all involved in this process, and the implants themselves were developed by the French Atomic Energy Commission.

    Musk’s “off the cuff” remark makes the entire process sound “cavalier”, in my view, and the brain isn’t something that can be manipulated without dire consequences for the patient if you get it wrong.

    I daresay there are going to be agreements composed by lawyers which each recipient of this technology will need to sign, exonerating Neuralink and its executives of all responsibility should anything go wrong.

    I must admit, I’m torn here (in the sense of the Swiss experiment) - part of me finds it morally wrong to interfere with the human brain like this because of the potential for irreversible damage, although the benefits are huge, obviously life-changing for the recipient, and in most cases may outweigh the risk (at what level I cannot comment, not being a neurosurgeon, of course).

    Interested in other views - would you offer yourself as a test subject for this? If I were in a wheelchair and couldn’t move, I probably would, I think, but I would need assurance that such technology and its associated procedure are safe - and at this stage, I’m not convinced that’s a guarantee that can be given. There are of course no real guarantees with anything these days, but this is a leap of faith that, once taken, cannot be reversed if it goes wrong.


    @DownPW yes, exactly my point.


    @Panda said in Wasting time on a system that hangs on boot:

    Why do you prefer to use KDE Linux distro, over say Ubuntu?

    A matter of taste, really. I’ve tried pretty much every Linux distro out there over the years, and whilst I started with Ubuntu, I used Linux Mint for a long time also. All of them are Debian-based anyway 😁

    I guess I fell in love with KDE (Neon) because of the amount of effort they’d gone to in relation to the UI.

    I agree about the lead and the OS statement, which is why I suspect that Windows simply ignored it (although the device also worked fine there, so it clearly wasn’t that faulty)


    Anyone working in the information and infrastructure security space will be more than familiar with the non-stop evolution that is vulnerability management. Seemingly on a daily basis, we see new attacks emerging, and those old mechanisms that you thought were well and truly dead resurface with “Frankenstein”-like capabilities, rendering your existing defences designed to combat that particular threat either inefficient or, in some cases, completely ineffective. All too often, we see previous campaigns resurface with newer destructive capabilities designed to extort from both the financial and blackmail perspective.

    It’s the function of the “Blue Team” to (in several cases) work around the clock to patch a security vulnerability identified in a system, and ensure that the technology landscape and estate is as healthy as is feasibly possible. On the flip side, it’s the function of the “Red Team” to identify hidden vulnerabilities in your systems and associated networks, and provide assistance around the remediation of the identified threat in a controlled manner.

    Depending on your requirements, the minimum industry accepted testing frequency from the “Red Team” perspective is once per year, and typically involves the traditional “perimeter” (externally facing devices such as firewalls, routers, etc.), websites, public facing applications, and anything else exposed to the internet. Whilst this satisfies the “tick in the box” requirement on infrastructure that traditionally never changes, is it really sufficient in today’s ever-changing environments? The answer here is no.

    With the arrival of flexible computing, virtual data centres, SaaS, IaaS, IoT, and literally every other acronym relating to technology comes a new level of risk. Evolution of system and application capabilities has meant that these very systems are in most cases self-learning (and, for networks, self-healing). Application algorithms, machine learning, and artificial intelligence can all introduce an unintended vulnerability throughout the development lifecycle; therefore, failing to test, address, and validate the security of any new application or modern infrastructure that is public facing is a breach waiting to happen. For those “in the industry”, how many times have you been met with this very scenario:

    “Blue Team: We fixed the vulnerabilities that the Red Team said they’d found…”

    “Red Team: We found the vulnerabilities that the Blue Team said they’d fixed…”

    Does this sound familiar?

    What I’m alluding to here is that security isn’t “fire and forget”. It’s a multi-faceted, complex process of evolution that, very much like the earth itself, is constantly spinning. Vulnerabilities evolve at an alarming rate, and your security program needs to evolve with them rather than simply “stopping” for even a short period of time. It’s surprising (and, in all honesty, worrying) the number of businesses that do not currently perform an internal vulnerability assessment - and, even worse, have no plans to. You’ll notice here I do not refer to this as a penetration test - you can’t “penetrate” something you are already sitting inside. The purpose of this exercise is to engage a third party vendor (subject to the usual Non-Disclosure Agreement process) for a couple of days. Let them sit directly inside your network, and see what they can discover. Topology maps and subnets help, but in reality, this is a discovery “mission”, and it’s up to the tester how they handle the exercise.

    The important component here is scope. Additionally, there are always boundaries. For example, I typically prefer a proof of concept rather than a tester blundering in and using a “capture the flag” approach that could cause significant disruption or damage to existing processes - particularly in-house development. It’s vital that you “set the tone” of what is acceptable, and what you expect to gain from the exercise, at the beginning of the engagement. Essentially, the mantra here is that the evolution wheel never stops - it’s why security personnel are always busy, and CISOs never sleep 🙂

    These days, a pragmatic approach is essential in order to manage a security framework properly. Gone are the days of annual testing alone, being dismissive around “low level” threats without fully understanding their capabilities, and brushing identified vulnerabilities “under the carpet”. The annual testing still holds significant value, but only if undertaken by an independent body, such as those accredited by CREST (for example).

    You can reduce the exposure to risk in your own environment by creating your own security framework, and adopting a frequent vulnerability scanning schedule with self-remediation. Not only does this lower the risk to your overall environment, it also demonstrates to clients and vendors alike who conduct frequent assessments as part of their due diligence programs that you take security seriously. Identifying vulnerabilities is one thing; remediating them is another. You essentially need to find a balance in deciding which comes first. The obvious route is to target critical, high, and medium risks, whilst leaving the low-risk items behind, or on the “back burner”.
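
    Sketching that triage in code makes the policy explicit - a hypothetical findings list (the IDs and labels are illustrative) sorted so that critical and high-risk items surface first:

```python
# Severity labels follow the common critical/high/medium/low convention;
# lower rank means "remediate sooner".
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def remediation_order(findings):
    """Return findings sorted so the most severe are handled first."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])

# Hypothetical scan output.
findings = [
    {"id": "VULN-3", "severity": "low"},
    {"id": "VULN-1", "severity": "critical"},
    {"id": "VULN-2", "severity": "medium"},
]
# remediation_order(findings) puts VULN-1 first and VULN-3 last.
```

    The danger, as the next paragraph explains, is treating the tail of that sorted list as safe to ignore.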

    The issue with this approach is that it’s perfectly possible to chain together multiple vulnerabilities that on their own would be classed as low risk, and end up with something much more sinister when combined. This is why it’s important to address even low-risk vulnerabilities, and to see how easily they could be exploited inside your environment. In reality, no Red Team member can tell you exactly how any threat could pan out if a way to exploit it silently existed in your environment without a proof of concept exercise - that, and the “perfect storm” sometimes necessary to make the previous statement possible in a production environment.

    Vulnerability assessments rely on attitude to risk at their core. If that attitude is classed as low for a high-risk threat, then there needs to be a responsible person capable of arguing for that particular threat to sit at the top of the remediation list. This is often the challenge - board members will accept a level of risk because the remediation itself may impact a particular process, or interfere with a particular development cycle - mainly because they do not understand the implications of weakened security versus desired functionality.

    For any security program to be complete (as far as is possible), it also needs to consider the fundamental weakest link in any organisation - the user. Whilst this sounds harsh, the statement below is always true:

    “A malicious actor can send 1,000 emails to random users, but only needs one to actually click a link to gain a foothold into an organisation”

    For this reason, any internal vulnerability assessment program should also factor in social engineering: phishing simulations, vishing, eavesdropping (water cooler / kitchen chat), unattended documents left on copiers, and dropping a USB thumb drive in reception or “public” (in the sense of the firm) areas.

    There’s a lot more to this topic than this article alone can sanely cover. After several years’ experience in information and infrastructure security, I’ve seen my fair share of inadequate processes and programs, and it’s time we addressed the elephant in the room.

    Got you thinking about your own security program?