We DON'T Need Blinky Lights

    I read an article by Glenn S. Gerstell (general counsel of the National Security Agency) with a great deal of interest. The article itself is quoted below.

    The National Security Operations Center occupies a large windowless room, bathed in blue light, on the third floor of the National Security Agency’s headquarters outside of Washington. For the past 46 years, around the clock without a single interruption, a team of senior military and intelligence officials has staffed this national security nerve center.

    The center’s senior operations officer is surrounded by glowing high-definition monitors showing information about things like Pentagon computer networks, military and civilian air traffic in the Middle East and video feeds from drones in Afghanistan. The officer is authorized to notify the president any time of the day or night of a critical threat.

    Just down a staircase outside the operations center is the Defense Special Missile and Aeronautics Center, which keeps track of missile and satellite launches by China, North Korea, Russia, Iran and other countries. If North Korea was ever to launch an intercontinental ballistic missile toward Los Angeles, those keeping watch might have half an hour or more between the time of detection to the time the missile would land at the target. At least in theory, that is enough time to alert the operations center two floors above and alert the military to shoot down the missile.

    But these early-warning centers have no ability to issue a warning to the president that would stop a cyberattack that takes down a regional or national power grid or to intercept a hypersonic cruise missile launched from Russia or China. The cyberattack can be detected only upon occurrence, and the hypersonic missile, only seconds or at best minutes before attack. And even if we could detect a missile flying at low altitudes at 20 times the speed of sound, we have no way of stopping it.

    Something I’ve been saying all along is that technology alone cannot stop cyber attacks. Often referred to as a “silver bullet” or “blinky lights”, the latest shiny device carries the misconception that simply buying it makes you completely secure. Sorry folks, but this just isn’t true. Cyber crime, with its associated plethora of hourly attacks, is evolving at an alarming rate - much faster than you’d like to believe.

    You’d think that with all the huge technological advances we have made in this world, the almost daily plethora of corporate security breaches, high-profile data loss, and individuals being scammed would have dropped to nothing more than a trickle - even to the point of becoming virtually non-existent. We are making huge progress with landings on Mars, autonomous space vehicles, artificial intelligence, big data, and machine learning - reaching new heights on a daily basis thanks to some of the most creative minds in the technological sphere. But somehow, we have lost our way, stumbled, and fallen - mostly on our own sword. But why ?

    Just like the Y2K gold rush of the late 90s, information security has become the next big thing, with companies ranging from tiny startups to enterprise organisations touting their services and platforms as best in class and the next “must have” tool in the blue team’s already bulging arsenal. Tools that on their own have little effect unless they are combined with something equally expensive to run. We’ve spent so much time deciding which SIEM solution we need, or chasing whatever will be labelled the ultimate silver bullet capable of eliminating the threat of attack once and for all, that in my opinion we have lost sight of the original goal. Regulatory requirements and best practice push us towards products and services that either require additional staff to manage, or are incredibly expensive to deploy and run. Supposedly in an effort to simplify the management, analysis, and processing of millions of logs per hour, we’ve created even more platforms to ingest this data in order to make sense of it.

    In reality, all we have created is a shark-infested pool where larger companies consume up-and-coming tech startups for breakfast to ensure they pose no threat to their business model / gravy train, enabling them to dominate the space even further with their newly enhanced reach.

    How did we get here ? What happened to thinking things through, and working together to combat a threat that increases by the hour ? We seem to be so focused on making sure that we aren’t the next organisation to be breached that we have lost the art of communication, and with it the full benefit of sharing information so that it assists others on their journey. We’ve become so obsessed with the daily onslaught of platforms that we no longer seem to have the time to even think, let alone take stock and regroup - not as individuals, but as a community.

    There are a number of “communities” that offer “free” forums and products under the open source banner, but sadly, these seem to be turning into paid-for products at a rate of knots. I understand people need to live and make money, but if awareness were raised to the point where users wouldn’t click links in phishing emails, fall for the fake emergency wire transfer request from the CEO, or be tempted by the latest offer of cheap technology, then we might - just might - be able to make the world a better place. To make this work, we first need to remove the stigma that the media has ingrained and set in stone like King Arthur’s Excalibur. Let’s start with the hacker / criminal parallel. They aren’t the same thing, folks.

    Nope. Not at all. Hackers are those people who find ingenious ways of getting into networks and infrastructure you never even knew existed, trick you into parting with sensitive information (then show you where you went wrong), and most importantly, educate you so that you and your network are far more secure against real attacks and real criminals. These people exist to increase your awareness, and by extension, your security footprint - not to use it against you in order to steal. Hackers do like to wear hoodies because they are comfortable, but you won’t find one wearing gloves, a balaclava, or sunglasses - and in some cases, they actually prefer desktops to laptops.

    The image being portrayed here is one perpetuated by the media, and it has certainly been effective - but not in a positive way. The word “hacker” is now synonymous with criminals, where it really shouldn’t be. One defines security, whereas the other sets out to break it. If we locked up all the hackers on this planet, we’d only have the blue team remaining. It’s the job of the red team (hackers) to see how strong your defences are. Hackers exist to educate, not infiltrate (at least, not without asking for permission first :))

    I personally have lost count of how many times I’ve sat in meetings where a security platform is pitched as a one-stop shop or a Swiss army knife that can protect your entire network from a breach. Admittedly, there’s some great technology on the market that performs a variety of functions to protect your estate, but it all fails to take into consideration the weakest link in any chain - users. Irrespective of bleeding-edge “combat platforms” (as I like to refer to them), criminals are becoming very adept in their approach, leveraging techniques such as social engineering. It should come as no surprise that this type of attack can literally walk straight past your shiny new defence system, as it relies on the one vulnerability you cannot predict - the human. Hence the term “hacking humans”.

    I’m of the firm opinion that if you want to outsmart a criminal, you have to think like one. Whilst newfangled platforms are created to assist in the fight against cyber crime, they are complex to configure, suffer from alerting bloat (far too many emails, so you end up missing the one where your network is actually being compromised), or are simply overwhelming and difficult to understand. Here’s the thing. You don’t need expensive bleeding-edge platforms with flashing lights to tell you where the weak points lie within your network (although they do help), but you do need to understand how a criminal can and will exploit them. A vulnerability cannot be leveraged if it no longer exists - or better still, never existed to begin with.

    And so, on with the mission, and the real reason as to why I created this site. I’ve been working in information technology for 30 years, and have a very strong technical background in network design and information security.

    What I want to do is create a communication, information, and awareness sharing platform. I created the original concept of what I thought this new community should look like in my head, but it’s taken a while to finally develop it, get people interested, and bring them on board. To my mind, those from inside and outside of the information security arena will pool together, share knowledge, raise awareness, and - probably most important of all - harness this new-found force and drive change forward.

    The breaches we are witnessing on a daily basis are not going to simply stop. They will increase dramatically in their frequency, and will get worse with each incident.

    Let’s stop the “hackers are criminals” myth, start using our own unique talents in this field, and build a community that

    • is able to bring effective change
    • treats everyone as equals

    Once fully established, this community could easily be the catalyst for change - both in perception, and inception.

    Why not wield the stick for a change instead of being beaten with it, and work as a global virtual team ?

    Will you join me ? In case I haven’t already mentioned it, this initiative has no cost - only gains. It is entirely free.



    @qwinter very well put. Great points, and I can certainly align with these. I personally don’t see the lack of a university degree as a barrier to progression. My old boss said that he’d give preference to anyone who had a degree because of their “ability to think logically” (I kid you not). I said “well, you hired me and I don’t have a degree…”.

    He paused for a moment, realising that he’d literally dug himself a hole and fallen into it. He then said “ah yes, but you’re an exception”.

    “Exception” or not - it’s still a bigoted reasoning mechanism, and elitist to put it mildly. Class distinction springs to mind here.



    One of many issues with working in the Infosec community is the inevitable backlash you’ll come across almost daily. In this industry, and probably hundreds of others like it, there are those who have an opinion. There’s absolutely nothing wrong with that, and it’s something I always actively encourage. However, there’s a fine line between what is considered constructive opinion and what comes across as a bigoted approach. What I’m alluding to here is the usage of the word “hacker” and its context. I’ve written about this particular topic before, which, so it seems, appears to have pressed a few buttons that “shouldn’t be pressed”.

    But why is this ?

    The purpose of this article is definition. It isn’t designed to “take sides” or cast aspersions over the correct usage of the term, or the scenarios and paradigms it is used correctly or incorrectly against. For the most part, the term “hacker” is seen as positive within the Infosec community, and based on this, the general consensus is that there should be greater awareness of the differences between hackers and threat actors, for example. The issue is that not everyone outside of this arena is inclined to agree. You could argue that the root of the issue is mainly attributable to the media and how they portray “hackers” as individuals who pursue nefarious activity, using their skills to commit crime and theft on a grand scale by gaining illegal access to networks. On the one hand, the image of hoodies and faceless individuals has created a positive awareness and a sense of caution amongst the target groups – everyday users of civilian systems and corporate networks alike – and with the constant stream of awareness campaigns running daily, this paradigm serves only to perpetuate rather than diminish. On the other hand, if you research the definition of the term “hacker”, you’ll find more than one returned.

    Is this a fair reflection of hackers ? To the untrained eye, picture number 2 probably creates the most excitement. Sure, picture 1 looks “cool”, but it’s not “threatening” as such, as this is clearly the image the media wants to display. Essentially, they have probably taken this stance to increase awareness of an anonymous and faceless threat. But, it ISN’T a fair portrayal.

    Current definitions of “the word”

    The word “hacker” has become synonymous with criminal activity to the point where it cannot be reversed - certainly not overnight, anyway. The media attention cannot be directly blamed either, in my view; without these kinds of campaigns, the impact of such a threat wouldn’t be taken seriously if a picture of a guy in a suit (state sponsored) were used instead. The hoodie represents an unknown, masked assailant, and its creation is for awareness – aimed at those who have no real understanding of what a hacker should look like – hence my original article. As I highlighted above, we live in a world where a picture speaks a thousand words.

    The word hacker is always going to be associated with nefarious activity, and that’s never going to change, regardless of the amount of effort that would be needed to re-educate pretty much the entire planet. Ask anyone to define a hacker and you’ll get the same response. It’s almost like trying to distinguish the difference between a full-blown criminal and a “lovable rogue”, or convincing people that those who wear hoodies aren’t trouble-making adolescent thugs.

    Ultimately, it’s far too ingrained – much like the letters that flow through a stick of rock found at UK seaside resorts. It doesn’t matter how much you break off; the lettering exists throughout the entire stick, whether you want it to or not. To make a real change, and most importantly, have the media (and by definition, everyone else) realise they have made a fundamental misjudgement, we should look at realistic definitions.

    The most notable is the following, taken from TechTarget:

    A hacker is an individual who uses computer, networking or other skills to overcome a technical problem. The term hacker may refer to anyone with technical skills, but it often refers to a person who uses his or her abilities to gain unauthorized access to systems or networks in order to commit crimes. A hacker may, for example, steal information to hurt people via identity theft, damage or bring down systems and, often, hold those systems hostage to collect ransom.

    The term hacker has historically been a divisive one, sometimes being used as a term of admiration for an individual who exhibits a high degree of skill, as well as creativity in his or her approach to technical problems. However, the term is more commonly applied to an individual who uses this skill for illegal or unethical purposes.

    One great example of this is that hackers are not “evil people”, but are in fact industry professionals and experts who use their knowledge to raise awareness by conducting proof-of-concept exercises and providing education around the millions of threats we are exposed to on an almost daily basis. So why does the word “hacker” strike fear into those unfamiliar with its true meaning ? The reasoning behind this unnecessary phenomenon isn’t actually the media alone (although they have contributed significantly to its popularity). It’s perception. You could argue that the media have made this perception worse, and to a degree, that would be true. However, they didn’t create the original alliance – MIT claimed that trophy, and gave the term the “meaning” it retains to this day. Have a look at this

    MIT Article

    Given that the origins date back to 1963, the media clearly cannot be blamed for creating the original (and seemingly incorrect) reference. The “newspaper” in the link is a campus circulation and, as far as I can see, was never intended for public consumption. Here’s a quote from that article:

    “Many telephone services have been curtailed because of so-called hackers, according to Professor Carleton Tucker, administrator of the Institute telephone system.

    The students have accomplished such things as tying up all the tie-lines between Harvard and MIT, or making long-distance calls by charging them to a local radar installation. One method involved connecting the PDP-1 computer to the phone system to search the lines until a dial tone, indicating an outside line, was found.”

    The “so-called hackers” alignment here originally comes from “phreaking” – a traditional method of establishing control over remote telephone systems to allow trunk calls, international dialling, premium rates, and so on, all without the administrator’s knowledge. This “old school” method would certainly no longer work with modern phone systems, but it is certainly “up there” among the established activities that draw a parallel with hacking.

    Whilst a significant portion of blogs, security forums, and even professional security platforms continue to use images of hoodies, faceless individuals, and the term “hacker” in the criminal sense, this is clearly a misconception – unfortunately, one that connotation itself has allowed to set in stone like King Arthur’s Excalibur. In fairness, cyber criminals are mostly faceless individuals; nobody actually sees them commit a crime, and we only realise they are in fact normal people once they are discovered, arrested, and brought to trial for their activities. However, the term “hacker” has been misused on a grand scale – and has been since the 1980s.

    An interesting observation here is that hoodies are intrinsically linked to threatening behaviour. A classic example of this is here. This really isn’t misrepresentation by the media in this case – it’s an unfortunate reality that is on the increase. Quite who is responsible for putting a hacker in a hoodie is something of a discussion topic, but hackers were originally seen as “cyberpunks” (think The Matrix) until the media stepped in, at which point they were suddenly portrayed as skateboarding kids in hoodies. And so, the image we know (and hackers loathe) was born. Perhaps one “logical” explanation for hoodies and hackers is the anonymity the hoodie supposedly affords.

    The misconception of the true meaning of “hacker” has damaged the Infosec community extensively in terms of what should be a “no chalk” line between what is criminal, and what isn’t. However, it’s not all bad news. True meaning aside, the level of awareness around the nefarious activities of cyber criminals has certainly increased, but until we are able to establish a clear demarcation between ethics in terms of what is right and wrong, those hackers who provide services, education, and awareness will always be painted in a negative light, and by inference, be “tarred with the same brush”. Those who pride themselves on being hackers should continue to do so in my view – and they have my full support.

    It’s not their job solely to convince everyone else of their true intent, but ours as a community.

    Let’s start making that change.


    Every once in a while, you encounter a repetitive issue that, no matter what you do to resolve it, manifests itself over and over again - sometimes even on a daily basis. Much of how the issue is remediated really depends on the person assigned to the task.

    You might be puzzled as to why I’d write about something like this, but it’s a situation I see constantly - one I like to refer to as “over thinker syndrome”. What do I mean by this ? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge, and you could land in a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect - and the technical reasoning is almost always to blame.

    Sometime around 2007, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. Rightly so, he wanted to investigate, and asked me if that would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster - none more so than I - so obviously, I gave the go-ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “…it’s rebooting itself every day, so how will that help ?”

    The investigation

    Joking aside, we’ve all heard the “have you rebooted ?” question touted at some point during helpdesk discussions, but this one was different. A system rebooting itself is usually symptomatic of an underlying issue somewhere, and my team member was ready for the task ahead. Stepping up to the plate, he asked if it was ok to install some monitoring software on the server. Usually, installing additional software components on a production server without testing first is a non-starter, but seeing as we needed this resolved as quickly as possible to reinstate the nightly backup (which incidentally hadn’t run successfully for 3 days by this point), I approved without question. There’s a leap of faith at this point, as you could cause more problems than you actually set out to resolve, but, as with anything related to information technology, sometimes you have to accept an element of risk. The software itself was actually for the RAID controller and motherboard. The assigned technician had already decided the cause was something along the lines of a faulty RAM module, or perhaps an issue with the controller itself. My thoughts already leaned elsewhere - if the server reboots itself at exactly the same time every day, then there is an established pattern, and that should be investigated first. It’s a logical approach, but it’s a common trait for technical support staff to think outside of the box - or in this case, outside of the building. Not wanting to push my opinion, or trample on anyone’s toes, I decided to remain quiet and see just how far this would go before intervention was required.

    In this case, not very far. The following morning, after another unannounced nightly reboot, the error “the previous shutdown at [insert time and date here] was unexpected” showed up in the event log. No real surprises there - and once again, at exactly the same time as the previous night. I asked my technician for an update, and he informed me that he believed the memory was faulty and was somehow causing the server to blue screen and reboot. That was actually a reasonable response, so I commended him on his research and findings, but also reminded him to perform a manual backup so that we at least had something to revert to in the event of a failure.

    Later that afternoon, the same tech approached me and said that he had ordered some replacement memory, and wanted to arrange downtime to fit it. Trying to keep a poker face and remain passive, I agreed, and the memory was replaced the same evening around 10pm. At 2am the following morning, kaboom ! - the server rebooted itself again. Not wanting to admit defeat, our courageous tech suggested that the problem could be due to the system overheating. Another fair point, but not realistic, as you’d see a thermal shutdown in the event log. I willingly entertained this, and allowed investigations into the CPU temperature to begin - after another manual backup. Unsurprisingly, the temperature data returned no smoking gun, so that line of enquiry was abandoned. The next port of call was to reapply the service pack. Now, I’ll admit that this used to fix a multitude of issues under Windows NT Server (particularly Service Pack 4), but not under Windows 2003. I declined this for obvious reasons - if you reapply the service pack, you run the risk of overwriting key DLL files that could (and often will) render Exchange inoperable. Not being prepared to introduce an unprecedented risk into what was already becoming something of a showcase, I suggested that we look elsewhere.

    The exasperation

    The final (and honestly, more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, it would show itself randomly throughout the day - it wouldn’t wait until 2am in the morning. Not wanting to come across as condescending, I agreed, but at the same time set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigations were taking place. Neither was I concerned about what could be seen as wasting someone’s time when you think you may have the answer to what now seemed an impossible problem. This is where experience eclipses formal qualifications hands down. Those with university degrees may scoff at this, but those with heavily analytical thinking patterns sometimes avoid logic like the plague and go off on a wild tangent, looking for a dramatically technical explanation to a problem that is much simpler than you’d expect. Hence the title of this article - avoid the “bulldozer to find a china cup” scenario.

    After witnessing another pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at the problem from a logical angle rather than a technical one. The confused look I received was priceless - the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds), he asked me what I meant. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old looking UPS that had a red light illuminated at the front like it had been a prop in a Terminator film.

    The realisation

    Suddenly, the real cause of the issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS was conducting a self-test at 2am each morning, and because that test failed owing to the faulty battery, the connected server lost power - then started back up once the offending equipment left bypass mode and went back online.

    Where is this going, you might ask ? Here’s the moral of this story (and many others like it)

    • Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes, it makes more sense to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement - don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience - it’s earned, gained, and leaves an indelible mark

    Let’s hear your views. Did you ever come across a situation where no matter what you tried, nothing worked ? Did the solution turn out to be much simpler than you’d have ever thought ?


    One of the most important safety nets in IT operations is contingency. Every migration needs a rollback plan in case things don’t quite go the way you’d expect, and with a limited window to implement a change - or, in some cases, a complete migration - the rollback process is an essential component. Without a plan to revert all changes to their previous state, your migration is destined for failure from the outset. No matter how confident you are (I’ve yet to meet a project manager who doesn’t build in redundancy or rollback in one form or another), there is always going to be something you’ve missed, or a change that produces undesirable results.

    It is this seemingly innocent change that can have a domino effect on your migration - unless you have access to a replica environment, the result of the change cannot realistically be predicted. Admittedly, it’s a simple enough process to clone virtual machines to test against, but that’s of no consequence if your change is conducted at the hardware level. A classic example of this is a firewall migration. Whilst it would be possible to test policies to ensure their functionality meets the requirements of the business, confirming VPN links, for example, isn’t so straightforward - especially when you need to rely on external vendors to complete their piece of the puzzle before you can continue. Unless you’re deploying technology into a greenfield site, you do not have the luxury of testing a VPN into a production network during business hours. Based on this, you have a couple of choices:

    1. You perform all testing off hours by swapping in the replacement equipment and performing end-to-end testing. Once you are satisfied everything works as it should, you put everything back the way you found it, then schedule a date for the migration.
    2. You configure the firewall using a separate subnet, VLAN, and other associated networking elements, meaning the two environments run side by side.

    But which path is the right one ? Good question. There’s no hard and fast rule as to which option you go for - although option 2 is more suited to a phased migration approach, whilst option 1 is more aligned to “big bang” - in other words, moving everything at the same time. Option 2 is good for testing, but may not reflect reality, as you are not targeting the same configuration. As a side note, I’ve often seen situations where residual configuration from option 1 has been left behind, meaning you either land up with a conflict of sorts, or black-hole routing.

    Making use of a rollback

    This is where the rollback plan bridges the gap. If you find yourself in a situation where you either run out of time, or cannot continue owing to physical, logical, or external constraints, then you need to invoke your rollback plan. It’s important to note that the project plan should include a point at which progress is reviewed and assessed, and, if necessary, the rollback is executed. My personal preference is at around 40% of the allocated time window - all relevant personnel should reconvene, provide status updates around their areas of responsibility, and give a synopsis of any issues - and be fully prepared to elaborate on these if the need arises. If the responsible manager feels that the project is at risk of overrunning its stated time frame, or cannot be completed within that window, he or she needs to exercise the authority to invoke the rollback plan. When setting the review interval, you should also consider the amount of time required to revert all changes and perform regression testing.
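
    To make that concrete, here’s a minimal worked example of where a 40% checkpoint lands in an eight-hour overnight window - the times and the two-hour rollback allowance are hypothetical, so substitute your own:

    ```python
    # Sketch: the ~40% review checkpoint, with hypothetical times throughout.
    from datetime import datetime, timedelta

    window_start = datetime(2024, 6, 1, 22, 0)   # change window opens 22:00
    window_end = datetime(2024, 6, 2, 6, 0)      # and closes 06:00
    rollback_time = timedelta(hours=2)           # revert changes + regression testing

    window = window_end - window_start
    checkpoint = window_start + window * 0.4     # reconvene and assess progress here
    print(f"Review point: {checkpoint:%H:%M}")   # 01:12

    # If rollback were invoked at the checkpoint, it must still fit the window.
    assert window_end - checkpoint >= rollback_time, "review point is too late"
    ```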

    Rollback provides the ideal opportunity to put everything back how it was before you started on your journey - but it does depend on two major factors. Firstly, you need to allocate a suitable time period for the rollback to be completed within. Secondly, unless you have a list of changes that were made to hardware - inclusive of configuration, patching, and a myriad of others, how can you be sure that you’ve covered everything ?

    Time after time I see the same problem - something gets missed, and turns out to be fundamental on Monday morning when the changes haven’t been cross checked.

    So what should a contingency plan consist of ?

    One surefire way to ensure that configurations are preserved prior to making changes is to create backups of running configs - 2 minutes now can save you 2 days of troubleshooting when you can’t remember which change caused your issue (there’s a minimal scripted sketch of this after this list). For virtual machines, this is typically a snapshot that can be restored later should the need arise. A word to the wise though - don’t leave the machine running on a snapshot for too long, as this can rapidly deplete storage space, and it’s not a simple process to recover a crashed VM that has run out of disk space.

    Keep version and change control records up to date - particularly during the migration. Any change that could negatively impact the remainder of the project should be examined and evaluated, and if necessary, removed from the scope of works (provided this is a feasible step - sometimes negating a process is enough to make a project fail)

    Document each step. I can’t stress the importance of this enough. I understand that we all want to get things done in a timely manner, but will you realistically remember all the changes you made in the order they were implemented ?
    Use differential tools to examine and easily highlight changes between two configurations. There are a number of free tools on the internet that do this, and if you’re using a Windows environment, a personal favourite of mine is WinMerge. A diff tool can separate the wood from the trees quickly, and provides a simple overview of changes - very useful in the small hours, I can assure you. A scripted equivalent appears in the sketch after this list.
    Working on a switch or firewall ? Learn how to use the CLI. This is often superior in terms of power, and usually contains commands that are not available from the GUI. Using this approach, it’s perfectly feasible to bulk-load configuration, and also to back it out using the same mechanism.
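
    As promised above, here’s a minimal sketch of the backup-then-diff workflow. The hostname, the “show running-config” command, and the file layout are assumptions for illustration - adjust them for your own platform and access method:

    ```python
    # Sketch: timestamped running-config backups plus a quick diff between two of them.
    import difflib
    import subprocess
    from datetime import datetime
    from pathlib import Path

    def backup_config(host: str, outdir: str = "config-backups") -> Path:
        # Capture the running config over SSH into a timestamped file, so
        # successive backups never overwrite each other.
        out = subprocess.run(["ssh", host, "show running-config"],
                             capture_output=True, text=True, check=True).stdout
        path = Path(outdir) / f"{host}-{datetime.now():%Y%m%d-%H%M%S}.cfg"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(out)
        return path

    def diff_configs(before: Path, after: Path) -> str:
        # A unified diff highlights exactly what changed between two backups.
        a = before.read_text().splitlines(keepends=True)
        b = after.read_text().splitlines(keepends=True)
        return "".join(difflib.unified_diff(a, b, fromfile=str(before), tofile=str(after)))

    pre = backup_config("fw01.example.com")    # hypothetical device, before the window
    # ... changes happen here ...
    post = backup_config("fw01.example.com")   # and again after the window
    print(diff_configs(pre, post) or "No differences detected")
    ```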

    What if your rollback plan doesn’t work ? Unfortunately, there is absolutely no way to simulate a rollback during project planning, and this is often made worse by many changes being made at once to multiple systems. It’s not that the rollback doesn’t work - it’s almost always a case of settings being reverted before they should be. In most cases, this has the knock-on effect of denying yourself access to a system - and it’s always in a place where there are no local support personnel to assist, at least not immediately. For every migration I have completed over my career, I’ve always ensured that there is an alternative route to reach a remote device should the primary path become inaccessible. For firewalls, this can be a blessing - particularly as they usually permit access on the public interfaces.

    However, delete a route inadvertently and you are toast - you lose access to the firewall, full stop. Get out of that one. What would I do in a situation like this where the firewall is located in Asia, for example, and you are in London ? Again - contingency. You can’t remove a route on a firewall if it was created automatically by the system; a VLAN or directly connected interface will create its own dynamic route, which should still be available. If dealing with a remote firewall, my suggestion here would be Out Of Band Management (OOBM) - but not a device connected directly to the firewall itself, as this presents a security risk if not configured properly. A personal preference is a locally connected laptop in the remote location that uses either independent WiFi or a 3G / MiFi presence. Before the migration starts, establish a WebEx or GoToMeeting session (don’t forget to disable UAC here, as that can shoot you in the foot), and arrange for a network cable to be plugged into the switch fabric, or directly into the firewall. Direct is better if you can spare the interface, as it removes potential routing issues. Just configure the NIC on the remote machine with an address in the same subnet as the interface you’re connected to, and you’re golden.
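
    A quick way to sanity-check that last step before the window opens - the addresses below are hypothetical, and Python’s standard ipaddress module does the arithmetic:

    ```python
    # Sketch: verify the OOBM laptop sits on the firewall's directly connected subnet.
    import ipaddress

    fw_interface = ipaddress.ip_interface("192.0.2.1/24")   # hypothetical inside interface
    laptop = ipaddress.ip_address("192.0.2.50")             # proposed laptop NIC address

    # Direct reachability only works if both ends share the connected subnet.
    assert laptop in fw_interface.network, "laptop address is outside the connected subnet"
    print(f"OK: {laptop} is on {fw_interface.network}; gateway {fw_interface.ip}")
    ```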

    I’ve used the above as a get out of jail free card on several occasions, and I can assure you it works.

    So what are the takeaways here ?

    The most important aspect is to be ready with a response - effectively a “plan B” for when things go wrong. Simple planning in advance can save you having to book a flight, or foot the expense of a local IT support firm with no prior knowledge of your network. There’s the security aspect as well: you’d need to provide the password for the device, which immediately necessitates a change once the remediation is complete. In summary

    • Thoroughly plan each migration and allow time for contingency steps. You may not need them, and if you don’t, then you effectively gain time that could be used elsewhere.
    • Have an alternative way of reaching a remote device, and ensure necessary third-party vendors are going to be available during your maintenance window should this be necessary.
    • Take regular config backups of all devices. You don’t necessarily need an expensive tool for this - I actually designed a method to make this work using Linux, a TFTP server, and a custom bash script - let me know if you’d like a copy 🙂
    • Regularly analyse (automated diff) changes between configurations. Any changes that are undocumented or not previously approved are a cause for alarm and should be investigated.
    • Ensure that you have adequate documentation, and the steps necessary to recover systems in the event of failure.

    Any thoughts or questions ? Let me know !


    It’s a common occurrence in today’s modern world that virtually all organisations have a considerable budget for (or at least a strong focus on) information and cyber security. Often, larger organisations spend millions annually on significant improvements to their security program or framework, yet overlook arguably the most fundamental basics, which should be (but often are not) the building blocks of any fortified stronghold.

    We’ve spent so much time concentrating on the virtual aspect of security and all that it encompasses that we seem to have lost sight of what should arguably be the first item on the list – physical security. It doesn’t matter how much money and effort you plough into designing and securing your estate if you neglect the physical element - the entire program or framework becomes vulnerable and easily negated. Modern cyber crime has evolved, and the general consensus these days is that the traditional perimeter as an entry point is rapidly losing its appeal from the accessibility-versus-yield perspective. Today’s discerning criminal is much more inclined to go for a softer and more predictable target - users themselves - rather than spend hours on reconnaissance and black-box probing, looking for backdoors or other associated weak points in a network or its infrastructure.

    Physical vs virtual

    So does this mean you should focus your efforts solely on the physical elements, and ignore the perimeter altogether ? Absolutely not – doing so would be commercial suicide. However, the physical element should not be neglected either; it should be factored into any security design at the outset, instead of being an afterthought. I’ve worked for a variety of organisations over my career – each with differing views and attitudes to risk concerning physical security. From the banking and finance sector to manufacturing, they all have common weaknesses - weaknesses that should have been eliminated from the outset rather than becoming part of everyday activity. Take this as an example. In order to qualify for buildings and contents insurance, businesses with office space need to ensure that they have effective measures in place to secure that particular area. In most cases, modern security mechanisms dictate that proximity card readers are deployed at main entrances, rendering access impossible (when the locking mechanism is enforced) without a programmed access card or token. But how “impossible” is that access in reality ?

    Organisations often take an entire floor of a building, or at least a subset of it. This means that any doors dividing floors or areas occupied by other tenants must be secured against unauthorised access. Quite often, these floors have more than one exit point for a variety of health and safety / fire regulation reasons, and it’s this particular scenario that often goes unnoticed, or is unintentionally overlooked. Human nature dictates that it’s quicker to take the side exit when leaving the building rather than the main entrance, and the last employee leaving (in an ideal world) has the responsibility of ensuring that the door is locked behind them. The reality, however, is often that the door is held open by a fire extinguisher. Whilst this facilitates easy access during the day, it has a significant impact on your physical security if that same door remains open and unattended all night. I’ve seen this particular offence repeatedly committed over months – not days or weeks – in most organisations I’ve worked for. In fact, this exact situation allowed thieves to steal a laptop left on a desk in the office of a finance firm I previously worked at.

    Theft in general is mostly about opportunity. As an experiment, you could leave a £20 note / $20 bill on your desk and see how long it remained there before going missing. I’m not implying that anyone in particular is a thief, but again, it’s about opportunity. The same thinking applies to information security. It’s commonplace to secure information systems with passwords, least-privilege access, locked server rooms, and all the other usual mechanisms, but what about the physical elements ? It’s not just door locks. It’s anything else that could be classed as sensitive: printed documents left on copiers, long since forgotten and unloved; personally identifiable information left out on desks; misplaced smartphones; or even keys to restricted areas such as usually locked doors or cupboards. That 30-second window could be all that is required to trigger a breach of security – and even worse, of information classed as sensitive.

    Not only could your insurer refuse to pay out if you could not demonstrate beyond reasonable doubt that you had basic physical security measures in place, but (in the EU) you would have to notify the regulator (in this case, the ICO) that information had been stolen. It would be of significant embarrassment to any firm that a “chancer” was able to casually stroll in and take anything they wanted unchallenged, and more significant still in terms of the severity of the breach and the resultant fines imposed by regulators such as the ICO or the SEC – under GDPR, €20m or 4% of annual global (yes, global) turnover, whichever is the higher (and if you are part of a larger organisation, that is 4% of the parent entity’s turnover – not just your firm’s). Of equal significance is the need to notify the ICO within 72 hours of a discovered breach. For electronic systems, you could gain intelligence about what was taken from a centralised logging system (if you have one – that’s another horror story altogether if you don’t and you are breached), but do you know exactly what information has taken up residence on desks ? Simple answer ? No.

    It’s for this very reason that several firms operate a “clean desk” policy - not just for aesthetic reasons, but for information security reasons. Paper shredders are a great invention, but they lack the AI and machine learning needed to wheel themselves around your office looking for sensitive hard-copy (printed) data to destroy in order to keep you compliant with your information security policy (now there’s an invention…).

    But how secure are these “unbreakable” locks ? Despite the furore around physical security in the form of smart locks, thieves seem to be able to bypass these “security measures” with little effort. Here’s a short video courtesy of ABC news detailing just how easy it was (and still is in some cases) to gain access to hotel rooms using cheap technology, tools, and “how-to” articles from YouTube.

    Surveillance systems aren’t exempt either. A camera system can be rendered useless with a can of spray paint, or even something as simple as a grocery bag, if it’s in full view. Admittedly, this would require some prior reconnaissance to determine camera locations before committing any offence, but it’s certainly a viable prospect if that system is not monitored regularly. Additionally (in the UK at least), the usage of CCTV in a commercial setting requires a visible written notice informing those affected that they are being recorded (along with an impact assessment around the usage), and is also subject to various other controls around privacy, usage, security, and retention periods.

    Unbreakable locks ?

    Then there’s the “unbreakable” door lock. Tapplock advertised their “unbreakable smart lock”, only to find that it was vulnerable to the most basic of all forced entry tools – the screwdriver. Have a look at this article courtesy of The Register. In all seriousness, there aren’t many locks that cannot be effectively bypassed. Now, I know what you’re thinking: if the lock cannot be opened, then how do you gain entry ? It’s much simpler than you think. For a great demonstration, we’ll hand over to a scene from “RED” that shows exactly how this works. The lock itself may have a pass-code that “…changes every 6 hours…” and is “unbreakable”, but that doesn’t extend to the material that holds both the door and the access panel for the lock itself.

    And so, on to the actual point. Unless your “unbreakable” door lock is housed within fortified brick or concrete walls, impervious to drills, oxy-acetylene cutting equipment, and proximity explosive charges (ok, that’s a little over the top…), it should not be classed as “secure”. Some of the best examples I’ve seen are metal doors housed in plasterboard / false walls. Personally, if I wanted access to the room that badly, I’d go through the wall with the nearest fire extinguisher rather than fiddle with the lock itself. All it takes is a tap on the wall, and you’ll know for sure whether it’s hollow just by the sound it makes. Finally, there’s the even more ridiculous scenario where you have a reinforced door lock accompanied by a viewing pane (made, of course, of glass). Why bother with the lock when you can simply shatter the glass, put your hand through, and unlock the door ?

    Conclusion

    There are always a variety of reasons why you wouldn’t build your comms room out of brick or concrete – mostly attributable to building and landlord regulations in the premises that businesses occupy. Arguably, if you wanted to build something like this and occupied the ground floor, then yes, you could indeed carry out this work if it were permitted. Most data centres that are truly secure are patrolled 24x7 by security, located underground, or housed within heavily fortified surroundings. Here is an example of one of the most physically secure data centres in the world.

    https://www.identiv.com/resources/blog/the-worlds-most-secure-buildings-bahnhof-data-center

    Virtually all physical security aspects eventually circle back to two common topics – budget, and attitude to risk. The real question here is what value you place on your data – particularly if you are a custodian of it, but the data relates to others. Leaking data because of exceptionally weak security practices in today’s modern age is an unfortunate risk – one that you cannot afford to overlook.

    What are your thoughts around physical security ?


    The recent high-profile breaches impacting organisations large and small are testament to the fact that no matter how you secure credentials, they will always be subject to exploit. Can a password alone ever be enough ? In my view, it’s never enough. The enforced minimum should be a password plus at least a secondary factor. Regardless of how “secure” you consider your password to be, it really isn’t in most cases – it just “complies” with the requirement being enforced.

    Here’s a classic example. We take the common password of “Welcome123” and put it into a password strength checker.
    According to the checker, it’s “strong”. Actually, it isn’t. It’s only considered this way because it meets the complexity requirements: 1 uppercase letter, at least 8 characters, and numbers. What’s also interesting is that a tool sponsored by Dashlane considers the same password acceptable, supposedly taking 8 months to break.
    How accurate is this ? Not accurate at all. “Welcome123” features in every penetration tester’s toolkit – and, by definition, is also used by cyber criminals. As most of this password is made up of a dictionary word plus sequential numbers, it would take less than a second to break, rather than the 8 months reported above. Need further evidence ? Have a look at haveibeenpwned, which provides a mechanism to test just how many times “Welcome123” has appeared in data breaches.
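
    If you’d rather script that check, the same lookup can be run against the public Pwned Passwords range API that haveibeenpwned exposes - under its k-anonymity scheme, only the first five hex characters of the SHA-1 hash ever leave your machine. A minimal sketch using only the standard library (the User-Agent string is an arbitrary placeholder):

    ```python
    # Sketch: count breach appearances via the Pwned Passwords range API.
    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]   # only the prefix is sent over the wire
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "password-audit-sketch"},
        )
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(pwned_count("Welcome123"))  # a depressingly large number
    ```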

    Why are credentials so weak ?

    My immediate response to this is that for as long as humans have habits, and create scenarios that enable them to easily remember their credentials, then this weakness will always exist. If you look at a sample taken from the LinkedIn breach, those passwords that occupy the top slots are arguably the least secure, but the easiest to remember from the human perspective. Passwords such as “password” and “123456” may be easy for users to remember, but on the flip side, weak credentials like this can be broken by a simple dictionary attack in less than a second.
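
    To illustrate just how cheap that attack is, here’s a toy sketch - the wordlist path is hypothetical, and unsalted MD5 stands in for the weak hashing so often found in historical dumps:

    ```python
    # Sketch: a dictionary attack is a lookup, not a search.
    import hashlib
    import time

    # Pretend this is a leaked, unsalted MD5 hash of a user's password.
    leaked = hashlib.md5(b"123456").hexdigest()

    start = time.perf_counter()
    with open("wordlist.txt", "rb") as f:      # hypothetical rockyou-style wordlist
        for line in f:
            candidate = line.strip()
            if hashlib.md5(candidate).hexdigest() == leaked:
                elapsed = time.perf_counter() - start
                print(f"Cracked '{candidate.decode()}' in {elapsed:.3f}s")
                break
    ```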

    Here’s a selection of passwords still in use today – hopefully, yours isn’t on there
    We as humans are relatively simplistic when it comes to credentials and associated security methods. Most users who do not work in the security industry have little understanding, desire to understand, or patience, and will naturally choose the route that makes their lives easier. After all, technology is supposed to increase productivity and make tasks easier to perform, right ? Right. And it’s this exact vulnerability that a cyber criminal will exploit to its full potential.

    Striking a balance between the security of credentials and ease of recall has always had its challenges. A classic example: networks, websites, and applications nowadays typically have password policies in place that only permit the use of a so-called strong password. Given the required assortment of letters, numbers, non-alphanumeric characters, uppercase and lowercase, the password itself is probably “secure” to an acceptable extent - although the method of storing the credentials often isn’t. A shining example of this is the culture of writing down sensitive information such as credentials. I’ve worked in organisations where users actually attached their password to their monitor. Anyone looking for easy access into a computer network is onto an immediate winner here, and unauthorised access - or a full-blown breach - could occur within an alarmingly short period of time.

    Leaked credentials and attacks from within

    You could argue that you would need access to the computer itself first, but in several historical breach scenarios, the attack originated from within. In this case, it may not be an active employee, but someone who has access to the area where that particular machine is located. Any potential criminal now has the credentials – well, the password itself, but what about the username ? This is where a variety of techniques can be used for username discovery – most of them non-technical, and worryingly simple to execute. Think about what is usually on a desk in an office. The most obvious place to look for the username would be on the PC itself: if the user had recently logged out or locked their workstation, then on a Windows network that would give you the username, unless a group policy was in place to hide the last logged-on user. Failing that, most modern desk phones display the name of the user; on Cisco devices using Extension Mobility, the ID of the user is shown, and it doesn’t take long to find it. Finally, there’s the humble business card. A potential criminal can look at the email address format, remove the domain suffix, and potentially predict the username. Most companies tend to leverage the username in email addresses, mainly thanks to templated SMTP address policies – certainly true of on-premise Exchange environments and Office 365 tenants.

    The credentials are now paired. The password has been retrieved in clear text, and by using a simple discovery technique, the username has also been acquired. Sometimes, a criminal can get extremely lucky and be able to acquire credentials with minimal effort. Users have a habit of writing down things they cannot recall easily, and in some cases, the required information is relatively easily divulged without too much effort on the part of the criminal. Sounds absurd and far fetched, doesn’t it ? Get into your office early, or work late one evening, and take a walk around the desks. You’ll be unpleasantly surprised at what you will find. Amongst the plethora of personal effects such as used gym towels and footwear, I guarantee that you will find information that could be of significant use to a criminal – not necessarily readily available in the form of credentials, but sufficient information to create a mechanism for extraction via an alternative source. But who would be able to use such information ?

    Think about this for a moment. You generally come into a clean office in the mornings, so cleaners have access to your office space. I’m not accusing anyone of anything unscrupulous or illegal here, but you do need to be realistic. This is the 21st century, and this is a security measure you need to factor into your overall cyber security policy and strategy. Far too much focus is placed on securing the perimeter network, and not enough on the threat that lies within. A criminal could get a job as a cleaner at a company and spend time collecting intelligence about vulnerabilities waiting to be exploited. Another example of “instant intelligence” is the network topology map. Some of us are not blessed with huge screens, and need to make do with one ancient 19″ monitor, or perhaps two. As topology maps can be quite complex, it’s advantageous to print them in A3 format to make them easier to digest. You may also need to print copies of the same document for meetings. The problem is what you do with those copies once you have finished with them.

    How do we address the issue ? Is there sufficient awareness ?

    Yes, there is - but awareness alone isn’t enough. Disposing of a printed topology map in the usual fashion isn’t the answer, as it can easily be recovered. The information contained in most topology maps is extensive, and it is a goldmine to a criminal looking for intelligence about your network layout. Anything like this is classified information, and should be shredded at the earliest opportunity. Perhaps one of the worst offences I’ve personally witnessed was a member of the IT team opening a password file, then walking away from their desk without locking their workstation. To prove a point about how easily credentials can be inadvertently leaked, I took a photo with a smartphone, then showed the offender what I’d managed to capture a few days later. “Slightly embarrassed” didn’t go anywhere near covering it.

    I’ve been an advocate of securing credentials properly for some time, and recently read about Bill Burr, the author of NIST Special Publication 800-63. Now retired, he has openly admitted that much of the advice he originally provided was, in fact, incorrect.

    “Much of what I did I now regret,” said Mr Burr, who had advised people to change their password every 90 days and use obscure characters.

    “It just drives people bananas and they don’t pick good passwords no matter what you do.”

    The overall security of credentials and passwords

    However, bearing in mind that this supposed “advice” has long been the accepted norm for password security, let’s look at the accepted standards from a well-known auditing firm.

    It would seem that Section 404 of the Sarbanes-Oxley Act dictates that regular credential changes are mandatory as part of the overarching controls. Any organisation regulated by the SEC (for example) would be within scope, and so the argument for not regularly changing your password becomes “invalid” under the act’s definition and narrative. My overall point here is that, in the financial services industry at least, the now-acknowledged bad password advice is perpetuated by a severely outdated set of controls that require you to enforce a password change cycle and remain in compliance with it. In addition, a vast number of sites and services force password changes on a regular basis, paying no attention to what is now extensive research on how people actually choose passwords.

    The argument that frequent forced changes actually weaken password security is an interesting one that definitely deserves intense discussion and real-world examples. But even if your password really is strong (as I mentioned previously, there are variations that are not secure at all, yet are considered strong because they meet a complexity requirement), a forced change will typically produce a simple, predictable mutation of it – and that mutation renders it vulnerable. I took a simple lowercase phrase:

    mypasswordissimpleandnotsecureatall

    [Image: the testing tool’s verdict – an estimated 26 nonillion years to crack]
    The actual testing tool can be found here. So, does a potential criminal have 26 nonillion years to spare ? Any cyber criminal with only basic skills won’t need a fraction of that time: the password is made up of simple dictionary words and is all lowercase, and a dictionary-based attack could break it in seconds rather than nonillions of years.
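
    To see why the tool’s estimate is so misleading, here is a rough back-of-the-envelope sketch in Python. The guess rate and wordlist size are assumptions chosen purely for illustration, not benchmarks:

        # Back-of-the-envelope sketch of why "26 nonillion years" misleads.
        # The guess rate and wordlist size are assumed figures.

        GUESSES_PER_SECOND = 1e10              # assumed offline cracking rig
        SECONDS_PER_YEAR = 60 * 60 * 24 * 365

        phrase = "mypasswordissimpleandnotsecureatall"

        # Naive brute force: try every lowercase string of this length.
        # This is roughly the calculation behind the tool's huge estimate.
        brute_space = 26 ** len(phrase)
        years = brute_space / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"Brute-force search: {years:.1e} years")   # ~1e32 years

        # Reality: attackers try leaked passwords and dictionary-word
        # combinations first. Checking a billion-entry wordlist takes:
        wordlist_entries = 1_000_000_000
        seconds = wordlist_entries / GUESSES_PER_SECOND
        print(f"Billion-entry wordlist: {seconds:.1f} seconds")  # 0.1

    The brute-force figure lands in the same ballpark as the tool’s 26 nonillion years, which tells you the tool is modelling the wrong attack: nobody brute-forces a 35-character string when a wordlist will do.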

    My opinion ? Call it what you like – the password is here to stay for the near future at least. The strength of the hashing scheme protecting stored credentials – MD5, bcrypt, SHA-1 and so on – is irrelevant when an attacker can use established and proven techniques such as social engineering to obtain your password directly. That said, the addition of 2FA dramatically increases account security, as does salting stored hashes and limiting the number of unsuccessful attempts permitted before the associated account is locked. This is a topic that interests me a great deal, and I’d love to hear your feedback and comments.
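
    As a postscript, here is a minimal sketch of what salting actually buys you, using only Python’s standard library. PBKDF2 stands in here for whichever slow, salted scheme you prefer (bcrypt, scrypt, Argon2); the iteration count and example phrase are illustrative assumptions:

        # Minimal sketch of salted password hashing using the standard
        # library only. PBKDF2 stands in for any slow, salted scheme.
        import hashlib
        import hmac
        import os

        def hash_password(password: str) -> tuple[bytes, bytes]:
            """Return (salt, digest). A fresh random salt per password
            means two users with the same password get different digests,
            defeating precomputed rainbow tables."""
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
            return salt, digest

        def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
            return hmac.compare_digest(candidate, digest)  # constant-time compare

        salt, digest = hash_password("correct horse battery staple")
        print(verify_password("correct horse battery staple", salt, digest))  # True
        print(verify_password("wrong guess", salt, digest))                   # False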

    Ever heard of KISS ? Nope - not these guys

    [Image: the rock band KISS]
    What I’m referring to is the acronym – Keep It Simple, Stupid – reportedly coined by Kelly Johnson, lead engineer at the Lockheed Skunk Works (creators of the Lockheed U-2 and SR-71 Blackbird spy planes, among many others). The principle grew out of the relationship between the way things break and the sophistication of the tools available to repair them: Johnson’s challenge to his designers was that the aircraft they built had to be repairable by an average mechanic in the field with basic tools. You might be puzzled as to why I’d write about something like this, but it’s a situation I see constantly – one I like to refer to as “over-thinker syndrome”. What do I mean by this ? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge, and you can land in a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect – and the technical reasoning is usually to blame.

    Some years ago, in a previous career, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. He quite rightly wanted to investigate, and asked me if that would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster – none more so than I – so obviously I gave the go-ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “……it’s rebooting itself every day, so how will that help ?”

    There’s always one, isn’t there ? The final (and honestly, more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced: if there was corruption in the store, or on any of the disks, it would show itself randomly throughout the day – it wouldn’t wait until 2 a.m. Not wanting to come across as condescending, I agreed, but at the same time set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigation took place, and neither was I concerned about what might look like wasting someone’s time on what now seemed an impossible problem. This is where experience eclipses formal qualifications hands down. Those with university degrees may scoff, but highly analytical thinkers often avoid simple logic like the plague, going off on a wild tangent in search of a dramatically technical explanation when the answer is far simpler than they’d expect.

    After witnessing the pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at the problem from a logical angle rather than a technical one. The confused look I received was priceless – the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality it was only a few seconds), he asked me what I meant. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    ………an old-looking UPS with a red fault light illuminated on the front, like a prop from a Terminator film.

    Suddenly, the real cause of the issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly plugged into had a faulty battery. The unit was running a self-test at 2 a.m. each morning, briefly switching its load across to the battery; with the battery dead, the connected server simply lost power, then started back up once the UPS completed the test and returned to mains supply.

    Where is this going, you might ask ? Here’s the moral of this particular story:

    • Just because a problem involves technology, it doesn’t mean the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes it makes more sense to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement – don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience – it’s earned, gained, and leaves an indelible mark