How to destroy a community before it's even built

Blog
  • There’s a lot you can learn about a person just by the way they present themselves online - whether that is in a positive or negative light is really up to the individual posting the content. Several of my followers have questioned why I chose to part company with Peerlyst, and here’s why. Firstly, let’s understand the word “community”. Taken literally, it’s something like the below

    “The condition of sharing or having certain attitudes and interests in common.”

    Anyone calling themselves a community should abide by this basic description at all times. Especially the part “having certain attitudes”. It’s this very part of the description that is capable of destroying a community much faster than it takes to create one in the first place. It was always my dream and wish to give something back to the industry that adopted me at the age of 16 as a school leaver, and I promised myself that once I reached a plateau in my career, I would start giving something back in order to help others.

    This initial drive began in 2016 when I started writing articles for Peerlyst. The very first article I donated to the community here detailed the most common types of compromise, and what to look out for. Fairly soon, I was contacted and asked if I’d consider making this a featured resource that their community could use as a learning tool. Happily, I agreed, and began donating regular articles from my own blog for the benefit of their community. As a side point, there are several authors who write similar content for others, but it’s typically for a fee, or a mention in a larger community in order to promote that individual. This isn’t how I work. I’ve never chased glory - I get my satisfaction from those who read my articles, and engage in active discussion relating to the content.

    I always expected questions and dialogue arising from my articles. In most cases, the exchange of opinions, questions, and content in general made for a pleasant experience. Now, not every piece of creative writing inspires everyone, and I completely understand that. However, opinion can easily be divided when a specific response is used, and counterproductive if the response hasn’t been well thought out before clicking that submit button. Written content often suffers from the same central ailment in that it rarely conveys tone or emotion. When you read something someone else has written, it’s impossible to gauge body language or tone of voice. For this reason, diplomacy and a careful selection of words is often a good idea (also known as “think before you post”), as is reading your input before submitting it. Often, the first response to something isn’t the best one, and you’ll find yourself effectively sanitising content after reviewing it, before you submit.

    However, the story (unfortunately) doesn’t end here. I was not on the receiving end of the diatribe about to be unleashed, but watched (with a mixture of disgust and disbelief) as this whole scenario unfolded. The focal point of discussion was this post

    Some of the comments left for the author of this post were, in my view, nothing short of disgusting. Here’s the opening comment

    Those are great academic credentials. Let’s talk about “in the trenches” experience. Were you ever an engineer or specialist hunting threats and vulnerabilities? Run a NESSUS scan? Perform threat mitigation? Get called at 3AM because your network was hacked? What I am seeing is a professional test taker and academic. Perhaps with a photographic memory and tons of charisma? Getting a PhD at an early age and knowing 5 different languages leads me to believe the previous sentence.

    Where is the actual, bonafide experience? As for your “acting chief information security officer for regulated businesses”, again, where is the actual experience? Anyone can be a CISO including a person with a Music background. Just saying.

    There was a comment from the original author of this post, but it has since been removed. It was essentially threatening the author of the above comment with a lawsuit for defamation of character. Unsurprisingly, the response below was then posted

    Feel free. I am well known on here. And your lawyer can contact me at the provided address. If you were truly serious, you would offer the proof I am requesting. I will gladly acknowledge your certification and knowledge when the proof is provided. But you have not done so and that is an indication of your true meaning. I doubt your certification as a CISSP and you have done nothing to prove me wrong. The truth is your only defense.

    I don’t claim to be an airplane pilot. I could not tell you how to land a 737. Why should you be any exemption to that? You claim to be a cybersecurity expert with a CISSP requiring 4 years of actual experience. Where is it? If you will acknowledge that experience, I will not only accept it, I will endorse you.

    Have you “attorney” bring suit against me here in the US (I’ll never travel to Singapore so that doesn’t matter). Have him/her contact me at my stated email address. I will gladly share my physical mailing address for service of process. I’ll encourage service! Let’s go to court. Perhaps I know the laws better than you in the US not to mention cybersecurity.

    As for Peerlyst, maybe they will see it fit to remove an individual who is a poser. A fake. A charlatan through her own lack of admissions. If they ask me to be silent on this, I will honour their request. It is their site after all. Guess we will need to wait and see.

    Is this really necessary ? Since when did we consider it appropriate to behave like Neanderthals by publicly humiliating someone else, then dragging their reputation through the mud ? This is when a so-called community deteriorates into a battlefield, and if the moderators do not make an effort to ring fence “debates” like this, they quickly spiral out of control and dramatically damage what the community set out to build in the first place. The best way to extinguish this particular situation is to disable the comments for that post. As a moderator, this is one of the immediate mechanisms available to prevent brand damage. However, this course of action was not taken, and incredibly, the moderators chose to actually engage in the debate. This was not a wise choice, as the participants then started to respond to the interjection and went off track in the process. Mediation is a powerful tool when running a community, but its effectiveness is severely impacted when you decide to air dirty laundry in public. Why on earth would you want to engage in a debate with someone when they are clearly trolling someone else ? You’re supposed to prevent that from happening in my view. And this is the real reason why I will never write for Peerlyst again. They have knowingly damaged their own community - effectively allowing someone else to poison its integrity and standing as a reliable information source. I know of two others who have contacted me since my LinkedIn post announcing that I no longer write for Peerlyst, and who expressed the same reasons as those stated above.

    And so, on the 15th of November, I invoked my version of “Article 50” and decided to leave the Peerlyst community by deactivating my account, effectively exercising my Right To Be Forgotten. For those who don’t fully understand the meaning of this, here’s a snippet supplied by the ICO

    The right to erasure is also known as ‘the right to be forgotten’. The broad principle underpinning this right is to enable an individual to request the deletion or removal of personal data where there is no compelling reason for its continued processing.

    I was contacted by Peerlyst the following day asking why I had deleted my account. I’m not convinced they were genuinely sorry that they had lost a member - they were more concerned that the content I had contributed over time was also deleted as part of the account deactivation procedure. Here are some of the comments I received

    “I’m sorry to hear that you decided to leave for this reason. I understand you have your own initiative, which I hope will work well for you. However, removing the content which serves 100,000 monthly readers and 500,000 unique readers is a pity for those who come to Peerlyst to learn”

    My response was that all content I had previously provided is posted on my own site. It’s actually my work, and Peerlyst are no longer permitted to use it. I was also asked if I would leave my account in place so that they could retain the content. This concerns me somewhat, as that would imply the content hasn’t actually been deleted, but “moved off the site to somewhere else”. I have asked Peerlyst to confirm that the data has been removed - so far, there has not been any response. I guess they have until May 2018 to delete it from the GDPR standpoint in order to be in full compliance.

    The other comment I received was

    “So sad to see you deactivated your account. You used to believe in the mission of sharing everything to help people improve!?”

    My response…

    “And I still do. Just not for Peerlyst”.

    The point I’ll make here is as follows. For a community to succeed it has to have a solid foundation, and a clearly defined policy. There isn’t much to the policies I put together, and they can be found here.

    https://sudonix.org/policies

    Based on what I saw on Peerlyst, I felt the need to update these accordingly. Take a look yourself. I personally want to mentor the next generation of InfoSec professionals, not get into a pathetic “shit slinging” match that yields no real benefit whatsoever. I’ve also been contacted by one of the moderators - evidently, Peerlyst’s CEO (at the time - they are now defunct) wanted to have a call with her to discuss this. Too little, too late, I’m afraid. The damage is done. I don’t want an apology as one isn’t needed. I don’t want a discussion as nothing will change. In reality, I refuse to associate my name or any of my content with a so-called community that is effectively endorsing one of the worst online experiences we have to date - trolling.

  • @phenomlab I am sorry to hear about your experience. I fully support your decision, and I believe it is a waste of time to spend one more minute in those kinds of environments anyway. As you said, the attitudes of the moderators are very important. Even if you can bring a lot of talented people together, if you cannot maintain the atmosphere of the community, it will not lead anywhere good. Moderators/admins are quite important; if they are not competent, the community is doomed.

    I guess we can think of this as movie remakes that are much worse than the originals… Just because you have a great script and better actors, it does not mean the movie will be directed better. With worse directors/directing, you can end up having worse movies like Psycho, The Shining, and Mummy… The remakes are terrible for each one; the Mummy remake even had Tom Cruise in it, but it is way worse than the original 😄 So, it is clear that the “director” (moderators) is quite key.

  • I can appreciate how much your relationship with them meant to you, and how frustrating it would have been if they hadn’t seen all your efforts. In particular, you can discover similar folks in pubs where the world is like that. If you find time, you can read “They have the right to believe what they want to believe”

  • @crazycells I guess the worst part for me was the trolling - made so much worse by the fact that the moderators allowed it to continue, insisting that the Peerlyst community was setting an example by allowing the community to “self moderate” - such a statement being completely ridiculous - and it wasn’t until someone other than myself pointed out that all of this toxic activity could in fact be crawled by Google that they decided to step in and start deleting posts.

    In fact, it reached a boiling point where the CEO herself had to step in and post an article stating their justification for “self moderation” which simply doesn’t work.

    The evidence here speaks for itself.


  • Link vs Refresh

    Solved Customisation
    8 Votes
    20 Posts
    502 Views

    @pobojmoks Do you see any errors being reported in the console ? At first guess (without seeing the actual code or the site itself), I’d say that this is AJAX callback related

  • 0 Votes
    1 Posts
    151 Views


    One of many issues with working in the Infosec community is the inevitable backlash you’ll come across almost on a daily basis. In this industry, and probably hundreds of others like it, there are those who have an opinion. There’s absolutely nothing wrong with that, and it’s something I always actively encourage. However, there’s a fine line between what is considered to be constructive opinion and what comes across as a bigoted approach. What I’m alluding to here is the usage of the word “hacker” and its context. I’ve written about this particular topic before, which, it seems, pressed a few buttons that “shouldn’t be pressed”.

    But why is this ?

    The purpose of this article is definition. It really isn’t designed to “take sides” or cast aspersions over the correct usage of the term, or the scenarios in which it is used correctly or incorrectly. For the most part, the term “hacker” seems to be seen as positive in the Infosec community, and based on this, the general consensus is that there should be greater awareness of the differences between hackers and threat actors, for example. The issue here is that not everyone outside of this arena is inclined to agree. You could argue that the root of this issue is mainly attributed to the media and how they portray “hackers” as individuals who pursue nefarious activity and use their skills to commit crime and theft on a grand scale by gaining illegal access to networks. On the one hand, the image of hoodies and faceless individuals has created a positive awareness and a sense of caution amongst the target groups – these being everyday users of civilian systems and corporate networks alike – and with the constant stream of awareness campaigns running on a daily basis, this paradigm serves only to perpetuate rather than diminish. On the other hand, if you research the definition of the term “hacker”, you’ll find more than one returned.

    Is this a fair reflection of hackers ? To the untrained eye, picture number 2 probably creates the most excitement. Sure, picture 1 looks “cool”, but it’s not “threatening” as such, as this is clearly the image the media wants to display. Essentially, they have probably taken this stance to increase awareness of an anonymous and faceless threat. But, it ISN’T a fair portrayal.

    Current definitions of “the word”

    The word “hacker” has become synonymous with criminal activity to the point where it cannot be reversed. Certainly not overnight anyway. The media attention cannot be directly blamed either in my view, as without these types of campaigns, the impact of such a threat wouldn’t be taken seriously if a picture of a guy in a suit (state sponsored) was used. The hoodie is representative of an unknown masked assailant and its creation is for awareness – for those who have no real understanding of what a hacker should look like – hence my original article. As I highlighted above, we live in a world where a picture speaks a thousand words.

    The word hacker is always going to be associated with nefarious activity and that’s never going to change, regardless of the amount of effort that would be needed to re-educate pretty much the entire planet. Ask anyone to define a hacker and you’ll get the same response. It’s almost like trying to distinguish the difference between a full blown criminal and a “lovable rogue”, or the fact that those who wear hoodies aren’t all trouble-making adolescent thugs.

    Ultimately, it’s far too ingrained – much like the letters that flow through a stick of rock found at UK seaside resorts. It doesn’t matter how much you break off; the lettering exists throughout the entire stick whether you want it to or not. To make a real change, and most importantly, have the media (and by extension, everyone else) realise they have made a fundamental misjudgement, we should look at realistic definitions.

    The most notable is the below, taken from Tech Target

    A hacker is an individual who uses computer, networking or other skills to overcome a technical problem. The term hacker may refer to anyone with technical skills, but it often refers to a person who uses his or her abilities to gain unauthorized access to systems or networks in order to commit crimes. A hacker may, for example, steal information to hurt people via identity theft, damage or bring down systems and, often, hold those systems hostage to collect ransom.

    The term hacker has historically been a divisive one, sometimes being used as a term of admiration for an individual who exhibits a high degree of skill, as well as creativity in his or her approach to technical problems. However, the term is more commonly applied to an individual who uses this skill for illegal or unethical purposes.

    One great example of this is that hackers are not “evil people” but are in fact industry professionals and experts who use their knowledge to raise awareness by conducting proof of concept exercises and providing education and awareness around the millions of threats that we are exposed to on an almost daily basis. So why does the word “hacker” strike fear into those unfamiliar with its true meaning ? The reason for this unnecessary phenomenon isn’t actually the media alone (although they have contributed significantly to its popularity). It’s perception. You could argue that the media have made this perception worse, and to a degree, this would be true. However, they actually didn’t create the original association – MIT claimed that trophy and gave the term the “meaning” it has to this day. Have a look at this

    MIT Article

    Given that the origins of this date back to 1963, the media is not to blame for creating the seemingly incorrect original reference when it’s fairly obvious that they didn’t. The “newspaper” reflected in the link is a campus circulation and was never designed for public consumption as far as I can see. Here’s a quote from that article:

    “Many telephone services have been curtailed because of so-called hackers, according to Professor Carleton Tucker, administrator of the Institute telephone system.

    The students have accomplished such things as tying up all the tie-lines between Harvard and MIT, or making long-distance calls by charging them to a local radar installation. One method involved connecting the PDP-1 computer to the phone system to search the lines until a dial tone, indicating an outside line, was found.”

    The “so-called hackers” alignment here originally comes from “Phreaking” – a traditional method of establishing control over remote telephone systems allowing trunk calls, international dialling, premium rates, etc, all without the administrator’s knowledge. This “old school” method would no longer work with modern phone systems, but is certainly “up there” with the established activity that draws a parallel with hacking.

    Whilst a significant portion of blogs, security forums, and even professional security platforms continue to use images of hoodies, faceless individuals, and the term “hacker” in the criminal sense, this is clearly a misconception – unfortunately one that connotation itself has allowed to set in stone like King Arthur’s Excalibur. In fairness, cyber criminals are mostly faceless individuals, as nobody can actually see them commit a crime, and they are only recognised as normal people once they are discovered, arrested, and brought to trial for their activities. However, the term “hacker” is being misused on a grand scale – and has been since the 1980s.

    An interesting observation here is that hoodies are intrinsically linked to threatening behaviour. A classic example of this is here. This really isn’t misrepresentation by the media in this case – it’s an unfortunate reality that is on the increase. Quite who is responsible for putting a hacker in a hoodie is something of a discussion topic, but hackers were originally seen as “Cyberpunks” (think Matrix 1) until the media stepped in, at which point they were suddenly seen as skateboarding kids in hoodies. And so, the image we know (and hackers loathe) was born. Perhaps one “logical” perspective for hoodies and hackers could be the anonymity the hoodie supposedly affords.

    The misconception of the true meaning of “hacker” has damaged the Infosec community extensively in terms of what should be a “no chalk” line between what is criminal, and what isn’t. However, it’s not all bad news. True meaning aside, the level of awareness around the nefarious activities of cyber criminals has certainly increased, but until we are able to establish a clear demarcation between ethics in terms of what is right and wrong, those hackers who provide services, education, and awareness will always be painted in a negative light, and by inference, be “tarred with the same brush”. Those who pride themselves on being hackers should continue to do so in my view – and they have my full support.

    It’s not their job solely to convince everyone else of their true intent, but ours as a community.

    Let’s start making that change.

  • 2 Votes
    6 Posts
    252 Views

    @kurulumu-net CSS styling is now addressed and completed.

  • 0 Votes
    1 Posts
    205 Views

    One thing I’ve seen a lot of over my career is the “expert” myth being touted on LinkedIn and Twitter. The idea originated with psychologist K. Anders Ericsson, who studied the way people become experts in their fields, and was later discussed by Malcolm Gladwell in the book “Outliers”: “to become an expert it takes 10,000 hours (or approximately 10 years) of deliberate practice”. This paradigm (if you can indeed call it that) has been adopted by several so-called “experts” - mostly those within the Information Security and GDPR fields. This article isn’t about GDPR (for once), but mostly about those who consider themselves “experts” by virtue of the acronym. Prior to its implementation, nobody should have proclaimed themselves a GDPR “expert”. You cannot be an expert in something that wasn’t actually legally binding until May 25 2018, nor can you have sufficient time invested to be an expert since inception in my view. GDPR is a vast universe, and you can’t claim to know all of it.

    Consultant ? Possibly, yes. Expert ? No.

    The associated sales campaign isn’t much better, and can be aligned to the children’s book “Chicken Licken”. For those unfamiliar with this concept, here is a walkthrough. I’m sure you’ll understand why I chose a children’s story in this case, as it seems to fit the bill very well. What I’ve seen over the last 12 months has been nothing short of amazing - but not in the sense of outstanding. I could align GDPR here to the PPI claims furore - for anyone unfamiliar with what this “uprising” is, here’s a synopsis.

    The “expert” fallacy

    Payment Protection Insurance (PPI) is the insurance sold alongside credit cards, loans and other finance agreements to ensure payments are made if the borrower is unable to make them due to sickness or unemployment. The PPI scandal has its roots set back as far as 1998, although compensatory payments did not officially start until 2011 once the review and court appeal process was completed. Since the deadline for PPI claims was announced as August 2019, the campaign has become intensely aggressive, with, it would seem, thousands of PPI “experts”. Again, I would question the authenticity of such a title. It seems that everyone is doing it, therefore, it must be easy to attain (a bit like the CISSP then). I witnessed the same shark pool of so-called “experts” before, back in the day when Y2K was the latest buzzword on everyone’s lips. Years of aggressive selling campaigns and similarly, years of FUD (Fear, Uncertainty, Doubt - more effectively known as complete bulls…) caused an unprecedented spike that allowed companies and consultants (several of whom had never been heard of before) to suddenly appear out of the woodwork and assume the identity of “experts” in this field. In reality, it’s not possible to be a subject matter expert in a particular field or niche market unless you have extensive experience. If you compare a weapons expert to a GDPR “expert”, you’ll see just how weak this paradigm actually is. A weapons expert will have years of knowledge in the field, and could probably tell you which gun discharged a bullet just by looking at the expended shell casing. I very much doubt a self-styled GDPR expert can tell you what will happen in the event of an unknown scenario around the framework and the specific legal rights (in terms of the individual to whom the data belongs) and implications for the institution affected. How can they when nobody has even been exposed to such a scenario before ? This makes a GDPR expert in my view about as plausible as a Brexit expert specialising in Article 50.

    What defines an expert ?

    The focal point here is in the comparison. A weapons expert can be given a gun and a sample of shell casings, then asked to determine if the suspected weapon actually fired the supplied ammunition or not. Using a process of proven identification techniques, the expert can then determine if the gun provided is indeed the origin. This information is derived from the indentations and markings in the shell casing created by the gun barrel from which the bullet was expelled, together with velocity, angle, and speed measurements obtained from firing the weapon. The impact of the bullet and exit damage is also analysed to determine a match based on material and critical evidence. Given the knowledge and experience level required to produce such results, how long do you think it took to reach this unrivalled plateau ? An expert isn’t solely based on knowledge. It’s not solely based on experience either. In fact, it’s a deep mixture of both. Deep in the sense of the subject matter comprehension, and how to execute that same understanding along with real life experience to obtain the optimum result. Here’s an example. An information technology expert should be able to

    - Identify and eliminate potential bottlenecks
    - Address security concerns
    - Design high availability
    - Factor in extensible scalability
    - Consider risk to adjacent and disparate technology and conduct analysis
    - Ensure that any design proposal meets both the current criteria and beyond
    - Understand the business need for technology and be able to support it

    If I leveraged external consultancy for a project, I’d expect all of the above and probably more from anyone who labels themselves as an expert - or for that matter, an architect. Sadly, I’ve been disappointed on numerous occasions throughout my career where it became evident very quickly that the so-called expert (who, I hasten to add, was earning more an hour than I do in a day in most cases) hired for his “expertise and superior knowledge” in fact appeared to know far less than I do about the same topic.

    How long does it really take to become an expert ?

    I’ve been in the information technology and security field since I was 16. I’m now 47, meaning 31 years’ experience (well, 31 as this year isn’t over yet). If you consider that experience is acquired during an 8 hour day, and use the following equation to determine the number of years needed to reach 10,000 hours

    10000 / 8 / 365 = 3.4246575342 - for the sake of simple mathematics, let’s say 3.5 years.

    However, in the initial calculation, it’s 10 years (using the basis of 90 minutes invested per day) - making the expert title when aligned to GDPR even more unrealistic. As the directive was adopted on the 27 April 2016, the elapsed time period isn’t even enough to cover the first figure cited at 3.5 years, irrespective of the second. The reality here is that no amount of time invested in anything is going to make you an expert if you do not possess the prerequisite skills and a thorough understanding based on previous events in order to supplement and bolster the initial investment. I could spend 10,000 hours practicing a particular sport - yet effectively suck at it because my body (if you’ve met me, you’d know why) isn’t designed for the activity I’m requesting it to perform. Just because I’ve spent 10,000 hours reading about something doesn’t make me an expert by any stretch of the imagination. If I calculated the hours spanned over my career, I would arrive at the below. I’m basing this on an 8 hour day when in reality, most of my days are in fact much longer.

    31 x 365 x 8 = 90,520 hours

    Even when factoring in vacation based on 4 weeks per year (subject to variation, but I’ve gone for the mean average),

    31 x 28 x 8 = 6,944 hours to subtract

    This is only fair as you are not (supposed to be) working when on holiday. Even with this subtraction, the total is still 83,576 hours. Does my investment make me an expert ? I think so, yes - based on the fact that 31 years dedicated to one area would indicate a high level of experience and professional standard - both of which I constantly strive to maintain. Still think 10,000 hours invested makes you an expert ? You decide ! What are your views around this ?
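
    For anyone who wants to sanity-check the arithmetic above, here is a minimal Python sketch of the same calculation. The figures (8 hour day, 28 holiday days, 31 years) are simply the assumptions used in this post, not universal constants.

    ```python
    # Illustrative only - the assumptions mirror those used in the post above.
    YEARS_IN_CAREER = 31
    HOURS_PER_DAY = 8
    DAYS_PER_YEAR = 365
    HOLIDAY_DAYS_PER_YEAR = 28  # 4 weeks per year

    # Years of full-time days needed to reach the 10,000-hour figure
    years_to_10k = 10_000 / HOURS_PER_DAY / DAYS_PER_YEAR
    print(f"Years to reach 10,000 hours at {HOURS_PER_DAY}h/day: {years_to_10k:.2f}")

    # Career hours, before and after subtracting holidays
    gross_hours = YEARS_IN_CAREER * DAYS_PER_YEAR * HOURS_PER_DAY             # 90,520
    holiday_hours = YEARS_IN_CAREER * HOLIDAY_DAYS_PER_YEAR * HOURS_PER_DAY   # 6,944
    print(f"Net career hours: {gross_hours - holiday_hours:,}")               # 83,576
    ```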

  • 0 Votes
    1 Posts
    138 Views

    It’s a common occurrence in today’s modern world that virtually all organisations have a considerable budget for (or at least a strong focus on) information and cyber security. Often, larger organisations spend millions annually on significant improvements to their security program or framework, yet overlook arguably the most fundamental basics which should be (but are often not) the building blocks of any fortified stronghold.

    We’ve spent so much time concentrating on the virtual aspect of security and all that it encompasses, but seem to have lost sight of what should arguably be the first item on the list – physical security. It doesn’t matter how much money and effort you plough into designing and securing your estate when you consider how vulnerable and easily negated the program or framework is if you neglect the physical element. Modern cyber crime has evolved, and it’s the general consensus these days that the traditional perimeter as entry point is rapidly losing its appeal from the accessibility versus yield perspective. Today’s discerning criminal is much more inclined to go for a softer and predictable target in the form of users themselves rather than spend hours on reconnaissance and black box probing looking for backdoors or other associated weak points in a network or associated infrastructure.

    Physical vs virtual

    So does this mean you should be focusing your efforts on the physical elements solely, and ignoring the perimeter altogether ? Absolutely not – doing so would be commercial suicide. However, the physical element should not be neglected either, but instead factored into any security design at the outset instead of being an afterthought. I’ve worked for a variety of organisations over my career – each of them with differing views and attitudes to risk concerning physical security. From the banking and finance sector to manufacturing, they all have common weaknesses. Weaknesses that should, in fact, have been eliminated from the outset rather than being a part of the everyday activity. Take this as an example. In order to qualify for buildings and contents insurance, businesses with office space need to ensure that they have effective measures in place to secure that particular area. In most cases, modern security mechanisms dictate that proximity card readers are deployed at main entrances, rendering access impossible (when the locking mechanism is enforced) without a programmed access card or token. But how “impossible” is that access in reality ?

    Organisations often take an entire floor of a building, or at least a subset of it. This means that any doors dividing floors or areas occupied by other tenants must be secured against unauthorised access. Quite often, these floors have more than one exit point for a variety of health and safety / fire regulation reasons, and it’s this particular scenario that often goes unnoticed, or unintentionally overlooked. Human nature dictates that it’s quicker to take the side exit when leaving the building rather than the main entrance, and the last employee leaving (in an ideal world) has the responsibility of ensuring that the door is locked behind them when they leave. However, the reality is often that the door is instead held open by a fire extinguisher, for example. Whilst this facilitates effective and easy access during the day, it has a significant impact on your physical security if that same door remains open and unattended all night. I’ve seen this particular offence repeatedly committed over months – not days or weeks – in most organisations I’ve worked for. In fact, this exact situation allowed thieves to steal a laptop left on a desk in an office of a finance firm I previously worked at.

    Theft in general is mostly based around opportunity. As a paradigm, you could leave a £20 note / $20 bill on your desk and see how long it remained there before it went missing. I’m not implying here that anyone in particular is a thief, but again, it’s about opportunity. The same process can be aligned to Information security. It’s commonplace to secure information systems with passwords, least privilege access, locked server rooms, and all the other usual mechanisms, but what about the physical elements ? It’s not just door locks. It’s anything else that could be classed as sensitive, such as printed documents left on copiers long since forgotten and unloved, personally identifiable information left out on desks, misplaced smartphones, or even keys to restricted areas such as usually locked doors or cupboards. That 30 second window could be all that would be required to trigger a breach of security – and even worse, of information classed as sensitive. Not only could your insurance refuse to pay out if you could not demonstrate beyond reasonable doubt that you had the basic physical security measures in place, but (in the EU) you would have to notify the regulator (in this case, the ICO) that information had been stolen. Not only would it be of significant embarrassment to any firm that a “chancer” was able to casually stroll in and take anything they wanted unchallenged, but significant in terms of the severity of such an information breach – and the resultant fines imposed by the ICO or SEC (from the regulatory perspective – in this case, GDPR) – at €20m or 4% of annual global (yes, global) turnover (if you were part of a larger organisation, then that is actually 4% of the parent entity turnover – not just your firm) – whichever is the highest. Of equal significance is the need to notify the ICO within 72 hours of a discovered breach. In the event of electronic systems, you could gain intelligence about what was taken from a centralised logging system (if you have one – that’s another horror story altogether if you don’t and you are breached) from the “electronic” angle of any breach via traditional cyber channels, but do you know exactly what information has taken residence on desks ? Simple answer ? No.
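
    As a rough illustration of how that penalty ceiling works, here is a minimal Python sketch of the calculation. The turnover figure is entirely hypothetical and only serves to show how the “whichever is the highest” rule plays out.

    ```python
    # Illustrative only: GDPR's upper tier is the greater of EUR 20m or 4% of
    # annual global turnover (of the parent entity where applicable).
    annual_global_turnover_eur = 750_000_000  # hypothetical parent-entity turnover

    max_fine_eur = max(20_000_000, 0.04 * annual_global_turnover_eur)
    print(f"Maximum possible fine: EUR {max_fine_eur:,.0f}")  # EUR 30,000,000 in this example
    ```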

    It’s for this very reason that several firms operate a “clean desk” policy. Not just for aesthetic reasons, but for information security reasons. Paper shredders are a great invention, but they lack AI and machine learning to wheel themselves around your office looking for sensitive hard copy (printed) data to destroy in order for you to remain compliant with your information security policy (now there’s an invention…).

    But how secure are these “unbreakable” locks ? Despite the furore around physical security in the form of smart locks, thieves seem to be able to bypass these “security measures” with little effort. Here’s a short video courtesy of ABC news detailing just how easy it was (and still is in some cases) to gain access to hotel rooms using cheap technology, tools, and “how-to” articles from YouTube.

    Surveillance systems aren’t exempt either. As an example, a camera system can be rendered useless with a can of spray paint or even something as simple as a grocery bag if it’s in full view. Admittedly, this would require some previous reconnaissance to determine the camera locations before committing any offence, but it’s certainly a viable prospect if that system is not monitored regularly. Additionally, (in the UK at least) the usage of CCTV in a commercial setting requires a written visible notice to be displayed informing those affected that they are in fact being recorded (along with an impact assessment around the usage), and is also subject to various other controls around privacy, usage, security, and retention periods.

    Unbreakable locks ?

    Then there’s the “unbreakable” door lock. Tapplock advertised their “unbreakable smart lock” only to find that it was vulnerable to the most basic of all forced entry – the screwdriver. Have a look at this article courtesy of “The Register”. In all seriousness, there aren’t that many locks that cannot be effectively bypassed. Now, I know what you’re thinking. If the lock cannot be effectively opened, then how do you gain entry ? It’s much simpler than you think. For a great demonstration, we’ll hand over to a scene from “RED” that shows exactly how this would work. The lock itself may have a pass-code that “…changes every 6 hours…” and is “unbreakable”, but that doesn’t extend to the material that holds both the door and the access panel for the lock itself.

    And so onto the actual point. Unless your “unbreakable” door lock is housed within fortified brick or concrete walls and impervious to drills, oxy-acetylene cutting equipment, and proximity explosive charges (ok, that’s a little over the top…), it should not be classed as “secure”. Some of the best examples I’ve seen are a metal door housed in a plasterboard / false wall. Personally, if I wanted access to the room that badly, I’d go through the wall with the nearest fire extinguisher rather than fiddle with the lock itself. All it takes is to tap on the wall, and you’ll know for sure if it’s hollow just by the sound it makes. Finally, there’s the even more ridiculous – where you have a reinforced door lock with a viewing pane (of course, glass). Why bother with the lock when you can simply shatter the glass, put your hand through, and unlock the door ?

    Conclusion

    There’s always a variety of reasons as to why you wouldn’t build your comms room out of brick or concrete – mostly attributed to building and landlord regulations in premises that businesses occupy. Arguably, if you wanted to build something like this, and occupied the ground floor, then yes, you could indeed carry out this work if it was permitted. Most data centres that are truly secure are patrolled 24 x 7 by security, are located underground, or within heavily fortified surroundings. Here is an example of one of the most physically secure data centres in the world.

    https://www.identiv.com/resources/blog/the-worlds-most-secure-buildings-bahnhof-data-center

    Virtually all physical security aspects eventually circle back to two common topics – budget, and attitude to risk. The real question here is what value you place on your data – particularly if you are a custodian of it, but the data relates to others. Leaking data because of exceptionally weak security practices in today’s modern age is an unfortunate risk – one that you cannot afford to overlook.

    What are your thoughts around physical security ?

  • 0 Votes
    1 Posts
    110 Views

    I read an article by Glenn S. Gerstell (Mr. Gerstell is the general counsel of the National Security Agency) with a great deal of interest. That same article is detailed below

    The National Security Operations Center occupies a large windowless room, bathed in blue light, on the third floor of the National Security Agency’s headquarters outside of Washington. For the past 46 years, around the clock without a single interruption, a team of senior military and intelligence officials has staffed this national security nerve center.

    The center’s senior operations officer is surrounded by glowing high-definition monitors showing information about things like Pentagon computer networks, military and civilian air traffic in the Middle East and video feeds from drones in Afghanistan. The officer is authorized to notify the president any time of the day or night of a critical threat.

    Just down a staircase outside the operations center is the Defense Special Missile and Aeronautics Center, which keeps track of missile and satellite launches by China, North Korea, Russia, Iran and other countries. If North Korea was ever to launch an intercontinental ballistic missile toward Los Angeles, those keeping watch might have half an hour or more between the time of detection to the time the missile would land at the target. At least in theory, that is enough time to alert the operations center two floors above and alert the military to shoot down the missile.

    But these early-warning centers have no ability to issue a warning to the president that would stop a cyberattack that takes down a regional or national power grid or to intercept a hypersonic cruise missile launched from Russia or China. The cyberattack can be detected only upon occurrence, and the hypersonic missile, only seconds or at best minutes before attack. And even if we could detect a missile flying at low altitudes at 20 times the speed of sound, we have no way of stopping it.

    Something I’ve been saying all along is that technology alone cannot stop cyber attacks. Often referred to as a “silver bullet”, or “blinky lights”, this provides the misconception that by purchasing that new, shiny device, you’re completely secure. Sorry folks, but this just isn’t true. In fact, cyber crime, and its associated plethora of hourly attacks, is evolving at an alarming rate - in fact, much faster than you’d like to believe.

    You’d think that for all the huge technological advances we have made in this world, the almost daily plethora of corporate security breaches, high profile data loss, and individuals being scammed every day would have dropped down to nothing more than a trickle - even to the point where they became virtually non-existent. We are making huge progress with landings on Mars, autonomous space vehicles, artificial intelligence, big data, machine learning, and essentially reaching new heights on a daily basis thanks to some of the most creative minds in this technological sphere. But somehow, we have lost our way, stumbled and fallen - mostly on our own sword. But why ?

    Just like the Y2K gold rush in the late ’90s, information security has become the next big thing, with companies ranging from a few employees as startups to enterprise organisations touting their services and platforms as the best in class, and the next “must have” tool in the blue team’s already bulging arsenal. Tools that on their own in fact have little effect unless they are combined with something else equally as expensive to run. We’ve spent so much time focusing on efforts ranging from what SIEM solution we need to what will be labelled as the ultimate silver bullet capable of eliminating the threat of attack once and for all that, in my opinion, we have lost sight of the original goal. Regulatory requirements and best practice push us towards products and services that either require additional staff to manage, or are incredibly expensive to deploy and ultimately run. Supposedly, in an effort to simplify the management, analysis, and processing of millions of logs per hour, we’ve created even more platforms to ingest this data in order to make sense of it.

    In reality, all we have created is a shark infested pool where larger companies consume up and coming tech startups for breakfast to ensure that they do not pose a threat to their business model / gravy train, therefore enabling them to dominate the space even further with their newly enhanced reach.

    How did we get to this ? What happened to thought process and working together in order to combat the threat that increases on an hourly basis ? We seem to be so focused on making sure that we aren’t the next organisation to be breached that we have lost the art of communication and the full benefit of sharing information so that it assists others in their journey. We’ve become so obsessed with the daily onslaught of platforms that we no longer seem to have the time to even think, let alone take stock and regroup - not as an individual, but as a community.

    There are a number of ”communities” that offer “free” forums and products under the open source banner, but sadly, these seem to be turning into paid-for products at a rate of knots. I understand people need to live and make money, but if awareness was raised to the point where users wouldn’t click links in phishing emails, fall for the fake emergency wire transfer request from the CEO, or be suddenly tempted by the latest offer in terms of cheap technology then we might - just might - be able to make the world a better place. In order to make this work, we first need to remove the stigma that has become so ingrained by the media and set in stone like King Arthur’s Excalibur. Let’s first start with the hacker / criminal parallel. They aren’t the same thing folks.

    Nope. Not at all. Hackers are those people who find ingenious ways of getting into networks and infrastructure that you never even knew existed, trick you into parting with sensitive information (then inform you as to where you went wrong), and most importantly, educate you so that you and your network are far more secure against real attacks and real criminals. These people exist to increase your awareness, and by definition, security footprint - not use it against you in order to steal. Hackers do like to wear hoodies as they are comfortable, but you won’t find one using gloves, wearing a balaclava or sunglasses, and in some cases, they actually prefer desktops rather than laptops.

    The image being portrayed here is one perpetuated by the media, and it has certainly been effective - but not in a positive way. The word “hacker” is now synonymous with criminals, where it really shouldn’t be. One defines security, whereas the other sets out to break it. If we locked up all the hackers on this planet, we’d only have the blue team remaining. It’s the job of the red team (hackers) to see how strong your defences are. Hackers exist to educate, not infiltrate (at least, not without asking for permission first :))

    I personally have lost count of how many times I’ve sat in meetings where a sales pitch around a security platform is touted as a one stop shop or a Swiss army knife that can protect your entire network from a breach. Admittedly, there’s some great technology on the market that performs a variety of functions to protect your estate, but they all fail to take into consideration the weakest link in any chain - users. Irrespective of bleeding edge “combat platforms” (as I like to refer to them), criminals are becoming very adept in their approach, leveraging techniques such as social engineering. It should come as no surprise for you to learn that this type of attack can literally walk past your shiny new defence system as it relies on the one vulnerability you cannot predict - the human. Hence the term “hacking humans”.

    I’m of the firm opinion that if you want to outsmart a criminal, you have to think like one. Whilst newfangled platforms are created to assist in the fight against cyber crime, they are complex to configure, suffer from alerting bloat (far too many emails so you end up missing the one where your network is actually being compromised), or are simply overwhelming and difficult to understand. Here’s the thing. You don’t need (although they do help) expensive bleeding edge platforms with flashing lights to tell you where weak points lie within your network, but you do need to understand how a criminal can and will exploit these. A vulnerability cannot be leveraged if it no longer exists, or even better, never even existed to begin with.

    And so, on with the mission, and the real reason as to why I created this site. I’ve been working in information technology for 30 years, and have a very strong technical background in network design and information security.

    What I want to do is create a communication, information, and awareness sharing platform. I created the original concept of what I thought this new community should look like in my head, but it’s taken a while to finally develop it, get people interested, and on board. To my mind, those from inside and outside of the information security arena will pool together, share knowledge, raise awareness, and, probably most important of all, harness this new found force and drive change forward.

    The breaches we are witnessing on a daily basis are not going to simply stop. They will increase dramatically in their frequency, and will get worse with each incident.

    Let’s stop the “hackers are criminals” myth, start using our own unique talents in this field, and make a community that

    - is able to bring effective change
    - treats everyone as equals

    The community, once fully established, could easily be the catalyst for change - both in perception and inception.

    Why not wield the stick for a change instead of being beaten with it, and work as a global virtual team ?

    Will you join me ? In case I haven’t already mentioned it, this initiative has no cost - only gains. It is entirely free.

  • 0 Votes
    1 Posts
    134 Views

    When you look at your servers or surrounding networks, what exactly do you see ? A work of art, perhaps ? Sadly, this is anything but the picture painted for most networks when you begin to look under the hood. What I’m alluding to here is that beauty is only skin deep - in the sense that neat cabling resembling art from the Sistine Chapel, tidy racks, and huge comms rooms full of flashing lights look appealing from the eye candy perspective and will probably impress clients, but in several cases, this is the ultimate wolf in sheep’s clothing. Sounds harsh ? Of course it does, but with good intentions and reasoning. There’s not a single person responsible for servers and networks on this planet who will willingly admit that whilst his or her network looks like a masterpiece to the untrained eye, it’s a complete disaster in terms of security underneath.

    In reality, it’s quite the opposite. Organisations often purchase bleeding edge infrastructure as a means of leveraging the clear technical advantages, enhanced security, and competitive edge it provides. However, under the impressive state of the art ambience and air conditioning often lies an unwanted beast. This mostly invisible beast lives on low-hanging fruit, will be tempted to help itself at any given opportunity, and is always hungry. For those becoming slightly bewildered at this point, there really isn’t an invisible beast lurking around your network that eats fruit. But, with a poorly secured infrastructure, there might as well be. The beast being referenced here is an uninvited intruder in your network. A bad actor, threat actor, bad guy, criminal…. call it what you want (just don’t use the word hacker) can find their way inside your network by leveraging the one thing that I’ve seen time and time again in literally every organisation I ever worked for throughout my career - the default username and password. I really can’t stress the importance of changing this on new (and existing) network equipment enough, and it doesn’t stop at this either.

    Changing the default username and password is about 10% of the puzzle when it comes to security and basic protection principles. Even the most complex credentials can be bypassed completely by a vulnerability (or in some cases, a backdoor) in ageing firmware on switches, firewalls, routers, storage arrays, and a wealth of others - including printers, which incidentally make an ideal watering hole thanks to FTP, HTTP, SNMP, and Telnet being enabled by default - most (if not all) of which are usually always on. As cheaper printers do not have screens like their more expensive copier counterparts (the estate is reduced to make the device smaller and cheaper), any potential criminal can hide here and not be detected - often for months at a time - and arguably, they could live in a copier without you being aware as well. A classic example of an unknown exploit into a system was the Juniper firewall backdoor that permitted full admin access using a specific password - regardless of the one set by the owner. Whilst the Juniper exploit didn’t exactly involve a default username and password as such (this particular exploit was hard-coded into the firmware, meaning that any “user” with the right coded password and remote SSH access would achieve full control over the device), it did leverage the fact that poorly configured devices could have SSH accessible to 0.0.0.0/0 (essentially, the entire planet) rather than a trusted set of IP addresses - typically from an approved management network.

    We all need to get out of the mindset of taking something out of a box, plugging it into our network, and then doing nothing else - such as changing the default username and password (ideally disabling it completely and replacing it with a unique ID) or turning off access protocols that we do not want or need. The real issue here is that today’s technology standards make it simple for literally anyone to purchase something and set it up within a few minutes, without considering that a simple port scan of a subnet can reveal a wealth of information to an attacker - several of these tools are equipped with a default username and password dictionary that is leveraged against the device in question if it responds to a request. Changing the default configuration instead of leaving it to chance can dramatically reduce the attack landscape in your network. Failure to do so changes “plug and play” to “ripe for picking”, and it’s those devices that perform seemingly “minor” functions in your network that are the easiest to exploit - and leverage in order to gain access to neighbouring ancillary services. Whilst not an immediate gateway into another device, the compromised system can easily give an attacker a good overview of what else is on the same subnet, for example.
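
    To illustrate just how little effort that initial discovery takes, here is a minimal Python sketch that checks a handful of hosts for services commonly left enabled out of the box (FTP, Telnet, HTTP). The address range is hypothetical - only ever point something like this at equipment you own or are explicitly authorised to test.

    ```python
    import socket

    # Services commonly left enabled by default on printers and network kit
    DEFAULT_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP"}

    def open_default_services(host, timeout=0.5):
        """Return the names of default services accepting connections on the host."""
        found = []
        for port, name in DEFAULT_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    found.append(name)
        return found

    # Hypothetical subnet - replace with hosts you are authorised to scan
    for last_octet in range(1, 21):
        host = f"192.168.1.{last_octet}"
        services = open_default_services(host)
        if services:
            print(f"{host}: {', '.join(services)} responding - check for default credentials")
    ```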

    So how did we arrive at the low hanging fruit paradigm ?

    It’s simple enough if you consider the way that fruit can weigh down the branch to the point where it is low enough to be picked easily. A poorly secured network contains many vulnerabilities that can be leveraged and exploited very easily without the need for much effort on the part of an attacker. It’s almost like a horse grazing in a field next to an orchard where the apples hang over the fence. It’s easily picked, often overlooked, and gone in seconds. When this term is used in information security, a common parallel is the path of least resistance. For example, a pickpocket can acquire your wallet without you even being aware, and this requires a high degree of skill in order to evade detection yet still achieve the primary objective. On the other hand, someone strolling down the street with an expensive camera hanging over their shoulder is a classic example of the low hanging fruit analogy, in the sense that this theft is based on an opportunity that wouldn’t require much effort - yet with a high yield. Here’s an example of how that very scenario could well play out.

    Now, as much as we’d all like to handle cyber crime in this way, we can’t. It’s illegal 🙂

    What isn’t illegal is prevention. 80% of security is based on best practice. Admittedly, there is a fair argument as to what exactly is classed as “best” these days, although it’s a relatively well known fact that patching the Windows operating system, for example, is one of the best ways to stamp out a vulnerability - but only against the vulnerabilities it is designed to address. Windows is just the tip of the iceberg when it comes to vulnerabilities - it’s not just operating systems that suffer, but applications, too. You could take a Windows based IIS server, harden it in terms of permitted protocols and services, plus install all of the available patches - yet have an outdated version of WordPress running (see here for some tips on how to reduce that threat), or often even worse, outdated plugins that allow remote code execution. The low hanging fruit problem becomes even more obvious when you consider breaches such as the well-publicised Mossack Fonseca (“Panama Papers”). What became clear after an investigation is that the attackers in this case leveraged vulnerabilities in the firm’s public facing WordPress and Joomla installations - this in fact led to them being able to exploit an equally vulnerable mail server by brute-forcing it.
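
    As a quick illustration of how visible an outdated install can be, here is a minimal Python sketch that looks for the version string WordPress advertises by default in its generator meta tag. The URL is a placeholder, the tag may well have been hidden or removed on hardened sites, and as ever, only check sites you are responsible for.

    ```python
    import re
    import urllib.request

    # Placeholder URL - point this at a site you own or manage
    url = "https://example.com/"

    with urllib.request.urlopen(url, timeout=5) as response:
        html = response.read().decode("utf-8", errors="ignore")

    # By default, WordPress includes a <meta name="generator"> tag containing its version
    match = re.search(r'<meta name="generator" content="WordPress ([\d.]+)"', html)
    if match:
        print(f"Site advertises WordPress {match.group(1)} - compare against the current release")
    else:
        print("No WordPress generator tag found (hidden, removed, or not WordPress)")
    ```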

    So what should you do ? The answer is simple. It’s harvest time.

    If there is no low-hanging fruit to pick, life becomes that much more difficult for any attacker looking for a quick “win”. Unless determined, it’s unlikely that your average attacker is going to spend a significant amount of time on a target (unless it’s Fort Knox - then you’d have to question the sophistication) only to walk away empty handed with nothing to show for the effort. To this end, below are my top recommendations. They are not new, not exhaustive, and certainly not rocket science - yet they are surprisingly missing from the “security 101” perspective in several organisations.

    - Change the default username and password on ALL infrastructure. It doesn’t matter if it’s not publicly accessible - this is irrelevant when you consider the level of threats that have their origins from the inside. If you do have to keep the default username (in other words, it can’t be disabled), set the lowest possible access permissions, and configure a strong password (a short verification sketch follows this list).
    - Close all windows - in this case, lock down protocols and ports that are not essential - and if you really do need them open, ensure that they are restricted.
    - Deploy MFA (or at least 2FA) to all public facing systems and those that contain sensitive or personally identifiable information.
    - Deploy adequate monitoring and logging techniques, using a sane level of retention. Without any means of forensic examination, any bad actor can be in and out of your network well before you even realise a breach may have taken place. The only real difference is that without decent logging, you have no way of confirming or, even worse, quantifying your suspicion. This can spell disaster in regulated industries.
    - Don’t shoot yourself in the foot. Ensure all Windows servers and PCs are up to date with the latest patches. The same applies to Linux and Mac systems - despite the hype, they are vulnerable to an extent (but not in the same way as Windows), although attacks are notoriously more difficult to deploy and would need to be in the form of a rootkit to work properly.
    - Do not let routers, firewalls, switches, etc “slip” in terms of firmware updates. Keep yourself and your team regularly informed and updated around the latest vulnerabilities, assess their impact, and most importantly, plan an update review. Not upgrading firmware on critical infrastructure can have a dramatic effect on your overall security.
    - Lock down USB ports, CD/DVD drives, and do not permit access to file sharing, social media, or web based email. This has been an industry standard for years, but you’d be surprised at just how many organisations still have these open and yet do not consider this a risk.
    - Reduce the attack vector by segmenting your network using VLANs. For example, the sales VLAN does not need to (and shouldn’t need to) connect directly to accounting, etc. In this case, a ransomware or malware outbreak in sales would not traverse to other VLANs, therefore restricting the spread. A flat network is simple to manage, but a level playing field for an attacker to compromise if all the assets are in the same space.
    - Don’t use an account with admin rights to perform your daily duties. There are no prizes for guessing the level of potential damage this can cause if your account is compromised, or you land up with malware on your PC.
    - Educate and phish your users on a continual basis. They are the gateway directly into your network, and bypassing them is surprisingly easy. You only have to look at the success of phishing campaigns to realise that they are (and always will be) the weakest link in your network.
    - Devise a consistent security and risk review framework. Conducting periodic security reviews is always a good move, and you’d be surprised at just what is lurking around on your network without your knowledge. There needn’t be a huge budget for this. There are a number of open source projects and platforms that make this process simple in terms of identification, but you’ll still need to complete the “grunt” work in terms of remediation. I am currently authoring a framework that will be open source, and will share this with the community once it is completed.
    - Conduct regular governance and due diligence on vendors - particularly those that handle information considered sensitive (think GDPR). If their network is breached, any information they hold around your network and associated users is also at risk.
    - Identify weak or potential risk areas within your network. Engage with business leaders and management to raise awareness around best practice and information security.
    - Perform breach simulation, and engage senior management in this exercise. As they are the fundamental core of the business function, they also need to understand the risk and, more importantly, the decisions and communication that are inevitable post breach.
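
    As promised above, here is a minimal sketch of how you might verify that a device no longer accepts well-known default credentials. It assumes the device exposes an admin page protected by HTTP basic auth; the address and credential pairs are hypothetical, and the third-party requests library is used purely for brevity. Only run this against equipment you own or manage.

    ```python
    import requests  # third-party: pip install requests

    # Hypothetical device and credential pairs - adjust for your own estate
    DEVICE_URL = "http://192.168.1.10/"
    DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

    for username, password in DEFAULT_CREDENTIALS:
        try:
            response = requests.get(DEVICE_URL, auth=(username, password), timeout=5)
        except requests.RequestException as exc:
            print(f"Could not reach {DEVICE_URL}: {exc}")
            break
        if response.status_code == 200:
            print(f"WARNING: {username}/{password} still accepted - change or disable it")
        else:
            print(f"{username}/{password} rejected (HTTP {response.status_code})")
    ```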

    There is no silver bullet when it comes to protecting your network, information, and reputation. However, the list above will form the basis of a solid framework.

    Let’s not be complacent - let’s be compliant.

  • 3 Votes
    9 Posts
    300 Views

    Well, just remember - No matter where ya’ go, there you are. 🏇 🐎 🐴