AI... A new dawn, or the demise of humanity ?

Blog

  • Blog Setup

    Solved Customisation
    17
    8 Votes
    17 Posts
    421 Views

    Here is an update. So one of the problems is that I was coding on Windows - duh, right? Windows was changing one of the forward slashes into a backslash when it got to the files folder where the image was being held. So I then booted up my VirtualBox instance of Ubuntu Server and set it up on there. And will wonders never cease - it worked. The other thing is that there is more than one place to grab the templates from. I was grabbing the template from the widget when I should have been grabbing it from the other templates folder, and grabbing the code from the actual theme for the plugin. If any of that makes sense.
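    For reference, a minimal sketch of why this happens in Node: building a path with the platform-dependent separator and then reusing it as a URL. The folder and file names below are just placeholders, not NodeBB's actual layout.

    ```typescript
    import path from "path";

    // path.join() uses the platform separator, so on Windows the result
    // contains backslashes - which then leak into the image URL:
    const fsStyle = path.join("files", "blog", "cover.png");
    // Windows: "files\\blog\\cover.png"   Linux: "files/blog/cover.png"

    // For anything that ends up in a URL, path.posix.join() keeps forward
    // slashes on every platform:
    const urlStyle = path.posix.join("files", "blog", "cover.png");
    // Always: "files/blog/cover.png"

    console.log(fsStyle, urlStyle);
    ```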

    I was able to set it up so it will go to mydomain/blog and I don’t have to forward it to user/username/blog. Now I am in the process of styling it the way I want it to look. I wish there was a way to use a newer version of Bootstrap - there are so many more options. I suppose I could install the newer version or add the CDN in the header, but I don’t want it to cause conflicts, and Bootstrap 3 is a little lacking. I believe that v2 of NodeBB uses a newer version of Bootstrap, or they have made it so you can use any framework you want for styling. I would have to double check though.

    Thanks for your help @phenomlab! I really appreciate it. I am sure I will have more questions so never fear I won’t be going away . . . ever, hahaha.

    Thanks again!

  • Nodebb as blogging platform

    General
    10
    5 Votes
    10 Posts
    315 Views

    @qwinter I’ve extensive experience with Ghost, so let me know if you need any help.

  • 2 Votes
    6 Posts
    252 Views

    @kurulumu-net CSS styling is now addressed and completed.

  • 0 Votes
    1 Posts
    163 Views

    Once in a while, you encounter a repetitive issue that, no matter what you try to do to resolve it, manifests itself over and over again - sometimes even on a daily basis. Much of how the issue is remediated really depends on the person assigned to the task.

    You might be puzzled at why I’d write about something like this, but it’s a situation I see constantly - one I like to refer to as “over thinker syndrome”. What do I mean by this ? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge and you could land up with a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect. And the technical reasoning is almost always to blame. Sometime around 2007, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. Rightly so, he wanted to investigate and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster - none more so than I - so obviously, I gave the go ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “…it’s rebooting itself every day, so how will that help ?”

    The investigation

    Joking aside, we’ve all heard the “have you rebooted” question touted at some point during helpdesk discussions, but this one was different. A system rebooting itself is usually symptomatic of an underlying issue somewhere, and my team member was ready for the task ahead. Stepping up to the plate, he asked if it was ok to install some monitoring software on the server. Usually, installing additional software components on a production server without testing first is a non-starter, but seeing as we needed to get this resolved as quickly as possible to reinstate the nightly backup (which, incidentally, hadn’t run successfully for three days by this point), I provided approval to proceed without question. There’s a leap of faith at this point, as you could cause more problems than you actually set out to resolve in the first place, but, as with anything related to information technology, sometimes you have to accept an element of risk. The software itself was actually for the RAID controller and motherboard. The assigned technician had already decided it was related to something along the lines of a faulty RAM module, or perhaps an issue with the controller itself. My thoughts already leaned elsewhere at this point - if the server reboots itself at exactly the same time every day, then there is an established pattern which should be investigated first. It’s a logical approach, but it’s a common trait for technical support staff to sometimes think outside of the box - or in this case, outside of the building. Not wanting to push my opinion, or trample on anyone’s toes, I decided to remain quiet and see just how far this would go before intervention was required.

    In this case, not very far. The following morning after another unannounced nightly reboot, the error “the previous shutdown at [insert time and date here] was unexpected” showed up in the event log. No real surprises there, and once again, exactly the same time as the previous night. I asked my technician for an update, and he informed me that he believed that the memory was faulty and somehow causing the server to blue screen and reboot. That was actually a reasonable response and so I commended him on his research and findings, but also reminded him to perform a manual backup so that we at least had something to revert to in the event of a failure. Later that afternoon, the same tech approached me and said that he had ordered some replacement memory, and wanted to arrange downtime to fit it. Trying to keep a poker face and remain passive, I agreed, and the memory was replaced the same evening around 10pm. At 2am the following morning, kaboom ! - the server rebooted itself again. Not wanting to admit defeat, our courageous tech suggested that the problem could be due to the system overheating. Another fair point, but not realistic as you’d see this in the event log as a thermal shutdown. I willingly entertained this, and allowed investigations into the CPU temperature to begin - after another manual backup. Unsurprisingly, the temperature data returned no smoking gun, so that was abandoned. The next port of call was to reapply the service pack. Now, I’ll admit that this used to fix a multitude of issues under Windows NT Server (particularly Service Pack 4) but not under Windows 2003. I declined this for obvious reasons - if you reapply the service pack, you run the risk of overwriting key DLL files that could (and often will) render Exchange inoperable. Not being prepared to introduce an unprecedented risk into what was already becoming something of a showcase, I suggested that we look elsewhere.

    The exasperation

    The final (and honestly more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, this would show itself randomly through the day and wouldn’t wait until 2am. Not wanting to come across as condescending, I agreed, but at the same time, set a deadline for escalation. I wasn’t overly concerned about the backups as these were being completed manually each day whilst the investigations were taking place. Neither was I concerned at what could be seen at this point as wasting someone’s time when you think you may have the answer to what now seemed to be an impossible problem. This is where experience will eclipse any formal qualifications hands down. Those with university degrees may scoff at this, but those with substantially analytical thinking patterns seem to avoid logic like the plague and go off on a wild tangent looking for a dramatically technical explanation and solution to a problem when it’s much simpler than you’d expect. Hence the title of this article - Avoid the “bulldozer to find a china cup” scenario. After witnessing another pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at this problem from a logical angle rather than technical. The confused look I received was priceless - the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds) he asked me what I meant by this. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old-looking UPS that had a red light illuminated at the front, like it had been a prop in a Terminator film.

    The realisation

    Suddenly, the real cause of this issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS was conducting a self test at 2am each morning, and because the bypass test failed owing to the burnt battery, the connected server lost power and started back up after the offending equipment left bypass mode and went online.

    Where is this going, you might ask ? Here’s the moral of this particular story (and many others like it):

    • Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes, it makes more sense just to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement - don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience - it’s earned, gained, and leaves an indelible mark

    Let’s hear your views. Did you ever come across a situation where no matter what you tried, nothing worked ? Did the solution turn out to be much simpler than you’d have ever thought ?

  • 5 Votes
    4 Posts
    226 Views

    @crazycells I guess the worst part for me was the trolling - made so much worse by the fact that the moderators allowed it to continue, insisting that the PeerLyst community was setting an example by allowing its members to “self moderate” - a completely ridiculous statement - and it wasn’t until someone other than myself pointed out that all of this toxic activity could in fact be crawled by Google that they decided to step in and start deleting posts.

    In fact, it reached a boiling point where the CEO herself had to step in and post an article stating their justification for “self moderation” which simply doesn’t work.

    The evidence here speaks for itself.

  • 0 Votes
    1 Posts
    158 Views

    The recent high-profile breaches impacting organisations large and small are a testament to the fact that no matter how you secure credentials, they will always be subject to exploit. Can a password alone ever be enough ? In my view, it’s never enough. The enforced minimum should be a password paired with at least one secondary factor. Regardless of how “secure” you consider your password to be, it really isn’t in most cases – it just “complies” with the requirement being enforced.

    Here’s a classic example. We take the common password of “Welcome123” and put it into a password strength checker:
    [Image: password strength checker rating “Welcome123” as “strong”]
    According to the above, it’s “strong”. Actually, it isn’t. It’s only considered this way because it meets the complexity requirements, with 1 uppercase letter, at least 8 characters, and numbers. What’s also interesting is that a tool sponsored by Dashlane considers the same password as acceptable, taking supposedly 8 months to break
    [Image: Dashlane-sponsored strength checker estimating 8 months to crack “Welcome123”]
    How accurate is this ? Not accurate at all. The password of “Welcome123” is in fact one of the passwords contained in any penetration tester’s toolkit – and, by definition, also used by cyber criminals. As most of this password combination is in fact made up of a dictionary word, plus sequential numbers, it would take less than a second to break this rather than the 8 months reported above. Need further evidence of this ? Have a look at haveibeenpwned, which will provide you with a mechanism to test just how many times “Welcome123” has appeared in data breaches
    [Image: haveibeenpwned results showing how many times “Welcome123” has appeared in breaches]
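    To make that concrete, here’s a rough sketch (not the code behind any particular vendor’s checker) showing how a typical complexity rule happily accepts “Welcome123”, alongside a lookup against the haveibeenpwned Pwned Passwords range API - which only ever receives the first five characters of the SHA-1 hash, so the password itself never leaves your machine. Node 18+ is assumed for the built-in fetch.

    ```typescript
    import { createHash } from "crypto";

    // A typical "complexity" policy: 8+ characters, an uppercase letter, a digit.
    // "Welcome123" sails straight through it.
    function meetsComplexityPolicy(pw: string): boolean {
      return pw.length >= 8 && /[A-Z]/.test(pw) && /[0-9]/.test(pw);
    }

    // Pwned Passwords k-anonymity lookup: send only the first 5 hex characters
    // of the SHA-1 hash, then search the returned suffixes locally.
    async function timesSeenInBreaches(pw: string): Promise<number> {
      const sha1 = createHash("sha1").update(pw).digest("hex").toUpperCase();
      const prefix = sha1.slice(0, 5);
      const suffix = sha1.slice(5);
      const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
      const body = await res.text();
      for (const line of body.split("\n")) {
        const [candidate, count] = line.trim().split(":");
        if (candidate === suffix) return parseInt(count, 10);
      }
      return 0;
    }

    const pw = "Welcome123";
    console.log("passes complexity policy:", meetsComplexityPolicy(pw)); // true
    timesSeenInBreaches(pw).then((hits) =>
      console.log(`seen in known breaches: ${hits} times`)
    );
    ```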

    Why are credentials so weak ?

    My immediate response to this is that for as long as humans have habits, and create scenarios that enable them to easily remember their credentials, this weakness will always exist. If you look at a sample taken from the LinkedIn breach, the passwords that occupy the top slots are arguably the least secure, but the easiest to remember from the human perspective. Passwords such as “password” and “123456” may be easy for users to remember, but on the flip side, weak credentials like this can be broken by a simple dictionary attack in less than a second.

    Here’s a selection of passwords still in use today – hopefully, yours isn’t on there
    [Image: list of commonly used passwords]
    We as humans are relatively simplistic when it comes to credentials and associated security methods. Most users who do not work in the security industry have little understanding, desire to understand, or patience, and will naturally choose the route that makes their life easier. After all, technology is supposed to increase productivity and make tasks easier to perform, right ? Right. And it’s this exact vulnerability that a cyber criminal will exploit to its full potential.

    Striking a balance between the security of credentials and ease of recall has always had its challenges. A classic example is that networks, websites and applications nowadays typically have password policies in place that only permit the use of a so-called strong password. Given the consolidation and overall assortment of letters, numbers, non-alphanumeric characters, uppercase and lowercase, the password itself is probably “secure” to an acceptable extent, although the method of storing the credentials isn’t. A shining example of this is the culture of writing down sensitive information such as credentials. I’ve worked in some organisations where users have actually attached their password to their monitor. Anyone looking for easy access into a computer network is onto an immediate winner here, and unauthorised access or a full-blown breach could occur within an alarmingly short period of time.

    Leaked credentials and attacks from within

    You could argue that you would need access to the computer itself first, but in several historical breach scenarios, the attack originated from within. In this case, it may not be an active employee, but someone who has access to the area where that particular machine is located. Any potential criminal now has the credentials – well, the password itself, but what about the username ? This is where a variety of techniques can be used for username discovery – most of them non-technical, and worryingly simple to execute. Think about what is usually on a desk in an office. The most obvious place to look for the username would be on the PC itself. If the user had recently logged out, or locked their workstation, then on a Windows network that would give you the username, unless a group policy was in place to hide it. Failing that, most modern desk phones display the name of the user. On Cisco devices, under Extension Mobility, it’s the ID of the user. It doesn’t take long to find this. Finally, there’s the humble business card. A potential criminal can look at the email address format, remove the domain suffix, and potentially predict the username. Most companies tend to reuse the username in email addresses, mainly thanks to SMTP address template policies – certainly true in on-premises Exchange environments or Office 365 tenants.

    The credentials are now paired. The password has been retrieved in clear text, and by using a simple discovery technique, the username has also been acquired. Sometimes, a criminal can get extremely lucky and be able to acquire credentials with minimal effort. Users have a habit of writing down things they cannot recall easily, and in some cases, the required information is relatively easily divulged without too much effort on the part of the criminal. Sounds absurd and far-fetched, doesn’t it ? Get into your office early, or work late one evening, and take a walk around the desks. You’ll be unpleasantly surprised at what you will find. Amongst the plethora of personal effects such as used gym towels and footwear, I guarantee that you will find information that could be of significant use to a criminal – not necessarily readily available in the form of credentials, but sufficient information to create a mechanism for extraction via an alternative source. But who would be able to use such information ?

    Think about this for a moment. You generally come into a clean office in the mornings, so cleaners have access to your office space. I’m not accusing anyone of anything unscrupulous or illegal here, but you do need to be realistic. This is the 21st century, and as a result, it is a security measure you need to factor in and adopt into your overall cyber security policy and strategy. Far too much focus is placed on securing the perimeter network, and not enough on the threat that lies within. A criminal could get a job as a cleaner at a company, and spend time collecting intelligence in terms of what could be a vulnerability waiting to be exploited. Another example of “instant intelligence” is the network topology map. Some of us are not blessed with huge screens, and need to make do with one ancient 19″ or perhaps two. As topology maps can be quite complex, it’s advantageous to be able to print these in A3 format to make it easier to digest. You may also need to print copies of this same document for meetings. The problem here is: what do you do with that copy once you have finished with it ?

    How do we address the issue ? Is there sufficient awareness ?

    Yes, there is. Disposing of a document like this in the usual fashion isn’t the answer, as it can easily be recovered. The information contained in most topology maps is often extensive, and is like a goldmine to a criminal looking for intelligence about your network layout. Anything like this is classified information, and should be shredded at the earliest opportunity. Perhaps one of the worst offences I’ve ever personally experienced is a member of the IT team opening a password file, then walking away from their desk without locking their workstation. To prove a point about how easily credentials can be inadvertently leaked, I took a photo with a smartphone, then showed the offender what I’d managed to capture a few days later. “Slightly embarrassed” didn’t go anywhere near covering it.

    I’ve been an advocate of securing credentials for some time, and recently read about Bill Burr, the author of “NIST Special Publication 800-63”. Now retired, he has openly admitted that the advice he originally provided was, in fact, incorrect:

    “Much of what I did I now regret,” said Mr Burr, who advised people to change their password every 90 days and use obscure characters.

    “It just drives people bananas and they don’t pick good passwords no matter what you do.”

    The overall security of credentials and passwords

    However, bearing in mind that this supposed “advice” has long been the accepted norm in terms of password security, let’s look at the accepted standards from a well-known auditing firm

    It would seem that Section 404 of the Sarbanes-Oxley Act dictates that regular changes of credentials are mandatory, and part of the overarching controls. Any organisation that is regulated by the SEC (for example) would be covered by, and within the scope of, this requirement, and so the argument for not regularly changing your password becomes “invalid” by the act’s definition and narrative. My overall point here is that, in the case of the financial services industry, the clearly bad password advice is kept alive by a severely outdated set of controls that require you to enforce a password change cycle and remain in compliance with it. In addition, there are a vast number of sites and services that force password changes on a regular basis, and really do not take into account the extensive research on password generation.

    The argument that password security is weakened by having to change the password on a frequent basis is an interesting one that definitely deserves intense discussion and real-world examples, but even if your password really is strong (as I mentioned previously, there are variations of this which are really not secure at all, yet are considered strong because they meet a complexity requirement), a simple mutation of it could render it vulnerable. I took a simple lowercase phrase:

    mypasswordissimpleandnotsecureatall

    [Image: strength checker estimating 26 nonillion years to crack the phrase]
    The actual testing tool can be found here. So, does a potential criminal have 26 nonillion years to spare ? Any cyber criminal who possesses only basic skills won’t need a fraction of that time, as this password is made up of simple dictionary words, is all lowercase, and could in fact be broken in seconds.
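    Some rough, back-of-the-envelope arithmetic shows why the checker’s figure is so misleading: it assumes a blind character-by-character brute force, whereas an attacker guessing common words faces a vastly smaller search space. The dictionary size and guess rate below are assumptions purely for illustration, and real wordlist and combinator attacks prune things far further still.

    ```typescript
    // Character-level view: 35 lowercase letters, guessed blindly.
    const charSpace = 26 ** 35;

    // Word-level view: the same phrase is just 9 very common English words.
    // Assume the attacker combines words from a modest 1,000-word list.
    const wordSpace = 1000 ** 9;

    // Assumed offline guess rate, purely for illustration.
    const guessesPerSecond = 1e10;

    const years = (space: number) => space / guessesPerSecond / (3600 * 24 * 365);

    console.log("brute force:", years(charSpace).toExponential(1), "years"); // ~1e32
    console.log("dictionary :", years(wordSpace).toExponential(1), "years"); // ~3e9
    // In practice, "my password is ..." style phrases appear verbatim in cracking
    // wordlists, so the real-world figure is nowhere near even the second number.
    ```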

    My opinion ? Call it how you like – the password is here to stay for the near future at least. The overall strength of a password or credentials stored using MD5, bcrypt, SHA1 and so on is irrelevant when an attacker can use established and proven techniques such as social engineering to obtain your password. Furthermore, the addition of 2FA or a salt dramatically increases password security – as does limiting the number of unsuccessful attempts permitted before the associated account is locked. This is a topic that interests me a great deal. I’d love to hear your feedback and comments.
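    On the storage side, “adding a salt” in practice usually just means using a password hashing library that does it for you. Here’s a minimal sketch using the widely-used bcrypt npm package; the cost factor shown is just a reasonable example, not a recommendation for any specific system.

    ```typescript
    import bcrypt from "bcrypt";

    const COST_FACTOR = 12; // work factor: each +1 roughly doubles the hashing time

    // bcrypt generates a random salt per password and embeds it in the output,
    // so two users with the same password still end up with different hashes.
    async function storePassword(plain: string): Promise<string> {
      return bcrypt.hash(plain, COST_FACTOR);
    }

    async function verifyPassword(plain: string, stored: string): Promise<boolean> {
      return bcrypt.compare(plain, stored);
    }

    (async () => {
      const stored = await storePassword("Welcome123");
      console.log(stored);                                     // "$2b$12$..." (salt + hash)
      console.log(await verifyPassword("Welcome123", stored)); // true
      console.log(await verifyPassword("welcome123", stored)); // false
    })();
    ```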

  • 0 Votes
    1 Posts
    302 Views

    Ever heard of KISS ? Nope - not these guys

    [Image: the band KISS]
    What I’m referring to is the acronym reportedly coined by Kelly Johnson, lead engineer at the Lockheed Skunk Works (creators of the Lockheed U-2 and SR-71 Blackbird spy planes, among many others), which captures the relationship between the way things break and the sophistication of the tools available to repair them. You might be puzzled at why I’d write about something like this, but it’s a situation I see constantly – one I like to refer to as “over thinker syndrome”. What do I mean by this ? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge and you could land up with a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect. And the technical reasoning is almost always to blame.

    Some years ago in a previous career, a colleague noticed that the Exchange Server (2003 wouldn’t you know) would suddenly reboot halfway through a backup job. Rightly so, he wanted to investigate and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster – none more so than I – obviously, I gave the go ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “…it’s rebooting itself every day, so how will that help ?”

    There’s always one, isn’t there ? The final (and honestly more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, this would show itself randomly through the day and wouldn’t wait until 2am. Not wanting to come across as condescending, I agreed, but at the same time, set a deadline for escalation. I wasn’t overly concerned about the backups as these were being completed manually each day whilst the investigations were taking place. Neither was I concerned at what could be seen at this point as wasting someone’s time when you think you may have the answer to what now seemed to be an impossible problem. This is where experience will eclipse any formal qualifications hands down. Those with university degrees may scoff at this, but those with substantially analytical thinking patterns seem to avoid logic like the plague and go off on a wild tangent looking for a dramatically technical explanation and solution to a problem when it’s much simpler than you’d expect.

    After witnessing the pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at this problem from a logical angle rather than technical. The confused look I received was priceless – the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds) he asked me what I meant by this. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old-looking UPS that had a red light illuminated at the front, like it had been a prop in a Terminator film.

    Suddenly, the real cause of this issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS was conducting a self test at 2am each morning, and because the bypass test failed owing to the burnt battery, the connected server lost power and started back up after the offending equipment left bypass mode and went online.

    Where is this going, you might ask ? Here’s the moral of this particular story:

    • Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes, it makes more sense just to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement – don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience – it’s earned, gained, and leaves an indelible mark

  • 3 Votes
    12 Posts
    335 Views

    @Sala impressive. That’s actually a lot harder than it looks. I once worked for a trading firm in the 90s and a trader came to me with a corrupted floppy disk demanding I get it to work.

    Evidently, it had all of his trading positions on it and he had no backup 😧 and he wasn’t impressed when I told him that the chances of data recovery were less than zero.