AI... A new dawn, or the demise of humanity ?

Blog
  • Ever since the first computer was created and made available to the world, technology has advanced at an incredible pace. From its early inception, long before the World Wide Web became the common platform it is today, there have been innovators. Some of them faded into obscurity before their ideas even made it into the mainstream - for example, Sir Clive Sinclair’s ill-fated C5 - effectively the prehistoric Segway of the 80s, which ended up in receivership after falling short of both sales forecasts and enthusiasm from the general public, with criticism mostly centred on safety, practicality, and the notoriously short battery life. Sinclair had a long-standing interest in battery powered vehicles, and whilst his idea seemed outlandish in the 80s, if you look at today’s electric car market, he was actually a groundbreaking pioneer.

    The technology revolution

    The next revolution in technology was, without doubt, the World Wide Web - created in 1989 by Sir Tim Berners-Lee whilst working at CERN in Switzerland, and making use of the earliest form of common computer language - HTML. This new technology, coupled with the Mosaic browser, formed the basis of all technology communication as we know it today: the internet. With the dot com bubble lasting from 1995 to 2001 before finally bursting, the huge wave of interest in this new transport and communication phenomenon meant that new ideas came to life. Ideas that were previously considered inconceivable became probable, and then reality, as technology gained significant ground and funding from a variety of sources. One of the earliest investors in the internet was Cisco. Despite losing almost 86% of its market value during the dot com fallout, it managed to cling on, and is now responsible for providing the underpinning technology and infrastructure that makes the web as we know it today function. In a similar vein, eBay and Amazon were early players in the dot com boom, and also managed to stay afloat during the crash. Amazon is the huge success story that went from simply selling books to being one of the largest technology firms in its space, pioneering technology such as Amazon Web Services that effectively destroyed the physical data centre as organisations moved their operations to cloud based environments at an unstoppable rate.

    With the rise of the internet came the rise of automation and Artificial Intelligence. Early technological advances and revolutionary ideas arrived in the form of self-service ATMs, analogue cell phones, credit card transaction processing (data warehouses), and improvements to home appliances, all designed to make our lives easier. Whilst it’s undisputed that technology has immensely enriched our lives and allowed us to achieve feats of engineering and construction that Brunel could have only dreamed of, the technology evolution wheel now spins at an alarming rate. The mobile phone, when first launched, was the size of a house brick and had an antenna that made placing it in your pocket impossible unless you wore a trench coat. Newer iterations of the same technology saw analogue move to digital, and the cell phone reduce in size to that of a Mars bar. As with all technology advances, the first generation was rapidly left behind, with 3G, then 4G becoming the mainstream and accepted standard (with 5G being in the final stages before release). Along with the accessibility factor in terms of mobile networks came the smartphone - an idea pioneered in 2007 by Steve Jobs with the arrival of the original iPhone. This technology brand rocketed in popularity and rapidly became the most sought after technology in the world thanks to its founder’s insight. Since 2007, we’ve seen several new iPhone and iPad models surface - as of now, up to the iPhone X. 2008 saw competitor Android release version 1.0 on its first device. Fast forward ten years, and the most recent release is Oreo (8.0). The smartphone and enhanced capacity networks made it possible to communicate in ways that were previously inaccessible. From instant messaging to video calls on your smartphone, plus a wealth of applications designed to provide both entertainment and enhanced functionality, technology was now at the forefront and a major component of everyday life.

    The brain’s last stand ?

    The rise of social media platforms such as Facebook and Twitter took communication to a new level - creating a playing field for technology to further embrace communication and enrich our lives in terms of how we interact with others, and the information we share on a daily basis. However, to fully understand how Artificial Intelligence made such a dramatic impact on our lives, we need to step back to 1950, when Alan Turing pioneered the Turing Test. Probably the most defining moment in Artificial Intelligence history came in 1997, when reigning world chess champion Garry Kasparov played supercomputer Deep Blue - and subsequently lost. Not surprising when you consider that the IBM built machine was capable of evaluating 200 million positions per second. The question was, could it cope with strategy ? The clear answer was yes. Dubbed “the brain’s last stand”, this event set the inevitable path for Artificial Intelligence to scale new heights. The US military attempted to use AI during the Cold War, although this amounted to virtually nothing. However, interest in the progress of Artificial Intelligence rose quickly, and development of the technology began to be taken seriously, with a range of autonomous robots. BigDog, developed by Boston Dynamics, was one of the first - designed to operate as a robotic pack animal in terrain considered unreachable or inaccessible for standard vehicles, although it has never actually seen active service. iRobot also gained popularity and became a major player in this area. Its bomb disposal device, PackBot, combines user control with artificial intelligence capabilities such as explosives sniffing. As of today, over 2,000 PackBot devices have been deployed in Iraq and Afghanistan with the aim of successfully and safely tackling IEDs to prevent loss of human life.

    Never heard of Boston Dynamics ? Here’s a video that’ll give you an insight into one of their latest creations

    In a similar vein to the Kasparov match, 2011 saw IBM unveil its latest technology, Watson, which took on the human brain again - this time in the US quiz show Jeopardy! Watson was “trained” for three years to take on this challenge and, using a complex set of algorithms and machine learning, trounced the human opposition (who happened to be two of the show’s smartest contestants) and stole the show. This incident quickly went viral, and positioned Artificial Intelligence as a prominent technology that had advanced to the point where it proved beyond reasonable doubt to be superior to the human brain. 2014 saw the introduction of driverless vehicles that leveraged Artificial Intelligence to make decisions based on road and traffic conditions. Both Tesla and Google are major players in this area - more predominantly Tesla of late. I’ve previously written an article concerning the use of Artificial Intelligence around driverless vehicles that can be found here.

    The Facebook Experiment

    In the same year, Tim Berners-Lee himself said that machines were getting smarter, but humans were not. He also stated that computers would end up communicating with each other in their own language. Given the Facebook AI experiment conducted this year, this prediction was in fact correct. Here’s an excerpt of the conversation between two bots named Alice and Bob.

    Bob: i can i i everything else . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to
    Bob: you i everything else . . . . . . . . . . . . . .
    Alice: balls have a ball to me to me to me to me to me to me to me
    Bob: i i can i i i everything else . . . . . . . . . . . . . .
    Alice: balls have a ball to me to me to me to me to me to me to me
    Bob: i . . . . . . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to
    Bob: you i i i i i everything else . . . . . . . . . . . . . .
    Alice: balls have 0 to me to me to me to me to me to me to me to me to
    Bob: you i i i everything else . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to

    Whilst this discussion became difficult to decipher from the human perspective and looked like complete gibberish (you could draw some parallel with The Chuckle Brothers on LSD), the bots in this experiment had actually taken the decision to communicate in a bespoke language as a means of making the exchange more efficient. The problem with this approach was that the bots were designed to communicate with humans, not with each other. AI decision making isn’t something new either. Google Translate actually converts input to its own internal machine language before providing a translation. The developers at Google noticed this, but were happy for it to continue as it made the AI more effective. And now for reality. Is this considered acceptable when it’s supposed to enhance a process rather than effectively exclude a human from it ? The idea here is around interaction. There are a lot of rumours circulating the internet as to Facebook’s decision to pull the plug. Was it out of fear, or did the scientists decide to simply abandon the experiment because it didn’t produce the desired result ?

    The future of AI - and human existence ?

    A more disturbing point is that the AI appears to have had control of the decision making process, and did not need a human to choose or approve any request. We’ve actually had basic AI for many years in the form of speech recognition when calling a customer service centre, or when seeking help on websites in the form of unattended bots that can answer basic questions quickly and effectively - and in the event that they cannot answer, they have the intelligence to route the question elsewhere. But what happens if you take AI to a new level and give it control of something far more sinister, like military capabilities ? Here’s a video outlining how a particular scenario concerning the usage of autonomous weapons could work if we continue down the path of ignorance. Whilst it seems like Hollywood, the potential is very real and should be taken seriously. In fact, this particular footage, whilst fiction, has been taken very seriously, with names such as Elon Musk and Stephen Hawking lending strong support and backing to calls for a ban on autonomous weapons and the use of AI in their deployment.

    Does this strike a chord with you ? Is this really how we are going to allow AI to evolve ? We’ve had unmanned drones for a while now, and whilst they are effective at providing a mechanism for a surgical strike on a particular target, they are still controlled by humans who can override functionality, and ultimately decide when to execute. The real question here is just how far we want to take AI in terms of autonomy and decision making. Do we want AI to enrich our lives, or assume our identities, thus allowing the human race to slip into oblivion ? If we allow this to happen, where does our purpose lie, and what function would humanity then provide that AI can’t ? People need to realise that as soon as we fully embrace AI and give it control over aspects of our lives, we will effectively end up working for it rather than it working for us. Is AI going to pay for your pension ? No, but it could certainly replace your existence.

    In addition, how long before that “intelligence” decides you are surplus to requirements ? Sounds very “Hollywood” I’ll admit, but there really needs to be a clear boundary as to what the human race accepts as progress as opposed to elimination. Have I been watching too many Terminator films ? No. I love technology and fully embrace it, but replacing the human way of life with machines is not the answer to the world’s problems - only the downfall. It’s already started with driverless cars, and will only get worse - if we allow it.

  • And here’s an example of how AI is evolving in the sense of being exploited
    https://sudonix.org/topic/413/neural-networks-being-used-to-create-realistic-phishing-emails

  • Here’s another article that might make those not concerned by AI think again.

    https://globalnews.ca/news/9432503/chatgpt-exams-passing-mba-medical-licence-bar/

  • @phenomlab this is really interesting. I saw an article similar to this where a professor in religion gave chatgpt the bible and some other text and asked it to write a 6 page paper about a specific topic and to make it look like the professor wrote it. Three seconds later it was done and the paper it wrote was top notch and looked like it had been written by the professor.

    Looking at it from that aspect, it is scary to think a computer can do that. My other thought on it, is what if you gave the AI all the medical information that there is, including evidence based research and results from tests and research and all the different outcomes from patients world wide, what conclusion or maybe even new information it would come up with to help with disease, cancer and all that kind of stuff that could help everyone. It would probably take it a matter of seconds to figure it all out.

    I wonder if it could make diagnosing instantaneous and more accurate and maybe even better ways to fix an ailment.

    I also think that in the wrong hands it could also be very dangerous.

    It is very interesting.

  • @Madchatthew said in AI... A new dawn, or the demise of humanity ?:

    I wonder if it could make diagnosing instantaneous and more accurate and maybe even better ways to fix an ailment.

    This is an interesting take given your profession 🙂 I totally get it though - I can certainly see a hugely beneficial use case for this.

    @Madchatthew said in AI... A new dawn, or the demise of humanity ?:

    I also think that in the wrong hands it could also be very dangerous.

    They say a picture paints a thousand words…


  • @phenomlab said in AI... A new dawn, or the demise of humanity ?:

    They say a picture paints a thousand words…

    LOL - yes, in the making LOL

  • And this is certainly interesting. I came across this on Sky News this morning

    https://news.sky.com/story/godfather-of-ai-geoffrey-hinton-warns-about-advancement-of-technology-after-leaving-google-job-12871065

    Seems even someone considered the “Godfather of AI” has quit, and is now raising concerns around privacy and jobs (and we’re not talking about Steve here either 🙂 )

  • Here’s another article of interest relating to the same subject

    https://news.sky.com/story/artificial-intelligence-will-get-crazier-and-crazier-without-controls-a-leading-start-up-founder-warns-12886081

    And the quote which says it all

    “The labs themselves say this could pose an existential threat to humanity,” said Mr Mostaque

    A cause for concern? Absolutely.

  • A rare occasion where I actually agree with Elon Musk

    https://news.sky.com/story/elon-musk-says-artificial-intelligence-isnt-necessary-for-anything-12887975

    Some interesting quotes from that article

    “So just having more advanced weapons on the battlefield that can react faster than any human could is really what AI is capable of.”

    “Any future wars between advanced countries or at least countries with drone capability will be very much the drone wars.”

    When asked if AI advances the end of an empire, he replied: “I think it does. I don’t think (AI) is necessary for anything that we’re doing.”

    This is also worth watching.

    This further bolsters my view that AI needs to be regulated.

  • Google CEO Sundar Pichai admits AI dangers ‘keep me up at night’

  • This is an interesting admission from China - a country typically with a cavalier attitude to emerging tech

    https://news.sky.com/story/china-warns-over-ai-risk-as-president-xi-jinping-urges-national-security-improvements-12893557

  • And here - Boss of AI firm’s ‘worst fears’ are more worrying than creepy Senate party trick

    US politicians fear artificial intelligence (AI) technology is like a “bomb in a china shop”. And there was worrying evidence at a Senate committee on Tuesday from the industry itself that the tech could “cause significant harm”.

    https://news.sky.com/story/boss-of-ai-firms-worst-fears-are-more-worrying-than-creepy-senate-party-trick-12882348

  • An interesting argument, but with little foundation in my view

    “But many of our ingrained fears and worries also come from movies, media and books, like the AI characterisations in Ex Machina, The Terminator, and even going back to Isaac Asimov’s ideas which inspired the film I, Robot.”

    https://news.sky.com/story/terminator-and-other-sci-fi-films-blamed-for-publics-concerns-about-ai-12895427

  • @phenomlab yeap, but no need to fear 😄 this might even be better for humanity, since they will have a “common enemy” to fight against, so, maybe instead of fighting with each other, they will unite.

    In general, I do not embrace anthropocentric views well… and since human greed and money will determine how this will end, we all can guess what will happen…

    so sorry to say this mates, but if there is a robot uprising, I will sell out human race hard 🤣

  • @crazycells I understand your point - albeit that selling out the human race would also include you 😕

    There’s a great video on YouTube that goes into more depth (along with the “Slaughterbots” video in the first post) that I think is well worth watching. Unfortunately, it’s over an hour long, but does go into specific detail around the concerns. My personal concern is not one of having my job replaced by a machine - more about my existence.

  • And whilst it looks very much like I’m trying to hammer home a point here, see the below. Clearly, I’m not the only one concerned at the rate of AI’s development, and the consequences if not managed properly.

    https://news.sky.com/story/ai-could-help-produce-deadly-weapons-that-kill-humans-in-two-years-time-rishi-sunaks-adviser-warns-12897366

  • @phenomlab thanks for sharing. I will watch this.

    no worries 😄 I do not have a high opinion about human race, human greed wins each time, so I always feel it will be futile to resist. we are just one of the billions of species around us. thanks to evolution, our genes make us selfish creatures, but even if there is a catastrophe, I am pretty sure there will be at least a handful of survivors to continue.


  • @phenomlab maybe I did not understand it well, but I do not share the opinions of this article. Are we trying to prevent deadly weapons from being built or are we trying to prevent AI from being part of it 🙂

    Regulations might (and probably will) be bent by individual countries secretly. So, what will happen then?


  • Link vs Refresh

    @pobojmoks Do you see any errors being reported in the console ? At first guess (without seeing the actual code or the site itself), I’d say that this is AJAX callback related

  • @downpw Yes, exactly. Sudonix is about much more than NodeBB 🙂

  • One thing I’ve seen a lot of over my career is the “expert” myth being touted on LinkedIn and Twitter. The idea originates from psychologist K. Anders Ericsson, who studied the way people become experts in their fields, and was later discussed by Malcolm Gladwell in his book “Outliers”: “to become an expert it takes 10,000 hours (or approximately 10 years) of deliberate practice”. This paradigm (if you can indeed call it that) has been adopted by several so-called “experts” - mostly those within the Information Security and GDPR fields. This article isn’t about GDPR (for once), but mostly about those who consider themselves “experts” by virtue of the acronym. Prior to its implementation, nobody should have proclaimed themselves a GDPR “expert”. You cannot be an expert in something that wasn’t actually legally binding until 25 May 2018, nor, in my view, can you have invested sufficient time since inception to be an expert. GDPR is a vast universe, and you can’t claim to know all of it.

    Consultant ? Possibly, yes. Expert ? No.

    The associated sales campaign isn’t much better, and can be aligned to the children’s book “Chicken Licken”. For those unfamiliar with this concept, here is a walkthrough. I’m sure you’ll understand why I chose a children’s story in this case, as it seems to fit the bill very well. What I’ve seen over the last 12 months has been nothing short of amazing - but not in the sense of outstanding. I could align GDPR here to the PPI claims furore - for anyone unfamiliar with what this “uprising” is, here’s a synopsis.

    The “expert” fallacy

    Payment Protection Insurance (PPI) is the insurance sold alongside credit cards, loans and other finance agreements to ensure payments are made if the borrower is unable to make them due to sickness or unemployment. The PPI scandal has its roots set back as far as 1998, although compensatory payments did not officially start until 2011, once the review and court appeal process was completed. Since the deadline for PPI claims was announced as August 2019, the campaign has become intensively aggressive, with, it would seem, thousands of PPI “experts”. Again, I would question the authenticity of such a title. It seems that everyone is doing it; therefore, it must be easy to attain (a bit like the CISSP then). I witnessed the same shark pool of so-called “experts” before, back in the day when Y2K was the latest buzzword on everyone’s lips. Years of aggressive selling campaigns and, similarly, years of FUD (Fear, Uncertainty, Doubt - more effectively known as complete bulls…) caused an unprecedented spike that allowed companies and consultants (several of whom had never been heard of before) to suddenly appear out of the woodwork and assume the identity of “experts” in this field. In reality, it’s not possible to be a subject matter expert in a particular field or niche market unless you have extensive experience. If you compare a weapons expert to a GDPR “expert”, you’ll see just how weak this paradigm actually is. A weapons expert will have years of knowledge in the field, and could probably tell you which gun discharged a bullet just by looking at the expended shell casing. I very much doubt a self-styled GDPR expert can tell you what will happen in the event of an unknown scenario around the framework, the specific legal rights of the individual the data belongs to, and the implications for the institution affected. How can they, when nobody has even been exposed to such a scenario before ? This makes a GDPR expert in my view about as plausible as a Brexit expert specialising in Article 50.

    What defines an expert ?

    The focal point here is in the comparison. A weapons expert can be given a gun and a sample of shell casings, then asked to determine if the suspected weapon actually fired the supplied ammunition or not. Using a process of proven identification techniques, the expert can then determine if the gun provided is indeed the origin. This information is derived using established identification techniques - the indentations and markings in the shell casing created by the barrel from which the bullet was expelled, plus velocity and angle measurements obtained from firing the weapon. The impact of the bullet and exit damage is also analysed to determine a match based on material and critical evidence. Given the knowledge and experience level required to produce such results, how long do you think it took to reach this unrivalled plateau ? An expert isn’t solely based on knowledge. It’s not solely based on experience either. In fact, it’s a deep mixture of both. Deep in the sense of subject matter comprehension, and how to execute that same understanding along with real life experience to obtain the optimum result. Here’s an example. An information technology expert should be able to

    • Identify and eliminate potential bottlenecks
    • Address security concerns
    • Design for high availability
    • Factor in extensible scalability
    • Consider risk to adjacent and disparate technology, and conduct analysis
    • Ensure that any design proposal meets both the current criteria and beyond
    • Understand the business need for technology and be able to support it

    If I leveraged external consultancy for a project, I’d expect all of the above and probably more from anyone who labels themselves as an expert - or for that fact, an architect. Sadly, I’ve been disappointed on numerous occasions throughout my career where it became evident very quickly that the so-called expert (who, I hasten to add, earns more in an hour than I do in a day in most cases), hired for his “expertise and superior knowledge”, in fact appeared to know far less than I do about the same topic.

    How long does it really take to become an expert ?

    I’ve been in the information technology and security field since I was 16. I’m now 47, meaning 31 years’ experience (well, not quite 31, as this year isn’t over yet). If you consider that experience is acquired during an 8 hour day, you can use the following equation to determine the number of years needed to reach 10,000 hours

    10000 / 8 / 365 = 3.4246575342 - for the sake of simple mathematics, let’s say 3.5 years.

    However, in the initial calculation, it’s 10 years (which works out at roughly 2.7 hours invested per day) - making the expert title when aligned to GDPR even more unrealistic. As the regulation was adopted on 27 April 2016, the elapsed time period isn’t even enough to satisfy the first figure of 3.5 years, let alone the second. The reality here is that no amount of time invested in anything is going to make you an expert if you do not possess the prerequisite skills and a thorough understanding based on previous events to supplement and bolster the initial investment. I could spend 10,000 hours practicing a particular sport - yet effectively suck at it because my body (if you’ve met me, you’d know why) isn’t designed for the activity I’m requesting it to perform. Just because I’ve spent 10,000 hours reading about something doesn’t make me an expert by any stretch of the imagination. If I calculated the hours spanned over my career, I would arrive at the below. I’m basing this on an 8 hour day when in reality, most of my days are in fact much longer.

    31 x 365 x 8 = 90,520 hours

    Even when factoring in vacation based on 4 weeks per year (subject to variation, but I’ve gone for the mean average),

    31 x 28 x 8 = 6,944 hours to subtract

    This is only fair, as you are not (supposed to be) working when on holiday. Even with this subtraction, the total is still 83,576 hours. Does my investment make me an expert ? I think so, yes - based on the fact that 31 years dedicated to one area would indicate a high level of experience and professional standard - both of which I constantly strive to maintain. Still think 10,000 hours invested makes you an expert ? You decide ! What are your views around this ?
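
    If you want to play with the numbers yourself, here’s a minimal sketch of the above arithmetic in Python - the 8 hour day and 28 day holiday allowance are the same assumptions I’ve used throughout:

        # Sketch of the "10,000 hours" arithmetic used above.
        # Assumes an 8 hour working day and 28 days holiday per year,
        # matching the figures in the article.
        HOURS_PER_DAY = 8
        DAYS_PER_YEAR = 365
        HOLIDAY_DAYS_PER_YEAR = 28

        def years_to_reach(target_hours, hours_per_day=HOURS_PER_DAY):
            """Years of daily practice needed to accumulate target_hours."""
            return target_hours / (hours_per_day * DAYS_PER_YEAR)

        def career_hours(years):
            """Total working hours over a career, less holiday time."""
            worked = years * DAYS_PER_YEAR * HOURS_PER_DAY
            holidays = years * HOLIDAY_DAYS_PER_YEAR * HOURS_PER_DAY
            return worked - holidays

        print(round(years_to_reach(10_000), 2))  # 3.42 years at 8 hours/day
        print(career_hours(31))                  # 83576 hours over 31 years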

  • @crazycells I guess the worst part for me was the trolling - made so much worse by the fact that the moderators allowed it to continue, insisting that the PeerLyst community was setting an example by being allowed to “self moderate” - a statement which was completely ridiculous, and it wasn’t until someone other than myself pointed out that all of this toxic activity could in fact be crawled by Google that they decided to step in and start deleting posts.

    In fact, it reached a boiling point where the CEO herself had to step in and post an article stating their justification for “self moderation” which simply doesn’t work.

    The evidence here speaks for itself.

  • It’s a common occurrence in today’s modern world that virtually all organisations have a considerable budget for (or a strong focus on) information and cyber security. Often, larger organisations spend millions annually on significant improvements to their security program or framework, yet overlook arguably the most fundamental basics, which should be (but often are not) the building blocks of any fortified stronghold.

    We’ve spent so much time concentrating on the virtual aspect of security and all that it encompasses, but seem to have lost sight of what should arguably be the first item on the list – physical security. It doesn’t matter how much money and effort you plough into designing and securing your estate when you consider how vulnerable and easily negated the program or framework is if you neglect the physical element. Modern cyber crime has evolved, and it’s the general consensus these days that the traditional perimeter as entry point is rapidly losing its appeal from the accessibility versus yield perspective. Today’s discerning criminal is much more inclined to go for a softer and predictable target in the form of users themselves rather than spend hours on reconnaissance and black box probing looking for backdoors or other associated weak points in a network or associated infrastructure.

    Physical vs virtual

    So does this mean you should be focusing your efforts solely on the physical elements, and ignoring the perimeter altogether ? Absolutely not – doing so would be commercial suicide. However, the physical element should not be neglected either, but instead factored into any security design at the outset rather than being an afterthought. I’ve worked for a variety of organisations over my career – each of them with differing views and attitudes to risk concerning physical security. From the banking and finance sector to manufacturing, they all have common weaknesses. Weaknesses that should, in fact, have been eliminated from the outset rather than being a part of everyday activity. Take this as an example. In order to qualify for buildings and contents insurance, businesses with office space need to ensure that they have effective measures in place to secure that particular area. In most cases, modern security mechanisms dictate that proximity card readers are deployed at main entrances, rendering access impossible (when the locking mechanism is enforced) without a programmed access card or token. But how “impossible” is that access in reality ?

    Organisations often take an entire floor of a building, or at least a subset of it. This means that any doors dividing floors or areas occupied by other tenants must be secured against unauthorised access. Quite often, these floors have more than one exit point for a variety of health and safety / fire regulation reasons, and it’s this particular scenario that often goes unnoticed, or is unintentionally overlooked. Human nature dictates that it’s quicker to take the side exit when leaving the building rather than the main entrance, and the last employee leaving (in an ideal world) has the responsibility of ensuring that the door is locked behind them. However, the reality is often that the door is instead held open by a fire extinguisher, for example. Whilst this facilitates effective and easy access during the day, it has a significant impact on your physical security if that same door remains open and unattended all night. I’ve seen this particular offence repeatedly committed over months – not days or weeks – in most organisations I’ve worked for. In fact, this exact situation allowed thieves to steal a laptop left on a desk in the office of a finance firm I previously worked at.

    Theft in general is mostly based around opportunity. As an illustration, you could leave a £20 note / $20 bill on your desk and see how long it remained there before it went missing. I’m not implying here that anyone in particular is a thief, but again, it’s about opportunity. The same process can be aligned to information security. It’s commonplace to secure information systems with passwords, least privilege access, locked server rooms, and all the other usual mechanisms, but what about the physical elements ? It’s not just door locks. It’s anything else that could be classed as sensitive, such as printed documents left on copiers, long since forgotten and unloved, personally identifiable information left out on desks, misplaced smartphones, or even keys to restricted areas such as usually locked doors or cupboards. That 30 second window could be all that is required to trigger a breach of security – and even worse, of information classed as sensitive. Not only could your insurance refuse to pay out if you could not demonstrate beyond reasonable doubt that you had basic physical security measures in place, but (in the EU) you would have to notify the regulator (in this case, the ICO) that information had been stolen. Not only would it be of significant embarrassment to any firm that a “chancer” was able to casually stroll in and take anything they wanted unchallenged, it would also be significant in terms of the severity of such an information breach – and the resultant fines imposed by the ICO or SEC (from the regulatory perspective – in this case, GDPR) – at €20m or 4% of annual global (yes, global) turnover (if you are part of a larger organisation, that is actually 4% of the parent entity’s turnover – not just your firm’s) – whichever is the higher. Of equal significance is the need to notify the ICO within 72 hours of a discovered breach. In the event of electronic systems, you could gain intelligence about what was taken from a centralised logging system (if you have one – that’s another horror story altogether if you don’t and you are breached) from the “electronic” angle of any breach via traditional cyber channels, but do you know exactly what information has taken up residence on desks ? Simple answer ? No.
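
    To make the fine mechanics concrete, here’s a quick worked example in Python. The turnover figures are entirely hypothetical – the point is simply that the higher of the two thresholds applies:

        # The GDPR maximum fine quoted above: the higher of EUR 20m
        # or 4% of annual global turnover (parent entity turnover,
        # if part of a larger organisation).
        def max_gdpr_fine(global_turnover_eur):
            return max(20_000_000, 0.04 * global_turnover_eur)

        # Hypothetical: a parent entity turning over EUR 2bn annually
        print(max_gdpr_fine(2_000_000_000))  # 80000000.0 - the 4% wins
        print(max_gdpr_fine(100_000_000))    # 20000000 - the floor wins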

    It’s for this very reason that several firms operate a “clean desk” policy. Not just for aesthetic reasons, but for information security reasons. Paper shredders are a great invention, but they lack AI and machine learning to wheel themselves around your office looking for sensitive hard copy (printed) data to destroy in order for you to remain compliant with your information security policy (now there’s an invention…).

    But how secure are these “unbreakable” locks ? Despite the furore around physical security in the form of smart locks, thieves seem to be able to bypass these “security measures” with little effort. Here’s a short video courtesy of ABC news detailing just how easy it was (and still is in some cases) to gain access to hotel rooms using cheap technology, tools, and “how-to” articles from YouTube.

    Surveillance systems aren’t exempt either. As an example, a camera system can be rendered useless with a can of spray paint, or even something as simple as a grocery bag if it’s in full view. Admittedly, this would require some previous reconnaissance to determine the camera locations before committing any offence, but it’s certainly a viable prospect if that system is not monitored regularly. Additionally, (in the UK at least) the usage of CCTV in a commercial setting requires a written visible notice to be displayed informing those affected that they are in fact being recorded (along with an impact assessment around the usage), and is also subject to various other controls around privacy, usage, security, and retention periods.

    Unbreakable locks ?

    Then there’s the “unbreakable” door lock. Tapplock advertised their “unbreakable smart lock”, only to find that it was vulnerable to the most basic of all forced entry – the screwdriver. Have a look at this article courtesy of “The Register”. In all seriousness, there aren’t that many locks that cannot be effectively bypassed. Now, I know what you’re thinking. If the lock cannot be effectively opened, then how do you gain entry ? It’s much simpler than you think. For a great demonstration, we’ll hand over to a scene from “RED” that shows exactly how this would work. The lock itself may have a pass-code that “…changes every 6 hours…” and is “unbreakable”, but that doesn’t extend to the material that holds both the door and the access panel for the lock itself.

    And so onto the actual point. Unless your “unbreakable” door lock is housed within fortified brick or concrete walls and is impervious to drills, oxy-acetylene cutting equipment, and proximity explosive charges (ok, that’s a little over the top…), it should not be classed as “secure”. One of the best examples I’ve seen is a metal door housed in a plasterboard / false wall. Personally, if I wanted access to the room that badly, I’d go through the wall with the nearest fire extinguisher rather than fiddle with the lock itself. All it takes is a tap on the wall, and you’ll know for sure if it’s hollow just by the sound it makes. Finally, there’s the even more ridiculous – a reinforced door lock with a viewing pane (of course, glass). Why bother with the lock when you can simply shatter the glass, put your hand through, and unlock the door ?

    Conclusion

    There are always a variety of reasons why you wouldn’t build your comms room out of brick or concrete – mostly attributed to building and landlord regulations in the premises that businesses occupy. Arguably, if you wanted to build something like this and occupied the ground floor, then yes, you could indeed carry out this work if it was permitted. Most data centres that are truly secure are patrolled 24 x 7 by security, located underground, or within heavily fortified surroundings. Here is an example of one of the most physically secure data centres in the world.

    https://www.identiv.com/resources/blog/the-worlds-most-secure-buildings-bahnhof-data-center

    Virtually all physical security aspects eventually circle back to two common topics – budget, and attitude to risk. The real question here is what value you place on your data – particularly if you are a custodian of it, but the data relates to others. Leaking data because of exceptionally weak security practices in today’s modern age is an unfortunate risk – one that you cannot afford to overlook.

    What are your thoughts around physical security ?

  • I read an article by Glenn S. Gerstell (Mr. Gerstell is the general counsel of the National Security Agency) with a great deal of interest. That same article is detailed below

    The National Security Operations Center occupies a large windowless room, bathed in blue light, on the third floor of the National Security Agency’s headquarters outside of Washington. For the past 46 years, around the clock without a single interruption, a team of senior military and intelligence officials has staffed this national security nerve center.

    The center’s senior operations officer is surrounded by glowing high-definition monitors showing information about things like Pentagon computer networks, military and civilian air traffic in the Middle East and video feeds from drones in Afghanistan. The officer is authorized to notify the president any time of the day or night of a critical threat.

    Just down a staircase outside the operations center is the Defense Special Missile and Aeronautics Center, which keeps track of missile and satellite launches by China, North Korea, Russia, Iran and other countries. If North Korea was ever to launch an intercontinental ballistic missile toward Los Angeles, those keeping watch might have half an hour or more between the time of detection to the time the missile would land at the target. At least in theory, that is enough time to alert the operations center two floors above and alert the military to shoot down the missile.

    But these early-warning centers have no ability to issue a warning to the president that would stop a cyberattack that takes down a regional or national power grid or to intercept a hypersonic cruise missile launched from Russia or China. The cyberattack can be detected only upon occurrence, and the hypersonic missile, only seconds or at best minutes before attack. And even if we could detect a missile flying at low altitudes at 20 times the speed of sound, we have no way of stopping it.

    Something I’ve been saying all along is that technology alone cannot stop cyber attacks. Often referred to as a “silver bullet”, or “blinky lights”, this provides the misconception that by purchasing that new, shiny device, you’re completely secure. Sorry folks, but this just isn’t true. In fact, cyber crime, and its associated plethora of hourly attacks, is evolving at an alarming rate - much faster than you’d like to believe.

    You’d think that for all the huge technological advances we have made in this world, the almost daily plethora of corporate security breaches, high profile data loss, and individuals being scammed every day would have dropped down to nothing more than a trickle - even to the point where they became virtually non-existent. We are making huge progress with landings on Mars, autonomous space vehicles, artificial intelligence, big data, machine learning, and essentially reaching new heights on a daily basis thanks to some of the most creative minds in this technological sphere. But somehow, we have lost our way, stumbled and fallen - mostly on our own sword. But why ?

    Just like the Y2K gold rush in the late 90s, information security has become the next big thing, with companies ranging from startups with a few employees to enterprise organisations touting their services and platforms as best in class, and the next “must have” tool in the blue team’s already bulging arsenal. Tools that on their own in fact have little effect unless they are combined with something else equally expensive to run. We’ve spent so much time focusing on efforts ranging from which SIEM solution we need to what will be labelled as the ultimate silver bullet capable of eliminating the threat of attack once and for all that, in my opinion, we have lost sight of the original goal, with regulatory requirements and best practice pushing us towards products and services that either require additional staff to manage, or are incredibly expensive to deploy and ultimately run. Supposedly, in an effort to simplify the management, analysis, and processing of millions of logs per hour, we’ve created even more platforms to ingest this data in order to make sense of it.

    In reality, all we have created is a shark infested pool where larger companies consume up and coming tech startups for breakfast to ensure that they do not pose a threat to their business model / gravy train, therefore enabling them to dominate the space even further with their newly enhanced reach.

    How did we get to this ? What happened to thought process and working together in order to combat the threat that increases on an hourly basis ? We seem to be so focused on making sure that we aren’t the next organisation to be breached that we have lost the art of communication and the full benefit of sharing information so that it assists others in their journey. We’ve become so obsessed with the daily onslaught of platforms that we no longer seem to have the time to even think, let alone take stock and regroup - not as an individual, but as a community.

    There are a number of ”communities” that offer “free” forums and products under the open source banner, but sadly, these seem to be turning into paid-for products at a rate of knots. I understand people need to live and make money, but if awareness was raised to the point where users wouldn’t click links in phishing emails, fall for the fake emergency wire transfer request from the CEO, or be suddenly tempted by the latest offer in terms of cheap technology then we might - just might - be able to make the world a better place. In order to make this work, we first need to remove the stigma that has become so ingrained by the media and set in stone like King Arthur’s Excalibur. Let’s first start with the hacker / criminal parallel. They aren’t the same thing folks.

    Nope. Not at all. Hackers are those people who find ingenious ways of getting into networks and infrastructure that you never even knew existed, trick you into parting with sensitive information (then inform you as to where you went wrong), and most importantly, educate you so that you and your network are far more secure against real attacks and real criminals. These people exist to increase your awareness, and by definition, security footprint - not use it against you in order to steal. Hackers do like to wear hoodies as they are comfortable, but you won’t find one using gloves, wearing a balaclava or sunglasses, and in some cases, they actually prefer desktops rather than laptops.

    The image being portrayed here is one perpetuated by the media, and it has certainly been effective - but not in a positive way. The word “hacker” is now synonymous with criminals, where it really shouldn’t be. One defines security, whereas the other sets out to break it. If we locked up all the hackers on this planet, we’d only have the blue team remaining. It’s the job of the red team (hackers) to see how strong your defences are. Hackers exist to educate, not infiltrate (at least, not without asking for permission first :))

    I personally have lost count of how many times I’ve sat in meetings where a sales pitch around a security platform is touted as a one stop shop or a Swiss army knife that can protect your entire network from a breach. Admittedly, there’s some great technology on the market that performs a variety of functions to protect your estate, but they all fail to take into consideration the weakest link in any chain - users. Irrespective of bleeding edge “combat platforms” (as I like to refer to them), criminals are becoming very adept in their approach, leveraging techniques such as social engineering. It should come as no surprise for you to learn that this type of attack can literally walk past your shiny new defence system as it relies on the one vulnerability you cannot predict - the human. Hence the term “hacking humans”.

    I’m of the firm opinion that if you want to outsmart a criminal, you have to think like one. Whilst newfangled platforms are created to assist in the fight against cyber crime, they are complex to configure, suffer from alerting bloat (far too many emails so you end up missing the one where your network is actually being compromised), or are simply overwhelming and difficult to understand. Here’s the thing. You don’t need (although they do help) expensive bleeding edge platforms with flashing lights to tell you where weak points lie within your network, but you do need to understand how a criminal can and will exploit these. A vulnerability cannot be leveraged if it no longer exists, or even better, never even existed to begin with.

    And so, on with the mission, and the real reason as to why I created this site. I’ve been working in information technology for 30 years, and have a very strong technical background in network design and information security.

    What I want to do is create a communication, information, and awareness sharing platform. I created the original concept of what I thought this new community should look like in my head, but it’s taken a while to finally develop it, get people interested, and on board. To my mind, those from inside and outside of the information security arena will pool together, share knowledge, raise awareness, and - probably most important of all - harness this new found force and drive change forward.

    The breaches we are witnessing on a daily basis are not going to simply stop. They will increase dramatically in their frequency, and will get worse with each incident.

    Let’s stop the “hackers are criminals” myth, start using our own unique talents in this field, and make a community that

    • is able to bring effective change
    • treats everyone as equals

    The community, once fully established, could easily be the catalyst for change - both in perception, and inception.

    Why not wield the stick for a change instead of being beaten with it, and work as a global virtual team ?

    Will you join me ? In case I haven’t already mentioned it, this initiative has no cost - only gains. It is entirely free.

  • When you look at your servers or surrounding networks, what exactly do you see ? A work of art, perhaps ? Sadly, this is anything but the picture painted for most networks when you begin to look under the hood. What I’m alluding to here is that beauty is only skin deep - in the sense that neat cabling resembling art from the Sistine Chapel, tidy racks, and huge comms rooms full of flashing lights look appealing from the eye candy perspective and will probably impress clients, but in several cases, this is the ultimate wolf in sheep’s clothing. Sounds harsh ? Of course it does, but with good intentions and reasoning. There’s not a single person responsible for servers and networks on this planet who will willingly admit that whilst his or her network looks like a masterpiece to the untrained eye, it’s a complete disaster in terms of security underneath.

    In reality, it’s quite the opposite. Organisations often purchase bleeding edge infrastructure as a means of leveraging the clear technical advantages, enhanced security, and competitive edge it provides. However, under the impressive state of the art ambience and air conditioning often lies an unwanted beast. This mostly invisible beast lives on low-hanging fruit, will be tempted to help itself at any given opportunity, and is always hungry. For those becoming slightly bewildered at this point, there really isn’t an invisible beast lurking around your network that eats fruit. But, with a poorly secured infrastructure, there might as well be. The beast being referenced here is an uninvited intruder in your network. A bad actor, threat actor, bad guy, criminal…. call it what you want (just don’t use the word hacker) can find their way inside your network by leveraging the one thing that I’ve seen time and time again in literally every organisation I’ve ever worked for throughout my career - the default username and password. I really can’t stress enough the importance of changing this on new (and existing) network equipment, and it doesn’t stop there either.

    Changing the default username and password is about 10% of the puzzle when it comes to security and basic protection principles. Even the most complex credentials can be bypassed completely by a vulnerability (or in some cases, a backdoor) in ageing firmware on switches, firewalls, routers, storage arrays, and a wealth of other devices - including printers, which incidentally make an ideal watering hole thanks to the defaults of FTP, HTTP, SNMP, and Telnet - most, if not all, of which are usually always on. As cheaper printers do not have screens like their more expensive copier counterparts (the estate is reduced to make the device smaller and cheaper), any potential criminal can hide here and not be detected - often for months at a time - and arguably, they could live in a copier without you being aware too. A classic example of an unknown exploit into a system was the Juniper firewall backdoor that permitted full admin access using a specific password - regardless of the one set by the owner. Whilst the Juniper exploit didn’t exactly involve a default username and password as such (the exploit was hard-coded into the firmware, meaning that any “user” with the right coded password and remote SSH access would achieve full control over the device), it did leverage the fact that poorly configured devices could have SSH accessible to 0.0.0.0/0 (essentially, the entire planet) rather than a trusted set of IP addresses - typically an approved management network.

    We all need to get out of the mindset of taking something out of a box, plugging it into our network, and then doing nothing else - such as changing the default username and password (ideally disabling it completely and replacing it with a unique ID) or turning off access protocols that we do not want or need. The real issue here is that today’s technology standards make it simple for literally anyone to purchase something and set it up within a few minutes without considering that a simple port scan of a subnet can reveal a wealth of information to an attacker - several scanning tools are equipped with a default username and password dictionary that is leveraged against the device in question if it responds to a request. Changing the default configuration instead of leaving it to chance can dramatically reduce the attack landscape in your network. Failure to do so changes “plug and play” to “ripe for picking”, and it’s those devices that perform seemingly “minor” functions in your network that are the easiest to exploit - and leverage in order to gain access to neighbouring ancillary services. Whilst not an immediate gateway into another device, the compromised system can easily give an attacker a good overview of what else is on the same subnet, for example.
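
    To illustrate just how little effort that first step takes, here’s a minimal Python sketch of the kind of sweep an attacker (or you, as part of an audit of your own estate) could run to find devices still answering on default service ports. The subnet is a placeholder - and it goes without saying, only scan networks you own or are authorised to test:

        # Minimal sweep for "always on" default services (FTP, SSH, Telnet, HTTP).
        # Only run this against networks you own or are authorised to test.
        import socket

        DEFAULT_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP"}

        def sweep(host, timeout=0.5):
            """Return the default services a host answers on."""
            found = []
            for port, name in DEFAULT_PORTS.items():
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    if s.connect_ex((host, port)) == 0:  # 0 means connected
                        found.append(name)
            return found

        # Placeholder subnet - adjust to suit your own environment
        for n in range(1, 255):
            host = f"192.168.1.{n}"
            services = sweep(host)
            if services:
                print(host, "answers on:", ", ".join(services))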

    So how did we arrive at the low hanging fruit paradigm ?

    It’s simple enough if you consider the way that fruit can weigh down the branch to the point where it is low enough to be picked easily. A poorly secured network contains many vulnerabilities that can be leveraged and exploited very easily without the need for much effort on the part of an attacker. It’s almost like a horse grazing in a field next to an orchard where the apples hang over the fence. It’s easily picked, often overlooked, and gone in seconds. When this term is used in information security, a common parallel is the path of least resistance. For example, a pickpocket can acquire your wallet without you even being aware, and this requires a high degree of skill in order to evade detection yet still achieve the primary objective. On the other hand, someone strolling down the street with an expensive camera hanging over their shoulder is a classic example of the low hanging fruit synopsis in the sense that this theft is based on an opportunity that wouldn’t require much effort - yet with a high yield. Here’s an example of how that very scenario could well play out.

    Now, as much as we’d all like to handle cyber crime in this way, we can’t. It’s illegal 🙂

    What isn’t illegal is prevention. 80% of security is based on best practice. Admittedly, there is a fair argument as to what exactly is classed as “best” these days, although it’s a relatively well known fact that patching the Windows operating system, for example, is one of the best ways to stamp out a vulnerability - but only the vulnerability that the patch is designed to protect against. Windows is just the tip of the iceberg when it comes to vulnerabilities - it’s not just operating systems that suffer, but applications, too. You could take a Windows based IIS server, harden it in terms of permitted protocols and services, plus install all of the available patches - yet have an outdated version of WordPress running (see here for some tips on how to reduce that threat), or often even worse, outdated plugins that allow remote code execution. The low hanging fruit problem becomes even more obvious when you consider breaches such as the well-publicised Mossack Fonseca (“Panama Papers”). What became clear after an investigation is that the attackers in this case leveraged vulnerabilities in the firm’s public facing WordPress and Joomla installations - this in fact allowed them to exploit an equally vulnerable mail server by brute-forcing it.

    So what should you do ? The answer is simple. It’s harvest time.

    If there is no low-hanging fruit to pick, life becomes that much more difficult for any attacker looking for a quick “win”. Unless determined, it’s unlikely that your average attacker is going to spend a significant amount of time on a target (unless it’s Fort Knox - then you’d have to question the sophistication) only to walk away empty handed with nothing to show for the effort. To this end, below are my top recommendations. They are not new, not exhaustive, and certainly not rocket science - yet they are surprisingly missing from the “security 101” perspective in several organisations.

    • Change the default username and password on ALL infrastructure. It doesn’t matter if it’s not publicly accessible - this is irrelevant when you consider how many threats originate from the inside. If you do have to keep the default username (in other words, it can’t be disabled), set the lowest possible access permissions and configure a strong password.
    • Close all windows - in this case, lock down protocols and ports that are not essential - and if you really do need them open, ensure that access to them is restricted.
    • Deploy MFA (or at least 2FA) to all public facing systems, and to those that contain sensitive or personally identifiable information.
    • Deploy adequate monitoring and logging, with a sane level of retention. Without any means of forensic examination, a bad actor can be in and out of your network well before you even realise a breach may have taken place - and without decent logging, you have no way of confirming, or even worse, quantifying your suspicion. That can spell disaster in regulated industries. Don’t shoot yourself in the foot.
    • Ensure all Windows servers and PCs are up to date with the latest patches. The same applies to Linux and Mac systems - despite the hype, they are vulnerable to an extent (although not in the same way as Windows), and attacks are notoriously more difficult to deploy, typically needing to take the form of a rootkit to work properly.
    • Do not let routers, firewalls, switches, etc. “slip” in terms of firmware updates. Keep yourself and your team regularly informed about the latest vulnerabilities, assess their impact, and most importantly, plan an update review. Neglecting firmware on critical infrastructure can have a dramatic effect on your overall security.
    • Lock down USB ports and CD/DVD drives, and do not permit access to file sharing, social media, or web based email. This has been an industry standard for years, but you’d be surprised at just how many organisations still leave these open and do not consider them a risk.
    • Reduce the attack surface by segmenting your network using VLANs (a quick way to sanity-check this is sketched after this list). For example, the sales VLAN does not (and shouldn’t) need to connect directly to accounting. A ransomware or malware outbreak in sales would then not traverse to other VLANs, restricting the spread. A flat network is simple to manage, but it’s a level playing field for an attacker if all the assets are in the same space.
    • Don’t use an account with admin rights to perform your daily duties. There are no prizes for guessing the level of potential damage this can cause if your account is compromised, or you land up with malware on your PC.
    • Educate and phish your users on a continual basis. They are a gateway directly into your network, and bypassing them is surprisingly easy. You only have to look at the success of phishing campaigns to realise that they are (and always will be) the weakest link.
    • Devise a consistent security and risk review framework. Conducting periodic security reviews is always a good move, and you’d be surprised at just what is lurking on your network without your knowledge. There needn’t be a huge budget for this: a number of open source projects and platforms make the identification side simple, although you’ll still need to complete the “grunt” work of remediation. I am currently authoring a framework that will be open source, and will share it with the community once it is completed.
    • Conduct regular governance and due diligence on vendors - particularly those that handle information considered sensitive (think GDPR). If their network is breached, any information they hold about your network and its users is also at risk.
    • Identify weak or potential risk areas within your network.
    • Engage with business leaders and management to raise awareness of best practice and information security.
    • Perform breach simulations, and involve senior management in the exercise. As the fundamental core of the business function, they also need to understand the risk - and more importantly, the decisions and communication that are inevitable post breach.
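
    On the segmentation point above, a quick way to sanity-check that your VLAN boundaries actually hold is to attempt the very connections your inter-VLAN rules should be blocking, and flag any that succeed. The Python sketch below does exactly that; the segment names, addresses, and ports are invented purely for illustration.

```python
#!/usr/bin/env python3
# Crude segmentation sanity check, per the VLAN recommendation above:
# from a host in one segment, attempt connections that the inter-VLAN
# rules should block, and flag any that succeed. The segment names,
# addresses, and ports are invented for the example.

import socket

# (description, host, port) tuples that SHOULD be unreachable from here
SHOULD_BE_BLOCKED = [
    ("sales -> accounting file share", "10.0.20.10", 445),
    ("sales -> accounting database", "10.0.20.11", 1433),
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for label, host, port in SHOULD_BE_BLOCKED:
        if reachable(host, port):
            print(f"{label} ({host}:{port}): OPEN - segmentation gap!")
        else:
            print(f"{label} ({host}:{port}): blocked, as intended")
```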

    There is no silver bullet when it comes to protecting your network, information, and reputation. However, the list above will form the basis of a solid framework.

    Let’s not be complacent - let’s be compliant.

  • Ever heard of KISS ? Nope - not the rock band.

    What I’m referring to is the acronym (“keep it simple, stupid”) reportedly coined by Kelly Johnson, lead engineer at the Lockheed Skunk Works (creators of the Lockheed U-2 and SR-71 Blackbird spy planes, among many others). The principle captured the relationship between the way things break and the sophistication available to repair them – designs should be simple enough to fix with the skills and tools actually to hand. You might be puzzled as to why I’d write about something like this, but it’s a situation I see constantly – one I like to refer to as “over thinker syndrome”. What do I mean by this ? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge, and you can land up in a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect. And the technical reasoning is usually to blame.

    Some years ago, in a previous career, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. Quite rightly, he wanted to investigate, and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster – none more so than I – so obviously, I gave the go-ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “……it’s rebooting itself every day, so how will that help ?”

    There’s always one, isn’t there ? The final (and honestly, more realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, it would show itself randomly throughout the day rather than waiting until 2am. Not wanting to come across as condescending, I agreed, but at the same time set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigations were taking place. Nor was I concerned about what could, at this point, be seen as wasting someone’s time when you think you may already have the answer to what now seemed an impossible problem. This is where experience eclipses formal qualifications hands down. Those with university degrees may scoff at that, but people with heavily analytical thinking patterns often avoid logic like the plague and go off on a wild tangent, looking for a dramatically technical explanation and solution when the answer is much simpler than they’d expect.

    After witnessing the pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen, and then asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at the problem from a logical angle rather than a technical one. The confused look I received was priceless – the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds), he asked me what I meant. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    ………an old looking UPS that had a red light illuminated at the front like it had been a prop in a Terminator film.

    Suddenly, the real cause of the issue dawned on the tech like a morning sunrise over the Serengeti. The Exchange Server was unexpectedly connected to a UPS with a faulty battery. The UPS was conducting a self test at 2am each morning, and because that test failed owing to the dead battery, the connected server lost power – then started back up once the offending equipment left bypass mode and came back online.

    Where is this going, you might ask ? Here’s the moral of this particular story:

    • Just because a problem involves technology, it doesn’t mean the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes it makes more sense to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement – don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience – it’s earned, gained, and leaves an indelible mark