How do you manage IT pros?

  • I’ve just read this article with a great deal of interest. Whilst it’s not “perfect” in the way it’s written, it does a very good job of explaining the IT function to a tee - and despite having been written in 2009, it’s still factually accurate and completely relevant.

    https://www.computerworld.com/article/2527153/opinion-the-unspoken-truth-about-managing-geeks.html

    This is my interpretation:

    The points made are hard to disagree with. Yes, IT pros do want their managers to be technically competent - there’s nothing worse than a manager who has never been “on the tools” and is non-technical; they’re about as much use as a chocolate fireguard as a sounding board for technical issues that a specific tech cannot easily resolve.

    I’ve been in senior management since 2016 and being “on the tools” previously for 30+ years has enabled me to see both the business and technical angles - and equally appreciate both of them. Despite my management role, I still maintain a strong technical presence, and am (probably) the most senior and experienced technical resource in my team.

    That’s not to say that the team members I do have aren’t up to the job - very much the opposite, in fact. For the most part, they work unsupervised and only call on my skill set when they have exhausted their own and need someone with a trained ear to bounce ideas off.

    On the flip side, I’ve worked with some cowboys in my industry who can talk the talk but not walk the walk - and they are exposed very quickly in smaller firms where it’s much harder to hide technical deficit behind other team members.

    The hallmark of a good manager is knowing how much is involved in a specific project or task in order to steer it to completion, and being willing to step back and let others in the team be in the driving seat. A huge plus is knowing how to get the best out of each individual team member without deploying pointless techniques such as micromanagement - in other words, be on their wavelength and understand their strengths and weaknesses, then use those to the advantage of the team rather than the individual.

    Sure, there will always be those in the team who you wouldn’t put in front of clients - not because they don’t know their field of expertise, but because they may lack the necessary polish or soft skills to give clients a warm fuzzy feeling, or may be unable (or simply unwilling) to explain technology to someone without a fundamental understanding of how a variety of components and services intersect.

    That should never be seen as a negative though. A strong manager recognizes that whilst some team members are uncomfortable with being “front of house”, they excel in other areas, supporting and maintaining technology that most users don’t even realize exists yet use daily (or some variant of it). It is these skills that keep IT departments and associated technologies running 24x7x365, and we should champion them more than we already do from the business perspective.

  • Absolutely - you can be very good technically and yet not good at supporting a customer over the phone. Each element of a team complements the others, and you have to know how to take advantage of that, as a good manager or head of an IT department should.

  • @DownPW yes, exactly my point.


  • Same - my 8T is still good for now after 2 years.

    Besides that, I don’t like Samsung’s philosophy, and I don’t like their UI (One UI) either. The only brand whose hardware catches my eye - and especially the software overlay - is Nothing, with Nothing OS. But I know what you think of them 🙂

  • Just seen this post pop up on Sky News:

    https://news.sky.com/story/elon-musks-brain-chip-firm-given-all-clear-to-recruit-for-human-trials-12965469

    He has claimed the devices are so safe he would happily use his children as test subjects.

    Is this guy completely insane? You’d seriously use your kids as guinea pigs in human trials? This guy clearly has more money than sense, and anyone who’d put their children in danger in the name of technology “advances” should seriously question their own ethics - I’m honestly shocked that nobody else seems to have commented on this.

    This entire “experiment” is dangerous to say the least in my view as there is huge potential for error. However, reading the below article where a paralyzed man was able to walk again thanks to a neuro “bridge” is truly ground breaking and life changing for that individual.

    https://news.sky.com/story/paralysed-man-walks-again-thanks-to-digital-bridge-that-wirelessly-reconnects-brain-and-spinal-cord-12888128

    However, this is reputable Swiss technology at its finest - Switzerland’s Lausanne University Hospital, the University of Lausanne, and the Swiss Federal Institute of Technology Lausanne were all involved in the process, and the implants themselves were developed by the French Atomic Energy Commission.

    Musk’s “off the cuff” remark makes the entire process sound “cavalier” in my view and the brain isn’t something that can be manipulated without dire consequences for the patient if you get it wrong.

    I daresay there will be agreements drafted by lawyers which each recipient of this technology will need to sign, exonerating Neuralink and its executives of all responsibility should anything go wrong.

    I must admit, I’m torn here (in the sense of the Swiss experiment) - part of me finds it morally wrong to interfere with the human brain like this because of the potential for irreversible damage, although the benefits are huge, obviously life changing for the recipient, and in most cases may outweigh the risk (at what level I cannot comment not being a neurosurgeon of course).

    Interested in other views - would you offer yourself as a test subject for this? If I were in a wheelchair and couldn’t move, I think I probably would, but I’d need assurance that the technology and its associated procedure are safe - and at this stage, I’m not convinced that’s a guarantee that can be given. There are of course no real guarantees with anything these days, but this is a leap of faith that, once taken, cannot be reversed if it goes wrong.

  • @Panda said in Wasting time on a system that hangs on boot:

    Why do you prefer to use KDE Linux distro, over say Ubuntu?

    A matter of taste really. I’ve tried pretty much every Linux distro out there over the years, and whilst I started with Ubuntu, I also used Linux Mint for a long time. All of them are Debian-based anyway 😁

    I guess I fell in love with KDE (Neon) because of the amount of effort they’d put into the UI.

    I agree with the lead and the OS statement, which is why I suspect Windows simply ignored it (although the device also worked fine there, so it clearly wasn’t that faulty).

  • @DownPW If you don’t mind a retro Dot Matrix display type - why on earth would anyone want that? I get the concept, but it’s nothing more than a gimmick and adds zero value to the operation of the handset.

    Sustainable product… with a £600 plus price tag…

    “Nothing Phone”? More like “Nothing Special” 😄

  • @veronikya said in Cloudflare bot fight mode and Google search:

    docker modifications are a pain in the ass,

    I couldn’t have put that better myself - such an apt description. I too have “been there” with this pain factor, and I swore I’d never do it again.

  • Anyone working in the information and infrastructure security space will be more than familiar with the non-stop evolution that is vulnerability management. Seemingly on a daily basis, we see new attacks emerging, and old mechanisms that you thought were well and truly dead resurface with “Frankenstein”-like capabilities, rendering the existing defences designed to combat that particular threat either inefficient or, in some cases, completely ineffective. All too often, we see previous campaigns resurface with newer destructive capabilities designed to extort from both the financial and blackmail perspectives.

    It’s the function of the “Blue Team” to (in several cases) work around the clock to patch a security vulnerability identified in a system, and ensure that the technology landscape and estate is as healthy as is feasibly possible. On the flip side, it’s the function of the “Red Team” to identify hidden vulnerabilities in your systems and associated networks, and provide assistance around the remediation of the identified threat in a controlled manner.

    Depending on your requirements, the minimum industry-accepted testing frequency from the “Red Team” perspective is once per year, and typically involves the traditional “perimeter” (externally facing devices such as firewalls, routers, etc.), websites, public-facing applications, and anything else exposed to the internet. Whilst this satisfies the “tick in the box” requirement on infrastructure that traditionally never changes, is it really sufficient in today’s ever-changing environments? The answer here is no.

    With the arrival of flexible computing, virtual data centres, SaaS, IaaS, IoT, and literally every other acronym relating to technology comes a new level of risk. The evolution of system and application capabilities has meant that these very systems are in most cases self-learning (and, for networks, self-healing). Application algorithms, Machine Learning, and Artificial Intelligence can all introduce an unintended vulnerability throughout the development lifecycle; therefore, failing to test, address, and validate the security of any new application or modern infrastructure that is public facing is a breach waiting to happen. For those “in the industry”, how many times have you been met with this very scenario:

    “Blue Team: We fixed the vulnerabilities that the Red Team said they’d found…”
    “Red Team: We found the vulnerabilities that the Blue Team said they’d fixed…”

    Does this sound familiar?

    What I’m alluding to here is that security isn’t “fire and forget”. It’s a multi-faceted, complex process of evolution that, very much like the earth itself, is constantly spinning. Vulnerabilities evolve at an alarming rate, and your security program needs to evolve with them rather than simply “stopping”, even for a short period of time. It’s surprising (and in all honesty, worrying) the number of businesses that do not currently perform an internal vulnerability assessment (and, even worse, have no plans to). You’ll notice here I do not refer to this as a penetration test - you can’t “penetrate” something you are already sitting inside. The purpose of this exercise is to engage a third-party vendor (subject to the usual Non-Disclosure Agreement process) for a couple of days. Let them sit directly inside your network, and see what they can discover. Topology maps and subnets help, but in reality, this is a discovery “mission”, and it’s up to the tester how they handle the exercise.

    The important component here is scope. Additionally, there are always boundaries. For example, I typically prefer a proof of concept rather than a tester blundering in and using a “capture the flag” approach that could cause significant disruption or damage to existing processes - particularly in-house development. It’s vital that you “set the tone” of what is acceptable, and what you expect to gain from the exercise, at the beginning of the engagement. Essentially, the mantra here is that the evolution wheel never stops - it’s why security personnel are always busy, and CISOs never sleep 🙂

    These days, a pragmatic approach is essential in order to manage a security framework properly. Gone are the days of annual testing alone, being dismissive around “low level” threats without fully understanding their capabilities, and brushing identified vulnerabilities “under the carpet”. The annual testing still holds significant value, but only if undertaken by an independent body, such as those accredited by CREST (for example).

    You can reduce the exposure to risk in your own environment by creating your own security framework and adopting a frequent vulnerability scanning schedule with self-remediation. Not only does this lower the risk to your overall environment, it also demonstrates to clients and vendors who conduct frequent assessments as part of their Due Diligence programs that you take security seriously. Identifying vulnerabilities is one thing; remediating them is another. You essentially need to “find a balance” in terms of deciding which comes first. The obvious route is to target critical, high, and medium risk whilst leaving the “low risk” items behind, or on the “back burner”.
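    A remediation-triage pass along these lines can be sketched in a few lines of Python. This is an illustrative sketch only: the finding records and field names are hypothetical, while the severity bands follow the published CVSS v3 qualitative rating scale.

```python
# Illustrative remediation-triage sketch. The finding records and field
# names below are hypothetical examples; the severity bands follow the
# CVSS v3 rating scale (critical >= 9.0, high >= 7.0, medium >= 4.0).

def severity(score: float) -> str:
    """Map a CVSS v3 base score to a qualitative severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def remediation_order(findings):
    """Highest-scoring findings first; low-risk items stay at the back
    of the queue rather than being dropped from it altogether."""
    return sorted(findings, key=lambda f: f["cvss"], reverse=True)

findings = [
    {"id": "VULN-1", "cvss": 3.1},  # low - still tracked, not discarded
    {"id": "VULN-2", "cvss": 9.8},  # critical - remediate first
    {"id": "VULN-3", "cvss": 5.4},  # medium
]

for f in remediation_order(findings):
    print(f["id"], severity(f["cvss"]))
```

    The key design point is that low-risk findings remain in the queue rather than being filtered out of it entirely.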

    The issue with this approach is that it’s perfectly possible to chain multiple vulnerabilities together that on their own would be classed as low risk, and end up with something much more sinister when combined. This is why it’s important to address even low-risk vulnerabilities to see how easy it would be to effectively execute these inside your environment. In reality, no Red Team member can tell you exactly how any threat could pan out if a way to exploit it silently existed in your environment without a proof of concept exercise - that, and the necessity sometimes for a “perfect storm” that makes the previous statement possible in a production environment.

    Vulnerability assessments rely on attitude to risk at their core. If a high-risk threat is treated as low priority, then there needs to be a responsible person capable of enforcing an argument for that particular threat to be at the top of the remediation list. This is often the challenge - board members will accept a level of risk because the remediation itself may impact a particular process or interfere with a particular development cycle, mainly because they do not understand the implications of weakened security over desired functionality.

    For any security program to be complete (as far as is possible), it also needs to consider the fundamental weakest link in any organisation - the user. Whilst this sounds harsh, the below statement is always true:

    “A malicious actor can send 1,000 emails to random users, but only needs one to actually click a link to gain a foothold into an organisation”

    For this reason, any internal vulnerability assessment program should also factor in social engineering, phishing simulations, vishing, eavesdropping (water cooler / kitchen chat), unattended documents left on copiers, and dropping a USB thumb drive in reception or other “public” (in the sense of the firm) areas.

    There’s a lot more to this topic than this article alone can sanely cover. After several years’ experience in Information and Infrastructure Security, I’ve seen my fair share of inadequate processes and programs, and it’s time we addressed the elephant in the room.

    Got you thinking about your own security program?

  • Somehow, I knew it wouldn’t be long before AI was being used extensively to produce indecent images of children. I find this sickening to the core.

    https://news.sky.com/story/sickening-rise-in-ai-generated-child-sex-abuse-images-inciting-paedophiles-to-commit-more-crimes-12970836

    The transport medium for this is WhatsApp which, given its encryption, is a cause for significant alarm in the sense that tracking the perpetrators of these images is almost impossible. According to the article:

    Meta has defended the plans, insisting it has “robust safety measures” to detect and prevent abuse while maintaining security.

    No, it doesn’t - it can’t even stop the simplest of things, such as ensuring personally identifiable information doesn’t end up being provided to an unauthorized source.