Never underestimate the importance of security

Blog
    Over the years, I’ve been exposed to a variety of industries - one of these is aerospace engineering and manufacturing. During my time in this industry, I picked up a wealth of experience around processing, manufacturing, treatments, inspection, and various other disciplines. Sheet metal work within the aircraft industry is fine-limit. We’re not talking about millimetres here - we’re talking about thousandths of an inch, or “thou” to be more precise. Sounds imperial, right? Correct. This has been the standard for years, and hasn’t really changed. The same applies to sheet metal thickness, typically measured using SWG (Standard Wire Gauge). For example, 16 SWG is around 1.6mm thick, and the only way you’d get a true reading is with either a vernier caliper or a micrometer. For those now totally baffled, one thou is 25.4 micrometres (μm), and one millimetre is around 40 thou. Can you imagine having to work to such a minute limit, where the work you’ve submitted is 2 thou out of tolerance and, as a result, fails first-off inspection?

    Welcome to precision engineering. It’s not all tech and fine-limit though. In every industry, you have to start somewhere. And typically, in engineering, you’d start as an apprentice in the store room learning the trade and associated materials.

    Anyone familiar with engineering will know exactly what I mean when I use terms such as Gasparini, Amada, CNC, Bridgeport, guillotine, and Donkey Saw. Whilst the Donkey Saw sounds like animal cruelty, it’s actually an automated mechanical saw whose job it is to cut tough material (such as S99 bar, which is hardened stainless steel), simulating the back-and-forth action you’d otherwise perform manually with a hacksaw. Typically, a barrel of coolant was connected to the saw and periodically deposited liquid onto the blade to prevent it from overheating and snapping. Where am I going with all this?

    Well, through my tenure in engineering, I started at the bottom as “the boy” - the one you’d send to the stores to get a plastic hammer, a long weight (wait), a bubble for a spirit level, sky hooks, and just about any other imaginary or pointless tool you could think of. It was part of the initiation ceremony - and the learning process.

    One other extremely dull task was to cut “blanks” in the store room from 8’ x 4’ sheets of CRS (cold rolled steel) or L166 (1.6mm aerospace-grade aluminium, poly coated on both sides). These would then be used to make parts according to the drawing and spec you had, or could be used for tooling purposes. My particular “job” (if you could call it that) in this case was to press the footswitch to activate the guillotine blade after the sheet was placed into the guide. The problem is that after about 50 or so blanks, you only hear the trigger word requiring you to “react”. In this case, that particular word was “right”. This meant that the old guy I was working with had placed the sheet, and was ready for me to kick the switch to activate the guillotine. All very high tech and vitally important - not.

    And so, here it is. Jim walks into the store room where we’re cutting blanks, and asks George if he’d like coffee. After 10 minutes, Jim returns with a tray of drinks and shouts “George, coffee!”. George, fiddling with the guillotine guide responds with “right”…. See if you can guess the rest…

    George went as white as a sheet and almost fainted as the guillotine blade narrowly missed his fingers. It took more than one coffee laden with sugar to put the colour back into his cheeks and restore his ailing blood sugar level.

    The good news is that George finally retired with all his fingers intact, and I eventually progressed through the shop floor and learned a trade.

    The purpose of this post? In an ever-changing and evolving security environment, have your wits about you at all times. It’s not only your organisation’s information security at stake, but also that of the clients who have entrusted you, as a custodian of their information, to keep it safe and prevent unauthorised access. Information security 101 applies at all times - regardless of how experienced you think you are. Complacency is at the heart of most mistakes. By taking a more pragmatic approach, that risk is greatly reduced.


  • 0 Votes
    4 Posts
    152 Views

    @DownPW 🙂 most of this really depends on your desired security model. In all cases with firewalls, less is always more, although it’s never as clear cut as that, and there are always bespoke ports you’ll need to open periodically.

    Hetzner’s DDoS protection is superior, and I know they have invested a lot of time, effort, and money into making it extremely effective. However, if you consider that the largest ever recorded DDoS attack hit Cloudflare at 71 million requests per second (and they were able to deflect it), and that each attack can last anywhere between 8 and 24 hours depending on how determined the attacker(s) is/are, you can never be fully prepared - nor can you trace its true origin.

    DDoS (Distributed Denial of Service) attacks are, by their nature, conducted by large numbers of devices that have become part of a “bot army” - and in most cases, the owners of these devices are blissfully unaware that they have been compromised and are under the command and control of a nefarious actor. Given that the attacks originate from multiple sources, the real attacker can observe from a distance whilst concealing their own identity and origin in the process.

    If you consider the desired effect of a DDoS, it is not an attempt to access ports that are typically closed, but to flood (and eventually overwhelm) the target (such as a website) with millions of requests per second in an attempt to force it offline. Victims of DDoS attacks are often financial services firms, for example, with extortion or financial gain being the primary objective - in other words, pay the originator to stop the attack.

    It’s even possible to get DDoS as a service these days - with a credit card, a few clicks of a mouse, and a target IP, you can have your own campaign running by proxy in minutes, typically via “booters” or “stressers” - see below for more.

    https://heimdalsecurity.com/blog/ddos-as-a-service-attacks-what-are-they-and-how-do-they-work

    @DownPW said in Setting for high load and prevent DDoS (sysctl, iptables, crowdsec or other):

    in short if you have any advice to give to secure the best.

    It’s not just about DDoS or firewalls. There are a number of vulnerabilities on all systems that, if left unpatched, will expose that same system to exploitation. One of my favourite online testers, which does a lot more than most basic ones, is below.

    https://www.immuniweb.com/websec/

    I’d start with the findings reported here and use that to branch outwards.

  • 2 Votes
    3 Posts
    77 Views

    @crazycells exactly. Not so long ago, we had the Cambridge Analytica scandal in the UK. Meta (Facebook) seems to be the ultimate “Teflon” company in the sense that nothing seems to stick.

  • 1 Votes
    2 Posts
    165 Views

    @mike-jones Hi Mike,

    There are multiple answers to this, so I’m going to provide some of the most important ones here.

    JS runs client side, so you shouldn’t rely on it solely for validation. Any values collected by JS will need to be passed back to the PHP backend for processing, and will need to be fully sanitised first to ensure that your database is not exposed to SQL injection. In order to pass those values back into PHP, you’ll need to use something like

    <script>
    $(document).ready(function() {
        // Read the value client side, then send it to the PHP backend
        var myvalue = $('#id').val();
        $.ajax({
            type: "GET",
            url: "https://myserver/myfile.php",
            data: { id: myvalue },
            success: function() {
                // Refresh the target div using the same id
                $("#targetdiv").load('myfile.php?id=' + encodeURIComponent(myvalue) + ' #targetdiv');
            }
            //error: ajaxError
        });
    });
    </script>

    Then collect that with PHP via a GET request such as

    <?php
    $myvalue = $_GET['id'];
    // Escape the output to avoid reflected XSS when echoing the value back
    echo "The value is " . htmlspecialchars($myvalue);
    ?>

    Of course, the above is a basic example, but it is fully functional. Here, the risk level is low in the sense that you are not attempting to manipulate data, but simply to request it. However, this in itself would still be vulnerable to SQL injection if the query is not executed as a prepared statement with bound parameters (via PDO, for example). Here’s an example of how to get the data safely

    <?php
    function getid($theid) {
        global $db;
        // Prepared statement - the value is bound separately from the query itself
        $stmt = $db->prepare("SELECT * FROM data WHERE id = ?");
        $stmt->execute([$theid]);
        while ($result = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $name = $result['name'];
            $address = $result['address'];
            $zip = $result['zip'];
        }
        return array(
            'name' => $name,
            'address' => $address,
            'zip' => $zip
        );
    }
    ?>

    Essentially, using prepared statements, we send placeholders rather than actual values. The database driver binds the value separately from the query itself, ensuring we only return what is being asked for, and nothing else. This prevents typical injections such as “OR 1=1”, which would otherwise end up returning everything - not what you want at all from a security perspective.

    When calling the function, you’d simply use

    <?php
    // getid() returns an array, so read the fields you need from it
    $row = getid($myvalue);
    echo $row['name'];
    ?>

    @mike-jones said in Securing javascript -> PHP mysql calls on Website:

    i am pretty sure the user could just use the path to the php file and just type a web address into the search bar

    This is correct, although with no parameters, no data would be returned. You can actually prevent the PHP script from being called directly using something like

    <?php if(!defined('MyConst')) { die('Direct access not permitted'); } ?>

    then on the pages that you need to include it

    <?php define('MyConst', TRUE); ?>

    Obviously, access requests coming directly are not going via your chosen route, therefore the script will die because MyConst is never defined.
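
    As a minimal sketch of how those two snippets fit together (the file names here are hypothetical), the including page defines the constant before pulling in the protected script:

    <?php
    // page.php - hypothetical entry point that is allowed to use the protected script
    define('MyConst', TRUE);
    include 'myfile.php';
    ?>

    <?php
    // myfile.php - exits immediately unless it was included via a page that defined MyConst
    if (!defined('MyConst')) {
        die('Direct access not permitted');
    }
    // ...the rest of the script only runs when included correctly
    ?>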

    @mike-jones said in Securing javascript -> PHP mysql calls on Website:

    Would it be enough to just check if the number are a number 1-100 and if the drop down is one of the 5 specific words and then just not run the rest of the code if it doesn’t fit one of those perameters?

    In my view, no. If those checks only happen in the browser, they can be bypassed entirely, so without any server-side checking the PHP file is still exposed to SQL injection.
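
    For completeness, here’s a hedged sketch of what that server-side check could look like - the field names and the allowed word list are hypothetical, and you’d still pass the validated values to a prepared statement as shown above:

    <?php
    // Hypothetical whitelist for the dropdown, plus a 1-100 range check for the number
    $allowed = array('alpha', 'bravo', 'charlie', 'delta', 'echo');

    $number = filter_input(INPUT_GET, 'number', FILTER_VALIDATE_INT,
        array('options' => array('min_range' => 1, 'max_range' => 100)));
    $choice = isset($_GET['choice']) ? $_GET['choice'] : '';

    if ($number === false || $number === null || !in_array($choice, $allowed, true)) {
        http_response_code(400);
        die('Invalid input');
    }

    // Input is now validated server side - still use a prepared statement for the query itself
    ?>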

    Hope this is of some use to start with. Happy to elaborate if you’d like.

  • 1 Votes
    1 Posts
    148 Views


    It surprises me (well, actually, dismays me in most cases) that new websites appear online all the time whose owners have clearly spent an inordinate amount of time on cosmetics / appearance and decent hosting, yet have failed to address the elephant in the room when it comes to actually securing the site itself. Almost every time I perform a quick security audit using something simple like the below

    https://securityheaders.io

    I often see something like this

    Not a pretty sight. Not only does this expose your site to unprecedented risk, but it also looks bad when others decide to perform a simple (and very public) check. Worse still is the sheer number of so-called “security experts” who claim to solve all of your security issues with their “silver bullet” solution (sarcasm intended), yet have neglected to get their own house in order. So what can you do to resolve this issue? It’s actually much easier than it seems. Depending on the web server you are running, you can include these headers.

    Apache

    <IfModule mod_headers.c>
        Header set X-Frame-Options "SAMEORIGIN"
        Header set X-XSS-Protection "1; mode=block"
        Header set X-Download-Options "noopen"
        Header set X-Content-Type-Options "nosniff"
        Header set Content-Security-Policy "upgrade-insecure-requests"
        Header set Referrer-Policy 'no-referrer'
        Header set Feature-Policy "geolocation 'self' https://yourdomain.com"
        Header set Permissions-Policy "geolocation=(),midi=(),sync-xhr=(),microphone=(),camera=(),magnetometer=(),gyroscope=(),fullscreen=(self),payment=()"
        Header set X-Powered-By "Whatever text you want to appear here"
        Header set Access-Control-Allow-Origin "https://yourdomain.com"
        Header set X-Permitted-Cross-Domain-Policies "none"
        Header set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
    </IfModule>

    NGINX

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Download-Options "noopen" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Content-Security-Policy "upgrade-insecure-requests" always;
    add_header Referrer-Policy 'no-referrer' always;
    add_header Feature-Policy "geolocation 'self' https://yourdomain.com" always;
    add_header Permissions-Policy "geolocation=(),midi=(),sync-xhr=(),microphone=(),camera=(),magnetometer=(),gyroscope=(),fullscreen=(self),payment=()";
    add_header X-Powered-By "Whatever text you want to appear here" always;
    add_header Access-Control-Allow-Origin "https://yourdomain.com" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains;" always;

    Note that https://yourdomain.com should be changed to reflect your actual domain - it’s just a placeholder to demonstrate how the headers need to be structured.

    Restart Apache or NGINX, and then perform the test again.


    That’s better!
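
    If you’d also like a quick programmatic sanity check, the sketch below (plain PHP, reusing the same placeholder domain, and assuming allow_url_fopen is enabled) simply dumps a few of the headers your server now returns:

    <?php
    // Fetch the response headers for the site and report a few of the important ones
    $headers = array_change_key_case(get_headers('https://yourdomain.com', true), CASE_LOWER);

    foreach (array('x-frame-options', 'x-content-type-options', 'strict-transport-security') as $name) {
        $value = isset($headers[$name]) ? $headers[$name] : 'MISSING';
        // get_headers() returns an array for a header seen more than once (e.g. across redirects)
        echo $name . ': ' . (is_array($value) ? implode(', ', $value) : $value) . PHP_EOL;
    }
    ?>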

    More detail around these headers can be found here

    https://webdock.io/en/docs/how-guides/security-guides/how-to-configure-security-headers-in-nginx-and-apache

  • 0 Votes
    1 Posts
    143 Views

    Anyone working in the information and infrastructure security space will be more than familiar with the non-stop evolution that is vulnerability management. Seemingly on a daily basis, we see new attacks emerging, and old mechanisms that you thought were well and truly dead resurface with “Frankenstein”-like capabilities, rendering the existing defences you designed to combat that particular threat either inefficient or, in some cases, completely ineffective. All too often, we see previous campaigns resurface with newer destructive capabilities designed to extort victims both financially and through blackmail.

    It’s the function of the “Blue Team” to (in several cases) work around the clock to patch a security vulnerability identified in a system, and ensure that the technology landscape and estate is as healthy as is feasibly possible. On the flip side, it’s the function of the “Red Team” to identify hidden vulnerabilities in your systems and associated networks, and provide assistance around the remediation of the identified threat in a controlled manner.

    Depending on your requirements, the minimum industry-accepted testing frequency from the “Red Team” perspective is once per year, and typically involves the traditional “perimeter” (externally facing devices such as firewalls, routers, etc.), websites, public-facing applications, and anything else exposed to the internet. Whilst this satisfies the “tick in the box” requirement on infrastructure that traditionally never changes, is it really sufficient in today’s ever-changing environments? The answer here is no.

    With the arrival of flexible computing, virtual data centres, SaaS, IaaS, IoT, and literally every other acronym relating to technology comes a new level of risk. The evolution of system and application capabilities has meant that these very systems are in most cases self-learning (and, for networks, self-healing). Application algorithms, Machine Learning, and Artificial Intelligence can all introduce an unintended vulnerability throughout the development lifecycle, therefore failing to test, address, and validate the security of any new application or modern infrastructure that is public facing is a breach waiting to happen. For those “in the industry”, how many times have you been met with this very scenario:

    “Blue Team: We fixed the vulnerabilities that the Red Team said they’d found…”
    “Red Team: We found the vulnerabilities that the Blue Team said they’d fixed…”

    Does this sound familiar?

    What I’m alluding to here is that security isn’t “fire and forget”. It’s a multi-faceted, complex process of evolution that, very much like the earth itself, is constantly spinning. Vulnerabilities evolve at an alarming rate, and your security program needs to evolve with them rather than simply “stopping” for even a short period of time. It’s surprising (and in all honesty, worrying) the number of businesses that do not currently perform an internal vulnerability assessment - and even worse, have no plans to. You’ll notice here that I do not refer to this as a penetration test - you can’t “penetrate” something you are already sitting inside. The purpose of this exercise is to engage a third party vendor (subject to the usual Non-Disclosure Agreement process) for a couple of days. Let them sit directly inside your network, and see what they can discover. Topology maps and subnets help, but in reality this is a discovery “mission”, and it’s up to the tester how they handle the exercise.

    The important component here is scope, and there are always boundaries. For example, I typically prefer a proof of concept rather than a tester blundering in and using a “capture the flag” approach that could cause significant disruption or damage to existing processes - particularly in-house development. It’s vital that you “set the tone” of what is acceptable, and what you expect to gain from the exercise, at the beginning of the engagement. Essentially, the mantra here is that the evolution wheel never stops - it’s why security personnel are always busy, and CISOs never sleep 🙂

    These days, a pragmatic approach is essential in order to manage a security framework properly. Gone are the days of annual testing alone, being dismissive around “low level” threats without fully understanding their capabilities, and brushing identified vulnerabilities “under the carpet”. The annual testing still holds significant value, but only if undertaken by an independent body, such as those accredited by CREST (for example).

    You can reduce the exposure to risk in your own environment by creating your own security framework, and adopting a frequent vulnerability scanning schedule with self-remediation. Not only does this lower the risk to your overall environment, but it also demonstrates to clients and vendors alike - who conduct frequent assessments as part of their Due Diligence programs - that you take security seriously. Identifying vulnerabilities is one thing; remediating them is another. You essentially need to “find a balance” in terms of deciding which comes first. The obvious route is to target critical, high, and medium risk, whilst leaving the “low risk” items behind, or on the “back burner”.

    The issue with this approach is that it’s perfectly possible to chain multiple vulnerabilities together that on their own would be classed as low risk, and end up with something much more sinister when combined. This is why it’s important to address even low-risk vulnerabilities to see how easy it would be to effectively execute these inside your environment. In reality, no Red Team member can tell you exactly how any threat could pan out if a way to exploit it silently existed in your environment without a proof of concept exercise - that, and the necessity sometimes for a “perfect storm” that makes the previous statement possible in a production environment.

    Vulnerability assessments rely on attitude to risk at their core. If a high-risk threat is treated as a low priority, then there needs to be a responsible person capable of making the case for that particular threat to sit at the top of the remediation list. This is often the challenge - board members will accept a level of risk because the remediation itself may impact a particular process, or interfere with a particular development cycle, mainly because they do not understand the implication of weakened security versus desired functionality.

    For any security program to be complete (as far as is possible), it also needs to consider the fundamental weakest link in any organisation - the user. Whilst this sounds harsh, the below statement is always true

    “A malicious actor can send 1,000 emails to random users, but only needs one to actually click a link to gain a foothold into an organisation”

    For this reason, any internal vulnerability assessment program should also factor in social engineering: phishing simulations, vishing, eavesdropping (water cooler / kitchen chat), unattended documents left on copiers, and dropping a USB thumb drive in reception or other “public” (in the sense of the firm) areas.

    There’s a lot more to this topic than this article alone can sanely cover. After several years’ experience in Information and Infrastructure Security, I’ve seen my fair share of inadequate processes and programs, and it’s time we addressed the elephant in the room.

    Got you thinking about your own security program?

  • 0 Votes
    1 Posts
    158 Views

    The recent high-profile breaches impacting organisations large and small are a testament to the fact that no matter how you secure credentials, they will always be subject to exploit. Can a password alone ever be enough? In my view, it’s never enough. The enforced minimum should include at least a secondary factor. Regardless of how “secure” you consider your password to be, it really isn’t in most cases - it just “complies” with the requirement being enforced.

    Here’s a classic example. We take the common password of “Welcome123” and put it into a password strength checker
    [Screenshot: password strength checker rating “Welcome123” as “strong”]
    According to the above, it’s “strong”. Actually, it isn’t. It’s only considered this way because it meets the complexity requirements, with 1 uppercase letter, at least 8 characters, and numbers. What’s also interesting is that a tool sponsored by Dashlane considers the same password as acceptable, taking supposedly 8 months to break
    [Screenshot: Dashlane-sponsored tool estimating 8 months to break “Welcome123”]
    How accurate is this? Not accurate at all. The password “Welcome123” is in fact one of the passwords contained in any penetration tester’s toolkit - and, by definition, also used by cyber criminals. As most of this password is made up of a dictionary word plus sequential numbers, it would take less than a second to break rather than the 8 months reported above. Need further evidence of this? Have a look at haveibeenpwned, which will provide you with a mechanism to test just how many times “Welcome123” has appeared in data breaches
    [Screenshot: Have I Been Pwned check for “Welcome123”]
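
    If you’d rather automate that check than use the website, Have I Been Pwned exposes a k-anonymity “range” API for exactly this purpose. The sketch below is illustrative plain PHP (assuming allow_url_fopen is enabled) rather than production code; only the first five characters of the SHA-1 hash ever leave your machine:

    <?php
    // k-anonymity check against the HIBP Pwned Passwords range API
    $password = 'Welcome123';
    $sha1     = strtoupper(sha1($password));
    $prefix   = substr($sha1, 0, 5);
    $suffix   = substr($sha1, 5);

    // Send a descriptive User-Agent with the request
    $context  = stream_context_create(array('http' => array('header' => "User-Agent: password-check-example\r\n")));
    $response = file_get_contents('https://api.pwnedpasswords.com/range/' . $prefix, false, $context);

    // Each response line is "HASH-SUFFIX:COUNT" - look for our suffix
    $count = 0;
    foreach (explode("\n", $response) as $line) {
        list($hashSuffix, $seen) = array_pad(explode(':', trim($line)), 2, 0);
        if ($hashSuffix === $suffix) {
            $count = (int) $seen;
            break;
        }
    }

    echo $count > 0
        ? "This password has appeared in {$count} known breaches - do not use it."
        : "No match found - but that alone does not make it strong.";
    ?>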

    Why are credentials so weak?

    My immediate response to this is that for as long as humans have habits, and create scenarios that enable them to easily remember their credentials, this weakness will always exist. If you look at a sample taken from the LinkedIn breach, the passwords that occupy the top slots are arguably the least secure, but the easiest to remember from the human perspective. Passwords such as “password” and “123456” may be easy for users to remember, but on the flip side, weak credentials like this can be broken by a simple dictionary attack in less than a second.
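
    To illustrate just how quickly that happens, here’s a toy sketch - the “leaked” hash and the tiny wordlist are hypothetical, and a real attacker would use lists containing billions of entries plus GPU acceleration - showing an unsalted, fast hash falling to a dictionary check:

    <?php
    // A password stored as a bare MD5 hash - no salt, no slow hashing algorithm
    $leakedHash = md5('123456');

    // Tiny illustrative wordlist; real dictionaries contain millions of leaked passwords
    $wordlist = array('password', 'qwerty', 'letmein', 'welcome', '123456');

    foreach ($wordlist as $guess) {
        if (md5($guess) === $leakedHash) {
            echo "Cracked in a handful of guesses: {$guess}";
            break;
        }
    }
    ?>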

    Here’s a selection of passwords still in use today – hopefully, yours isn’t on there
    [Image: list of common passwords still in use today]
    We as humans are relatively simplistic when it comes to credentials and associated security methods. Most users who do not work in the security industry have little understanding, desire to understand, or patience, and will naturally choose the route that makes their life easier. After all, technology is supposed to increase productivity and make tasks easier to perform, right? Right. And it’s this exact vulnerability that a cyber criminal will exploit to its full potential.

    Striking a balance between the security of credentials and ease of recall has always had its challenges. A classic example is that networks, websites, and applications nowadays typically have password policies in place that only permit the use of a so-called strong password. Given the assortment of letters, numbers, non-alphanumeric characters, uppercase and lowercase, the password itself is probably “secure” to an acceptable extent, although the method of storing the credentials often isn’t. A shining example of this is the culture of writing down sensitive information such as credentials. I’ve worked in some organisations where users have actually attached their password to their monitor. Anyone looking for easy access into a computer network is onto an immediate winner here, and unauthorised access or a full-blown breach could occur within an alarmingly short period of time.

    Leaked credentials and attacks from within

    You could argue that you would need access to the computer itself first, but in several historical breach scenarios, the attack originated from within. In this case, it may not be an active employee, but someone who has access to the area where that particular machine is located. Any potential criminal now has the credentials - well, the password itself, but what about the username? This is where a variety of username discovery techniques can be used - most of them non-technical, and worryingly simple to execute. Think about what is usually on a desk in an office. The most obvious place to look for the username would be on the PC itself. If the user had recently logged out, or locked their workstation, then on a Windows network that would give you the username unless a group policy was in place to hide it. Failing that, most modern desk phones display the name of the user. On Cisco devices, under Extension Mobility, is the ID of the user. It doesn’t take long to find this. Finally, there’s the humble business card. A potential criminal can look at the email address format, remove the domain suffix, and potentially predict the username. Most companies tend to leverage the username in email addresses, mainly thanks to SMTP address template policies - certainly true in on-premises Exchange environments or Office 365 tenants.

    The credentials are now paired. The password has been retrieved in clear text, and by using a simple discovery technique, the username has also been acquired. Sometimes, a criminal can get extremely lucky and acquire credentials with minimal effort. Users have a habit of writing down things they cannot recall easily, and in some cases, the required information is relatively easily divulged without too much effort on the part of the criminal. Sounds absurd and far-fetched, doesn’t it? Get into your office early, or work late one evening, and take a walk around the desks. You’ll be unpleasantly surprised at what you will find. Amongst the plethora of personal effects such as used gym towels and footwear, I guarantee that you will find information that could be of significant use to a criminal - not necessarily readily available in the form of credentials, but sufficient information to create a mechanism for extraction via an alternative source. But who would be able to use such information?

    Think about this for a moment. You generally come into a clean office in the mornings, so cleaners have access to your office space. I’m not accusing anyone of anything unscrupulous or illegal here, but you do need to be realistic. This is the 21st century, and as a result, it is a security measure you need to factor in and adopt into your overall cyber security policy and strategy. Far too much focus is placed on securing the perimeter network, and not enough on the threat that lies within. A criminal could get a job as a cleaner at a company and spend time collecting intelligence about vulnerabilities waiting to be exploited. Another example of “instant intelligence” is the network topology map. Some of us are not blessed with huge screens, and need to make do with one ancient 19″ monitor, or perhaps two. As topology maps can be quite complex, it’s advantageous to be able to print them in A3 format to make them easier to digest, and you may also need to print copies of the same document for meetings. The problem here is: what do you do with that copy once you have finished with it?

    How do we address the issue? Is there sufficient awareness?

    Yes, there is. Disposing of a printed copy in the usual fashion isn’t the answer, as it can easily be recovered. The information contained in most topology maps is often extensive, and is like a goldmine to a criminal looking for intelligence about your network layout. Anything like this is classified information, and should be shredded at the earliest opportunity. Perhaps one of the worst offences I’ve ever personally witnessed is a member of the IT team opening a password file, then walking away from their desk without locking their workstation. To prove a point about how easily credentials can be inadvertently leaked, I took a photo with a smartphone, then showed the offender what I’d managed to capture a few days later. “Slightly embarrassed” didn’t go anywhere near covering it.

    I’ve been an advocate of securing credentials for some time, and recently read about Bill Burr, the author of “NIST Special Publication 800-63”. Now retired, he has openly admitted that the advice he originally provided was, in fact, incorrect:

    “Much of what I did I now regret,” said Mr Burr, who advised people to change their password every 90 days and use obscure characters.

    “It just drives people bananas and they don’t pick good passwords no matter what you do.”

    The overall security of credentials and passwords

    However, bearing in mind that this supposed “advice” has long been the accepted norm in terms of password security, let’s look at the accepted standards from a well-known auditing firm.

    It would seem that Sarbanes-Oxley Section 404 dictates that regular changes of credentials are mandatory, and part of the overarching controls. Any organisation that is regulated by the SEC (for example) would be covered and in scope, and so the argument for not regularly changing your password becomes “invalid” by the act’s definition and narrative. My overall point here is that the clearly bad password advice is, in the case of the financial services industry, negated by a severely outdated set of controls that require you to enforce a password change cycle and remain in compliance with it. In addition, there are a vast number of sites and services that force password changes on a regular basis, and really do not care about what is now extensive research on password generation.

    The argument that password security is weakened by having to change it on a frequent basis is an interesting one that definitely deserves intense discussion and real-world examples. But even if your password really is strong (as I mentioned previously, there are variations of this which are really not secure at all, yet are considered strong because they meet a complexity requirement), a simple mutation of it could render it vulnerable. As an example, I took a simple lowercase phrase

    mypasswordissimpleandnotsecureatall

    [Screenshot: strength checker estimating 26 nonillion years to crack the phrase above]
    The actual testing tool can be found here. So, does a potential criminal have 26 nonillion years to spare? Any cyber criminal who possesses only basic skills won’t need a fraction of that time, as this password is made up of simple dictionary words, is all lowercase, and could in fact be broken in seconds.

    My opinion? Call it how you like - the password is here to stay for the near future at least. The overall strength of a password, or of credentials stored using MD5, bcrypt, SHA1 and so on, is irrelevant when an attacker can use established and proven techniques such as social engineering to obtain it. Furthermore, the addition of 2FA or a salt dramatically increases password security - as does limiting the number of unsuccessful attempts permitted before the associated account is locked. This is a topic that interests me a great deal. I’d love to hear your feedback and comments.
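
    On the storage side at least, modern PHP makes the salting point easy to get right. Here’s a minimal sketch (with a hypothetical password) of salted, slow hashing plus verification - none of which, of course, stops a user handing their password to a phishing page, which is exactly why the second factor matters:

    <?php
    // password_hash() applies bcrypt by default and generates a unique salt automatically
    $hash = password_hash('CorrectHorseBatteryStaple!9', PASSWORD_DEFAULT);

    // Store $hash, never the plain password. At login, compare the submitted value to it
    if (password_verify('CorrectHorseBatteryStaple!9', $hash)) {
        echo 'Password OK - now check the second factor before granting access';
    }
    ?>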

  • 0 Votes
    3 Posts
    184 Views

    @justoverclock yes, completely understand that. It’s a haven for criminal gangs and literally everything is on the table. Drugs, weapons, money laundering, cyber attacks for rent, and even murder for hire.

    Nothing it seems is off limits. The dark web is truly a place where the only limitation is the amount you are prepared to spend.

  • is my DMARC configured correctly?

    Solved Configure
    3 Votes
    3 Posts
    209 Views

    @phenomlab said in is my DMARC configured correctly?:

    you’ll get one from every domain that receives email from yours.

    Today I received another DMARC report email from Outlook. I was referring to your reply again and found it very helpful and informative. Thanks again.

    I wish sudonix 100 more great years ahead!