The European Union has initiated a significant antitrust probe into three of the world’s most influential tech companies: Apple, Meta (formerly Facebook), and Google’s parent company Alphabet. This move underscores the EU’s growing scrutiny of big tech firms and their market dominance, signalling a potentially pivotal moment in the regulation of digital platforms.
The investigation, announced by the European Commission, focuses on concerns regarding the companies’ practices in the digital advertising market. Margrethe Vestager, the EU’s Executive Vice-President for A Europe Fit for the Digital Age, emphasized the importance of fair competition in the digital sector and the need to ensure that dominant players do not abuse their power to stifle innovation or harm consumers.
At the heart of the probe lies the question of whether Apple, Meta, and Google have violated EU competition rules by leveraging their control over user data and digital advertising to gain an unfair advantage over competitors. These companies wield enormous influence, with vast user bases and access to extensive troves of personal information, which they use to target advertisements with remarkable precision.
Apple, through its iOS platform, has introduced privacy measures such as App Tracking Transparency, which allows users to opt out of being tracked across apps for advertising purposes. While lauded for enhancing user privacy, these measures have raised concerns among app developers and advertisers who rely on targeted advertising for revenue. The EU investigation will likely delve into whether Apple's privacy measures unfairly disadvantage competitors in the digital advertising ecosystem.
Meta, the parent company of social media behemoths like Facebook, Instagram, and WhatsApp, faces scrutiny over its data practices and alleged anti-competitive behaviour. The company’s vast user base and comprehensive user profiles make it a dominant player in the digital advertising market. However, Meta has faced criticism and regulatory challenges over issues ranging from data privacy to its handling of misinformation and hate speech on its platforms.
Google, with its ubiquitous search engine and digital advertising services, also comes under the EU’s microscope. The tech giant’s control over online search and advertising infrastructure gives it immense power in the digital economy. Concerns have been raised about Google’s practices regarding the use of data, the display of search results, and its dominance in the online advertising market.
The EU’s investigation into these tech giants reflects a broader global trend of increased scrutiny and regulatory action targeting big tech companies. Governments and regulatory bodies worldwide are grappling with how to rein in the power of these corporate giants while fostering competition and innovation in the digital economy.
Antitrust investigations and regulatory actions have become more common, with tech companies facing fines, lawsuits, and calls for structural reforms. In the United States, lawmakers and regulators have also intensified their scrutiny of big tech, with antitrust lawsuits filed against companies like Google and Meta.
The outcome of the EU’s investigation could have far-reaching implications for the future of digital competition and regulation. If the European Commission finds evidence of anti-competitive behavior or violations of EU competition rules, it could impose significant fines and require changes to the companies’ business practices. Moreover, the investigation could prompt broader discussions about the need for new regulations to address the unique challenges posed by the digital economy.
In response to the EU’s investigation, Apple, Meta, and Google have stated their commitment to complying with EU competition rules and cooperating with the European Commission’s inquiry. However, the tech giants are likely to vigorously defend their business practices and challenge any allegations of anti-competitive behavior.
As the digital economy continues to evolve and reshape industries and societies worldwide, the regulation of big tech companies will remain a contentious and complex issue. The EU’s antitrust investigation into Apple, Meta, and Google underscores the growing recognition among policymakers of the need to ensure that digital markets remain fair, competitive, and conducive to innovation.
Some of you might have noticed a feature on this forum that extends the .highlight class in NodeBB beyond its default of a simple coloured border (such as border-left: 1px solid blue;) to something that looks like the below
[screenshot: a post highlighted with a gradient border]
And, as Sudonix has a number of “swatches” or “themes”, these are also colour coordinated to match. For example
[screenshots: the same highlight effect rendered in two other colour swatches]
There are more - try changing the swatch, and then view the last post in each thread, and you’ll see where and how this is being applied.
I want this effect!!

Sure you do 🙂 Here's how to get it. We are going to extend the .highlight class of NodeBB, leveraging the :before pseudo-element as below.
.highlight:before {
    content: "";
    position: absolute;
    inset: 0;
    border-radius: 0.375rem;
    padding: 3px;
    background: var(--bs-progress-bar-bg);
    -webkit-mask: linear-gradient(var(--bs-body-bg) 0 0) content-box, linear-gradient(var(--bs-body-bg) 0 0);
    -webkit-mask-composite: xor;
    mask-composite: exclude;
    pointer-events: none;
}

What does this do?

In summary, this CSS snippet creates a highlighted effect around an element by using an absolutely positioned pseudo-element with a special mask to create an outline or glowing effect. The actual visual appearance depends on the colours defined by the --bs-progress-bar-bg variable (note that this is not a NodeBB variable, but one I've defined and which this forum uses - you'll need to factor this into your colour scheme) and the --bs-body-bg variable, but the technique is quite clever and allows for flexible styling.
What does your --bs-progress-bar-bg variable look like?

Here's an example

--bs-progress-bar-bg: linear-gradient(45deg, #5E81AC, #88C0D0, #8FBCBB, #A3BE8C, #D08770, #BF616A);

Let's break down the properties to understand how all of this works…
content: ""; - This rule sets an empty content for the pseudo-element. This is necessary for the pseudo-element to be rendered. position: absolute; - This rule positions the pseudo-element absolutely within its containing element. inset: 0; - This shorthand rule sets the top, right, bottom, and left properties to 0, effectively making the pseudo-element cover the entire space of its containing element. border-radius: 0.375rem; - This rule sets the border radius of the pseudo-element to create rounded corners. The value “0.375rem” is equivalent to 6 pixels. padding: 3px; - This rule adds padding of 3 pixels to the pseudo-element. background: var(--bs-progress-bar-bg); - This rule sets the background color of the pseudo-element to the value of the CSS custom property “–bs-progress-bar-bg”. Custom properties are a way to define reusable values in CSS. -webkit-mask: linear-gradient(var(--bs-body-bg) 0 0) content-box, linear-gradient(var(--bs-body-bg) 0 0); - This rule applies two linear gradients as masks to the pseudo-element. These gradients essentially create transparent regions in the pseudo-element, revealing the background color underneath. -webkit-mask-composite: xor; - This rule sets the compositing mode for the masks. The “xor” mode combines the two masks using the XOR (exclusive OR) operation. mask-composite: exclude; - This rule sets the compositing mode for the mask to “exclude”. This means that areas where the mask and the content overlap will be excluded, effectively creating a cutout effect. pointer-events: none; - This rule ensures that the pseudo-element does not respond to pointer events, allowing clicks and other interactions to pass through to the underlying elements.Using this approach, it’s possible to extend the capabilities of CSS much further than you probably imagined. Obviously, this isn’t something you’d want to overuse, but it can certainly provide a much needed edge for when you are trying to draw attention to a specific object or element.
Enjoy.
Those in the security space may already be aware of the secure DNS service provided by Quad9. For those who have not heard of this free service, Quad9 is a public Domain Name System (DNS) service that provides a more secure and privacy-focused alternative to traditional DNS services. DNS is the system that translates human-readable domain names (like www.google.com) into IP addresses that computers use to identify each other on the internet.
Quad9 is known for its emphasis on security and privacy. It uses threat intelligence from various cybersecurity companies to block access to known malicious websites and protect users from accessing harmful content. When a user makes a DNS query, Quad9 checks the requested domain against a threat intelligence feed, and if the domain is flagged as malicious, Quad9 blocks access to it.
One notable feature of Quad9 is its commitment to user privacy. Quad9 does not store any personally identifiable information about its users, and it does not sell or share user data.
https://www.quad9.net/
Users can configure their devices or routers to use Quad9 as their DNS resolver to take advantage of its security and privacy features. The DNS server addresses for Quad9 are usually 9.9.9.9 and 149.112.112.112.
The name "Quad9" comes from the service's primary resolver address - 9.9.9.9, four nines. Pointing your devices or router at the Quad9 resolvers offers an additional layer of protection against malware, phishing, and other online threats. It's important to note that while Quad9 can enhance security, it is not a substitute for other security measures such as antivirus software and good internet security practices - if you are not using these already, you are leaving yourself open to compromise.
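If you want to try Quad9 before changing any system or router settings, you can point a one-off query directly at 9.9.9.9. Here's a minimal sketch in Python using the dnspython library (my choice purely for illustration - any resolver library would do):

import dns.resolver  # third-party package: dnspython (pip install dnspython)

# Build a resolver that talks to Quad9 rather than whatever the system default is
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9", "149.112.112.112"]

# Resolve a well-known domain via Quad9; a domain on Quad9's threat feed
# would typically come back as NXDOMAIN rather than an answer
for record in resolver.resolve("www.google.com", "A"):
    print(record.address)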
I'd strongly recommend you take a look at Quad9. Not only is it fast, but it seems to be extremely solid and well thought out. In my own testing, it was even faster than Cloudflare's DNS.
I came across this news article this morning
https://news.sky.com/story/e3-cancelled-gamings-most-famous-event-killed-off-for-good-13028802
This really is the end of an era, and it’s abundantly clear that the pandemic had a large part to play in its demise. From the article:
It comes after plans for its return earlier this year were scrapped, with the likes of PlayStation maker Sony and Assassin’s Creed developer Ubisoft among the companies that planned to skip it.
When big players such as Sony and Ubisoft do not plan on attending, the writing is on the wall. During the pandemic, various organisations were forced to adopt new ways of promoting their products, with live streams becoming the new normal - and, by an order of magnitude, a much cheaper alternative with the same impact.
This clearly demonstrates that technology is continually evolving, and there doesn't seem to be any sign of a return to pre-pandemic norms on multiple fronts. Just look at how the work-from-home model has dramatically changed, with virtually every organisation now having some form of remote working program they had never considered before.
Along the same lines, companies that were relatively minor before the pandemic have enjoyed a meteoric rise, having been in a unique position to fill the void the pandemic created. One of the many is Zoom - take a look at the revenue graph below for an example
[chart: Zoom annual revenue growth]
Source - https://www.businessofapps.com/data/zoom-statistics/
At the peak of the pandemic, Zoom reported 200m connections per day, and whilst that figure may have dropped of late, Zoom is still considered the #1 video conferencing tool, used day to day to facilitate meetings across the globe - even for people sitting in the same office space.
However you look at it, most of these pandemic “rising stars” are now here to stay and considered part of everyday life.
For ages now, I've been highly suspicious of the frankly awful battery life on my OnePlus 9 Pro. Over time, the battery life has become hugely problematic, and on average I get around half a day of usage before I need to charge it again. Yes, I'm a power user, but I don't use this device for video streaming or games, so it's not as though I'm asking the earth in terms of reliability and performance.
As my contract is coming up for renewal, I started looking at the OnePlus 11 - with its outrageous price tag, I decided to look elsewhere, and am currently waiting for a Samsung S23 Plus to be delivered (hopefully tomorrow).
It seems that this has been an issue since it was identified in 2021 - OnePlus evidently deliberately “throttles” the performance of apps such as WhatsApp, Twitter, Discord - in fact, most of them in order to “improve” battery life on the OnePlus 9 and 9 Pro phones 😠
There are two articles I came across that discuss the same topic.
https://www.xda-developers.com/oneplus-exaplains-oneplus-9-cpu-throttling
https://www.theverge.com/2021/7/8/22568107/oneplus-9-pro-battery-life-app-throttling-benchmarks
Having to discover this after you buy a supposedly “premium” handset is bad enough, but OnePlus themselves kept this quiet and told nobody. To my mind, this is totally unacceptable behaviour, and it means I will not use this brand going forward. The last time I had a Samsung, it was my trusty S2 which literally kept going until it finally gave up with hardware failure meaning it wouldn’t boot anymore. I could have re-flashed the ROM, but consigned it to the bin with full military honours 🙂
And so, here I am again back on the Samsung trail. Sorry OnePlus, but this is a bridge too far in my view. I faithfully used your models throughout the years, and my all-time favourite was easily the OnePlus 6T - a fantastic device. I only upgraded “because I could” and I’m sorry I did now.
Here’s to a (hopefully) happy ever after with Samsung.
I read this article with interest, and I must say, I do agree with the points being raised on both sides of the fence.
https://news.sky.com/story/amazon-microsoft-dominance-in-uk-cloud-market-faces-competition-investigation-12977203
There are valid points from Ofcom, the UK regulator, such as
highlighted worries about committed spend discounts which, Ofcom feared, could incentivise customers to use a single major firm for all or most of their cloud needs.
This is a very good point. Customers are often tempted in with discounts - but those only apply if you have a committed monthly spend, which, if you are trying to reduce technology costs, is hard to achieve or even offset.
On the flip side of the coin, AWS claims
Only a small percentage of IT spend is in the cloud, and customers can meet their IT needs from any combination of on-premises hardware and software, managed or co-location services, and cloud services.
This isn't true at all. Most startups and existing technology firms want to reduce their overall reliance on on-premises infrastructure as a means of reducing cost, negating the need to refresh hardware every x years, and further extending the capabilities of their disaster recovery and business continuity programs. The less reliance you have on a physical office, the better - it effectively lowers your RTO (Recovery Time Objective) in the event that you suffer physical damage (from fire or flooding, for example) and have to replace servers and other infrastructure before you can even start the recovery process.
Similarly, businesses that adopt the on-premises model would typically require some sort of standby or recovery site, with replication enabled between the sites plus regular and structured testing. It's relatively simple to "fail over" to a recovery site, but much harder to move back to the primary site afterwards without further downtime. For this reason, several institutions have adopted the cloud model as a way of resolving this particular issue, as well as for the cost benefits.
The cost of data egress is well known throughout the industry and is an accepted (although not necessarily desirable) "standard" these days. The comment from AWS concerning the ability to switch between providers very much depends on individual technology requirements, and such a "switch" could be made much harder by the use of proprietary products such as AWS Aurora - marketed as MySQL and PostgreSQL compatible - where attempting to switch back to the native platforms can reveal that some essential functionality is missing.
My personal view is that AWS are digging their heels in and disagree with the CMA because they want to retain their dominance.
Interestingly, GCP (Google Cloud Platform) doesn't seem to be in scope, and given Google's dominance over literally everything internet-related, this surprises me.
Just seen this post pop up on Sky News
https://news.sky.com/story/elon-musks-brain-chip-firm-given-all-clear-to-recruit-for-human-trials-12965469
He has claimed the devices are so safe he would happily use his children as test subjects.
Is this guy completely insane? You'd seriously use your kids as guinea pigs in human trials?? This guy clearly has more money than sense, and anyone who'd put their children in danger in the name of technological "advances" should seriously question their own ethics - and I'm honestly shocked that nobody else seems to have commented on this.
This entire "experiment" is dangerous to say the least in my view, as there is huge potential for error. However, the article below, in which a paralysed man was able to walk again thanks to a neuro "bridge", is truly ground-breaking and life-changing for that individual.
https://news.sky.com/story/paralysed-man-walks-again-thanks-to-digital-bridge-that-wirelessly-reconnects-brain-and-spinal-cord-12888128
However, this is reputable Swiss technology at its finest - Switzerland's Lausanne University Hospital, the University of Lausanne, and the Swiss Federal Institute of Technology Lausanne were all involved in the process, and the implants themselves were developed by the French Atomic Energy Commission.
Musk’s “off the cuff” remark makes the entire process sound “cavalier” in my view and the brain isn’t something that can be manipulated without dire consequences for the patient if you get it wrong.
I daresay there are going to be agreements composed by lawyers which each recipient of this technology will need to sign, exonerating Neuralink and its executives of all responsibility should anything go wrong.
I must admit, I’m torn here (in the sense of the Swiss experiment) - part of me finds it morally wrong to interfere with the human brain like this because of the potential for irreversible damage, although the benefits are huge, obviously life changing for the recipient, and in most cases may outweigh the risk (at what level I cannot comment not being a neurosurgeon of course).
Interested in other views - would you offer yourself as a test subject for this? If I were in a wheelchair and couldn't move, I probably would, I think, but I would need assurance that the technology and its associated procedure are safe - and at this stage, I'm not convinced that's a guarantee that can be given. There are of course no real guarantees with anything these days, but this is a leap of faith that, once taken, cannot be reversed if it goes wrong.
I’ve just read this article with a great deal of interest. Whilst it’s not “perfect” in the way it’s written, it certainly does a very good job in explaining the IT function to a tee - and despite having been written in 2009, it’s still factually correct and completely relevant.
https://www.computerworld.com/article/2527153/opinion-the-unspoken-truth-about-managing-geeks.html
This is my interpretation:
The points made are impossible to disagree with. Yes, IT pros do want their managers to be technically competent - there’s nothing worse than having a manager who’s never been “on the tools” and is non technical - they are about as much use as a chocolate fireguard when it comes to being a sounding board for technical issues that a specific tech cannot easily resolve.
I’ve been in senior management since 2016 and being “on the tools” previously for 30+ years has enabled me to see both the business and technical angles - and equally appreciate both of them. Despite my management role, I still maintain a strong technical presence, and am (probably) the most senior and experienced technical resource in my team.
That’s not to say that the team members I do have aren’t up to the job - very much the opposite in fact and for the most part, they work unsupervised and only call on my skill set when they have exhausted their own and need someone with a trained ear to bounce ideas off.
On the flip side, I’ve worked with some cowboys in my industry who can talk the talk but not walk the walk - and they are exposed very quickly in smaller firms where it’s much harder to hide technical deficit behind other team members.
The hallmark of a good manager is one who knows how much is involved in a specific project or task in order to steer it to completion, and who is willing to step back and let others in the team take the driving seat. A huge plus is knowing how to get the best out of each individual team member without resorting to pointless techniques such as micromanagement - in other words, being on their wavelength, understanding their strengths and weaknesses, and then using those to the advantage of the team rather than the individual.
Sure, there will always be those in the team who you wouldn't put in front of clients - not because they don't know their field of expertise, but because they may lack the necessary polish or soft skills to give clients a warm fuzzy feeling, or may be unable (or simply unwilling) to explain technology to someone without a fundamental understanding of how a variety of components and services intersect.
That should never be seen as a negative though. A strong manager recognises that whilst some team members are uncomfortable being "front of house", they excel in other areas, supporting and maintaining technology that most users don't even realise exists, yet use daily (or some variant of it). It is these skills that keep IT departments and associated technologies running 24x7x365, and we should champion them more than we already do from the business perspective.
For a while now, my dual boot system (Windows, which I use for work, and KDE Neon which I use primarily) had been exhibiting issues in the sense that when I elected to start Neon, the system would hang during boot for 3-4 minutes and only display the below before finally showing the SDDM login
[photo: console message displayed during the boot hang, before the SDDM login appears]
Then, when you did actually get to log in, the system would hang for another two minutes before the desktop was displayed. And to add insult to injury, you couldn't interact with the system for another 45 seconds!
Frustrating - VERY frustrating 🤬
As I'd recently made the system dual boot for work and home usage, I thought that the BIOS could be at fault, so after some research I went ahead and upgraded it. Unsurprisingly, the BIOS was pretty out of date. I'm no fan of bleeding edge when it comes to motherboard firmware, and will only upgrade if there is a genuine requirement. Besides, it's pretty easy to brick a system by flashing an incorrect BIOS, which will leave you with a machine that won't start at all.
The BIOS upgrade went well, but certainly contributed nothing to resolving the issue at hand. Windows booted perfectly with no issues at all, but KDE? No dice. Same issue. After googling the message displayed just before SDDM started, I found literally hundreds of posts ranging from BIOS updates to display driver updates.
Undeterred (well, perhaps a bit disheartened), I started trawling through multiple posts trying to identify the Holy Grail. After a couple of hours of fruitless searching, I gave up. After all, KDE would actually boot, just not without significant delay. I could live with that (well, not really, but in the absence of a fix, I was going to have to put up with it).
Later in the week, I decided to clean the desk in my office meaning I needed to unplug everything so I could clear the area. After cleaning, I reconnected everything and powered the system back on. KDE was set as the default in the bootloader, so was the first choice - and it booted cleanly without errors and within the usual 5 seconds!
Ok. WTF. Somewhat perplexed at why this issue had suddenly "cured itself", I decided to have a look at the connections. For a long time, I've had a USB cable running from the back of the PC into my monitor so I can extend the USB ports and, at the same time, make them more accessible (rather than having to climb on the desk just to connect a new device).
I realised that this was one lead I hadn't connected. Curious, I shut down the PC and reconnected the cable. On boot, sure enough, a long delay before KDE started. I shut down again and removed the cable. On power on, no issues, and KDE boots in 5 seconds!
So after all that, it turns out it’s just a [censored] lead 🤬🤬 and I’ve spent ages looking for a “fix” when there actually isn’t one from the software perspective. I replaced the cable, and we’re all good. I really have no idea why Windows doesn’t complain though - and I certainly don’t have the energy to spend hours researching that!
One thing I'm seeing on a repeated basis is email addresses that do not match the site or the business they were intended for. People seem to spend an inordinate amount of time and money getting their website to look exactly the way they want it, in most cases with highly polished results.
They then subsequently undo all of that good work by using a contact email address on a completely different domain. I've seen a mix of Hotmail (probably the worst), Gmail, Outlook, Yahoo - the list is endless. If you've purchased a domain, then why not use it for email as well, so that users can trust your brand rather than feeling like they are about to be scammed!
One core reason for this is design services. They tend to build out the website design, but then stop short of finishing the job and setting up the email too. Admittedly, a new domain comes with the pitfalls of "trust" when judged by established email security gateways such as Mimecast (to name one of many examples), and even if you do set up the mail correctly, without the corresponding and expected SPF, DKIM, and DMARC records your email is almost certain to land in junk - if it even arrives at all.
Here’s a great guide I found that not only describes what these are, but how to set them up properly
https://woodpecker.co/blog/spf-dkim/
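If you want a quick sanity check on what a domain actually publishes, you can pull the relevant TXT records yourself. Below is a rough sketch using Python and the dnspython library - the domain and DKIM selector are purely placeholders, as the selector varies from provider to provider:

import dns.resolver  # third-party package: dnspython

domain = "example.com"  # placeholder - substitute the domain you're checking

def txt_records(name: str) -> list[str]:
    # Return the TXT record strings for a name, or an empty list if none exist
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

# SPF lives in a TXT record on the domain itself
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
# DMARC lives at _dmarc.<domain>
dmarc = txt_records(f"_dmarc.{domain}")
# DKIM lives at <selector>._domainkey.<domain> - "selector1" is only an example
dkim = txt_records(f"selector1._domainkey.{domain}")

print("SPF:", spf or "none found")
print("DMARC:", dmarc or "none found")
print("DKIM (selector1):", dkim or "none found")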
I suspect most of these "design boutiques" simply lack the experience and knowledge to get email working properly for the domain in question - either that, or they consider it outside the scope of what they are providing. But if I were asked to develop a website (and I've done a fair few in my time), then email is always in scope, and it will be configured properly. The same applies when I build a VPS, as others here will likely attest.
My personal experience of this was using a local alloy wheel refurb company (scuffed alloys on the car). I’d found a local company who came highly recommended, so contacted them - only to find that the owner was using a Hotmail address for his business! I did honestly reconsider, but after meeting up with the owner and seeing his work first hand (he’s done the alloys on two of my cars so far and the work is of an excellent standard), I was impressed, and he’s since had several work projects from me, and recommendations to friends and family.
I did speak to him about the usage of a Hotmail address on his website, and he said that he had no idea how to make the actual domain email work, and the guy who designed his website didn’t even offer to help - no surprises there. I offered to help him set this up (for free of course) but he said that he’d had that address for years and didn’t want to change it as everyone knew it. This is fair enough of course, but I can’t help but wonder how many people are immediately turned off or become untrusting because a business uses a publicly available email service…
Perhaps it’s just me, but branding (in my mind) is essential, and you have to get it right.
Ever since I moved the prior domain here to .Org, I’ve experienced issues with Google Search Console in the sense that the entire website is no longer crawled.
After some intensive investigation, it would appear that Cloudflare's Bot Fight Mode is the cause. Essentially, this tool works at the JavaScript level and blocks anything it considers suspicious - including Google's crawler. Cloudflare will tell you that you can exclude the crawler so that it is permitted access. However, they omit one critical element - you need a paid plan to do so.
With the Bot Fight Mode enabled, the website cannot be crawled
[screenshot: Google Search Console showing the crawl being blocked]
And with it disabled
[screenshot: Google Search Console showing a successful crawl]
This also appears to be a well known issue
https://community.cloudflare.com/t/bot-fight-mode-blocking-googlebot-bingbot/333980/6
However, it looks like Cloudflare simply “responded” by allowing you to edit the BFM ruleset in a paid plan, but if you are using the free mode, the only solution you have is to disable it completely. I’ve done this, and yet, my site still doesn’t index!
I've completely disabled Cloudflare (DNS only) for now to see if the situation improves.
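As an aside, if you're trawling your own access logs to work out whether the crawler is even reaching the origin, the usual way to confirm a hit really is Googlebot (rather than something spoofing its user agent) is a reverse-then-forward DNS check. A rough Python sketch - the IP address below is just an illustrative one from a published Googlebot range:

import socket

def is_real_googlebot(ip: str) -> bool:
    # Reverse-resolve the IP, check the hostname, then forward-resolve it again
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # e.g. crawl-66-249-66-1.googlebot.com
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup must return the same IP
    except (socket.herror, socket.gaierror):
        return False

print(is_real_googlebot("66.249.66.1"))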
This really is a great read and a trip down memory lane in terms of how the mobile telephone has evolved over the years into the devices we know today.
Some surprises along the journey (based on little known facts), but definitely well worth the read.
Interestingly, the reference to the UK's first mobile phone call, made in 1985, is featured in My Journey, which can be found here.
https://news.sky.com/story/the-matrix-phone-to-the-iphone-and-that-unforgettable-ringtone-50-moments-in-50-years-since-first-mobile-call-12845390
I can certainly draw a parallel with the BlackBerry devices. I had several of these when RIM was the technology leader. It's surprising how quickly things turn around - one minute they were industry pioneers, and the next they were on a path to rapid demise.
Then Nokia introduced the N95. I had one of these in 2006 for a short period, coupled with a Treo (who remembers those?)…
https://en.m.wikipedia.org/wiki/Treo_650
Enjoy the ride!
After seeing this post on the NodeBB forums, I wanted to get an idea of what members here think is important when joining a community, and the primary reasons they remain members and choose to keep coming back.
https://community.nodebb.org/topic/17178/vote-for-nodebb-ballot_box_with_ballot/36
I obviously have my own views around how communities should work, and if you look at the below blog post from me some time ago, it’s easy to see why I think this is important
https://sudonix.org/topic/141/how-to-destroy-a-community-before-it-s-even-built?_=1684754039261
Really interested in views, as to me this is what makes for an even greater experience 🙂
At the heart of today's communications is a network. Ranging from simplistic to complex, each of these frameworks plays a pivotal role in joining disparate nodes together. But what happens when a design or security flaw impacts the speed, functionality, and overall security of your network?
What factors create a network?

A network is a collection of components that, when joined together, provide the necessary transit to carry information from one system to another. The fundamental purpose of a network is to establish inter-connectivity between disparate locations, leverage a mutually understood communication language, and allow traffic to pass over a physical or logical link. The endless possibilities provided by a modern network allow businesses and individuals to communicate seamlessly, allowing for collaboration, communication, and integration whilst providing a centralised model for overall management.
The network has its origins as far back as the 1960s, and over the years various implementations of connectivity standards and the associated fabric have dawned and waned. The consolidation of these proposals (known as RFCs) created three new standards – Ethernet, UDP, and TCP/IP. These accepted standards now form the underlying foundations of the networks we use today – both from the enterprise perspective in the workplace and for the individual using the internet. Ethernet is the physical medium (a network card, for example), whilst TCP and UDP are the transport protocols - the common language mutually understood by thousands of vendors.
Adopted standards

These early standards became the groundwork on which the internet we know today was built. Formerly known as ARPANET, and originally developed as a university network, its popularity and usage grew exponentially to form the world's largest collection of interconnected devices, leading to it being nicknamed The Information Superhighway. The birth of the internet became the seed that established the genesis of communication we all now take for granted on a daily basis.
Today’s industry standards dictate how network equipment should be connected together, and with even the most basic knowledge, anyone can connect themselves to the internet in a matter of minutes. This ease of configuration and deployment means businesses and individuals can be online within a short time frame – albeit using an “out of the box” design, and with little (if any) consideration for security or risk.
Security implications

The security implications of any network are a constantly moving target. New vulnerabilities are discovered in vendor equipment on a daily basis, and with some of these vulnerabilities having been resident since day one (but either undiscovered or undisclosed), planning for every possible scenario isn't feasible – particularly if you have limited resources. When designing a network, it's important to implement a means of limiting the attack vector. Whilst this sounds very complex, to a seasoned network architect it isn't. Essentially, what you should be doing is creating a jail-based environment for each network segment.
Think outside of the box at this point – the general application of inside, outside, DMZ etc. no longer provides sufficient scope if an attacker has made it onto your internal network. For example, take two departments, such as accounting and operations. How likely is it that these two entities need to share information or communicate directly at a PC level? With this in mind, an accepted standard is for each department to reside in its own VLAN. Using industry-defined ACLs, each department cannot communicate directly with another. They do, however, have access to the server VLAN - although this should also follow a similar security regime of only permitting access to essential services - in other words, adopt the least privilege model.
Whilst this sounds obvious, most network designs do not factor in this basic requirement. By "segmenting" each department, you establish a boundary between each of them. This means that if malware were to be installed on a PC in accounting, it would not be able to infect a machine in operations or HR. Containerised network designs are secure, but not perfect. In the event of a PC being infected with malware, the VLAN it resides in still has access to the servers and other associated infrastructure that the client needs in order to perform its desired function. In this case, you would also need to permit access only to critical or essential services. The upside of such an approach is that a malware or ransomware attack is limited to infecting a small number of machines rather than the whole network. The downside is that there is an initial overhead in terms of discovery, implementation, and testing. In my view, however, the dividends outweigh the effort.
Balancing security against functionality

Securing the server VLAN can be problematic. Establishing a balance between overly gratuitous and insufficient connectivity is the ultimate headache. At this point, you need to consider what resides in this network segment. In essence, it's the business equivalent of the crown jewels - the critical components of your entire estate. This "no fly zone" contains a wealth of information that is of interest and value to a cyber criminal. Assets such as intellectual property, financial data, and personally identifiable information are all potential targets in the event of a data breach.
If you consider the role that servers have, you'll probably find that most of them really should not have (or even need) raw access to the internet. There are always some exceptions to this rule, but one of the first target areas to consider is the level of access to the outside world granted to a server. Even a server using NAT to communicate with an external host is at risk of compromise. From the network perspective, establishing a remote connection is just the start of a series of conversations and negotiations between the two endpoints. The main difference between TCP and UDP is that one waits for a response to a connection, whereas the other does not. UDP is a fire-and-forget protocol, making it ideal for DNS, SNMP, SYSLOG, and a wealth of other applications. TCP, on the other hand, will wait for a response from the remote host before continuing with the session. A lack of access in or out of a VLAN is not an attractive prospect for even a determined hacker.
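To make the fire-and-forget point a little more concrete, here's a short Python sketch contrasting the two - the addresses are from the TEST-NET range and purely illustrative:

import socket

# UDP: the datagram is sent without any handshake, and no error is raised
# if nothing is listening on the far end - hence "fire and forget"
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"syslog-style message", ("192.0.2.10", 514))
udp.close()

# TCP: connect() performs the three-way handshake and blocks (or fails)
# until the remote host responds, so the sender knows whether a session exists
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(3)
try:
    tcp.connect(("192.0.2.10", 80))
    print("TCP session established")
except OSError as exc:
    print(f"TCP connection failed: {exc}")
finally:
    tcp.close()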
Using various techniques, a cyber criminal can intercept TCP headers to inject malicious content or a payload, or masquerade as the remote host by means of a TCP redirect. This means that the network you are connected to may not be what you expected or desired. Packet sniffing is very easy once you have an understanding of how products such as Wireshark function (an exploit known as eavesdropping). In order to significantly reduce the possibility of attack, only servers that have an essential requirement for an internet connection in order to fulfil their designated function should be permitted access – and even then, only to the ports and IP addresses required, and nothing else. It goes without saying that industry standards should be adopted and adhered to – requests should pass via a firewall with IDS and IPS capabilities. These devices have the ability to look at a network stream and determine if it has been tampered with. If the hashes do not match, or there are signs of modification not requested by either party, the session is destroyed (if using IPS) and an alert raised. This functionality can be dramatically altered by misconfiguration, so check thoroughly.
Vulnerabilities generated by older firmware

Devices running outdated firmware are subject to compromise and potential exploit – particularly if they are edge routers that are accessible from the outside world. Older firmware on exposed routers can pose a significant risk to your perimeter and internal networks if vulnerabilities are not located and resolved quickly. A vulnerable router on a network can easily become an infiltration and extraction point, and you could find yourself the unwilling target of a data breach.
On the whole, adequate network design not only takes redundancy, scalability, and availability into account, but also security and stability. A classic example of failure to address the latter is the difference between your network standing up to a DDoS attack, or still being functional with only one VLAN or segment impacted. The days where we only made provisions for disaster recovery and business continuity are over. Security needs sound investment and knowledge in order to understand principles and apply standards correctly.
Takeaway

I'm not into preaching to others about how they should be doing things from the networking perspective, but my basic advice would be:

- Carefully plan any new network implementation in advance. Visio and whiteboard sessions are important when thrashing out ideas, as an overall picture of the landscape is generally easier to digest than just text.
- Involve peer groups and key individuals from the outset. Everyone has their own unique insight as to how things should be structured, and just because it works for security, or the model you are developing, it may not necessarily work for the business as a whole.
- Be prepared to make changes to the design, and by definition, listen to business advice. Nobody creates the holy grail of network concepts and implementation on their first attempt.
- Unless you are blessed with a greenfield site, make a point of understanding the existing infrastructure and architecture, and design a mechanism for coexistence between the two environments. Be mindful of the potential for conflicting standards when dealing with different vendor equipment, and also consider that security could be negated in the existing environment whilst the integration process is underway.

These are just a few of the points – there are many others. Want to know more, or have questions? Just ask 🙂
During an unrelated discussion today, I was asked why I prefer Linux over Windows. The most obvious responses are that Linux does not have any licensing costs (perhaps not entirely the case with RHEL) and is capable of running on hardware much older than Windows 10 will readily accept (or run on without acting like a snail). The other selling point for Linux is that it's the backbone of most web servers these days, running either Apache or NGINX.
The remainder of the discussion centred on the points below:
Linux is pretty secure out of the box (based on the fact that most distros update as part of the install process), whilst Windows, well, isn't. Admittedly, there's an argument for both sides of the fence here - the most common being that Windows is more of a target because of its popularity and market presence. In other words, malware, ransomware, and "whatever-other-nasty-ware" (you fill in the blanks) are typically designed for the Windows platform in order to increase the success and hit rate of any potential campaign to its full potential.
Windows also ships as a monolithic installation, meaning it's installed in its entirety regardless of the hardware it sits on. What makes Linux different is that kernel modules are built and loaded based on the hardware in the system, so there's no "bloat" - and you are free to modify the system directly if you don't like the layout or material design that the developer provided.
Linux is far superior in the security space. Windows only acquired “run as” in Windows XP, and a “reasonable” UAC environment (the reference to “reasonable” is loose, as it relates to Windows Vista). However, Microsoft were very slow to the gate with this - it’s something that Unix has had for years.
Possibly the most glaring security hole in Windows systems (in terms of NTFS) is that NTFS volumes can be easily read from Linux, yet Windows cannot natively read Linux's EXT file systems. And let's not forget that it's a simple exercise to use Linux to break the SAM database on a Windows install and reset the local admin account.
Linux enjoys an open source community where issues reported are often picked up extremely quickly by developers all over the world, resolved, and an update issued to multiple repositories to remediate the issue.
Windows cannot be run from a DVD or thumb drive. Want to use it? You'll have to install it.
Linux isn’t perfect by any stretch of the imagination, but I for one absolutely refuse to buy into the Microsoft ecosystem on a personal level - particularly using an operating system that by default doesn’t respect privacy. And no prizes for guessing what my take on Apple is - it’s essentially BSD in an expensive suit.
However, since COVID, I am in fact using Windows 11 at home, but that's only for integration with what I need for work. If I had the choice, I would be using Linux. There are a number of applications I'd consider core that just do not work properly under Linux, and that's the only real reason why I made the decision (somewhat resentfully) to move back to Windows on the home front.
Here's a thought to leave you with. How many penetration testers do you know that use Windows for vulnerability assessments?
This isn't meant to be an "operating system war". It's a debate.
Here's a subject that's close to my heart. I saw a thread on Twitter recently that essentially dismissed those who attended state school as not being worthy of entry into Oxbridge, effectively "tarnishing the brand" - I kid you not. Here's the screenshot which piqued my interest, quickly followed by disbelief.
[screenshot: the tweet in question]
And here’s my response to this
Snobbery at its best (or worst, if you prefer). I attended a comprehensive school, didn’t go to college or university, don’t have a degree - but here I am - Director of Information Technology and CISO for a financial organisation. Oh, and I’m also free of the debt mantle…
My point? I attended a comprehensive, didn’t go to college (straight into employment) and didn’t have the funds or other financial means to attend university and get a degree. However, as a shining example of what effort and experience can achieve, I’m a Director of Information Technology, and Chief Information Security Officer for a financial firm in London.
Does that mean I, along with others who attended state school rather than private, and didn’t go to university aren’t fit for purpose? Do you really need a degree to succeed in life?
I have my thoughts on this but would love to hear others. I know @marusaky will have a view on this 🤔