Surface Web, Deep Web, And Dark Web Explained

Blog
  • When you think about the internet, what’s the first thing that comes to mind? Online shopping? Gaming? Gambling sites? Social media? Each of these relies on the internet to make it possible, and it would be almost impossible to imagine life without the web as we know it today. However, how well do we really know the internet and its underlying components?

    Let’s first understand the origins of the Internet

    The “internet” as we know it today began life as ARPANET - a rather friendlier name than the full “Advanced Research Projects Agency Network”. The first workable version appeared in the late 1960s. Initially funded by the U.S. Department of Defense, it used early forms of packet switching to allow multiple computers to communicate on a single network - the forerunner of the interconnected networks we rely on today.

    The internet itself isn’t one machine or server. It’s an enormous collection of networking components - switches, routers, servers and much more - located all over the world, all communicating using common “protocols” (agreed rules that govern how data travels between connected devices) such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). Both TCP and UDP use the principle of “ports” to create connections, and each connected device requires an internet address (known as an IP address). This address is unique to each device, meaning it can be identified individually amongst millions of other interconnected devices.
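To make the idea of ports and addresses concrete, here is a minimal sketch (not from the original article) using only Python’s standard library: a TCP listener bound to 127.0.0.1 on an OS-assigned port, and a client that connects to that (address, port) pair and gets its bytes echoed back.

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one TCP connection and echo back whatever the client sends."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to port 0 so the OS picks a free port - a "port" is just a 16-bit
# number that lets many conversations share one IP address.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The client reaches the server via the (IP address, port) pair.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

server.close()
```

Both ends here live on one machine, but the same (address, port) addressing works identically across the global internet.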

    In 1983, ARPANET began to leverage the newly available TCP/IP protocol which enabled scientists and engineers to assemble a “network of networks” that would begin to lay the foundation in terms of the required framework or “web” for the internet as we know it today to operate on. The icing on the cake came in 1990 when Tim Berners-Lee created the World Wide Web (www as we affectionately know it) - effectively allowing websites and hyperlinks to work together to form the internet we know and use daily.

    However, over time, the internet model changed as various sites sought to remain outside the reach of search engines such as Google, Bing, Yahoo, and the like. This also gave content owners a mechanism to charge users for access to content - referred to today as a “paywall”. Out of this new model came, effectively, three layers of the internet.

    Three “Internets”?

    To make this easier to understand (hopefully), picture the internet as an iceberg: the Surface Web is the visible tip, while the Deep Web and Dark Web sit beneath the waterline.

    The “Surface Web”

    Ok - with the history lesson out of the way, we’ll get back to the underlying purpose of this article, which is to reveal the three “layers” of the internet. The easiest way to explain this is the “Iceberg Model”.

    The Surface Web is the internet that forms part of our everyday lives: publicly accessible sites indexed by search engines, such as Google, Bing, Yahoo (to a lesser extent) and Wikipedia - common examples among thousands more.

    The “Deep Web”

    The next layer is known as the “Deep Web”, which typically consists of sites that do not expose themselves to search engines. They cannot be “crawled”, and so will not feature in Google searches - you cannot reach the content via a direct link without first logging in. Sites in this category include Netflix, your Amazon or eBay account, PayPal, Google Drive, and LinkedIn - essentially, anything that requires a login before you can gain access.
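Two mechanisms keep Deep Web content out of search results: login walls, and the robots exclusion protocol that well-behaved crawlers honour. As an illustrative sketch (the site and paths are made up for the example), Python’s standard library can show how a crawler decides what it may fetch:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: the /account/ area is off-limits to all crawlers.
robots_txt = """\
User-agent: *
Disallow: /account/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A public product page may be crawled; a logged-in account page may not.
public_ok = parser.can_fetch("SomeBot", "https://example.com/products/1")
private_ok = parser.can_fetch("SomeBot", "https://example.com/account/orders")
```

Note that robots.txt is purely advisory - it keeps polite crawlers out, but it is the login wall that actually prevents the content from being read and indexed.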

    The “Dark Web”

    The third layer down is known as the “Dark Web” - and it’s “dark” for a reason. These are sites that truly live underground, out of reach of most standard internet users. Access is typically gained via a Tor (The Onion Router - a bit more about that later) enabled browser, with website addresses made up of seemingly random characters (often changing to avoid detection) and the suffix .onion. If I were asked to describe the Dark Web, I’d describe it as an underground online marketplace where literally anything goes - and I do mean “anything”.
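As an aside, current (v3) .onion addresses have a fixed shape - 56 base32 characters followed by the .onion suffix - so a basic format check is easy to sketch. This is purely illustrative: it only validates the shape of an address, not whether anything answers at it.

```python
import re

# A v3 onion address is 56 base32 characters (a-z and 2-7) plus ".onion".
# The characters are derived from the service's public key, which is why
# the addresses look random and cannot be chosen freely.
V3_ONION = re.compile(r"^[a-z2-7]{56}\.onion$")

def looks_like_v3_onion(hostname: str) -> bool:
    """Format check only - says nothing about whether the service exists."""
    return bool(V3_ONION.match(hostname))
```

Ordinary DNS knows nothing about these names; they only resolve inside the Tor network itself.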

    Examples include:

    • Ransomware
    • Botnets
    • Bitcoin trading
    • Hacker services and forums
    • Financial fraud
    • Illegal pornography
    • Terrorism
    • Anonymous journalism
    • Drug cartels (including online marketplaces for sale and distribution - good examples being Silk Road and Silk Road II)
    • Whistleblowing sites
    • Information leakage sites (a bit like Wikileaks, but often containing information that even that site cannot obtain and make freely available)
    • Murder for hire (hitmen etc.)

    Takeaway

    The Surface, Deep, and Dark Web are in fact interconnected. The purpose of these classifications is to determine where certain internet activities fit. While activity on the Surface Web is, for the most part, secure, activity in the Deep Web is hidden from view but not necessarily harmful by nature. It’s very different in the case of the Dark Web. Thanks to its (virtually) anonymous nature, little is known about its true content. Various attempts have been made to “map” the Dark Web, but given that URLs change frequently and there is generally no trail of breadcrumbs leading to the surface, it’s almost impossible to do so.
    In summary, the Surface Web is where search engine crawlers go to fetch useful information. By direct contrast, the Dark Web plays host to an entire range of nefarious activity, and is best avoided for security concerns alone.

  • @phenomlab some months ago I took a look at the dark web… and I never want to see it again.

    It’s really… dark. The content I saw scared me a lot…

  • @justoverclock yes, completely understand that. It’s a haven for criminal gangs and literally everything is on the table. Drugs, weapons, money laundering, cyber attacks for rent, and even murder for hire.

    Nothing, it seems, is off limits. The dark web is truly a place where the only limitation is the amount you are prepared to spend.




    @phenomlab
    Sorry for the delay in responding - yes, as I mentioned above, I had to remove my Redis container from Docker and reinstall a new image with this command

    docker run --name=redis -p 127.0.0.1:6379:6379 -d -t redis:alpine

    and now when I test my IP and port on
    https://www.yougetsignal.com/tools/open-ports/

    the status of my Redis port is closed. I think configuring the firewall on my DigitalOcean droplet is a good idea too, and I will set that up soon.
    Thanks for the help!
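As a side note, a check like the one yougetsignal performs can also be done locally with a few lines of Python - a minimal sketch using only the standard library (the host and port are whatever you want to probe):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED) on failure
        return s.connect_ex((host, port)) == 0

# Example: is_port_open("127.0.0.1", 6379) tells you whether Redis answers locally.
```

Bear in mind this only tests reachability from wherever the script runs - a port blocked by an external firewall can still look open when probed from the droplet itself, which is why an outside check like yougetsignal is still useful.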

  • Nodebb as blogging platform


    @qwinter I’ve extensive experience with Ghost, so let me know if you need any help.


    Missed out on this deal? Windscribe offer a limited free version. More about that here:
    https://sudonix.org/topic/13/which-product-is-the-best-for-vpn/164?_=1652206628456



    I’ve been a veteran of the infosec industry for several years, and during that time, I’ve been exposed to a wide range of technology and situations alike. Over this period, I’ve amassed a wealth of experience around information security, physical security, and systems. Eighteen years of that experience were gained within the financial sector - the remainder spread across manufacturing, retail, and several other areas. I’ve always classed myself as a jack of all trades and a master of none. The real reason for this is that I wanted to gain as much exposure to the world of technology as possible without “shoehorning” myself - pigeonholing my career and restricting my overall scope.

    I learned how to both hack and protect 8086 / Z80 systems back in 1984, and was using “POKE” well before Facebook coined the phrase and made it trendy. One of the actual commands I still remember to this day - it rendered the CTRL, SHIFT, ESC break sequence useless - was

    POKE &bdee, &c9

    I spent my youth dissecting systems and software alike, understanding how they worked, and more importantly, how easily they could be bypassed or modified.

    Was I a hacker in my youth? If you understand the true meaning of the word, then yes - I most definitely was.

    If you think a hacker is a criminal, then absolutely not. I took the various skills I’d obtained over the years, honed them, and turned myself into a walking information source - a living, breathing technology encyclopedia that could be queried simply by asking a question (but not vulnerable to SQL injection).

    Over the years, I took an interest in all forms of technology, and was deeply immersed in the “virus era” of the 2000s. I already understood how viruses worked (after dissecting hundreds of them in a home lab), and the level of damage that could be inflicted by one paved the way for a natural progression to early, somewhat infantile malware. In its earliest form, this malware was easily spotted and removed. Today’s campaigns see code that will delete itself after successful execution, leaving little to no trace of its activity on a system. Once the APT (Advanced Persistent Threat) acronym became mainstream, the world and its brother realised they had a significant problem on their hands, and needed to respond accordingly. I’d realised early on that one of the best defences against ever advancing malware was containment. If you “stem the flow”, you reduce the overall impact - essentially restricting the malicious activity to a small subset rather than your entire estate.

    I began collaborating with various stakeholders in the organisations I worked for over the years, carefully explaining how modern threats worked and the level of damage they could inflict - initially from an information and financial perspective, extending to reputational damage and a variety of others as campaigns increased in complexity. I recall one incident during a tenure within the manufacturing industry where I provided a proof of concept. At the time, I was working as a pro bono consultant for a small company, and I don’t think they took me too seriously.

    Using an existing and shockingly vulnerable Windows 2003 server (still on its original configuration - they had no patching regime, meaning all systems were effectively vanilla), I demonstrated how simple it would be to gain access first to this server, then steal the hash - effortlessly using that token to gain full access to other systems without even knowing the password (“pass the hash”). A very primitive exercise by today’s standards, but effective nonetheless. I explained every step of what I was doing along the way, then explained how to mitigate this simple exploit - I even provided a step by step guide on how to reproduce the vulnerability and how to remediate it, along with my recommendations for the necessary steps to enhance security across their estate. Their response was, frankly, shocking. Not only did they attempt to refute my findings, but at the same time they dismissed the issue as trivial - effectively brushing it under the carpet, so to speak. This wasn’t a high profile entity, but the firm in question was AIM listed and, by definition, duty bound - they had a responsibility to shareholders and stakeholders to resolve this issue. Instead, they remained silent.

    Being pro bono meant that my role was an advisory one, and I wasn’t charging for my work. The firm had asked me to perform a security posture review, yet somehow didn’t like the result when it was presented to them. I informed them that they were more than welcome to obtain another opinion, and should process my findings as they saw fit. I later found out through a mutual contact that my findings had been dismissed as “unrealistic”, and another consultant had certified their infrastructure as “safe”. I almost choked on my coffee, but wrote this off as a bad experience. Two months later, I got a call from the same mutual contact telling me that my findings were indeed correct. He had been contacted by the same firm asking him to provide consultancy for what, on the face of it, looked like a compromised network.

    Then came the next line which I’ll never forget.

    “I don’t suppose you’d be interested in……”

    I politely refused, saying I was busy on another project. I actually wasn’t, but refused out of principle. And so, without further ado, here’s my synopsis

    “…if you choose not to listen to the advice a security expert gives you, then you are leaving yourself and your organisation unnecessarily vulnerable. Ignorance is not bliss when it comes to security…”

    Think about what you’ve read for a moment, and be honest with me - say so if you think this statement is harsh given the previous content.

    The point I am trying to make here is that despite sustained effort and valiant attempts to raise awareness - constantly telling people they have gaping holes in their systems - having them ignore the advice (and the fix I’ve handed to them on a plate) is extremely frustrating. Those in the InfoSec community are duty bound to responsibly disclose, inform, educate, raise awareness, and help protect, but that doesn’t extend to wiping people’s noses and telling them it wasn’t their fault that they failed to follow simple advice that probably could have prevented their inevitable breach. My response here is that if you bury your head in the sand, you won’t see the guy running up behind you intent on kicking you up the ass.

    Security situations can easily be avoided if people are prepared to actually listen and heed advice. I’m willing to help anyone, but they in return have to be equally willing to listen, understand, and react.


    Once in a while, you encounter a repetitive issue that, no matter what you try, manifests itself over and over again - sometimes even on a daily basis. How the issue is remediated really depends on the person assigned to the task.

    You might be puzzled at why I’d write about something like this, but it’s a situation I see constantly - one I like to refer to as “over-thinker syndrome”. What do I mean by this? Here’s the theory. Some people are very analytical when it comes to problem solving. Couple that with technical knowledge and you could end up with a situation where something relatively simple gets blown out of all proportion, because the scenario played out in the mind is often much further from reality than you’d expect - and the technical reasoning is usually to blame. Sometime around 2007, a colleague noticed that the Exchange Server (2003, wouldn’t you know) would suddenly reboot halfway through a backup job. Rightly so, he wanted to investigate, and asked me if this would be ok. Anyone with an ounce of experience knows that functional backups are critical in the event of a disaster - none more so than I - so obviously, I gave the go-ahead. One bright spark in my team suggested a reboot of the server, which immediately prompted the response

    “…it’s rebooting itself every day, so how will that help ?”

    The investigation

    Joking aside, we’ve all heard the “have you rebooted?” question touted at some point during helpdesk discussions, but this one was different. A system rebooting itself is usually symptomatic of an underlying issue somewhere, and my team member was ready for the task ahead. Stepping up to the plate, he asked if it was ok to install some monitoring software on the server. Usually, installing additional software components on a production server without testing first is a non-starter, but seeing as we needed this resolved as quickly as possible to reinstate the nightly backup (which incidentally hadn’t run successfully for three days by this point), I provided approval to proceed without question. There’s a leap of faith here, as you could cause more problems than you actually set out to resolve, but, as with anything related to information technology, sometimes you have to accept an element of risk. The software itself was actually for the RAID controller and motherboard. The assigned technician had already decided the cause was something along the lines of a faulty RAM module, or perhaps an issue with the controller itself. My thoughts leaned elsewhere at this point - if the server reboots itself at exactly the same time every day, then there is an established pattern which should be investigated first. It’s a logical approach, but it’s a common trait for technical support staff to think outside of the box - or in this case, outside of the building. Not wanting to push my opinion, or trample on anyone’s toes, I decided to remain quiet and see just how far this would go before intervention was required.

    In this case, not very far. The following morning, after another unannounced nightly reboot, the error “the previous shutdown at [insert time and date here] was unexpected” showed up in the event log. No real surprises there, and once again, exactly the same time as the previous night. I asked my technician for an update, and he informed me that he believed the memory was faulty and was somehow causing the server to blue screen and reboot. That was actually a reasonable response, so I commended him on his research and findings, but also reminded him to perform a manual backup so that we at least had something to revert to in the event of a failure. Later that afternoon, the same tech approached me and said that he had ordered some replacement memory and wanted to arrange downtime to fit it. Trying to keep a poker face and remain passive, I agreed, and the memory was replaced the same evening around 10pm. At 2am the following morning - kaboom! The server rebooted itself again. Not wanting to admit defeat, our courageous tech suggested that the problem could be due to the system overheating. Another fair point, but not realistic, as you’d see this in the event log as a thermal shutdown. I willingly entertained this, and allowed investigations into the CPU temperature to begin - after another manual backup. Unsurprisingly, the temperature data returned no smoking gun, so that was abandoned. The next port of call was to reapply the service pack. Now, I’ll admit that this used to fix a multitude of issues under Windows NT Server (particularly Service Pack 4), but not under Windows 2003. I declined this for obvious reasons - if you reapply the service pack, you run the risk of overwriting key DLL files that could (and often will) render Exchange inoperable. Not being prepared to introduce an unprecedented risk into what was already becoming something of a showcase, I suggested that we look elsewhere.

    The exasperation

    The final (and honestly, most realistic) suggestion was to enable verbose logging in Exchange. This is actually a good idea, but only if you suspect that the information store could be the issue. Given the evidence, I wasn’t convinced. If there was corruption in the store, or on any of the disks, it would show itself randomly through the day and wouldn’t wait until 2am in the morning. Not wanting to come across as condescending, I agreed, but at the same time set a deadline for escalation. I wasn’t overly concerned about the backups, as these were being completed manually each day whilst the investigations were taking place. Neither was I concerned at what could be seen at this point as wasting someone’s time when you think you may have the answer to what now seemed an impossible problem. This is where experience will eclipse any formal qualifications hands down. Those with university degrees may scoff at this, but those with substantially analytical thinking patterns sometimes avoid logic like the plague and go off on a wild tangent, looking for a dramatically technical explanation and solution to a problem when it’s much simpler than you’d expect. Hence the title of this article - avoid the “bulldozer to find a china cup” scenario. After witnessing another pained expression on the face of my now exasperated and exhausted tech, I said “let’s get a coffee”. In agreement, he followed me to the kitchen and asked me what I thought the problem could be. I said that if he wanted my advice, it would be to step back and look at this problem from a logical angle rather than a technical one. The confused look I received was priceless - the guy must have really thought I’d lost the plot. After what seemed like an eternity (although in reality only a few seconds), he asked me what I meant by this. “Come with me”, I said. Finishing his coffee, he diligently followed me to the server room. Once inside, I asked him to show me the Exchange Server. Puzzled, he correctly pointed out the exact machine. I then asked him to trace the power cables and tell me where they went.

    As with most server rooms, locating and identifying cables can be a bit of a challenge after equipment has been added and removed, so this took a little longer than we expected. Eventually, the tech traced the cables back to

    …an old looking UPS that had a red light illuminated at the front like it had been a prop in a Terminator film.

    The realisation

    Suddenly, the real cause of this issue dawned on the tech like a morning sunrise over the Serengeti. The UPS that the Exchange Server was unexpectedly connected to had a faulty battery. The UPS was conducting a self-test at 2am each morning, and because that test failed owing to the dead battery, the connected server lost power - starting back up once the offending equipment left bypass mode and went back online.

    Where is this going, you might ask? Here’s the moral of this (particular, and many others like it) story:

    • Just because a problem involves technology, it doesn’t mean that the answer has to be a complex technical one
    • Logic and common sense have a part to play in all of our lives. Sometimes, it makes more sense just to step back, take a breath, and see something for what it really is before deciding to commit
    • It’s easy to allow technical expertise to cloud your judgement - don’t fall into the trap of using a sledgehammer to break an egg
    • You cannot buy experience - it’s earned, gained, and leaves an indelible mark

    Let’s hear your views. Did you ever come across a situation where no matter what you tried, nothing worked? Did the solution turn out to be much simpler than you’d have ever thought?

  • is my DMARC configured correctly?


    @phenomlab said in is my DMARC configured correctly?:

    you’ll get one from every domain that receives email from yours.

    Today I received another DMARC mail from Outlook. I referred back to your reply and found it very helpful and informative - thanks again.

    I wish sudonix 100 more great years ahead!