
Surface Web, Deep Web, And Dark Web Explained

Blog
    When you think about the internet, what’s the first thing that comes to mind? Online shopping? Gaming? Gambling sites? Social media? Each of these relies on internet access to work, and it would be almost impossible to imagine life without the web as we know it today. However, how well do we really know the internet and its underlying components?

    Let’s first understand the origins of the Internet

    The “internet” as we know it today in fact began life as a project called ARPANET. The first workable version arrived in the late 1960s, and the acronym quickly replaced the rather less friendly “Advanced Research Projects Agency Network”. The project was initially funded by the U.S. Department of Defense, and used early forms of packet switching to allow multiple computers to communicate on a single network.

    The internet itself isn’t one machine or server. It’s an enormous collection of networking components such as switches, routers and servers located all over the world - all communicating using common “protocols” (agreed rules that govern how data travels between connected devices) such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). Both TCP and UDP use “ports” to direct traffic to the right application, and each connected device requires an internet address (known as an IP address), which identifies it individually amongst millions of other interconnected devices.
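    To make the idea of IP addresses, ports and TCP a little more concrete, here is a minimal sketch (using Python’s standard socket module purely as an illustration - the hostname is just an example, not something from this article) that resolves a name to an IP address and opens a TCP connection on port 80, the conventional HTTP port:

    ```python
    # A minimal sketch, assuming Python 3 and outbound access to the example host below.
    import socket

    host = "example.com"                     # illustrative hostname only
    ip_address = socket.gethostbyname(host)  # resolve the name to an IP address
    print(f"{host} resolves to {ip_address}")

    # Open a TCP connection to that address on port 80 (the conventional HTTP port)
    with socket.create_connection((ip_address, 80), timeout=5) as conn:
        conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(conn.recv(200))                # first bytes of the server's reply
    ```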

    In 1983, ARPANET adopted the newly available TCP/IP protocol suite, which enabled scientists and engineers to assemble a “network of networks” - the framework on which the internet as we know it today operates. The icing on the cake came in 1990 when Tim Berners-Lee created the World Wide Web (the “www” we affectionately know), effectively allowing websites and hyperlinks to work together to form the web we use daily.

    However, over time, the internet model changed as various sites chose to remain outside the reach of search engines such as Google, Bing, Yahoo, and the like. This also gave content owners a mechanism to charge users for access to content - referred to today as a “paywall”. Out of this new model came, effectively, three layers of the internet.

    Three “Internets”?

    To make this easier to understand (hopefully), I’ve put together the below diagram
    [Diagram: the internet “iceberg” - Surface Web at the tip, Deep Web below the waterline, Dark Web at the bottom]

    The “Surface Web”

    Ok - with the history lesson out of the way, we’ll get back to the underlying purpose of this article, which is to reveal the three “layers” of the internet. The easiest way to explain this is with the “iceberg model”.

    This is the tip of the iceberg: the part of the internet that forms our everyday lives - sites such as Google, Bing, Yahoo (to a lesser extent) and Wikipedia (common examples - there are thousands more), all indexed by search engines and freely accessible to anyone.

    The “Deep Web”

    The next layer down is known as the “Deep Web”, which typically consists of sites that do not expose themselves to search engines, meaning they cannot be “crawled” and will not feature in Google searches - you cannot reach the content via a direct link without first having to log in. Sites in this category include Netflix, your Amazon or eBay account, PayPal, Google Drive and LinkedIn - essentially, anything that requires a login before you can gain access.
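    The difference is easy to observe from the outside. Below is a minimal sketch (Python standard library only; the two URLs are illustrative assumptions, not endorsements of any particular service) that fetches a page with no credentials - a Surface Web page simply returns its content, whereas a Deep Web resource typically answers 401/403 or redirects you to a sign-in page:

    ```python
    # A minimal sketch, assuming Python 3 and outbound HTTPS access; URLs are illustrative.
    from urllib import request, error

    def check(url: str) -> None:
        """Fetch a URL with no credentials and report what an anonymous visitor sees."""
        try:
            with request.urlopen(url, timeout=10) as resp:
                # If we were silently redirected to a sign-in page, resp.url will differ from url
                print(f"{url}\n  -> HTTP {resp.status}, final URL: {resp.url}")
        except error.HTTPError as exc:
            # 401/403 is the classic "Deep Web" response: the content exists, but not for you
            print(f"{url}\n  -> HTTP {exc.code} (requires a login or subscription)")

    check("https://en.wikipedia.org/wiki/ARPANET")  # Surface Web: indexed and open to anyone
    check("https://www.linkedin.com/feed/")         # Deep Web: a personal feed behind a login
    ```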

    The “Dark Web”

    The third layer down is known as the “Dark Web” - and it’s “dark” for a reason. These are sites that truly live underground, out of reach of most standard internet users. Typically, access is gained via a Tor (The Onion Router) enabled browser, with website addresses made up of seemingly random characters (which change often to avoid detection) and ending in the suffix .onion - there’s a short sketch of how such a connection works after the list below. If I were asked to describe the Dark Web, I’d describe it as an underground online marketplace where literally anything goes - and I mean “anything”.

    Examples include:

    • Ransomware
    • Botnets
    • Bitcoin trading
    • Hacker services and forums
    • Financial fraud
    • Illegal pornography
    • Terrorism
    • Anonymous journalism
    • Drug cartels (including online marketplaces for sale and distribution - well-known examples being Silk Road and Silk Road II)
    • Whistleblowing sites
    • Information leakage sites (a bit like Wikileaks, but often containing information that even that site cannot obtain and make freely available)
    • Murder for hire (hitmen etc.)
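
    As mentioned above, .onion addresses aren’t reachable through a normal browser or DNS - traffic has to be routed through Tor. The sketch below is illustrative only: it assumes the third-party requests package installed with SOCKS support, a local Tor client listening on its default SOCKS port 9050, and a deliberately fake placeholder .onion address.

    ```python
    # A minimal sketch, assuming `pip install requests[socks]` and a local Tor client
    # listening on 127.0.0.1:9050 (Tor's default SOCKS port).
    import requests

    # socks5h:// means DNS resolution also happens inside the Tor network - this is what
    # allows .onion names to resolve at all, since they have no entry in the public DNS.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    # Placeholder only - real .onion addresses are long strings of seemingly random characters
    onion_url = "http://examplexamplexamplexamplexamplexamplexamplexample.onion/"

    resp = requests.get(onion_url, proxies=proxies, timeout=60)
    print(resp.status_code)
    ```

    Without those proxy settings (or the Tor Browser, which does the same thing behind the scenes), the request would simply fail, because a .onion name cannot be resolved by ordinary DNS.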

    Takeaway

    The Surface, Deep, and Dark Web are in fact interconnected. The purpose of these classifications is simply to describe where certain activities on the internet fit. While activity on the Surface Web is, for the most part, secure, activity on the Deep Web is hidden from view but not necessarily harmful by nature. It’s very different in the case of the Dark Web. Thanks to its (virtually) anonymous nature, little is known about its true content. Various attempts have been made to “map” the Dark Web, but given that URLs change frequently and there is generally no trail of breadcrumbs leading back to the surface, it’s almost impossible to do so.
    In summary, the Surface Web is where search engine crawlers go to fetch useful information. By direct contrast, the Dark Web plays host to an entire range of nefarious activity, and is best avoided for security concerns alone.

  • @phenomlab some months ago I took a look at the dark web… and I never want to see it again…

    It’s really… dark. The content I saw scared me a lot…

  • @justoverclock yes, completely understand that. It’s a haven for criminal gangs and literally everything is on the table. Drugs, weapons, money laundering, cyber attacks for rent, and even murder for hire.

    Nothing, it seems, is off limits. The dark web is truly a place where the only limitation is the amount you are prepared to spend.

