Deep Web: The phenomenon even Google cannot see

Like an iceberg, of which only the tip is visible, the Internet harbours a vast amount of data that goes entirely unnoticed. Only a small part of it, however, poses a serious problem.

In addition to the visible portion, which any everyday Web surfer can access, the Internet contains a far larger space that search engines cannot index. The nature of this so-called “Deep Web” makes it impossible to determine the volume and content of these data masses precisely. Estimates suggest this invisible part of the Internet could be up to 500 times larger than the public Web.

The Deep Web is known to include blocked and password-protected websites as well as networks with restricted access. It also contains exotic file and database formats that predate the World Wide Web. A considerable share of its volume is attributable to the intranets of large organisations and to stored measurement data that is no longer needed. Accessing the Deep Web requires special tools; instead of HTTP, protocols such as IRC, Gopher and FTP are primarily used.
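The protocol distinction above is one reason such content stays invisible: a crawler that only follows http/https links never reaches resources behind other schemes. A minimal Python sketch (the URL list and scheme set below are purely illustrative, not drawn from any real crawler):

```python
from urllib.parse import urlparse

# Schemes the article names as common on the Deep Web
# (illustrative assumption; real crawlers have more nuanced rules).
NON_WEB_SCHEMES = {"ftp", "gopher", "irc"}

def is_deep(url: str) -> bool:
    """True if the URL uses a scheme that ordinary
    search-engine crawlers typically do not index."""
    return urlparse(url).scheme.lower() in NON_WEB_SCHEMES

# Hypothetical example URLs
urls = [
    "https://example.org/page",
    "ftp://files.example.org/archive.zip",
    "gopher://gopher.example.org/1/menu",
]
for u in urls:
    print(u, "-> deep" if is_deep(u) else "-> surface")
```

This only illustrates the scheme barrier; in practice, much Deep Web content also hides behind logins, paywalls or unlinked pages that a crawler cannot discover even over HTTP.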

It is precisely this inaccessibility to standard browsers that attracts malicious actors. Certain parts of the Deep Web are therefore regarded as safe havens for criminal marketplaces, where drugs, malware, pirated copies, and stolen passwords and identities are traded under the cover of anonymity.

According to a study by Trend Micro, almost half of the observed domains use English, although Russian leads in terms of the number of URLs. As long as the Deep Web guarantees near-absolute anonymity, it is likely to keep both law-enforcement agencies and cybercriminals very busy in the future. (Source: Trend Micro/bs)
