Bot prevention

Bot prevention refers to the methods used by web services to prevent access by automated processes.

Types of bots

Studies suggest that over half of the traffic on the internet is bot activity, of which over half is further classified as 'bad bots'.[1]

Bots are used for various purposes online. Some bots are used passively for web scraping purposes, for example, to gather information from airlines about flight prices and destinations. Other bots, such as sneaker bots, help the bot operator acquire high-demand luxury goods; sometimes these are resold on the secondary market at higher prices, in what is commonly known as 'scalping'.[2][3][4]

Detection techniques and avoidance

Various fingerprinting and behavioural techniques are used to identify whether the client is a human user or a bot. In turn, bots use a range of techniques to avoid detection and appear like a human to the server.[2]

Browser fingerprinting techniques are the most common component in anti-bot protection systems. Data is usually collected through client-side JavaScript, which is then transmitted to the anti-bot service for analysis. The data collected includes results from JavaScript APIs (checking whether a given API is implemented and returns the results expected of a normal browser), rendering complex WebGL scenes, and using the Canvas API.[1][5] TLS fingerprinting techniques categorise the client by analysing the cipher suites it offers during the TLS handshake.[6] These fingerprints can be used to create whitelists/blacklists containing fingerprints of known browser stacks.[7] In 2017, Salesforce open-sourced its TLS fingerprinting library, JA3.[8] Between August and September 2018, Akamai observed a large increase in TLS tampering across its network, aimed at evading detection.[9][7]
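As a concrete illustration, the sketch below reimplements the JA3 computation in Python: the decimal values of five ClientHello fields are joined (dash-separated within a field, comma-separated between fields) and the resulting string is hashed with MD5, as documented by Salesforce's JA3 project. The field values in the usage example are illustrative rather than captured from a real handshake.

```python
import hashlib

def ja3_fingerprint(tls_version: int,
                    ciphers: list[int],
                    extensions: list[int],
                    curves: list[int],
                    point_formats: list[int]) -> str:
    """Compute a JA3-style fingerprint from ClientHello fields."""
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),        # offered cipher suites, in order
        "-".join(map(str, extensions)),     # extension types, in order
        "-".join(map(str, curves)),         # supported elliptic curves
        "-".join(map(str, point_formats)),  # EC point formats
    ]
    # Fields are comma-separated; the fingerprint is the MD5 of this string.
    ja3_string = ",".join(fields)           # e.g. "771,4865-4866,0-23,29,0"
    return hashlib.md5(ja3_string.encode("ascii")).hexdigest()

# Illustrative values only, not taken from a real browser's ClientHello.
print(ja3_fingerprint(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0]))
```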

Behaviour-based techniques are also utilised, although less commonly than fingerprinting techniques, and rely on the idea that bots behave differently to human visitors. A common behavioural approach is to analyse a client's mouse movements and determine if they are typical of a human.[1][10]
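As a minimal sketch of such a behavioural check, the Python function below flags a trajectory as bot-like when the path is almost perfectly straight and moves at near-constant speed. The heuristic and its cutoff values are hypothetical illustrations, not the logic of any particular product.

```python
import math

def looks_automated(points: list[tuple[float, float, float]],
                    straightness_cutoff: float = 0.99,
                    speed_cv_cutoff: float = 0.05) -> bool:
    """Flag a mouse trajectory as bot-like if it is almost perfectly
    straight and its speed is almost constant. `points` holds
    (x, y, timestamp) samples; both cutoffs are illustrative."""
    if len(points) < 3:
        return False  # too little data to judge

    # Straightness: ratio of endpoint distance to total path length
    # (1.0 means a perfectly straight line).
    path = sum(math.dist(points[i][:2], points[i + 1][:2])
               for i in range(len(points) - 1))
    direct = math.dist(points[0][:2], points[-1][:2])
    straightness = direct / path if path else 1.0

    # Coefficient of variation of per-segment speed (0 means constant speed).
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        if t1 > t0:
            speeds.append(math.dist((x0, y0), (x1, y1)) / (t1 - t0))
    if not speeds:
        return False
    mean = sum(speeds) / len(speeds)
    if mean == 0:
        return False  # pointer never moved; nothing to classify
    cv = math.sqrt(sum((s - mean) ** 2 for s in speeds) / len(speeds)) / mean

    return straightness > straightness_cutoff and cv < speed_cv_cutoff
```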

More traditional techniques such as CAPTCHAs are also often employed; however, they are generally considered ineffective at stopping bots while simultaneously being obtrusive to human visitors.[11]

The use of JavaScript can prevent some bots that rely on basic HTTP requests (such as those made with cURL), as these clients do not load the detection script and hence fail to progress.[1] A common method of bypassing many detection techniques is to use a headless browser to simulate a real web browser and execute the client-side JavaScript detection scripts.[2][1] A variety of headless browsers are used: some are purpose-built (such as PhantomJS), but mainstream browsers such as Google Chrome can also be run in headless mode through a driver. Selenium is a common web automation framework that simplifies controlling the headless browser.[5][1] Anti-bot detection systems attempt to identify methods specific to these headless browsers, or the lack of proper implementations of APIs that a regular web browser would provide.[1]
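The sketch below shows both sides of this in Python: Selenium driving Chrome in headless mode, and a check of navigator.webdriver, a property that the WebDriver specification requires to be true under automation and which detection scripts therefore commonly inspect. The URL is a placeholder.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")  # placeholder URL

    # Detection scripts can read this property themselves; here it is queried
    # through Selenium to show what such a script would observe.
    is_flagged = driver.execute_script("return navigator.webdriver === true;")
    print("navigator.webdriver:", is_flagged)
finally:
    driver.quit()
```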

The source code of these JavaScript files is typically obfuscated to make it harder to reverse engineer how the detection works.[5] Common techniques include renaming variables and functions to meaningless identifiers, encoding constant strings so that they do not appear verbatim in the source, and restructuring the control flow.[12]
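Although the detection scripts themselves are written in JavaScript, the Python sketch below illustrates two of these transformations on a toy function: the identifier is renamed to a meaningless name and a constant string is concealed behind Base64 decoding. The function names and the specific obfuscation are invented for illustration.

```python
import base64

# Readable form: the logic as an author would write it.
def has_webdriver_flag(navigator: dict) -> bool:
    return navigator.get("webdriver", False)

# Obfuscated equivalent: the identifier is renamed and the property name
# no longer appears verbatim in the source ("d2ViZHJpdmVy" is Base64 for
# "webdriver").
def _0x3f(a: dict) -> bool:
    return a.get(base64.b64decode(b"d2ViZHJpdmVy").decode(), False)

# Both forms behave identically.
assert has_webdriver_flag({"webdriver": True}) == _0x3f({"webdriver": True})
```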

Anti-bot protection services are offered by various internet companies, such as Cloudflare[13] and Akamai.[14][15]

Law

In the United States, the Better Online Tickets Sales Act (commonly known as the BOTS Act) was passed in 2016 to prevent some uses of bots in commerce.[16] A year later, the United Kingdom passed similar regulations in the Digital Economy Act 2017.[17][18] The effectiveness of these measures is disputed.[19]
