Overview
Like microorganisms in the human body, there are also good bots that serve useful or important functions. Web crawlers such as Googlebot index websites for search queries, chatbots answer questions and draft emails, and transactional bots verify your credit card details when you buy products online. Despite the prevalence of these useful bots, much bot activity on the web is malicious. Imperva found that “bad” bots accounted for 32 percent of all website traffic in 2023, while “good” bots accounted for 17.6 percent.
Malicious bots engage in mischief ranging from scraping information from websites without permission to outright criminal activity, including fraud and theft. Scalper bots snap up limited-availability items and resell them at a markup. Cybercriminals use sophisticated, adaptable bots to carry out large-scale schemes such as credit card fraud and gaining illegal access to consumer accounts.
Advertisers pay money to target ads at real human users, so the question of how many of X’s users are actually bots is important to the company’s business model. Harder to quantify is the experience of individual users. This author has had the experience of attracting hordes of apparently “fake” followers after setting up a new account on X. Those followers can be identified as likely bots only by circumstantial signals: the accounts were all created in 2023 and 2024, their usernames are strings of random numbers, they all have female profile photos, and they have never posted any comments.
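The signals described above amount to a simple heuristic screen. As a purely illustrative sketch (not an actual platform API; the account fields, username pattern, and thresholds are assumptions for this example), such a screen might look like this in Python:

```python
import re
from dataclasses import dataclass


@dataclass
class Account:
    # Hypothetical account fields; real platform APIs expose different data.
    username: str
    created_year: int
    post_count: int


def looks_like_bot(account: Account) -> bool:
    """Flag accounts matching the circumstantial signals described above.

    Heuristics only: each signal can occur on legitimate accounts,
    so this screens for likely bots rather than proving anything.
    """
    recently_created = account.created_year >= 2023
    # Username that ends in a long run of digits, e.g. "user84629173".
    numeric_username = bool(re.fullmatch(r"[A-Za-z]*\d{5,}", account.username))
    never_posted = account.post_count == 0
    # Require all signals together to reduce false positives.
    return recently_created and numeric_username and never_posted


# Example usage with made-up accounts
followers = [
    Account("user84629173", 2024, 0),  # matches all signals
    Account("jane_doe", 2015, 342),    # clearly not flagged
]
print([a.username for a in followers if looks_like_bot(a)])
```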
While the purpose of these unsolicited bot followers is unclear, many social media users, including companies, politicians, and entertainers, deliberately acquire fake or automated followers to inflate their perceived social influence and online engagement. Bot accounts have also become a significant force in political discourse, with political campaigns and foreign governments deploying armies of bots to manipulate online discussions and amplify favorable narratives.
The growing prevalence of bot users and machine-generated content has fueled debate about how fake the Internet really is. The “dead Internet theory,” which emerged in the late 2010s and has recently received renewed attention, posits that bots and AI have essentially taken over the Internet, turning it into an artificial, dehumanized space where most activity is driven by algorithms rather than by human communities.
Proponents of the dead Internet theory claim that much of the content, entertainment, information, and community we encounter online is machine-generated; for example, that many YouTube celebrities are, in fact, AI-generated videos, or “deepfakes.” While that is almost certainly not true, the increasing sophistication and rapid deployment of AI models make it easy to imagine an age in which the Internet is completely swamped by worthless computer-generated content.