Elon Musk’s Twitter bot evidence questioned by researchers
Leading bot researchers have questioned the veracity of papers Elon Musk’s legal team provided in his legal battle with Twitter.
In a countersuit against Twitter, Mr. Musk used Botometer, a website that tracks spam and fake accounts.
Mr. Musk’s team used this method to determine that 33% of the social media platform’s “visible accounts” were “false or spam.”
The statistic, according to Kaicheng Yang, who maintains Botometer, “doesn’t imply anything.”
Mr. Yang expressed doubts about the technique employed by Mr. Musk’s team.
In a court hearing set for October in Delaware, a judge will rule on whether Mr. Musk must complete his purchase of Twitter.
In July, Mr. Musk announced that he was no longer interested in buying the business since he could not confirm the platform’s user base.
Since then, the world’s richest man has repeatedly maintained that there may be considerably more fake and spam accounts than Twitter says.
In his counterclaim, made public on August 5, he claimed that his team had discovered that one-third of visible Twitter accounts were fake, and that at least 10% of daily active users are bots.
Botometer is a program that gives an account a “score” out of five based on various criteria, including the frequency and timing of an account’s tweets, as well as the content of the tweets.
A score of 0 means that a Twitter account is most likely a human, while a score of 5 means it is most likely a bot.
Researchers caution that the program cannot definitively determine whether a particular account is a bot. “You need to pick a threshold to cut the score to determine the prevalence [of bots],” Mr. Yang said.
Mr. Yang says Mr. Musk’s countersuit is silent on the threshold used to arrive at its 33% figure.
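Mr. Yang’s point about thresholds can be illustrated with a short sketch. The scores below are hypothetical, not real Botometer output, and `bot_prevalence` is an invented helper for illustration: the same set of scores yields very different prevalence estimates depending on where the cut-off is drawn.

```python
# Hypothetical account scores on Botometer's 0-5 scale
# (0 = most likely human, 5 = most likely bot). Not real data.
scores = [0.2, 0.8, 1.1, 2.4, 2.9, 3.5, 4.1, 4.6, 4.9, 0.5]

def bot_prevalence(scores, threshold):
    """Fraction of accounts whose score meets or exceeds the threshold."""
    flagged = sum(1 for s in scores if s >= threshold)
    return flagged / len(scores)

# The estimated prevalence swings widely with the chosen cut-off:
print(bot_prevalence(scores, 4.0))  # strict threshold -> 0.3
print(bot_prevalence(scores, 2.0))  # loose threshold -> 0.6
```

Without disclosing which threshold was applied, a headline figure such as “33% of accounts are bots” cannot be checked or reproduced.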
“The countersuit’s lack of clarity leaves Mr. Musk’s team free to do whatever they see fit. Therefore, the number has no significance for me,” he explained.
The comments cast doubt on the methodology used by Elon Musk’s team to determine the number of bots on the site.
“The Musk Parties’ research has been hindered due to the minimal data that Twitter has supplied and the short time to analyze this incomplete data,” Mr. Musk’s legal team claims in the countersuit.
The algorithm uses machine learning and weighs factors such as tweet regularity, language variety, and other telltale signals of automated behavior, according to Clayton Davis, a data scientist who worked on the project.
“People behave in a certain way. If an account consistently behaves in ways that differ from how humans behave, it may not be human,” he says.
Only Twitter has the full data
The developers of Botometer had previously made an effort to estimate the number of spam and fraudulent accounts on Twitter.
But according to Mr. Davis, the assessment was strongly qualified and depended on scant information.
According to Mr. Davis, Twitter is the only entity with a “God’s-eye view” of its own platform. Twitter relies mainly on human analysis to determine how many accounts are bogus. Each quarter, it says, it randomly selects thousands of accounts and checks them for bot activity.
Unlike publicly accessible bot-research tools, Twitter says it can also use private information, such as IP addresses, phone numbers, and geolocation, to determine whether an account is real or phony.
It cites the example of a Twitter account with no photo or location, which raises red flags for a public bot detector; the account’s owner, though, may simply have strong feelings about security.
No single way to count fake accounts
By some definitions, there are more spam and phony accounts on Twitter than the company reports, according to Michael Kearney, the creator of Tweet Bot or Not, a different free tool for evaluating bots.
Robots tweet more
The percentage of bots might range from 1% to 20%, he adds, depending on how they are defined.
“A rigorous definition, in my opinion, would give a low number. But bot accounts do exist and tweet in far larger volumes,” he explained.
Is a Twitter account run by a human who sends automatic tweets considered a bot?
While accounts like weather bots are openly labeled as automated on Twitter, fake accounts are frequently controlled by individuals.
Some bot specialists argue that Twitter has a strong interest in undercounting phony accounts.
Twitter has goals that are slightly at odds with one another, Mr. Davis says.
“They are concerned about credibility, on the one hand. They want people to believe that the Twitter conversations are genuine. However, they also value having a large user base.”
The majority of Twitter’s income comes from advertising, and the larger its user base, the more it can charge advertisers.
According to Mr. Kearney, Twitter should have developed more effective methods for identifying bogus accounts.
He claims that Twitter “may not be using all the technologies they can to provide the clearest response.”
In its countersuit, Elon Musk’s legal team claims that Twitter should use more advanced technologies to estimate bot activity.
Twitter, he claims, has not provided him with enough useful information to allow him to independently verify bot estimations.
However, Mr. Yang thinks Twitter’s verification process is rather reliable and claims that if he had access to its data, he “would definitely do something similar to Twitter.”
But he also agreed that a clearer definition of what counts as a bot is needed.
To come to an agreement on a recognized definition of a bot, he argues, “it’s vital to have individuals from both sides sit down together and look over the accounts one by one.”