Researchers at TU Wien (Vienna University of Technology, Austria) achieved a stunning – and at the same time scary – 76.1% click rate on potentially malicious links in IRC conversations using an automated social-engineering bot dubbed “HoneyBot”.
Their new approach to automated social engineering (“ASE”) does not rely on artificial conversations generated by an AI; instead, the bot relays messages between two humans, effectively sidestepping the Turing Test, in which humans assess whether they can tell that they are talking to a computer program rather than a person. Previous generations of such bots used AIML (“Artificial Intelligence Markup Language”) to engage in conversations, with far less success: users were able to spot 80% of the bots after exchanging only three messages with them.
HoneyBot acts as a man-in-the-middle and relays messages between two unsuspecting users, who appear to each other to be having a perfectly normal conversation:
bot → Alice: Hi!
Alice → bot: hello
bot → Carl: hello
Carl → bot: hi there, how are you?
bot → Alice: hi there, how are you?
Alice → bot: …
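The relay principle is simple enough to sketch in a few lines of Python (the class and method names here are illustrative, not from the paper):

```python
# Minimal sketch of the relay principle: messages from Alice are forwarded
# to Carl and vice versa, so each victim believes the bot's nick is a human.

class RelayBot:
    def __init__(self, user_a, user_b):
        # Map each victim to the peer their messages are forwarded to.
        self.peer = {user_a: user_b, user_b: user_a}
        self.outbox = []  # (recipient, message) pairs the bot "sends"

    def on_message(self, sender, message):
        # Forward the message unchanged to the other victim.
        recipient = self.peer[sender]
        self.outbox.append((recipient, message))

bot = RelayBot("Alice", "Carl")
bot.on_message("Alice", "hello")
bot.on_message("Carl", "hi there, how are you?")
print(bot.outbox)
# [('Carl', 'hello'), ('Alice', 'hi there, how are you?')]
```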
But that’s not all – the bot can influence the ongoing conversation by “dropping, inserting, or modifying messages”, and the researchers assert that “if links (or questions) are inserted into such a conversation, they will seem to originate from a human user”, making the click probability “higher than in artificial conversation approaches”.
The bot is surprisingly sophisticated: it determines the gender of the persons it is talking to and adjusts relayed messages on the fly, so “Hello, i’m a guy” becomes “Hello, i’m a lady” when its gender-detection algorithm determines that the conversational partner is likely male. Link insertion is similarly refined – instead of just dumping a link into the conversation and hoping for a click, the bot has three strategies:
- Random link: after the two users have exchanged a minimum number of messages, a link is sent along with a generic message
- Keyword-triggered link: the bot replies with a link to keywords such as “ASL?”
- Replacement link: messages that already contain links to sites such as YouTube have those links swapped for the bot’s own. These look most natural, since the surrounding message was composed by a human. The bot can also inject probing questions to steer the conversation in a certain direction.
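The three strategies could be sketched roughly like this – the keyword table, message threshold, and URLs below are all illustrative assumptions, not the paper’s actual values:

```python
import re

# Sketch of the three link-insertion strategies. LINK, KEYWORD_REPLIES and
# MIN_MESSAGES are made-up placeholders, not taken from the research paper.

LINK = "http://tinyurl.example/abc"
KEYWORD_REPLIES = {"asl?": "check my profile: " + LINK}
MIN_MESSAGES = 10
URL_RE = re.compile(r"https?://\S+")

def process(message, message_count):
    lowered = message.lower().strip()
    # Strategy 2: reply with a link to known keywords such as "ASL?".
    if lowered in KEYWORD_REPLIES:
        return KEYWORD_REPLIES[lowered]
    # Strategy 3: replace links already present in a human-written message,
    # which looks most natural since the text around them is genuine.
    if URL_RE.search(message):
        return URL_RE.sub(LINK, message)
    # Strategy 1: after enough messages, append a generic message plus link.
    if message_count >= MIN_MESSAGES:
        return message + " btw check this out: " + LINK
    return message

print(process("look: http://youtube.com/watch?v=x", 3))
# look: http://tinyurl.example/abc
```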
Trying to be as stealthy and sneaky as possible, the bot never contacts users with administrative privileges, though it does reply to private messages from them – without ever inserting links or questions into those conversations. Additionally, a random delay is used when “typing” messages to make detection even harder.
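The randomized typing delay might look something like this (the typing speed and jitter range are assumptions, not values from the paper):

```python
import random

# Minimal sketch of the stealth delay: before relaying a message, wait
# roughly as long as a human would need to type it, plus random jitter.

CHARS_PER_SECOND = 7.0  # assumed average human typing speed

def typing_delay(message):
    base = len(message) / CHARS_PER_SECOND
    # Random jitter keeps the delays from looking mechanical.
    return base + random.uniform(0.5, 2.0)

# time.sleep(typing_delay(message)) would run before each relayed message
```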
Aware that what they have created is a can of worms if used unethically, the researchers made sure that personally identifiable data such as email and IM addresses are never relayed, and links sent in conversations are filtered out if they are not going to be replaced by HoneyBot.
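A rough sketch of that safeguard – the regex patterns here are illustrative, not the ones the researchers used:

```python
import re

# Scrub relayed messages: email addresses are removed, and any link the
# bot is not going to replace is dropped as well. Patterns are assumptions.

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
URL_RE = re.compile(r"https?://\S+")

def sanitize(message):
    message = EMAIL_RE.sub("[removed]", message)
    message = URL_RE.sub("[removed]", message)
    return message

print(sanitize("mail me at bob@example.com or see http://example.com/x"))
# mail me at [removed] or see [removed]
```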
The channels monitored by the bot were two dating channels and one generic chat channel; neither the channels nor the network are named in the research paper.
On the question of ethics, the researchers conclude that they are well within the guidelines set forth by the IRB (Institutional Review Board), based on similar studies, and they also got a nod from the university’s legal department. They chose not to inform users before the experiment, since doing so would most likely have influenced the results – “users that are aware of participating in a study are likely to be more cautious than usual”. They say they “carried out the study only with users that responded to our messages and thereby accepted talking to the bot (i.e., stranger)” and emphasize that there were no “ongoing conversations intruded” by them. They also note that all collected data, “although largely anonymous”, was deleted after the “evaluation phase”.
With three separate bots – a “periodic spam” bot, a private-message spam bot, and a keyword spam bot – they evaluated the likelihood of users clicking on links; the results can be seen in the table below:
Altogether, only 1.7% of the online users could be enticed into clicking a link by these three “classic” bot types, and the bot only got to post 8 links in the chat channel before it was banned by a channel op.
The longest conversation HoneyBot relayed lasted a staggering two and a half hours, with 325 messages transmitted, and it achieved a median chat time of “longer than 30 minutes”.
Of the three types of URLs the bot sent – a raw IP address, a TinyURL, and a MySpace link – the TinyURL links were clicked most often, which the researchers rightly call counter-intuitive, since “TinyURLs can hide arbitrary URLs whereas a MySpace link always leads to a profile”.
Furthermore, the MySpace links the bot sent out had to be reassembled by the user, because a space character was inserted into the URL; the researchers were “surprised that this reassembly has happened at all”.
It should not go unmentioned that the same type of research was conducted on Facebook, where the researchers created one male and one female profile and tried to befriend users of the opposite sex. Where a friendship was established and a conversation bootstrapped, they then tried to get the new friends to click on the same links as the IRC bot. Even though 4 out of 10 people clicked, the researchers believe the attack could have been far more successful had they gone as far as cloning profiles, befriending users from those clones, and relaying messages between the cloned and authentic profiles.
As the Facebook example shows, this kind of attack is not limited to IRC but can be adapted to a whole host of so-called social-networking sites and systems.
Mitigating these social-engineering threats is not easy, and there is no hard and fast measure that can prevent all of them. Raising awareness, however, is one way to make users more alert, and it is what the researchers tried to achieve: “We hope that this paper will contribute to this process.”
In Soviet Russia Vienna bots social engineer you!