On November 22, during the ATS conference, AZ Frame hosted a thematic table titled “Social engineering AD 2023 from the perspective of tools and patterns of action.” The discussion was led by our invited cybersecurity expert, Jakub Staśkiewicz (opensecurity.ai), and together we tried to answer the following questions:
- What do cybercriminals achieve today – what are their most effective operating patterns and tools?
- What will the widespread use of AI-powered fake users in attacks bring?
What do criminals manage to achieve today?
First, some statistics that have remained unchanged for many years. According to the latest CERT Polska report, “The security landscape of the Polish Internet in 2022”, computer fraud accounts for 88% of all security incidents, and 64% of those incidents are phishing campaigns.
Based on our expert’s observations, another group of frauds also deserves attention, including:
- The so-called “banker” and “policeman” frauds, in which criminals impersonate bank employees or police cybercrime units during phone calls
- Fake investment platforms guaranteeing above-average rates of return
- Romance scams – defrauding people with whom a relationship of trust has been built
This type of fraud began to gain popularity around the time when banks were required to implement two-factor authentication mechanisms, most likely because it has become somewhat harder to rob someone with only the bank login and password obtained in a phishing attack.
Criminals have therefore turned to new scenarios. The fraud techniques mentioned above share a common element: they require direct interaction between the thief and the victim, which costs the criminal time. So although these activities are lucrative, they do not scale as easily as automated attacks – and this is exactly where new AI-based tools may pose a serious threat.
AI crimes are already happening
Not yet on a mass scale, but computer fraud using artificial intelligence is already being reported. Here are some examples:
- A fraud of 243,000 dollars carried out using a voice recording generated with deepfake technology (a fake recording of the company CEO’s voice)
- Virtual kidnapping – an attempt to extort a ransom using a deepfake-generated video or voice recording of an allegedly kidnapped child (a case of virtual kidnapping described by CNN)
- Still a bit clumsy, but from our own backyard – a deepfake of former president Aleksander Kwaśniewski encouraging very profitable investments.
What should we expect in the near future?
The previously described scenarios based on direct interaction between the criminal and the victim are quite profitable, and criminals are already starting to use AI technologies – so it can be expected that with their help they will soon be able to automate this type of attack and carry it out on a mass scale.
The problem is that ChatGPT-like tools will drastically increase the credibility and effectiveness of such campaigns, for at least several reasons:
- They will allow data to be analyzed in order to better select the target group
- They will make it possible to generate much higher-quality content (without the translation errors that currently raise some suspicion)
- They will make it possible to generate far more suggestive and convincing content, with language perfectly tailored to the recipient – their professional environment, social status, dialect, etc.
- They will allow criminals to automate their actions and replace themselves with agents in chat or telephone contacts
One of the more serious threats mentioned in the context of deepfake technologies is the possibility of using them for mass disinformation. Sensational statements put in the mouths of famous politicians can lead to mass panic, cause social unrest, and even paralyze the operation of government institutions. Many of the parties to the world’s conflicts will certainly not hesitate to take advantage of such an opportunity.
How to defend oneself against AI fraud?
For now, techniques – or even recommendations and good practices – in this area are quite scarce. In 2019, Microsoft, Facebook and Amazon launched the DFDC (Deepfake Detection Challenge) initiative, which aims to develop methods for recognizing deepfake-generated material. In 2020, the effectiveness of such tools (including those based on AI) was approximately 65%, which is definitely not enough to treat them as reliable.
Companies that track virtual kidnapping scams suggest the following preventive measures:
- Paying attention to what our loved ones are wearing when leaving home (so that we can later compare it with the recording of the alleged kidnapping).
- Sharing your location with other household members (e.g. via Google Maps) – this can also be useful in many other situations.
- Paying attention to details such as a slightly altered or unusually monotone voice.
- Treating as suspicious live conversations in which the other party seems to follow a specific script and does not answer our questions.
From the discussion at our table, we also learned that concern should be raised not only by deepfake recordings prepared in advance, but also by those generated live. In that case, full interaction with the person whose image is being used might be necessary (e.g. during an arranged videoconference), and one verification method is to ask the interlocutor to perform a specific action, such as putting a candy in their mouth – AI algorithms still do not cope well with generating such images (at least for now).
Moreover, as with other social engineering techniques, it is recommended to assess details carefully and to refrain from making hasty decisions based on emotions.
Social engineering attacks are extremely effective
Finally, it is worth pointing out that in the case of social engineering attacks, we should not wonder whether the employees of our companies are susceptible to them, but rather how susceptible they are. Our expert’s phishing tests show that the effectiveness of such an attack in an unprepared organization can reach 30-40%, and in some cases even 50%. Companies that believe they have eliminated this problem by conducting a single training session in the past are mistaken. Only well-planned employee education and regular security awareness training can reduce this risk to below 10%. That is still a lot, but from a security point of view it makes a significant difference: an average intruder who wants to extort a password from an employee must send at least 10 emails instead of 2 or 3, which gives more cautious recipients a much greater chance of detecting and reporting the incident.
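To illustrate the arithmetic behind this claim, here is a minimal Python sketch (not part of the original discussion; the per-recipient report rate of 15% is an assumed value chosen purely for illustration) that estimates how many emails an attacker needs for one successful phish and how likely it is that at least one recipient reports the campaign.

```python
# Minimal sketch of the phishing arithmetic discussed above.
# Success rates (30-50% untrained, <10% trained) come from the text;
# the per-recipient report rate of 15% is an assumed, illustrative value.

def emails_needed(success_rate: float) -> float:
    """Expected number of emails until one recipient falls for the phish."""
    return 1.0 / success_rate

def detection_probability(emails: float, report_rate: float) -> float:
    """Probability that at least one targeted recipient reports the email."""
    return 1.0 - (1.0 - report_rate) ** emails

for label, success_rate in [("untrained organization (~40%)", 0.40),
                            ("trained organization (<10%)", 0.10)]:
    n = emails_needed(success_rate)
    p = detection_probability(n, report_rate=0.15)
    print(f"{label}: ~{n:.1f} emails per stolen password, "
          f"chance of at least one report ~{p:.0%}")
```

Under these assumptions, the trained organization forces the attacker to send roughly four times as many emails, which more than doubles the chance that someone reports the campaign before it succeeds.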
We would like to thank everyone who decided to take part in our session. The topic of social engineering turned out to be extremely interesting, and the AZ Frame table was one of the most crowded during the entire event, gathering almost 30 guests in total!