Artificial Intelligence (AI) Voice Impersonation On Steroids – Means To Minimize Risks

Artificial Intelligence (AI) Voice Impersonation On Steroids: Earlier this year, scammers used Artificial Intelligence to impersonate the voice of a German energy company’s CEO, hoodwinking the CEO of its UK subsidiary into sending $243,000 to a Hungarian vendor. The cash disappeared into suspicious bank accounts in Mexico and elsewhere, according to Euler Hermes, the corporation’s insurer.

Artificial Intelligence (AI) Voice Impersonation (aka ‘vishing’ or voice phishing) fraud has been a real concern for banks and other organizations for years. Whilst vishing is currently estimated to account for a mere 1% of phishing attacks, its use for cybercrime is estimated to have increased by over 350% since 2013.

Artificial Intelligence (AI) Voice Impersonation On Steroids

AI increases the risks of voice impersonation. Not only is AI voice software freely available, but convincing impersonations can also be created in little time. A recent Israeli National Cyber Directorate study found that software now exists which can accurately mimic someone’s voice after listening to it for 20 minutes. Artificial voice company Lyrebird promises that anyone can create a digital voice that sounds like you, or anyone else, in only a few minutes.
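
To illustrate how low the barrier to entry has become, here is a minimal sketch of voice cloning using the open-source Coqui TTS library. Coqui is not one of the tools named above; it stands in here, as an assumption, for the whole class of freely available voice software, and the file names are hypothetical.

```python
# A minimal sketch of voice cloning with the open-source Coqui TTS library
# (pip install TTS). "reference.wav" is a hypothetical short recording of
# the target speaker; the output mimics that voice for arbitrary text.
from TTS.api import TTS

# Load a multilingual voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize arbitrary text in the reference speaker's voice.
tts.tts_to_file(
    text="Please wire the payment to the new vendor account today.",
    speaker_wav="reference.wav",
    language="en",
    file_path="cloned_output.wav",
)
```

The specific library matters less than the economics it represents: the capability the studies above describe is now packaged behind a handful of lines of code and a short audio sample.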

The more convincing the impersonation, the greater the sums those duped may be induced to hand over. Cybersecurity firm Symantec says it knows of at least three cases of executives’ voices being impersonated for fraudulent ends, with losses in one case totaling millions of dollars.

Reputation Is As Big A Threat As Fraud

Voice impersonation has many uses beyond fraud, and with AI voice software now freely available online, convincing fake news stories, hoaxes and reputational attacks are eminently possible.

Canadian psychology professor Jordan Peterson recently found himself at the mercy of a website where anyone could generate clips of whatever they wanted, spoken in his voice. Most deepfakes poke fun at their subjects or, as in the case of Mark Zuckerberg, seek to expose hypocrisy. But much of the content generated on the Peterson site was vulgar and abusive, forcing him to threaten legal action.

Fortunately, the relatively limited nature of current AI audio technologies has meant the numbers have so far been small and the damage limited. But these technologies are improving fast. Euler Hermes notes that the AI software used to defraud the German energy company was able to effectively mimic not only the CEO’s voice but also his tone, punctuation and German accent.

Potential Artificial Intelligence (AI) Voice Impersonation Reputation Threats

It is surely only a matter of time before we see more regular instances of voice impersonation hitting – directly or indirectly – the reputations of companies, governments and other organizations. Scenarios might include:

  • A fake CEO audio message to employees regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller.
  • The voice of a well-known national politician is used to manipulate a senior director into discussing allegations of corporate fraud.
  • A fake voice recording of two executive board directors discussing making sexual innuendos about a colleague is used to blackmail the company.
  • An outsider gains entry to a secured office by impersonating the voice of a company employee.

How To Mitigate The Reputational Risks Of Artificial Intelligence (AI) Deepfakes

Incidents with a reputational dimension can be difficult to anticipate, and even harder to manage. AI complicates matters considerably. Whilst the risk of AI-fueled voice attacks may not be a high priority, here are five things cyber and security professionals can do to mitigate the problem:

  • Work with your risk management, communications, corporate/public affairs and other relevant teams to identify and assess actual and potential security, financial, reputational and other relevant vulnerabilities.
  • Educate your people, especially those in the public eye, to watch out for and recognize deepfake videos and voice impersonations, and make sure they understand what to do when they see or experience something unusual.
  • Scan regularly for suspicious video and audio files and sites across the internet, social media and other relevant third-party platforms and channels; the sketch after this list shows one simple way to triage collected clips.
  • Be prepared to respond quickly and appropriately to any incident which might impact your reputation. Specifically, make sure your cyber and communications plans are relevant and up to date.
  • Keep abreast of government and technology industry initiatives to combat the scourge of deepfakes, especially those aiming to improve detection and verification.
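
As one way to operationalize the scanning and verification points above, the sketch below triages collected audio clips against a registry of hashes of an organization’s official recordings, flagging anything unrecognized for human review. The registry file, directory names and overall workflow are assumptions for illustration, not a prescribed tool.

```python
# A minimal sketch, assuming officially released audio is hashed into a
# registry so newly collected clips can be triaged. File and directory
# names are hypothetical; this is illustration, not production tooling.
import hashlib
import json
from pathlib import Path

REGISTRY = Path("official_audio_registry.json")  # hypothetical registry file

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_official(directory: Path) -> None:
    """Hash every official recording and save the registry to disk."""
    known = {sha256_of(p): p.name for p in directory.glob("*.wav")}
    REGISTRY.write_text(json.dumps(known, indent=2))

def triage_collected(directory: Path) -> list[Path]:
    """Return collected clips whose hashes are not in the registry."""
    known = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    return [p for p in directory.glob("*.wav") if sha256_of(p) not in known]

if __name__ == "__main__":
    register_official(Path("official_audio"))    # hypothetical directory
    for suspect in triage_collected(Path("collected_audio")):
        print(f"Unrecognized clip, send for review: {suspect}")
```

Note the limits of this approach: a hash match only proves a clip is an unmodified official release, so anything flagged still needs human or model-based review; that is exactly why the detection and verification initiatives in the final point matter.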

Artificial Intelligence may have been with us for decades, but the risks of malevolent Voice Impersonation on steroids and other types of deepfakes are only starting to become apparent. Every organization would be wise to consider now what these may mean for their name and image before today’s trickle turns into an avalanche.
