Artificial Intelligence (AI) Voice Impersonation On Steroids – Means To Minimize Risks

Earlier this year, scammers used Artificial Intelligence to impersonate the voice of a German energy company’s CEO, hoodwinking the CEO of its UK subsidiary into sending $243,000 to a Hungarian vendor. The cash disappeared into suspicious bank accounts in Mexico and elsewhere, according to the company’s insurer, Euler Hermes.

Artificial Intelligence (AI) Voice Impersonation (aka ‘vishing’ or voice phishing) fraud has been a real concern for banks and other types of organizations for years. Whilst vishing currently accounts for an estimated 1% of phishing attacks, its use in cybercrime is thought to have grown by more than 350% since 2013.


Artificial Intelligence (AI) Voice Impersonation On Steroids

AI increases the risks of voice impersonation. Not only is AI voice software freely available, but convincing impersonations can also be hatched in little time. A recent Israeli National Cyber Directorate study found that software now exists that can accurately mimic someone’s voice after listening to it for 20 minutes. Artificial voice company Lyrebird promises that anyone can create a digital voice that sounds like themselves, or anyone else, in only a few minutes.

The more convincing the impersonation, the greater the sums those duped may be induced to hand over. Cybersecurity firm Symantec says it knows of at least three cases of executives’ voices being impersonated for fraudulent ends, with losses in one case totaling millions of dollars.


Reputation Is As Big A Threat As Fraud

Voice impersonation has many uses beyond fraud and with AI voice software now freely available online, convincing fake news stories, hoaxes and reputational attacks are eminently possible.

Canadian psychology professor Jordan Peterson recently found himself at the mercy of a website where anyone could generate clips of themselves saying whatever they wanted in his voice. Most deepfakes poke fun at their subjects or, as in the case of Mark Zuckerberg, seek to expose hypocrisy. But much of the content generated by the Peterson website was vulgar and abusive, forcing him to threaten legal action.

Fortunately, the relatively limited nature of current AI audio technologies has meant the numbers have so far been small and the damage limited. But these technologies are improving fast. Euler Hermes notes that the AI software used to defraud the German energy company was able to effectively mimic the CEO’s voice, including his tone, punctuation and German accent.


Potential Artificial Intelligence (AI) Voice Impersonation Reputation Threats

It is surely only a matter of time before we see more regular instances of voice impersonation hitting – directly or indirectly – the reputations of companies, governments and other organizations. Scenarios might include:

  • A fake CEO audio message to employees regarding the new company strategy is ‘leaked’ to the outside world, allegedly by a short seller.
  • The voice of a well-known national politician is used to manipulate a senior director into discussing allegations of corporate fraud.
  • A fake voice recording of two executive board directors discussing making sexual innuendos about a colleague is used to blackmail the company.
  • An outsider gains entrance to a secured office by impersonating the voice of a company employee.


How To Mitigate The Reputational Risks Of Artificial Intelligence (AI) Deepfakes

Incidents with a reputational dimension can be difficult to anticipate, and even harder to manage. AI complicates matters considerably. Whilst the risk of AI-fueled voice attacks may not yet be a high priority, here are five things cyber and security professionals can do to mitigate the problem:

  • Work with your risk management, communications, corporate/public affairs and other relevant teams to identify and assess actual and potential security, financial, reputational and other relevant vulnerabilities.
  • Educate your people, especially those in the public eye, to watch out for and recognize deepfake videos and voice impersonations, and make sure they understand what to do when they see or experience something unusual.
  • Scan regularly for suspicious video and audio files and sites across the internet, social media and other relevant third-party platforms and channels.
  • Be prepared to respond quickly and appropriately to any incident which might impact your reputation. Specifically, make sure your cyber and communications plans are relevant and up to date.
  • Keep abreast of government and technology industry initiatives to combat the scourge of deepfakes, especially those aiming to improve detection and verification.
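The scanning step above can be sketched as a simple triage filter over discovered links. This is a minimal illustration, not a production monitor: the URL list, the watched executive names and the file extensions below are all hypothetical placeholders, and a real system would pull candidate links from web and social-media monitoring feeds before any manual deepfake review.

```python
# Minimal sketch: triage discovered URLs for possible impersonation content.
# The extensions, URLs and watched names are illustrative placeholders.
AUDIO_VIDEO_EXTS = (".mp3", ".wav", ".m4a", ".mp4", ".webm")

def flag_suspicious_media(urls, watch_names):
    """Return URLs that point at audio/video files and mention a watched name."""
    flagged = []
    for url in urls:
        lowered = url.lower()
        if lowered.endswith(AUDIO_VIDEO_EXTS) and any(
            name.lower() in lowered for name in watch_names
        ):
            flagged.append(url)
    return flagged

if __name__ == "__main__":
    discovered = [
        "https://example.com/clips/jane-doe-ceo-statement.mp3",
        "https://example.com/blog/quarterly-results.html",
        "https://example.com/media/board-meeting-leak.mp4",
    ]
    # Flags only the first URL: a media file mentioning the watched name.
    print(flag_suspicious_media(discovered, ["jane-doe"]))
```

Anything flagged would still need human review; the point is to shrink the haystack so that suspect clips surface quickly rather than circulating unnoticed.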

Artificial Intelligence may have been with us for decades, but the risks of malevolent voice impersonation on steroids and other types of deepfakes are only starting to become apparent. Every organization would be wise to consider now what these may mean for its name and image before today’s trickle turns into an avalanche.
