DEEPFAKE: The New Business Threat

[Image: a female face against a digital background, with lines and dots marking the facial features to be captured.]

They say that imitation is the sincerest form of flattery. In the modern digital world, however, it is becoming nearly impossible to distinguish between what is real and what is fake.

This subject came up in a BFC staff meeting when we discovered that our newsletter articles, which are also posted on our blog, were being co-opted by another company and reposted on its webpage to promote its accounts receivable factoring business. Unlike BFC’s website, however, that company’s webpage is very generic, with no information about where the company is located, who is involved, or how its factoring program works.

With website building programs such as WordPress, Wix, Gator, Squarespace, and many more, it is a snap for anyone to build their own website. There are no requirements for a business license, incorporation, or legitimate business address. Simply think up a catchy name, or a name close to the company from which you are trying to steal business, and away you go.

Even as hacking and phishing continue to get more sophisticated (and continue to work), using artificial intelligence (AI) in combination with social hacking opens an entirely new Pandora’s box. The number of possible exploits is limited only by the imagination of the nefarious actor. Some of the more obvious scams include:

  • False claims of malfeasance, damaging a product or company’s reputation
  • Endorsements that are not real (and you thought fake written reviews were harmful)
  • Video-backed HR complaints about a co-worker or a boss
  • Insurance fraud, supported by “video proof”
  • False news about the company’s owners, founders, leaders, etc.
  • Onboarding processes subverted and fraudulent accounts created
  • Identity theft, using video to convince someone to alter critical personal data
  • Diversion of shipments
  • Orders for unwanted materials
  • Payments and/or funds transfer fraudulently authorized
  • Blackmail based on the threat to release a damaging video

Voice Fraud

Official-looking emails that are actually fake have been a significant problem for nearly as long as email has existed. In the early days they were reasonably easy to spot, but graphics and spoofing tools have improved to the point where it takes a keen eye to detect a fake request. To protect against an employee transferring money at the request of a fake email, many companies have instituted a requirement of secondary authentication, usually a phone call or voice verification: for example, a call from the CEO’s number, recognized by Caller ID, verbally authorizing the transfer.
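A secondary check can be made resistant to both Caller ID spoofing and voice cloning by tying it to a shared secret rather than to the sound of a voice or a phone number. The sketch below is purely illustrative (it is not BFC’s procedure, and the function names are hypothetical): the employee reads a random one-time challenge to the executive over a call the employee placed themselves, and the executive returns a keyed code that only someone holding the pre-shared secret could compute.

```python
import hashlib
import hmac
import secrets

# Illustrative challenge/response check. A cloned voice or spoofed
# Caller ID cannot produce a valid response without the shared secret.

def make_challenge() -> str:
    # The employee generates a random one-time challenge.
    return secrets.token_hex(8)

def sign_challenge(shared_secret: bytes, challenge: str) -> str:
    # The executive's side computes an HMAC of the challenge.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify_response(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison of the expected and received codes.
    expected = sign_challenge(shared_secret, challenge)
    return hmac.compare_digest(expected, response)

# Example flow (the secret would be exchanged in person, in advance):
secret = b"pre-shared key exchanged in person"
challenge = make_challenge()
response = sign_challenge(secret, challenge)  # computed on the executive's side
print(verify_response(secret, challenge, response))  # a genuine response verifies
```

The point of the design is that the verification no longer depends on anything a deepfake can imitate: the voice and the phone number play no role, only possession of the secret does.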

However, with today’s technology, a local phone number can be spoofed from anywhere in the world. AI software can also listen to recordings of a person and duplicate their speech patterns, voice inflections, and exact tone. So, suppose your CEO puts out a video that talks about the company products, a financial report, or perhaps welcomes new employees. In that case, their voice can be flawlessly duplicated by a computer and made to say anything the scammer wants that person to say.

Video Fraud

Thanks to techniques developed by Hollywood, video of a person can be manipulated and combined with altered audio to make the CEO or PIO appear to say anything the scammer wants them to say. These videos can be used to announce fake negative quarterly earnings, negative news about products or partnerships, or any number of things that might make the company’s stock price fall, anger customers, or ruin the company’s reputation.

COVID-19 Opens the Door

As we begin the second year of the COVID-19 pandemic, many businesses have developed successful work-from-home solutions for their employees. Normal business processes and communication are performed online, with employees communicating, collaborating, and exchanging information digitally from home computers over Wi-Fi and internet routers that often are not as secure as a company system managed by trained IT professionals. This work-from-home paradigm gives deepfake scams many more openings.

How easy is it to fake?

The term deepfakes comes from the Reddit username of the person or persons who in 2017 released a series of pornographic clips modified using machine learning to include the faces of Hollywood actresses. Their code was released online, and various forms of AI video and image-generation technology are now available to any interested amateur.

Rosebud AI specializes in making the kind of glossy images used in e-commerce or marketing. Last year the company released a collection of 25,000 modeling photos of people that never existed, along with tools that can swap synthetic faces into any image. More recently, it launched a service that can put clothes photographed on mannequins onto virtual but real-looking models.

Rosebud is just one of nearly a dozen companies working to advance and perfect deepfake technology for commercial use.

Recently, advertising giant WPP sent corporate training videos to tens of thousands of employees worldwide. In each video, a presenter spoke in the recipient’s language and addressed them by name while explaining some basic AI concepts. The videos themselves were powerful demonstrations of what AI can do: the presenter, and the words they spoke, were entirely computer generated, synthesized so flawlessly by software that it was impossible to tell the presenter was not a real person.

Going Viral

Part of the problem with deepfakes is how easily people accept what they see and pass it along to others. Social media has made it possible for a deepfake to go viral. On February 18, NASA’s Mars Perseverance rover made its historic landing. On February 20, a video called “Mars Fascinating” went viral with a 360° view of the surface of Mars, complete with audio of the wind blowing across the surface. By Monday, the video had more than 25.6 million views, had been retweeted more than 41,300 times, and had more than 8,300 quote tweets. The problem is—the video was fake. In actuality, it was a series of photos taken by the rover Curiosity, stitched together to appear to be a video. The sound was data from a seismometer on the InSight lander, converted to audio.

As deepfakes and other offshoots of AI advance, businesses will need an even more agile and holistic approach to security and detection to protect devices, apps, data, and cloud services. This article only scratches the surface of deepfake technology, and we encourage everyone to read more about its effects on business, politics, and society.

There is nothing fake about Business Finance Corporation’s customer service or its ability to turn your bulging accounts receivable side of the balance sheet into ready cash. To learn more about our services, go to or call 702-947-3800.