Hacker, Researcher, and Security Advocate

August 2019


Deep Fakes in 2020

How artificial media could impact US Elections

Deep fakes, highly convincing artificial media produced by deep neural networks, have entered the political arena. In 2018, BuzzFeed published a short video on YouTube that appeared to feature President Barack Obama sharing some surprisingly candid viewpoints. In reality, the visual portions of the video were artificially generated, and the voice was an impression by comedian Jordan Peele. Comments across social media lamented how terrifying this new technology is.

In my presentation, “The Death of Trust: Exploring the Deep Fake Threat”, at BSides Vancouver Island, I discussed the many threats posed by deep fakes. Among them is the expected role that deep fake media will play in the 2020 US Presidential election. While the technology continues to advance, it still has some important limitations that may help mitigate its impact. Moreover, a considerable amount of research is being done on detection techniques, and so far that research boasts some impressive results.

How Deep Fakes are Created

Deep fake videos are created using the learning capabilities of deep neural networks called Generative Adversarial Networks (GANs). In a GAN, two neural networks are pitted against each other. A Generator network is responsible for creating video frames that appear real. A Discriminator network, in turn, attempts to determine whether each frame is “authentic” or not. In effect, the Generator repeatedly tries to trick the Discriminator into believing the images it creates are real.
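The adversarial loop described above can be sketched on a toy problem. This is a deliberately minimal illustration, not a real deep fake pipeline: the “data” are 1-D numbers instead of images, the Generator is a single affine map, and the Discriminator is logistic regression. The shapes, learning rate, and step count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from N(4, 1.25). The Generator's goal is to
# produce samples that look like they came from this distribution.
def real_batch(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a single affine map from noise z to a sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression, outputs P(sample is real).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: learn to separate real from fake ---
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    real = real_batch(batch)
    p_real = sigmoid(real @ d_w + d_b)
    p_fake = sigmoid(fake @ d_w + d_b)
    # Gradient ascent on binary cross-entropy: push p_real -> 1, p_fake -> 0.
    d_w += lr * (real.T @ (1 - p_real) - fake.T @ p_fake) / batch
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # --- Generator update: try to make the Discriminator say "real" ---
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    p_fake = sigmoid(fake @ d_w + d_b)
    # Chain rule through the Discriminator: push p_fake -> 1.
    grad_fake = (1 - p_fake) * d_w.T        # d objective / d fake sample
    g_w += lr * (z.T @ grad_fake) / batch
    g_b += lr * np.mean(grad_fake)

# After training, generated samples should cluster near the real mean (4.0),
# even though the Generator never sees the real data directly.
samples = rng.normal(size=(1000, 1)) @ g_w + g_b
```

The key point the sketch demonstrates is that the Generator improves only through the Discriminator's feedback, which is exactly the dynamic that makes full-scale GAN output so convincing.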

Depiction of a GAN
A simplified view of a Generative Adversarial Network (GAN)

Both networks are trained on a large set of still images of faces, often hundreds of thousands. Once trained, the GAN can be given a relatively small number of still images of the intended subject (for instance, President Obama). The Generator is also typically given a target video into which the subject’s face will be inserted. Frame by frame, the Generator creates new artificial frames, and the Discriminator decides whether each one belongs with the set of subject images. Each time the Discriminator rejects a frame, the Generator learns and refines its output. The end result is very convincing video content.
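The frame-by-frame process above can be sketched as a pipeline. Everything here is a hypothetical stand-in: `detect_face`, `generator`, and `blend_back` are placeholder functions operating on numpy arrays that play the role of grayscale frames, and the real components would be a face detector, a trained GAN Generator, and an image-compositing step.

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_face(frame):
    """Placeholder face detector: returns a fixed 48x48 crop region."""
    return frame[8:56, 8:56]

def generator(face_crop, subject_images):
    """Placeholder for a trained Generator: would synthesize the subject's
    face in the pose of face_crop. Here it just blends toward the mean of
    the subject images."""
    subject_mean = subject_images.mean(axis=0)
    return 0.5 * face_crop + 0.5 * subject_mean

def blend_back(frame, synthetic_face):
    """Composite the synthesized face back into the original frame."""
    out = frame.copy()
    out[8:56, 8:56] = synthetic_face
    return out

# A "target video" of 10 random 64x64 frames, plus a small set of
# "subject" face crops -- mirroring the small number of subject stills
# the text describes.
target_video = rng.random((10, 64, 64))
subject_images = rng.random((5, 48, 48))

# Frame by frame: crop the face, synthesize the subject's face in its
# place, and blend the result back into the frame.
fake_video = np.stack([
    blend_back(f, generator(detect_face(f), subject_images))
    for f in target_video
])
```

The structure, not the placeholder math, is the point: only the face region is replaced, which is also why detection research focuses on statistical mismatches between that region and the rest of the frame.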

The Political Threat

Deep fakes are concerning for the political process because they further the spread of disinformation. As the capabilities of deep-fake-producing GANs improve, politically motivated actors can create false videos of their adversaries. These can be used to convince voters that a particular candidate said or did things that are detrimental to their reputation. A striking characteristic of this type of disinformation is that once it is in the minds of the public, it is very hard to combat. Even with well-documented evidence that a video is fake, many will still believe it is true.

However, a less discussed issue is the opposite case: what happens when compromising video of a politician surfaces but they claim it is a fake? Claims of “fake news” already echo throughout political discourse, and proving the authenticity of a video claimed to be fake can be quite challenging. In this way, deep fake technology puts a heavy strain on our ability to trust anything we see or hear.

Limitations of Deep Fake Technology

The good news is that deep fake technology is still far from perfect. Its limitations are constantly changing, but researchers continue to work on methods for exploiting them. One limitation is that, due to processing constraints, GANs are trained on facial images of a fixed size. As a result, researchers from the University at Albany, SUNY have been able to train neural networks to find the warping artifacts that are indicative of deep fake videos.
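The intuition behind warping artifacts can be shown with a toy measurement. This is not the SUNY detector, just an illustration of the underlying signal: a face generated at a fixed low resolution and scaled up into a sharper frame loses high-frequency detail relative to its surroundings. The Laplacian-variance “sharpness” proxy and the simulated down/upscaling are my own illustrative assumptions.

```python
import numpy as np

def sharpness(region):
    """Variance of a 5-point Laplacian response -- a crude proxy for the
    amount of high-frequency detail in an image region."""
    lap = (-4 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(2)

# A detailed "frame", and a copy whose 32x32 face region was produced at
# low resolution and upscaled -- simulated here by 4x block-averaging and
# nearest-neighbor upsampling, mimicking a fixed-size GAN output warped in.
frame = rng.random((64, 64))
warped = frame.copy()
face = frame[16:48, 16:48]
low = face.reshape(8, 4, 8, 4).mean(axis=(1, 3))              # 32 -> 8
warped[16:48, 16:48] = np.repeat(np.repeat(low, 4, axis=0), 4, axis=1)

# Compare the face region's detail to a background patch. The warped face
# shows far less high-frequency energy -- the kind of statistical cue a
# detector network can be trained to pick up.
background = sharpness(frame[0:16, 0:16])
ratio_real = sharpness(frame[16:48, 16:48]) / background
ratio_fake = sharpness(warped[16:48, 16:48]) / background
```

A trained detector learns far subtler versions of this mismatch, but the fixed-input-size constraint is what puts the artifact there in the first place.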

Another limitation of deep fake video creation is that current GANs do not account for context and linguistics. Because training relies on static images, facial habits tied to the specific content or emotion being delivered are not easily replicated. As a result, researchers from Dartmouth released a study earlier this year that analyzes video for consistency with these “soft biometrics”. As of its release, the study achieved a 95% accuracy rate, and the researchers estimate that by the start of the 2020 primary season, that accuracy could be as high as 99%.

Finally, more development is needed before fully synthesized (both audio and video) deep fake videos can be reliably produced. Tools like Adobe VoCo and Baidu’s “Deep Voice” can produce very realistic synthesized voices. However, combining deep-faked audio and video has yet to be demonstrated with consistently reliable results. That said, it seems reasonable to expect that it is only a matter of time before fully synthesized video can be created from nothing more than a typewritten script.

Proving Authenticity

Researchers have also been working on ways to ensure that truly authentic videos can be validated. NYU researchers recently demonstrated how current high-end digital cameras can be modified to embed digital watermarks. Their study went further, however: they also used neural networks to overcome the loss of forensic data caused by regeneration (re-encoding an image or video). Overall, they built the framework for what could be an all-new approach to digital forensics.
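The verification idea can be illustrated with a naive cryptographic tag attached at capture time. To be clear, this is not the NYU technique: a plain HMAC breaks under any re-encoding, which is exactly the problem their neural-network approach addresses. The key name and byte strings below are illustrative stand-ins; in practice the signing key would live in secure hardware inside the camera.

```python
import hmac
import hashlib

# Hypothetical camera-side signing key (illustrative only).
CAMERA_KEY = b"secret-key-embedded-in-camera"

def sign_frame(frame_bytes: bytes) -> str:
    """Camera attaches an HMAC-SHA256 tag to each frame at capture time."""
    return hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, tag: str) -> bool:
    """Anyone holding the key can check the frame is bit-for-bit untouched."""
    expected = hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

frame = b"raw-frame-bytes"          # stand-in for captured frame data
tag = sign_frame(frame)
ok = verify_frame(frame, tag)
tampered = verify_frame(frame + b"x", tag)
```

The limitation of this sketch motivates the research: any legitimate re-encode invalidates the tag, so a practical watermark must survive regeneration while still failing on manipulated content.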

Looking ahead

It seems clear that in 2020, deep fakes will be part of the (dis)information bombarding the American public. If there is any good news, it’s that we have not yet reached the level of capability described in the many deep fake doomsday scenarios. Truly limiting the impact of deep fake media will require a coordinated approach of public awareness, careful and responsible journalism, and, of course, technological countermeasures. Security professionals can help shape all three of these elements through our evangelism, influence, and research.


You Can’t Do Anything, Why Haven’t You Done Something?

The conflicting messages Security Professionals give Business Leaders

It’s not if you’ll get hacked, it’s when. This is a statement every security professional has probably heard; in fact, most of us have probably used it at one time or another. A slightly different version is: if a hacker wants in badly enough, they will get in and you can’t stop them. While these statements may be true, they are not helpful. Worse yet, they set up a paradox among security professionals: we tell leaders they can’t do anything to protect themselves, but then shame them when they do nothing.

Why say these things?

The origins and intent of these kinds of statements are usually genuine. They’re typically used to convey that being 100% hack-proof is unrealistic, and to set the expectation that security is a continuous process, not a destination or goal. The sentiments are accurate but poorly communicated.

In some cases, however, these statements of hopelessness sound like an attempt to convey superiority. Their subtext seems to be: I know more about these attackers than you, and you’re foolish to think we can stop them. Ultimately, the tone is counterproductive and prevents us from inspiring the actions we want to see.

Shaming Inaction

Things get worse when, after telling leaders there is no hope, cybersecurity specialists turn around and shame them for not taking action. A breach occurs, and the security folks point fingers at all the security initiatives that didn’t happen because of business decisions. Yet the blame game isn’t fair: those fingers should be pointed at the ones who said there was no hope.

Consider this: when someone you count on for their expertise says a task simply can’t be done, how motivated do you feel? Will you spend time trying to accomplish something you have little passion for when the expert says there’s no hope? This is the scenario we as security professionals create when we share these messages of hopelessness. In effect, we’ve told leaders to just accept what is and move on. How, then, can we expect them to do anything we ask?

Communicating better

When we talk to business leaders about security, we have to arm them with decision-making criteria. We need to help them see that the course of action we’re recommending has tangible benefits. That doesn’t mean over-promising the impact of a new control or solution. Instead, we just need to help quantify the risks and the reduction in risk that will result. Give them some hope that if they do this thing, it will reduce the likelihood or impact of a compromise.

Of course, quantifying risk makes most security folks shudder. It is hard to do and harder to do well, but it’s not impossible. Focus on numbers. How many user accounts will no longer have static passwords with the new multi-factor solution? How many functional systems will be isolated to their own segment with that micro-segmentation proposal? Use those numbers to develop metrics. Will revenue-generating systems be more secure? Then how much revenue are you helping protect?

The case being made doesn’t have to include complex formulas that produce objective risk scores. Rather, we just need to provide tangible context: how much more secure will we be tomorrow than we are today? It sounds simple, but ultimately that’s the decision business leaders are asked to make. We’re asking them to make a cost-benefit decision in their heads. Make it easy for them.

When you give someone credible hope that their actions can be successful, they become motivated. Know what success means, and coach them if you have to. Success in security strategy is not becoming unhackable; we know that. It’s achieving continuous improvement over time. Stop spreading doom and gloom and then wondering why they don’t take action. Use positive messaging to inspire action and get the results you want.
