US unleashes military to fight ‘disinformation attacks’

The drive has been hindered by Senate Majority Leader Mitch McConnell’s refusal to consider election-security legislation. Critics have labelled him #MoscowMitch, saying he left the US vulnerable to meddling by Russia, prompting his retort of “modern-day McCarthyism.”

Disinformation played a role in 2016. Then-Republican presidential candidate Donald Trump speaks to supporters. Credit: AP

US President Donald Trump has repeatedly rejected allegations that dubious content on platforms like Facebook, Twitter and Google aided his election win. Hillary Clinton supporters claimed a flood of fake items may have helped sway the results in 2016.

“The risk factor is social media being abused and used to influence the elections,” Syracuse University assistant professor of communications Jennifer Grygiel said in a telephone interview.

“It’s really interesting that Darpa is trying to create these detection systems but good luck is what I say. It won’t be anywhere near perfect until there is legislative oversight. There’s a huge gap and that’s a concern.”

False news stories and so-called deepfakes are increasingly sophisticated, making them harder for data-driven software to spot. AI imagery has advanced in recent years and is now used by Hollywood, the fashion industry and facial recognition systems.


Researchers have shown that the generative adversarial networks – or GANs – behind much of this AI imagery can also be used to create fake videos.
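To make the idea concrete, here is a toy one-dimensional sketch of how a GAN trains (entirely our illustration, not anything Darpa or the researchers describe): a generator learns to imitate "real" data while a discriminator simultaneously learns to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for step in range(3000):
    z = rng.standard_normal(64)
    x_real = rng.normal(REAL_MEAN, REAL_STD, 64)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), pushing fakes toward "real".
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w  # gradient of log D at each fake sample
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

fake = a * rng.standard_normal(1000) + b
print(f"generated mean ~ {fake.mean():.2f} (real mean {REAL_MEAN})")
```

Real deepfake systems replace these two linear models with deep networks trained on images, but the adversarial loop – forger and detector improving against each other – is the same, which is part of why detection is a moving target.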

Famously, Oscar-winning filmmaker Jordan Peele created a fake video of former President Barack Obama talking about the Black Panthers, Ben Carson, and making an alleged slur against Trump, to highlight the risk of trusting material online.

After the 2016 election, Facebook Chief Executive Officer Mark Zuckerberg played down fake news as a challenge for the world’s biggest social media platform. He later signalled that he took the problem seriously and would let users flag content and enable fact-checkers to label stories in dispute.

These judgments subsequently prevented disputed stories from being turned into paid advertisements, one key avenue toward viral promotion.


In June, Zuckerberg said Facebook made an “execution mistake” when it didn’t act fast enough to identify a doctored video of House Speaker Nancy Pelosi in which her speech was slurred and distorted.

“Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform,” Grotto said.

“Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”

By increasing the number of algorithmic checks, the military research agency hopes it can spot malicious fake news before it goes viral.


The agency added: “These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.”

Current surveillance systems are prone to “semantic errors.” An example, according to the agency, is software not noticing mismatched earrings in a fake video or photo. Other indicators, which may be noticed by humans but missed by machines, include weird teeth, messy hair and unusual backgrounds.

The algorithm testing process will include an ability to scan and evaluate 250,000 news articles and 250,000 social media posts, with 5000 fake items in the mix. The program has three phases over 48 months, initially covering news and social media, before an analysis begins of technical propaganda. The project will also include week-long “hackathons.”

Program manager Matt Turek discussed the program on Thursday in Arlington, Virginia, with potential software designers. Darpa didn’t provide an on-the-record comment.

The agency also has an existing research program underway, called MediFor, which is trying to plug a technological gap in image authentication, as no end-to-end system can verify manipulation of images taken by digital cameras and smartphones.

“Mirroring this rise in digital imagery is the associated ability for even relatively unskilled users to manipulate and distort the message of the visual media,” according to the agency’s website. “While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns.”

With a four-year project scale for SemaFor, the next election will have come and gone before the system is operational.

“This timeline is too slow and I wonder if it is a bit of PR,” Grygiel said. “Educating the public on media literacy, along with legislation, is what is important. But elected officials lack motivation themselves for change, and there is a conflict of interest as they are using these very platforms to get elected.”
