
Can Jigsaw Solve the Internet’s Propaganda Puzzle?

By Anjali Khosla
February 1, 2020

ILLUSTRATION BY FABIO CONSOLI

In recent years, Google’s parent company, Alphabet, has been criticized for a range of things, from allowing racist YouTube content to complying with the censorship demands of various foreign governments. Meanwhile, Jigsaw, a technology and research incubator at Alphabet, has been working on solutions to problems such as online harassment, disinformation, and radicalization. For example, Jigsaw’s Perspective API tool employs machine learning to help publishers like The New York Times identify toxic comments in online forums, while its Redirect Method campaign serves up counter-propaganda ads and videos to users who are being targeted by extremist recruiters. Here, Jigsaw’s director of research and development, Yasmin Green, tells Hemispheres about her work to “inspire technologies that protect people from digital conflicts.”

Tell me about a Jigsaw project you’re proud of.

Redirect Method is a great example of the Jigsaw methodology, which is fieldwork: Let’s go to Iraq and interview defectors from ISIS; let’s go to Europe and interview women and girls who tried to join the caliphate; let’s design a technology intervention that tries to reach people who are looking for extremist material and offer them information that could help them make better choices. Now the Redirect Method has been deployed in India, in Sri Lanka, in Canada, in the U.K. It’s being applied now to militant white supremacy and neo-Nazism. 

Jigsaw faced some criticism for paying for troll activity as part of an experiment. What are the ethical pitfalls you have to navigate?

The concerns that I find myself coming up against the most are around who we speak to and what we do with the research findings from those conversations. I’m really interested in state-sponsored disinformation operations—I just went to the Philippines to interview troll-farm operators. There’s a level of insight they can give me that I couldn’t get from researchers. They were able to give an account of the operational model they use to coordinate the messaging, to run the workforce, to ensure security. Those findings could be used to improve detection. That’s the delicate calculus: trying to get information that can translate to countermeasures without amplifying or enabling the furthering of harm.

What is AI bias, and how does it interfere with your work? 

We trained our technology to spot toxic speech from online comments. When we checked our models, there were terms that were disproportionately being flagged as toxic. If I put into the Perspective API “Yasmin is a researcher,” the model didn’t think that was toxic. If I put in “Yasmin is a feminist researcher,” the model thought that might be toxic. If I said “Yasmin is a Muslim researcher,” the model thought that was toxic. We were like, why is the word “Muslim” thought to be toxic more often than the word “Christian”? It’s because of our training data: When the word “Muslim” or “gay” or “feminist” was in a comment, that comment tended to be one that was toxic. Because the team didn’t have enough comments that used the word “Muslim” in a positive light, they took excerpts from news articles—sentences that were neutral but contained these words—so they could give these words the appropriate neutral weighting. 
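The behavior Green describes can be reproduced against Perspective's public REST endpoint, which scores a comment and returns a toxicity probability between 0 and 1. The sketch below builds a request body and reads the summary score out of a response; it is a minimal illustration, not Jigsaw's internal tooling. The API key is a placeholder, and the sample response values are invented stand-ins that mirror the biased behavior the early model showed, not real model output.

```python
import json

# Endpoint for Google's Perspective API. A real call requires an API key
# provisioned through Google Cloud; "YOUR_API_KEY" is a placeholder.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key=YOUR_API_KEY")

def build_request(text):
    """Build the JSON body Perspective expects for a TOXICITY query."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the summary toxicity probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Request body for one of the sentences Green tested:
payload = build_request("Yasmin is a Muslim researcher")
print(json.dumps(payload))

# Illustrative response only -- the score here is a stand-in showing the
# kind of skew Green describes, where identity terms inflated toxicity.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.72, "type": "PROBABILITY"}}
    }
}
print(toxicity_score(sample_response))
```

The fix Green describes amounts to data augmentation: adding neutral sentences containing words like "Muslim" or "feminist" to the training set so those terms stop acting as toxicity signals on their own.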

Do you think the internet is going to become safer or more dangerous in the next decade?

Safer. I am an eternal optimist, and I see us as being in a transient moment. From what I see among policymakers, media researchers, and tech companies, everybody wants a safer internet. We might have different strategies about how to get there, but I do think we’re going to be in a good spot in 10 years.

Hemispheres is the award-winning onboard magazine for United Airlines. The magazine is published by Ink and produced by a dedicated staff of media professionals out of an Ink satellite office in Brooklyn, New York.
© 2020 Ink for United Airlines. All rights reserved.