HEALTHCARE | Medical Devices & Equipment / Other Medical Devices
jigsaw.com

Founded Year

2003

Stage

Acquired

Total Raised

$17.95M

About Jigsaw

Jigsaw is a prospecting tool used by sales professionals, marketers and recruiters to get fresh and accurate sales leads and business contact information.

Jigsaw Headquarters Location

1917 29 3/4 Ave

Rice Lake, Wisconsin, 54868,

United States

Latest Jigsaw News

Seven things to know about the Big Tech CEO hearing

Mar 25, 2021

The perceptions surrounding autism and other developmental disorders are quickly changing. Companies like SAP and Dell, for example, employ hundreds of neurodiverse employees and interns in technical roles, as well as in marketing, customer relations and other non-engineering jobs. One such employee is Serena Schaefer, a software engineer at Microsoft who was recruited under the company's autism hiring program.

Overhearing high school parents question the abilities of teenagers like herself brought Schaefer into tech. Now she serves as an example of the changing nature of career paths for neurodiverse individuals, a population that suffers from high unemployment and underemployment.

"There's this doubt if someone has autism. Being able to be given the chance to do something is crucial," Schaefer told Protocol.

Protocol talked to Schaefer about her interview process at Microsoft, the importance of intersectionality in diversity efforts, why she hopes neurodiverse hiring programs go international and how companies can improve the experience of workers with autism.

This interview has been edited for brevity and clarity.

What initially made you want to get into the tech industry?

Back in freshman year of high school, I was at a small special-education school, and my biology teacher told me he was looking to start a robotics team. I didn't know anything about robotics really, but I just decided to say yes and join. It was a first-year team, so no one really knew anything except the mentors.

You are assigned a certain task, and that year it was to create a robot that shoots basketballs through a hoop. There's a kickoff event where you get the supplies you need and hear about the task. At that event, I overheard parents say they weren't too optimistic or hopeful that we would be able to produce a robot like the top public high schools could. They thought we'd have fun but wouldn't be able to finish anything in the six weeks we'd been given.

That was really eye-opening but frustrating for me. I would have thought the parents would be our biggest supporters. But it didn't seem like everyone was very supportive, so I was determined to change their minds. It was amazing: We actually finished the robot, competed in competitions and scored points. I gained a really deep interest in technology.

Like that experience with the parents, do you find that same stigma exists in the tech industry today?

I do. There are movies and TV shows showing kids with autism and Down syndrome rocking in a corner and being nonfunctional. It's what people think of when they think of neurodiversity. Or they may think of Sheldon Cooper and assume that everyone is good at math. But there's this doubt if someone has autism. Being able to be given the chance to do something is crucial.
[Photo: Serena Schaefer. Credit: Microsoft]

You've seen a larger reckoning over diversity and inclusion across industries, and tech is a big part of that discussion. Have you seen a shift when it comes to neurodiversity? What progress has happened and what needs work?

There's been progress recently. Companies like Microsoft are helping to make neurodiversity hiring programs more mainstream, more well-known, at least in the U.S. If Microsoft expands programs like these to France and other countries, they would become even more mainstream. Once other countries start recognizing neurodiversity at a more public level, it will be even better.

Do you get the chance to do a lot of outreach or work with neurodiverse high school students or those looking to get into the tech industry?

Recently, I haven't done much work outside of Microsoft. But whenever there was an autism hiring luncheon, I would go and talk to the candidates and meet them and talk about both my profession and neurodiversity itself.

I've done more women-in-STEM-related things. Intersectionality is something that is prevalent in conversations about diversity. Females with autism are fairly rare. And females in the tech industry are underrepresented. So I do think that we can talk about neurodiversity but also how it intersects with other types of experiences.

What impressed you about Microsoft's hiring program? And you mentioned expanding globally. How else do you think the company can improve it?

It was interviews over multiple days. So if you had an off day, you still would be able to go back the next day and actually prove what you can do. But on the first day, rather than just sticking you in a room with some developer asking preselected questions, you worked in a team to create a mock product and presented it to a bunch of hiring managers. And they didn't judge you based on your communication style.

With neurodiversity, some people think very fast. But some people may need a little more time to come up with an answer. Microsoft allowed for everyone on the spectrum and accommodated everything they might need. It should be the standard for hiring any candidate, not just those who happen to have a diagnosis. I had to prove I had a diagnosis. What about the people who don't have a diagnosis and [are] on the spectrum? Or have dyslexia? They should get a chance to prove themselves in that way.

I've spoken to companies and other neurodiverse individuals who say the shift to remote work has features that are more accommodating.
What has your experience been?

There have been some positives. I tend to fidget a lot just to keep my hands busy, and that may look kind of strange in a working environment. But working remotely, no one is going by my office wondering why I'm twirling a pen all the time. It lets me feel like I can be more relaxed and not have to constantly make sure that I am suppressing myself. I've found it hard to maintain eye contact with people, but looking at a computer screen makes it a little easier. You can stare at the camera. It's a bunch of small things that add up. It can be a bit isolating, but we do online games, virtual escape rooms, things like that. Nothing too negative; I would say it's more positive.

Do you think the shift to remote work will help companies bring more people in under neurodiversity?

It would be great if there was a choice of whether to come into the office or not. Some people might prefer being in an actual office. A hybrid model is the way to go. Hopefully this pandemic can help us reconsider interviewing and allow employees to do their best work however they are able to.

How should companies act to create a better environment for their neurodiverse employees once they're in the door?

I was first hired as an intern, and on the first day someone from the program came in and presented to my team about neurodiversity and possible challenges I might face. I didn't have to explain it myself. It made it a bit less awkward. And afterwards my colleagues would ask a lot of questions. It was really inspiring to see how willing they were to include me.

Companies should have those sessions for whole groups of teams, trying to make them recognize any unconscious bias they have and be aware that their coworkers may have dyslexia or some learning disability, or may be neurodiverse. It's about expanding what's already there to a larger scale so that more people can be aware.

How do you manage being an advocate for the community while also handling all the other day-to-day pressures of life and work?

It's pressure, but it's good pressure. It makes me reflect on how I interact with others who are neurodiverse or have disabilities. But I'm also white. I have sight. I have hearing. By trying to do diversity- and inclusion-related efforts, I'm hopefully widening my own awareness of others. I want to do more than coding something or engineering something. I want to feel like I'm making a positive difference beyond just those who are using the products I am making.

February 16, 2021

For corporate IT managers, there are many motivations to move dynamic workloads to the cloud.
It provides an irresistible trifecta of flexibility, scalability and cost savings for those managing varying workloads. The past year of widespread shutdowns caused by COVID-19 has increased this demand. That's one reason the global cloud computing market is expected to grow from $371.4 billion in 2020 to $832.1 billion by 2025, a compound annual growth rate (CAGR) of 17.5%, according to Research and Markets.

While cloud deployments benefit CISOs and security administrators in many ways, they often leave one critical attack vector unaddressed: data in use. Data exists at rest when it's stored, in transit when moving through the network, and in use as it's being processed. Data is often encrypted in the first two states, but not while it is processed.

That's where the confidential computing approach offered by Always Encrypted with secure enclaves in Azure SQL comes in, plugging that final "in use" security gap. It does this by isolating computations to a hardware-based trusted execution environment (TEE), which provides a protected container by securing a portion of the processor and memory. Users run software inside the protected environment to shield portions of code and data from view, preventing modification from outside the TEE.

Always Encrypted is a client-side technology that ensures sensitive data stored in specific database columns (for example, credit card numbers or national identification numbers) is never revealed to SQL Server on Azure Virtual Machines or to Azure SQL Database, a managed cloud database. That protection extends to database administrators and other privileged users, including cloud providers, who are authorized to access the database for management tasks but have no business need to see the information in the encrypted columns.

"You have a hardware-backed guarantee that your data will not be exposed to any of the attack vectors such as your own database administrator, bugs in the guest or host operating system, or even the hypervisor that your workload is running on," said Vikas Bhatia, head of product for Azure Confidential Computing at Microsoft. "Your data is safe and completely within your control."

The power of secure enclaves

Always Encrypted enables the Database Engine to process some queries on encrypted data while preserving the confidentiality of that data at column granularity. The data is decrypted from encrypted database columns only for processing by client applications with access to the encryption key. While current database systems provide sophisticated access control mechanisms and encryption for data at rest, they do not protect the data against attackers with administrative privileges on the database or on the server that hosts it.
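To make "client-side" concrete, here is a minimal sketch of an application querying an Always Encrypted column, assuming the pyodbc package and the Microsoft ODBC Driver for SQL Server; the server, database, table and column names are hypothetical. With ColumnEncryption=Enabled in the connection string, the driver encrypts query parameters before they leave the client and decrypts result columns on arrival, so the server only ever handles ciphertext.

```python
# Minimal sketch: querying an Always Encrypted column from a client.
# Assumes pyodbc and the Microsoft ODBC Driver for SQL Server;
# server, database, table and column names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=example.database.windows.net;"
    "Database=ClinicDB;"
    "Uid=app_user;Pwd=<password>;"
    "ColumnEncryption=Enabled;"  # driver encrypts/decrypts transparently
)

cursor = conn.cursor()
# SSN is an encrypted column. The parameter below is encrypted by the
# driver before it leaves the client, so the server (and its admins)
# never see the plaintext value; result columns are decrypted client-side.
cursor.execute(
    "SELECT FirstName, LastName FROM Patients WHERE SSN = ?",
    ("795-73-9838",),
)
for row in cursor.fetchall():
    print(row.FirstName, row.LastName)
```

Equality lookups like this one work against deterministically encrypted columns; richer operations on encrypted data, such as pattern matching and range comparisons, are what the secure enclave described next adds.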
Always Encrypted works using secure enclaves, a protected region of memory within the Database Engine process that can contain plaintext data. The secure enclave appears as an opaque box to the rest of the Database Engine and to other processes on the hosting machine. There is no way to view any data or code inside the enclave from the outside, even with a debugger. These properties make the secure enclave a trusted execution environment that can safely access cryptographic keys and sensitive data in plaintext without compromising data confidentiality.

Always Encrypted is used by a wide variety of customers requiring confidentiality and regulatory compliance, from financial institutions (such as Royal Bank of Canada and Financial Fabric) to insurance companies and health care organizations. These customers use Always Encrypted mostly for online transactional processing applications and encrypt only personally identifiable information (PII) columns such as social security numbers, names, email addresses and credit card numbers.

That's an essential advantage. "With these businesses, their entire infrastructure is built on trust," Bhatia noted. "So security must not just be part of their technical foundation but part of their essence. Always Encrypted allows them to do that."

Azure confidential computing can also be scaled horizontally, meaning capacity can be increased by connecting multiple hardware or software entities so that they work as a single logical unit. (Vertical scaling, by contrast, means adding more power, such as CPU or RAM, to an existing machine.)

"Security is an evolving journey and an evolving conversation," Bhatia said. "But confidential computing is going to be a big part of that future."

March 8, 2021

On a Friday in August 2017, years before a mob of armed and very-online extremists took over the U.S. Capitol, a young Black woman who worked at Facebook walked up to the microphone to ask Mark Zuckerberg a question during a weekly companywide question-and-answer session. Zuckerberg had just finished speaking to the staff about the white supremacist violence in Charlottesville, Virginia, the weekend before, and what a difficult week it had been for the world. He was answering questions on a range of topics, but the employee wanted to know: Why had he waited so long to say something?

The so-called Unite the Right rally in Charlottesville had been planned in plain sight for the better part of a month on Facebook.
Facebook took the event down only a day before it began, citing its ties to hate groups and the threat of physical harm. That threat proved all too real: The extremist violence in Charlottesville left three people dead and dozens more injured. Then-Attorney General Jeff Sessions later called it an act of "domestic terrorism."

Zuckerberg had already posted a contrite, cautious message about the rally on Facebook earlier that week, saying the company would monitor for any further threats of violence. But his in-person response to the employee's question that day struck some on the staff as dismissive. "He said in front of the entire company, both in person and watching virtually, that things happen all over the world: Is he supposed to comment on everything?" one former employee recalled.

"It was something like: He can't be giving an opinion on everything that happens in the world every Friday," another former employee remembered.

Facebook's chief operating officer and resident tactician, Sheryl Sandberg, quickly swooped in, thanking the employee for her question and rerouting the conversation to Facebook's charitable donations and how Sandberg herself decides what to comment on publicly. A Facebook spokesperson confirmed the details of this account but said it lacked context, including that Zuckerberg did admit he should have said something sooner.

Still, to the people who spoke with Protocol, Zuckerberg's unscripted remarks that day underscored something some employees already feared: that the company had yet to take the threat posed by domestic extremists in the U.S. as seriously as it was taking the threat from foreign extremists linked to ISIS and al-Qaeda. "There wasn't a patent condemnation such that we would have expected had this been a foreign extremist group," a third former employee said.

At the time, tech giants were already hard at work figuring out how to crack down on the global terrorist networks that filled their sites with beheading videos and used social media to openly recruit new adherents. Just a few months before Charlottesville, Facebook, YouTube, Twitter and Microsoft had announced a novel plan to share intel on known terrorist content so they could automatically remove posts that had appeared elsewhere on the web.

Despite the heavy-handed approach to international jihadism, tech giants have applied a notably lighter touch to the same sort of xenophobic, racist, conspiratorial ideologies that are homegrown in the U.S. and held largely by white Westerners. Instead, they've drawn drifting lines in the sand, banning explicit calls for violence but often waiting to address the deranged beliefs underlying that violence until something has gone terribly wrong.
But the Capitol riot on Jan. 6 and the spiraling conspiracies that led to it have forced a reckoning many years in the making over how both Big Tech and the U.S. government approach domestic extremists and their growing power. In the weeks since the riot, the Department of Homeland Security has issued a terrorism advisory bulletin warning of the increased threat from "domestic violent extremists" who "may be emboldened" by the riot. The head of the intelligence community has promised to track domestic extremist groups like QAnon. And attorney general nominee Merrick Garland, who prosecuted the 1995 Oklahoma City bombing case, said during his recent confirmation hearing that investigating domestic terrorism will be his "first priority."

Tech companies have followed suit, cracking down in ways they never have before on the people and organizations that worked to motivate and glorify that extremist behavior, including former President Donald Trump himself.

But the question now is the same as it was when that employee confronted Zuckerberg three years ago: What took so long?

Interviews with more than a dozen people who have worked on these issues at Facebook, Twitter and Google or inside the government shed light on how tech giants' defenses against violent extremism have evolved over the last decade and why their work on domestic threats lagged behind their work on foreign ones.

Some of it has to do with War on Terror sociopolitical dynamics that have prioritized violent Islamism above all else.

Some of it has to do with technical advancements made in just the last four years.

And yes, some of it has to do with Trump.

The room full of lawyers

Nearly a decade before Q was a glimmer in some 4channer's eye, the tech industry was facing a different scourge: the proliferation of child sexual abuse material. In 2008, Microsoft called a Dartmouth computer science professor named Hany Farid to help the company figure out a way to do something about it. Farid, now a professor at the University of California, Berkeley, traveled to Washington to meet with representatives from the tech industry to discuss a possible solution.

"I go down to D.C. to talk to them, and it's exactly what you think it is: a room full of lawyers, not engineers, from the tech industry, talking about how they can't solve the problem," Farid recalled.

To Farid, the fact that he was one of the only computer scientists at the meeting sent a message about how the industry thought about the problem, and still thinks about other content moderation problems: not as a technical challenge that rewards speed and innovation, but as a legal liability that has to be handled cautiously.

At the time, tech companies were already attaching unique fingerprints to copyrighted material so they could remove anything that risked violating the Digital Millennium Copyright Act. Farid didn't see any reason why companies couldn't apply the same technology to automatically remove child abuse material that had previously been reported to the National Center for Missing & Exploited Children's tipline. That might not catch every piece of child abuse imagery on the internet, but it would make a dent. In partnership with Microsoft, he spent the next year developing a tool called PhotoDNA, which Microsoft deployed across its products in 2009.
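PhotoDNA itself is proprietary, and it computes a robust perceptual hash rather than an exact one, but the hash-and-match pipeline it popularized can be sketched in a few lines. In this toy version the blocklist holds plain SHA-256 digests so the example runs end to end; a real deployment would match perceptual hashes curated by a clearinghouse such as NCMEC.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known, previously reported
# images. This entry is simply sha256(b"reported-image") so the demo
# matches itself; real systems hold millions of curated hashes.
KNOWN_HASHES = {hashlib.sha256(b"reported-image").hexdigest()}

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match stand-in. PhotoDNA's real hash is perceptual, so it
    # still matches after resizing, re-encoding or small edits.
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Return True if an upload matches a previously reported image."""
    return fingerprint(image_bytes) in KNOWN_HASHES

# Hash every upload before it is published; matches are removed and
# reported instead of waiting for user flags.
print(should_block(b"reported-image"))  # True
print(should_block(b"new-image"))       # False
```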
But social networks dragged their feet. Facebook became the first company outside of Microsoft to announce it had adopted PhotoDNA, in 2011. Twitter took it up in 2013. (Google had begun using its own hashing system in 2008.) "Everybody came to this reluctantly," Farid said. "They knew if you came for [child sexual abuse material], you're now going to come for the other stuff."

That turned out to be the right assumption. By 2014, that "other stuff" included a string of ghastly beheading videos, slickly filmed and distributed by ISIS's loud and proud propaganda arm. One of those videos in particular, which documented the beheading of journalist James Foley, horrified Americans as it filled their Twitter feeds and multiplied on YouTube. "It was the first instance of an execution video going really viral," said Nu Wexler, who was working on Twitter's policy communications team at the time. "It was one of the early turning points where the platforms realized they needed to work together."

As the Foley video made the rounds, Twitter and YouTube scrambled to form an informal alliance in which each platform swapped links to the videos it was finding and taking down. But at the time, that work was happening through user reports and manual searches. "A running theme for a number of services was that we had a very manual, very reactive response to the threat that ISIS posed to our services, coupled with the speed of ISIS' territorial and organizational expansion, and at the same time the response from industry and from government being relatively siloed," said Nick Pickles, Twitter's senior director of public policy strategy.

The Foley video and several other videos that appeared on the platform in quick succession only underscored how insufficient that approach was. "The way that content was distributed by ISIS online represented a manifestation of their online and physical threat in a way which led to a far more focused policy conversation and urgency to address their exploitation of digital services," Pickles said.

But there was also trepidation among some in the industry. "I heard a tech executive say to me, and I wanted to punch him in the face: 'One person's terrorist is another person's freedom fighter,'" Farid remembered. "I'm like, there's a fucking video of a guy getting his head cut off and then run over. Do you want to talk about that being a 'freedom fighter'?"

To Farid, the choice for tech companies was simple: automatically filter out the mass quantities of obviously abhorrent content using hashing technology like PhotoDNA and worry about the gray areas later.

Inside YouTube, one former employee who has worked on policy issues for a number of tech giants said people were beginning to discuss doing just that. But questions about the slippery slope slowed them down. "You start doing it for this, then everybody's going to ask you to do it for everything else. Where do you draw the line there? What is OK and what's not?" the former employee said, recalling those discussions. "A lot of these conversations were happening very robustly internally."

Those conversations were also happening with the U.S. government at a time when tech giants were very much trying to distance themselves from the feds in the wake of the Edward Snowden disclosures. "At first when we were meeting with social media companies to address the ISIS threat, there was some reluctance to feel like tech companies were part of the solution," said Ryan Greer, who worked on violent extremism issues at both the State Department and the Department of Homeland Security under President Obama. "It had to be a little bit shamed out of them."

The Madison Valleywood Project

Then, in 2015, ISIS-inspired attackers shot up a theater in Paris and an office in San Bernardino. The next year, they carried out a series of bombings in Brussels and drove cargo trucks through crowds in Berlin and Nice. The rash of terrorist attacks in the U.S. and Europe changed the stakes for the tech industry. Suddenly, governments in those countries began forcefully pushing tech companies to prevent their platforms from becoming instruments of radicalization.

"You had multiple Western governments speaking with one voice and linking arms on this to put pressure to bear on these companies," said the former YouTube employee.

In the U.S., the Obama administration gave this pressure campaign a codename: the Madison Valleywood Project, an effort to get Madison Avenue advertisers, Silicon Valley technologists and Hollywood filmmakers to work with the government in the fight against ISIS. In February 2016, Obama invited representatives from all of those industries, Google, Facebook and Twitter among them, to a daylong summit at the White House that was laser-focused on ISIS. The day's opening speaker, former Assistant Attorney General John Carlin, applauded Facebook, Twitter and YouTube's nascent counterterrorism efforts but urged them to do more. "We anticipate — and indeed hope — that after today you will continue to meet without the government, to continue to develop on your own efforts, building on the connections you make today," Carlin said, according to a copy of the speech obtained by the Electronic Privacy Information Center.

"The ISIS threat really captivated both U.S. and international media at the time," said Greer, who now works as national security director for the Anti-Defamation League. "There was a constant drumbeat of questions: What are you doing about ISIS? What are you doing about ISIS?"

The mounting pressure seemed to have an impact. Just weeks before the White House summit, Twitter became the first tech company to publicly enumerate the terrorist accounts it had removed in 2016. The communications team opted to publish the blog post laying out the stats without attaching the author's name, Wexler said, because they were fearful of directing more death threats at executives who were already being bombarded with them.

Over the course of the next year and a half, tech executives continued to hold meetings with the U.K.'s home secretary, the United Nations Counter-Terrorism Executive Directorate and the EU Internet Forum.

[Photo: Twitter's Nick Pickles (second from left) and Facebook's Brian Fishman (third from left) attend the G7 Interior Ministerial Meeting in Ischia, Italy, in October 2017. Credit: Vincenzo Rando/Getty Images]

By June 2017, Microsoft, Facebook, Google and Twitter emerged with a plan to share hashed terrorist images and videos through a new group called the Global Internet Forum to Counter Terrorism, or GIFCT.
That group has since grown to include smaller companies such as Snap, Pinterest, Mailchimp and Discord, and is led by Obama's former director of the National Counterterrorism Center, Nick Rasmussen.

Meanwhile, Google's internal idea lab, Jigsaw, which had been studying radicalization online for years, began running a novel pilot designed to stop people from getting pulled in by ISIS through search. Working with outside groups, Jigsaw began sponsoring Google search ads in 2016 that would run whenever users searched for terms that risked sending them down an ISIS rabbit hole. Those search ads, inspired by Jigsaw's interviews with actual ISIS defectors, linked to Arabic- and English-language YouTube videos that aimed to counter ISIS propaganda. In 2017, even as Google and YouTube worked on ways to remove ISIS content algorithmically, YouTube deployed the Redirect Method to searches inside its own platform to help counter propaganda its automated filters had not yet found.
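Public descriptions of the Redirect Method boil down to keyword-triggered counter-messaging: if a search query matches a curated list of high-risk terms, surface a link to counter-narrative videos instead of letting the query lead down the rabbit hole. A toy sketch of that trigger logic, with hypothetical terms and links, might look like this.

```python
from typing import Optional

# Toy sketch of the Redirect Method's trigger logic. The term list and
# playlist URL are hypothetical; the real system ran as curated search
# ads with multilingual keyword sets, not as a server-side hook.
RISKY_TERMS = {"join the caliphate", "isis recruitment"}
COUNTER_PLAYLIST = "https://example.com/counter-narrative-videos"

def maybe_redirect(query: str) -> Optional[str]:
    """Return a counter-narrative link for high-risk queries, else None."""
    normalized = query.lower()
    if any(term in normalized for term in RISKY_TERMS):
        return COUNTER_PLAYLIST
    return None

print(maybe_redirect("isis recruitment videos"))  # counter-narrative link
print(maybe_redirect("weather tomorrow"))         # None
```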
Facebook, meanwhile, hired an expert on jihadi terrorism, Brian Fishman, to head up its work on counterterrorism and dangerous organizations in April 2016. At the time, the list of dangerous organizations consisted mainly of foreign terrorist organizations, along with well-known hate groups like the Ku Klux Klan and the neo-Nazi group Blood & Honour. These organizations were banned from the platform, as was any praise of them. But Fishman's hiring was a clear signal that cracking down on ISIS and al-Qaeda had become a priority for Facebook.

After Fishman began, Facebook started using an approach similar to the intelligence community's, relying not just on user reports and automated takedowns of known terrorist content but on artificial intelligence and off-platform information to chase down whole networks of accounts.

ISIS's overt and corporate branding gave tech platforms a clear focal point to start with. "Some groups like the ISISes of the world and the al-Qaedas of the world are very focused on protecting their brand," Fishman said. "They retain tight control over the release of information." ISIS had media outlets, television stations, slogans and soundtracks. That meant platforms could begin sniffing out accounts that used that common branding without having to look exclusively at the content itself.

"I look back on that threat, and I recognize now in hindsight there were attributes of it that made it easier to go after than the types of domestic terrorism and extremism we're grappling with today," said Yasmin Green, director of research and development at Google's Jigsaw. "There was one organization, basically, and you had to make public your allegiance to it. Obviously, all of those things made it possible for law enforcement and the platforms to model and pursue it."

[Photo: Jigsaw's Yasmin Green has recently focused her violent extremism research on white supremacists and conspiracy theorists. Credit: Craig Barritt/Getty Images]

Around that time, Twitter also began investing in monitoring what Pickles calls "behavioral signals," not just tweets. "If you focus on behavioral signals, you can take action before they distribute content," Pickles said. "The switch to behavior meant that we could take action much faster at a much greater scale, rather than waiting."

The development of automated filters across the industry was almost stunningly successful. Within a year, Facebook, Twitter and YouTube went from manually removing foreign terrorist content that had been reported by users to automatically taking down the vast majority of foreign terrorists' posts before anyone flagged them. Today, according to Facebook's transparency reports, 99.8% of terrorist content is removed before a single person has reported it.

"If you actually look a bit farther back, you understand just how much has moved in this arena," Green said. "That always makes me feel a little bit optimistic."

The domestic dilemma

It didn't hurt that both the United States and the United Nations keep lists of designated international terrorist organizations. To the "rooms full of lawyers" that help make these decisions, that kept things clean: Use those lists as a guide and level a hammer at any organization on them, and tech executives could be fairly confident they wouldn't face much second-guessing from the powers that be. "If a terrorist group is put on a watch list or terrorist list or viewed by the international community, by the UN, as a terrorist group, then that gives Facebook everything they need to have a very strong policy," said Yael Eisenstat, a former CIA officer who led election integrity efforts for Facebook's political ads in 2018.

The same can't be said for domestic extremists. In the United States, there's no analogous list of domestic terrorist organizations for companies to work from. That doesn't mean acts of domestic terrorism go unpunished; it means people are prosecuted for the underlying crimes they commit, not for being part of a domestic terrorist organization. It also means that the individual extremists who commit the crimes are the ones who face punishment, not the groups they represent. "You have to have a violent crime committed in pursuit of an ideology," former FBI Acting Director Andrew McCabe said in a recent podcast. "We hesitate to call domestic terrorists 'terrorists' until after something has happened."

This gap in the legal system means tech companies write their own rules around which objectionable ideologies and groups ought to be forbidden on their platforms, and they often take action only once the risk of violence is imminent. "If something was illegal, it was going to be handled. If something was not, then it became a political conversation," Eisenstat said of her time at Facebook.

Even in the best of times, it's an uncomfortable balancing act for companies that purport to prioritize free speech above all else. But it's particularly fraught when the person condoning or even espousing extremist views is the president of the United States. "Nobody's going to have a hearing if a platform takes down 1,000 ISIS accounts. But they might have a hearing if you take down 1,000 QAnon accounts," said Wexler, who worked in policy communications for Facebook, Google and Twitter during the Trump administration.

There never was a Madison Valleywood moment in the U.S. for the rising hate crimes and domestic extremist events that marked the Trump presidency. Not after Charlottesville. Not after Pittsburgh. Not after El Paso. The former president did have what might be construed as the opposite of a Madison Valleywood moment when he held an event at the White House in 2019 where far-right conspiracy theorists and provocateurs discussed social media censorship. But this time, Facebook, Google and Twitter weren't invited.

[Photo: President Trump's Social Media Summit in 2019 focused on alleged social media censorship of conservatives. Credit: Jabin Botsford/Getty Images]
"Platforms write their own rules, but governments signal which types of content they find objectionable, creating a permission structure for the companies to step up enforcement," Wexler said. "President Trump's comments after Charlottesville and his tacit support of the Proud Boys sent a deliberate message to tech companies: If you crack down on white nationalists' accounts, we'll accuse you of political bias and make your CEOs testify before Congress."

During the Trump years, tech companies repeatedly courted favor with the president and his party. In 2019, Google CEO Sundar Pichai met directly with Trump to discuss what Trump later characterized as "political fairness." Internally, Google employees told NBC News that the company had rolled back diversity training programs because it "doesn't want to be seen as anti-conservative." (Google denied the accusation to NBC.) On Election Day in 2020, YouTube allowed the Trump campaign to book an ad on its homepage for the entire day, and it was the only one of the top three social platforms with no explicit policy regarding attempts to delegitimize the election results. After the election, videos claiming Trump won went viral, but YouTube only began removing widespread allegations of fraud and errors on Dec. 9.

Google and YouTube declined to make any executives available for comment. In a statement, YouTube spokesperson Farshad Shadloo said: "Our Community Guidelines prohibit hate speech, gratuitous violence, incitement to violence, and other forms of intimidation. Content that promotes terrorism or violent extremism does not have a home on YouTube." Shadloo said YouTube's policies focus on content violations rather than speakers or groups, unless those speakers or groups are included on a government foreign terrorist organization list.

At times, tech giants bent their own policies or adopted entirely new ones to accommodate President Trump and his most conspiratorial supporters on the far right.
In January 2018, Twitter, for one, published its "world leaders policy" for the first time, seemingly seeking to explain why President Trump wasn't punished for threatening violence when he tweeted that his nuclear button was "much bigger & more powerful" than Kim Jong Un's. Later that year, after Facebook, Apple and YouTube all shut down accounts and pages linked to Infowars' Alex Jones, Twitter CEO Jack Dorsey booked an interview with conservative kingmaker Sean Hannity, where he defended Twitter's decision not to do the same. Just a few weeks later, the company reversed course after Jones livestreamed a tirade against a CNN reporter on Twitter while standing outside Dorsey's own congressional hearing.

[Photo: Twitter defended its decision not to remove accounts tied to Infowars' Alex Jones, then reversed that decision following CEO Jack Dorsey's testimony before Congress in September 2018. Credit: Tom Williams/CQ Roll Call]

Jones was a lightning rod for Facebook, too. As BuzzFeed recently reported, Facebook decided in 2019 to do more than just ban Jones' pages. The company wanted to designate him as a dangerous individual, a label that ordinarily also forbids other Facebook users from praising or expressing support for those individuals. But according to BuzzFeed, Facebook altered its own rules at Zuckerberg's behest, creating a third lane for Jones that would allow his supporters' accounts to go untouched. And when President Trump threatened to shoot looters in the aftermath of George Floyd's killing, Facebook staffers reportedly called the White House themselves, urging the president to delete or tweak his post.
When he didn't, Zuckerberg told his staff the post didn't violate Facebook's policies against incitement to violence anyway.

"I would argue the looter-shooter post was more violating than [the post] on Jan. 6," Eisenstat said, referring to the Facebook video that ended up getting the former president kicked off of Facebook indefinitely in his last weeks in office. In the video, Trump told the Capitol rioters he loved them and that they were very special, while repeating baseless claims of election fraud.

For Facebook at least, this instinct to accommodate the party in power wasn't unique to the U.S., said tech entrepreneur Shahed Amanullah, who worked with Facebook on a series of global hackathons through his company, Affinis Labs. The goal of the hackathons, Amanullah said, was to fight all forms of hate and extremism online, and the events had been successful in countries like Indonesia and the Philippines. But when he brought the program to India, Amanullah said, he received pressure from Facebook India's policy team to focus the event specifically on terrorism coming out of the majority-Muslim region of Kashmir.

The woman leading Facebook India's policy team at the time, Ankhi Das, was a vocal supporter of Indian Prime Minister Narendra Modi and, according to The Wall Street Journal, had a pattern of allowing anti-Muslim hate speech to go unchecked on the platform. "I said there's no way I'm ever going to accept a directive like that," Amanullah recalled.

Though he was supposed to run seven more hackathons in the country, Amanullah cut ties. "That was the last time we ever worked with Facebook," he said.

A Facebook spokesperson told Protocol: "We've found nothing to suggest this is true. We've looked into it on our end, spoken to people who were present at the hackathon and have no reason to believe that anyone was pressured to shift the focus of the hack." Das did not respond to Protocol's request for comment.

To Amanullah, the experience working with Facebook in India signaled that the company was giving in to the Indian government and giving Islamist extremism an inordinate amount of attention compared to other threats. "If you want to talk about hate," he said, "you have to talk about all kinds of hate."

The reckoning

Looking back in the wake of the Capitol riot, it's easy to view Charlottesville as a warning shot that went unheard, or at least insufficiently answered, by tech giants. And in many ways it was. But inside, things were also changing, albeit far more slowly than almost anyone believes they should have.

For Fishman, who had focused almost entirely on jihadism at Facebook until that point, the Unite the Right rally was a turning point. "Charlottesville was a moment when extremist groups on the American far right clearly were trying to overcome the historical fractioning of that movement and express themselves in more powerful ways," he said. "It absolutely was something we tracked and realized we needed to invest more resources into."

[Photo: The Unite the Right rally in Charlottesville, which had been planned on Facebook, left three dead and dozens injured. Credit: Zach D Roberts/Getty Images]

Immediately after the rally, Facebook banned eight far-right and white nationalist pages associated with it. That was in addition to hate groups like the National Socialist Movement, the KKK, The Daily Stormer and Identity Evropa, which had already been designated as dangerous organizations and forbidden from the platform long before the rally. A few months later, in 2018, Fishman's team changed names, from the counterterrorism team to the dangerous organizations team, a signal that the company would double down on enforcement against hate groups, too. The team working primarily on violent extremism of all stripes at Facebook eventually grew to 350 people.

In late 2017, Twitter broadened its policy on violent extremism to prohibit all violent extremist groups, not just designated foreign terrorist organizations. And in 2019, YouTube promised it would limit the spread of misinformation by keeping conspiracy theory videos out of recommendations, a promise it's had some success in keeping.

Even so, these companies repeatedly bungled the definition of what constitutes hate and violent extremism. It was, after all, a year after Charlottesville that Zuckerberg boldly defended Facebook's policy of allowing Holocaust denial in an interview with Recode. It wasn't until 2020 that Zuckerberg changed his mind about that, citing "data showing an increase in anti-Semitic violence." His post never fully acknowledged the role Facebook's earlier policies might have played in stoking that violence.

There were also legitimate reasons why identifying and removing domestic hate groups in the U.S. was harder than taking down networks of accounts tied to ISIS and al-Qaeda.
For one thing, domestic groups and the people associated with them have traditionally been far more diffuse, making it tougher to draw neat lines around them or use clues in their branding to find and remove whole networks. "Groups that are less organized and less structured and don't put out official propaganda in the same sort of way, you have to use a different tool kit in order to get at those kinds of entities," Fishman said. "Chasing the networks becomes even more important than chasing known pieces of content that you can gather using vendors."

Then there's the fact that tech companies defined hate in limited ways. Facebook, Twitter and YouTube have all introduced hate speech policies that generally prohibit direct attacks on the basis of specific categories like race or sexual orientation. But what to do with a new conspiracy theory like QAnon, which hinges on an imagined belief in a cabal of Satan-worshipping Democratic pedophiles? Or a group of self-proclaimed "Western chauvinists" like the Proud Boys, cloaking themselves in the illusion that white pride doesn't necessarily require racial animus? Or the #StoptheSteal groups, which were based on a lie, propagated by the former president of the United States, that the election had been stolen? These movements were shot through with hate and violence, but initially they didn't fit neatly into any of the companies' definitions. And those companies, operating in a fraught political environment, were in turn slow to admit, at least publicly, that their definitions needed to change.

It's also philosophically trickier for American companies to ban people who are not faraway terrorists but fellow Americans, whom the U.S. government couldn't punish for their beliefs even if it wanted to. For all of the bad-faith concerns about censorship, it's possible to argue with a straight face that platforms shouldn't have even more power to police speech than Congress has. "I find that to be kind of troubling at a time when so many people are talking about tech platforms having too much power, we look to them to do what governments would be unable to do," said Matt Perault, director of Duke University's Center on Science & Technology Policy and Facebook's former director of public policy.

All of those considerations complicated tech companies' efforts to respond to domestic threats that could be harmful but haven't yet caused harm. "One of the biggest challenges around designing a policy and enforcement framework in these areas is when you have domestic [actors] who are part of this conversation, who are not promoting violence," Pickles said, after recently admitting (https://www.independent.co.uk/life-style/gadgets-and-tech/twitter-conspiracy-theory-donald-trump-b1790200.html) that Twitter was too slow to act on QAnon. It wasn't until after the Capitol attack that Twitter began permanently banning (https://blog.twitter.com/en_us/topics/company/2021/protecting--the-conversation-following-the-riots-in-washington--.html) accounts for primarily sharing QAnon content.
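Fishman's contrast between "chasing known pieces of content that you can gather using vendors" and chasing networks describes two very different tool kits. The first is essentially fingerprint matching: compare each upload against a database of digests of material that reviewers or industry partners have already flagged. Below is a minimal, hypothetical sketch of that idea in Python; the names and the tiny in-memory "database" are illustrative only, and real systems also rely on perceptual hashes so that re-encoded copies still match.

```python
import hashlib

# Hypothetical stand-in for a shared database of fingerprints of content
# already flagged by reviewers or industry partners. The digest below is
# simply the SHA-256 of an empty payload, seeded so the demo matches.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(payload: bytes) -> str:
    # Exact-match fingerprint. Production systems pair this with
    # perceptual hashes for images and video so that cropped or
    # re-encoded copies of known content still match.
    return hashlib.sha256(payload).hexdigest()

def matches_known_content(payload: bytes) -> bool:
    # Cheap and precise, but only as good as the database: novel content
    # from a diffuse movement will never appear in it, which is why
    # "chasing the networks" matters more for less organized groups.
    return fingerprint(payload) in KNOWN_BAD_HASHES

print(matches_known_content(b""))        # True: this digest is seeded above
print(matches_known_content(b"benign"))  # False: not in the database
```

The limitation the sketch makes obvious is exactly Fishman's point: a movement that produces no canonical propaganda leaves nothing to fingerprint, so enforcement has to shift to mapping accounts and the connections between them.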
It's not always easy to predict which extreme beliefs will turn violent either, said Green, who has spent the last few years at Jigsaw focusing on both violent white supremacists (https://jigsaw.google.com/the-current/white-supremacy/countermeasures/) and conspiracy theorists. In late 2019, Green traveled to Tennessee and Alabama to meet with people who believe in a range of conspiracy theories, from flat earthers to Sandy Hook deniers. "I went into the field with a strong hypothesis that I know which conspiracy theories are violent and which aren't," Green said. But the research surprised her: Some conspiracy theories she had believed to be innocuous, like flat earth, had far more militant followers than the ones she considered violent, like belief in white genocide.

"We spoke to flat earthers who could tell you which NASA scientists are propagating a world view and what they would do to them if they could," Green said.

Even more challenging: Of the 77 conspiracy theorists Green's team interviewed, not a single person believed in only one conspiracy. That makes mapping out the scope of the threat much more complex than fixating on a single group.

The runup to the 2020 election did accelerate some of this work, in part because the real-world threat was accelerating, too. After extremists associated with the far-right, anti-government boogaloo movement carried out (https://www.latimes.com/california/story/2020-06-17/far-right-boogaloo-boys-linked-to-killing-of-california-lawmen-other-violence) a series of killings in 2020, Facebook designated (https://about.fb.com/news/2020/06/banning-a-violent-network-in-the-us/) the boogaloos as a dangerous organization in June. "We saw something that we thought looked like a real network, a real entity, not just a bunch of guys using shared imagery in order to project frustration at the government," Fishman said.

"The reason we put policies [in place] like those we built over 2020 was because of concern about something like Jan. 6."

As the blurred lines between militia groups and conspiracy theorists became obvious throughout summer 2020, Facebook also barred (https://about.fb.com/news/2020/08/addressing-movements-and-organizations-tied-to-violence/) hundreds of militarized groups like the Oath Keepers. And after initially attempting merely to limit the spread of QAnon, it wound up banning all accounts, pages and groups "representing QAnon" in October and launching its own version of the Redirect Method to point people searching QAnon-related terms toward more credible sources. Meanwhile, Facebook's automated filters against hate speech also made significant strides, with the company removing (https://www.protocol.com/covid-facebook-content-moderation) more than double the amount of hate speech in the second quarter of 2020 as it did in the first.

YouTube cracked down (https://www.washingtonpost.com/technology/2020/10/15/youtube-qanon-crackdown/) on QAnon too, prohibiting "content that targets an individual or group with conspiracy theories that have been used to justify real-world violence." And it kicked out white supremacists like David Duke and Richard Spencer (https://techcrunch.com/2020/06/29/youtube-ban-stefan-molyneux-david-duke-white-nationalism/), who had managed to evade the company's hate speech policies for years.
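The Redirect Method mentioned above is, at bottom, a search-side intervention: when a query matches terms associated with a violent movement or conspiracy theory, the platform surfaces vetted, credible resources instead of (or ahead of) organic results. Here is a minimal sketch of that core idea; the term list, resource URL and function name are hypothetical placeholders, not Facebook's or Jigsaw's actual implementation.

```python
# Hypothetical watch list and resources; the real method curates both
# with researchers and counter-extremism practitioners.
REDIRECT_TERMS = {"example slogan", "example movement"}
CREDIBLE_RESOURCES = ["https://example.org/what-the-evidence-says"]

def redirect_results(query: str) -> list[str] | None:
    # Return curated resources if the query hits the watch list;
    # otherwise return None so the caller falls through to normal search.
    normalized = query.lower()
    if any(term in normalized for term in REDIRECT_TERMS):
        return CREDIBLE_RESOURCES
    return None

print(redirect_results("example slogan explained"))  # curated resources
print(redirect_results("weather tomorrow"))          # None -> normal search
```

The appeal of the approach is that it moderates the recommendation surface rather than the speech itself: nothing is deleted, but the platform stops acting as a funnel for searches it knows are risky.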
Twitter, in addition to banning the Oath Keepers (https://www.thedailybeast.com/twitter-bans-far-right-militia-group-the-oath-keepers), began aggressively labeling President Trump's tweets over the summer for glorifying violence and, after the election, for spreading misinformation about voter fraud. Twitter also launched a new policy prohibiting "…


Research containing Jigsaw


CB Insights Intelligence Analysts have mentioned Jigsaw in 3 CB Insights research briefs, most recently on Jul 14, 2021.

Jigsaw Patents

Jigsaw has filed 4 patents.

The 3 most popular patent topics include:

  • Fictional salespeople
  • GPS navigation devices
  • Lighting
Application Date | Grant Date | Title | Related Topics | Status
12/14/2016 | 12/10/2019 | Subscribe to see more | Videotelephony, Teleconferencing, Web conferencing, Fictional salespeople, Social networking services | Grant
00/00/0000 | 00/00/0000 | Subscribe to see more | Subscribe to see more | Subscribe to see more
00/00/0000 | 00/00/0000 | Subscribe to see more | Subscribe to see more | Subscribe to see more
00/00/0000 | 00/00/0000 | Subscribe to see more | Subscribe to see more | Subscribe to see more
