Predict your next investment

Angel Investor (Individual)

Investments: 1

About Monique Woodard

The CB Insights tech market intelligence platform analyzes millions of data points on venture capital, startups, patents, partnerships and news mentions to help you see tomorrow's opportunities, today.

Latest Monique Woodard News

Monique Woodard is betting on the people Sand Hill Road keeps overlooking

Mar 26, 2021

Tomio Geron ( @tomiogeron ) is a San Francisco-based reporter covering fintech. He was previously a reporter and editor at The Wall Street Journal, covering venture capital and startups. Before that, he worked as a staff writer at Forbes, covering social media and venture capital, and also edited the Midas List of top tech investors. He has also worked at newspapers covering crime, courts, health and other topics. He can be reached at tgeron@protocol.com or tgeron@protonmail.com. March 26, 2021 Retail investing is taking off in the U.S., with daily trading volume in January double what it was a year ago and much of the increase attributable to individual investors. It's more than just the Robinhood effect: A number of companies are making it easy to tack on investing alongside other financial services, and a broader range of apps now feature stock trading as a result. <p>The spread of stock trading in apps originally designed for borrowing, saving or paying has big implications. It could lure more people back into the stock market: The percentage of households owning stocks dropped after the dot-com bust and the 2008 financial crisis, and has only recently started to recover. That, in turn, has meant that the stock market's gains in recent years have only benefited some families.</p><p>But just adding a "buy stocks" button to an app won't reverse years of worsening income inequality by itself. Easy access to the markets could hurt some novice traders if apps lure them into risky, rapid-fire trading.</p><p>For startups looking to challenge the finance establishment, one-stop shopping is another way to attract customers and keep them once they're there with more options to save or invest.</p><p>Square's Cash App lets people get paid back by a friend for beers, and then turn around and plow it into shares of BUD.
People who use Revolut or MoneyLion for banking can move money from savings into investing. Stash even offers a Stock-Back Card that pays customers for shopping with fractional shares of the retailers they frequent. Several apps promise free stock, often in familiar brand names, for making your first trade.</p><p>Online brokers have often marketed themselves to people who are already trading. Apps where trading is an add-on are more often trying to attract customers who are new to the markets.</p><p>"There's a big paradigm shift from a brokerage mentality to an embedded finance mentality," said Bob Cortright, CEO of DriveWealth, which has 90 partners with close to 10 million accounts, including Revolut, MoneyLion and Square.</p><h3>Time for investing</h3><p>There are arguments for getting consumers started with investing sooner. Starting to invest while people are young can compound into much <a href="https://www.cnbc.com/2017/09/27/nerdwallet-charts-show-the-power-of-compound-interest.html" target="_blank">larger savings</a> in the future. Especially if consumers invest for the long term — and avoid cycling in and out of stocks in search of short-term profits — these relatively small investments can turn into much larger retirement savings over time.</p><p>Younger Americans targeted by many of these apps generally invest less, said Brian Graham, a partner at Klaros Group. That's in part because they have less money to save: Wages stagnated after the 2007-2008 financial crisis and student loan debt has skyrocketed. </p><p>"We see this whole divide and accessibility problem," said Yoshi Yokokawa, CEO of Alpaca, which provides trading services for financial apps. </p><p>It's not clear how many users of these apps are new to investing. More than 2.5 million customers have bought stocks using the Cash App in the 10 months since launch, Square said in its third-quarter 2020 earnings report.
Many of those users originally came to the app for peer-to-peer payments. Square now includes dollar-cost averaging options to address market volatility.</p><p>With features such as gamification, hyper-targeted customer segmentation and low or no trading fees, embedded investing in financial apps should draw more first-time investors, but there is a danger if apps focus on short-term trading, not longer-term investing, Klaros' Graham said. </p><p>"It'll expose people to financial services in a more integrated way. It has enormous potential to do good. It also has enormous potential to do bad," he said. </p><p>"A very small portion of American society actually invests in assets. Our customer base has historically been left out," said Dee Choubey, CEO of MoneyLion, whose customers mostly make between $50,000 and $100,000 a year. "Our mission is to create financial access."</p><h3>Moving money</h3><p>Smashing banking and brokerage services together isn't a new idea. Following passage of the Gramm-Leach-Bliley Act in 1999, brokers and banks literally merged. Citigroup emerged from the $70 billion combination of Travelers, Salomon Smith Barney and Citibank, while E-Trade bought Telebanc for $1.8 billion. At the same time, the dot-com boom lured retail investors into the market to trade online.</p><p>What's different now is that you don't need billion-dollar deals to put banking and brokerage services together. All you need is an API that can route trades from an app to the market. Companies such as DriveWealth, Alpaca and Parkside offer backend trading services for a variety of customers. </p><p>Now, there's renewed retail interest in investing, with a bull market that roared back after the pandemic crash last year and low interest rates that leave traditional savings accounts yielding little.
All-in-one financial apps make moving cash and buying stocks as easy as ordering an Uber — not something that requires expertise or even downloading a separate app. </p><p>Some companies offer education for new investors. MoneyLion asks for a customer's financial goals and provides a caution flag if they seek to invest in something that is outside of their risk profile. Square has a digital "My First Stock" slideshow complete with sharks and crocodiles to illustrate risk. Robinhood bought MarketSnacks, a financial media company, in 2019, and now calls it Robinhood Snacks.</p><p>Some, following Robinhood's lead, offer commission-free trading, while others like MoneyLion offer managed baskets of stocks for a fee. "It's philosophically different than stock trading apps out there," MoneyLion's Choubey said. "Our customers start out with $1,000 to $10,000 of investable assets. To start off you really should probably not be buying a fraction of a single stock of GameStop. You really should start in a managed way with your goals."</p><p>Others use card rewards to get consumers interested in stocks. Stash customers have earned about 25 million stock rewards via the Stock-Back Card, the company says. And 35% of customers who received a stock through the card later invested their own cash in that company. Stash spokeswoman Vera Hanson calls the program a "gateway for customers to explore and discover new investments."</p><p>But the common theme is that placing stock-purchasing next to other financial products such as banking or lending is a way to drive increasing interest in investing. </p><p>"That consumer behavior is tied to the investment behavior," Cortright said. "It's a way to get young people thinking about investing for the long haul. Sometimes they're not using their own money. It's a reward back for behavior.
If every time there's a transaction you generate a lot of financial literacy, powerful things can come out of that over time."</p> Kate Silver is an award-winning reporter and editor with 15-plus years of journalism experience. Based in Chicago, she specializes in feature and business reporting. Kate's reporting has appeared in the Washington Post, The Chicago Tribune, The Atlantic's CityLab, Atlas Obscura, The Telegraph and many other outlets. March 19, 2021 While debate has raged in recent months about the pros and cons of raising the minimum wage, there's one particular set of people in agreement regarding the benefits: hourly wage workers, like Remington*. In September 2020, Remington began working in an Amazon fulfillment center in DuPont, Washington, after his hours were cut at his restaurant job because of COVID-19. A few months into his new job, which has a starting wage of $15 an hour plus benefits, he said he feels more secure earning a steady paycheck at a rate higher than the federal minimum wage — especially with benefits. "The consistent pay is amazing," he said. "Instead of just wondering 'Am I going to make rent this month?' I'm like, 'Yeah, I got the bills this month. I can even go on a trip.'" Thanks to his job, he's finally been able to get new glasses and contacts and take his daughter skiing and fishing, all while saving for a house he's planning to buy with his siblings — two of whom work with him at Amazon. <p>Earlier this month, the push for a federal minimum wage increase to $15 per hour failed to make it into the $1.9 trillion COVID-19 relief bill. But the battle isn't over.
Multiple lawmakers have sponsored bills that seek to raise the federal minimum wage, including the<a href="https://edlabor.house.gov/imo/media/doc/2021-01-26%20Raise%20the%20Wage%20Act%20Fact%20Sheet.pdf" rel="noopener noreferrer" target="_blank"> Raise the Wage Act of 2021</a>, which aims for an incremental increase to $15 an hour over the next five years, and would give 32 million Americans an increase in pay. A growing body of research supports raising the minimum wage. </p><h4>The ripple effect of increased pay</h4><p>At $7.25, the federal minimum wage hasn't been raised in more than a decade. According to the<a href="https://www.epi.org/publication/raising-the-federal-minimum-wage-to-15-by-2025-would-lift-the-pay-of-32-million-workers/" rel="noopener noreferrer" target="_blank"> Economic Policy Institute</a>, a person who is working full-time today at the federal minimum wage earns 18% less than what they would have earned 10 years ago, if adjusted for the increased cost of living; in fact, they're also making less than their minimum-wage counterparts were making more than 50 years ago.</p><p>The EPI goes on to say that with an increase to $15 per hour, nearly one in three Black workers, one in four Hispanic workers and one in five white workers would see their wages rise. 
</p><p><cite class="pull-quote">All told, increasing the federal minimum wage to $15 by 2025 would result in $107 billion in higher wages for workers, which would trickle into economies across the country, fueling business growth and job growth.</cite> </p><p>According to the<a href="https://www.nelp.org/publication/u-s-needs-15-minimum-wage/" rel="noopener noreferrer" target="_blank"> National Employment Law Project</a>, the increase would be especially impactful to women, and it would boost the income of 59% of workers whose total family income is below the poverty line.</p><p>A<a href="https://www.ipsos.com/en-us/news-polls/minimum-wage-thought-leadership-2021" rel="noopener noreferrer" target="_blank"> new study</a> conducted by Amazon/Ipsos found that most Americans support raising the minimum wage. Results from the study, which involved interviews with more than 6,000 adults, revealed that two out of three support increasing the minimum wage to $15 an hour. Those polled believe that the higher pay would have a positive impact on workers, their community, the country and the economy. Further, those making more than $15 an hour were more likely to say they were earning more than they spend and they could come up with $500 in an emergency; while those making less than $15 an hour were more likely to say they can't afford basic household expenses, such as groceries, prescription drugs and doctor visits. Respondents largely agreed that they look to large businesses for leadership in pushing earnings higher, both when it comes to raising the federal minimum wage and raising the company's own minimum wage.</p><p>Across the country, legislators and business leaders have long learned to take matters into their own hands. To date, the minimum wage has increased in 29 states, plus Washington, D.C., and 45 localities have adopted minimum wages above the federal minimum wage.
Businesses such as Amazon have also taken the lead on raising pay.</p><p>In 2018, Amazon was the first major retailer to raise its starting pay to $15 an hour. The impact of that raise has reverberated well beyond the walls of the online marketplace. Other major retailers, such as Best Buy, Walmart, Target and Costco followed suit in increasing employees' starting pay. A <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3793677" rel="noopener noreferrer" target="_blank">recent report</a> studying voluntary increases to wages by businesses found that when Amazon raised its pay to $15 in 2018, it led to a 4.7% increase in wages for employees at other companies in the same market. While critics frequently point to the risk of widespread job losses if the minimum wage increases, the researchers in this study found that the probability of employment decreased just 0.8% following the boost.</p><p>Amazon's increase to $15 an hour has also proved an asset to the business itself, said Jay Carney, the senior vice president of Amazon Global Corporate Affairs. "Once we increased our starting wage to $15 an hour, the positive impact on employee morale and retention — and the surge in job applicants — was immediate. In fact, the month after we raised our starting wage, applications for hourly positions more than doubled. Employees who saw their paychecks increase told us that they had an easier time providing for their families and were able to spend on things like car repairs and home improvement projects. In short, the investments we made in our hourly employees were quickly transferred to local businesses and economies," he said. </p><h4>The faces of a living wage</h4><p>Luv Luv*, who works in an Amazon fulfillment center in Shakopee, Minnesota, said her job has helped her live the life she wants.
A former hairstylist, she said she was drawn to the company for its generous health insurance policy, and the above-average wages, she added, are a bonus. "The pay at Amazon gave me the means to be able to afford the apartment I have. I get to live a comfortable life on my own terms. It's been a phenomenal journey for me," she said.</p><p>Chatonn*, who also lives in Shakopee, was drawn to Amazon for the income. She left her job in retail and hasn't looked back since she accepted a role at a fulfillment center. "The difference on my paycheck was huge. I'm earning $15 an hour instead of 10-something," she said. The increase in pay, she added, gives her more expendable income, so she can support her children's hobbies and endeavors. "It allows me to take better care of my family. My daughter is into art, I can pay for her art supplies. My son's in karate, I can afford karate now," she said. "Amazon allows me to be the parent that I want to be. That is everything to me."</p><p>So often, the argument around wages is based on facts and figures, and the facts and figures show that a $15 federal minimum wage would <a href="https://www.epi.org/publication/why-america-needs-a-15-minimum-wage/" rel="noopener noreferrer" target="_blank">generate more than $107 billion in higher wages and help reverse decades of pay inequality</a>. But those numbers don't have faces or stories attached to them. In the cases of Remington, Luv Luv and Chatonn, an increase in pay has made an immeasurable impact, and it's one that's already flowing down to the next generation. </p><p><em>*Last names were withheld for privacy purposes.</em></p> Joe Williams is a senior reporter at Protocol covering enterprise software, including industry giants like Salesforce, Microsoft, IBM and Oracle.
He previously covered emerging technology for Business Insider. Joe can be reached at JWilliams@Protocol.com. To share information confidentially, he can also be contacted on a non-work device via Signal (+1-309-265-6120) or JPW53189@protonmail.com. March 25, 2021 The perceptions surrounding autism and other developmental disorders are quickly changing. Companies like SAP and Dell, for example, employ hundreds of neurodiverse employees and interns in technical roles, as well as in marketing, customer relations and other non-engineering jobs. One such employee is Serena Schaefer, a software engineer at Microsoft who was recruited under the company's autism hiring program. <p>Overhearing high school parents question the abilities of teenagers like herself brought Schaefer into tech. Now, she serves as an example of the changing nature of career paths for neurodiverse individuals — a population that suffers from high unemployment and underemployment. </p><p>"There's this doubt if someone has autism. Being able to be given the chance to do something is crucial," Schaefer told Protocol. </p><p>Protocol talked to Schaefer to learn about her interview process at Microsoft, the importance of intersectionality when it comes to diversity efforts, why she hopes neurodiverse hiring programs go international and how companies can improve the experience of workers with autism. </p><p><em>This interview has been edited for brevity and clarity.</em></p><p><strong>What initially made you want to get into the tech industry?</strong></p><p>Back in freshman year of high school, I was at a small special-education school, and my biology teacher told me he was looking to start a robotics team. I didn't know anything about robotics really, but I just decided to say yes and join. It was the first-year team, so no one really knew anything except the mentors.
</p><p>You are assigned to do a certain task, and that year it was to create a robot that shoots basketballs through a hoop. There's a kickoff event where you get the supplies you need and hear about this task. At that event, I overheard parents say how they weren't too optimistic or hopeful that we would be able to produce a robot like these top public high schools could. They thought we'd have fun but wouldn't be able to finish anything in the six weeks we'd been given. </p><p>That was really eye-opening but frustrating for me. I would have thought all the parents would have been the biggest supporters. But it didn't seem like everyone was very supportive, so I was determined to change their minds. It was amazing we actually finished the robot, competed in competitions and scored points. I just gained a really deep interest in technology. </p><p><strong>Like that experience with the parents, do you find that same stigma exists in the tech industry today?</strong> </p><p>I do. There's movies and TV shows showing kids with autism and Down syndrome and they're like, rocking in a corner and being nonfunctional. It's what people think of when they think of neurodiversity. Or they may think of Sheldon Cooper and think that everyone is good at math. But there's this doubt if someone has autism. Being able to be given the chance to do something is crucial. </p><p class="shortcode-media shortcode-media-rebelmouse-image">[Image: Serena Schaefer. Photo: Microsoft]</p><p><strong>You've seen a larger reckoning over diversity and inclusion across industries, but definitely tech is a big part of that discussion. Have you seen a shift when it comes to neurodiversity? What progress has happened and what needs work? </strong></p><p>There's been progress recently. Companies like Microsoft are helping to make neurodiversity hiring programs more mainstream, more well-known — at least in the U.S. If Microsoft expands programs like these to France and other countries, it would become even more mainstream. Once these other countries start recognizing neurodiversity at a more public level, it will be even better. </p><p><strong>Do you get the chance to do a lot of outreach or work with neurodiverse high school students or those looking to get into the tech industry?</strong></p><p>Recently, I haven't done much work outside of Microsoft. But whenever there was an autism hiring luncheon, I would go and talk to the candidates and meet them and talk about both my profession and neurodiversity itself. </p><p>I've done more women-in-STEM-related things. Intersectionality is something that is prevalent in conversations about diversity. Females with autism are fairly rare. And females in the tech industry are underrepresented. So I do think that we can talk about neurodiversity but also how it intersects with other types of experiences. </p><p><strong>What impressed you about Microsoft's hiring program? And you mentioned expanding globally. How else do you think the company can improve it?</strong></p><p>It was interviews over multiple days.
So if you had an off day, you still would be able to go back the next day and actually prove what you can do. But the first day, rather than just sticking you in the room with some developer and have him or her ask preselected questions, you worked in a team to create a mock product and you got to present it to a bunch of hiring managers. And they didn't judge you based on your communication style. </p><p>With neurodiversity, some people think very fast. But some people may need a little more time to come up with the answer. Microsoft allowed everyone in the spectrum and accommodated everything they might need. It should be the standard for hiring for any candidate, not just those who happen to have a diagnosis. I had to prove I had a diagnosis. What about the people who don't have a diagnosis and [are] on the spectrum? Or have dyslexia? They should get a chance to prove themselves in that way. </p><p><strong>I've spoken to companies and other neurodiverse individuals who say the shift to remote work has features that are more accommodating. What has your experience been?</strong></p><p>There have been some positives. I tend to fidget a lot just to keep my hands busy and that may look kind of strange in a working environment. But working remotely, no one is going by my office wondering why I'm twirling a pen all the time. It lets me feel like I can be more relaxed and not have to constantly make sure that I am suppressing myself. I've found it hard to maintain eye contact with people, but looking at a computer screen makes it a little easier. You can stare at the camera. It's a bunch of small things that add up. It can be a bit isolating, but we do online games, virtual escape rooms, things like that. Nothing too negative, I would say it's more positive.
</p><p><strong>Do you think the shift to remote work will help companies bring more people in under neurodiversity?</strong></p><p>It would be great if there was a choice of whether to come into the office or not. Some people might prefer being in an actual office. A hybrid model is the way to go. Hopefully this pandemic can help us reconsider interviewing and allowing employees to do their best work however they are able to. </p><p><strong>How should companies act to create a better environment for their neurodiverse employees once they're in the door?</strong></p><p>I was first hired as an intern and on the first day someone from the program came in and presented to my team about neurodiversity and possible challenges I may face. I didn't have to explain it myself. It made it a bit less awkward. And afterwards my colleagues would ask a lot of questions. It was really inspiring to see how willing they were to include me. </p><p>Companies should have those sessions for a whole group of teams, trying to make them recognize any unconscious bias that they have and be aware that their coworkers may have dyslexia or some learning disability — or may be neurodiverse. It's about expanding what's already there to a larger scale so that more people can be aware. </p><p><strong>How do you manage being an advocate for the community while also handling all the other day-to-day pressures of life and work?</strong></p><p>It's pressure, but it's good pressure. It makes me reflect on how I interact with others who are neurodiverse or have disabilities. But I'm also white. I have sight. I have hearing. By trying to do diversity- and inclusion-related efforts, I'm hopefully widening my own awareness of others. I want to do more than coding something or engineering something.
I want to feel like I'm making a positive difference beyond just those who are using the products I am making.</p> Issie Lapowsky ( @issielapowsky ) is a senior reporter at Protocol, covering the intersection of technology, politics, and national affairs. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University’s Center for Publishing on how tech giants have affected publishing. March 8, 2021 On a Friday in August 2017 — years before a mob of armed and very-online extremists took over the U.S. Capitol — a young Black woman who worked at Facebook walked up to the microphone to ask Mark Zuckerberg a question during a weekly companywide question-and-answer session. Zuckerberg had just finished speaking to the staff about the white supremacist violence in Charlottesville, Virginia, the weekend before — and what a difficult week it had been for the world. He was answering questions on a range of topics, but the employee wanted to know: Why had he waited so long to say something? <p>The so-called Unite the Right rally in Charlottesville had been planned in plain sight for the better part of a month on Facebook. Facebook took the event down only a <a href="https://www.businessinsider.com/facebook-removed-unite-the-right-charlottesville-rally-event-page-one-day-before-2017-8" target="_blank">day before</a> it began, citing its ties to hate groups and the threat of physical harm. That turned out to be more than a threat. The extremist violence in Charlottesville left three people dead and dozens more injured.
Then-Attorney General Jeff Sessions later <a href="https://www.usnews.com/news/national-news/articles/2017-08-14/jeff-sessions-calls-charlottesville-attack-domestic-terrorism" target="_blank">called</a> it an act of "domestic terrorism." </p><p>Zuckerberg had already posted a contrite, cautious <a href="https://www.facebook.com/zuck/posts/10103969849282011?pnref=story" target="_blank">message</a> about the rally on Facebook earlier that week, saying the company would monitor for any further threats of violence. But his in-person response to the employee's question that day struck some on the staff as dismissive. "He said in front of the entire company, both in person and watching virtually, that things happen all over the world: Is he supposed to comment on everything?" one former employee recalled.</p><p>"It was something like: He can't be giving an opinion on everything that happens in the world every Friday," another former employee remembered.</p><p>Facebook's chief operating officer and resident tactician, Sheryl Sandberg, quickly swooped in, thanking the employee for her question and rerouting the conversation to talk about Facebook's charitable donations and how Sandberg herself thinks about what to comment on publicly. A Facebook spokesperson confirmed the details of this account, but said it lacked context, including that Zuckerberg did admit he should have said something sooner. </p><p>Still, to the people who spoke with Protocol, Zuckerberg's unscripted remarks that day underscored something some employees already feared: that the company had yet to take the threat posed by domestic extremists in the U.S. as seriously as it was taking the threat from foreign extremists linked to ISIS and al-Qaeda.
"There wasn't a patent condemnation such that we would have expected had this been a foreign extremist group," a third former employee said.</p><p>At the time, tech giants were already hard at work figuring out how to crack down on the global terrorist networks that filled their sites with beheading videos and used social media to openly recruit new adherents. Just a few months before Charlottesville, Facebook, YouTube, Twitter and Microsoft had <a href="https://gifct.org/about/story/#june-26--2017---formation-of-gifct" target="_blank">announced</a> a novel plan to share intel on known terrorist content so they could automatically remove posts that had appeared elsewhere on the web. </p><p>Despite the heavy-handed approach to international jihadism, tech giants have applied a notably lighter touch to the same sort of xenophobic, racist, conspiratorial ideologies that are homegrown in the U.S. and held largely by white Westerners. Instead, they've drawn drifting lines in the sand, banning explicit calls for violence, but often waiting to address the deranged beliefs underlying that violence until something has gone terribly wrong. </p><p>But the Capitol riot on Jan. 6 and the spiraling conspiracies that led to it have forced a reckoning many years in the making on how both Big Tech and the U.S. government approach domestic extremists and their growing power. In the weeks since the riot, the Department of Homeland Security has issued a <a href="https://www.dhs.gov/ntas/advisory/national-terrorism-advisory-system-bulletin-january-27-2021" target="_blank">terrorism advisory</a> bulletin, warning of the increased threat of "domestic violent extremists" who "may be emboldened" by the riot.
The head of the intelligence community has <a href="https://www.nytimes.com/2021/01/19/us/politics/avril-haines-domestic-terror-qanon.html" target="_blank">promised</a> to track domestic extremist groups like QAnon. And attorney general nominee Merrick Garland, who prosecuted the 1995 Oklahoma City bombing case, <a href="https://time.com/5941907/merrick-garland-domestic-terrorism/" target="_blank">said</a> during his recent confirmation hearing that investigating domestic terrorism will be his "first priority. "</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="3"></div></div></p><p>Tech companies have followed suit, cracking down in ways they never have before on the people and organizations that worked to motivate and glorify that extremist behavior, including former President Donald Trump himself. </p><p>But the question now is the same as it was when that employee confronted Zuckerberg three years ago: What took so long? </p><p>Interviews with more than a dozen people who have worked on these issues at Facebook, Twitter and Google or inside the government shed light on how tech giants' defenses against violent extremism have evolved over the last decade and why their work on domestic threats lagged behind their work on foreign ones. </p><p>Some of it has to do with the War on Terror sociopolitical dynamics that have prioritized violent Islamism above all else. </p><p>Some of it has to do with the technical advancements that have been made in just the last four years. </p><p>And yes, some of it has to do with Trump.</p><h2>The room full of lawyers</h2><p>Nearly a decade before Q was a glimmer in some 4channer's eye, the tech industry was facing a different scourge – the proliferation of child sexual abuse material. In 2008, Microsoft called a Dartmouth computer science professor named Hany Farid to help the company figure out a way to do something about it. 
Farid, now a professor at University of California, Berkeley, traveled to Washington to meet with representatives from the tech industry to discuss a possible solution.</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="4"></div></div></p><p>"I go down to D.C. to talk to them, and it's exactly what you think it is: a room full of lawyers, not engineers, from the tech industry, talking about how they can't solve the problem," Farid recalled. </p><p>To Farid, the fact that he was one of the only computer scientists at the meeting sent a message about how the industry thought about the problem, and still thinks about other content moderation problems — not as a technical challenge that rewarded speed and innovation, but as a legal liability that had to be handled cautiously.</p><p>At the time, tech companies were already attaching unique fingerprints to copyrighted material so they could remove anything that risked violating the Digital Millennium Copyright Act. Farid didn't see any reason why companies couldn't apply the same technology to automatically remove child abuse material that had been previously reported to the National Center for Missing &amp; Exploited Children's tipline. That might not catch every piece of child abuse imagery on the internet, but it would make a dent. In partnership with Microsoft, he spent the next year developing a tool called PhotoDNA that Microsoft <a href="https://news.microsoft.com/2009/12/15/new-technology-fights-child-porn-by-tracking-its-photodna/" target="_blank">deployed</a> across its products in 2009.<br></p><p>But social networks dragged their feet. Facebook became the first company outside of Microsoft to announce it had adopted PhotoDNA in <a href="https://blogs.microsoft.com/on-the-issues/2011/05/19/facebook-to-use-microsofts-photodna-technology-to-combat-child-exploitation/" rel="noopener noreferrer" target="_blank">2011</a>. 
Twitter took it up in <a href="https://www.theguardian.com/technology/2013/jul/22/twitter-photodna-child-abuse" rel="noopener noreferrer" target="_blank">2013</a>. (Google had begun using its own <a href="https://www.blog.google/outreach-initiatives/google-org/our-continued-commitment-to-combating/" rel="noopener noreferrer" target="_blank">hashing system</a> in 2008.) "Everybody came to this reluctantly," Farid said. "They knew if you came for [child sexual abuse material], you're now going to come for the other stuff. "</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="5"></div></div></p><p>That turned out to be the right assumption. By 2014, that "other stuff" included a string of ghastly beheading videos, slickly filmed and distributed by ISIS's loud and proud propaganda arm. One of those videos in particular, which documented the beheading of journalist James Foley, horrified Americans as it filled their Twitter feeds and multiplied on YouTube. "It was the first instance of an execution video going really viral," said Nu Wexler, who was working on Twitter's policy communications team at the time. "It was one of the early turning points where the platforms realized they needed to work together. "</p><p>As the Foley video made the rounds, Twitter and YouTube scrambled to form an informal alliance, where each platform swapped links on the videos it was finding and taking down. But at the time, that work was happening through user reports and manual searches. 
"A running theme for a number of services was that we had a very manual, very reactive response to the threat that ISIS posed to our services, coupled with the speed of ISIS' territorial organizational expansion and at the same time the response from industry and from government being relatively siloed," said Nick Pickles, Twitter's senior director of public policy strategy.</p><p>The Foley video and several other videos that appeared on the platform in quick succession only underscored how insufficient that approach was. "The way that content was distributed by ISIS online represented a manifestation of their online and physical threat in a way which led to a far more focused policy conversation and urgency to address their exploitation of digital services," Pickles said.</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="6"></div></div></p><p>But there was also trepidation among some in the industry. "I heard a tech executive say to me, and I wanted to punch him in the face: 'One person's terrorist is another person's freedom fighter,'" Farid remembered. "I'm like, there's a fucking video of a guy getting his head cut off and then run over. Do you want to talk about that being a 'freedom fighter'? "</p><p>To Farid, the choice for tech companies was simple: automatically filter out the mass quantities of obviously abhorrent content using hashing technology like PhotoDNA and worry about the gray areas later.</p><p class="pull-quote">There's a fucking video of a guy getting his head cut off and then run over. Do you want to talk about that being a 'freedom fighter'?</p><p>Inside YouTube, one former employee who has worked on policy issues for a number of tech giants said people were beginning to discuss doing just that. But questions about the slippery slope slowed them down. "You start doing it for this, then everybody's going to ask you to do it for everything else. Where do you draw the line there? What is OK and what's not?" 
the former employee said, recalling those discussions. "A lot of these conversations were happening very robustly internally. "</p><p>Those conversations were also happening with the U.S. government at a time when tech giants were very much trying to <a href="https://www.wired.com/2014/03/facebook-security/" rel="noopener noreferrer" target="_blank">distance</a> themselves from the feds in the wake of the Edward Snowden disclosures. "At first when we were meeting with social media companies to address the ISIS threat, there was some reluctance to feel like tech companies were part of the solution," said Ryan Greer, who worked on violent extremism issues in both the State Department and the Department of Homeland Security under President Obama. "It had to be a little bit shamed out of them. "</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="7"></div></div></p><h2>The Madison Valleywood Project</h2><p>Then, in 2015, ISIS-inspired attackers shot up a theater in Paris and an office in San Bernardino. The next year, they carried out a series of bombings in Brussels and drove cargo trucks through crowds in Berlin and Nice. The rash of terrorist attacks in the U.S. and Europe changed the stakes for the tech industry. Suddenly, governments in those countries began forcefully pushing tech companies to prevent their platforms from becoming instruments of radicalization. 
</p><p>"You had multiple Western governments speaking with one voice and linking arms on this to put pressure to bear on these companies," said the former YouTube employee.</p><p>In the U.S., the Obama administration gave this pressure campaign a codename: <a href="https://www.nytimes.com/2016/02/25/technology/tech-and-media-firms-called-to-white-house-for-terrorism-meeting.html" rel="noopener noreferrer" target="_blank">The Madison Valleywood Project</a>, which was designed to get Madison Avenue advertisers, Silicon Valley technologists and Hollywood filmmakers to work with the government in the fight against ISIS. In February 2016, Obama invited representatives from all of those industries – Google, Facebook and Twitter among them – to a day-long summit at the White House that was laser-focused on ISIS. The day's opening speaker, former Assistant Attorney General John Carlin, applauded Facebook, Twitter and YouTube's nascent counterterrorism efforts but urged them to do more. "We anticipate — and indeed hope — that after today you will continue to meet without the government, to continue to develop on your own efforts, building on the connections you make today," Carlin said, according to a copy of the <a href="https://epic.org/foia/MadisonValleywood_2.pdf" rel="noopener noreferrer" target="_blank">speech</a> obtained by the Electronic Privacy Information Center.</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="8"></div></div></p><p>"The ISIS threat really captivated both U.S. and international media at the time," said Greer, who now works as national security director for the Anti-Defamation League. "There was a constant drumbeat of questions: What are you doing about ISIS? What are you doing about ISIS? "</p><p>The mounting pressure seemed to have an impact. 
Just weeks before the White House summit, Twitter became the first tech company to <a href="https://blog.twitter.com/en_us/a/2016/combating-violent-extremism.html" rel="noopener noreferrer" target="_blank">publicly</a> enumerate the terrorist accounts it had removed in 2016. The communications team opted to publish the blog post laying out the stats without attaching the author's name, Wexler said, because they were fearful of directing more death threats to executives who were already being bombarded with them.</p><p>Over the course of the next year and a half, tech executives continued to hold meetings with the U.K.'s home secretary, the United Nations Counter-Terrorism Executive Directorate and the EU Internet Forum.</p><p><br></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <a href="https://media.gettyimages.com/photos/from-left-to-right-uk-home-secretary-amber-rudd-head-of-public-policy-picture-id865330526?b=1&amp;k=6&amp;m=865330526&amp;s=170x170&amp;h=_H9lHNEtUrY1xP4Xr75Pj4F0BvKbUzR9M64Rp2dRaQo=" target="_blank"><img class="rm-shortcode" loading="lazy" src="https://www.protocol.com/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTc0NzQyNS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY0NDQ5OTE1OH0.HluxS3UkzhEHVK7AjUGUmSMIjhcYdArHYvQbEdGG1GM/image.jpg?width=980" id="99b98" data-rm-shortcode-id="b78df8d5dcf6965b6363ea9429b987bd" data-rm-shortcode-name="rebelmouse-image"></a> <small class="image-media media-caption" placeholder="Add Photo Caption...">Twitter's Nick Pickles (second from left) and Facebook's Brian Fishman (third from left) attend the G7 Interior Ministerial Meeting in Ischia, Italy in October of 2017. 
</small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Photo: Vincenzo Rando/Getty Images</small> </p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="9"></div></div></p><p><br></p><p>By June 2017, Microsoft, Facebook, Google and Twitter <a href="https://gifct.org/about/story/" rel="noopener noreferrer" target="_blank">emerged with</a> a plan to share hashed terrorist images and videos through a new group called the Global Internet Forum to Counter Terrorism, or GIFCT. That group has since grown to include smaller companies, including Snap, Pinterest, Mailchimp and Discord, and is led by Obama's former director of the National Counterterrorism Center, Nick Rasmussen.</p><p>Meanwhile, Google's internal idea lab, Jigsaw, which had been studying radicalization online for years, began running a novel <a href="https://www.wired.com/2016/09/googles-clever-plan-stop-aspiring-isis-recruits/" rel="noopener noreferrer" target="_blank">pilot</a> designed to stop people from getting pulled in by ISIS through search. Working with outside groups, Jigsaw began sponsoring Google search ads in 2016 that would run whenever users searched for terms that risked sending them down an ISIS rabbit hole. Those search ads, inspired by Jigsaw's interviews with actual ISIS defectors, would link to Arabic and English-language YouTube videos that aimed to counter ISIS propaganda. In 2017, even as Google and YouTube worked on ways to remove ISIS content algorithmically, YouTube <a href="https://blog.youtube/news-and-events/bringing-new-redirect-method-features" rel="noopener noreferrer" target="_blank">deployed</a> the Redirect Method to searches inside its own platform to help counter propaganda its automated filters had not yet found. </p><p>Facebook, meanwhile, hired an expert on jihadi terrorism, Brian Fishman, to head up Facebook's work on counterterrorism and dangerous organizations in April 2016. 
At the time, the list of dangerous organizations consisted mainly of foreign terrorist organizations, as well as well-known hate groups like the Ku Klux Klan and the neo-Nazi group Blood &amp; Honour. These organizations were banned from the platform, as was any praise of those groups. But Fishman's hiring was a clear signal that cracking down on ISIS and al-Qaeda had become a priority for Facebook.</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="10"></div></div></p><p class="pull-quote">There was a constant drumbeat of questions: What are you doing about ISIS? What are you doing about ISIS?</p><p>After Fishman began, Facebook started using an approach similar to what the intelligence community would use to go after ISIS, relying not just on user reports and automated takedowns of known terrorist content, but using artificial intelligence as well as off-platform information to chase down whole networks of accounts.</p><p>ISIS's overt and corporate branding gave tech platforms a clear focal point to start with. "Some groups like the ISISes of the world and the al-Qaedas of the world are very focused on protecting their brand," Fishman said. "They retain tight control over the release of information." ISIS had media outlets, television stations, slogans and soundtracks. That meant platforms could begin sniffing out accounts that used that common branding without having to look exclusively at the content itself. </p><p>"I look back on that threat, and I recognize now in hindsight there were attributes of it that made it easier to go after than the types of domestic terrorism and extremism we're grappling with today," said Yasmin Green, director of research and development at Google's Jigsaw. "There was one organization, basically, and you had to make public your allegiance to it. Obviously, all of those things made it possible for law enforcement and the platforms to model and pursue it. 
"</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="11"></div></div></p><p><br></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <a href="https://media.gettyimages.com/photos/yasmin-green-director-of-research-and-development-jigsaw-speaks-the-picture-id1210644850?b=1&amp;k=6&amp;m=1210644850&amp;s=170x170&amp;h=5xo3EZLexgMsab0xvC7TRT-pyfxqqtWDSityLtoUvx8=" target="_blank"><img class="rm-shortcode" loading="lazy" src="https://www.protocol.com/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTc0ODU4Ny9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY2NjkxOTA0NH0.YNluKT1h_v-6kpS9FJexwgvFs1Q77ikBXzV4MndqhNo/image.jpg?width=980" id="99442" data-rm-shortcode-id="9448e621190bd34309d2730d21d23cdf" data-rm-shortcode-name="rebelmouse-image"></a> <small class="image-media media-caption" placeholder="Add Photo Caption...">Jigsaw's Yasmin Green has recently focused her violent extremism research on white supremacists and conspiracy theorists.</small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Photo: Craig Barritt/Getty Images</small> </p><p><br></p><p>Around that time, Twitter also began investing in monitoring what Pickles calls "behavioral signals," not just tweets. "If you focus on behavioral signals, you can take action before they distribute content," Pickles said. "The switch to behavior meant that we could take action much faster at a much greater scale, rather than waiting. "</p><p>The development of automated filters across the industry was almost stunningly successful. Within a year, Facebook, Twitter and YouTube went from manually removing foreign terrorist content that had been reported by users to automatically taking down the vast majority of foreign terrorists' posts before anyone flags them. 
Today, according to Facebook's transparency reports, 99.8% of terrorist content is <a href="https://transparency.facebook.com/community-standards-enforcement#dangerous-organizations" rel="noopener noreferrer" target="_blank">removed</a> before a single person has even reported it.</p><p>"If you actually look a bit farther back, you understand just how much has moved in this arena," Green said. "That always makes me feel a little bit optimistic." </p><h2>The domestic dilemma</h2><p>It didn't hurt that both the United States and the United Nations keep lists of designated international terrorist organizations. To the "rooms full of lawyers" that help make these decisions, that kept things clean; use those lists as a guide and level a hammer on any organizations on them, and tech executives could be fairly confident they wouldn't face much second-guessing from the powers that be. "If a terrorist group is put on a watch list or terrorist list or viewed by the international community, by the UN, as a terrorist group, then that gives Facebook everything they need to have a very strong policy," said Yael Eisenstat, a former CIA officer who led election integrity efforts for Facebook's political ads in 2018. </p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="11"></div></div></p><p>The same can't be said for domestic extremists. In the United States, there's not an analogous list of domestic terrorist organizations for companies to work from. That doesn't mean acts of domestic terrorism go unpunished. It just means that people are prosecuted for the underlying crimes they commit, not for being part of a domestic terrorist organization. That also means that individual extremists who commit the crimes are the ones who face the punishment, not the groups they represent. 
"You have to have a violent crime committed in pursuit of an ideology," former FBI Acting Director Andrew McCabe said in a recent <a href="https://podcasts.apple.com/gb/podcast/dhs-warning-domestic-violent-extremists/id498897343?i=1000507279942" rel="noopener noreferrer" target="_blank">podcast</a>. "We hesitate to call domestic terrorists 'terrorists' until after something has happened." </p><p class="pull-quote">Nobody's going to have a hearing if a platform takes down 1,000 ISIS accounts. But they might have a hearing if you take down 1,000 QAnon accounts.</p><p>This gap in the legal system means tech companies write their own rules around what sorts of objectionable ideologies and groups ought to be forbidden on their platforms and often only take action once the risk of violence is imminent. "If something was illegal it was going to be handled. If something was not, then it became a political conversation," Eisenstat said of her time at Facebook.</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="11"></div></div></p><p>Even in the best of times, it's an uncomfortable balancing act for companies that purport to prioritize free speech above all else. But it's particularly fraught when the person condoning or even espousing extremist views is the president of the United States. "Nobody's going to have a hearing if a platform takes down 1,000 ISIS accounts. But they might have a hearing if you take down 1,000 QAnon accounts," said Wexler, who worked in policy communications for Facebook, Google and Twitter during the Trump administration.</p><p>There never was a Madison Valleywood moment in the U.S. related to the <a href="https://www.newsweek.com/hate-crimes-under-trump-surged-nearly-20-percent-says-fbi-report-1547870" rel="noopener noreferrer" target="_blank">rising hate crimes</a> and domestic extremist events that marked the Trump presidency. Not after Charlottesville. Not after Pittsburgh. Not after El Paso. 
The former president <em>did</em> have what might be construed as the opposite of a Madison Valleywood moment when he held an event at the White House in 2019 where <a href="https://www.washingtonpost.com/technology/2019/07/11/who-was-who-trumps-social-media-summit/" rel="noopener noreferrer" target="_blank">far-right conspiracy theorists and provocateurs</a> discussed social media censorship. But this time, Facebook, Google and Twitter weren't invited.</p><p><br></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <a href="https://media.gettyimages.com/photos/poster-with-a-tweet-is-seen-as-president-donald-j-trump-participates-picture-id1155382518?b=1&amp;k=6&amp;m=1155382518&amp;s=170x170&amp;h=rssm82D_89TYJJFr8l77T0pLrAOoN70cMvhW2p3Eqoo=" target="_blank"><img class="rm-shortcode" loading="lazy" src="https://www.protocol.com/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTc0NzQ0MC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYzMzkyMzA5OX0.WyLHy4C-a0IV0S0HwMaQ75yknPytWxqbU1wG7ZR9Ygs/image.jpg?width=980" id="b9709" data-rm-shortcode-id="aea6abb03061310945c5554c829dada0" data-rm-shortcode-name="rebelmouse-image"></a> <small class="image-media media-caption" placeholder="Add Photo Caption...">President Trump's Social Media Summit in 2019 focused on alleged social media censorship of conservatives. </small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Photo: Jabin Botsford/Getty Images</small> </p><p><br></p><p>"Platforms write their own rules, but governments signal which types of content they find objectionable, creating a permission structure for the companies to step up enforcement," Wexler said. "President Trump's comments after Charlottesville and his tacit support of the Proud Boys sent a deliberate message to tech companies: If you crack down on white nationalists' accounts, we'll accuse you of political bias and make your CEOs testify before Congress. 
"</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="11"></div></div></p><p>During the Trump years, tech companies repeatedly courted favor with the president and his party. In 2019, Google CEO Sundar Pichai met <a href="https://www.cnbc.com/2019/03/27/trump-met-with-google-ceo-sundar-pichai-on-political-fairness-china.html" rel="noopener noreferrer" target="_blank">directly</a> with Trump to discuss what Trump later characterized as "political fairness." Internally, Google employees <a href="https://www.nbcnews.com/news/us-news/current-ex-employees-allege-google-drastically-rolled-back-diversity-inclusion-n1206181" rel="noopener noreferrer" target="_blank">told</a> NBC News that the company had rolled back diversity training programs because the company "doesn't want to be seen as anti-conservative." (Google denied the accusation to NBC.) On Election Day in 2020, YouTube allowed the Trump campaign to book an ad on its <a href="https://www.businessinsider.com/trump-2020-campaign-spends-big-on-youtube-biden-facebook-2020-11?amp" rel="noopener noreferrer" target="_blank">homepage</a> for the entire day, and it was the only <a href="https://stacks.stanford.edu/file/druid:tr171zs0069/EIP-Final-Report-v2.pdf" rel="noopener noreferrer" target="_blank">one of the top three social platforms</a> that had no explicit policy regarding attempts to delegitimize the election results. After the election, <a href="https://www.theverge.com/2020/11/4/21550180/youtube-oann-video-election-trump-misinformation-voting-final-results-facebook-twitter" rel="noopener noreferrer" target="_blank">videos</a> claiming Trump won went viral, but YouTube only began <a href="https://blog.youtube/news-and-events/supporting-the-2020-us-election/" rel="noopener noreferrer" target="_blank">removing</a> widespread allegations of fraud and errors on Dec. 9.</p><p>Google and YouTube declined to make any executives available for comment. 
In a statement, YouTube spokesperson Farshad Shadloo said: "Our Community Guidelines prohibit hate speech, gratuitous violence, incitement to violence, and other forms of intimidation. Content that promotes terrorism or violent extremism does not have a home on YouTube." Shadloo said YouTube's policies focus on content violations and not speakers or groups, unless those speakers or groups are included on a government foreign terrorist organization list.</p><p>At times, tech giants bent their own policies or adopted entirely new ones to accommodate President Trump and his most conspiratorial supporters on the far right. In January 2018, Twitter, for one, published its "world leaders <a href="https://blog.twitter.com/en_us/topics/company/2018/world-leaders-and-twitter.html" rel="noopener noreferrer" target="_blank">policy</a>" for the first time, seemingly seeking to explain why President Trump wasn't punished for threatening violence when he tweeted that his nuclear button was "much bigger &amp; more powerful" than Kim Jong Un's. Later that year, after Facebook, Apple and YouTube all <a href="https://www.vox.com/2018/8/6/17655658/alex-jones-facebook-youtube-conspiracy-theories" rel="noopener noreferrer" target="_blank">shut down</a> accounts and pages linked to Infowars' Alex Jones, Twitter CEO Jack Dorsey booked an interview with conservative kingmaker Sean Hannity, where he <a href="https://time.com/5361874/twitter-jack-dorsey-alex-jones-sean-hannity/" rel="noopener noreferrer" target="_blank">defended</a> Twitter's decision not to do the same. Just a few weeks later, the company would <a href="https://www.wired.com/story/twitter-bans-alex-jones-infowars/" rel="noopener noreferrer" target="_blank">reverse course</a> after Jones livestreamed a tirade against a CNN reporter on Twitter — while standing outside of Dorsey's own congressional hearing. 
</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="11"></div></div></p><p><br></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <a href="https://media.gettyimages.com/photos/alex-jones-of-infowars-background-attends-a-senate-intelligence-in-picture-id1027224386?b=1&amp;k=6&amp;m=1027224386&amp;s=170x170&amp;h=1IyYVN-U74jheta0t0kJbu0S6ik1HK0i2o6F2l_FBW8=" target="_blank"><img class="rm-shortcode" loading="lazy" src="https://www.protocol.com/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTc0NzQ2MS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY1MDE2MzI2M30.Fn3bmuFCgNa8RHs9FbhpYRL79TFy0csfebnUMhCdv4E/image.jpg?width=980" id="03c68" data-rm-shortcode-id="2713b8f837f7dd9f37a2f330f1cb7184" data-rm-shortcode-name="rebelmouse-image"></a> <small class="image-media media-caption" placeholder="Add Photo Caption...">Twitter defended its decision not to remove accounts tied to InfoWars Alex Jones. Weeks later, Twitter reversed that decision, following CEO Jack Dorsey's testimony before Congress in September of 2018.</small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Photo: Tom Williams/CQ Roll Call</small> </p><p><br></p><p>Jones was a lightning rod for Facebook, too. As BuzzFeed recently <a href="https://www.buzzfeednews.com/article/ryanmac/mark-zuckerberg-joel-kaplan-facebook-alex-jones?ref=bfnsplash&amp;utm_term=4ldqpho" rel="noopener noreferrer" target="_blank">reported</a>, Facebook decided in 2019 to do more than just ban Jones' pages. The company wanted to designate him as a dangerous individual, a label that also ordinarily forbids other Facebook users from praising or expressing support for those individuals. But according to BuzzFeed, Facebook altered its own rules at Zuckerberg's behest, creating a third lane for Jones that would allow his supporters' accounts to go untouched. 
And when President Trump threatened to shoot looters in the aftermath of George Floyd's killing, Facebook staffers <a href="https://www.washingtonpost.com/technology/2020/06/28/facebook-zuckerberg-trump-hate/" rel="noopener noreferrer" target="_blank">reportedly</a> called the White House themselves, urging the president to delete or tweak his post. When he didn't, Zuckerberg <a href="https://www.vox.com/recode/2020/6/3/21279434/mark-zuckerberg-meeting-facebook-employees-transcript-trump-looting-shooting-post" rel="noopener noreferrer" target="_blank">told</a> his staff the post didn't violate Facebook's policies against incitement to violence anyway.</p><p><div class="ad-tag"><div class="ad-place-holder" data-pos="12"></div></div></p><p>"I would argue the looter-shooter post was more violating than [the post] on Jan. 6," Eisenstat said, referring to the Facebook video that ended up getting the former president kicked off of Facebook indefinitely in his last weeks in office. In the video, Trump told the Capitol rioters he loved them and that they were very special, while repeating baseless claims of election fraud. </p><p>For Facebook at least, this instinct to accommodate the party in power wasn't unique to the U.S., said tech entrepreneur Shahed Amanullah, who worked with Facebook on a series of global hackathons through his company, Affinis Labs. The goal of the hackathons, Amanullah said, was to fight all forms of hate and extremism online, and the events had been successful in countries like Indonesia and the Philippines. But when he brought the program to India, Amanullah said he received pressure from Facebook India's policy team to focus the event specifically on terrorism coming out of the majority-Muslim region of Kashmir. 
</p><p>The woman leading Facebook India's policy team at the time, Ankhi Das, was a vocal supporter of Indian Prime Minister Narendra Modi, and, <a href="https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346" rel="noopener noreferrer" target="_blank">according</a> to The Wall Street Journal, had a pattern of allowing anti-Muslim hate speech to go unchecked on the platform. "I said there's no way I'm ever going to accept a directive like that," Amanullah recalled. </p><p>Though he was supposed to run seven more hackathons in the country, Amanullah cut ties. "That was the last time we ever worked with Facebook," he said. </p><p>A Facebook spokesperson told Protocol, "We've found nothing to suggest this is true. We've looked into it on our end, spoken to people who were present at the hackathon and have no reason to believe that anyone was pressured to shift the focus of the hack." Das did not respond to Protocol's request for comment.</p><p>To Amanullah, the experience working with Facebook in India signaled that the company was giving in to the Indian government and giving Islamist extremism an inordinate amount of attention compared to other threats. "If you want to talk about hate," he said, "you have to talk about all kinds of hate. "</p><h2>The reckoning</h2><p>Looking back in the wake of the Capitol riot, it's easy to view Charlottesville as a warning shot that went unheard, or at least insufficiently answered, by tech giants. And in many ways it was. But inside, things were also changing, albeit far more slowly than almost anyone believes they should have.</p><p>For Fishman, who had focused almost entirely on jihadism on Facebook until that point, the Unite the Right rally was a turning point. "Charlottesville was a moment when extremist groups on the American far right clearly were trying to overcome the historical fractioning of that movement and express themselves in more powerful ways," he said. 
"It absolutely was something we tracked and realized we needed to invest more resources into."</p>

Monique Woodard Investments

1 Investment

Monique Woodard has made 1 investment. Her latest investment was in LawChamps as part of its Series A on October 17, 2018.


Monique Woodard Investments Activity


Date: 10/17/2018

Round: Series A

Company: LawChamps

Amount: $6M

New?: Yes

Co-Investors: First Round Capital, Gingerbread Capital, Kapor Capital, LDR Ventures, Lightspeed Venture Partners, NFX, Niija Kuykendall, Undisclosed Angel Investors, UpHonest Capital, and VU Venture Partners

Sources: 2
