Shannon Bond

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.

Bond joined NPR in September 2019. She previously spent 11 years as a reporter and editor at the Financial Times in New York and San Francisco. At the FT, she covered subjects ranging from the media, beverage and tobacco industries to the Occupy Wall Street protests, student debt, New York City politics and emerging markets. She also co-hosted the FT's award-winning podcast, Alphachat, about business and economics.

Bond has a master's degree in journalism from Northwestern University's Medill School and a bachelor's degree in psychology and religion from Columbia University. She grew up in Washington, D.C., but is enjoying life as a transplant to the West Coast.

On Feb. 1, the editor of an award-winning Indian magazine got a call from his social media manager: The magazine's Twitter account was down.

"I said, 'Are you sure? Can you just refresh, and check again?' " recalled Vinod K. Jose, executive editor of The Caravan, which covers politics and culture. "But she said, 'No, no, it's real.' "

AILSA CHANG, HOST:

All right. Well, for more on this dilemma facing Twitter in India, we're going to turn now to NPR tech correspondent Shannon Bond.

Twitter users aren't known for staying quiet when they see something that's flat-out wrong or that they simply disagree with. So why not harness that energy to solve one of the most vexing problems on social media: misinformation?

With a new pilot program called Birdwatch, Twitter is hoping to crowdsource the fact-checking process, eventually expanding it to all 192 million daily users.

"I think ultimately over time, [misleading information] is a problem best solved by the people using Twitter itself," CEO Jack Dorsey said on a quarterly investor call on Tuesday.

Facebook is expanding its ban on vaccine misinformation and highlighting official information about how and where to get COVID-19 vaccines as governments race to get more people vaccinated.

"Health officials and health authorities are in the early stages of trying to vaccinate the world against COVID-19, and experts agree that rolling this out successfully is going to be helping build confidence in vaccines," said Kang-Xing Jin, Facebook's head of health.

January brought a one-two punch that should have knocked out the fantastical, false QAnon conspiracy theory.

After the Jan. 6 attack on the U.S. Capitol, the social media platforms that had long allowed the falsehoods to spread like wildfire — namely Twitter, Facebook and YouTube — got more aggressive in cracking down on accounts promoting QAnon.

Facebook's oversight board on Thursday directed the company to restore several posts that the social network had removed for breaking its rules on hate speech, harmful misinformation and other matters.

The decisions are the first rulings for the board, which Facebook created last year as a kind of supreme court, casting the final votes on the hardest calls the company makes about what it does and does not allow users to post.

The alternative social network MeWe had 12 million users at the end of 2020. Barely three weeks into 2021 — and two since a right-wing mob attacked the U.S. Capitol — the company says it's now passed 16 million.

CEO Mark Weinstein says this popularity is a testament to the reason he launched MeWe in 2016 as an alternative to Facebook. MeWe markets itself as privacy-forward: it doesn't harness users' data to sell ads or decide what content to show them.

Two weeks ago, Facebook indefinitely suspended former President Donald Trump from its social network and Instagram, after a mob of his supporters stormed the U.S. Capitol. CEO Mark Zuckerberg said the risks of allowing Trump to keep using the social network were "too great."

Now, Facebook wants its newly formed independent oversight board to weigh in and decide whether it should reinstate Trump.

Willy Solis never saw himself as an activist.

"I'm an introvert, extreme introvert," he said. "That's my nature."

But 2020 changed that — like so many other things.

Los Angeles County Supervisor Sheila Kuehl's district sweeps from the beaches of Santa Monica to the San Fernando Valley. Among the two million people she represents are Latino communities hit especially hard by the coronavirus pandemic.

"Many essential workers, many market and pharmacy and food service and restaurant and hotel workers and a lot of health care workers," she said. "So a lot of people just had to go to work."

When you search on Google, do you get the best results? Or the results that are best for Google?

That question is at the heart of the latest lawsuit to challenge the tech giant's dominance over Internet search and advertising.

On Thursday, a bipartisan group of 38 attorneys general hit Google with the third antitrust complaint against the company in less than two months, zeroing in on its role as "the gateway to the Internet."

This week, the Federal Trade Commission and 48 attorneys general unveiled blockbuster lawsuits accusing Facebook of crushing competition and calling for the tech giant to be broken up.

The twin complaints together run to nearly 200 pages documenting how Facebook became so powerful — and how, according to the government, it broke the law along the way.

Kolina Koltai first heard about the coronavirus back in January, but not from newspapers or TV. Instead, she read about it in anti-vaccination groups on Facebook.

"They were posting stories from China like, 'Hey, here's this mysterious illness,' or 'Here's this something that seems to be spreading,'" she said.

The Federal Trade Commission and 48 attorneys general across the nation filed much-anticipated lawsuits against Facebook on Wednesday, accusing the social media giant of gobbling up competitive threats in a way that has entrenched its popular apps so deeply into the lives of billions of people that rivals can no longer put up a fight.

Facebook is banning claims about COVID-19 vaccines that have been debunked by public health experts, as governments prepare to roll out the first vaccinations against the virus.

That includes posts that make false claims about how safe and effective the vaccines are, and about their ingredients and side effects.

Google illegally fired two employees involved in labor organizing last year, the National Labor Relations Board alleged in a complaint on Wednesday.

The tech giant also violated federal labor law, the agency said, by surveilling employees who viewed a union organizing presentation, interrogating others, unfairly enforcing some rules and maintaining policies that "discourage" workers from protected organizing activities.

Facebook users saw hate speech about once in every 1,000 pieces of content they viewed on the social network between July and September, the company said on Thursday.

This is the first time Facebook has publicly estimated the prevalence of hate speech on its platform, giving a sense of scale of the problem. It published the new metric as part of its quarterly report on how much content it removed from Facebook and Instagram for breaking rules ranging from violence to child exploitation to suicide and self-harm.

More than 200 Facebook workers say the social media company is making content moderators return to the office during the pandemic because the company's attempt to rely more heavily on automated systems has "failed."

The chief executives of Facebook and Twitter on Tuesday defended the steps they took to limit election misinformation, testifying before lawmakers on both sides of the aisle who have grown increasingly critical of Big Tech.

Maria Bartiromo, the Fox Business host, declared herself done with Twitter two days after the election.

She tweeted a link to an article that falsely claimed Democrats were trying to steal the election. Twitter hid the post behind a label warning that it contained misleading content. Twitter also notified Bartiromo that someone had complained about her account, though it clarified that she had not violated any rules and that it was taking no action against her.

For Bartiromo, the label was the last straw.

Twitter announced on Thursday that it would maintain some of the changes it had made to slow the spread of election misinformation, saying they were working as intended.

Before Election Day, Twitter, Facebook and other social networks had announced a cascade of measures billed as protecting the integrity of the voting process.

For Twitter, those included more prominent warning labels on misleading or disputed claims and limiting how such claims can be shared.

Last week, millions of Americans turned to cable news to watch election returns pour in. Some refreshed their Twitter feeds to get the latest tallies. And nearly 300,000 others kept an eye on the YouTube channel of 29-year-old Millie Weaver, a former correspondent for the conspiracy theory website Infowars, who offered right-wing analysis to her followers in a live-stream that carried on for almost seven hours the day after the election.

At times, her pro-Trump commentary veered into something else: misinformation.

Facebook removed a group that had gained more than 360,000 members since its creation on Wednesday and was filled with false claims about voter fraud and calls for real-world protests over vote counting.

California voters handed Uber and Lyft a big victory — and labor unions a big setback — when they approved a measure allowing the ride-hailing companies to keep classifying their drivers as independent contractors.

For Joe Renice, who drives for Uber in San Francisco, the measure's passage was a relief.

"This is a job that I make over $100,000 a year doing. And I have complete and total freedom and flexibility to do that," he said.

Murphy Bannerman first noticed the posts this summer in a Facebook group called Being Black in Arizona.

Someone started posting memes full of false claims that seemed designed to discourage people from voting.

The memes were "trying to push this narrative of, 'The system is a mess and there's no point in you participating,' " Bannerman said. She recalled statements such as, " 'Democrats and Republicans are the same. There's no point in voting.' 'Obama didn't do anything for you during his term, why should you vote for a Democrat this time around?' "

Some U.S. hospitals have been hit by coordinated ransomware attacks designed to infect systems for financial gain, federal agencies and a private-sector cybersecurity company warned on Wednesday.

A joint advisory by the Cybersecurity and Infrastructure Security Agency, the Department of Health and Human Services and the FBI says there is "credible information of an increased and imminent cybercrime threat" to U.S. hospitals and health care providers.

The CEOs of some of the biggest tech platforms defended the way they handle online speech to an audience of skeptical senators, many of whom seemed more interested in scoring political points than in engaging with the thorny debate over content moderation policies and algorithms.

A California appeals court says Uber and Lyft must classify their drivers as employees rather than independent contractors, siding with a lower court that found the ride-hailing companies were likely violating state labor law.
