Over the weekend, a familiar debate broke out over a Facebook policy decision. The company announced that ads made by influencers, on behalf of politicians, would be allowed on the platform so long as they were labeled as ads. The company will not, however, put those ads in its Ads Library, where they can be reviewed by the public. It’s not clear that anyone will review those ads outside of Facebook, as the Federal Election Commission, which regulates political advertising, currently has no policy on influencer marketing. The influencer posts can be fact-checked, unless they contain the speech of a politician, in which case they cannot.
Got all that? Great.
I don’t know why you would build a whole public ads library, require a certain subset of posts to be labeled as ads, and then exempt those ads from your ads library. I also don’t know why you would invite a fresh nine months’ worth of news cycles over unlabeled viral political ads from influencers, false political ads from influencers that are not fact-checked due to the presence of candidate political speech, and so on. The situation would seem to pit the company’s integrity teams against its advertising teams, with the advertising teams winning all the most important battles.
But set all that aside for a moment. Who should be setting all these policies in the first place? Should it be Facebook, or should it be someone else? Someone like, oh, say, the government?
Well, that’s what Facebook says it wants. Mark Zuckerberg said as much during a trip over the weekend to Europe.
“Even if I’m not going to agree with every regulation in the near term, I do think it’s going to be the kind of thing that builds trust and better governance of the internet, and will benefit everyone, including us, over the long term,” Zuckerberg said at the Munich Security Conference on Saturday.
He followed up with an op-ed in the Financial Times on Sunday, asserting that Facebook needs “more oversight and accountability.”
Facebook also released a white paper (PDF) outlining the approach it would like to see regulators take to creating legal standards for content moderation. The approach it would like to see, you may not be surprised to learn, is one that largely follows the avenues Facebook has already taken. That includes: requiring public reporting on policy enforcement actions; reducing the visibility of content that violates standards; and blocking attempts to regulate speech based on the content of that speech. (The paper does not address how countries might regulate political ads, though Zuckerberg’s statement that posts on Facebook ought to be regulated like something in between a telecom company and a newspaper suggests the answer is “very lightly.”)
European regulators, for their part, dismissed Facebook’s white paper so quickly that you wondered if they had even bothered to read it. Here’s Valentina Pop in the Wall Street Journal:
Thierry Breton, the EU commissioner for internal market and services, who met with Mr. Zuckerberg on Monday, told reporters afterward that the Facebook white paper “is too low in terms of responsibility. There are interesting things, but it’s not enough.”
He said the commission will decide by the end of the year what kind of liability to impose on online platforms. “I told him the comparison with telecoms is not relevant. A message [on Facebook] reaches hundreds of millions. On telcos you have one-on-one communications.”
Even if you find Facebook’s suggested regulations self-serving, they do highlight important trade-offs that states will have to make as they consider new laws. Consider, for example, the increasingly popular idea of legally requiring platforms to remove bad posts within 24 hours. Facebook points out, rightly I think, that this creates the wrong incentives:
A requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. […] Companies focused on average speed of assessment would end up prioritizing review of posts unlikely to violate or unlikely to reach many viewers, simply because those posts are closer to the 24-hour deadline, even while other posts are going viral and reaching millions.
Here Facebook’s preferred solution — requiring companies to take down bad posts that hit a certain threshold of virality — strikes me as more likely to create a positive effect.
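The difference between the two incentive schemes is easy to see in miniature. The sketch below is a hypothetical illustration, not anything from Facebook’s paper: it contrasts a first-in-first-out review queue with one that reviews the highest-reach posts first, so a viral violation gets assessed before low-reach posts that merely happen to be nearing a deadline. All function and variable names are invented.

```python
import heapq

def fifo_order(posts):
    """Review posts strictly in the order their reports arrived."""
    return [post_id for post_id, _views in posts]

def reach_first_order(posts):
    """Review the posts reaching the most viewers first. Negating the
    view count turns Python's min-heap into a max-heap by reach."""
    heap = [(-views, post_id) for post_id, views in posts]
    heapq.heapify(heap)
    order = []
    while heap:
        _neg_views, post_id = heapq.heappop(heap)
        order.append(post_id)
    return order

# Three reports, oldest first: 'b' is going viral.
reports = [("a", 120), ("b", 2_500_000), ("c", 40)]
print(fifo_order(reports))         # ['a', 'b', 'c'] — viral post waits
print(reach_first_order(reports))  # ['b', 'a', 'c'] — viral post first
```

Under a strict 24-hour deadline, the first ordering is the rational one for a company graded on average response time; the second is the one you would actually want for harm reduction.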
Everyone who posts on the internet, and lives in the world that the internet creates, has a rooting interest in both platforms and nation states finding a good balance. And even as we watch Facebook struggle to articulate a coherent position on political ads, we see nation states adopting awful regulations that serve only to censor their citizens. Here’s Eileen Yu from over the weekend in ZDNet:
Singapore’s Ministry of Communications and Information (MCI) on Monday instructed Facebook to block access to the States Times Review (STR) page after the latter repeatedly refused to comply with previous directives issued under POFMA. The “disabling” order, outlined under Section 34 of the Act, requires Facebook to disable access for local users. […]
The spokesperson said: “We believe orders like this are disproportionate and contradict the government’s claim that POFMA would not be used as a censorship tool. We’ve repeatedly highlighted this law’s potential for overreach and we’re deeply concerned about the precedent this sets for the stifling of freedom of expression in Singapore.”
Among the stories that had outraged Singapore’s government was … a story about two critics of the government being arrested. (And it’s not just Singapore — see also these brand-new rules for social media in Pakistan.)
It’s easy to root for tech platforms to be regulated. It’s harder to accept that those regulations, when they finally do appear, are so often terrible.
Today in news that could affect public perception of the big tech platforms.
Trending up: Google is in talks with publishers about paying a licensing fee for content in a news product. If it moves forward, it would mark a shift in the search giant’s relationship with news organizations. (Benjamin Mullin / The Wall Street Journal)
Trending sideways: Amazon’s Ring is changing its privacy settings two weeks after a study showed the company shares people’s personal information with Facebook and Google without their consent. The change will let Ring users block the company from sharing most, but not all, of their data.
⭐ Kickstarter employees voted to unionize today, becoming the first prominent American technology company to join a resurgent labor movement. Here’s Kate Conger and Noam Scheiber in The New York Times:
The pro-union vote is significant for the technology industry, where workers have become increasingly activist in recent years over issues as varied as sexual harassment and climate change. Behemoth companies such as Google and Amazon have struggled to get a handle on their employees, who have staged walkouts and demanded that their companies not work with government entities and others.
But large-scale unionization efforts have faltered. Only a group of contractors at a Google office in Pittsburgh unionized last year, and a small group of Instacart workers managed to do so this month. In the past, most unionization drives have been associated with blue-collar workers and lower-paid white-collar workers rather than white-collar tech workers, who are often paid upward of $150,000 a year.
An Israeli court ordered Facebook to unblock the account of an NSO Group employee. The social media giant blocked the account after accusing NSO Group of helping government spies break into the phones of about 1,400 WhatsApp users, including journalists and activists. (Reuters)
Facebook’s former security chief Alex Stamos spoke about the likelihood of state-backed spies working in Silicon Valley on a podcast last week. “I expect that every major US tech company has at least several people that have been turned by at least China, maybe Russia, probably Israel, and a couple other US allies,” he said. Fun! (Dot Dot Dot)
Google redraws the borders on its maps depending on who’s looking. The team in charge of these changes says they often have to alter maps due to political pressure and the whims of executives. (Greg Bensinger / The Washington Post)
An Australian court has ordered Google to identify an anonymous user who gave a negative review to a Melbourne dental surgeon. Dr. Matthew Kabbabe says a reviewer’s comment posted about three months ago urged others to “stay away” from his practice, which damaged his business. (Kim Lyons / The Verge)
Oracle’s top lobbyist in Washington DC, Ken Glueck, is a major force behind the increased government scrutiny of leading technology companies like Google and Amazon. Recently, he’s prodded federal antitrust regulators to investigate whether Google is violating competition laws. And now his boss, Larry Ellison, is throwing a fundraiser for President Trump. (James V. Grimaldi, Brody Mullins and John D. McKinnon / The Wall Street Journal)
Deepfake videos are making their way into Indian electoral campaigns. It’s a sign that the technology, first used extensively in porn, has become a political weapon. (Nilesh Christopher / Vice)
⭐ Microsoft has a new product called ElectionGuard that could help election officials instantly detect hacks. The open-source voting-machine software gets its first real-world test today, when it’s used in a local election in Fulton, Wisconsin. Here’s CNET’s Alfred Ng:
ElectionGuard addresses what has become a crucial concern in US democracy: the integrity of the vote. The software is designed to establish end-to-end verification for voting machines. A voter can check whether his or her vote was counted. If a hacker had managed to alter a vote, it would be immediately obvious because encryption attached to the vote wouldn’t have changed. […]
The local election will provide Microsoft an opportunity to find blind spots in the ElectionGuard system. The question is how many it will find. During ElectionGuard’s first demo at the Aspen Security Forum last July, Microsoft identified some user experience flaws. A big one: Voters were confused as to why two sheets of paper were printing out.
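The “any altered vote would be immediately obvious” claim rests on the tracking-code idea at the heart of end-to-end verifiability. As a toy illustration only (ElectionGuard itself uses homomorphic ElGamal encryption and zero-knowledge proofs, not plain hashes), the sketch below shows the basic property: a recorded ballot yields a short code the voter keeps, and any later change to the record makes the code stop matching. All names here are invented for the example.

```python
import hashlib
import secrets

def record_ballot(choice: str):
    """Record a ballot and return (tracking_code, stored_record)."""
    nonce = secrets.token_hex(8)  # per-ballot randomness
    code = hashlib.sha256(f"{nonce}:{choice}".encode()).hexdigest()[:10]
    stored = {"nonce": nonce, "choice": choice, "code": code}
    return code, stored

def verify_ballot(stored: dict) -> bool:
    """Recompute the code from the stored record; tampering breaks it."""
    expected = hashlib.sha256(
        f"{stored['nonce']}:{stored['choice']}".encode()
    ).hexdigest()[:10]
    return expected == stored["code"]

code, stored = record_ballot("candidate-a")
print(verify_ballot(stored))      # True — unaltered ballot checks out

stored["choice"] = "candidate-b"  # a hacker flips the recorded vote...
print(verify_ballot(stored))      # False — the code no longer matches
```

The real system adds the harder parts — proving the code corresponds to the voter’s actual choice without revealing it, and tallying encrypted ballots — but the detection mechanism is this same mismatch check.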
President Trump’s Twitter style seems to have encouraged other old rich men like Mike Bloomberg, Jay Carney, and Jeff Bezos to loosen up online. Just what the internet needed. (Jonah Engel Bromwich / The New York Times)
Twitter’s global data protection officer, Damien Kieran, is trying to build a privacy-conscious company culture. In an interview, he also said that the company “absolutely” supports strong privacy laws. (David Pierce / Protocol)
Signal is putting its $50 million investment from WhatsApp cofounder Brian Acton to good use, building out features to help it go mainstream. In the last three months, the company has added support for iPad and ephemeral images and video designed to disappear after a single viewing. It’s also announced plans to roll out a new system for group messaging. (Andy Greenberg / Wired)
Google has been slow to embrace AR, unlike Apple and Facebook. Part of this stems from the company’s experience with Google Glass, which bombed with consumers and raised privacy concerns. (Nick Bastone / The Information)
For YouTube’s 15th birthday, CEO Susan Wojcicki wrote a blog post about the company’s goals, which include growing the revenue and audiences of YouTube creators and removing content that violates the company’s policies as quickly as possible. (YouTube)
Facebook internally prototyped a tabbed version of the News Feed for mobile devices. The prototype includes the standard Most Relevant feed, the Most Recent feed of reverse chronological posts that was previously buried as a sidebar bookmark, and an Already Seen feed of posts that people have already viewed. (Josh Constine / TechCrunch)
Instagram is in talks with video producers to increase its funding for shows on IGTV, its platform for longer-form videos. The conversations come as Facebook tests ways for creators to make more money from IGTV, including an advertising product and revenue-sharing program similar to those available on other social media platforms. (Sahil Patel / The Wall Street Journal)
Jalaiah Harmon, a 14-year-old in Atlanta, created one of the biggest dances on TikTok. But when the dance went viral, she didn’t get any credit. Luckily, this article seems to have changed that — Harmon performed the dance this weekend at the NBA All-Star game. (Taylor Lorenz / The New York Times)
The subreddit r/FemaleDatingStrategy offers dating advice to women and rules on how to act. But the advice can quickly turn judgmental and oppressively conservative. (Erin Taylor / The Verge)
The company behind the once-popular live mobile game HQ Trivia was said to be shutting down. Then CEO Rus Yusupov announced he may have found a last-minute buyer. Prior to this news, former host Scott Rogowsky tweeted: “HQ didn’t die of natural causes. It was poisoned with a lethal cocktail of incompetence, arrogance, short-sightedness & sociopathic delusion.” (Kerry Flynn / CNN)
The drunken HQ Trivia finale was apparently pretty wild. The hosts cursed, sprayed champagne, threatened to defecate on the homes of trolls in the chat window and begged for new jobs. Josh Constine at TechCrunch:
“520 people are splitting $5. Send me your Venmo requests and I’ll send you your fraction of a penny.”