
Facebook AKA “Hydra” Thread


Hugo Stiglitz


37 minutes ago, Hornius Emeritus said:

The last paragraphs sum up the problem, the solution, and, in my opinion, the unlikeliness of humanity deciding to save itself.

Quote

I still believe the internet is good for humanity, but that’s despite the social web, not because of it. We must also find ways to repair the aspects of our society and culture that the social web has badly damaged. This will require intellectual independence, respectful debate, and the same rebellious streak that helped establish Enlightenment values centuries ago.

We may not be able to predict the future, but we do know how it is made: through flashes of rare and genuine invention, sustained by people’s time and attention. Right now, too many people are allowing algorithms and tech giants to manipulate them, and reality is slipping from our grasp as a result. This century’s Doomsday Machine is here, and humming along.

It does not have to be this way.

 


I had a conversation with a friend last night who is a low-information voter. I was explaining how Facebook and social media in general work, a la The Social Dilemma. She basically tried to bury her head in the sand and didn't want to think about the implications. I fear there are a lot of people out there like her who, when confronted with uncomfortable and/or serious life scenarios (politics, social media, finances, whatever), would rather just pretend these problems don't exist. This is a big part of why we are where we are, imo.

Not sure where I was going with this, other than, god dammit people, pay attention and help us fight this goddamn fight instead of pretending it's not happening.


2 hours ago, Biff Tannen said:

I had a conversation with a friend last night who is a low-information voter. I was explaining how Facebook and social media in general work, a la The Social Dilemma. She basically tried to bury her head in the sand and didn't want to think about the implications. I fear there are a lot of people out there like her who, when confronted with uncomfortable and/or serious life scenarios (politics, social media, finances, whatever), would rather just pretend these problems don't exist. This is a big part of why we are where we are, imo.

Not sure where I was going with this, other than, god dammit people, pay attention and help us fight this goddamn fight instead of pretending it's not happening.

Kind of ties into comments I made yesterday about people letting themselves be affected/stirred up by things that really don't affect them, while opening themselves up to being manipulated much more easily by individuals/companies playing upon their discomfort (Trump scaring people that the libs are evil atheist socialists, etc.). For all of the talk about voter fraud, it's far cheaper, easier, and legal to influence mass numbers of folks through social media and advertising.

A huge problem for people like your friend is that the bots are already here, and here to stay. Doesn't matter what the platform is. BBSes and Usenet could have kept on chugging along (and hell, Surly itself is basically the 2020 version of a 1993 BBS, and Usenet is technically still around and even has evolved forms such as reddit), and the bots would eventually have been created and written to dial up BBSes, crawl Usenet, and spread their messages.

And the bots attack through email and text message and voice calls as well.

We need to educate people like your friend, and like some of my relatives, to recognize when they are getting played and manipulated, whether it's by bots through email or Facebook, or by people like Trump spouting half-truths on Twitter or Fox News or whatever.


3 hours ago, Biff Tannen said:

I had a conversation with a friend last night who is a low-information voter. I was explaining how Facebook and social media in general work, a la The Social Dilemma. She basically tried to bury her head in the sand and didn't want to think about the implications. I fear there are a lot of people out there like her who, when confronted with uncomfortable and/or serious life scenarios (politics, social media, finances, whatever), would rather just pretend these problems don't exist. This is a big part of why we are where we are, imo.

Not sure where I was going with this, other than, god dammit people, pay attention and help us fight this goddamn fight instead of pretending it's not happening.

I guess that’s one thing you can do with girls who have teh dumbz.


  • 1 month later...
6 minutes ago, Biff Tannen said:

I will save everyone the trouble because it needs to be read. He's right. And no, I don't care that he is financially motivated to say it:

"Technology does not need vast troves of personal data stitched together across dozens of websites and apps in order to succeed. Advertising existed and thrived for decades without it, and we're here today because the path of least resistance is rarely the path of wisdom.

If a business is built on misleading users, on data exploitation, on choices that are no choices at all, then it does not deserve our praise. It deserves reform.

We should not look away from the bigger picture. At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement, the longer the better, and all with the goal of collecting as much data as possible.

Too many are still asking the question, 'How much can we get away with?' when they need to be asking, 'What are the consequences?'

What are the consequences of prioritizing conspiracy theories and violent incitement simply because of the high rates of engagement?

What are the consequences of not just tolerating but rewarding content that undermines public trust in life-saving vaccinations?

What are the consequences of seeing thousands of users joining extremist groups and then perpetuating an algorithm that recommends even more?

It is long past time to stop pretending that this approach doesn't come with a cost: of polarization, of lost trust, and yes, of violence.

A social dilemma cannot be allowed to become a social catastrophe."

Edited by Neonmoon

  • 1 month later...

Posted here as well-

Study: Far-Right Propaganda Gets the Most Engagement on Facebook, Especially When It's Lies

"They found that sources categorized as far-right by Newsguard and Media Bias/Fact Check did very well on Facebook, followed by those classified as far-left, other more moderately partisan sources, and finally those that were “center”-oriented. Those far-right sources tended to receive several hundred more interactions (likes, comments, shares, etc.) per 1,000 followers than other outlets. Far-right pages experienced skyrocketing engagement in early January, before the riot at the Capitol.

"Furthermore, those far-right sources classified as regularly spreading misinformation and conspiracy theories actually did better on engagement (426 interactions per 1,000 followers a week on average) than every other type of source (including far-right pages not classified as sources of misinfo, which got 259 interactions per 1,000 followers a week on average).

"That’s not even the most egregious part of it. While far-right sources were rewarded with higher engagement on Facebook when they spread misinfo or conspiracy theories, the Cybersecurity for Democracy findings show sources classified as “slightly right,” “center,” “slightly left,” or “far left” appeared to be subject to a “misinformation penalty.” Said penalty appeared to be much heavier for sources classified as centrist or left of center."


  • 3 weeks later...

Damn good article.  Bolded portions by me.  

https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

 

Quote

Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

 

Quote

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

 

Quote

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

 

Spoiler

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

 

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

 

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

 

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
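
To make the paragraph above concrete, here is a minimal sketch of that kind of click-prediction model, written with scikit-learn on invented data; the features, numbers, and user segment are hypothetical illustrations, not anything from Facebook's actual pipeline.

```python
# Minimal sketch of a click-prediction model like the one described above.
# The data and features are invented for illustration; Facebook's real
# models and pipelines are far larger and not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: one row per (user, ad) impression.
# Columns: user_age, user_is_female, ad_is_yoga_related
X = rng.integers(0, 2, size=(10_000, 3)).astype(float)
X[:, 0] = rng.integers(18, 65, size=10_000)          # age in years

# Simulated labels: clicks are more likely when a yoga ad is shown to
# a woman aged 25-34 (the fine-grained segment the article mentions).
segment = (X[:, 1] == 1) & (X[:, 2] == 1) & (X[:, 0] >= 25) & (X[:, 0] <= 34)
click_prob = np.where(segment, 0.12, 0.02)
y = rng.random(10_000) < click_prob

model = LogisticRegression().fit(X, y)                # "train" on past impressions

# The trained model can now score a new impression: probability of a click.
new_impression = np.array([[29, 1, 1]])               # 29-year-old woman, yoga ad
print(model.predict_proba(new_impression)[0, 1])
```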

Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.
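
In the same spirit, a toy sketch of the ranking step described here: score each candidate post with an engagement model and sort the feed by that score. The `toy_model` stand-in and post fields are assumptions for illustration only.

```python
# Toy version of engagement-based feed ranking: given a model that predicts
# how likely a user is to like/share a post, show the highest-scoring posts
# first. `toy_model` and the feature dicts are hypothetical placeholders.
from typing import Callable

def rank_feed(candidate_posts: list[dict],
              engagement_model: Callable[[dict], float]) -> list[dict]:
    """Order posts by predicted engagement, highest first."""
    return sorted(candidate_posts, key=engagement_model, reverse=True)

# Stand-in model: pretend posts about dogs score higher for this user.
def toy_model(post: dict) -> float:
    return 0.9 if post.get("topic") == "dogs" else 0.3

feed = rank_feed(
    [{"id": 1, "topic": "news"}, {"id": 2, "topic": "dogs"}],
    toy_model,
)
print([p["id"] for p in feed])   # -> [2, 1]: the dog post is ranked first
```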

Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)

 

In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
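
The L6/7 metric mentioned above is simple to compute from login records; a rough sketch follows, with the login data invented.

```python
# Rough sketch of computing L6/7: the fraction of users who logged in on
# at least 6 of the previous 7 days. The login log used here is invented.
from datetime import date, timedelta

def l6_of_7(login_days_by_user: dict[str, set[date]], today: date) -> float:
    """Fraction of users with logins on >= 6 of the 7 days before `today`."""
    window = {today - timedelta(days=i) for i in range(1, 8)}
    qualifying = sum(
        1 for days in login_days_by_user.values() if len(days & window) >= 6
    )
    return qualifying / len(login_days_by_user) if login_days_by_user else 0.0

logins = {
    "alice": {date(2021, 3, 1) + timedelta(days=i) for i in range(7)},  # every day
    "bob":   {date(2021, 3, 1), date(2021, 3, 3)},                      # twice
}
print(l6_of_7(logins, today=date(2021, 3, 8)))   # -> 0.5
```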

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark's desk.”

 

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
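
A simplified sketch of the gating logic these two paragraphs describe, assuming made-up metric names and an arbitrary 1% threshold: a candidate model ships only if engagement on the test group doesn't fall too far below the control group.

```python
# Simplified sketch of the rollout gate described above: a new ranking model
# is deployed only if engagement on a small test group doesn't fall too far
# below the control group. The metrics and threshold are illustrative.
def should_deploy(control_metrics: dict[str, float],
                  test_metrics: dict[str, float],
                  max_relative_drop: float = 0.01) -> bool:
    """Reject the candidate model if any engagement metric drops more than 1%."""
    for name, control_value in control_metrics.items():
        drop = (control_value - test_metrics[name]) / control_value
        if drop > max_relative_drop:
            return False
    return True

control = {"likes_per_user": 4.2, "comments_per_user": 1.1, "shares_per_user": 0.6}
test    = {"likes_per_user": 4.1, "comments_per_user": 1.1, "shares_per_user": 0.6}

print(should_deploy(control, test))   # -> False: likes dropped ~2.4%, model is discarded
```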

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

 

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

 

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

That former employee, meanwhile, no longer lets his daughter use Facebook.

Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.

 

Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

 

Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there's no boundary between Facebook and me,” he says. “It's extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

“I think if you're at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people's lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.

The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
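
A rough sketch of the shape of that proposal, with a trivial keyword scorer standing in for a real sentiment model: comments judged extreme are hidden by default rather than deleted.

```python
# Rough sketch of the SAIL proposal described above: score comments for how
# extreme they are and hide (not delete) those over a threshold, leaving a
# "show anyway" option. The scoring function is a crude stand-in for a real
# sentiment/stance model.
EXTREME_PHRASES = ("traitor", "destroy them", "enemy of the people")

def extremity_score(comment: str) -> float:
    """Toy stand-in for a trained sentiment model: fraction of trigger phrases hit."""
    text = comment.lower()
    hits = sum(phrase in text for phrase in EXTREME_PHRASES)
    return hits / len(EXTREME_PHRASES)

def render_comment(comment: str, threshold: float = 0.3) -> dict:
    """Hidden by default if the score crosses the threshold; never deleted."""
    hidden = extremity_score(comment) >= threshold
    return {"text": comment, "hidden_by_default": hidden}

print(render_comment("Nice photo of the dog!"))
print(render_comment("Anyone who disagrees is a traitor, destroy them"))
```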

And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

 

On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

 

But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

 

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.

In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

 

A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

“[If] it's an important area, we need to move fast on it, it's not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you'll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

 

Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
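
A bare-bones sketch of the kind of per-group check described here, covering the two definitions named (a minimum accuracy floor versus equal accuracy across groups); the groups, results, and thresholds are invented, since the real tool is internal to Facebook.

```python
# Bare-bones sketch of a per-group accuracy check in the spirit of Fairness
# Flow. Groups, predictions, and thresholds are invented for illustration;
# the real tool is internal to Facebook and far more involved.
from collections import defaultdict

def accuracy_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records: (group, true_label, predicted_label) -> accuracy per group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

def passes_min_threshold(acc: dict[str, float], floor: float = 0.9) -> bool:
    """Definition 1: every group must clear a minimum accuracy floor."""
    return all(a >= floor for a in acc.values())

def passes_equal_accuracy(acc: dict[str, float], tolerance: float = 0.02) -> bool:
    """Definition 2: accuracy must be (nearly) equal across groups."""
    return max(acc.values()) - min(acc.values()) <= tolerance

# Hypothetical speech-recognition results by accent group.
results = ([("accent_a", 1, 1)] * 95 + [("accent_a", 1, 0)] * 5
           + [("accent_b", 1, 1)] * 88 + [("accent_b", 1, 0)] * 12)

acc = accuracy_by_group(results)
print(acc)                          # {'accent_a': 0.95, 'accent_b': 0.88}
print(passes_min_threshold(acc))    # False: accent_b is below the 0.9 floor
print(passes_equal_accuracy(acc))   # False: the 7-point gap exceeds the tolerance
```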

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
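
To make the two readings of "fairness" concrete, here is a small numeric sketch with invented counts: a calibrated detector flags misinformation in proportion to how much each side actually posts, while the "equal impact" reading forces the flag counts to match regardless of base rates.

```python
# Invented numbers illustrating the two fairness readings described above.
# "Calibrated" fairness: flag in proportion to how much misinformation each
# side actually posts. "Equal impact": force the flag counts to be equal.
posts = {"conservative": 1_000, "liberal": 1_000}
misinfo_posts = {"conservative": 200, "liberal": 100}   # hypothetical base rates

# A detector that flags ~90% of the misinformation in each group (calibrated):
flags_calibrated = {g: round(0.9 * misinfo_posts[g]) for g in posts}
print(flags_calibrated)           # {'conservative': 180, 'liberal': 90}

# Under the "equal impact" reading, both groups must be flagged equally,
# so the detector is effectively throttled to the lower of the two counts:
equal_cap = min(flags_calibrated.values())
flags_equalized = {g: equal_cap for g in posts}
print(flags_equalized)            # {'conservative': 90, 'liberal': 90}
# The equalized version leaves half of the flagged conservative misinformation
# untouched, blunting the model in the way the researcher describes.
```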

 

This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

 

I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

 


Brands are actively moving away from FB as the ability to scale profitably is going to zero on the platform.
 

It will be a multitude of factors that hurt Facebook: other effective mediums to advertise on (e.g., Snap, Pinterest, TikTok) and the lack of attribution on FB, where Apple's changes are causing issues.


  • 2 weeks later...

I've said it multiple times, but the reason the Surl is a good source of info is because it's an amalgam of different sources.  When people use only one source for their information, especially one that uses algorithms to decide which info to show you, it's a recipe for dumb shit and creates a bullshit view of the world.  This is mainly on the townspeople for not looking beyond facebook.


  • 1 month later...
4 hours ago, HoustonHorn said:

This is good to see.

https://arstechnica.com/gadgets/2021/05/96-of-us-users-opt-out-of-app-tracking-in-ios-14-5-analytics-find/

Interested in @closetojumping's opinion on this. I seem to remember from the old site he was big into Facebook and data/advertising. Seems like this could be a sea change for targeted advertising.

Ha, first time I've ever been summoned to the CR, I think. I can't bring myself to be interested in engaging much either. I can tell you that companies will just find new ways to target customers and that Apple and Google will continue to track and analyze every single thing the users of their operating systems do, so this is more just new rent-taking by a couple of tech oligarchs at the expense of others in their group. So if you're rooting against FB and in favor of Google and Apple, this is good news. FB will continue to get theirs, though. Y'all's Insta and WhatsApp addictions ensure it.


I've deleted all FB apps from my phone and don't miss them at all. People I want to hear from, I do already. I will download the messenger and whatsapp apps once a month or so to see if I'm missed something someone else thinks is important. Then delete it when I'm done.


18 hours ago, closetojumping said:

Ha, first time I've ever been summoned to the CR, I think. I can't bring myself to be interested in engaging much either. I can tell you that companies will just find new ways to target customers and that Apple and Google will continue to track and analyze every single thing the users of their operating systems do, so this is more just new rent-taking by a couple of tech oligarchs at the expense of others in their group. So if you're rooting against FB and in favor of Google and Apple, this is good news. FB will continue to get theirs, though. Y'all's Insta and WhatsApp addictions ensure it.

No doubt Google is the worst offender. Apple is bad as well but nowhere close to Google. Not rooting for anything other than I'd like to see consumers continue to push back on privacy-related issues. I despise FB and Insta and WhatsApp and TikTok and Twitter and.... I agree with Charlie Strong when it comes to social media. Unfortunately, if 95% of people don't care and use it, then it becomes a necessary evil in some cases, because that is the only way to interact with particular groups. Surly is my primary social media.

Asking because you had good takes on the old site and seemed to be tapped into FB's positioning as it related to marketing and growth opportunities. One of the biggest reasons for that was being able to offer very targeted advertising based on the data they collected. Considering a large majority of people interact with FB through their phone, this could have a significant impact on their ability to sell that data at a premium, since it likely won't be as accurate and targeted. It will also be interesting to see if Google tries to do something similar in Android, since they are after the same data. It will be tough for Google to raise privacy concerns about an app without raising the same issues about their own tracking and sharing of data. The difference being that the tracking is baked into the OS, and Google seems to be considered more "trustworthy" than FB even though they really aren't.

And yes, it feels dirty posting in the CR, but unfortunately that's where the thread is, even though it's not particularly political. I was shocked that, given the choice, 96% of people opted out. I would have guessed under 50%.


  • 1 month later...
1 hour ago, Bama Chick said:

I’m no prosecutor but this shit here should be punishable by firing squad. Especially on July 4th.

Or we can appreciate how impressive it is for an artificial intelligence to pull off something like that. The computational power required to navigate a fairly rapidly moving aquatic device over a body of water such as that, while keeping the battery packs, computers, etc. contained in that humanoid-looking device all balanced, is very impressive.


https://www.nytimes.com/2021/07/08/business/mark-zuckerberg-sheryl-sandberg-facebook.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosam&stream=top

 

Quote

Sheryl Sandberg knew she’d be asked about the attacks on the Capitol.

For the past week, the country had been reeling from the violence in Washington, and with each passing day, reporters were uncovering more of the footprint left behind by the rioters on social media.

Speaking to the cameras rolling in her sun-filled Menlo Park, Calif., garden, Ms. Sandberg confronted this question, one she’d prepared for: Could Facebook have acted sooner to help prevent this?

 

Quote

Ms. Sandberg noted that the company had taken down many pages supporting the Proud Boys, a far-right militia, and “Stop the Steal” groups organized around the false claim that President Donald J. Trump had won the 2020 election. Enforcement was never perfect, she added, so some inflammatory posts remained up. But, she added, the blame primarily lay elsewhere.

“I think these events were largely organized on platforms that don’t have our abilities to stop hate, don’t have our standards, and don’t have our transparency,” she said.

 

Quote

That comment was picked up by news outlets across the world. Outraged members of Congress and researchers who studied right-wing groups accused Facebook of abdicating responsibility.

Those within Ms. Sandberg’s inner circle told her what she wanted to hear: Her words were being taken out of context, journalists were unfairly piling on, it wasn’t her fault.

But in other parts of the company, executives whispered to each other that Ms. Sandberg had, once again, slipped up. She was deflecting blame cast on her, or Facebook, they said.

 

Spoiler

Days later, indictments began to roll in for the rioters who had taken part in the attacks.

In one indictment, lawyers revealed how, in the weeks leading up to the Jan. 6 attacks, Thomas Caldwell and members of his militia group, the Oath Keepers, had openly discussed over Facebook the hotel rooms, airfare and other logistics around their trip to Washington.

On the day itself, people freely celebrated with posts on Facebook and Instagram. Minutes after Mr. Trump ended his speech with a call to his supporters to “Walk down Pennsylvania Avenue” toward the Capitol building, where hundreds of members of Congress sat, people within the crowd used their phones to livestream clashes with police and the storming of the barricades outside the building. Many, including Mr. Caldwell, were getting messages on Facebook Messenger from allies watching their advance from afar.

 

“All members are in the tunnel under” the Capitol read the message Mr. Caldwell received as he neared the building. Referring to members of Congress, the message added, “Seal them in. Turn on Gas.”

Moments later, Mr. Caldwell posted a quick update on Facebook that read, “Inside.”

The indictments made it clear just how large a part Facebook had played, both in spreading misinformation about election fraud to fuel anger among the Jan. 6 protesters, and in aiding the extremist militia’s communication ahead of the riots. For months, Facebook would be a footnote to a day that challenged the heart of American democracy. And Ms. Sandberg’s words attempting to place the blame elsewhere would continue to haunt her.

In the years since Mr. Trump won the 2016 election, Facebook has struggled with the role it played in his rise and in the growth of populist leaders across the world. The same tools that allowed Facebook’s business to more than double during those years — such as the News Feed that prioritized engagement and the Facebook groups that pushed like-minded people together — had been used to spread misinformation.

To achieve its record-setting growth, the company had continued building on its core technology, making business decisions based on how many hours of the day people spent on Facebook and how many times a day they returned. Facebook’s algorithms didn’t measure if the magnetic force pulling them back to Facebook was the habit of wishing a friend happy birthday, or a rabbit hole of conspiracies and misinformation.

Facebook’s problems were features, not bugs, and were the natural outgrowth of a 13-year partnership between Mark Zuckerberg, Facebook’s chief executive and one of its founders, and his erudite business partner, Ms. Sandberg, its chief operating officer. He was the technology visionary and she understood how to generate revenue from the attention of Facebook’s now 2.8 billion users. They worked in concert to create the world’s biggest exchange of ideas and communication.

This account, adapted from a forthcoming book on Facebook, is drawn from more than 400 interviews, including those with former and current employees of all levels of the company. The interviews paint a portrait of the Trump presidency as a trying period for the company and for its top leaders. The Trump era tested a central relationship at Facebook — between Ms. Sandberg and Mr. Zuckerberg — and she became increasingly isolated. Her role as the C.E.O.’s second-in-command was less certain with his elevation of several other executives, and with her diminishing influence in Washington.

The view from inside the upper echelons of the company was clear: It felt as though Facebook was no longer led by a No. 1 and No. 2, but a No. 1 and many.

The pair continued their twice-weekly meetings, but Mr. Zuckerberg took over more of the areas once under her purview. He made the final call on issues surrounding Mr. Trump’s spread of hate speech and dangerous misinformation, decisions Ms. Sandberg often lobbied against or told allies she felt uncomfortable with. Mr. Zuckerberg oversaw efforts in Washington to fend off regulations and had forged a friendly relationship with Mr. Trump. Ms. Sandberg surrounded herself with a “kitchen cabinet” of outside political advisers and a team of public relations officials who were often at odds with others in the company.

A spokeswoman for Facebook dismissed this characterization.

“The fault lines that the authors depict between Mark and Sheryl and the people who work with them do not exist,” said Dani Lever, the spokeswoman. “All of Mark’s direct reports work closely with Sheryl and hers with Mark. Sheryl’s role at the company has not changed.”

It is true that the core of the partnership hasn’t formally changed. Mr. Zuckerberg controls the direction of the company and Ms. Sandberg the ad business, which continues to soar unabated.

Both executives declined to comment for this story, perhaps letting the company’s performance speak for itself.

Facebook’s market valuation is now over $1 trillion.

The Beginning of an Unusual Pairing

A Christmas party is not an ideal place to avoid small talk, but Mr. Zuckerberg had arrived at the holiday gathering determined to try. It was December 2007, and Facebook was still a private company with just several hundred employees. Despite his aversion to party chat, he allowed himself to be introduced to Sheryl Sandberg.

From the moment they met, both have said, they sensed the potential to transform the company into the global power it is today.

As guests milled around them, he described his goal of turning every person in the country with an internet connection into a Facebook user. It might have sounded like a fantasy to others, but Ms. Sandberg was intrigued and threw out ideas about what it would take to build a business to keep up with that kind of growth. “It was actually smart. It was substantive,” Mr. Zuckerberg later recalled. Ms. Sandberg would go on to tell Dan Rose, a former vice president at Facebook, that she felt she had been “put on this planet to scale organizations.”

After the Christmas party, Mr. Zuckerberg and Ms. Sandberg continued their conversations over late dinners at Ms. Sandberg’s favorite neighborhood restaurant, Flea Street, and her pristine Atherton home. (Mr. Zuckerberg still lived in a Palo Alto apartment with only a futon on the floor.) Ms. Sandberg walked Mr. Zuckerberg through how she had helped expand Google’s ad business, turning search queries into data that gave advertisers rich insights about users, contributing to the company’s spectacular cash flow.

In some ways, they were opposites. Ms. Sandberg was a master manager and delegator. Her calendar at Google was scheduled to the minute. Meetings rarely ran long and typically culminated in action items. At 38, she was 15 years older than Mr. Zuckerberg, was in bed by 9:30 p.m. and up every morning by 6 for a hard cardio workout. He was a night owl, coding way past midnight and up in time to straggle into the office late in the morning. Mr. Rose recalled being pulled into meetings at 11 p.m., the middle of Mr. Zuckerberg’s workday.

Mr. Zuckerberg recognized that Ms. Sandberg excelled at, even enjoyed, all the parts of running a company that he found unfulfilling. And she would bring to Facebook an asset that her new boss knew he needed: experience in Washington, D.C. Mr. Zuckerberg wasn’t interested in politics and didn’t keep up with the news. The year before, while Mr. Zuckerberg was visiting Donald Graham, then the chairman of The Washington Post, a reporter handed the young C.E.O. a book on politics that the reporter had written. Mr. Zuckerberg said to Mr. Graham, “I’m never going to have time to read this.”

“I teased him because there were very few things where you’ll find unanimity about, and one of those things is that reading books is a good way to learn. There is no dissent on that point,” Mr. Graham said. “Mark eventually came to agree with me on that, and like everything he did, he picked it up very quickly and became a tremendous reader.”

In the lead-up to his talks with Ms. Sandberg, Mr. Zuckerberg experienced a brush with controversy that stoked concerns about potential regulations. Government officials were beginning to question if free platforms like Facebook were harming users with the data they collected. In December 2007, the Federal Trade Commission issued self-regulatory principles for behavioral advertising to protect data privacy. Mr. Zuckerberg needed help navigating Washington.

“Mark understood that some of the biggest challenges Facebook was going to face in the future were going to revolve around issues of privacy and regulatory concerns,” Mr. Rose said. Ms. Sandberg, he noted, “obviously had deep experience there, and this was very important to Mark.”

To Ms. Sandberg, the move to Facebook, a company led by an awkward 23-year-old college dropout, wasn’t as counterintuitive as it might have appeared. She was a vice president at Google, but she had hit a ceiling: There were several vice presidents at her level, and they were all competing for promotions. Eric Schmidt, then the chief executive, wasn’t looking for a No. 2. Men who weren’t performing as well as she was were getting recognized and receiving higher titles, former Google colleagues maintained.

“Despite leading a bigger, more profitable, faster-growing business than the men who were her peers, she was not given the title president, but they were,” recalled Kim Scott, a leader in the ad sales division. Ms. Sandberg was looking for something new. She said yes to Facebook.

Mr. Zuckerberg brought in Ms. Sandberg to deal with growing unease about the company in Washington. She professionalized the ragtag office there, which had been opened by a recent college graduate whose primary job was to help lawmakers set up their Facebook accounts. She represented Facebook as a member of President Barack Obama’s Council on Jobs and Competitiveness, along with other executives and labor union leaders. After one meeting of the council, she accompanied Mr. Obama on Air Force One to Facebook’s headquarters, where the president held a public town hall to discuss the economy. But soon, there were cracks in the facade.

In October 2010, she met with the F.T.C. chairman, Jonathan Leibowitz, to try to quell a privacy investigation. In his office, a relaxed and confident Ms. Sandberg began the meeting with a claim that Facebook had given users more control over their data than any other internet company and that the company’s biggest regret was not communicating clearly how its privacy policy worked.

The F.T.C. officials immediately challenged her, according to people who attended the meeting. Mr. Leibowitz noted that, on a personal level, he had watched his middle-school-age daughter struggle with the privacy settings on Facebook, which had automatically made it easier for strangers to find users like her. “I’m seeing it at home,” he said.

“That’s so great,” Ms. Sandberg responded. She went on to describe the social network as “empowering” for young users. Mr. Leibowitz hadn’t meant it as good news — and emphasized to her that the F.T.C. was deeply concerned about privacy.

Ms. Lever, the Facebook spokeswoman, described the meeting as “substantive,” with a detailed explanation of the company’s privacy policies. She added that the characterization of tension in the room “misrepresents what actually happened.”

But to the people who were there, Ms. Sandberg seemed to be hearing only what she wanted to hear.

An Oval Office Offering

 

The executives made their way through the lobby of Trump Tower, past reporters shouting questions they ignored, into the gold elevators and up to meet with the president-elect.

“Everybody in this room has to like me,” President-elect Trump said to the group he had gathered there in December 2016. It included Ms. Sandberg and the chief executives of Apple, Amazon, Google and Microsoft.

But Ms. Sandberg had made her preferences very clear: She did not like him. In fact, she was still in shock and mourning for Hillary Clinton’s defeat. She was a reliable and prominent Democratic bundler. She had served as chief of staff to Treasury Secretary Lawrence Summers during the Clinton administration and her name had been floated for Treasury secretary in a potential Hillary Clinton administration. Now she was waylaid from her path back into politics, after eight years of stratospheric success as a feminist icon and business leader.

Moreover, her Democratic connections were of limited use in the newly elected administration. She called on Joel Kaplan, the company’s top Republican and vice president of global policy, whom she hired in May 2011. (Mr. Kaplan, who accompanied Ms. Sandberg to Trump Tower, stayed one day longer to interview with the Trump transition team for the position of director of the Office of Management and Budget. Facebook said he withdrew his candidacy before the meeting, but took the interview anyway.)

Mr. Kaplan, a former deputy chief of staff for President George W. Bush, had warned Ms. Sandberg and Mr. Zuckerberg that they had to repair relations with Republicans who resented their support for Democrats. Ms. Sandberg attended the Trump Tower meeting, seated two chairs to the right of the president-elect and between Vice President Mike Pence and Larry Page, one of Google’s founders, but barely spoke. The president-elect, who had sparred with many of the companies whose leaders he now addressed, and who would go on to complicate Facebook’s policies on speech in ways company leaders did not yet comprehend, appeared to be in good spirits that day.

“You’ll call my people, you’ll call me. It doesn’t make any difference,” Mr. Trump said. “We have no formal chain of command over here.”

Facebook did call him. But it was Mr. Zuckerberg who became the emissary to Washington.

In the months and years after the 2016 election, Facebook confronted a number of challenges connected to the Trump presidency. The company investigated and dealt with fallout from the scope of Russian interference with the election on its platform.

Ms. Lever, the Facebook spokeswoman, noted that it was natural for Mr. Zuckerberg to take on a larger role in dealing with speech and misinformation. Other tech leaders were also increasingly engaged on those issues. “These areas demanded more time, attention and focus, which both Mark and Sheryl have given them,” she said.

At the same time, Mr. Zuckerberg and Ms. Sandberg continued to drift further apart. He was critical of her handling of public relations related to election interference and another scandal in March 2018, when it was revealed that Cambridge Analytica, a political consulting firm working for Mr. Trump, had used data harvested from Facebook users to target voters. Both were breaches that technically stemmed from his side of the business — products — but she was in charge of dealing with the public’s anger over the episodes. One of her primary roles had been to charm Washington on Facebook’s behalf, and protect and burnish its image. Neither project was going particularly well.

On the afternoon of Sept. 19, 2019, Mr. Zuckerberg slipped into the Oval Office for a meeting unrecorded in public schedules for the president.

Mr. Trump leaned forward, resting his elbows on the ornately carved 19th-century Resolute desk. As he boasted about the performance of the economy under his administration, a jumbo glass of Diet Coke collected condensation on a coaster in front of him. Mr. Zuckerberg sat on the other side of the desk, in a straight-back wooden chair wedged between Mr. Kaplan and Jared Kushner, Mr. Trump’s son-in-law and senior adviser. Dan Scavino, the president’s director of social media, sat at the end of the row.

Mr. Zuckerberg had come with a gift.

He told Mr. Trump that a team had run the numbers using proprietary internal data, and the president had the highest engagement of any politician on Facebook, according to people familiar with the discussion. Mr. Trump’s personal account, with 28 million followers at that time, was a blowout success. The former reality show star was visibly pleased.

Later in the day, Mr. Trump disclosed the meeting on Facebook and Twitter, posting a photo of the two men shaking hands, a wide smile on the C.E.O.’s face. “Nice meeting with Mark Zuckerberg of @Facebook in the Oval Office today,” read the caption.

Mr. Zuckerberg’s introduction to Mr. Trump’s White House had come through Mr. Kaplan and Peter Thiel, an early investor in Facebook and the tech industry’s most vocal supporter of the president. Mr. Zuckerberg had first gotten to know Mr. Kushner, who graduated from Harvard the year Mr. Zuckerberg began.

Before his Oval Office meeting, Mr. Zuckerberg scheduled an appointment with Mr. Kushner, who had led digital media strategy for the Trump campaign. He wanted to deliver a compliment about the campaign, and told Mr. Kushner: “You were very good on Facebook.”

Not Sandberg’s Washington Anymore

 

Ms. Sandberg greeted House Speaker Nancy Pelosi with a smile. The speaker responded coolly, but she did invite Ms. Sandberg to join her on the couches in the guest seating area.

It was May 8, 2019, and the appointment with the speaker capped two days of difficult meetings with lawmakers about efforts to prevent disinformation during the 2020 elections.

It was a trying period for Ms. Sandberg. Her work responsibilities were crushing: Friends said she was feeling tremendous pressure, and some guilt, for the cascade of scandals confronting the company.

The tense mood in the speaker’s office was in stark contrast to the one during a visit Ms. Sandberg made to Ms. Pelosi in July 2015. They took a photo together, with both women smiling, and later Ms. Pelosi posted it to Facebook, heaping praise on Ms. Sandberg’s advocacy for women in the work force.

Now, four years later, Ms. Sandberg sought to regain some of that favor as she described efforts to take down fake foreign accounts, the hiring of thousands of content moderators and the use of artificial intelligence and other technologies to quickly track and take down disinformation. She assured Ms. Pelosi that Facebook would not fight regulations. She pointed to Mr. Zuckerberg’s opinion essay in The Washington Post in April, which called for privacy rules, laws requiring financial disclosures in online election ads, and rules that enabled Facebook users to take their data off the social network and use it on rival sites.

The two talked for nearly an hour. Ms. Sandberg admitted that Facebook had problems, and the company appeared to be at least trying to fix them. Ms. Pelosi was still on guard, but the efforts appeared to be a step forward.

Finally. They seem to be getting it, Ms. Pelosi said.

Two weeks later, a video featuring the speaker was widely shared on Facebook. Someone had manipulated the video, making it seem as if Ms. Pelosi was slurring her words.

On a Facebook page called Politics Watchdog, the video attracted two million views and was shared tens of thousands of times. From there, it was shared to hundreds of private Facebook groups, many of them highly partisan pages. Within 24 hours, Mr. Trump’s personal lawyer and a former mayor of New York City, Rudy Giuliani, had tweeted the link, along with the message, “What’s wrong with Nancy Pelosi? Her speech pattern is bizarre.”

The private Facebook groups Mr. Zuckerberg had championed two months earlier as part of a pivot to privacy were the ones now spreading the video. Within the confines of the small groups, Facebook users not only joked with one another about how to edit the video but also shared tips on how to ensure that it would reach the maximum number of people. YouTube quickly took down the video, but Facebook was where it was getting significant traction.

The speaker’s staff was livid. Her office had particularly strong ties to Facebook. Catlin O’Neill, Ms. Pelosi’s former chief of staff, was one of Facebook’s most senior Democratic lobbyists.

Inside Facebook, executives were ignoring the Pelosi staff’s calls because they were trying to formulate a response. The fact checkers and the A.I. hadn’t flagged the video for false content or prevented its spread. It was easy to fool Facebook’s filters and detection tools with simple workarounds, it turned out.

But the doctored video of Ms. Pelosi revealed more than the failings of Facebook’s technology to stop the spread of misleading viral videos. It exposed the internal confusion and disagreement over the issue of highly partisan political content.

Executives, lobbyists, and communications staff spent the next day in a slow-motion debate. Ms. Sandberg said she thought there was a good argument to take the video down under rules against disinformation, but she left it at that. Mr. Kaplan and members of the policy team said it was important to appear neutral to politics and to be consistent with the company’s promise of free speech.

Ms. Sandberg would have been the senior woman in those discussions, as she was in any discussion at the company, and probably one of few women involved in the decision making at all. After their 2015 visit, Ms. Pelosi had expressed admiration for Ms. Sandberg’s work on behalf of women, and both knew well the additional scrutiny and attacks that female leaders can face. That the video existed at all and had spread so widely, often with gendered commentary, was also a testament to that.

The conversations became tortured exercises in “what-if” arguments. Mr. Zuckerberg and other members of the policy team pondered if the video could be defined as parody. If so, it could be an important contribution to political debate. Some communications employees noted that the same kind of spoof of Ms. Pelosi could have appeared on the television show “Saturday Night Live.” Others on the security team pushed back and said viewers clearly knew that “S.N.L.” was a comedy show and that the video of Ms. Pelosi was not watermarked as a parody.

Employees involved in the discussions were frustrated, but they emphasized that a policy for just one video would also affect billions of others, so the decision could not be rushed.

“It’s easy to criticize the process, but there isn’t a playbook for making policy decisions that make everyone happy, particularly when attempting to apply standards consistently,” Ms. Lever, the spokeswoman, said this week.

On Friday, 48 hours after the video surfaced, Mr. Zuckerberg made the final call. He said to keep it up.

Ms. Sandberg did not try to explain, or justify, the decision to Ms. Pelosi’s staff.

Later that year, Mr. Zuckerberg had a chance to publicly elaborate on the thinking behind that decision and others like it. On Oct. 17, he appeared at Georgetown University’s campus in Washington to deliver his first major public address on Facebook’s responsibility as a platform for speech.

 

He described Facebook as part of a new force that he called “the fifth estate,” which provided an unfiltered and unedited voice to its 2.7 billion users. He warned against shutting down dissenting views. The cacophony of voices would, of course, be discomfiting, but debate was essential to a healthy democracy. The public would act as the fact checkers of a politician’s lies. It wasn’t the role of a business to make such consequential governance decisions, he said.

Ms. Lever added recently that the company did not want to act unilaterally to make these choices and would welcome regulations from legislators.

Immediately after the Georgetown address, civil rights leaders, academics, journalists and consumer groups panned the speech, saying political lies had the potential to foment violence.

An aide to Ms. Sandberg fired off a series of angry emails about the Georgetown speech to her. She wrote back that he should forward the emails to Nick Clegg, a former British deputy prime minister who had become Facebook’s vice president of global affairs and communications, and others who might influence Mr. Zuckerberg’s thinking. Her inaction infuriated colleagues and some of her lieutenants — his decisions, after all, were in direct contradiction to the core values she promoted in public. There was little she could do to change Mr. Zuckerberg’s mind, Ms. Sandberg confided to those close to her.

Just a few days after Mr. Zuckerberg told the subdued Georgetown crowd that Facebook would not curtail political speech, Ms. Sandberg appeared at Vanity Fair’s New Establishment Summit in Los Angeles, and sat for an interview with Katie Couric. The two women had once bonded over their shared experience of being widowed young; Ms. Couric had lost her husband to colon cancer at age 42. Ms. Sandberg’s husband, Dave Goldberg, died in May 2015, and Ms. Couric had supported Ms. Sandberg’s 2017 book about coping with that loss, “Option B,” with interviews at public events.

But during their nearly hourlong conversation, Ms. Couric grilled Ms. Sandberg about bullying on Instagram and Facebook. She pushed her to defend Facebook against calls to break up the company and asked skeptically if the promised privacy reforms would be effective.

Several times, Ms. Sandberg conceded that the issues were difficult and that Facebook felt responsible, but she stopped short of saying that the company would take the type of decisive action demanded by civil liberty groups and academics.

Toward the end of the conversation, Ms. Couric posed the question that few were bold enough to ask Ms. Sandberg directly: “Since you are so associated with Facebook, how worried are you about your personal legacy as a result of your association with this company?” Ms. Sandberg didn’t skip a beat as she reverted to the message she had delivered from her first days at Facebook.

“I really believe in what I said about people having voice. There are a lot of problems to fix. They are real, and I have a real responsibility to do it. I feel honored to do it,” she said, with a steady voice and calm smile. She later told aides that inside, she was burning with humiliation.

Good for the World or Facebook?

Ms. Sandberg and Mr. Zuckerberg still meet at the start and end of each week, signaling to the company, and to the outside world, that they remain in lock step. Friends and Facebook executives speak to their personal closeness.

When they first met, Mr. Zuckerberg realized that Ms. Sandberg could excel at the parts of the C.E.O. job that he found boring. In the 13 years they’ve been working together, Mr. Zuckerberg now understands that he cannot outsource some of those duties.

At least not to another person. He is concerned about the company’s position in the world, but he generally is less swayed by Ms. Sandberg’s view, or anyone else’s.

Instead, he relies on two internal metrics, known internally as GFW, Good-for-the-World, and CAU, Cares-about-users. Facebook constantly polls its own users on whether they saw Facebook as one or both of those things.

Both the numbers plummeted, and remained low, after the revelations about Russian election interference and data harvesting by Cambridge Analytica. For years, they failed to rise, no matter how many promises Facebook made to do better and how many new security programs the company started. Mr. Zuckerberg, who received the numbers weekly, told aides that eventually the tide would turn and people would start to see Facebook differently.

Privately, executives told each other there were other numbers that mattered more.

On Jan. 27, 2021, just weeks after the riots in Washington, Mr. Zuckerberg and Ms. Sandberg joined an earnings call with investment analysts.

In yet another about-face decision on speech, Mr. Zuckerberg announced that Facebook was planning to de-emphasize political content in the News Feed because, he said, “people don’t want politics and fighting to take over their experience on our service.”

He was still making calls on the biggest policy decisions. The announcement was also a tacit acknowledgment of Facebook’s yearslong failure to control hazardous rhetoric running roughshod on the social network, particularly during the election. “We’re going to continue to focus on helping millions of more people participate in healthy communities,” he added.

Then Ms. Sandberg shifted the focus to earnings. “This was a strong quarter for our business,” she said. Revenue for the fourth quarter was up 33 percent, to $28 billion, “the fastest growth rate in over two years.”

 

  • Hook 'Em 1
Link to comment
Share on other sites

https://www.nytimes.com/2021/07/14/technology/facebook-data.html

 

Quote

One day in April, the people behind CrowdTangle, a data analytics tool owned by Facebook, learned that transparency had limits.

Brandon Silverman, CrowdTangle’s co-founder and chief executive, assembled dozens of employees on a video call to tell them that they were being broken up. CrowdTangle, which had been running quasi-independently inside Facebook since being acquired in 2016, was being moved under the social network’s integrity team, the group trying to rid the platform of misinformation and hate speech. Some CrowdTangle employees were being reassigned to other divisions, and Mr. Silverman would no longer be managing the team day to day.

 

Quote

The announcement, which left CrowdTangle’s employees in stunned silence, was the result of a yearlong battle among Facebook executives over data transparency, and how much the social network should reveal about its inner workings.

On one side were executives, including Mr. Silverman and Brian Boland, a Facebook vice president in charge of partnerships strategy, who argued that Facebook should publicly share as much information as possible about what happens on its platform — good, bad or ugly.

 

Quote

On the other side were executives, including the company’s chief marketing officer and vice president of analytics, Alex Schultz, who worried that Facebook was already giving away too much.

They argued that journalists and researchers were using CrowdTangle, a kind of turbocharged search engine that allows users to analyze Facebook trends and measure post performance, to dig up information they considered unhelpful — showing, for example, that right-wing commentators like Ben Shapiro and Dan Bongino were getting much more engagement on their Facebook pages than mainstream news outlets.

 

Spoiler

These executives argued that Facebook should selectively disclose its own data in the form of carefully curated reports, rather than handing outsiders the tools to discover it themselves.

Team Selective Disclosure won, and CrowdTangle and its supporters lost.

An internal battle over data transparency might seem low on the list of worthy Facebook investigations. And it’s a column I’ve hesitated to write for months, in part because I’m uncomfortably close to the action. (More on that in a minute.)

But the CrowdTangle story is important, because it illustrates the way that Facebook’s obsession with managing its reputation often gets in the way of its attempts to clean up its platform. And it gets to the heart of one of the central tensions confronting Facebook in the post-Trump era. The company, blamed for everything from election interference to vaccine hesitancy, badly wants to rebuild trust with a skeptical public. But the more it shares about what happens on its platform, the more it risks exposing uncomfortable truths that could further damage its image. 

The question of what to do about CrowdTangle has vexed some of Facebook’s top executives for months, according to interviews with more than a dozen current and former Facebook employees, as well as internal emails and posts.

These people, most of whom would speak only anonymously because they were not authorized to discuss internal conversations, said Facebook’s executives were more worried about fixing the perception that Facebook was amplifying harmful content than figuring out whether it actually was amplifying harmful content. Transparency, they said, ultimately took a back seat to image management.

Facebook disputes this characterization. It says that the CrowdTangle reorganization was meant to integrate the service with its other transparency tools, not weaken it, and that top executives are still committed to increasing transparency.

“CrowdTangle is part of a growing suite of transparency resources we’ve made available for people, including academics and journalists,” said Joe Osborne, a Facebook spokesman. “With CrowdTangle moving into our integrity team, we’re developing a more comprehensive strategy for how we build on some of these transparency efforts moving forward.”

But the executives who pushed hardest for transparency appear to have been sidelined. Mr. Silverman, CrowdTangle’s co-founder and chief executive, has been taking time off and no longer has a clearly defined role at the company, several people with knowledge of the situation said. (Mr. Silverman declined to comment about his status.) And Mr. Boland, who spent 11 years at Facebook, left the company in November.

“One of the main reasons that I left Facebook is that the most senior leadership in the company does not want to invest in understanding the impact of its core products,” Mr. Boland said, in his first interview since departing. “And it doesn’t want to make the data available for others to do the hard work and hold them accountable.”

Mr. Boland, who oversaw CrowdTangle as well as other Facebook transparency efforts, said the tool fell out of favor with influential Facebook executives around the time of last year’s presidential election, when journalists and researchers used it to show that pro-Trump commentators were spreading misinformation and hyperpartisan commentary with stunning success.

The Twitter Account That Launched 1,000 Meetings

Here’s where I, somewhat reluctantly, come in.

I started using CrowdTangle a few years ago. I’d been looking for a way to see which news stories gained the most traction on Facebook, and CrowdTangle — a tool used mainly by audience teams at news publishers and marketers who want to track the performance of their posts — filled the bill. I figured out that through a kludgey workaround, I could use its search feature to rank Facebook link posts — that is, posts that include a link to a non-Facebook site — in order of the number of reactions, shares and comments they got. Link posts weren’t a perfect proxy for news, engagement wasn’t a perfect proxy for popularity and CrowdTangle’s data was limited in other ways, but it was the closest I’d come to finding a kind of cross-Facebook news leaderboard, so I ran with it.
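
As a rough illustration of that workaround, here is a minimal sketch of the ranking step in Python. It assumes a CSV export of public link posts with hypothetical column names (page, link, reactions, shares, comments); this is not CrowdTangle's actual schema or API, just the "sort by total interactions" idea.

import csv

def top_link_posts(path, n=10):
    # Load the hypothetical export and total up the public interaction counts.
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["interactions"] = (int(row["reactions"])
                               + int(row["shares"])
                               + int(row["comments"]))
    # Engagement leaderboard: highest total interactions first.
    rows.sort(key=lambda r: r["interactions"], reverse=True)
    return rows[:n]

if __name__ == "__main__":
    for i, post in enumerate(top_link_posts("link_posts.csv"), 1):
        print(f"{i:2d}. {post['page']} - {post['interactions']:,} interactions")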

At first, Facebook was happy that I and other journalists were finding its tool useful. With only about 25,000 users, CrowdTangle is one of Facebook’s smallest products, but it has become a valuable resource for power users including global health organizations, election officials and digital marketers, and it has made Facebook look transparent compared with rival platforms like YouTube and TikTok, which don’t release nearly as much data.

But the mood shifted last year when I started a Twitter account called @FacebooksTop10, on which I posted a daily leaderboard showing the sources of the most-engaged link posts by U.S. pages, based on CrowdTangle data.

Last fall, the leaderboard was full of posts by Mr. Trump and pro-Trump media personalities. Since Mr. Trump was barred from Facebook in January, it has been dominated by a handful of right-wing polemicists like Mr. Shapiro, Mr. Bongino and Sean Hannity, with the occasional mainstream news article, cute animal story or K-pop fan blog sprinkled in.

The account went semi-viral, racking up more than 35,000 followers. Thousands of people retweeted the lists, including conservatives who were happy to see pro-Trump pundits beating the mainstream media and liberals who shared them with jokes like “Look at all this conservative censorship!” (If you’ve been under a rock for the past two years, conservatives in the United States frequently complain that Facebook is censoring them.)

The lists also attracted plenty of Facebook haters. Liberals shared them as evidence that the company was a swamp of toxicity that needed to be broken up; progressive advertisers bristled at the idea that their content was appearing next to pro-Trump propaganda. The account was even cited at a congressional hearing on tech and antitrust by Representative Jamie Raskin, Democrat of Maryland, who said it proved that “if Facebook is out there trying to suppress conservative speech, they’re doing a terrible job at it.”

Inside Facebook, the account drove executives crazy. Some believed that the data was being misconstrued and worried that it was painting Facebook as a far-right echo chamber. Others worried that the lists might spook investors by suggesting that Facebook’s U.S. user base was getting older and more conservative. Every time a tweet went viral, I got grumpy calls from Facebook executives who were embarrassed by the disparity between what they thought Facebook was — a clean, well-lit public square where civility and tolerance reign — and the image they saw reflected in the Twitter lists.

 

As the election approached last year, Facebook executives held meetings to figure out what to do, according to three people who attended them. They set out to determine whether the information on @FacebooksTop10 was accurate (it was), and discussed starting a competing Twitter account that would post more balanced lists based on Facebook’s internal data.

They never did that, but several executives — including John Hegeman, the head of Facebook’s news feed — were dispatched to argue with me on Twitter. These executives argued that my Top 10 lists were misleading. They said CrowdTangle measured only “engagement,” while the true measure of Facebook popularity would be based on “reach,” or the number of people who actually see a given post. (With the exception of video views, reach data isn’t public, and only Facebook employees and page owners have access to it.)
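
As a rough illustration of why the two measures can tell very different stories, here is a toy comparison. The records below are hypothetical placeholders, not real Facebook numbers, since reach data isn't public.

# Hypothetical records: "engagement" is the sum of public interactions,
# "reach" is the (non-public) count of people who actually saw the post.
posts = [
    {"page": "Partisan Commentary A", "engagement": 120_000, "reach": 900_000},
    {"page": "Mainstream Outlet B",   "engagement": 30_000,  "reach": 4_000_000},
    {"page": "Meme Page C",           "engagement": 80_000,  "reach": 1_500_000},
]

by_engagement = sorted(posts, key=lambda p: p["engagement"], reverse=True)
by_reach = sorted(posts, key=lambda p: p["reach"], reverse=True)

# A page that tops the engagement list can still trail badly on reach, and vice versa.
print("By engagement:", [p["page"] for p in by_engagement])
print("By reach:     ", [p["page"] for p in by_reach])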

 

Last September, Mark Zuckerberg, Facebook’s chief executive, told Axios that while right-wing content garnered a lot of engagement, the idea that Facebook was a right-wing echo chamber was “just wrong.”

“I think it’s important to differentiate that from, broadly, what people are seeing and reading and learning about on our service,” Mr. Zuckerberg said.

But Mr. Boland, the former Facebook vice president, said that was a convenient deflection. He said that in internal discussions, Facebook executives were less concerned about the accuracy of the data than about the image of Facebook it presented.

“It told a story they didn’t like,” he said of the Twitter account, “and frankly didn’t want to admit was true.”

The Trouble With CrowdTangle

Around the same time that Mr. Zuckerberg made his comments to Axios, the tensions came to a head. The Economist had just published an article claiming that Facebook “offers a distorted view of American news.”

The article, which cited CrowdTangle data, showed that the most-engaged American news sites on Facebook were Fox News and Breitbart, and claimed that Facebook’s overall news ecosystem skewed right wing. John Pinette, Facebook’s vice president of global communications, emailed a link to the article to a group of executives with the subject line “The trouble with CrowdTangle.”

“The Economist steps onto the Kevin Roose bandwagon,” Mr. Pinette wrote. (See? Told you it was uncomfortably close to home.)

Nick Clegg, Facebook’s vice president of global affairs, replied, lamenting that “our own tools are helping journos to consolidate the wrong narrative.”

Other executives chimed in, adding their worries that CrowdTangle data was being used to paint Facebook as a right-wing echo chamber.

David Ginsberg, Facebook’s vice president of choice and competition, wrote that if Mr. Trump won re-election in November, “the media and our critics will quickly point to this ‘echo chamber’ as a prime driver of the outcome.”

Fidji Simo, the head of the Facebook app at the time, agreed.

“I really worry that this could be one of the worst narratives for us,” she wrote.

Several executives proposed making reach data public on CrowdTangle, in hopes that reporters would cite that data instead of the engagement data they thought made Facebook look bad.

But Mr. Silverman, CrowdTangle’s chief executive, replied in an email that the CrowdTangle team had already tested a feature to do that and found problems with it. One issue was that false and misleading news stories also rose to the top of those lists.

“Reach leaderboard isn’t a total win from a comms point of view,” Mr. Silverman wrote.

Mr. Schultz, Facebook’s chief marketing officer, had the dimmest view of CrowdTangle. He wrote that he thought “the only way to avoid stories like this” would be for Facebook to publish its own reports about the most popular content on its platform, rather than releasing data through CrowdTangle.

“If we go down the route of just offering more self-service data you will get different, exciting, negative stories in my opinion,” he wrote.

Mr. Osborne, the Facebook spokesman, said Mr. Schultz and the other executives were discussing how to correct misrepresentations of CrowdTangle data, not strategizing about killing off the tool.

A few days after the election in November, Mr. Schultz wrote a post for the company blog, called “What Do People Actually See on Facebook in the U.S.?” He explained that if you ranked Facebook posts based on which got the most reach, rather than the most engagement — his preferred method of slicing the data — you’d end up with a more mainstream, less sharply partisan list of sources.

“We believe this paints a more complete picture than the CrowdTangle data alone,” he wrote.

That may be true, but there’s a problem with reach data: Most of it is inaccessible and can’t be vetted or fact-checked by outsiders. We simply have to trust that Facebook’s own, private data tells a story that’s very different from the data it shares with the public.

Tweaking Variables

Mr. Zuckerberg is right about one thing: Facebook is not a giant right-wing echo chamber.

But it does contain a giant right-wing echo chamber — a kind of AM talk radio built into the heart of Facebook’s news ecosystem, with a hyper-engaged audience of loyal partisans who love liking, sharing and clicking on posts from right-wing pages, many of which have gotten good at serving up Facebook-optimized outrage bait at a consistent clip.

CrowdTangle’s data made this echo chamber easier for outsiders to see and quantify. But it didn’t create it, or give it the tools it needed to grow — Facebook did — and blaming a data tool for these revelations makes no more sense than blaming a thermometer for bad weather.

It’s worth noting that these transparency efforts are voluntary, and could disappear at any time. There are no regulations that require Facebook or any other social media companies to reveal what content performs well on their platforms, and American politicians appear to be more interested in fighting over claims of censorship than getting access to better data.

It’s also worth noting that Facebook can turn down the outrage dials and show its users calmer, less divisive news any time it wants. (In fact, it briefly did so after the 2020 election, when it worried that election-related misinformation could spiral into mass violence.) And there is some evidence that it is at least considering more permanent changes.

This year, Mr. Hegeman, the executive in charge of Facebook’s news feed, asked a team to figure out how tweaking certain variables in the core news feed ranking algorithm would change the resulting Top 10 lists, according to two people with knowledge of the project.

The project, which some employees refer to as the “Top 10” project, is still underway, the people said, and it’s unclear whether its findings have been put in place. Mr. Osborne, the Facebook spokesman, said that the team looks at a variety of ranking changes, and that the experiment wasn’t driven by a desire to change the Top 10 lists.
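
A toy sketch of what an experiment like that might look like, with the obvious caveat that the real News Feed model uses vastly more signals and none of the signal names or weights below are Facebook's.

def rank_pages(posts, weights, n=10):
    # Score each post as a weighted sum of its signals, then keep the top-n pages.
    scored = sorted(posts,
                    key=lambda p: sum(w * p[s] for s, w in weights.items()),
                    reverse=True)
    return [p["page"] for p in scored[:n]]

# Hypothetical posts and signal counts, purely for illustration.
posts = [
    {"page": "Outrage Bait Daily", "comments": 900, "shares": 700, "clicks": 1000},
    {"page": "Local News Co.",     "comments": 120, "shares": 300, "clicks": 5000},
    {"page": "Cute Animal Clips",  "comments": 400, "shares": 900, "clicks": 2500},
]

# Tweaking the weights reshuffles the resulting "Top 10" list.
print(rank_pages(posts, {"comments": 2.0, "shares": 1.0, "clicks": 0.1}))
print(rank_pages(posts, {"comments": 0.5, "shares": 0.5, "clicks": 1.0}))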

As for CrowdTangle, the tool is still available, and Facebook is not expected to cut off access to journalists and researchers in the short term, according to two people with knowledge of the company’s plans.

Mr. Boland, however, said he wouldn’t be surprised if Facebook executives decided to kill off CrowdTangle entirely or starve it of resources, rather than dealing with the headaches its data creates.

“Facebook would love full transparency if there was a guarantee of positive stories and outcomes,” Mr. Boland said. “But when transparency creates uncomfortable moments, their reaction is often to shut down the transparency.”

 

Edited by Francisco 2.0
  • Rage+1 1
Link to comment
Share on other sites

3 hours ago, Fudge Nuggets said:

If they're killing people, then fucking do something about it.  America is too stupid to fix it through citizen action.

This.  I don't know how we got to the point where the people in power have made themselves powerless.  They're very good at pointing stuff out and then doing nothing.

Link to comment
Share on other sites

So Lewis Hamilton wins a controversial Formula 1 race, and FB and twitter are pulling down racist comments left and right as fast as they can.

But dipshits can deliberately spread covid misinformation around, and.....

I get it - it's easier to yank comments that have certain words, you can auto-purge the racist stuff fairly easy, but still.

Link to comment
Share on other sites

4 minutes ago, atomheartbevo said:

So Lewis Hamilton wins a controversial Formula 1 race, and FB and twitter are pulling down racist comments left and right as fast as they can.

But dipshits can deliberately spread covid misinformation around, and.....

I get it - it's easier to yank comments that have certain words, you can auto-purge the racist stuff fairly easy, but still.

well yeah, I guess they could, but have we considered the impact on earnings if FB started depriving all of the olds of their morning outrage? Those top 10 links posted above show how much activity all these magatards generate.  

[screenshot: Facebook Top 10 engagement list from the tweets above]

Won't somebody think about the shareholders?

Link to comment
Share on other sites

On 3/23/2021 at 5:49 PM, Francisco 2.0 said:

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

From @Francisco 2.0's link above - the facebook problem in a nutshell.

________________

Kill your facebook

The stark reality is most facebook users will never do it. They are too far gone.

[insert boiling frog syndrome: the inability or unwillingness of people to react to or be aware of sinister threats that arise gradually rather than suddenly]

 

Link to comment
Share on other sites

23 minutes ago, Blotto said:

well yeah, I guess they could, but have we considered the impact on earnings if FB started depriving all of the olds of their morning outrage? Those top 10 links posted above show how much activity all these magatards generate.  

[screenshot: Facebook Top 10 engagement list from the tweets above]

Won't somebody think about the shareholders?

Guess what FB did to that data tool that shows the top 10 FB links are usually MAGA links? 

they killed it 

 

Link to comment
Share on other sites

  • 2 months later...
2 minutes ago, Hornius Emeritus said:

Wow. As of right now, Facebook, Instagram and WhatsApp have been down worldwide for 30 minutes .... and the DNS records for each are gone. You can't ping any of them. 

I just did this NSlookup:

[screenshot: nslookup results for facebook.com]

Yeah, I noticed it was down on my laptop a little while ago. The phone app appeared to be working but you can't click through to anything.

What does this mean? Outside attack or is this somehow related to the news that broke over the weekend and the whistleblower's appearance on 60 Minutes?

Link to comment
Share on other sites

2 minutes ago, C-Man said:

Yeah, I noticed it was down on my laptop a little while ago. The phone app appeared to be working but you can't click through to anything.

What does this mean? Outside attack or is this somehow related to the news that broke over the weekend and the whistleblower's appearance on 60 Minutes?


I don't know what it might mean in the larger socio-political-commercial context, but it means that, as of this moment, none of them are on the internet. 

FB has a trillion dollar market cap. Would be the business story of the century if it continues like this. 

Link to comment
Share on other sites

14 minutes ago, Hornius Emeritus said:

Wow. As of right now, Facebook, Instagram and WhatsApp have been down worldwide for 30 minutes .... and the DNS records for each are gone. You can't ping any of them. 

I just did this NSlookup:

[screenshot: nslookup results for facebook.com]

Read that it appears to be an issue with a BGP configuration push.

 

There is an engineer somewhere right now.

[GIF: "Sorry" scene from The Hangover]

  • Hook 'Em 3
Link to comment
Share on other sites

Couldn't happen to a better organization.

But damn, those poor IT bastards.  Pulled from reddit:

 

Quote

Looks like their BGP routes got pulled

And, they host their own DNS.

So, when the routes went down, so did all the authoritative name servers. There is no longer an active SOA for Facebook.com domains.

 

And they can't use their badges to get into the datacenters, because, yup, that's down too:

 

Quote

Somebody goes to the data center, plugs in an RS-232 cable into a big expensive router, start a putty session, and rewrites config files.

This is complicated by the fact that VPN, VoiP, Door Badges, Chat, or anything else ran on FB's internal network but exposed to the internet via gateway/employee portal is gone.

So, they may need a fire-ax to get into the datacenter.

 

And...

 

Quote

I did not understand at first how they killed their domain names at once, but I get it now. Big ass company buy domain registrar, hosts it entirely in their own shit and then blows up their shit. Smooth move...

https://www.registrarsec.com/ is Facebooks wholly owned (and very 404) domain registrar.

Being decentralized only works if you are actually decentralized.
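
The "no active SOA" part is easy to check from the outside. Here's a minimal sketch in Python, assuming dig is installed (dnspython would work just as well); when the authoritative servers are unreachable, these queries come back empty or time out.

import subprocess

def lookup(name, rdtype):
    # Shell out to dig for a short-form answer; empty output means no answer.
    try:
        result = subprocess.run(["dig", "+short", name, rdtype],
                                capture_output=True, text=True, timeout=10)
    except subprocess.TimeoutExpired:
        return "(query timed out)"
    return result.stdout.strip() or "(no answer)"

for rdtype in ("SOA", "NS", "A"):
    print(f"{rdtype:>3}: {lookup('facebook.com', rdtype)}")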

 

 

Link to comment
Share on other sites

3 minutes ago, Longhorn_Fan68 said:

can anyone translate nerd?

 

Sort of.  Best analogy I can come up with at the moment.

 

Facebook self-deleted all of its street addresses, zip codes, phone numbers, etc this morning.  So the mail, delivery trucks and customers no longer know how to get to their various properties all over the world.

Further, because Facebook did such a thorough job of deleting all of this information, the only way to fix their problems is to show up in person and start creating their addresses, zip codes and phone numbers all over again.  Which takes time.

 

 

Edited by Francisco 2.0
  • Hook 'Em 3
  • Haha 1
Link to comment
Share on other sites

This isn't something you "oops." It's also very strange that at any given time there isn't someone on the DC floor at one of the hundreds of DCs across the globe that FB has. 

There's something very fucked going on and we won't know for a while what it is, but enjoy it while you can. 

Link to comment
Share on other sites

2 minutes ago, immamac said:

This isn't something you "oops." It's also very strange that at any given time there isn't someone on the DC floor at one of the hundreds of DCs across the globe that FB has. 

There's something very fucked going on and we won't know for a while what it is, but enjoy it while you can. 

I am conflicted here. I'm not on their social accounts, so that's not an issue for me, but the new front lines of the modern cyber war are unsettling when it comes to something as basic as having working infrastructure.

Link to comment
Share on other sites

HAHAHAHA, WE TOLD YOU TO NOT FUCK WITH CONSERVATIVE DISINFORMATION AND HATE SPEECH, BIG TECH!  Parler.com still resolving like a BOSS!

 

$ dig parler.com

; <<>> DiG 9.10.6 <<>> parler.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57087
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;parler.com.                    IN      A

;; ANSWER SECTION:
parler.com.             300     IN      A       185.203.41.218

;; AUTHORITY SECTION:
parler.com.             22938   IN      NS      ns1.parler.com.
parler.com.             22938   IN      NS      ns2.parler.com.

;; ADDITIONAL SECTION:
ns1.parler.com.         22938   IN      A       216.246.208.246
ns2.parler.com.         22938   IN      A       216.246.208.247

;; Query time: 229 msec
;; SERVER: 192.168.88.1#53(192.168.88.1)
;; WHEN: Mon Oct 04 13:55:37 MDT 2021
;; MSG SIZE  rcvd: 112

 

  • Haha 1
Link to comment
Share on other sites

12 minutes ago, immamac said:

Lol Facebook definitely didn't do this. 

I've heard of poisoning BGP routes, but never something like this that completely takes ALL of their routes off the internet. They must have some compromised credentials getting exploited or something. 

I just literally don't understand how you can run such a large IT enterprise and have EVERYTHING in EVERY region go down like this. They're either running a clown show of an IT org, or they got pwned harder than any tech firm has ever been pwned before.
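
For the BGP side, public route collectors let anyone check whether an ASN's prefixes are still being announced. A rough sketch against the RIPEstat data API; the endpoint path and JSON field names here are from memory and worth double-checking against their docs, and AS32934 is Facebook's primary ASN.

import json
import urllib.request

ASN = "AS32934"  # Facebook's primary autonomous system
URL = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource={ASN}"

# Ask the route-collector API which prefixes the ASN is currently announcing.
with urllib.request.urlopen(URL, timeout=15) as resp:
    payload = json.load(resp)

prefixes = payload.get("data", {}).get("prefixes", [])
print(f"{ASN} is announcing {len(prefixes)} prefixes")
for entry in prefixes[:5]:
    print("  ", entry.get("prefix"))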

  • Hook 'Em 1
Link to comment
Share on other sites

 

10 minutes ago, immamac said:

This isn't something you "oops." It's also very strange that at any given time there isn't someone on the DC floor at one of the hundreds of DCs across the globe that FB has. 

There's something very fucked going on and we won't know for a while what it is, but enjoy it while you can. 

 

Yet, not a word from Facebook.  If they were taken down by a third party, I have a feeling Zuckerberg would have used smoke signals, a flare gun, something ... to put out the word.  Hell, he has a Twitter account he could have used; sure, it would be a bad look to use a Facebook competitor, but it's over 4 hours in now; how many millions have they lost so far?

Link to comment
Share on other sites


