
Zuckerberg's testimony at the American Congress: the full text


The full text of Zuckerberg's testimony before the American Congress on the Cambridge Analytica scandal, the influence of fake news on the American elections, and people's privacy on social media

Zuckerberg's testimony before the American Congress is a fundamental moment in the history of Facebook, and not only of Facebook: after the Cambridge Analytica scandal, the social network, and the whole Web, is trying to win back users' trust. The US Congress has decided to release the full text of what Mark Zuckerberg will say to the Congressmen after the Cambridge Analytica scandal, also called the Facebook Datagate.


The full text of Zuckerberg's testimony at the American Congress

Chairman Walden, Ranking Member Pallone, and Members of the Committee,

We face a number of important issues around privacy, safety, and democracy. Before I talk about the steps we're taking to address them, I want to talk about how we got here.

Facebook is an idealistic and optimistic company. For most of our existence, we focused on all the good that connecting people can bring. As Facebook has grown, people everywhere have gotten a powerful new tool to stay connected to the people they love, make their voices heard, and build communities and businesses. Just recently, we've seen the #metoo movement and the March for Our Lives, organized, at least in part, on Facebook. After Hurricane Harvey, people raised more than $20 million for relief. And more than 70 million small businesses now use Facebook to grow and create jobs.

But it's clear now that we didn't do enough to prevent these tools from being used for harm as well. That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy. We didn't take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I'm sorry. I started Facebook, I run it, and I'm responsible for what happens here.

So now we have to go through every part of our relationship with people and make sure we're taking a broad enough view of our responsibility.

It's not enough to just connect people, we have to make sure those connections are positive. It's not enough to just give people a voice, we have to make sure people aren't using it to hurt people or spread misinformation. It's not enough to give people control over their information, we have to make sure the developers they share it with protect their information too. Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good.

But I am committed to getting it right.

That includes improving the way we protect people's information and safeguard elections around the world. Here are a few key things we're doing:


Cambridge Analytica

Over the past few weeks, we've been working to understand exactly what happened with Cambridge Analytica and taking steps to make sure this doesn't happen again. We took important actions four years ago to prevent this from happening again, but we also made mistakes; there's more to do, and we need to step up and do it.

A. What Happened

In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends' birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.

In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people who agreed to share some of their Facebook information as well as some information from their friends whose privacy settings allowed it. Given the way our platform worked at the time, this meant Kogan was able to access some information about tens of millions of their friends.

In 2014, to prevent abusive apps, we announced that we were changing the entire platform to dramatically limit the Facebook information apps could access. Most importantly, apps like Kogan's could no longer ask for information about a person's friends unless their friends had also authorized the app. We also required developers to get approval from Facebook before they could request any data beyond a user's public profile, friend list, and email address. These actions would prevent any app like Kogan's from being able to access as much Facebook data today.

In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent, so we immediately banned Kogan's app from our platform, and demanded that Kogan and the other entities he gave the data to, including Cambridge Analytica, formally certify that they had deleted it immediately, which they ultimately did.

Last month, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims they have already deleted the data and has agreed to a forensic audit by a firm we hired to investigate this. We're also working with the U.K. Information Commissioner's Office, which has jurisdiction over Cambridge Analytica, as it completes its investigation into what happened.

B. What We Are Doing

We have a responsibility to make sure what happened with Kogan and Cambridge Analytica doesn't happen again. Here are some of the steps we're taking:

Safeguarding our platform. We need to make sure that developers like Kogan, who got access to a lot of information in the past, can't get access to as much information going forward.

1. We made some big changes to the Facebook platform in 2014 to dramatically restrict the amount of data that developers can access and to proactively review the apps on our platform. This makes it so a developer today can't do what Kogan did years ago.

2. But there is more we can do here to limit the information developers can access and to put more safeguards in place to prevent abuse.

We're removing developers' access to your data if you haven't used their app in three months.

We're reducing the data you give an app when you approve it to only your name, profile photo, and email address. That's a lot less than apps can get on any other major app platform.

We're requiring developers to not only get approval but also to sign a contract that imposes strict requirements in order to ask anyone for access to their posts or other private data.

We're restricting more APIs like groups and events. You should still be able to sign into apps with them, but information about other people, like other posts in groups you're in or other people going to events you're going to, will be much more restricted.

Two weeks ago, we found out that a feature that lets people look someone up by their phone number and email was abused. This feature is useful in cases where people have the same name, but it was abused to link people's public Facebook information to a phone number they already had. When we found out about the abuse, we shut this feature down.

3. Investigating other apps. We are in the process of investigating every app that had access to large amounts of information before we locked down our platform in 2014. If we detect suspicious activity, we will do a full forensic audit. And if we find that someone is improperly using data, we'll ban them and tell everyone affected.

4. Building better controls. Finally, we're making it easier to understand which apps you've allowed to access your data. This week we started showing everyone a list of the apps they've used and an easy way to revoke those apps' permissions to their data. People can already do this in their privacy settings, but we're going to put it at the top of News Feed to make sure everyone sees it. And we have told everyone whose Facebook information may have been shared with Cambridge Analytica.

Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform.


Russian Election Interference

Facebook's mission is about giving people a voice and bringing people closer together. Those are deeply democratic values and we're proud of them. I don't want anyone to use our tools to undermine democracy. That's not what we stand for.

We were too slow to spot and respond to Russian interference, and we're working hard to get better. Our sophistication in handling these threats is growing and improving quickly. We will continue working with the government to understand the full extent of Russian interference, and we will do our part not only to ensure the integrity of free and fair elections around the world, but also to give everyone a voice and to be a force for good in democracy everywhere.

A. What Happened

Elections have always been especially sensitive times for our security team, and the 2016 U.S. presidential election was no exception.

Our security team has been aware of traditional Russian cyber threats like hacking and malware for years. Leading up to Election Day in November 2016, we detected and dealt with several threats with ties to Russia. This included activity by a group called APT28, which the U.S. government has publicly linked to Russian military intelligence services.

But while our primary focus was on traditional threats, we also saw some new behavior in the summer of 2016, when APT28-related accounts, under the banner of DCLeaks, created fake personas that were used to seed stolen information to journalists. We shut these accounts down for violating our policies.

After the election, we continued to investigate and learn more about these new threats. What we found was that bad actors had used coordinated networks of fake accounts to interfere in the election: promoting or attacking specific candidates and causes, creating distrust in political institutions, or simply spreading confusion. Some of these bad actors also used our ads tools.

The best known of these actors is the Internet Research Agency (IRA), a Russian organization that has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia. We found about 470 accounts and pages linked to the IRA, which generated around 80,000 Facebook posts over about a two-year period.

Our best estimate is that approximately 126 million people may have been served content from a Facebook Page associated with the IRA at some point during that period. On Instagram, where our data on reach is not as complete, we found about 120,000 pieces of content, and estimate that an additional 20 million people were likely served it.

Over the same period, the IRA also spent about $100,000 on more than 3,000 ads on Facebook and Instagram, which were seen by an estimated 11 million people in the United States. We shut down these IRA accounts in August 2017.

B. What We Are Doing

There's no question that we should have spotted Russian interference earlier, and we're working hard to make sure it doesn't happen again. Our actions include:

Building new technology to prevent abuse. Since 2016, we have improved our techniques to prevent nation states from interfering in foreign elections, and we've built more advanced AI tools to remove fake accounts more generally. There have been a number of important elections since then where these new tools have been successfully deployed. For example:

1. In France, leading up to the presidential election in 2017, we found and took down 30,000 fake accounts.

2. In Germany, before the 2017 elections, we worked directly with the election commission to learn from them about the threats they saw and to share information.

3. In the U.S. Senate Alabama special election last year, we deployed new AI tools that proactively detected and removed fake accounts from Macedonia trying to spread misinformation.

4. We have disabled thousands of accounts tied to organized, financially motivated fake news spammers. These investigations have been used to improve our automated systems that find fake accounts.

5. Last week, we took down more than 270 additional pages and accounts operated by the IRA and used to target people in Russia and Russian speakers in countries like Azerbaijan, Uzbekistan and Ukraine. Some of the pages we removed belong to Russian news organizations that we determined were controlled by the IRA.

Significantly increasing our investment in security. We now have about 15,000 people working on security and content review. We'll have more than 20,000 by the end of this year.

1. I've directed our teams to invest so much in security, on top of the other investments we're making, that it will significantly impact our profitability going forward. But I want to be clear about what our priority is: protecting our community is more important than maximizing our profits.

Strengthening our advertising policies. We know some Members of Congress are exploring ways to increase transparency around political advertising, and we're happy to keep working with Congress on that. But we aren't waiting for legislation to act.

1. From now on, every advertiser who wants to run political or issue ads will need to be authorized. To get authorized, advertisers will need to confirm their identity and location. Any advertiser who doesn't pass will be prohibited from running political or issue ads. We will also label the ads and show you who paid for them. We're starting this in the U.S. and expanding to the rest of the world in the coming months.

2. For even greater political ads transparency, we have also built a tool that lets anyone see all of the ads a page is running. We're testing this in Canada now and we'll launch it globally this summer. We're also creating a searchable archive of past political ads.

3. We will also require people who manage large pages to be verified as well. This will make it much harder for people to run pages using fake accounts, or to grow virally and spread misinformation or divisive content that way.

4. In order to require verification for all of these pages and advertisers, we will hire thousands of more people. We're committed to getting this done in time for the critical months before the 2018 elections in the U.S., as well as elections in Mexico, Brazil, India, Pakistan and elsewhere in the next year.

5. These steps by themselves won't stop people from trying to game the system. But they will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads. Election interference is a problem that's bigger than any one platform, and that's why we support the Honest Ads Act.

Sharing information. We have been working with other technology companies to share information about threats, and we are also cooperating with the U.S. and foreign governments on election integrity.

At the same time, it's also important not to lose sight of the more straightforward and larger ways Facebook plays a positive role in elections.

In 2016, people had billions of interactions and open discussions on Facebook that may never have happened offline. Candidates had direct channels to communicate with tens of millions of citizens. Campaigns spent tens of millions of dollars organizing and advertising online to get their messages out further. And we organized "get out the vote" efforts that helped more than 2 million people register to vote.

Security around elections is not a problem you ever fully solve. Organizations like the IRA are sophisticated adversaries who are constantly evolving, but we keep improving our techniques to stay ahead. And we also keep building tools to help people make their voices heard in the democratic process.


My top priority has always been connecting people, building communities and bringing the world closer together. Advertisers and developers will never take priority over that as long as I'm running Facebook.

I started Facebook when I was in college. We've come a long way since then. We now serve more than 2 billion people around the world. I believe deeply in what we're doing. And when we address these challenges, I know we'll look back and view helping people connect and giving more people a voice as a positive force in the world.

I realize the issues we're talking about today are not just issues for Facebook and our community; they're challenges for all of us as Americans. Thank you for having me here today, and I'm ready to take your questions.

Mark Zuckerberg on Capitol Hill (photo from CNBC).
