Transparency Report Update: January – June 2020

Today we present our update to the Automattic Transparency Report. The data covers the first six months of 2020, from January 1 through June 30.

Please read through the report here.

In addition to our full report, we’re making a CSV file of our data available for the first time. Those looking to parse through the numbers can download the data from our current report here.
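For those who want to work with the CSV directly, here is a minimal sketch of loading it with Python's standard library. The column names and values below are hypothetical placeholders; the actual schema is defined by the report file itself:

```python
import csv
import io

# Hypothetical sample mirroring the downloadable CSV; the real column
# names are defined by the report file and may differ.
sample = """period,request_type,country,requests,percent_removed
2020-H1,information request,United States,25,0
2020-H1,government takedown,Russia,112,38
2020-H1,government takedown,Turkey,74,21
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Total takedown demands across countries in the sample.
takedowns = sum(
    int(r["requests"]) for r in rows if r["request_type"] == "government takedown"
)
print(takedowns)  # 186
```

With the real file, `io.StringIO(sample)` would simply be replaced by an `open()` call on the downloaded CSV.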

In late 2019, Automattic acquired Tumblr. Our teams have been working hard to align our data gathering processes across platforms so that we can present future transparency reports in a more unified way. Data regarding information requests and takedown demands aimed at Tumblr content for the January–June 2020 period will be published soon. In the meantime, you can find Tumblr’s most recent transparency reports at this link.

At Automattic, we have an ongoing commitment to transparency. In this most recent report, we are once again presenting data on information requests, takedown demands, and intellectual property complaints that we have received during this reporting period. There are two notable data points that we would like to highlight this time around: the volume of Internet Referral Unit (IRU) reports, and DMCA takedown notices that we received. 


Internet Referral Unit Reports

We have previously discussed both the challenges and successes that we experience when addressing extremist content while protecting freedom of speech. In response to extremist content, we have worked with a number of IRUs from around the world. These referral units are state authorities that report to us material they believe is in violation of our Terms of Service. Upon receipt of these reports, we review user accounts and take action when content or behaviors run afoul of our long-standing position on terrorist activity.

We are pleased to report that our efforts in this area appear to have paid off, as there has been a dramatic decrease in IRU reports during this period. In total, we received three reports. This is a noteworthy decline compared to the 34 reports we received during the previous reporting period in 2019 (as well as the 82 reports we received during the second half of 2017, when we first began reporting on these specific requests).

Click here for more information.


DMCA Notices

The number of DMCA takedown notices received during this reporting period — 9,462 — was almost double what it was in the last reporting period. This is largely due to a single complainant: the anti-piracy company 3ants, which submitted a nearly record-breaking 4,544 notices in the first six months of the year. The last time we experienced this volume of takedown notices from a single complainant was during the second half of 2018, when Link Busters submitted 4,965 notices over a six-month period.

Unfortunately, companies such as these often submit multiple duplicate notices targeting the same material, particularly when they use bots. Additionally, many of these takedown notices target content that we do not host or that has already been removed. Because so many of these takedown notices are automated, replies are not monitored by humans, and our requests for additional information often go unanswered. In this case, 3ants has taken this to the extreme, and we’ve had to adjust our internal processes to deal with the influx of thousands of duplicate notices.
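As an illustration of the kind of internal adjustment described above, duplicate notices can be collapsed by keying on the complainant and the targeted material. This is a hypothetical sketch, not our actual tooling, and the record fields are invented for the example:

```python
# Hypothetical notice records; real DMCA notices carry many more fields.
notices = [
    {"complainant": "3ants", "target_url": "https://example.com/post/1"},
    {"complainant": "3ants", "target_url": "https://example.com/post/1"},  # exact duplicate
    {"complainant": "3ants", "target_url": "https://example.com/post/2"},
]

seen = set()
unique = []
for notice in notices:
    key = (notice["complainant"], notice["target_url"])
    if key not in seen:  # keep only the first notice per (complainant, target) pair
        seen.add(key)
        unique.append(notice)

print(len(unique))  # 2
```

In practice a review queue would still surface each unique pair to a human, but duplicates like the second record above never need a second look.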

For more information, please see our Intellectual Property page.


We hope you will find this information interesting and helpful. Please contact us if you have any questions on this report. Additionally, if there are other types of data you would like to see in future reports, we would love to hear from you.

We’re Against Bots, Filtering, and the EU’s New Copyright Directive

The European Union is considering a dangerous new amendment to its Copyright Directive that will restrict speech and expression and potentially change the way many in Europe, and worldwide, experience the internet.

Article 13 of the Directive on Copyright in the Digital Single Market would effectively require Internet platforms – of all sizes – to proactively filter any content posted by their users, with the goal of stamping out materials that allegedly infringe on copyrights…before they are even posted.

The Directive would also require platforms to respond to almost any copyright infringement notice submitted by a rights holder, without adequate standards for completeness or effective recourse in the event of mistakes or abuse.

We’re against the proposed change to Article 13 because we have seen, first-hand, the dangers of relying on automated tools to police nuanced speech and copyright issues. Bots or algorithms simply cannot determine whether a blog post, photo in a news article, or video posted to a website is copyright infringement or legitimate use. This is especially true on a platform like WordPress.com, where copyrighted materials are legitimately posted in the context of news articles, commentary, criticism, remixing, and memes — thousands of times per day.

We’ve also seen how copyright enforcement, without adequate procedures and safeguards to protect free expression, skews the system in favor of large, well-funded players, and against those who need protection the most: individual website owners, bloggers, and small publishers who don’t have the resources or legal wherewithal to defend their legitimate speech.

Based on our experience, the changes to Article 13, while well-intentioned, will almost certainly lead to a flood of unintended, but very real, censorship and chilling of legitimate, important, online speech.

There’s a vote coming up on this proposal, later this month, and fortunately there’s still time to stop it. If you’d like to contact a Member of the EU Parliament and voice your concerns, you can do that here.

Automattic at RightsCon 2018

On May 16, 2018, Automattic is teaming up with Global Partners Digital for a panel on content moderation and changes to legal liability for online platforms being considered by governments around the world.

During our RightsCon session, Content Regulation in the Digital Age, we will explore the role of publishing platforms and social networks in the face of increasing calls to regulate content, and consider the definition of ‘content’ as it relates to ever-expanding forms of expression facilitated by the internet. Automattic has long been a strong advocate for our users and for transparency in reporting the actions we take with regard to content moderation on our platforms. We look forward to sharing details of this work as part of the discussion at RightsCon.

Moderated by Charles Bradley, Executive Director of Global Partners Digital, the hour should prove to be a lively discussion between panelists as well as an engaging Q&A with conference attendees. During the week, Global Partners Digital will also be launching their white paper on a rights-respecting model of online content regulation.

This is Automattic’s second consecutive year attending RightsCon. We’re pleased to return and connect with industry professionals and digital rights groups from across the globe. Our trust and safety team will be present throughout the week. Also keep an eye on the schedule for details about our official after party following the conference sessions on Thursday, May 17.

RightsCon Toronto takes place Wednesday, May 16 to Friday, May 18 at the Beanfield Centre at Exhibition Place. Tickets are available now!

Transparency Report Update: July – December 2017

Today we invite you to read our bi-annual update to the Automattic transparency report, covering the period from July 1, 2017 to December 31, 2017. Read through to find details on the number of information requests and takedown demands we received from governments, as well as from intellectual property rights holders under the DMCA.

Our commitment to transparency pushes us to find ways to improve what we share, and we’re pleased to announce three major additions for this reporting period. In this update, we include statistics on the number of notices we’ve received from Internet Referral Units (IRUs). We also share details on the country blocks that we’ve been forced to put in place in countries like Turkey and Russia. Finally, we include more insight into the government takedown demands we receive and how we respond. Want to learn more? Read on!

Internet Referral Unit Stats

As we acknowledged in our last blog post, addressing extremist content while simultaneously protecting freedom of speech is a major challenge for online platforms, including WordPress.com. Currently, state authorities report to us material that they believe to be in violation of our Terms of Service, which we then evaluate. In an effort to shed more light on this process, we’ve created a whole new section in our report, where we’ll share updated figures every six months. Click here for more information.

Country Block Transparency

Automattic routinely receives censorship demands from government agencies around the world concerning content published on WordPress.com. We go to great lengths to fight these demands, but are sometimes left with no choice but to geoblock content in certain regions; otherwise, we risk having access to our services completely cut off.

We’ve experimented with different ways of making this data public in the past (including submitting details to the Lumen Database, and creating a “mirrored archive” for Russia), but until now there has been no central place to find details on all currently geoblocked sites. Our new page changes all that, and includes the lists in text format to hopefully make it easier for censorship-monitoring sites like OONI and Blocked to import and track.
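To give a sense of how a censorship-monitoring project might consume such a list, here is a sketch that parses a plain-text blocklist. The `country_code url` line format and the entries are assumptions for illustration only; the actual published format may differ:

```python
# Hypothetical "country_code url" lines with '#' comments; the actual
# published list format may differ.
blocklist_text = """# sites geoblocked as of this report
RU https://blocked-in-russia.example/
TR https://blocked-in-turkey.example/
"""

blocked = {}
for line in blocklist_text.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue  # skip blank lines and comments
    country, url = line.split(maxsplit=1)
    blocked.setdefault(country, []).append(url)

print(sorted(blocked))  # ['RU', 'TR']
```

A plain line-per-entry format like this is trivially diffable, which is what makes it easy for trackers to detect when sites are added to or removed from the lists.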

More Insight Into Government Takedown Demands

Previously, we didn’t include situations where we removed content because it violated our policies in the “percentage of requests where content was removed.” This is because we would have removed the content regardless of how we came across it. In other words, the fact that the content was brought to our attention via a government demand or court order had no bearing on the action we took. For example, if a government entity reported a site for selling controlled substances, we’d suspend the site, but not count that as a request where content was removed. However, we recognized that we could provide more insight about the actions we take and the types of reports we’re seeing from each country by sharing a new data point: “percentage of requests where content was removed due to a violation of our policies.” To minimize potential confusion, we’ve renamed the original description to “percentage of requests where content was removed solely in response to the demand.”

We hope that you find all of this new information interesting, and we’d encourage other platforms to share their own experiences in these areas.

Tackling Extremist Content on WordPress.com

At Automattic, we build tools that make it easy for anybody to share their voice online. WordPress.com is home to tens of millions of sites, covering a huge variety of topics and issues—from news blogs, to websites for Fortune 500 companies, to thriving ecommerce stores. This scale also means that we inevitably host content which some may find offensive, distasteful, or even abhorrent.

Like most online hosts, we do not pre-screen the content that our millions of users post to our services. We evaluate reports of content that goes against our User Guidelines or Terms of Service, and take some measures to proactively search out spam or other abusive content that’s posted to our services. We try to make the process for reporting illegal or abusive sites as transparent and simple as possible. If you see a site that you think violates our rules, please report it to us here.

One category of content that has been a focus for law enforcement and all internet companies—including Automattic—is terrorist or extremist propaganda.

The volume of these sites reported on WordPress.com is not high. In 2017, we received 131 reports from government Internet Referral Units that sites violated our policies against extremist content. This is out of the millions of sites that we host. Even given the relatively low volume, we take this issue very seriously and have developed specific procedures to address it on our platform.

Transparency Report Update: January – June 2017

The time has come for the bi-annual update to our transparency report, covering January 1 to June 30, 2017. As usual, we’ve shared updated data about national security requests, government requests for user information, government demands for removal of content, as well as notices of copyright and trademark infringement. We’ve included some of the most interesting highlights below.

Intellectual Property

We received the highest number of DMCA takedown notifications ever recorded in a single six-month reporting period: 9,273, an 85% increase over the 5,006 we received between July and December 2016.

The bulk of this increase comes down to just two complainants, both of which submitted over 2,000 reports each, together accounting for 47% of the 9,273 total takedowns received. For perspective, the third-highest complainant submitted just 371 reports.
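The increase can be checked directly from the two totals in the report:

```python
# Totals taken from the report text above.
previous, current = 5_006, 9_273
increase = (current - previous) / previous
print(f"{increase:.0%}")  # 85%
```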

We often received multiple copies of the same DMCA notice (sometimes days apart!), and these duplicate notices often targeted material that had already been removed or was hosted elsewhere. This highlights the problem with automated takedown systems that have no element of human review. We’ve written about the issues involved with DMCA takedown bots before, and the figures in this report suggest that the problem isn’t going away. The high volume of thoughtless takedown notices places a big burden on our small team, which is charged with carefully reviewing and processing the notices we receive to ensure the rights of copyright holders and our users are protected.

Due to the high volume of bot-generated, deficient takedown notices, we removed content in response to only 22% of the DMCA notices we received in this period, compared with our average of approximately 60% across the past six reporting periods.

Meanwhile, we can count on one hand the number of counter notices we received from our site owners, and it’s the fewest we’ve seen in a reporting period to date: 5. As we’ve mentioned, the counter notice process is intimidating and can lead to a user being sued in federal court, which means spending a tremendous amount in legal fees defending their case, even if the court determines the user’s material isn’t infringing.

Government Demands

We receive a steadily increasing number of takedown demands from governments around the world, with a 61% increase this reporting period compared to the previous one.

Takedown demands from certain countries are particularly concerning. While Russia sends us the greatest volume, we’re seeing especially problematic demands from the Turkish government, which actively censors content that criticizes the government, its officials, and/or the army. In hopes of defending our users’ right to speak about their government, we have filed objections to 13 of the most egregious court orders. Unfortunately, we have not seen much success: eleven of our appeals have been rejected, and two are pending. Additionally, we do what we can to partner with our affected users, Turkish citizens who are blogging about their government, in filing our objections. However, this is fraught territory, especially when an objection must be filed in a user’s name. One of our users said it best: to appeal, and reveal their identity, would be “suicide.”

Further, after consulting with additional experts on Turkish law, we’ve learned that not only are the users who bring a case at risk; even the lawyers and judges in cases involving national security (for example, content that criticized military involvement and suggested military misconduct) can be charged with assisting terrorist organizations.

We will continue to fight for freedom of expression and our users in Turkey, which includes being as transparent as we can about the situation, the demands we receive, and our response to them.

National Security

We recently shared some information about the process we followed to lift the nondisclosure restrictions associated with five NSLs received in previous years and provide copies of our correspondence with the government. We hope this information will be useful to other companies who may wish to take advantage of the legal options that are currently available to challenge NSL nondisclosure orders.

We’ve also developed a form reciprocal notice request. If your company has received an NSL in the past and you would like the government to review the letter’s nondisclosure requirement, this form may be useful to you.

As always, please take a look through the data, and let us know if you have any questions or other types of data that you’d like to see in future reports!

Shining Light on National Security Letters

Transparency. We aim for it in most everything we do at Automattic.

When it comes to legal demands from the government, being fully transparent can be difficult, and in some cases impossible, where we are prohibited by law from revealing information about a legal request we receive. Nowhere is the lack of transparency more controversial than in the area of National Security Letters (“NSLs”).

Today we are releasing and publishing redacted versions of five NSLs, which we hope will add to the public’s understanding of this legal tool and help inform the debate about their scope and use.

We would also like to share some information about the process we followed to lift the nondisclosure restrictions associated with these NSLs and provide copies of our correspondence with the government on this subject. We hope this information will be useful to other companies who may wish to take advantage of the legal options that are currently available to challenge NSL nondisclosure orders.

What is an NSL?

NSLs are a form of government legal process (like a subpoena) used to request information from communications service providers, like phone and internet companies, about their users in national security investigations.

NSLs are legally controversial because no judge reviews the information demands before they are issued, and they often come with a nondisclosure requirement (also known as a “gag order”) that lasts for an indefinite amount of time and can end up being permanent. An NSL gag order legally prohibits the recipient (often an internet company like Automattic) from sharing a copy of the NSL with the user whose account info is being requested. It also prevents the company from sharing any information about the NSL publicly, or from even making a public statement that they received the NSL at all. Instead, the government allows NSL recipients to report the number of NSLs they receive in a broad range, which is designed to give the public an idea of the number of NSLs received during a certain period of time. This is why we reported receipt of 0-249 NSLs for certain periods covered by our Transparency Report.
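For illustration, banded reporting like “0-249” can be expressed as a simple mapping from an exact count to its permitted range. The fixed 250-wide banding below is an assumption for the example; the permitted bands depend on the reporting option a company chooses:

```python
def nsl_reporting_band(count, width=250):
    """Map an exact NSL count to a coarse reporting range like '0-249'.

    The fixed banding here is an illustrative assumption; actual
    permitted ranges depend on the chosen reporting option.
    """
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

print(nsl_reporting_band(0))    # 0-249
print(nsl_reporting_band(300))  # 250-499
```

Note that under this scheme a company that received zero NSLs and one that received 200 both report “0-249”, which is precisely the coarseness the banding is designed to produce.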

The EFF has published a comprehensive set of resources about NSLs here, if you’re interested in learning more about them.

National Security Letters Received by Automattic

Below are redacted copies of five National Security Letters received by Automattic between 2010 and 2013.

Each of the NSLs that we are publishing initially included an indefinite nondisclosure requirement that prohibited us from sharing any information about the letter or publicly acknowledging that we received an NSL.

We recently requested that these nondisclosure requirements be lifted, under the “reciprocal notice” procedures of the USA FREEDOM Act. More detail on the procedures that we followed is below.

In response to our requests, the FBI lifted the gag orders with respect to all information in each of the NSLs we are making available today. Before publishing the letters publicly, however, we decided to redact the following information from each letter: (1) the site URL about which the government requested information, (2) names of Automattic personnel to whom the request was addressed, and (3) name and contact information for the FBI personnel involved in making the information request.

We made these limited redactions in order to protect privacy interests. The NSLs are otherwise exactly what we received when they were served on us.

In response to four of the letters, we produced information that was responsive to the government’s request. We did not have user information that was responsive to one of the NSLs, and did not produce any information as a result.

Before publishing these letters publicly, we notified each account holder whose information was requested or produced under the NSLs we received, and provided them with a copy of the relevant NSL.

Legal Review of Nondisclosure Requirements

The USA FREEDOM Act, passed in 2015, includes two avenues under which an NSL nondisclosure requirement might be reviewed and lifted.

First, the Act requires the FBI to periodically review the NSLs that they have issued and determine whether their nondisclosure requirements are still necessary. As a result of these reviews, the FBI has terminated gag orders for at least a handful of NSLs. We know, for example, that two companies were able to discuss and publish NSLs for the first time as a result of this periodic review: Yahoo released three NSLs in June 2016 and Google released eight NSLs in December 2016.

Second, the statute provides a mechanism under which the recipient of an NSL, like Automattic, has the right to ask the FBI to review the nondisclosure requirement accompanying an NSL. This process is referred to as “reciprocal notice.” If an NSL recipient invokes reciprocal notice, the FBI must review the NSL within 30 days and decide whether the nondisclosure requirement is still necessary. If the FBI decides it is not, it lifts the gag order. If the FBI decides that secrecy is still needed, the government must seek review of the nondisclosure requirement in federal court. A judge then reviews the nondisclosure requirement to determine if it should stay in place, needs to be modified, or should be terminated.

It is our policy to invoke the reciprocal notice procedure for any NSLs we receive. If and when a nondisclosure requirement is lifted, our policy is to share the contents of the NSL with any affected users where possible, as well as to publish a version of the NSL.

In May 2017, we sent letters to the FBI invoking the reciprocal notice procedure for each of the five NSLs that we are publishing today. Though each of the NSLs is several years old (the oldest letter dates back to 2010), we have a strong commitment to transparency and thought it was important to do what we could to disclose NSLs to our affected users and the public, even though these disclosures are several years after the fact.

In response to our letters, the FBI declined to seek judicial review of any of the five nondisclosure requirements. Instead, the government lifted the nondisclosure requirement for each letter, allowing us to share a copy of each letter publicly, with voluntary redactions to protect the privacy of the people involved.

Based on our correspondence with the government, we’ve developed a form reciprocal notice request here in Google Docs format. If your company has received an NSL in the past and you would like the government to review the letter’s nondisclosure requirement, this form may be useful to you. We have also included a copy of the FBI’s response to each of our request letters (see below).

Automattic’s Commitment to Transparency

We believe that the government does critically important work to protect our national security, and that investigative tools like NSLs are necessary to that work. At the same time, we take our commitment to transparency very seriously, and believe that our users and the public have a right to be informed about the nature of the tools that the government uses to conduct investigations and the scope of their use. That is why we worked to lift the gag orders on the NSLs that we are releasing today. We hope that the information we’ve published adds to the body of knowledge and helps inform the important public debate about NSLs.

We also continue to believe that NSLs pose serious constitutional concerns, particularly because they indefinitely prevent companies like us from speaking about them, and informing our users or the public about the NSLs that we receive. The procedures used to lift nondisclosure requirements are flawed because they put the burden of seeking an end to secrecy almost entirely on the companies, like Automattic, who receive NSLs. Though flawed, these procedures are all we have for now. We were able to use them to remove the nondisclosure orders on the letters we publish today and would like to see other companies who have received NSLs follow the same path.

Redacted NSLs (pdf)






Redacted FBI Response Letters (pdf)

NSL-10-287729_FBI Response_Redacted

NSL-10-288826_FBI Response_Redacted

NSL-12-355105_FBI Response_Redacted

NSL-12-355263_FBI Response_Redacted

NSL-13-365428_FBI Response_Redacted

Hall of Shame: Something Stinks in Abbotsford

For our latest Hall of Shame entry, we turn our gaze towards the City of Abbotsford in Canada. For reference, here’s their logo. Commit it to memory, as you’ll want to remember what it looks like for later:

[City of Abbotsford logo]

City officials took issue with a 2013 post written by a homeless blogger that criticized them for reportedly “deliberately spread[ing] chicken manure on a homeless person’s camp” in an effort to deter people from congregating in the area. To demonstrate just how… dirty a move the blogger thought this was, he illustrated his post with a doctored image of the city’s logo, which had been modified to include a large … well, see for yourself:

City of Abbotsford Parody Logo

The accompanying text reads:

“Oh crap! Abbotsford already needs to update their new city logo.”

That seems to make the blogger’s feelings quite clear. Unhappy, however, with this depiction of their logo, a marketing firm purporting to act on behalf of the Abbotsford city council sent us a DMCA takedown notification earlier this January, claiming copyright over the image.


It is unclear why the city council decided to go down this particular route in an attempt to have the image removed, or why it took them almost four years to do so. What is clear, however, is that this stinks. Pardon the pun. It was glaringly obvious that the addition of the hilariously large feces was for the purposes of parody, and tied directly to the criticisms laid out in the post. As a result, it seems hard to believe that the city council took fair use considerations into account before firing off their ill-advised notice, and trying to wipe up this mess.

We rejected the complaint, and passed it on to the blogger for his perusal. In response, he updated the logo, just in case there was any doubt that the image was being used for the purposes of commentary or criticism:

City of Abbotsford Parody Logo

Much clearer now.

City of Abbotsford, welcome to the Hall of Shame.

Note: Our use of the Abbotsford city logo in this post is also for the purposes of commentary or criticism, and therefore falls under fair use protections. If anybody on the council happens to be reading, please don’t send us another DMCA takedown. 🙂

Reforming the DMCA

We at Automattic are firm believers in legitimate copyright protection. We are also vigilant about shielding our users from abuse, particularly in cases in which the abuse aims to censor legitimate criticism or ignores fair use of copyrighted materials. As an online service provider, balancing these diverse interests and rights is important to us and requires careful review and diligence. Section 512 of the DMCA was enacted to provide online service providers like Automattic with guidance on handling these issues; however, in the nearly two decades since the law was passed, the Internet landscape has evolved significantly, leaving deficiencies in the safeguards of Section 512.

To help remedy these deficiencies, the US Copyright Office started an initiative last year to study and propose reforms to the DMCA. We were grateful to have an opportunity to submit our feedback and to highlight the issues we commonly experience with the current system—namely, abusive DMCA notices, a deficient counter notice process, and the impact of copyright bots on fair use. As a follow-up, the Copyright Office recently solicited empirical data and analyses to help shed light on the effectiveness and impact of the current Section 512 safe harbors – and Automattic was happy to share the data we’ve gathered on the subject in recent years.

Some key findings that we highlighted:

  • With three years of data relating to the copyright infringement notices we receive, it was particularly striking to see how consistent the figures are year after year on subjects such as counter notices, fair use, and notices rejected for procedural mistakes.
  • 10% of the notices of claimed infringement we receive are directed at clear fair uses or uncopyrightable content, or contain clear misrepresentations regarding copyright ownership. If our experience is representative of other online service providers in the industry, the overall volume of abuse is significant.
  • The number of counter notices we receive is remarkably low, which we believe is not the result of a correspondingly low number of false or mistaken assertions of infringement, but instead results from the concern that sending a counter notice is likely to lead to costly litigation, even if the litigation would ultimately establish that no infringement had occurred.
  • More than a third of the notices we receive simply do not contain the required information—they either include incorrect information, leave out pertinent information, or fail to provide a clear description of the unauthorized material.
  • Automattic has spent a significant amount on legal fees in bringing lawsuits against blatant violators of the DMCA, but has been unable to recover these costs or collect on judgments in our favor because the remedies available under the law are often illusory even in cases of clear abuse.

Our data shows a continuing issue with the current DMCA system, which allows abuse to continue unchecked due to a lack of real statutory consequences. Internet users need a more effective remedy than the counter notice to adequately safeguard their legitimate content. Stricter form-of-notice requirements, an opportunity for targets to respond before content is removed, and statutory damages for abusive notices are some possible solutions that would provide increased protection for Internet users.

We are hopeful that our feedback and data will help guide reforms toward creating a more equitable environment for Internet users. We look forward to seeing how the law evolves and will continue to work hard to make the DMCA process as fair and balanced as possible.  

For more information about the data we collect, you can view our transparency reports related to section 512 here.

You can read our full comments here: Section 512 Comments

Automattic at RightsCon 2017

Automattic’s mission is to democratize publishing, part of which involves fighting for digital rights online. As a result, we are proud to sponsor RightsCon 2017 — a conference starting today, centred around “how to keep the internet open, free, and secure.”

Several members of our legal and policy teams are in Brussels to join the conversation.

On Thursday at 4 pm, we will host a session on the day-to-day realities of dealing with takedown demands from all over the world. If you are interested in the practical perspective of a service provider fighting for bloggers’ rights, we hope you will come and ask us tough questions.

Later, on Thursday at 6:15 pm, we invite all conference attendees to continue the conversation over drinks and snacks at a cocktail reception on-site immediately following the programming.

If you are not at the conference in person, you can follow along on social media with #rightscon, and watch for more blog posts to come.