Transparency Report Update: January – June 2021

Today we’re rolling out Automattic’s most recent transparency report, which covers January 1, 2021 – June 30, 2021. As in past reports, we’re sharing data about national security requests, government requests for user information, and government demands for content removal, as well as notices of copyright and trademark infringement.

We’re committed to transparency and we’ve continued iterating on our reports every six months to provide you with the most comprehensive data possible. In our previous transparency report, we began surfacing the number of requests we receive from users exercising their privacy rights. These figures include data access requests, deletion requests, and do not sell requests. We’re following that practice again this round and plan to continue going forward.

We also previously mentioned that our transparency reports for both the WordPress.com and Tumblr platforms were moving closer together following Automattic’s acquisition of Tumblr in 2019. In the first half of the year, we created a unified landing page as a jumping-off point for both individual reports. We’re now taking that one step further by bringing both reports into the same site, which should make navigating and comparing data much easier.

More on that below!

WordPress.com and Tumblr

When it comes to democratizing publishing, WordPress.com and Tumblr have always been very philosophically aligned. Both platforms began transparency reporting in 2013, and our reports have evolved along similar lines over the years. Of course, users interact with our products in different ways, which in turn impacts the volume and types of requests we receive for each platform, and it’s been interesting to see how those trends develop through our transparency report data.

Comparing that data in past years hasn’t always been easy, however, as the information existed in different places and was presented in different formats. Beginning today, both reports can be found on the same site, with the WordPress.com report available here and the Tumblr report available here.

We’ve also taken steps toward unifying the format of our data, presenting information on things like government requests and intellectual property disputes in clean tables that make the data easier to read and a bit more straightforward to compare across platforms.

Going Forward

While we’re excited about these developments, there’s more we’d like to do in upcoming transparency reports. Specifically, there’s room to unify our data-tracking methods between platforms so that we can get closer to an apples-to-apples comparison. Complete parity isn’t the ultimate goal, but we hope to make it easier for folks reviewing the data to do analysis and identify correlations.

Similarly, Automattic has a family of other products including recent additions such as Day One and Pocket Casts. If we begin receiving relevant requests related to these platforms, our aim will be to present data broken out by individual product in the interest of greater clarity and transparency.

These are just a couple of goals we have in mind, so please watch this space for future updates.

And, as always, please drop us a line if you have questions or suggestions about our transparency reports!

The Digital Services Act: Defending the Digital Single Market and the Open Internet

A coalition of technology companies consisting of Automattic, Jodel, Seznam, Twitter, and Vimeo has published a joint letter titled “The Digital Services Act: Defending the Digital Single Market and the Open Internet”. The letter highlights their concerns over the potential fragmentation of the EU’s single market for the Internet as a result of national initiatives that overlap or conflict with the Digital Services Act Proposal.

In the letter – the second of its kind since December – the companies urge policymakers to preserve the principles of the EU’s Digital Single Market by discouraging unilateral national measures that build regulatory borders across the region – borders that are particularly challenging for small-to-medium size companies and which have the net effect of further empowering the largest companies operating in this space. New EU legislation must also include due process protections to ensure that European precedents aren’t copied elsewhere to pursue oppressive political agendas.  

Specifically, the letter highlights how: 

  • The DSA should maintain prohibitions on general monitoring and focus on illegal content – a concept that must be clearly defined and adopted across the EU. 
  • The DSA’s approach to content moderation should be future-proof and proportionate to perceived harm, moving past the leave-up-take-down models. 
  • The DSA should strengthen provisions on open and interoperable standards across online platforms and establish clear criteria for “very large online platforms” that consider the entire Internet ecosystem and not just a handful of companies. 
  • The DSA should increase online transparency via flexible and tailored requirements across sectors in the areas of policy enforcement, data access for researchers, and disclosures on coordinated actions to manipulate platforms.

Furthermore, the signatories are calling for a balanced approach to content moderation. The new rules must reflect the everyday reality of the internet and provide legal clarity over what constitutes illegal content. New legislation should allow for flexible, differentiated requirements across sectors that consider the Internet’s entire information ecosystem (e.g., platforms that rely on community moderation).

On this occasion, the letter’s signatories stated: “The Open Internet has been an unprecedented catalyst for economic and social development. With the Digital Services Act, the EU has the opportunity to build on its commitment to an Open Internet. The EU could set a coherent benchmark that discourages divergent national rulemaking and which sets a model for the rest of the world to follow. We urge policymakers to avoid a one-size-fits-all approach and adopt rules promoting consumer choice, open standards between platforms, meaningful transparency and a proportionate model for content moderation”.

Automattic, Jodel, Seznam, Twitter and Vimeo.

Download the letter here (PDF).

Transparency Report Update: July – December 2020

It’s time once again for the bi-annual update to Automattic’s transparency report covering the period from July 1, 2020 – December 31, 2020. As ever, we’re providing details about government requests for user information as well as government demands for content removal. We’re also providing information on intellectual property disputes such as DMCA takedown notices and reports of trademark infringement.

Transparency remains a top priority for us and we’re always looking for better ways to present this valuable data. Last year, we announced that we were making a CSV file of our transparency report data available for the first time. We’re thrilled to continue this approach for the July through December 2020 reporting period. For those looking to slice and dice our data more granularly, you can download a copy here.
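For readers who want to slice the CSV themselves, a minimal sketch like the one below can tally figures by country using only the Python standard library. The column names (`country`, `requests`) are assumptions for illustration only; check the header row of the actual download before adapting it.

```python
import csv
from collections import defaultdict
from io import StringIO

def totals_by_country(csv_text, country_col="country", count_col="requests"):
    """Sum a numeric column grouped by country.

    NOTE: the column names are hypothetical stand-ins, not the
    real report schema -- inspect the downloaded CSV first.
    """
    totals = defaultdict(int)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row[country_col]] += int(row[count_col])
    return dict(totals)

# Example with made-up data in the assumed format:
sample = "country,requests\nTurkey,23\nRussia,5\nTurkey,3\n"
print(totals_by_country(sample))  # {'Turkey': 26, 'Russia': 5}
```

To run this against the real file, replace `StringIO(csv_text)` with an open file handle and pass the actual header names.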

For the first time, we’re also reporting on the number of requests we receive from users exercising their privacy rights. Specifically, these statistics include data access requests, deletion requests, and do not sell requests. You can learn more about these requests and Automattic’s commitment to user privacy here.

Further, we previously mentioned that we’ve been working with our colleagues at Tumblr to better align our data gathering processes and present a unified transparency report under one roof. We’re now one step closer to this goal and you can find our new landing page here which will direct you to the specific report you’re looking for.

Below, you can find a few more interesting highlights!

Intellectual Property

In our last update, we mentioned that the total volume of DMCA takedown notices received in the first half of 2020 was just shy of 10,000—nearly double the volume of the previous six months. We also pointed out that this spike was the result of a single complainant: the anti-piracy company, 3ants. The Madrid-based firm was our top complainant by volume last reporting period and they hold that title again for notices received between July and December 2020. However, while many brand protection companies have little interest in open dialogue with hosting platforms, we’ve been really pleased by how receptive 3ants has been to our feedback.

Aside from sheer volume, the primary challenge with processing takedown notices from companies that submit them en masse is that the notices typically require additional scrutiny by our Trust & Safety teams to ensure validity. It’s common for anti-piracy firms to take a “kitchen sink” approach with their notices which often cite content that we don’t host or that we have already removed, and they tend to bury the required statutory DMCA language in a sea of unnecessary legalese. Perhaps most frustrating is that we rarely receive any replies from these types of companies when we ask for clarification about a notice.
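The kind of triage described above can be pictured with a small sketch. Everything here is hypothetical—the domain sets, URLs, and function are stand-ins for illustration, not our actual tooling—but it shows the basic idea: duplicate citations, URLs we don’t host, and content already removed all get filtered out before human review.

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for illustration only.
HOSTED_DOMAINS = {"example.wordpress.com"}
ALREADY_REMOVED = {"https://example.wordpress.com/old-post/"}

def triage(urls):
    """Split cited URLs into actionable vs. skippable items."""
    actionable, skipped = [], []
    for url in dict.fromkeys(urls):  # drop duplicates, keep order
        host = urlparse(url).netloc
        if host not in HOSTED_DOMAINS or url in ALREADY_REMOVED:
            skipped.append(url)   # not our content, or already gone
        else:
            actionable.append(url)
    return actionable, skipped

notice_urls = [
    "https://example.wordpress.com/post-1/",
    "https://elsewhere.net/x",                  # not hosted here
    "https://example.wordpress.com/post-1/",    # duplicate
    "https://example.wordpress.com/old-post/",  # already removed
]
actionable, skipped = triage(notice_urls)
print(actionable)  # ['https://example.wordpress.com/post-1/']
```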

So, given the landscape you can imagine our surprise when we contacted 3ants with our concerns earlier in the year and received a prompt reply! In addition to welcoming our input, 3ants committed to adjusting their approach for takedown notice submissions to Automattic. Since then, their notices have arrived through our DMCA form, each notice only cites content from a single website rather than dozens, and we now receive replies to our requests for clarification. Each of these improvements means we can more quickly review and process takedown notices from 3ants which is a big win for both rights holders and site owners alike.

Unfortunately not every complainant is as cooperative as 3ants. For years we’ve been speaking out against abuses of the DMCA such as the use of automated systems which flood platforms with takedown notices regardless of context. These methods are often prone to error and make it difficult for platforms to prioritize valid notices submitted by individual rights holders.

But what happens when human-generated notices cause similar issues? This is the situation we found ourselves in during the second half of 2020. Across those six months, we received over 2,800 takedown notices which—despite being submitted by humans—contained many of the same issues we encounter with automated notices. We informed the complainant in each instance that they had targeted content hosted elsewhere, only to receive multiple follow-ups demanding that we disable access to the materials (among other threats). These follow-ups, which fall outside of established DMCA processes, are a major time sink for platforms, which must verify whether any new and valid notices are included in these replies.

This type of behavior undermines the spirit of the DMCA and is ultimately detrimental to rights holders. While platforms like ours are forced to engage with these tactics, individual content creators must wait patiently as they navigate the DMCA process in a more typical fashion. It also obscures transparency report data, as it’s challenging for hosting providers to effectively log excessive follow-ups that fall outside of expected DMCA processes.

Government Takedown Demands

Fortunately, both IRU requests and government takedown demands have remained relatively low, though we did see slight upticks in both categories compared to our previous report. IRU requests increased from a total of 3 in January – June 2020 to a total of 11 in July – December. Similarly, the volume of government takedown demands accompanied by a court order jumped up from 12 to 26 over the same timeframe.

The vast majority of these court orders came from Turkey, with a total of 23 over the last six months of the year. Turkey also holds its top position as the country where the most sites are geoblocked, with a total of 470 blocked sites (followed by Russia at 343 and Pakistan at 301).

Although demands from Pakistan have decreased following its meteoric rise to third place in terms of geoblocked content over the past several years, its demands still rely on some questionable reasoning. For example, we recently received a content removal demand aimed at a blog post that “harms the reputation of government officials.”

We hope you find this information interesting and helpful. As always, please contact us if you have any questions or suggestions about these reports.

Automattic Joins Internet Works

Automattic joined several of our peer companies today to launch Internet Works, a coalition dedicated to working with Congress to promote the benefits of Section 230, a provision of the Communications Decency Act. We look forward to sharing with elected leaders and their staff how Section 230 allows our trust and safety teams to thoughtfully balance content moderation and freedom of expression to provide a safe and equitable community for our users.

You can read the full press release here.

Crossroads for the Open Internet

This week, Automattic joined Mozilla, Twitter, and Vimeo to provide recommendations for how the EU’s Digital Services Act proposals might preserve the open Internet and cultivate a healthy, competitive market for small and emerging companies. We look forward to working with EU leaders as they chart a course for the future of the web.

The full letter can be read here.

Transparency Report Update: January – June 2020

Today, we present our update to the Automattic Transparency Report. This most recent data covers the first six months of 2020, from January 1, 2020 – June 30, 2020. 

Please read through the report here.

In addition to our full report, we’re making a CSV file of our data available for the first time. Those looking to parse through the numbers can download the data from our current report here.

We are pleased to remind you that in late 2019, Automattic acquired Tumblr. Our teams have been working hard to align our data gathering processes across platforms so that in the future we can present our transparency report in a more unified way. Data regarding information requests and takedown demands aimed at Tumblr content for the January to June, 2020 period will be published soon. In the meantime, you can find Tumblr’s most recent transparency reports at this link.

At Automattic, we have an ongoing commitment to transparency. In this most recent report, we are once again presenting data on information requests, takedown demands, and intellectual property complaints that we have received during this reporting period. There are two notable data points that we would like to highlight this time around: the volume of Internet Referral Unit (IRU) reports, and DMCA takedown notices that we received. 

Internet Referral Unit Reports

We have previously discussed both the challenges and successes that we experience when addressing extremist content while protecting freedom of speech. In response to extremist content, we have worked with a number of IRUs from around the world. These referral units are state authorities who report to us material that they believe is in violation of our Terms of Service. Upon receipt of these reports, we review user accounts and take action when content or behaviors run afoul of our long-standing position on terrorist activity.  

We are pleased to report that our efforts in this area appear to have paid off, as there has been a dramatic decrease in IRU reports during this period. In total, we received three reports. This is a noteworthy decline in volume compared to the 34 reports we received during the previous reporting period in 2019 (as well as the 82 reports we received during the second half of 2017, when we first began reporting on these specific requests).

Click here for more information.

DMCA Notices

The number of DMCA takedown notices received during this reporting period — 9,462 — was almost double what it was in the last reporting period. This is due to a single complainant: the anti-piracy company 3ants, which submitted a nearly record-breaking 4,544 notices in the first six months of the year. The last time we experienced this volume of takedown notices from a single complainant was during the second half of 2018, when Link Busters submitted 4,965 notices during a six-month period.

Unfortunately, companies such as these often submit multiple, duplicate notices targeting the same material, particularly when they use bots. Additionally, many of these takedown notices target content that we either do not host or have already removed. Because so many of these takedown notices are automated, replies are not monitored by humans, and our requests for additional information often go unanswered. In this case, 3ants has taken it to the extreme, and we’ve had to adjust our internal processes to deal with the influx of thousands of duplicate notices.

For more information, please see our Intellectual Property page.

We hope you will find this information interesting and helpful. Please contact us if you have any questions on this report. Additionally, if there are other types of data you would like to see in future reports, we would love to hear from you.

We’re Against Bots, Filtering, and the EU’s New Copyright Directive.

The European Union is considering a dangerous new amendment to its Copyright Directive that will restrict speech and expression and potentially change the way many in Europe, and worldwide, experience the internet.

Article 13 of the Directive on Copyright in the Digital Single Market would effectively require Internet platforms – of all sizes – to proactively filter any content posted by their users, with the goal of stamping out materials that allegedly infringe on copyrights…before they are even posted.

The Directive would also require platforms to respond to almost any copyright infringement notice submitted by a rights holder, without adequate standards for completeness or effective recourse in the event of mistakes or abuse.

We’re against the proposed change to Article 13 because we have seen, first-hand, the dangers of relying on automated tools to police nuanced speech and copyright issues. Bots or algorithms simply cannot determine whether a blog post, photo in a news article, or video posted to a website is copyright infringement or legitimate use. This is especially true on a platform like WordPress.com, where copyrighted materials are legitimately posted in the context of news articles, commentary, criticism, remixing, memes — thousands of times per day.

We’ve also seen how copyright enforcement, without adequate procedures and safeguards to protect free expression, skews the system in favor of large, well-funded players, and against those who need protection the most: individual website owners, bloggers, and small publishers who don’t have the resources or legal wherewithal to defend their legitimate speech.

Based on our experience, the changes to Article 13, while well-intentioned, will almost certainly lead to a flood of unintended, but very real, censorship and chilling of legitimate, important, online speech.

There’s a vote coming up on this proposal later this month, and fortunately there’s still time to stop it. If you’d like to contact a Member of the EU Parliament and voice your concerns, you can do that here.

Automattic at RightsCon 2018

On May 16, 2018, Automattic is teaming up with Global Partners Digital for a panel on content moderation and changes to legal liability for online platforms being considered by governments around the world.

During our RightsCon session, Content Regulation in the Digital Age, we will explore the role of publishing platforms and social networks in the face of increasing calls to regulate content, and consider the definition of ‘content’ as it relates to ever-expanding forms of expression facilitated by the internet. Automattic has long been a strong advocate for our users and for transparency in reporting the actions we take with regard to content moderation on our platforms. We look forward to sharing details of this work as part of the discussion at RightsCon.

Moderated by Charles Bradley, Executive Director of Global Partners Digital, the hour should prove to be a lively discussion between panelists as well as an engaging Q&A with conference attendees. During the week, Global Partners Digital will also be launching their white paper on a rights-respecting model of online content regulation.

This is Automattic’s second consecutive year attending RightsCon. We’re pleased to return and connect with industry professionals and digital rights groups from across the globe. Our trust and safety team will be present throughout the week. Also keep an eye on the schedule for details about our official after party following the conference sessions on Thursday, May 17.

RightsCon Toronto takes place Wednesday, May 16 to Friday, May 18 at the Beanfield Centre at Exhibition Place. Tickets are available now!

Transparency Report Update: July – December 2017

Today we invite you to read our bi-annual update to the Automattic transparency report, covering the period from July 1, 2017 to December 31, 2017. Read through to find details on the number of information requests and takedown demands we received from governments, as well as from intellectual property rights holders under the DMCA.

Our commitment to transparency pushes us to find ways to improve what we share, and we’re pleased to announce three major additions for this reporting period. In this update, we include statistics on the number of notices we’ve received from Internet Referral Units (IRUs). We also share details on the country blocks that we’ve been forced to put in place in countries like Turkey and Russia. Finally, we include more insight into the government takedown demands we receive and how we respond. Want to learn more? Read on!

Internet Referral Unit Stats

As we acknowledged in our last blog post, addressing extremist content while simultaneously protecting freedom of speech is a major challenge for online platforms, including WordPress.com. Currently, state authorities report to us material that they believe to be in violation of our Terms of Service, which we then evaluate. In an effort to shed more light on this process, we’ve created a whole new section in our report, where we’ll share updated figures every six months. Click here for more information.

Country Block Transparency

Automattic routinely receives censorship demands from government agencies around the world concerning content published on WordPress.com. We go to great lengths to fight these, but are sometimes left with no choice but to geoblock content in certain regions. We otherwise risk having access to our services completely cut off.

We’ve experimented with different ways of making this data public in the past (including submitting details to the Lumen Database, and creating a “mirrored archive” for Russia), but until now there has been no central place to find details on all currently geoblocked sites. Our new page changes all that, and includes the lists in text format to hopefully make it easier for censorship-monitoring sites like OONI and Blocked to import and track.
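As a rough illustration of how a monitoring project might import these lists, here is a minimal sketch. It assumes a plain-text format with one site per line and `#` comment lines—an assumption made for illustration, not a documented specification of the published files.

```python
def parse_blocklist(text):
    """Parse a plain-text list of geoblocked sites.

    ASSUMPTION: one site per line, with blank lines and
    '#' comments ignored. Verify the actual file format
    before relying on this.
    """
    sites = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            sites.append(line)
    return sites

# Example with made-up entries in the assumed format:
sample = "# Turkey\nexample1.wordpress.com\n\nexample2.wordpress.com\n"
print(parse_blocklist(sample))
# ['example1.wordpress.com', 'example2.wordpress.com']
```

A tracker could diff the parsed list between report periods to see which sites were newly blocked or unblocked.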

More Insight Into Government Takedown Demands

Previously, we didn’t include situations where we removed content because it violated our policies in the “percentage of requests where content was removed.” This is because we would have removed the content regardless of how we came across it. In other words, the fact that the content was brought to our attention via a government demand or court order had no bearing on the action we took. For example, if a government entity reported a site for selling controlled substances, we’d suspend the site, but not count that as a request where content was removed. However, we recognized that we could provide more insight about the actions we take and the types of reports we’re seeing from each country by sharing a new data point: “percentage of requests where content was removed due to a violation of our policies.” To minimize potential confusion, we’ve renamed the original description to “percentage of requests where content was removed solely in response to the demand.”

We hope that you find all of this new information interesting, and we’d encourage other platforms to share their own experiences in these areas.

Tackling Extremist Content on

At Automattic, we build tools that make it easy for anybody to share their voice online. WordPress.com is home to tens of millions of sites, covering a huge variety of topics and issues—from news blogs, to websites for Fortune 500 companies, to thriving ecommerce stores. This also means that we inevitably host content which some may find offensive, distasteful, or even abhorrent.

Like most online hosts, we do not pre-screen the content that our millions of users post to our services. We evaluate reports of content that goes against our User Guidelines or Terms of Service, and take some measures to proactively search out spam or other abusive content that’s posted to our services. We try to make the process for reporting illegal or abusive sites as transparent and simple as possible. If you see a site that you think violates our rules, please report it to us here.

One category of content that has been a focus for law enforcement and all internet companies—including Automattic—is terrorist or extremist propaganda.

The volume of these sites reported on WordPress.com is not high. In 2017, we received 131 reports from government Internet Referral Units flagging sites that violated our policies against extremist content. This is out of the millions of sites that we host. Even given the relatively low volume, we do take this issue very seriously and have developed some specific procedures to address it on our platform.