The 25th edition of our biannual transparency report, covering the period from July through December 2025, is now available. The work of Automattic’s Trust & Safety team is grounded in key principles meant to prioritize an open, safe internet and protect free expression, while ensuring users can exercise their rights under frameworks such as the Digital Millennium Copyright Act (DMCA) and the European Union’s Digital Services Act (DSA). Although the DMCA and DSA are important tools, this report shows that they are not immune to misuse—particularly as bad actors increasingly weaponize AI to exploit them.
Copyright and the DMCA
We are seeing continued exploitation of the DMCA notice-and-takedown system by third-party monitoring services—in some instances, through AI-generated mass reporting. While DMCA abuse by third-party reporters is not a new trend, it is notable that reporters are now leveraging AI to submit reports en masse, almost certainly to maximize revenue. We support strong copyright protection for our users, in keeping with our principles to empower users and uphold their rights. We believe, however, that it is important to call out such abusive behaviour, especially since it consumes valuable Trust & Safety resources that would be better directed toward processing legitimate DMCA notices and other legitimate reports.
One such company, Enforcity, has submitted large volumes of AI‑generated DMCA notices targeting non-existent content on WordPress.com and WordPress VIP. In this reporting period, Trust & Safety processed a total of 838 inactionable reports from Enforcity alone. Enforcity and a handful of other third‑party reporters continue to file abusive notices despite repeated clarification that we do not host the material in question. We suspect this behaviour is largely driven by payment structures that reward the submission of reports regardless of their legitimacy.
EU Regulation and Defamation Claims
Our Trust & Safety team regularly receives notices from complainants citing the DSA and the GDPR that request the removal of content from our platforms. Notably, during this reporting period, both of these pieces of European legislation have been cited—by attorneys and private individuals—as removal tools for content that is alleged to be defamatory. Online defamation is complex, often blurring the line between a user’s opinion and factual information. We routinely push back against demands that undermine protections for free expression and, unless legally prohibited, notify users of complaints made against them. Regulatory tools like the DSA and GDPR can be used to pressure platforms into removing the highly contextual content that is often the subject of defamation claims. These laws have equipped users with greater rights to protect themselves, but they have also, albeit unintentionally, provided a means for certain individuals to attempt to curb free expression online.
Phishing Scam Alert
While we don’t typically include spam-related data in our Transparency Report updates, transparency and user safety are key pillars of Trust & Safety’s work. We therefore want to draw attention to a significant rise in phishing scams—particularly email and subscriber-based attempts to obtain the private information of our users by impersonating WordPress.com, through means such as fake email messages. In an unfortunate twist, advancements in AI mean that these attacks are becoming more sophisticated and harder to spot, including by our teams. Protecting users’ private information is central to Automattic’s Trust & Safety mission, and so it is important that we remain attentive to user reports and vigilant against the increasingly novel scams carried out by bad actors.
We welcome you to take a look through the data. As always, let us know if you have comments, suggestions, or requests.
The full transparency report is available here.