By Lisa Macpherson
October 20, 2025
Instagram’s recent announcement that it will be “guided” by PG-13 ratings to determine what teens will see on its platform is, at first glance, a reassuring move for parents. The Motion Picture Association’s (MPA) familiar rating system is designed to provide parents with the information they need to determine whether a film is appropriate for their children. As a former marketing executive, I can absolutely imagine the Instagram-sponsored focus group in which a group of parents concerned about their children’s exposure to toxic content on social media said something along the lines of, “I wish there were a rating system for internet content, like the movies have.” (What Instagram is trying to do here also has its roots in marketing: “Borrowed equity” refers to a strategy in which a brand leverages the existing trust, reputation, and credibility of another company, organization, or brand to enhance its own brand image.)
But what Instagram has announced isn’t that. Not even close. When compared to the actual rules and practices of the Motion Picture Association it purports to emulate, Instagram’s claim falls apart. In fact, the Chairman and CEO of the MPA, Hollywood studios’ main trade group, stated within hours of Instagram’s announcement that “…assertions that Instagram’s new tool will be ‘guided by PG-13 movie ratings’ or have any connection to the film industry’s rating system are inaccurate.” Neither Instagram nor its parent company, Meta, ever actually conferred with the MPA. Also, whether parents actually wished for it or not, Instagram’s announcement is based on a false premise: that parents can or should bear the main responsibility for ensuring their kids are only exposed to age-appropriate content in social spaces.
Here’s a rundown of the major differences between the MPA’s voluntary, transparent, and independent system and Instagram’s opaque, self-governed, and self-serving one.
I’ll start with what the two systems do, in fact, have in common: The MPA’s rating system was a voluntary film industry initiative designed to hold off government regulation. It was developed in 1968 as an alternative to the old “Hays Code” and the threat of local censorship boards. At a time when the only technology regulation Congress may – let me repeat, may – be able to agree on is the need to protect children (and 35 states are working on the same thing), the regulatory threat Instagram is facing is quite real. Both systems are also rooted in a need to insulate constitutionally protected speech – user posts and motion pictures – from government evaluation and control.
The similarities, however, end there. Whatever its origins, the MPA’s film ratings now constitute a widely adopted, industry-standard system used by studios, filmmakers, theaters, and distributors, as well as a resource trusted by a majority of parents. In contrast, Instagram’s new filter is a walled garden: proprietary and confined entirely to its own platform. Its content decisions have no external point of reference and no bearing beyond the Instagram app itself. It provides no guidance to creators on how to ensure their content will be seen by intended audiences. Built upon Instagram’s existing “Teen Accounts,” which have come under considerable criticism since their introduction last year, the new guidelines also hinge on the ability of Meta’s own artificial intelligence systems to “find suspected teens on [its] platforms and proactively place them in Teen Account settings.”
The MPA’s ratings system rests on an independent and transparent rating body and process. The Classification and Ratings Administration (CARA) is an independent group of parents (their individual identities are private to avoid attempts at influence). It is transparent about the factors considered, such as violence, language, and sexual content, providing parents with specific reasons for a rating. In addition to letter ratings, CARA provides brief descriptions of the specifics behind a movie’s rating of PG, PG-13, R, or NC-17. Instagram’s content filtering algorithms, however, constitute a black box. Instagram says that teens’ experiences in the 13+ setting will “feel closer to the Instagram equivalent of watching a PG-13 movie.” But the platform has not released the specific metrics, policies, or algorithmic rules it will use to determine what is and isn’t “PG-13,” nor has it offered to provide any data on the impacts of the initiative to researchers (or parents, or anyone else). That means parents won’t really be able to assess what their kids are going to experience. The lack of standardized, public criteria makes Instagram’s claim to be “guided by PG-13” purely performative.
Another way the MPA has established its ratings’ credibility is its public and transparent appeals process. Filmmakers who disagree with a rating can, and frequently do, contest the decision. An independent appeals board, which includes members outside the MPA, hears arguments and re-evaluates the film. Instagram’s new content controls offer no such recourse. The lack of an appeals process means Instagram’s “ratings” are final, unreviewable, and opaque, operating outside any standard of public accountability.
Instagram noted that it will continue to run “regular surveys” inviting parents to review a series of posts that it has already shown to teens, to “confirm” whether parents think they’re appropriate for teens. This method means that none of Instagram’s stakeholders – creators, teen users, parents, or regulators – will ever know what content was filtered out. That means creators can’t adapt their content or appeal its categorization, and neither teens nor parents can understand or anticipate the impacts of the overall algorithm that enforces the PG-13 filter. Nor will it be clear what its societal impacts are: For example, a recent investigation showed that Meta, Instagram’s parent company, had been restricting content with LGBTQ-related hashtags from search and discovery on its platforms.
Finally, the integrity of the MPA’s rating board is rooted in its independence from the major studios whose films the MPA rates. CARA’s raters are parents who work outside the film industry, a deliberate structure designed to ensure objective decision-making. Their purpose is to reflect the values of American parents, not to advance the commercial interests of any single filmmaker, studio, distributor, or theater. Instagram’s new system, however, constitutes “rating its own homework.” The same company that profits from distributing the content is still responsible for policing it. This inherent conflict of interest undermines any claim of objectivity and allows Instagram to define and enforce its standards in a way that serves its own advertising-based business model. (We had related concerns about the Oversight Board that Facebook, now Meta, put in place to review important and disputed content moderation cases; these are expressed here and here.)
In the company’s announcement, Instagram points to surveys showing parents’ satisfaction with its proposed “PG-13 ratings.” (Note: The parents’ “satisfaction” is based on a verbal description of the ratings system, since it hadn’t been introduced yet.) Even if parents’ satisfaction holds in real life, this is a distraction from the main point: The best way to ensure safe, healthy online experiences for kids and teens is not to restrict children’s access to technology platforms or specific pieces of content, but to require technology companies to design those services with children’s wellbeing as a primary consideration.
In Instagram’s new system, teens under 18 will be automatically placed into a special “13+” setting, and they won’t be able to opt out without a parent’s permission. Parents can also choose a new, stricter setting that restricts even more content. That shifts much responsibility for safe and healthy online experiences to individual users and their families, instead of the corporations that profit from the use of their online platforms. As we noted in our recent paper, “The Kids Aren’t Alright Online: How To Build a Safer, Better Internet for Everyone,” rather than asking children to navigate exploitative systems or parents to police every online interaction, we should demand that companies build platforms that are safe for everyone by default.
As a marketing strategy, Instagram’s attempt to borrow the equity of the MPA’s “PG-13 ratings” is clever. But the branding masks a system that lacks the accountability and transparency that define the credibility of the ratings it seeks to emulate.
