
Is Everyone Lying About Their SPF Claims? It's Complicated.

Published April 7, 2026
Janna Mandell x Troy Ayala

Key Takeaways:

  • SPF test results vary widely due to human and lab variability.
  • Targeted testing and commercial pressure can incentivize manipulated or unreliable SPF claims.
  • Brands bear reputational and legal risk; they must audit labs and use multiple tests.

Sunscreen has a credibility problem, and it may be about to get worse.

Even before Australia had a sunscreen crisis last year, which led to the recall of more than 20 sunscreen products, the sunscreen industry had to battle everything—from fearmongering and conspiracy theories to the rise of #TanTok—to convince people to wear sunscreen.

But the Australian controversy wasn’t just an Australian problem. Instead, it shone a light on how esoteric SPF testing can be, opening a Pandora’s Box of doubt and ethical questions. If brand founders do not understand how to read an SPF report, how can they know if the data has been manipulated? If variation is ubiquitous in SPF testing, how can brands and their manufacturers stand by a single SPF lab test that was performed prelaunch?

And, if the testing lab is given a target number (it is) and financially incentivized to meet it, how do consumers know the SPF test results are valid?

It's complicated.

While researching our story on the Australian sunscreen scandal, a scientist who had previously worked at the ethically questionable testing lab Princeton Consumer Research (PCR) and asked to remain anonymous told BeautyMatter exclusively that only a fraction of the brands using PCR were Australian; the majority were US brands.

This prompted us to ask the top-selling US sunscreen brands at Sephora, Ulta Beauty, CVS, Target, and Credo to provide their in vivo SPF test reports to prove that their data had not been manipulated, as the Australian brands' had.

Of the 36 brands BeautyMatter contacted, 30 either opted not to participate, sent us the wrong page of the report, refused to submit the in vivo test results page, or ignored our request completely. Of the six that submitted their data, only Beauty of Joseon and Badger sent us valid test results.

This discrepancy between a sunscreen label and its actual SPF test results is nothing new. Consumer Reports has released its sunscreen ratings since 2013, and every year the publication exposes mislabeled sunscreens, calling out inconsistencies between the labels’ SPF values and its own independent test results. But critics have long questioned Consumer Reports’ testing methods and its reliance on only three panelists.

An Unpredictable Truth

US sunscreen is regulated by the Food and Drug Administration (FDA) as an over-the-counter drug, which means brands and manufacturers must follow the FDA’s sunscreen monograph for testing SPF, which measures UVB protection. The monograph requires in vivo testing (using human panelists), but also relies on human judgment to interpret the results.

To measure the Minimal Erythema (skin redness) Dose (MED), the lab is required to test a minimum of 10 evaluable human subjects. A lab technician applies 2 mg/cm² of the sunscreen sample to test sites located on the participants’ backs and leaves adjacent test sub-sites unprotected to compare how the skin behaves with and without the product.

A technician then exposes the subject’s back to a solar simulator in incremental doses. The panelists return to the lab 16-24 hours later so that a technician can observe the MED of both the protected and unprotected skin.

So, to review: a human technician applies sunscreen to human subjects, and then another human technician judges the test results. But humans are not machines, which is one reason a brand could get an SPF of 51 at one lab and 28 at another, or get different results across batches of the same sunscreen.
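The arithmetic behind the in vivo method described above reduces to a per-subject ratio: each panelist’s SPF is the UV dose needed to redden protected skin divided by the dose needed to redden unprotected skin, and the reported number is a panel average. A minimal sketch in Python, using purely illustrative dose values (not drawn from any real test report):

```python
# Per-subject SPF under the in vivo method: the ratio of the UV dose that
# produces minimal redness on protected skin (MEDp) to the dose that does so
# on unprotected skin (MEDu). All dose values below are illustrative only.

def subject_spf(med_protected: float, med_unprotected: float) -> float:
    return med_protected / med_unprotected

# Hypothetical 10-subject panel: (MEDp, MEDu) in arbitrary dose units.
panel = [
    (1300, 26), (1210, 24), (1510, 29), (1180, 25), (1420, 27),
    (1330, 28), (1250, 23), (1390, 26), (1460, 30), (1270, 24),
]

spfs = [subject_spf(p, u) for p, u in panel]
mean_spf = sum(spfs) / len(spfs)
# Note the spread across subjects: real panels show this kind of
# variability, which is why identical per-subject values are suspect.
```

Even in this toy panel, no two subjects produce exactly the same SPF, which is the normal behavior the experts quoted below describe.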

Since 2007, Curtis Cole, PhD, President of Sun & Skin Consulting, has been a US delegate to the International Organization for Standardization (ISO), serving on the technical committee that develops global standards for sun protection testing. 

Dr. Cole told BeautyMatter that there is actually no true SPF number. “When we run the test, we’re only estimating the average.”

Dr. Cole’s most recent paper, “The Variability of In Vivo Sunscreen Sun Protection Factor Values,” showed there is so much inter-laboratory variability in SPF testing that “even with the best laboratories recommended by ISO experts, audited by ISO experts, we still saw ±64% variability.”

Approved in December 2024, two new ISO protocols (one of which Dr. Cole helped architect) were created to provide SPF testing labs with a more ethical and objective alternative to the ISO 24444 (2019) in vivo method.

ISO 23675 (a double-plate method) is an in vitro method in which a robot applies the samples to plates that mimic human skin. ISO 23698 (hybrid diffuse reflectance) combines in vitro and in vivo methodologies and still requires human subjects, but they are exposed to a very low UVA dose, and the SPF is calculated using optical instrumentation rather than visual grading by a technician.

Both new ISO standards have been widely accepted across Europe and are under evaluation for adoption in South America, Asia, and Australia. But here in the US, the FDA still only accepts in vivo testing for SPF results according to its monograph.

Chelcie Mejia, Photobiology Manager at Eurofins CRL Cosmetics in the US, oversees a lab that tests hundreds of sunscreens. She confirmed to BeautyMatter that one of the main reasons for the wide variability in SPF results across testing labs is human involvement. Mejia pointed out that every scientist should apply sunscreen the same exact way as per the monograph, “but one [technician] could be more light-handed versus another person.”

Not only are all applications done by humans, so are the evaluations. “We're using a grading scale that's provided to us by the monograph. It [the evaluation] could be a difference of one and 0.5; that could be the deciding factor of whether or not your sample passed or failed,” Mejia said.

There is also the ethical concern of purposely burning humans. Sure, the UV dose is controlled, and all subjects have consented prior to testing, but UV radiation is still a carcinogen, and sunburns (even on small patches) can be painful. 

In vivo testing is also very expensive, largely due to the cost of paying panelists.

Although in vivo testing has been considered the gold standard in SPF testing for the last 40 years, its variability, lack of reproducibility, and ethical issues have long made it controversial.

Recognizing these issues, Australia’s Therapeutic Goods Administration (TGA) released a consultation paper last month, proposing several options to “improve the current regulatory framework” for sunscreen. These include increased oversight of third-party testing labs and changes to SPF labeling, removing the numeric value and replacing it with “low, medium, high, very high.” The TGA is currently seeking feedback on the proposed paper.

Under Pressure

When a consumer shops for sunscreen in the US, they expect the SPF number on the bottle to match the level of protection inside. What they don’t know is that the number on the bottle is a measured mean (an approximation) with inherent variability baked in.

According to Dr. Cole, the FDA does not set a requirement around variability within a test, but it does require producers to subtract the lower one-sided 95% standard error from the average SPF. The result is the highest SPF a brand can claim on the label, giving a more conservative estimate of the protection one can expect.

Under ISO 24444, the results of a given SPF test must fall within a specified range. Per this ISO method, “Confidence limits (95% Confidence Interval) for the mean SPF must fall within the range of ± 17% of the mean SPF.”
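To make those two rules concrete, here is a small Python sketch with made-up panel values and standard t critical values; the actual monograph and ISO 24444 statistics carry additional rounding and procedural detail, so treat this as an approximation, not the regulatory calculation:

```python
import math
import statistics

def fda_label_spf(spfs, t_one_sided=1.833):
    """Largest whole number below (mean - t * SE): the mean minus the lower
    one-sided 95% error bound, per the FDA rule described above.
    t_one_sided = 1.833 is the critical value for a 10-subject panel (9 df)."""
    mean = statistics.mean(spfs)
    se = statistics.stdev(spfs) / math.sqrt(len(spfs))
    return math.floor(mean - t_one_sided * se)

def iso_ci_within_17pct(spfs, t_two_sided=2.262):
    """ISO 24444-style acceptance check: the 95% CI half-width must fall
    within 17% of the mean SPF. t = 2.262 is the two-sided value for 9 df."""
    mean = statistics.mean(spfs)
    half_width = t_two_sided * statistics.stdev(spfs) / math.sqrt(len(spfs))
    return half_width <= 0.17 * mean

# Hypothetical 10-subject panel for a nominal "SPF 50" product.
panel = [50, 54, 47, 52, 49, 55, 46, 51, 53, 48]
print(fda_label_spf(panel))        # 48: the claimable label sits below the mean of 50.5
print(iso_ci_within_17pct(panel))  # True: the panel's spread passes the 17% check
```

The point of the sketch: even a well-behaved panel averaging 50.5 supports a label claim of only 48 once the error bound is subtracted, which is exactly the conservatism the FDA rule is designed to produce.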

Before a lab performs an SPF test, the sponsor (the brand or manufacturer that ordered the test) gives the lab a “target number.” 

It’s like a professor giving you an answer key before the final exam. 

But the stakes are higher. 

If the lab does not meet that target, there’s a good chance the sponsor will shop for a lab that does.

Dr. Michael Traudt, Head of Clinical and Photobiology Services at Consumer Product Testing Company (CPT Labs), told BeautyMatter that his lab has lost several years of business for telling clients that their formulations did not meet the target number.

At the beginning of the process, before Dr. Traudt and his team test a formulation, he uses one of two online sunscreen calculators to estimate the SPF. “So, if they [the sponsor] give me a [target number of] 50, and the calculator gives us a 16, we reach out to the client and say, ‘With all due respect, we cannot ethically test this [in vivo] as a 50.’ Because if we use the UV dose for a 50 [on a subject], we're going to give people second-degree burns.”

Dr. Traudt said that his team will then test the product at the lower predicted value and on two volunteers. If it turns out the lab severely undershot it, Traudt and his team will increase the number at no extra cost. “99 times out of 100, we're not wrong. And when we tell them, almost every client has said, ‘Okay, thank you. We don't need any further testing.'”

Mejia sets expectations with the client from the beginning. “We like to say that we're not guaranteeing your product is going to work. We're testing it. You're sending it to us to test its efficacy and performance. You're not sending it to us, saying this is absolutely a 50, so yes, many, many times clients have left and chosen to go elsewhere.”

Brands pressure contract manufacturers because they are on a tight timeline to get orders to retailers. Manufacturers pressure testing labs because they promised a target number and delivery date to their brand clients. If a third-party SPF testing lab is pressured to hit the client's target number rather than provide an unbiased result, this could lead to inflated SPF values, particularly if the lab is at risk of losing the customer.

“If a lab gives a test sponsor a really low SPF, chances are the sponsor may go to another lab to get the number they need,” said Dr. Cole. “And the companies are under a tremendous amount of pressure from their marketing [departments], so they tell the manufacturer, ‘Test it wherever. Just get me my number so I can sell my product.’ Full stop.”

John Staton, an Australia-based SPF testing expert and a member of the ISO TC 217 W.G.7 Committee and the AS/NZS Sunscreen Committee, told BeautyMatter that “baking in” variability in testing protocols can occur for several reasons. “Two of these revolve around test laboratories requesting a ‘target SPF.’ This requirement originated from FDA guidance and was adopted into ISO only in the latest version of ISO 24444, with a view to harmonization. I did not support this.”

Staton said that the target number introduces what he would describe as a temptation to run a ritualistic approach to test study design, and negates one key principle of clinical experimentation: “The blinding of what should be a determination, not a certification of efficacy.”

“The other obvious temptation is that of fraud,” Staton said.

Too Perfect

The four brands that provided us with their problematic SPF in vivo test reports genuinely believed they had sent valid reports; unfortunately, that was not the case. Three out of the four were tested in different labs, but shared the same tell: five of the 10 human subjects were rated with identical SPF numbers, as were the other five with a different uniform SPF.

Human panelists analyzed by human technicians aren’t the only source of variability in SPF test results. The lamps the lab uses during a test may have slightly different intensities, which Dr. Cole says are accounted for in the equation. Unprotected MEDs can change from day one to day two, resulting in a different SPF number. “And that's why having a precise, exact number for five out of 10 panelists is a red flag. We expect to see more variability than just two numbers.”
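A brand auditing its own reports could screen for this pattern with a trivial check. The 50% threshold below is an illustrative assumption on our part, not a criterion from the FDA monograph or ISO:

```python
from collections import Counter

def identical_value_red_flag(subject_spfs, max_share=0.5):
    """Flag a panel whose most frequent per-subject SPF covers an implausibly
    large share of subjects. max_share=0.5 is an illustrative threshold,
    not a regulatory criterion."""
    if not subject_spfs:
        return False
    most_common_count = Counter(subject_spfs).most_common(1)[0][1]
    return most_common_count / len(subject_spfs) >= max_share

# The pattern described above: five subjects at one value, five at another.
print(identical_value_red_flag([50.2] * 5 + [47.8] * 5))  # True

# A panel with normal subject-to-subject spread does not trip the check.
print(identical_value_red_flag([50.0, 50.4, 52.1, 47.2, 52.6,
                                47.5, 54.3, 53.5, 48.7, 52.9]))  # False
```

A check like this is no substitute for a scientist or consultant reading the full report, but it catches the specific tell the experts flagged.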

BeautyMatter gave Staton (who provides consultancy clients with a spreadsheet-based audit system to evaluate the major aspects of in vivo test compliance) a redacted in vivo test report posted by a US skincare brand on its Instagram page.

Staton said that in the test report BeautyMatter provided, he saw the same obvious formulaic approach to setting the exposure levels, with only two SPF values differing for the protected skin.

“Eight out of 10 results [on this report] for MEDu [unprotected skin] were at exactly the same light exposure joules,” Staton explained. “The published paper on this [co-authored by Dr. Cole], which is based on a compilation of 9,400 individual test subjects, shows only 0.6% (6 in 1,000) of test subjects sit exactly on the best fit line. And here [the redacted SPF report BeautyMatter provided] shows 80%, i.e., 800 out of 1,000 skins, behaving the same way! My overall recommendation would be to be careful with relying on SPF test reports of this quality.”

“The integrity of the SPF testing results falls on the testing lab.”
Chelcie Mejia, Photobiology Manager, Eurofins CRL Cosmetics

Timeline of Deceit

Where there is room for interpretation, there is room for manipulation—and nothing proved this more than the AMA Laboratories scandal.

BeautyMatter first reported on the AMA Laboratories controversy in 2021, when the owner of the lab, Gabriel Letizia, Jr., pled guilty to defrauding AMA customers of more than $46 million by testing products on materially fewer panelists than the numbers specified and paid for by AMA’s customers. Hiring panelists is expensive, and AMA billed clients large sums of cash “to pay panelists” while in reality pocketing most of the money and hiring far fewer.

Lesson: A lab paying panelists in cash is a red flag.

In 2020, Judit Rácz, founder of INCIDecoder, suspected that K-beauty cult favorite Purito was overstating the SPF of its Centella Green Level Unscented Sun 50+ sunscreen because it listed only two filters in its INCI. Rácz paid to have the sunscreen tested at two independent labs in Poland and Germany; both rated the sample with an SPF of approximately 19. Purito pointed the finger at its contract manufacturer, but the company issued a recall and suspended sales of all three of its sunscreens.

This led to the recall or suspension of other K-beauty sunscreens from the same manufacturer, including Dr.Jart+, Some By Mi, Dear Klairs, and KraveBeauty, while Keep Cool was criticized for its delayed apology and lack of action.

The Korea Institute of Dermatological Sciences, a research organization, later tested the same Purito sample and reported an SPF of 28.4, underscoring the prevalence of inter-laboratory variability in SPF testing.

Lesson: Ask your manufacturer which third-party lab they use, then audit that lab. If you do not have a scientist in-house who can read an SPF test report, hire a consultant who can translate the data for you.

In last year’s coverage of the Australian sunscreen scandal, BeautyMatter’s PCR scientist source said he observed the lab reusing volunteers to make the volunteer data appear more robust. He also claimed to have seen “rounding up of numbers and pushing back erythemal scoring dates outside the window.”

The Australian sunscreen scandal was hardly the first mass SPF-labeling controversy, but it was unusually consequential: it prompted a sweeping TGA investigation and a cascade of recalls, cancellations, and discontinuations. The scandal was covered worldwide, yet US sunscreen brands didn’t comment, nor did they take the opportunity to share their own SPF results.

Lesson: Identical results for human subjects in an in vivo SPF test report, down to the decimal point, are a huge red flag, according to our experts. They said that while in vivo SPF test results may be close, they should not be identical. Individual subject values should be recorded as calculated, with decimals included, not rounded or stripped away.

The data is the data.

A Murder of Crows

In the US, sunscreen can be marketed without submitting efficacy data or obtaining FDA pre-approval, even though manufacturers are required to conduct SPF testing to support their label claims. But this does not stop class-action suits against brands that overstate SPF claims.

In other words, the FDA might not go after your brand for a mislabeled SPF, but the plaintiff’s bar certainly will.

“I feel like a broken record,” Meredith Petillo, Vice President of Technical Regulatory Affairs at the Independent Beauty Association, told BeautyMatter. “You cannot legislate compliance, right? And I’m not the only one to use this analogy, but think of it this way: We have speed limits on highways. They [states and municipalities] set the speed limits, but that does not stop you from exceeding them. In New Jersey, the maximum speed limit is 65, but New Jersey people usually drive at 80 miles per hour. Just because everyone is speeding does not make it legal or safe.”

And if you get pulled over and say that everyone else was driving 80, the officer is not going to care; you still broke the law, Petillo said. “By putting the rules in place, you can't prevent somebody from breaking them. And now, we’re kind of living through this reckoning.”

So, what’s going to happen if you have inflated SPF claims? “You may very well face a lawsuit,” Petillo said. “This is the most litigious country in the world.”

Currently, there are several active class-action lawsuits over mislabeled SPF claims. While these suits may have merit in that the sunscreens are mislabeled, the independent tests still showed SPF values above 15, a level the FDA considers protective.

Legal documents show Clinique is being sued over its Broad Spectrum SPF 50 Mineral Sunscreen Fluid for Face, which, according to testing, only provides an SPF of 26. L’Oréal is being sued over its La Roche-Posay Anthelios Melt-In-Milk Sunscreen, alleging it claims an SPF of 60 when it actually tested at SPF 34. And Sun Bum is being sued for its Mineral Sunscreen Lotion SPF 50 testing at 17. 

Who Is Responsible in the End?

When a sunscreen brand is exposed for overstating its SPF value, in most cases it’s the brand’s name, not the manufacturer’s or the testing lab’s, that makes headlines. It’s the brand that loses consumer trust and credibility. It’s the brand that may have to issue a recall or halt sales.

Brands need to do their due diligence when partnering with manufacturers and testing labs. But this alone won’t solve the problem. Brands need to put less pressure on manufacturers, who must put less pressure on testing labs. And brands need to commit to a lab, even if the lab comes back saying the brand needs to reformulate to get to that target number.

SPF testing can cost up to $50K per SKU per lab, and experts recommend using at least two labs to account for the variation. That makes testing a big investment for many independent brands, particularly if they have to reformulate. But is avoiding that upfront cost worth the financial and reputational risk of a potential recall?

“The integrity of the SPF testing results falls on the testing lab,” said Mejia. “But ensuring labeling, marketing, and regulatory compliance are the responsibility of the brand or the client. Consumer trust and safety should always be the main priority for us as the testing lab.”
