What the March 2026 update actually changed
Google’s March 2026 core update finished rolling out on April 8, 2026, after 12 days of turbulence that reshuffled rankings more aggressively than any core update since at least August 2024. [1] SE Ranking data showed 79.5% of top-3 URLs changed positions, compared to 66.8% in the December 2025 core update, and a full 24.1% of pages sitting in the top 10 fell completely out of the top 100. [11] For SEOs trying to figure out what Google actually adjusted, the operational question is whether this update specifically retooled how expertise signals are weighted, or whether the community is, once again, reading tea leaves in volatility data.
Google itself offered almost nothing. The Search Status Dashboard described it as “a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.” [3] No companion blog post. No new guidance. No named signal changes. Search Engine Journal confirmed that Google issued no specific goals or documentation beyond the boilerplate. [4]
What the data does show is a clear pattern in who won and who lost. Official and institutional sites gained visibility: Census.gov, BLS.gov, Amazon.jobs, and USAJobs all climbed on queries where they are the primary source. [11] Aggregators and intermediary platforms went the other direction. YouTube recorded the largest visibility loss in the dataset. Job aggregators like Glassdoor and ZipRecruiter declined. In health, broad consumer sites dropped while clinical and research-driven specialists gained. [11] The directional signal is hard to ignore: Google appears to be routing queries toward the entity that actually owns the information, not the site that repackages it.
Complicating the picture, the March 2026 spam update completed just two days before this core update began rolling out, and a Discover update had landed in February. [4] Three updates in five weeks makes clean attribution nearly impossible. Some ranking drops blamed on E-E-A-T changes may actually be spam-related demotions, and vice versa. Semrush Sensor volatility hit 8.7 out of 10 during peak days, exceeding the August 2024 update. [5] But that number reflects the combined effect of overlapping updates, not a single algorithmic shift.
Moving beyond topical authority to verifiable experience
For the past several years, the SEO playbook for E-E-A-T has been heavily weighted toward topical authority: publish enough content in a subject cluster, build internal links between related pages, and Google will treat you as an authority on that topic. The March 2026 results suggest that playbook is no longer sufficient on its own, even if Google has not explicitly said so.
One third-party analysis found that 73% of top-ranking pages after the update displayed detailed author credentials, up from 58% before the rollout. [12] I want to be honest about the limitations of that number: it is an unverified industry observation, not a controlled study, and correlation with ranking gains does not establish causation. But it aligns with the broader pattern visible in the Search Engine Land volatility analysis, where primary-source sites displaced intermediaries across health, jobs, government data, travel, and real estate verticals. [11]
What seems to be happening is a shift in how Google evaluates the gap between “this site covers the topic” and “this site (or this author) actually has direct access to the information.” A hospital system writing about a surgical procedure has verifiable experience that a health content mill does not, regardless of how many articles the content mill publishes about surgery. An employer posting its own job listings has a relationship to the data that an aggregator scraping those listings cannot replicate. Topical authority was always a proxy for expertise; Google now appears to be looking past the proxy toward the underlying reality.
This does not mean topical clusters are dead. It means they are table stakes. If your site covers personal finance with 200 articles but none of them are written by (or substantively involve) someone with actual financial credentials or first-hand experience managing money, the cluster alone may not protect your rankings against a site that publishes 40 articles written by certified financial planners (CFPs) with named portfolios. In my analysis of several sites that lost visibility in March, the common thread was not thin content or poor link profiles; it was the absence of any verifiable connection between the author and the subject matter.
How to demonstrate first-hand expertise in content
The SEO community has been circulating advice about author bios and LinkedIn links for years, and most of that advice predates this update. [6] What the March 2026 data suggests is that surface-level authorship signals (a name and a headshot in a byline box) are probably not what is moving the needle. Google’s systems are getting better at distinguishing between a real expert who contributed to the content and a name bolted onto a page for SEO purposes.
First-person perspective embedded in the content itself is one of the strongest differentiators I have seen in post-update winners. A dermatologist writing “In my practice, I see this reaction in roughly 15% of patients on this medication” provides a signal that no amount of research-based paraphrasing can replicate. That sentence contains a claim rooted in direct clinical experience, and it is the kind of specificity that is extremely difficult to fake at scale. Content that reads like a well-sourced Wikipedia summary, no matter how accurate, lacks this dimension entirely.
Original data, proprietary research, and unique media (photos from a job site, screenshots from a tool the author actually uses, charts built from first-party datasets) all function as experience signals. They are hard to produce without genuine involvement in the subject, which is precisely why they carry weight. A travel article with the author’s own photographs from a hotel, timestamped and geotagged, is a fundamentally different asset than one illustrated with stock photography and rewritten TripAdvisor reviews.
Primary source citations also appear to matter more than they did before the update. Sites that gained visibility in the health vertical tended to cite specific studies, clinical guidelines, or institutional data rather than linking to other secondary content sites. [11] This tracks with the broader pattern of Google favoring the source over the intermediary: if your content cites the primary source, you are at least one step closer to the information’s origin than a competitor who cites another blog that cites the source.
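One way to operationalize that check is a rough first pass over a page's outbound links, bucketing each by its domain. A minimal Python sketch follows; note that the "primary source" suffixes and domains listed here are my own illustrative assumptions for a health vertical, not a list Google has published or confirmed using.

```python
import re
from urllib.parse import urlparse

# Illustrative heuristic only: the domains treated as "primary" here
# (government, academic, a few medical journals) are assumptions you
# would tailor to your own vertical.
PRIMARY_SUFFIXES = (".gov", ".edu")
PRIMARY_DOMAINS = {"who.int", "nejm.org", "thelancet.com", "jamanetwork.com"}

def classify_citation(url: str) -> str:
    """Bucket an outbound link as 'primary' or 'secondary' by its host."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host.endswith(PRIMARY_SUFFIXES) or host in PRIMARY_DOMAINS:
        return "primary"
    return "secondary"

def audit_links(html: str) -> dict:
    """Count primary vs. secondary outbound citations in raw page HTML."""
    counts = {"primary": 0, "secondary": 0}
    for url in re.findall(r'href="(https?://[^"]+)"', html):
        counts[classify_citation(url)] += 1
    return counts
```

A high secondary-to-primary ratio does not prove a problem on its own, but run against the pages that declined, it is a quick way to see whether you were citing the source or someone else's summary of it.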
Auditing your content for new expertise signals
Search Engine Journal recommended waiting at least one week after the rollout completed (so, after April 15) before drawing conclusions from Search Console data, and accounting for the spam update overlap by using a baseline from before March 24. [4] That is sound advice, but the audit itself needs to go deeper than comparing traffic curves.
Start by isolating pages that lost significant impressions or position between your pre-March 24 baseline and post-April 15 data. Then ask a question that most SEO audits skip: for each declining page, can you identify a specific human being whose verifiable experience or credentials back the claims in the content? If the answer is “no, it was written by a freelancer who researched the topic,” that page is a candidate for the kind of demotion this update appears to have triggered. This is not about penalizing freelancers; it is about whether the content carries any signal of genuine expertise beyond competent summarization.
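The isolation step can be scripted if you export the "Pages" performance report from Search Console for both windows. Below is a standard-library sketch; the filenames are hypothetical, the thresholds are arbitrary starting points rather than anything derived from the update, and the code assumes the export's usual Page/Impressions/Position columns.

```python
import csv

def load_pages(path: str) -> dict:
    """Load a Search Console 'Pages' CSV export into {url: (impressions, position)}."""
    with open(path, newline="", encoding="utf-8") as f:
        return {
            row["Page"]: (int(row["Impressions"]), float(row["Position"]))
            for row in csv.DictReader(f)
        }

def declining_pages(baseline: dict, current: dict,
                    impression_drop: float = 0.3,
                    position_loss: float = 3.0) -> list:
    """Flag URLs that lost a large share of impressions or slid in average position.

    A URL missing from the current window is treated as fully dropped.
    """
    flagged = []
    for url, (imp_before, pos_before) in baseline.items():
        imp_after, pos_after = current.get(url, (0, 100.0))
        big_impression_loss = (imp_before > 0 and
                               (imp_before - imp_after) / imp_before >= impression_drop)
        big_position_loss = pos_after - pos_before >= position_loss
        if big_impression_loss or big_position_loss:
            flagged.append((url, imp_before, imp_after, pos_before, pos_after))
    return flagged

# Hypothetical filenames for the two comparison windows:
# before = load_pages("gsc_pages_pre_march24.csv")
# after = load_pages("gsc_pages_post_april15.csv")
# for row in declining_pages(before, after):
#     print(row)
```

The output of a script like this is only the candidate list; the expertise question in the paragraph above still has to be answered page by page, by a human.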
Compare your declining pages against the pages that replaced them in the SERPs. Look at what those new top-ranking pages have that yours do not. In the verticals most affected by this update (health, jobs, government data, travel, real estate), the winners tend to share a few characteristics: named authors with externally verifiable credentials, content that includes first-person observations or proprietary data, and citations that point to primary sources rather than other secondary content. [11]
| Audit dimension | Pre-update baseline | Post-update target |
|---|---|---|
| Author attribution | Byline with generic bio | Named expert with externally verifiable credentials (LinkedIn, institutional affiliation, published work) |
| Experience signals in body | Third-person research summary | First-person observations, proprietary data, original media |
| Source citations | Links to other blogs and secondary content | Links to primary sources (studies, government data, official documentation) |
| Content origin | Freelance-written from SERPs | Subject-matter expert drafted or substantively reviewed with visible contribution |
One thing I would caution against is treating this audit as a checklist to game. Adding an author bio to a page that was written by someone with no real expertise in the topic is not going to fix the underlying problem. Google’s systems are increasingly capable of evaluating whether the content itself reflects genuine knowledge, not just whether the metadata claims it does. If your audit reveals that most of your content was produced by generalist writers without subject-matter involvement, the fix is not cosmetic; it is structural.
Some sites that lost visibility in March had technically correct author pages with schema markup, bios, and social links, but the content itself read like it could have been written by anyone with access to Google and 30 minutes. [13] That gap between the authorship wrapper and the content’s actual depth is exactly the kind of mismatch that a quality-focused core update can expose.
Rethinking your author and expert sourcing strategy
For publishers who rely on freelance content production (which is most of them), the March 2026 update creates a real operational problem. Hiring subject-matter experts to write content is expensive, slow, and difficult to scale. Most genuine experts are not professional writers, and most professional writers are not genuine experts. The content industry has been built on the assumption that a skilled generalist writer can produce authoritative content on any topic with enough research, and that assumption is now under pressure.
There are a few models that seem to work better in the post-update environment. One is the expert-as-source model, where a professional writer interviews a credentialed expert and weaves their direct quotes, anecdotes, and first-person insights into the piece. The expert gets a byline or co-byline, and the content carries genuine experience signals because it literally contains the expert’s words and observations. This is how traditional journalism has always worked, and it is striking that SEO is only now arriving at the same conclusion.
Another approach is the practitioner-author model, where the person doing the work (the doctor, the financial planner, the software engineer) writes or dictates the content, and an editor shapes it for readability and SEO. This produces the strongest expertise signals but is the hardest to scale. It also requires a different kind of editorial relationship, one where the editor is adding structure and clarity rather than generating the substance.
A third option, and the one I think is most underexplored, is transparent expert review. Publish content written by a competent writer, but include a visible, named expert review with the reviewer’s specific comments or additions. Not a generic “medically reviewed by Dr. Smith” badge, but actual inline contributions: “Dr. Smith notes that this dosage recommendation changed in the 2025 AHA guidelines” or “Per our review with [named CPA], this deduction applies only to pass-through entities.” This gives the content a layer of verifiable expertise without requiring the expert to write the entire piece.
What will not work, and I think this is worth stating plainly, is the approach many sites have taken of creating elaborate author pages for people who did not meaningfully contribute to the content. Google’s quality raters have been trained to evaluate whether content reflects genuine expertise, and the algorithmic systems appear to be catching up to that standard. A fake expert byline is not just ineffective; it is a trust signal pointing in the wrong direction.
The broader implication of the March 2026 update is that Google is narrowing the gap between what its quality rater guidelines describe and what its algorithms actually enforce. For years, E-E-A-T was a concept that mattered in theory but could be approximated with surface-level signals in practice. The sites that lost visibility in March were, in many cases, the ones that had been approximating expertise rather than demonstrating it. Whether Google explicitly changed expertise signal weighting or simply improved its ability to distinguish real expertise from performed expertise, the practical effect is the same: the bar is higher, and the old shortcuts are failing.
Keep watching the SERPs through May and June. Search Engine Journal noted that smaller unannounced core updates are ongoing, [4] and the rankings may not fully stabilize for weeks. If your site was hit, resist the urge to make sweeping changes before you have clean post-rollout data. But do start building the expert sourcing infrastructure now, because the next update is unlikely to reverse this direction.
Sources
- March 2026 core update – Google Search Status Dashboard
- Google’s March 2026 Broad Core Update Has Completed Rolling Out – SERoundtable
- Google March 2026 core update rolling out now – Search Engine Land
- Google Confirms March 2026 Core Update Is Complete – Search Engine Journal
- Google’s March 2026 Core Update: What We Know & What to Watch – Level Agency
- Google March 2026 Core Update: What Changed & What To Do – ClickRank
- Expertise Wins: Navigating the Google March 2026 Core Update – Syntactics Inc
- Google’s March 2026 Update was more volatile than December’s – Search Engine Land
- Google March 2026 Core Update – OrangeMonke
- March 2026 Core Update Content Quality Winners & Losers – Digital Applied