Trends Stitcher
- Chris Green
TL;DR
We need to use Google Trends data more, as the rest of our click/attribution data will degrade because of AI search.
Google Trends scales each query independently to 0–100. You can compare up to 5 terms together, but anything more breaks down.
Introducing a workaround - Trend-Stitching (link below).
You get a single comparable dataset where every term can be meaningfully compared, even if they never appeared in the same batch.
To access the Trends Stitcher, click here > https://trends-stitcher.streamlit.app/
Why do we need to Stitch Google Trends Together?
Google Trends is great, but it has one significant quirk/annoyance:
Every query is scaled independently to a 0–100 range.
“100” means the peak search interest within that query, not across queries.
So if you download “Nike” and “Adidas” separately, each will show a 0–100 scale... but you can’t tell whether Nike is 2×, 5×, or 10× more searched than Adidas.
For example:
"Nike" > 100 in June = Nike’s personal peak
"Puma" > 100 in March = Puma’s personal peak. Nike’s 100 equal Puma’s
This is why raw Trends data is not directly comparable across different queries. Comparing up to 5 terms in a single request works around this, but if you want to use more terms, it gets challenging.
To compare terms, we need a way to put them all on the same scale. Trend-stitching does this by:
Overlapping batches
Google Trends only lets you compare 5 terms at a time. The tool groups your list into overlapping batches so every term co-occurs with others at least once.
Relative ratios
When two terms appear in the same batch, we can compute their relative ratio. Example:
In one batch, Nike = 100, Adidas = 60 → Nike is ~1.67× Adidas.
Stitching through shared terms
Stitching connects all terms through these overlaps. Even if “Puma” never directly co-occurred with “Nike,” we can link them via “Adidas” or another shared term.
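To make the batching step concrete, here’s a minimal sketch of how overlapping batches could be built (the function name, batch size, and overlap are illustrative, not the tool’s actual code):

```python
def make_overlapping_batches(terms, batch_size=5, overlap=2):
    """Group terms into batches of up to batch_size, reusing the last
    overlap terms of each batch so consecutive batches share terms
    and their ratios can be chained together."""
    batches = []
    step = batch_size - overlap
    for start in range(0, len(terms), step):
        batch = terms[start:start + batch_size]
        if len(batch) > 1:
            batches.append(batch)
        if start + batch_size >= len(terms):
            break
    return batches

terms = ["nike", "adidas", "puma", "reebok", "asics", "new balance", "hoka"]
print(make_overlapping_batches(terms))
# [['nike', 'adidas', 'puma', 'reebok', 'asics'],
#  ['reebok', 'asics', 'new balance', 'hoka']]
```

Because “reebok” and “asics” appear in both batches, every term ends up connected to every other term through at least one shared batch.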
Consensus scaling (no single pivot)
Instead of picking one “anchor” (which can distort results if it’s too popular or too niche), this solves a system of equations across all ratios.
It gives each term a global scale factor (how much it needs to be stretched/shrunk).
Then it is normalised so that the maximum across all terms & time = 100.
Every term is now on the same comparable scale, even if they never appeared in the same batch.
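Under the hood, the “system of equations” can be thought of as a least-squares fit in log space: each observed batch ratio gives one equation log(s_i) - log(s_j) = log(r_ij). A hypothetical numpy sketch (term names and ratios are made up; the app’s actual solver may differ):

```python
import numpy as np

# Ratios observed in overlapping batches: (term_i, term_j, i over j)
ratios = [("nike", "adidas", 1.67), ("adidas", "puma", 2.0)]
terms = ["nike", "adidas", "puma"]
idx = {t: k for k, t in enumerate(terms)}

# Build A x = b where x holds the log scale factors:
# each ratio contributes the equation log(s_i) - log(s_j) = log(r)
A = np.zeros((len(ratios), len(terms)))
b = np.zeros(len(ratios))
for row, (ti, tj, r) in enumerate(ratios):
    A[row, idx[ti]] = 1.0
    A[row, idx[tj]] = -1.0
    b[row] = np.log(r)

# Least-squares solution; mean-centre the logs to pin down the free offset
x = np.linalg.lstsq(A, b, rcond=None)[0]
x -= x.mean()
scales = dict(zip(terms, np.exp(x).round(3)))
print(scales)  # nike ~1.67x adidas, adidas ~2x puma, so nike ~3.34x puma

# In the full pipeline, each term's series would then be multiplied by its
# scale factor, and everything divided by the global max and x100.
```

Solving across all the ratios at once is what avoids leaning on any single pivot term.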
How to Use
Using it is pretty simple:
Enter your API key in the sidebar (get one at serpapi.com)
Add search terms (one per line)
(Optional) Configure:
Geo (e.g., US, GB)
Timeframe (e.g., "today 5-y", "all")
Batch size (max 5 terms per request)
Run Analysis - wait for results. First run may take longer.
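For context, each batch is a single Trends request behind the scenes. If you wanted to replicate one batch pull yourself, a rough sketch against SerpApi’s google_trends engine might look like this (parameter names are from SerpApi’s public docs as I understand them; the app’s own request code may differ):

```python
import requests

def fetch_batch(terms, api_key, geo="US", timeframe="today 5-y"):
    """Fetch one Google Trends batch (max 5 terms) via SerpApi."""
    params = {
        "engine": "google_trends",
        "q": ",".join(terms[:5]),  # Trends caps comparisons at 5 terms
        "geo": geo,
        "date": timeframe,
        "api_key": api_key,
    }
    resp = requests.get("https://serpapi.com/search.json", params=params)
    resp.raise_for_status()
    return resp.json().get("interest_over_time", {})

# batch = fetch_batch(["nike", "adidas", "puma"], api_key="YOUR_SERPAPI_KEY")
```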
Advanced Options
Date filtering - restrict to custom start/end dates (if needed)
Smoothing - add rolling averages (3, 7, 30, 90, 365 days), which can make longer series easier to read (see the sketch after this list)
Cache - speed up repeated runs (you can clear/disable it if results look stale); it helps reduce API calls, especially as Streamlit can re-run more often than you mean it to!
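The smoothing option is just a rolling mean; in pandas terms it’s roughly the following (the data here is made up, and the real app will differ in detail):

```python
import pandas as pd

# df: one column per term, indexed by date, values on the stitched 0-100 scale
df = pd.DataFrame(
    {"nike": [80, 90, 100, 95, 85], "adidas": [40, 50, 60, 55, 45]},
    index=pd.date_range("2024-01-01", periods=5, freq="D"),
)

# A 7-day window would be rolling(window=7); min_periods=1 avoids NaNs at the start
smoothed = df.rolling(window=3, min_periods=1).mean()
print(smoothed.round(1))
```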
Outputs
Main Chart
This is the main thing we're looking for: a single chart with all the trends on a comparable scale.
Interactive line chart, all terms on the same 0–100 scale
Click legend items to toggle terms
Full normalized dataset, downloadable as CSV
You might notice the chart gets a bit cramped, but you can download the data as CSV if this is an issue.
Year-on-Year (YoY)
This was mainly built to offset how hard it is to report on yearly trends from the Trends UI:
Current vs prior year chart
% change vs previous year
Downloadable YoY table
The easiest way to read this: trends above the red dotted line are growing, trends below it are declining.
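For reference, the YoY figure is the standard percentage change against the same point a year earlier; a pandas sketch with made-up weekly data (not the app’s exact code):

```python
import pandas as pd

# One term's stitched weekly values across two years (illustrative data)
s = pd.Series(
    range(1, 105),
    index=pd.date_range("2023-01-01", periods=104, freq="W"),
)

# Compare each week with the same week 52 weeks earlier
yoy_pct = (s / s.shift(52) - 1) * 100
print(yoy_pct.dropna().round(1).head())
```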
Explainability
This is where we look at how the tool stitched the terms. You probably won't need this, but this is what it displays:
Pivot scores - which terms make good anchors
Consensus scale factors - how each term was adjusted
Downloadable CSVs for all details
An "anchor" term is what is used to score all the other terms by - so increased or decreased. The scale factors are how much the original trend was changed by. It'll warn if the scale ratio is too high as this means that some large trends will mean the smaller ones are hard to analyse.
Any/All Feedback welcome!
I need to keep testing this and validating the outputs + I have a wishlist of features/ways to integrate this into other workflows, but I'd love to hear about your experiences here.