Instructions from Alexander Pavlutsky on how to collect semantic kernels for Amazon product review sites: collecting semantics for Amazon and other affiliate programs.
1. Site type
To start with, there are 2 types of review websites and affiliate programs:
- site for a specific product (offer);
- site for a category of offers.
In the first case, the affiliate program gives us a product (for example, a device that blocks a wi-fi network); the product has a name (for example, Wi-Fi Blocker 2000), and we collect branded product queries by this name.
The second case covers classic Amazon sites and the like.
With Amazon, we collect queries for the product category itself, e.g. "wi-fi blockers", and only then write an article with a selection of devices for it.
The semantics and content for branded and niche queries differ, so this should also be taken into account.
2. Tools
- KeyClusterer for clustering (optional)
- Detailed SEO Extension – Chrome extension
- KeywordSurfer – Chrome extension
If you need Ahrefs for one-time or infrequent tasks, we recommend using the $7 trial. The free KeyClusterer and the free Chrome extensions also help a lot.
If you need any help with exports and you can’t buy the software, just get in touch with me and I will help you for free.
3. Niche research and choosing topics for articles
It’s a lot easier to collect semantics for specific products. We go to the affiliate program and collect all the offers.
We check each offer for 2 things:
- the product/offer name;
- the niche name/product category.
The most advantageous situation is when the product name includes the niche name (or vice versa).
Take the "rocket spanish" offer: its frequency is 3,500, with a low KD of 17. Here the offer name and the niche do not match, because the niche keywords are "learn spanish", etc.
Another example is "voice translator device": its frequency is 600 (which is okay) and its KD is 33 (also acceptable). At the same time, there is a similar root keyword, "voice translator", with a frequency of 9,700, which would bring much more traffic if the offer name and the niche keyword had coincided.
Does this mean that keywords without a niche query should be excluded entirely? Of course not; we work with both.
But with such a good extra keyword, you can give this query a larger budget, write several articles, and combine them into a power page.
When choosing products (offers in the affiliate program), make sure you pay attention to the search results.
In the case of "voice translator device", the search results are clean: articles, reviews, "best of" lists, etc. This is a great place for your website. Moreover, Google gave the top ranking not to Amazon but to a review site.
But if you look at "voice translator app", the top 4 positions are taken by the App Store and Google Play, the fifth by a review site, and then comes the Google support site. What I mean is that getting into the TOP with an article there would only work by a miracle. Personally, I wouldn’t go after such queries.
What frequency and KD should I work with?
We take even a monthly frequency of 40-50. Why? Because the reported frequency is often false and the real numbers are considerably higher. Plus, even a $100 article for such a keyword will pay off within a year once it hits the top.
Of course, if budgets are limited, I would take a frequency from 500 and a KD up to 20. There are many keywords like this, and it’s easy to rank in the TOP with them.
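The rough triage above can be sketched as a simple filter. This is a minimal sketch, assuming keyword data has already been exported from a tool like Ahrefs; the row format and column names here are my own invention:

```python
def triage(rows, min_volume=500, max_kd=20):
    """Split keywords into a 'take' bucket (meets the thresholds above)
    and a 'review manually' bucket (low volume, but may still pay off)."""
    take, review = [], []
    for row in rows:
        volume = int(row["volume"])
        kd = int(row["kd"])
        if volume >= min_volume and kd <= max_kd:
            take.append(row["keyword"])
        elif volume >= 40:  # even 40-50/month can pay off in a year
            review.append(row["keyword"])
    return take, review

# Sample data (invented numbers, matching the examples in the text):
rows = [
    {"keyword": "voice translator device", "volume": "600", "kd": "33"},
    {"keyword": "rocket spanish", "volume": "3500", "kd": "17"},
    {"keyword": "wifi blocker 2000", "volume": "45", "kd": "5"},
]
take, review = triage(rows)
print(take)    # ['rocket spanish']
print(review)  # ['voice translator device', 'wifi blocker 2000']
```

The thresholds are just defaults; as the next paragraphs argue, the numbers should never override a visual check of the actual search results.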
If you search well, you will find examples of KD 1 and frequencies of 10,000+ in each affiliate program.
A visual assessment of the search results and a clear understanding of the real rankings I can expect matter much more to me. The frequency lies, the KD lies, but the SEO intuition that I’ve personally honed over the years is never mistaken.
It’s misguided to skip a query just because of a large KD. Skipping a query because it will be hard to outrank powerful sites/materials is partly true. But taking on queries that seem complicated is exactly the strategy that ultimately returns the greatest profit.
Even if only 3-5% of the materials hit the TOP for big keywords, they generally pay off all the costs of the site.
4. Where can I find rich keywords?
The only place where you can find them is Search Console. And you can’t get keywords from Search Console unless you build the website first.
It's funny when you talk to top teams and they say: "Well, before we enter a niche seriously, we launch 100 satellite sites there just to collect as much semantics as possible and get prepared to enter it."
This is the exact moment when a newbie webmaster, who writes the texts for his sites himself and installs Adsense from day one (because even a dab of money is worth it), bursts into tears. But that's how the market is distributed.
The main idea is that neither Ahrefs, nor Similarweb, nor any other tool will show as many keywords as Search Console. As a rule, when an article starts gaining rankings/clicks, it makes sense to pull from the console the keywords that are not yet in the text and insert them, in order to win rankings on keywords that competitors are not yet targeting (by the time they start, you will already be in the TOP).
Sophisticated teams have developed solutions for themselves that monitor the console and automatically suggest which keywords to insert in which block of text. Average webmasters do it manually. It’s not that handy, but at some stage, it really works.
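The core of such a console-monitoring tool is a gap check: which console queries are not yet covered by the article text. Here is a minimal sketch of that check; the sample article and queries are invented, and in a real setup the query list would come from the Search Console Search Analytics API rather than a hard-coded list:

```python
import re

def missing_keywords(console_queries, article_text):
    """Return console queries with at least one word absent from the article."""
    words_in_text = set(re.findall(r"[a-z0-9']+", article_text.lower()))
    missing = []
    for query in console_queries:
        query_words = set(re.findall(r"[a-z0-9']+", query.lower()))
        if not query_words <= words_in_text:  # some word is absent
            missing.append(query)
    return missing

# Invented sample data:
article = "Our review of the best voice translator device options for travel."
queries = [
    "voice translator device",   # fully covered by the article
    "offline voice translator",  # "offline" is missing
    "best pocket translator",    # "pocket" is missing
]
print(missing_keywords(queries, article))
# ['offline voice translator', 'best pocket translator']
```

A real tool would also need stemming (so "device" matches "devices") and per-block placement suggestions, which is exactly the part the sophisticated teams automate.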
5. Site and cluster structure
The first thing one should do about semantics (even for a review site) is the structure: the sections, or clusters, of the site.
Most often, no one bothers with the structure of a review site for Amazon-like affiliate programs, because such sites don’t have much content and the entire emphasis is put on it: everything can be reached in 1-2 clicks from the main page.
But those who work out the structure from top to bottom always get more profit.
5.1. Category page
90% of webmasters don’t design category pages, but these pages can also collect keywords. The best way is to look at news sites, because they have long been building hub pages and collecting traffic from them. For example, National Geographic or Pop Sci get traffic from queries such as "articles about animals".
Closer to our topic, the hub examples from Backlinko or Mangools show logical grouping and a clean visual design. It’s quite affordable, and it can take the project to another level.
5.2. Linked clusters
Moreover, these hubs are good for interlinking: they provide links to internal articles, and in internal articles we place links to hubs. In this case, both the user and Google benefit from it.
We often have projects that involve deep analysis of an entire structure and its clusters. Here you can find some examples of statements of work on linking to such hubs.
All this is done to deliver high-quality linking. The main problem here is doing the linking only at the stage of article publication/optimization: by then it’s already too late.
I would recommend doing it at an earlier stage:
- include it in the statement of work for the copywriter during site structuring.
The larger the amount of work you can plan and organize in advance, the better the result will be, because doing the linking with plugins (the repeated-phrases method) or manually after the fact is absolutely ineffective.
5.3. What data should I collect at the stage of site structuring?
- all marker requests and data on frequency/competition in the format "page topic – frequency – competition";
- hierarchy of these queries;
- connections between these hubs at the linking level (semantic cocoons have become trendy again);
- URLs of pages and an understanding of how we boost the upper nesting level with the lower one.
That is, at this stage, we collect a sketch of semantics, sufficient to create the site structure.
This is a raw file; after full clustering is completed, some elements will change. The main thing is to see the whole picture.
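The sketch described in 5.3 can be kept as one simple nested structure that holds the "page topic – frequency – competition" data, the hierarchy, and the interlinking plan. A minimal sketch; the field names and sample values are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One page in the structure sketch: topic – frequency – competition."""
    topic: str
    url: str
    frequency: int    # monthly search volume of the marker query
    kd: int           # keyword difficulty / competition
    children: list = field(default_factory=list)     # lower nesting levels
    linked_hubs: list = field(default_factory=list)  # hub URLs to interlink with

# Invented example: a hub and one child article that links back to it.
hub = Node("voice translators", "/voice-translators/", 9700, 35)
hub.children.append(
    Node("voice translator device", "/voice-translators/devices/", 600, 33,
         linked_hubs=["/voice-translators/"])
)

# The hub lists its children and each child links back to the hub,
# so the hierarchy and the interlinking plan live in one sketch.
for child in hub.children:
    print(child.topic, "->", child.linked_hubs)
```

Once full clustering is done, nodes can be renamed, merged, or moved without losing the overall picture.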