Technical SEO Interview Questions Flashcards

1
Q

Which SEO factors are not in your control?

A

Algorithm changes are obviously out of our control: we don’t know when Google is going to make major changes until they announce them, or how those changes will affect our SEO strategy or the optimizations we already have in place. The exact weighting of ranking factors is also somewhat out of our control; we can’t say for certain which factors are helping us rank.

2
Q

Explain spiders, robots & crawlers

A

To me these terms all really mean the same thing. Googlebot, for example, is Google’s crawler: the program Google uses to automatically discover and scan websites by following links from one webpage to another.

3
Q

What is a soft 404?

A

A soft 404 error is when a URL returns a page telling the user that the page does not exist, but also returns a 200 (success) status code instead of a 404. In some cases, it might be a page with little or no main content, or an empty page.

4
Q

Can you explain AMPs?

A

AMP stands for Accelerated Mobile Pages, a framework you can use to create user-first websites with fast loading speeds. Google has retired it in a way; it’s not as important as it used to be.

5
Q

Can you explain what schema is?

A

Schema is structured data that is added to a site. There are different types of schema, such as Article, Review, and Video schema, and it gives more information about what the webpage content is. This helps search engines understand the content better.
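As a sketch, Article schema is usually added as a JSON-LD script in the page’s head; the headline, author, and date here are placeholder values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Article Title",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2023-01-15"
}
</script>
```

Search engines read this block to understand the page and may use it to show rich results.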

6
Q

What should you do with a 404 error?

A

404s don’t always have to be redirected somewhere. They should only be redirected to a category or parent page if that’s the most relevant destination for the user; otherwise it’s fine to let the page return a 404.

7
Q

What is the difference between crawling, indexing and rendering?

A

Crawling is when Google discovers webpages by following links from page to page. Indexing is the process of analyzing and storing those crawled pages in Google’s index, which Google pulls from whenever there is a search query. Rendering is what happens when a page is requested: taking the HTML, CSS, and JavaScript and building a visual representation of the webpage, the way a browser would.

8
Q

How would you go about doing an SEO audit?

A

Based on the audits I have done, I usually use SEMrush and Screaming Frog. In SEMrush I’m looking for errors such as duplicate content, missing or duplicate metadata, 404s, and broken internal links. Then I’ll use Screaming Frog for the same checks, since I find the two tools surface slightly different results.

9
Q

Can you explain what a regex is and what is your experience with Regex?

A

Regular expressions (regex) are patterns used to match text. In SEO, they can be used to set up filters so that you only see the data you want to see. I believe this can be done in Google Analytics and Google Search Console.
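As a small sketch of the idea, the snippet below filters a list of search queries with a regex; the query strings are made-up sample data standing in for a Search Console export:

```python
import re

# Hypothetical sample of search queries, standing in for a GSC export.
queries = [
    "buy red shoes",
    "red shoes near me",
    "blue sandals",
    "how to clean red shoes",
]

# Keep only the queries matching the pattern; the same regex syntax
# can be used in GSC's "Custom (regex)" query filter.
pattern = re.compile(r"red shoes")
matching = [q for q in queries if pattern.search(q)]
print(matching)  # ['buy red shoes', 'red shoes near me', 'how to clean red shoes']
```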

10
Q

What are some recent Google algorithm updates?

A

The product reviews update, a couple of core updates (Google runs those every year), the helpful content update, one of the most recent, which focuses on rewarding good, high-quality content, and the spam update in December. I think these updates all aim to deliver a good overall user experience.

11
Q

What is the difference between no-follow, no-index and disallow?

A

Nofollow: tells search engines not to follow a specific link (or all links on a page) and not to pass link equity through it. Noindex: tells search engines not to include your page(s) in search results. Disallow: tells search engines not to crawl a page via the robots.txt file; this is good for pages that don’t really have a use for users.
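As a sketch of where each directive lives (the URL and path are placeholders):

```html
<!-- noindex: a meta tag in the page's <head>, keeps it out of search results -->
<meta name="robots" content="noindex">

<!-- nofollow: an attribute on an individual link -->
<a href="https://example.com/untrusted" rel="nofollow">Untrusted link</a>
```

Disallow, by contrast, lives in robots.txt, e.g. `Disallow: /private/`, and blocks crawling rather than indexing.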

12
Q

What is your process of developing an seo strategy?

A

Start off with the client’s goals, then do an audit of the site or the site section I’m working on. From there I’ll do keyword research and competitor analysis, create on-page optimization recommendations for the H1, subtopics, SEO elements, internal linking, etc., then track and measure KPIs for my content and update where necessary.

13
Q

How do you measure performance?

A

I look at organic sessions YoY and MoM, as well as new users, clicks, and impressions. I also look at non-branded search queries, local listing performance (website clicks, direction clicks, and phone calls), and keyword performance.

14
Q

What data can you get from SEMrush/GSC/Analytics?

A

In GSC I look at which pages are indexed, any 404s that might need redirecting, clicks, impressions, and overall CTR. In Google Analytics I focus on organic sessions, new visitors, page view entrances, events, and bounce rate. In SEMrush I look at similar things: traffic, the site’s authority score, and any technical SEO issues.

15
Q

What is mobile first indexing?

A

Google predominantly uses the mobile version of a site’s content, crawled with the smartphone agent, for indexing and ranking.

16
Q

How Do You Check Whether A URL Is Indexed By Google?

A

I use GSC’s URL Inspection tool to check indexing for the client accounts I’ve worked on, and I use GSC the same way for my own blog. After I complete an article, I submit it for indexing in GSC to give it priority.

17
Q

How Do You Block A URL From Being Indexed?

A

Add a noindex robots meta tag to the page (or return a noindex X-Robots-Tag HTTP header), and make sure the page isn’t blocked in robots.txt, since crawlers have to be able to see the tag for it to work.

18
Q

What Are The Most Important SEO Ranking Factors, In Your Opinion?

A

Page speed, publishing high-quality content, mobile-friendliness, and internal linking.

19
Q

What is the importance of Javascript for SEO?

A

Long story short, JavaScript can complicate search engines’ ability to read your page, leaving room for error, which can be detrimental for SEO.

20
Q

What is considered good site navigation?

A

Good site navigation takes the user experience into consideration and how easily accessible the navigation is. That means linking to the most important pages, being responsive, and considering content hierarchy.

21
Q

What is link building and why does it matter?

A

Link building is the effort to build backlinks to a particular site. You can build links naturally by publishing helpful, high-quality content, or through guest posting on other websites. Receiving links from other credible sites helps build your own website’s authority and trustworthiness over time.

22
Q

Can you describe what EAT is?

A

E-A-T stands for Expertise, Authoritativeness, and Trustworthiness. It’s baked into Google’s Search Quality Evaluator Guidelines, which human raters use to assess content quality, and it informs Google’s algorithms.

23
Q

What is an HTML sitemap?

A

An HTML sitemap includes every page on the website, from the main pages to lower-level pages, and can be thought of as a well-organized table of contents for users.

24
Q

What is a meta tag?

A

Meta tags are snippets of text that describe a page’s content; the meta tags don’t appear on the page itself, but only in the page’s source code.
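As a minimal sketch, a page’s head might include a meta description like this (the text is a placeholder):

```html
<head>
  <title>Example Page Title</title>
  <meta name="description" content="A short summary of the page that search engines can show as the snippet in results.">
</head>
```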

25
Q

What is an XML sitemap?

A

An XML sitemap is a file that lists a website’s essential pages, making sure Google can find and crawl them all. This is important for all sites, but especially sites that have thousands of pages, frequently add new pages, or don’t have strong internal linking.
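A minimal XML sitemap, with a placeholder domain, looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2023-01-15</lastmod>
  </url>
</urlset>
```

Each `<url>` entry lists one page; the file is usually served at `/sitemap.xml` and submitted in GSC.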

26
Q

What is robots.txt and when would you use it?

A

A robots.txt file tells search engine crawlers which URLs they can and cannot access on your site. It’s used mainly to manage crawl traffic and avoid overloading your site with requests. Crawl instructions are specified by “disallowing” or “allowing” certain (or all) user agents (web-crawling software) on parts of the website.
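A short example robots.txt (the paths and domain are placeholders):

```txt
User-agent: *
Disallow: /cart/
Disallow: /search

Sitemap: https://www.example.com/sitemap.xml
```

This blocks all crawlers from the cart and internal search pages while pointing them at the XML sitemap.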

27
Q

What are orphan pages?

A

Orphan pages are website pages that are not linked to from any other page or section of your site.

28
Q

What are broken links?

A

These are links that no longer work: the webpage they point to can no longer be found or accessed by the user.

29
Q

Can you explain structured data?

A

Structured data is a standardized format for providing information about a page and classifying the page content.

30
Q

What SEO Myths Have You Had Enough Of?

A

The myth that writing content is not a good way to earn backlinks. I don’t think that is true.

31
Q

What Is Your Favorite Website Crawler And Why?

A

I like using a combination of SEMrush and Screaming Frog; Screaming Frog is the more customizable tool.

32
Q

How Do You Analyze Page Speed And Core Web Vitals?

A

For page speed I like to use Google’s PageSpeed Insights. I look at how fast the page performs, along with the SEO, accessibility, and performance scores on both mobile and desktop. Then I dig into what is causing the slowest loading.

33
Q

What is LCP?

A

LCP (Largest Contentful Paint) measures when the largest content element in the viewport is rendered to the screen. This approximates when the main content of the page is visible to users; a good LCP is 2.5 seconds or less.

34
Q

What Are Some Quick Technical SEO Wins?

A

I think optimizing metadata (title tags, meta descriptions, H1s), compressing images if they’re impacting speed, adding relevant alt text, implementing breadcrumbs, and internal linking to other pages.

35
Q

A Site That’s Been Online 9 Months Is Getting Zero Traffic. Why?

A

Your pages might not be getting indexed: maybe they’re not high quality, or you’re using a noindex tag. Your site’s user experience might be poor, or you could have been hit by an algorithm update.

36
Q

Can you explain canonical tags?

A

Canonical tags are a way of telling search engines which URL is the primary copy of the page.
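In markup, the canonical tag is a link element in the head pointing at the preferred URL (a placeholder here):

```html
<link rel="canonical" href="https://www.example.com/product-page/">
```

Duplicate or parameterized versions of the page point their canonical at this one URL so search engines consolidate signals there.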

37
Q

What are breadcrumbs?

A

Breadcrumbs are a secondary navigation aid that helps users easily understand the relation between their location on a page (like a product page) and higher-level pages (a category page, for instance).

38
Q

Can you explain pagination?

A

Pagination is a series of content that is broken up into a multi-page list. This can sometimes cause duplicate content issues, especially on ecommerce sites.

39
Q

What Do You Use Google Search Console For?

A

I use it to check indexing of pages, check performance metrics such as clicks, impressions and search queries, core web vitals and mobile usability.

40
Q

How would you increase page speed?

A

Use a Content Delivery Network (CDN)
Move your website to a better host
Optimize the size of images on your website
Reduce the number of plugins
Minimize the number of JavaScript and CSS files
Use website caching
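As one sketch of website caching, a server config can set long cache lifetimes for static assets; this Nginx fragment (with assumed file types and lifetime) is an example:

```nginx
# Cache static assets in the browser for 30 days
location ~* \.(css|js|png|jpg|webp)$ {
    expires 30d;
    add_header Cache-Control "public, max-age=2592000";
}
```

Returning visitors then load these files from their browser cache instead of re-downloading them.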

41
Q

What is ChatGPT and how can it affect SEO?

A

I believe it’s an AI tool that can generate content based on the text you input. I don’t believe it will replace search engines anytime soon, and I don’t think it will replace SEO professionals either; if anything, it can be helpful to them.