Your Shopify store can look completely fine from the outside while Google is quietly blocked from reaching some of your most important pages. There are no error messages in your admin dashboard. Products load normally for customers. Everything seems to be working. But somewhere in your robots.txt file, a Disallow rule is telling Google’s crawler to stay away from URLs that should absolutely be indexed.

This problem is more common than most store owners realize, and it tends to get worse over time as more apps get installed and more rules get added to the file without anyone reviewing them. This guide walks you through exactly how to find the problem, understand what it means, and fix it the right way whether you are on a standard Shopify plan or Shopify Plus.

  • 1 in 5 Shopify stores have a robots.txt rule they did not intentionally add.
  • 48 hours is the average time for Google to re-crawl after a robots.txt fix.
  • 100% of robots.txt blocks are fixable once you locate the rule.

What robots.txt does and why it matters for Shopify SEO

Robots.txt is a plain text file that lives at the root of your domain. When Google’s crawler visits your store for the first time or returns for a regular crawl, one of the first things it does is fetch this file and read the instructions inside. Those instructions tell the crawler which pages and directories it is allowed to access and which ones it should skip entirely.

The file uses a simple format. A rule that starts with “Disallow:” followed by a URL path tells the crawler not to visit any URL that begins with that path. A rule that starts with “Allow:” explicitly permits access to a path even if a broader Disallow rule would otherwise block it. The rules apply to the crawlers listed under the “User-agent:” line above them.
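To make the format concrete, here is a small illustrative file. The paths are made up for the example rather than copied from any particular store, and lines starting with # are comments:

    User-agent: *
    # Keep crawlers out of checkout entirely
    Disallow: /checkout/
    # Block an app's directory, but carve out one public path inside it
    Disallow: /apps/
    Allow: /apps/public-widget/

In this sketch, any URL beginning with /checkout/ or /apps/ is off limits to every crawler, except URLs under /apps/public-widget/, which the Allow rule explicitly re-opens.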

Why this creates problems on Shopify stores

Shopify generates a robots.txt file automatically for every store. The default rules are generally sensible. They block crawlers from accessing things like the checkout, the cart, the account login pages, and various internal admin paths. None of those pages need to be in Google’s index, so blocking them is the right call.
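For reference, the automatically generated file typically contains rules along these lines. This is an abbreviated, representative sample rather than an exact copy of Shopify's current output, which also covers search parameters and other internal paths:

    User-agent: *
    Disallow: /admin
    Disallow: /cart
    Disallow: /checkout
    Disallow: /orders
    Disallow: /account

    Sitemap: https://yourstore.com/sitemap.xml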

The problem starts when the file changes without you knowing about it. Shopify apps are a very common cause of this. When you install a new app, it sometimes writes its own Disallow rules into your robots.txt to protect its internal pages from being crawled. That is reasonable in theory, but the rules occasionally end up broader than intended and accidentally block paths that contain your real store content.

Theme updates can also modify robots.txt behavior, especially on Shopify Plus where the robots.txt is editable through the theme editor. If someone on your team edited the file to try to solve one problem and introduced a new rule in the process, you would not necessarily see the impact right away. The damage tends to show up gradually as crawl coverage drops and pages fall out of the index one by one.

Pro tip: Make it a habit to visit yourstore.com/robots.txt any time you install a new app or update your theme. It takes thirty seconds to read the file, and comparing it to what was there before can save you weeks of lost rankings if a new rule is causing problems.

How to tell if robots.txt is blocking your pages

There are two reliable ways to find out whether your robots.txt file is responsible for your pages not being crawled or indexed. Using both of them together gives you a complete picture.

Method one: Read the robots.txt file directly

Open a browser and go to yourstore.com/robots.txt, replacing “yourstore” with your actual domain. Read through every line of the file. The rules that matter most are the ones listed under “User-agent: *” because the asterisk means they apply to all crawlers, including Googlebot.

For each Disallow rule you see, ask yourself whether the path it blocks could match any of your important pages. For example, a rule that says Disallow: /collections/ would block Google from crawling all of your collection pages. A rule that says Disallow: /blogs/ would block all of your blog posts. A rule that appears harmless because it targets an app-specific path like /apps/someplugin/ is usually fine, but a rule that blocks a broader path like /pages/ could be cutting off pages you rely on for organic traffic.
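To make the evaluation concrete, here is how a harmless app rule compares with rules that would cut off real store content. These lines are hypothetical examples rather than rules you should expect to find verbatim:

    User-agent: *
    # Usually fine: only blocks an app's internal pages
    Disallow: /apps/someplugin/
    # Damaging: blocks every collection page on the store
    Disallow: /collections/
    # Damaging: blocks every blog post
    Disallow: /blogs/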

Method two: Use Google Search Console

Log into Google Search Console and go to Indexing in the left sidebar, then click Pages. Scroll down to the section that breaks down why pages are not indexed. Look specifically for a category called “Blocked by robots.txt.” Click on it to see the full list of URLs that Google found but could not crawl because of your robots.txt rules.

This view is especially useful because it shows you the actual impact of the problem. You may have a robots.txt rule that technically applies to a certain path, but if Google never tried to crawl a URL matching that path, there is no harm done. The Search Console report shows you exactly which real pages on your store are being blocked right now.

Common mistake: Store owners sometimes see a URL in the “Blocked by robots.txt” list and assume it must be a page they intentionally blocked. Always verify by going to that URL in your browser to check what it actually is. Some of the most damaging blocks are on pages that store owners would absolutely want indexed, like a popular collection page or a high-traffic blog post.
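If you want to check a batch of URLs yourself without waiting on Search Console, Python's built-in urllib.robotparser can apply your live robots.txt to a list of pages. This is a minimal sketch with a placeholder domain and made-up URLs; Python's parser does not handle wildcard patterns the way Google does, so treat the output as a first pass and confirm anything important with the URL Inspection tool.

    from urllib.robotparser import RobotFileParser

    STORE = "https://yourstore.com"  # placeholder; use your real domain

    # Hypothetical examples of pages you care about
    urls_to_check = [
        f"{STORE}/collections/best-sellers",
        f"{STORE}/products/example-product",
        f"{STORE}/blogs/news/example-post",
    ]

    parser = RobotFileParser()
    parser.set_url(f"{STORE}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt

    for url in urls_to_check:
        status = "crawlable" if parser.can_fetch("Googlebot", url) else "BLOCKED"
        print(f"{status:10} {url}")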

What Shopify robots.txt should and should not block

Understanding the difference between pages that should be blocked and pages that should be crawlable will help you evaluate every rule in your robots.txt file with confidence. Here is a clear breakdown.

Should be blocked:
  • Checkout pages (/checkout/): no SEO value, private transaction data
  • Cart page (/cart): session-specific, no ranking value
  • Account pages (/account): private user data, login-gated
  • Admin and app internal pages: not meant for public indexing

Should be crawlable:
  • Product pages (/products/): core ranking pages for ecommerce
  • Collection pages (/collections/): category pages that drive organic traffic
  • Blog posts (/blogs/): content marketing and long-tail keywords
  • Standard pages (/pages/): about, contact, and policy pages need indexing
  • Homepage: the most important page on the entire store

If you see a Disallow rule in your robots.txt that touches any of the paths in the “should be crawlable” group above, that rule needs to be removed or narrowed down so it no longer affects those pages. The rule may have been added for a legitimate reason related to an app or internal page, but if it is written too broadly, the cure is worse than the disease.

“The robots.txt file is one of the first things we check in any Shopify SEO audit. Store owners are often shocked to find that a filter app or review widget they installed months ago quietly added a Disallow rule that has been blocking Google from an entire section of their store ever since.”
LeanScaleMedia SEO Audit Team

Step-by-step: How to fix the blocking rules

Once you have identified the specific Disallow rule causing the problem, the fix depends on where the rule came from and which Shopify plan you are on. Work through the steps below in order.

If you are on Shopify Plus

  • Step 1: Open the theme code editor. In your Shopify admin, go to Online Store and then Themes. Click the three-dot menu next to your current theme and select Edit code. In the file list on the left, look for a file called robots.txt.liquid. This file controls your entire robots.txt output.
  • Step 2: Locate the problematic Disallow rule. Read through the robots.txt.liquid file carefully. Find the specific line or section that is generating the Disallow rule you want to remove or change. Rules added by apps may appear as Liquid code blocks that output Disallow lines dynamically, or they may be written as plain text lines directly in the file.
  • Step 3: Remove or narrow the rule. If the rule is blocking an entire path that should be crawlable, delete that line entirely. If the rule is broadly written but serves a purpose for a specific subdirectory, rewrite it to be more specific. For example, if you see Disallow: /collections/ but you only want to block a particular filtered path, change it to something like Disallow: /collections/some-specific-path/ so that your main collection pages remain accessible. A sketch of what this can look like inside robots.txt.liquid follows these steps.
  • Step 4: Save the file and verify. Save the robots.txt.liquid file. Then open a new browser tab and visit yourstore.com/robots.txt to confirm that the rule you removed is no longer showing in the output. If it is still there, go back and look for another place in the file where the same rule might be generated.
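For orientation, the sketch below shows roughly what the generated template looks like and the documented pattern for suppressing a rule it outputs, using the over-broad Disallow: /collections/ from Step 3 as a hypothetical target. The Liquid objects (robots.default_groups, group.rules, rule.directive, rule.value) come from Shopify's documented robots.txt.liquid template, but your actual file may differ, so treat this as a sketch rather than a drop-in replacement. If the offending rule was added as a plain text Disallow: line in the file rather than generated by these loops, you simply delete that line instead.

    {% for group in robots.default_groups %}
      {{- group.user_agent }}
      {%- for rule in group.rules -%}
        {%- comment -%}
          Hypothetical: skip one over-broad generated rule
          while keeping everything else Shopify outputs.
        {%- endcomment -%}
        {%- unless rule.directive == 'Disallow' and rule.value == '/collections/' -%}
          {{ rule }}
        {%- endunless -%}
      {%- endfor -%}
      {%- if group.sitemap != blank %}
        {{ group.sitemap }}
      {%- endif %}
    {% endfor %}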

If you are on a standard Shopify plan

  • Step 1: Identify which app added the rule. Standard Shopify plans do not give you direct access to a robots.txt.liquid file, but apps installed on your store can still influence the file. Look at your installed apps and think about which ones were added around the time the problem started. Common culprits are SEO apps, review apps, wishlist apps, and loyalty program apps.
  • Step 2: Check the app settings. Open the settings page for each suspected app. Some apps have a setting that lets you control whether they add anything to robots.txt. If you find the option, turn it off or adjust it to only block the paths the app actually needs protected.
  • Step 3: Contact the app developer. If you cannot find the setting yourself, contact the app developer’s support team. Explain that one of their Disallow rules is blocking important pages on your store and ask them to help you narrow it down or remove it. Most reputable app developers will respond quickly because a misconfigured robots.txt rule reflects poorly on their product.
  • Step 4: Uninstall the app if necessary. If the app developer cannot or will not help, and the problem is causing meaningful harm to your SEO, consider uninstalling the app. When an app is uninstalled, any rules it added to robots.txt are typically removed along with it. Verify this by checking your robots.txt file again after the uninstall.
Common mistake: Do not add an Allow rule on top of a broad Disallow rule and assume that fixes the problem. Allow rules in robots.txt are meant to create exceptions within a blocked directory, and they do not always behave the way you expect when the rules conflict. The cleaner solution is to remove the overly broad Disallow rule and rewrite it more specifically if needed.

How to verify the fix and get your pages indexed

Fixing robots.txt is only the first half of the job. You also need to confirm the fix actually worked and then take action to get your previously blocked pages crawled and indexed as quickly as possible.

Confirming the fix in Google Search Console

  • Step 1: Run the URL Inspection tool on a previously blocked page. Go to Google Search Console and use the URL Inspection tool. Enter the URL of one of the pages that was showing as “Blocked by robots.txt.” The inspection result should now show that the URL is allowed to be crawled. If it still shows as blocked, Google may not have fetched your updated robots.txt yet. Wait a few hours and try again.
  • Step 2: Request indexing for each affected page. Once the URL Inspection tool confirms access is restored, click the Request Indexing button. Do this for each of your most important pages that were previously blocked. Google has a daily limit on manual indexing requests, so prioritize your highest-value pages first, such as your top collection pages and best-selling product pages.
  • Step 3: Monitor the Pages report over the following weeks. Go back to the Pages report in Search Console over the next two to four weeks. The number of pages showing as “Blocked by robots.txt” should decrease as Google re-crawls those URLs and finds them accessible. At the same time, your total indexed page count should start to grow as those pages make it into Google’s index.
  • Step 4: Check your sitemap for consistency. While you are doing this cleanup, also verify that your sitemap at yourstore.com/sitemap.xml does not contain any URLs that are still blocked in robots.txt. Having a URL in your sitemap and also blocked in robots.txt sends conflicting signals to Google. All URLs in your sitemap should be fully accessible; a scripted version of this cross-check is sketched below.
Bottom line: A robots.txt fix on Shopify typically takes between 48 hours and two weeks to fully reflect in Google’s index, depending on how frequently Google crawls your store. Using the Request Indexing feature in Search Console speeds up the process significantly for your most important pages.
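If you want to automate the sitemap consistency check from Step 4, the sketch below fetches yourstore.com/sitemap.xml, follows each child sitemap (Shopify's root sitemap is normally an index pointing to separate product, collection, page, and blog sitemaps), and flags any listed URL that the live robots.txt blocks. The domain is a placeholder, and the script assumes the store responds to plain unauthenticated requests:

    import urllib.request
    import xml.etree.ElementTree as ET
    from urllib.robotparser import RobotFileParser

    STORE = "https://yourstore.com"  # placeholder; use your real domain
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def fetch_xml(url):
        # Download a sitemap and parse it into an XML tree
        with urllib.request.urlopen(url) as resp:
            return ET.fromstring(resp.read())

    robots = RobotFileParser()
    robots.set_url(f"{STORE}/robots.txt")
    robots.read()

    blocked = []
    index = fetch_xml(f"{STORE}/sitemap.xml")  # root sitemap index
    for child in index.findall("sm:sitemap/sm:loc", NS):
        for loc in fetch_xml(child.text.strip()).findall("sm:url/sm:loc", NS):
            url = loc.text.strip()
            if not robots.can_fetch("Googlebot", url):
                blocked.append(url)

    print(f"{len(blocked)} sitemap URLs are blocked by robots.txt")
    for url in blocked:
        print(url)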

How to prevent this problem from happening again

Once your robots.txt is clean and your pages are back in the index, put a simple system in place to make sure the problem does not quietly come back over time.

  • Check your robots.txt file every time you install a new app on your store. This takes less than a minute and can catch a bad rule before it has time to cause damage.
  • Keep a saved copy of what your robots.txt looked like when everything was working correctly. If a future change introduces a problem, comparing the two versions will immediately show you what changed; a small script that automates the snapshot and comparison is sketched after this list.
  • Review the Pages report in Google Search Console once a month. The “Blocked by robots.txt” category should ideally show zero URLs. If it starts growing, investigate right away.
  • When evaluating new apps before installation, look at reviews that mention SEO or crawling issues. Sometimes other merchants have already discovered that an app causes robots.txt problems and have left notes about it.
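A minimal way to automate the “keep a saved copy” habit from the list above is to snapshot robots.txt on a schedule and diff it against the previous snapshot. This sketch uses only the Python standard library; the domain and the snapshot folder name are placeholders:

    import datetime
    import difflib
    import pathlib
    import urllib.request

    STORE = "https://yourstore.com"               # placeholder; use your real domain
    SNAP_DIR = pathlib.Path("robots_snapshots")   # local folder for dated copies
    SNAP_DIR.mkdir(exist_ok=True)

    # Fetch the live file and save today's snapshot
    with urllib.request.urlopen(f"{STORE}/robots.txt") as resp:
        current = resp.read().decode("utf-8")
    snapshot = SNAP_DIR / f"robots-{datetime.date.today().isoformat()}.txt"
    snapshot.write_text(current)

    # Compare against the most recent earlier snapshot, if one exists
    earlier = sorted(SNAP_DIR.glob("robots-*.txt"))[:-1]
    if earlier:
        old = earlier[-1].read_text()
        diff = list(difflib.unified_diff(
            old.splitlines(), current.splitlines(),
            fromfile=earlier[-1].name, tofile=snapshot.name, lineterm=""))
        print("\n".join(diff) if diff else "No changes since the last snapshot.")
    else:
        print("First snapshot saved; nothing to compare against yet.")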


Frequently asked questions

How do I know if robots.txt is blocking pages on my Shopify store?
Go to yourstore.com/robots.txt in your browser and read every Disallow rule carefully. Then compare those paths against the URLs of your most important product, collection, and blog pages. If any of those pages match a Disallow rule, they are being blocked. You can also check Google Search Console under Indexing and then Pages, and look for pages with the status “Blocked by robots.txt” to see exactly which pages Google cannot currently access.

Can I edit the robots.txt file on my Shopify store?
Yes, but the ability to edit robots.txt depends on your Shopify plan. Shopify Plus merchants can edit the robots.txt.liquid file directly inside the theme code editor, which gives full control over every rule in the file. Standard Shopify plan merchants have more limited customization options, but can often influence the file through app settings or by contacting the app developer. If a third-party app added a problematic Disallow rule, working with that app’s support team is usually the fastest path to a fix.

What should a Shopify robots.txt file block?
Shopify robots.txt should block pages that have no SEO value and that you do not want Google crawling. These include the checkout pages, the cart, account login and registration pages, admin areas, and internal app pages. It should never block product pages, collection pages, blog posts, or your homepage. Blocking those pages prevents Google from indexing your store content and will directly reduce your organic search traffic over time.

Will a robots.txt block permanently remove my pages from Google?
No, a robots.txt block does not permanently remove a page from Google. It prevents Google from crawling the page, but if Google had already indexed the page before the block was applied, it may stay in the index for some time based on its cached version. Once you remove the Disallow rule and Google is able to crawl the page again, it will update its records. Using the Request Indexing feature in Google Search Console after fixing your robots.txt will speed up this process significantly.

Why did my Shopify robots.txt change when I never edited it?
Your Shopify robots.txt can change when you install a new app, update your theme, or when Shopify pushes platform-level changes. Many third-party apps write their own Disallow rules to robots.txt during installation to protect their internal pages. If you notice new rules appearing in your robots.txt that you did not add yourself, check which apps you installed around that time and review whether their rules are affecting pages you need Google to crawl.