Ensure that samples aren't loaded when they're hit by a crawler #200


Open
MatthewHawkins opened this issue Mar 12, 2025 · 3 comments
Comments

@MatthewHawkins (Member)

No description provided.

@StephanWald (Member)

To be more specific: when the documentation is hit by a crawler such as Googlebot or Bingbot, we don't want to load the samples inside the iframe, so that server load stays low.
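One way to do this server-side (a minimal sketch, not webforJ's actual API — the class name, method name, and bot token list are all assumptions) is to check the request's `User-Agent` header against known crawler tokens and skip rendering the sample iframe when one matches:

```java
import java.util.List;
import java.util.Locale;

// Hypothetical helper: decides whether a request comes from a known crawler
// based on its User-Agent header. The token list is illustrative and would
// need to be kept up to date.
public final class CrawlerDetector {

    private static final List<String> BOT_TOKENS = List.of(
            "googlebot", "bingbot", "duckduckbot", "yandexbot", "baiduspider");

    public static boolean isCrawler(String userAgent) {
        if (userAgent == null) {
            return false;
        }
        String ua = userAgent.toLowerCase(Locale.ROOT);
        return BOT_TOKENS.stream().anyMatch(ua::contains);
    }
}
```

The docs site could call `isCrawler` before injecting the sample iframe and render a static placeholder instead when it returns true.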

@hyyan (Member)

hyyan commented Mar 12, 2025

https://developers.google.com/search/docs/crawling-indexing/robots/intro

```
User-agent: Googlebot
Disallow: /webforj*
```
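For context on what that rule would match: Googlebot treats `*` in a `Disallow` path as a wildcard for any sequence of characters. A simplified sketch of that matching (real robots.txt matchers also handle `$` anchors and longest-match precedence, which this omits):

```java
// Simplified robots.txt Disallow matcher: quotes the literal parts of the
// rule, turns each '*' into ".*", and checks the path from its start.
public final class RobotsRule {

    public static boolean isDisallowed(String path, String rule) {
        String[] parts = rule.split("\\*", -1);
        StringBuilder regex = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) regex.append(".*");
            regex.append(java.util.regex.Pattern.quote(parts[i]));
        }
        // Trailing ".*" makes the rule a prefix match, as in robots.txt.
        return path.matches(regex + ".*");
    }
}
```

Under this rule, `/webforj/samples/button` would be disallowed while `/docs/intro` would not.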

@bbrennanbasis (Member)

@hyyan, from the same page that you linked:

> If other pages point to your page with descriptive text, Google could still index the URL without visiting the page. If you want to block your page from search results, use another method such as password protection or noindex.

Would it be more effective to add the following annotation to the Application class?

```java
@AppMeta(name = "robots", content = "noindex")
```
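For illustration, here is how that proposal could look on an application class. Note the `@AppMeta` declaration below is a hypothetical stand-in defined locally so the sketch is self-contained; the real annotation comes from the webforJ framework, and `DocsApp` is an assumed class name:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical stand-in for webforJ's @AppMeta annotation, declared here
// only so this sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface AppMeta {
    String name();
    String content();
}

// The framework would render this as <meta name="robots" content="noindex">
// in the page's <head>, telling crawlers not to index the page.
@AppMeta(name = "robots", content = "noindex")
public final class DocsApp {
}
```

This addresses indexing rather than crawling: the crawler still fetches the page (and would still load the samples), which is why the quoted guidance pairs it with the robots.txt approach rather than replacing it.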

Labels: none · Projects: none · Development: no branches or pull requests · 4 participants