Defect #6734

robots.txt: disallow crawling issues list with a query string

Added by Ве Fio over 13 years ago. Updated almost 4 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Category:
SEO
Target version:
Start date:
2010-10-24
Due date:
% Done:

0%

Estimated time:
Resolution:
Fixed
Affected version:

Description

When robots fetch robots.txt, it tells them not to crawl /projects/project/issues, but nowhere does it disallow /issues itself.

Looking at the access logs, Googlebot (though all other bots do it too) was indexing /issues and picking up many useless pages, mostly like this:

66.249.68.115 - - [24/Oct/2010:07:05:00 -0700] "GET /issues?sort=assigned_to%2Cupdated_on%2Cstatus%3Adesc HTTP/1.1" 200 6254 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 

There are a few hundred such entries. I disallowed the sort parameter through Google Webmaster Tools, but that only works around the issue for now.
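A minimal sketch of the kind of robots.txt rule that would close this gap (the paths are illustrative, not the committed fix): since robots.txt rules are matched by prefix, a rule ending in `?` blocks every query-string variant while leaving the plain list crawlable.

```
User-agent: *
# Existing per-project rule (already present per the report above):
Disallow: /projects/project/issues
# Block the query-string variants of the global issues list by prefix
# match; the plain /issues page itself remains crawlable:
Disallow: /issues?
```

Note that this relies only on plain prefix matching, so it works even with crawlers that do not support the non-standard `*` wildcard extension.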


Files

6734.patch (1.14 KB) - Go MAEDA, 2020-06-28 09:38

Related issues

Related to Redmine - Defect #7582: hiding form pages from search engines (Closed, Jean-Baptiste Barth, 2011-02-09)

Related to Redmine - Patch #3754: add some additional URL paths to robots.txt (New, 2009-08-18)

Related to Redmine - Feature #31617: robots.txt: disallow crawling dynamically generated PDF documents (Closed, Go MAEDA)

Related to Redmine - Defect #38201: Fix robots.txt to disallow issue lists with a sort or query_id parameter in any position (Closed, Go MAEDA)

