Which field should be extracted for this use case?
index={wxxx} googlebot | fields URIs | stats count by URIs | addcoltotals count
Is this search correct?
You want to look in the User Agent field of your web server access logs. You want to look for Googlebot.
For us we see entries like
"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) - -"
If you are using the CIM, the Web data model already has a lot of this done for you, and it can be accelerated if needed for faster analysis.
https://docs.splunk.com/Documentation/CIM/4.9.1/User/Web
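Once the user agent field is extracted, a search along these lines would count the pages Googlebot crawled. This is only a sketch: the index name and the field names (useragent and uri_path are common for access_combined-style sourcetypes, but yours may differ) are assumptions to adapt to your data.

```
index=web useragent="*Googlebot*"
| stats count by uri_path
| addcoltotals count
```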
From the logs obtained, which field should be extracted to get the crawled URLs?
ex log : {mainsite} "66.249.66.131, 184.28.127.88, 165.254.1.201" - - [22/Oct/2017:02:45:03 -0400] "GET /somelink.html?promoid=P3KMQYMW&mv=other HTTP/1.1" 200 158382 10371 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
Which query should be used?
Hi. If you look at how the answer in https://answers.splunk.com/answers/584114/how-to-identify-pages-with-404-page-not-found-stat.html goes about getting at the fields, you will find the field associated with
"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
So search around the time of the event above and then add
<your initial search>| table *
Scroll around and you will find the field that contains "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
If you don't have extracted fields, you will need to add field extractions in props.conf. There's plenty of Splunk documentation on how to do that.
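As a rough sketch of what such an extraction could look like for an access-combined-style log (the sourcetype name, field names, and regexes here are illustrative assumptions, not tested against your exact format):

```
# props.conf -- illustrative only; adjust the sourcetype stanza and regexes to your data
[my_access_log]
# User agent is the last quoted string on the line
EXTRACT-useragent = "(?<useragent>[^"]+)"$
# Method and requested path from the "GET /path HTTP/1.1" portion
EXTRACT-request = "(?<method>[A-Z]+)\s+(?<uri_path>\S+)\s+HTTP
```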
If you have control over what is creating the web logs, I highly recommend using field/value pairs instead of positional fields. Life is much easier when your logs contain status=200 bytes=10371 etc., because Splunk pulls these fields out for you automatically.
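For example, a log line shaped like the one below (the field names are invented for illustration) would be picked up by Splunk's default key=value extraction at search time, with no props.conf work needed:

```
method=GET uri_path=/somelink.html status=200 bytes=10371 useragent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
```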