I have a search head (SH) and two indexers (IDX1 and IDX2).
I run a search on the SH and, using the "collect" command, push the results to an index named sql, which exists on both IDX1 and IDX2 (load balanced based on availability).
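For reference, the search has this shape (the base search below is just a placeholder, not my real search):

    <my base search> | collect index=sql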
It was pushing results until May 28; after that it stopped.
I ran the search separately and it still does not push the results,
but it says "Successfully wrote file to '/opt/splunk/var/spool/splunk/63g3hs73g37shevents.stashnew'."
So what is happening?
Is there any way I can see whether the data is getting pushed to IDX1 or IDX2?
The two most likely issues would be:
1. Forwarding to the indexers has been disabled, or otherwise circumvented.
2. The inputs.conf setting for the stash input has been malformed or overridden.
Another possibility is that the OS is giving the files the splunk service user creates the wrong permissions; see the quick checks below.
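All three are easy to rule out from the command line, assuming a default Linux install under /opt/splunk (adjust the paths if yours differs):

    # 1. Is forwarding still configured and enabled on the search head?
    /opt/splunk/bin/splunk btool outputs list tcpout --debug

    # 2. Is the batch input that picks stash files out of the spool
    #    directory still intact?
    /opt/splunk/bin/splunk btool inputs list --debug | grep -B 1 -A 3 spool

    # 3. Can the splunk service user read and delete the stash files?
    ls -l /opt/splunk/var/spool/splunk/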
I went to Settings
-- Forwarding and receiving
-- Configure forwarding
-- App context = All, Owner = Any
and I get this:
splunk-site1-indexer01:9997   enabled
splunk-site2-indexer01:9997   enabled
What are these? So should I add my IDX1 (sql01) here?
Yes, your indexers should be listed on that page.
If you don’t recognize those two servers, then it’s possible someone changed your config to point at these two new servers.
You’ll want to check the search peers in your distributed search setup too: if your search head isn’t searching the indexers where the data is going, then obviously the search will fail.
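If it helps, both lists can also be pulled from the CLI on the search head (standard Splunk CLI commands; they will prompt you to log in):

    # where the search head forwards its data
    /opt/splunk/bin/splunk list forward-server

    # which peers the search head searches
    /opt/splunk/bin/splunk list search-server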
Just thought of another common cause: does your role have access to the index? Is your role inheriting another role whose search filters exclude the index?
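One quick way to check is btool on the search head (role_admin below is just an example; substitute whichever role your account actually has):

    # look at srchIndexesAllowed, srchIndexesDefault and srchFilter
    /opt/splunk/bin/splunk btool authorize list role_admin --debug | grep srch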
My question is how to know whether the two indexers under Forwarding and receiving, Splunk-site1-indexer01:9997 and Splunk-site2-indexer01:9997, are my IDX1 (18.104.22.168) and IDX2 (22.214.171.124).
In the distributed search setup I can see a table with the columns "Splunk instance name" and "Peer URI", but the entries there do not match what I see under forwarding.
Does that mean the two lists are different servers?
Those are the Splunk indexers that data generated on your search head is forwarded to. They are where the index named sql needs to exist... but it sounds to me like you may be confusing things a bit.
When you use | collect index=sql
Splunk writes a "stash" file, which is then read back by your search head and indexed. Since your search head is forwarding, though, it doesn’t index the data locally; instead it sends it to those two Splunk servers on port 9997. Now, assuming you have an index named sql on those indexers, you should be able to find the data you’ve collected by searching for index=sql.
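On the search head, that forwarding behavior normally comes from an outputs.conf along these lines (the group name my_indexers is made up; the servers mirror the ones you posted):

    [tcpout]
    defaultGroup = my_indexers
    # indexAndForward defaults to false, so collected events are
    # forwarded instead of being indexed locally

    [tcpout:my_indexers]
    server = splunk-site1-indexer01:9997, splunk-site2-indexer01:9997
    # two servers in one group gives you automatic load balancing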
Yes, I understood that,
but I created the index sql on the machines at 126.96.36.199 and 188.8.131.52,
and as far as I know they are not named anything like that.
First, I want to know what these indexers are:
is there any way I can log in to those indexers?
My whole point is how to check whether splunk-site1-indexer01:9997 and splunk-site2-indexer01:9997 are 184.108.40.206 and 220.127.116.11.
Please help me.
You can go to a command prompt and run this command:
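For example, an nslookup (substitute whichever of the two host names you are checking; nslookup matches the "non-existent domain" behavior described next):

    nslookup splunk-site1-indexer01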
This will give you the IP address if those names are in DNS. If it says "non-existent domain", then try this command at the command prompt:
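For example, a plain ping (ping will resolve the name from your local hosts file even when DNS can’t):

    ping splunk-site1-indexer01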
It will give you the last IP your system has for that server.
Do these IPs match the same servers that are in your search peers?
If not, you’re forwarding the stashed data to indexers that you are not searching.
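If you want to check the peer list from the config rather than the UI, btool can dump it (assuming a default /opt/splunk install):

    # look at the "servers" line and compare those host:port entries
    # with the IPs you just resolved
    /opt/splunk/bin/splunk btool distsearch list distributedSearch --debug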