We have a Splunk server on which the HTTP Event Collector (HEC) is configured. We also created a new index for it, and the collector is pointed to that index.
We are sending data via HTTP POST to Splunk, using the HEC URL and token, with the payload in JSON format, and we get a 200 OK response for each request.
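For reference, this is roughly how we send the events; the hostname, token, and index name below are placeholders, not our real values:

    import json
    import requests

    # Placeholder values -- the real host, token, and index name differ.
    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

    payload = {
        "index": "my_hec_index",   # the index the HEC token points to
        "sourcetype": "_json",
        "event": {"message": "test event", "severity": "info"},
    }

    resp = requests.post(
        HEC_URL,
        headers={"Authorization": "Splunk " + HEC_TOKEN},
        data=json.dumps(payload),
        verify=False,  # self-signed certificate on the HEC port
    )
    print(resp.status_code, resp.text)  # we get 200 for every request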
The issue is that when we try to find the log entries in the Splunk console, we cannot find them. We also noticed that the indexes (not only the one created for the HTTP collector) do not show any events in the console; the counts always stay at 0, as if the indexes are not working.
We reviewed the configuration again and again and did not find anything wrong. We also installed a Splunk trial version on a dev box just to test, and there, with the same configuration, everything works properly.
Does anyone know what we need to check on the server, or what we need to do to solve this issue?
Do you have permissions to search over the index?
Does your search head have access to the same Splunk server that is hosting the data being indexed?
What does your search look like?
You are probably 'admin' on the dev box, and admin can see everything. My colleague asks for the search syntax because if you don't specify an index, you will be searching only over the default index (main), not your specific index. So if your search does not begin with
index="nameofyourindex"
then your role probably doesn't include that index in its default search context. You can either change that, or specify the index name in the search.
Thanks for the quick reply.
Well, on both servers I have admin rights, because I'm using the admin user; with that user I created the index the HTTP collector is pointed to.
When searching in Splunk, I am of course searching by the specific index, but nothing comes back.
The weird thing is that if I go to the Indexes screen in the Splunk console, the indexes stay at 0; they don't reflect any events, even though I get a 200 OK response to the requests.
Regarding the indexes, they are stored on the same server.
If you are admin, and
index=*
doesn't show you the data, then you really need to consider whether the events are even getting to your indexer.
You say you get a 200 OK, but from what? If it works on one server and not on the other, consider that you might have the token wrong.
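One way to narrow this down is to inspect what is actually answering, not just the HTTP status: HEC rejects a bad token with an explicit error body, and a proxy or load balancer in front of Splunk can return its own 200. A minimal sketch in Python, assuming the host, port, and token shown (all placeholders for your real values):

    import requests

    BASE = "https://splunk.example.com:8088"        # placeholder HEC host/port
    TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

    # 1. HEC health check -- no token required; a working collector answers 200.
    health = requests.get(BASE + "/services/collector/health", verify=False)
    print(health.status_code, health.text)

    # 2. Send a test event and inspect the response body, not just the status.
    #    A genuine HEC success body is {"text":"Success","code":0}; anything
    #    else (for example an HTML page) suggests something in between
    #    answered instead of Splunk.
    resp = requests.post(
        BASE + "/services/collector/event",
        headers={"Authorization": "Splunk " + TOKEN},
        json={"event": "hec connectivity test"},
        verify=False,
    )
    print(resp.status_code, resp.text)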
Since the sends are likely pretty frequent, an error or warning would be easy to spot.
Take a look at the _internal index and see if you can spot something:
index=_internal sourcetype=splunkd NOT (log_level=INFO)
and check out the component values.
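If it's easier to pull that from outside the UI, here is a minimal sketch using Splunk's REST search API; the host, credentials, and default management port 8089 are assumptions about your setup:

    import requests

    # Placeholder host and credentials; 8089 is the default management port.
    resp = requests.post(
        "https://splunk.example.com:8089/services/search/jobs",
        auth=("admin", "changeme"),
        params={"output_mode": "json"},
        data={
            "search": "search index=_internal sourcetype=splunkd NOT log_level=INFO "
                      "| stats count by component, log_level",
            "exec_mode": "oneshot",   # run synchronously, return results directly
            "earliest_time": "-1h",
        },
        verify=False,
    )
    print(resp.json())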