Hi Splunkers, I'm seeing some strange behavior with a Splunk Enterprise Security SH.
In the target environment, we have an indexer cluster queried by 2 SHs: a Core one and an Enterprise Security one.
For one particular index, if we perform a search on the ES SH, we cannot see any data. Even the simplest possible query:
index=<index_name>
returns no results. However, if I try the same search on the Core SH, the data is shown.
This behavior seems very strange to me because it happens only with this specific index; all the other indexes return identical data whether we query from the ES SH or the Core SH.
So, in a nutshell:
Indexes that return results on the Core SH: N
Indexes that return results on the ES SH: N - 1
I wonder if you're storing the data in the SH somehow. On the SH where you can query the data, check the splunk_server field to see where the SH is pulling the data from. If everything is right, you should see some or all of the indexers listed (depending on how evenly the data is distributed among the indexers for the time interval you selected).
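For example, something along these lines (just a sketch; swap in your index name and time range):
index=<index_name> earliest=-24h | stats count by splunk_server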
Depending on how you set up your authorization, you might end up with different permissions in the roles on each SH (e.g. access to indexes). Check your roles to see which indexes are allowed for your roles on each SH.
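A quick way to compare that is a REST search run on each SH, something like this (field names as exposed by the authorization/roles endpoint; they may vary slightly by version):
| rest /services/authorization/roles splunk_server=local | table title srchIndexesAllowed srchIndexesDefault imported_srchIndexesAllowed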
Hi @diogofgm , permissions are right: we use our domain accounts, which are set up with admin rights on all Splunk hosts in our environment.
I performed some further analysis and found something strange. Let me share another set of details with you.
In the environment under analysis, we have 4 indexers in a cluster. On top of them, we have 3 SHs NOT in a cluster, plus a fourth one, the one with ES. So, in a nutshell:
3x SH Splunk Core (NO SH Cluster) + 1 ES SH
4x IDX clustered
Using btool, I checked the indexes.conf files deployed on the indexer cluster and found that, on all 4 IDXs, there are only 2 indexes.conf files (roughly the command sketched below the list):
$SPLUNK_HOME/etc/apps/slave-apps/_cluster/local/indexes.conf
$SPLUNK_HOME/etc/system/default/indexes.conf
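For reference, the check was something along these lines on each indexer (paths may differ in your setup; the --debug flag prefixes each setting with the file it comes from):
$SPLUNK_HOME/bin/splunk btool indexes list --debug | awk '{print $1}' | sort -u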
As I expected, the one in the default folder is the system-provided one, untouched by whoever performed the initial installation and setup (another company did that, not us).
So, I checked the one in _cluster and, as expected, it is the one where all the indexes created by the previous admins were defined... except the one that gives me the problem.
I mean: inside $SPLUNK_HOME/etc/apps/slave-apps/_cluster/local/indexes.conf I can find the custom indexes (there are 262 of them), but NOT the one (pan_logs) that raises the issue. There is no trace of it on the indexers (at least, in the files I checked).
So I thought: hey, wait a minute, could it be deployed directly on the SH? I checked indexes.conf on the SH where I can successfully query the index, but again found no trace of it. It appears to be, let's say, a "ghost" index: no trace of it on the SHs or the IDXs, yet there is a SH able to query it.
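If anyone wants to reproduce the check, something like this (run on both the SHs and the indexers) should show whether a pan_logs stanza is defined anywhere btool can see; no output means no definition:
$SPLUNK_HOME/bin/splunk btool indexes list pan_logs --debug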
I wonder if you're storing the data in the SH somehow. On the SH where you can query the data, check the splunk_server field to see where the SH is pulling the data from. If everything is right, you should see some or all of the indexers listed (depending on how evenly the data is distributed among the indexers for the time interval you selected).
Bingo. It turns out the data comes from indexers; just NOT the ones that the ES SH is able to query.
There is another indexer cluster that produces this data, and the SH with ES is not able to search it.
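For anyone hitting the same thing: comparing the distributed search peers of the two SHs makes the difference obvious, for example with a REST search like this on each SH (the peers endpoint lists the indexers that the SH actually searches; exact field names may vary by version):
| rest /services/search/distributed/peers splunk_server=local | table title peerName status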
Thanks for your help.