All Posts


Thanks @kiran_panchavat. That was my understanding too, but I got a different response from a support engineer (quoted below), which is why I wanted to confirm.

$SPLUNK_HOME/etc/system/local/indexes.conf (This file contains the default settings for the entire Splunk instance and will apply globally unless overridden.)

$SPLUNK_HOME/etc/apps/search/local/indexes.conf (Configuration files in app-specific directories (like the search app) will override the settings in the system-level configuration files. This means that any settings defined here for specific indexes will take precedence over the default settings from $SPLUNK_HOME/etc/system/local/indexes.conf.)
@jkamdar  The configuration in `$SPLUNK_HOME/etc/system/local/indexes.conf` takes precedence over `$SPLUNK_HOME/etc/apps/search/local/indexes.conf`. For example, if you define an index called `windows` in both `/system/local` and `/apps/search/local`, the configuration in `/system/local` will take precedence for the `windows` index. However, if you define `windows` in `/system/local` and a different index, such as `linux`, in `/apps/search/local`, the settings for `windows` will come from `/system/local`, while the settings for `linux` will come from `/apps/search/local`, as it doesn’t exist in `/system/local`. https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles#:~:text=Configuration%20file%20precedence%20order%20depends,precedence%20order%20of%20the%20directories. 
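To make that precedence concrete, here is a minimal sketch; the index names and retention values are illustrative, not taken from this thread:

```
# $SPLUNK_HOME/etc/system/local/indexes.conf  (highest precedence)
[windows]
frozenTimePeriodInSecs = 31536000   # wins for 'windows'

# $SPLUNK_HOME/etc/apps/search/local/indexes.conf
[windows]
frozenTimePeriodInSecs = 7776000    # overridden by system/local for 'windows'

[linux]
frozenTimePeriodInSecs = 7776000    # applies -- 'linux' is defined nowhere else
```

You can inspect the merged result, including which file each setting came from, with `$SPLUNK_HOME/bin/splunk btool indexes list --debug`.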
Hi @danielbb , as documented at https://www.rsyslog.com/doc/index.html the default configuration file is /etc/rsyslog.conf, but the conf files usually live in a subfolder, /etc/rsyslog.d, which is included from that main file. Ciao. Giuseppe
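For reference, the directive in /etc/rsyslog.conf that pulls in that subfolder usually looks like one of these (the exact form depends on the rsyslog version and distro):

```
# Legacy-format include, common in distro-shipped configs
$IncludeConfig /etc/rsyslog.d/*.conf

# Newer RainerScript equivalent
include(file="/etc/rsyslog.d/*.conf")
```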
That's gorgeous @gcusello, I see the process running:

syslog 930 1 0 Jan03 ? 00:00:01 /usr/sbin/rsyslogd -n -iNONE

Thank you very much! Where is the default configuration/data mount point?
Got a question about file precedence in Splunk. If I have two indexes.conf files, one in $SPLUNK_HOME/etc/system/local/indexes.conf and a second in $SPLUNK_HOME/etc/apps/search/local/indexes.conf, which one would take precedence?

Mainly, to move all data to frozen after one year, I have configured the default section in my $SPLUNK_HOME/etc/system/local/indexes.conf:

frozenTimePeriodInSecs = 31536000

But it's different for other indexes in $SPLUNK_HOME/etc/apps/search/local/indexes.conf. So how would Splunk see it and apply it?

Thanks for your help in advance.
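For what it's worth, within a single indexes.conf a [default] stanza is inherited by every index stanza that doesn't set the same attribute itself; a sketch (the index name here is hypothetical):

```
# indexes.conf
[default]
frozenTimePeriodInSecs = 31536000   # 365 days, inherited by all indexes

[myindex]
frozenTimePeriodInSecs = 7776000    # 90 days, overrides the default for this index only
```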
Please share a sanitized sample event and the props for the sourcetype.
Thanks @defection-io for responding. The query is returning hosts which are basically our Indexers. We had config files on the Indexers that were removed as part of removing config files from the Splunk environment. Regarding the source column, it is /opt/splunk/var/log/splunk/metrics.log, so not of much help.
Have you already solved this issue? I also want to do the same, but I encountered the following problem: Active forwards:     None Configured but inactive forwards:     mysubdomain:443
Hello, Can someone please provide the eksctl command line, or a command line in combination with a cluster config file, that will create an EKS cluster (control plane and worker node(s)) resourced to allow the installation of the splunk-operator and the creation of a standalone Splunk Enterprise instance? Thanks, Mark
Hi @greenpebble ! Hmm, the 10 GB dev license usually has all of Splunk's functionality enabled, so it's odd that you are seeing that message (I know the 50 GB license has some limitations). If you can't log in at all, there are a couple of things you can try: either update the license from the CLI, or temporarily remove all other users from the instance. This Splunk doc has information about adding a license from the CLI: https://docs.splunk.com/Documentation/Splunk/latest/Admin/LicenserCLIcommands Once you update the license, try restarting Splunk and logging in again. If you are still having issues, try this to temporarily remove all other users from the instance:

1. Stop Splunk.
2. Go to `$SPLUNK_HOME/etc/passwd` and make a backup of this file (e.g. `cp passwd passwd.bak`).
3. Edit the `passwd` file and remove all users except for the `admin` user. There should only be 1 line in the file when you are done.
4. Restart Splunk.

This should let you log in as the `admin` user, since the other users are removed. Once you log in and fix the license, you can restore the `passwd` file from the backup to add the users back. Hope this helps!
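The file-trimming step above can be sketched in shell. This runs against a throwaway copy so nothing real is touched; the usernames and hashes below are invented, but Splunk's passwd file uses the same one-user-per-line, colon-delimited layout with the username in the second field. On a real instance you would stop Splunk first and operate on `$SPLUNK_HOME/etc/passwd` itself.

```shell
# Build a sample passwd file in a temp dir (contents are illustrative only)
workdir=$(mktemp -d)
cat > "$workdir/passwd" <<'EOF'
:admin:$6$examplehash1:Administrator:admin:admin@example.com::
:alice:$6$examplehash2:Alice:user:alice@example.com::
:bob:$6$examplehash3:Bob:user:bob@example.com::
EOF
cp "$workdir/passwd" "$workdir/passwd.bak"                 # step 2: back up first
grep '^:admin:' "$workdir/passwd.bak" > "$workdir/passwd"  # step 3: keep only admin
cat "$workdir/passwd"                                      # one line should remain
```

Restoring the users later is just copying `passwd.bak` back over `passwd` and restarting.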
Hi @juhiacc  You can do some snooping around in the `_internal` index to see if you can figure out where the data is coming from. I'm not sure what sourcetype UberAgent uses, but if we assume it's `uberagent`, you can run the following search to get some more info about the origin of the data (just replace `uberagent` with the correct sourcetype): ``` index=_internal sourcetype=splunkd component=Metrics group=per_sourcetype_thruput series="uberagent" ``` In the results that return, you should be able to see all of the hosts that have processed data for this sourcetype. Depending on your environment, you may see multiple hosts in the `host` field, but you should be able to determine which hosts are intermediate steps (like a Heavy Forwarder or Indexer) and which hosts are the original source.   From there, you can investigate each host's `inputs.conf` to see if there are any hints as to where the data is coming from. Sometimes, the `source` field of the data might also indicate where the data is coming from. For example, if the `source` is a file path, it's almost certainly coming from a file monitor input. But it looks like you may have already checked this.   There is also a chance that it was data indexed in the past with future timestamps. But since you mentioned that you deleted the index, this is unlikely to be the case. New data needs to be indexed for it to start appearing in the `main` index now.   If none of that helps, let me know and we can try some other things. Good luck!
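If you have CLI access to a suspect host, btool can dump its merged inputs along with the file each setting came from, which saves reading the conf files by hand. A sketch (the grep pattern assumes "uberagent" appears somewhere in the relevant stanza or path):

```
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i uberagent
```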
Hi, We had UberAgent apps installed in our Splunk environment, and recently we deleted the apps along with the index. Since the index deletion, data has been going into the main index from a very few servers/devices, but we're not sure where this data is coming from, since we have removed the UberAgent apps from everywhere. Any suggestions on where we should be looking to find the source? There are no related HEC tokens or scripts to be found. Warm Regards!
I am running into an issue where I am attempting to import data from a SQL Server database. One of the columns, entitled message, contains a message payload with the character '{' in it. When Splunk processes the data from DB Connect, it inappropriately truncates the message when it sees the '{' bracket in the document. Are there solutions for overriding this line-breaking behavior? We currently have to go into the raw event and extract the information using regex to preserve the data, and we would rather store this message as a Splunk key-value pair.
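One common approach (a sketch only: the sourcetype name here is hypothetical, and the breaker must be tuned to your actual data) is to control event breaking for that sourcetype in props.conf on the first full Splunk instance that parses the data:

```
# props.conf (sourcetype name is illustrative)
[mydb:messages]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)   # break only on newlines, never mid-payload
TRUNCATE = 999999          # raise the per-event truncation limit if needed
```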
From what you have shared (which is all I can go on), are you saying that the events which have been marked as "SENDING" in the type are not actually "Sending" messages? If so, presumably they also don't have a type field? Please can you share accurate but anonymised examples of all the event types you are trying to process, because doing it piecemeal is not very productive.
Try this.  Make a backup of $SPLUNK_HOME/etc/passwd.  Edit the original file and remove all users except admin and restart Splunk.
It is the admin account that I am trying to log in with, and the issue still persists.
You should be able to log in using the admin account.
There is nothing to fix. The warning (not an error) message is letting you know that connections are not as secure as they could be. They are still encrypted, however. If you choose, you can follow the instructions in the message to enable server name checking at connection time. That, however, requires that each server have a unique certificate containing the server's name.
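If you do decide to enable it, the relevant knob is a server.conf setting (a Splunk 9.x setting; verify against your version's documentation before rolling out, since every server then needs a certificate whose name matches the hostname peers connect to):

```
# server.conf
[sslConfig]
sslVerifyServerName = true
```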
My condition was wrong; here is what I went with in the end:

<drilldown>
  <!-- Handle clicks on Tail (Legend) -->
  <condition match="$click.name2$>0 AND NOT $click.value2$>0">
    <eval token="form.Tail">if($click.name2$ == $form.Tail$, "*", $click.name2$)</eval>
  </condition>
  <!-- Handle clicks on Source (Chart) -->
  <condition match="$click.value2$>0">
    <link>
      <param name="target">_blank</param>
      <param name="search">index=myindex | search "WARNNING: "</param>
    </link>
  </condition>
</drilldown>
Hi @greenpebble , first of all, there isn't much sense in this approach, because the Incident Review page should be the main page for every SOC Analyst and should always be open; otherwise, you don't need ES! Anyway, if you also want an eMail, for every Correlation Search you have enabled you can add sending an eMail as an additional Action; this way the SOC Analysts will receive an eMail every time a Notable is written. You can do this by editing the Actions of each Correlation Search. Beware, because this way you may be creating a spammer! Ciao. Giuseppe