All Posts

Splunk forwarders don't support a network load balancer (NLB) between forwarders and indexers. The only place where you can use one is with HEC.
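For what it's worth, the usual alternative to an external NLB is the forwarder's built-in auto load balancing across indexers, configured in outputs.conf. A minimal sketch (the hostnames, port, and frequency are illustrative):

```
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
autoLBFrequency = 30
```

With this, the forwarder itself rotates through the listed indexers, so no external load balancer is needed on that path.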
You should read this as a starting point to understand Splunk configuration precedence: https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles Then you should also understand that precedence depends on whether the context is index time or search time. But as @richgalloway said, the best way to check is btool with the --debug option.
Port 8089 is Splunk's internal management port for communication between nodes; e.g. all traffic from a search head to its indexer peers goes to this port. You can also use REST calls against it to manage nodes, get information, or even run saved searches. Port 8000 is normally for GUI access. Here is one diagram of Splunk's common network ports and how they are connected: https://community.splunk.com/t5/Deployment-Architecture/Diagram-of-Splunk-Common-Network-Ports/m-p/116657
Hello, can someone please provide the eksctl command line, or a command line in combination with a cluster config file, that will provide an EKS cluster (control plane and worker node(s)) that is resourced for installation of the splunk-operator and then experimentation with standalone Splunk Enterprise configurations? Thanks, Mark
We see the following on the server via `ss -tulpn`:

```
tcp LISTEN 0 128 0.0.0.0:8089 0.0.0.0:* users:(("splunkd",pid=392724,fd=4))
```

However, the browser at http://<Indexer>:8089 returns ERR_CONNECTION_RESET, while http://<Indexer>:8000 works as expected. What can it be?
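One likely explanation, though it's an assumption since the post doesn't show the server config: splunkd's management port 8089 serves HTTPS by default, so a plain http:// request from a browser is typically reset, while port 8000 serves plain HTTP out of the box. You can test the management port with curl instead; `-k` skips certificate validation for the default self-signed certificate:

```
curl -k -u admin https://<Indexer>:8089/services/server/info
```

If that returns XML, the management port is fine and the browser was simply speaking the wrong protocol.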
Use the btool command to see which settings will take effect the next time Splunk restarts: `splunk btool --debug indexes list`
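With `--debug`, btool prefixes each line with the file the setting came from, which makes the winning file explicit. A sketch of what that can look like (the path and value are illustrative):

```
splunk btool --debug indexes list | grep frozenTimePeriodInSecs
/opt/splunk/etc/system/local/indexes.conf    frozenTimePeriodInSecs = 31536000
```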
Thanks @kiran_panchavat. That's what my understanding was, but I got a different response from a support engineer (see below), which is why I wanted to confirm. $SPLUNK_HOME/etc/system/local/indexes.conf (This file contains the default settings for the entire Splunk instance and will apply globally unless overridden.) $SPLUNK_HOME/etc/apps/search/local/indexes.conf (Configuration files in app-specific directories, like the search app, will override the settings in the system-level configuration files. This means that any settings defined here for specific indexes will take precedence over the default settings from $SPLUNK_HOME/etc/system/local/indexes.conf.)
@jkamdar  The configuration in `$SPLUNK_HOME/etc/system/local/indexes.conf` takes precedence over `$SPLUNK_HOME/etc/apps/search/local/indexes.conf`. For example, if you define an index called `windows` in both `/system/local` and `/apps/search/local`, the configuration in `/system/local` will take precedence for the `windows` index. However, if you define `windows` in `/system/local` and a different index, such as `linux`, in `/apps/search/local`, the settings for `windows` will come from `/system/local`, while the settings for `linux` will come from `/apps/search/local`, as it doesn’t exist in `/system/local`. https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles#:~:text=Configuration%20file%20precedence%20order%20depends,precedence%20order%20of%20the%20directories. 
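As concrete files, the example above would look like this (stanza names and values are illustrative):

```
# $SPLUNK_HOME/etc/system/local/indexes.conf
[windows]
frozenTimePeriodInSecs = 31536000    # wins for 'windows'

# $SPLUNK_HOME/etc/apps/search/local/indexes.conf
[windows]
frozenTimePeriodInSecs = 7776000     # overridden by system/local
[linux]
frozenTimePeriodInSecs = 2592000     # applies, since 'linux' exists only here
```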
Hi @danielbb, as you can read at https://www.rsyslog.com/doc/index.html, the default configuration is at /etc/rsyslog.conf, but usually the conf files are in a subfolder, /etc/rsyslog.d, which is referenced from that file. Ciao. Giuseppe
That's gorgeous @gcusello, I see the process running: `syslog 930 1 0 Jan03 ? 00:00:01 /usr/sbin/rsyslogd -n -iNONE` Thank you very much! Where is the default configuration/data mount point?
Got a question about file precedence in Splunk. If I have two indexes.conf files, one in $SPLUNK_HOME/etc/system/local/indexes.conf and a second in $SPLUNK_HOME/etc/apps/search/local/indexes.conf, which one takes precedence? Mainly, to move all the data to frozen after one year, I have configured the default section in my $SPLUNK_HOME/etc/system/local/indexes.conf with frozenTimePeriodInSecs = 31536000. But it's different for other indexes in $SPLUNK_HOME/etc/apps/search/local/indexes.conf. So how would Splunk see it and apply it? Thanks for your help in advance.
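One detail that matters for this exact setup, if I read Splunk's layering correctly (best confirmed with btool on your own instance): an explicit per-index stanza overrides a value inherited from the [default] stanza, so an app-level per-index retention setting can still win over the global one-year default. An illustrative sketch (index name and values are assumptions):

```
# $SPLUNK_HOME/etc/system/local/indexes.conf
[default]
frozenTimePeriodInSecs = 31536000    # 1 year, inherited by indexes that don't set it

# $SPLUNK_HOME/etc/apps/search/local/indexes.conf
[web]
frozenTimePeriodInSecs = 7776000     # ~90 days, explicit, so it wins for 'web'
```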
Please share a sanitized sample event and the props for the sourcetype.
Thanks @defection-io for responding. The query is returning hosts which are basically our Indexers. We had config files on the Indexers that were removed as part of removing the config files from the Splunk environment. Regarding the source column, it is /opt/splunk/var/log/splunk/metrics.log, so not of much help.
Have you already solved this issue? I also want to do the same, but I encountered the following problem: Active forwards:     None Configured but inactive forwards:     mysubdomain:443
Hello, can someone please provide the eksctl command line, or a command line in combination with a cluster config file, that will provide an EKS cluster (control plane and worker node(s)) that is resourced to allow the installation of the splunk-operator and the creation of a standalone Splunk Enterprise instance? Thanks, Mark
Hi @greenpebble! Hmm, the 10 GB dev license usually has all of Splunk's functionality enabled, so it's odd you are seeing that message (I know the 50 GB license has some limitations). If you can't log in at all, there are a couple of things you can try: either update the license from the CLI, or temporarily remove all other users from the instance. This Splunk doc has information about adding a license from the CLI: https://docs.splunk.com/Documentation/Splunk/latest/Admin/LicenserCLIcommands Once you update the license, restart Splunk and attempt to log in again. If you are still having issues, try this to temporarily remove all other users from the instance:
1. Stop Splunk.
2. Go to `$SPLUNK_HOME/etc/passwd` and make a backup of this file (e.g. `cp passwd passwd.bak`).
3. Edit the `passwd` file and remove all users except for the `admin` user. There should only be 1 line in the file when you are done.
4. Restart Splunk.
This should let you log in as the `admin` user, since the other users are removed. Once you log in and fix the license, you can restore the `passwd` file from the backup to add the users back. Hope this helps!
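The passwd surgery in steps 2-3 can be rehearsed safely on a throwaway copy first. The sketch below uses a fake two-user file in a temp directory; the line format is only illustrative, and on a real instance the file lives at $SPLUNK_HOME/etc/passwd:

```shell
workdir="$(mktemp -d)"
cd "$workdir"
# Fake passwd file with an admin user and one other user (illustrative format):
printf ':admin:HASH::Administrator:admin:admin@example.com::\n' > passwd
printf ':jdoe:HASH::Jane Doe:user:jdoe@example.com::\n' >> passwd
cp passwd passwd.bak                                         # step 2: back up first
grep ':admin:' passwd > passwd.new && mv passwd.new passwd   # step 3: keep only admin
wc -l < passwd                                               # line count is now 1
```

To undo the change, `mv passwd.bak passwd` restores the original users.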
Hi @juhiacc! You can do some snooping around in the `_internal` index to see if you can figure out where the data is coming from. I'm not sure what sourcetype UberAgent uses, but if we assume it's `uberagent`, you can run the following search to get some more info about the origin of the data (just replace `uberagent` with the correct sourcetype):
```
index=_internal sourcetype=splunkd component=Metrics group=per_sourcetype_thruput series="uberagent"
```
In the results that return, you should be able to see all of the hosts that have processed data for this sourcetype. Depending on your environment, you may see multiple hosts in the `host` field, but you should be able to determine which hosts are intermediate steps (like a Heavy Forwarder or Indexer) and which hosts are the original source. From there, you can investigate each host's `inputs.conf` to see if there are any hints as to where the data is coming from. Sometimes the `source` field of the data might also indicate where it is coming from. For example, if the `source` is a file path, it's almost certainly coming from a file monitor input. But it looks like you may have already checked this. There is also a chance that it was data indexed in the past with future timestamps, but since you mentioned that you deleted the index, this is unlikely. New data needs to be indexed for it to start appearing in the `main` index now. If none of that helps, let me know and we can try some other things. Good luck!
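To make the host list easier to read, the search above can be extended with a stats aggregation, which also shows how much data each host forwarded for the sourcetype (again, swap in the real sourcetype for `uberagent`):

```
index=_internal sourcetype=splunkd component=Metrics group=per_sourcetype_thruput series="uberagent"
| stats sum(kb) AS total_kb BY host
| sort - total_kb
```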
Hi, we had UberAgent apps installed in our Splunk environment and recently deleted the apps along with the index. Since the index deletion, data from a very few servers/devices is landing in the main index. But we're not sure where this data is coming from, since we have removed the UberAgent apps from everywhere. Any suggestions on where we should be looking to find the source? There are no related HEC tokens or scripts to be found. Warm Regards!
I am running into an issue where I am attempting to import data from a SQL Server database. One of the columns, entitled message, contains a message payload with the character '{' in it. When Splunk processes the data from DB Connect, it inappropriately truncates the message when it sees the '{' bracket. Are there solutions for overriding this line-breaking behavior? We currently have to go into the raw data and extract the information using regex to preserve it, and we would rather store this message as a Splunk key-value pair.
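For what it's worth, the usual lever for this is props.conf on the parsing tier: turning off line merging and setting an explicit LINE_BREAKER stops Splunk from splitting events on characters inside the payload. A sketch, assuming a hypothetical sourcetype name for the DB Connect input:

```
# props.conf (sourcetype name is an assumption -- use your DB Connect input's sourcetype)
[mydb:messages]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
```

Whether this fully fixes DB Connect's behavior depends on how the input itself is configured, so treat it as a starting point rather than a guaranteed fix.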
From what you have shared (which is all I can go on), are you saying that the events which have been marked as "SENDING" in the type field are not actually "Sending" messages? If so, presumably they also don't have a type field? Please can you share accurate but anonymised examples of all the event types you are trying to process, because doing it piecemeal is not very productive.