All Posts

It should be installable even though it's archived. OK, it seems that the policy has changed and there is no longer a download button for archived apps. Let's check whether anyone knows more about this on Slack. https://splunk-usergroups.slack.com/archives/C0453SZLGHX/p1738101637411249
Based on that, it's up and running. Have you tried it on that node, e.g. with curl against localhost? How about splunkd_access.log, the web logs, etc.? Is there anything in them? And SELinux? Any entries in auditd?
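On the node itself, something like this could confirm each piece in turn (assuming a default /opt/splunk install and port 8000; adjust paths and port to your setup):

# does the web port answer locally, bypassing network and firewall?
curl -kv https://localhost:8000/

# anything arriving in the splunkd / web access logs?
tail -f /opt/splunk/var/log/splunk/splunkd_access.log /opt/splunk/var/log/splunk/web_access.log

# SELinux blocking something?
getenforce
ausearch -m avc -ts recent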
Thank you for the info! I didn't know about the app you mentioned; it sounds useful. I wanted to install it, but I couldn't. I think I found it here (https://splunkbase.splunk.com/app/1603), but it's end-of-life / archived. I didn't see any alternatives listed (https://lantern.splunk.com/Splunk_Platform/Product_Tips/Extending_the_Platform/SimpleXML_Examples_app_end_of_life_FAQ).
[root@SplunkPROD bin]# ss -tunlp | grep 8000
tcp LISTEN 0 128 0.0.0.0:8000 0.0.0.0:* users:(("splunkd",pid=17948,fd=179))
[root@SplunkPROD bin]# firewall-cmd --list-ports
443/tcp 8000/tcp 8089/tcp 8191/tcp 9997/tcp 8000/udp 9997/udp
[root@SplunkPROD bin]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 8000/tcp 8000/udp 8089/tcp 8191/tcp 443/tcp 9997/tcp 9997/udp
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@SplunkPROD bin]#
Web.conf file configuration:

[settings]
enableSplunkWebSSL = true
httpport = 8000

##############################

[root@SplunkPROD bin]# ./splunk enable web-ssl
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Your session is invalid. Please login.
Splunk username: jwestbank
Password:
You need to restart the Splunk Server (splunkd) for your changes to take effect.
[root@SplunkPROD bin]# ./splunk restart
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
...                                                        [ OK ]
Stopping splunk helpers...
                                                           [ OK ]
Done.

Splunk> Now with more code!

Checking prerequisites...
    Checking http port [8000]: open
    Checking mgmt port [8089]: open
    Checking appserver port [127.0.0.1:8065]: open
    Checking kvstore port [8191]: open
    Checking configuration... Done.
    Checking critical directories... Done
    Checking indexes...
        Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
    Done
    Checking filesystem compatibility... Done
    Checking conf files for problems... Done
    Checking default conf files for edits...
    Validating installed files against hashes from '/opt/splunk/splunk-9.3.0-51ccf43db5bd-linux-2.6-x86_64-manifest'
    All installed files intact. Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done                                                       [ OK ]

Waiting for web server at https://127.0.0.1:8000 to be available............
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
. Done

If you get stuck, we're here to help. Look for answers here: http://docs.splunk.com
The Splunk web interface is at https://SplunkPROD:8000
Our Splunk ingestion for eStreamer events appears to be getting overwhelmed by the amount of data we receive. Currently, our ingestion averages over 10,000 events per second, and Cisco support indicates that the existing Splunk app we've been using cannot handle that volume. Is there an approach we can use to support that interface at that volume?

Currently, we are using this app: https://splunkbase.splunk.com/app/3662
Should we be using this app instead? https://splunkbase.splunk.com/app/7404
And if we switch apps, will the new one (7404) be able to keep up with the data transfer volume?
I was asking whether I have to use the code below exactly as-is: are these predefined constants, or do I have to edit them and add my own values? For example:

INGEST_EVAL = host := json_extract(_raw, "host.name")
(should "host.name" be my actual host name, e.g. host.ABC?)

INGEST_EVAL = _raw := json_extract(_raw, "message")
(is _raw predefined, or do we need to add something here? And the same for "message": do we have to add the whole JSON?)

in props.conf:

[source::http:logstash]
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_raw

in transforms.conf:

[securelog_set_default_metadata]
INGEST_EVAL = host := json_extract(_raw, "host.name")

[securelog_override_raw]
INGEST_EVAL = _raw := json_extract(_raw, "message")
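For what it's worth, here is a minimal sketch of what those two settings do, assuming a hypothetical incoming Logstash event shaped like the JSON below. host and _raw are predefined Splunk fields that you assign to, while "host.name" and "message" are JSON paths inside the event itself, not placeholders to replace with your own host name:

{"host": {"name": "web01.example.com"}, "message": "Jan 01 12:00:00 web01 sshd[123]: Accepted password for alice"}

[securelog_set_default_metadata]
# sets the Splunk host field from the event's host.name key,
# i.e. host becomes "web01.example.com" for the sample above
INGEST_EVAL = host := json_extract(_raw, "host.name")

[securelog_override_raw]
# replaces the indexed _raw with just the inner "message" string,
# dropping the surrounding JSON wrapper
INGEST_EVAL = _raw := json_extract(_raw, "message")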
Do you have at least 6 nodes (3+3) for those two SHCs? If yes, then you should do the bootstraps first, e.g. for the first three nodes, selecting one of those as captain. Then do the same for the last three nodes and select one of those to be captain. There are clear instructions on how to do it for one SHC on docs.splunk.com. But why do you need two SHCs with only one deployer? Do you have tens of members on both SHCs, or what is your business reason for that configuration?
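Roughly, the bootstrap for the first trio looks like this (hostnames, ports, and credentials are placeholders; run the init on every member, then the bootstrap once, on the node you want as captain):

# on each of sh1..sh3
splunk init shcluster-config -mgmt_uri https://sh1.example.com:8089 \
    -replication_port 9887 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -secret yourpass4symmkey -shcluster_label shc1
splunk restart

# once, on the intended captain
splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
    -auth admin:changeme

# repeat the same pattern for sh4..sh6 with its own servers_list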
Has Splunk started the web GUI process, and is it listening on that host, or is it totally down?
Hi. I'm quite sure that you are on the correct path. Unfortunately, I don't currently have at hand any examples where that or quite similar options have been done. You probably know the Simple XML example app and its debug part? If not, there are some instructions on how to use it: https://data-findings.com/wp-content/uploads/2024/09/HSUG-20240903-Tiia-Ojares.pdf Anyhow, use those init and finalize parts on searches, and also check the token and form.token values. Sometimes you must use one, and at other times/for other purposes the other. That Simple XML debug part also helps you understand how and when those differ. I'll try to find those examples later if you cannot solve this and nobody else can help you beforehand. r. Ismo
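As a rough illustration of the token vs. form.token distinction (the dashboard and token names here are made up, not from the original thread):

<form version="1.1">
  <!-- init runs once at load time: a good place to set token defaults -->
  <init>
    <set token="region">emea</set>
  </init>
  <fieldset>
    <!-- this input sets both $region$ and $form.region$ -->
    <input type="dropdown" token="region">
      <label>Region</label>
      <choice value="emea">EMEA</choice>
      <choice value="amer">AMER</choice>
      <default>emea</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- $region$ is the resolved token value; $form.region$ is the raw input value -->
          <query>index=_internal sourcetype=splunkd region=$region$ | stats count</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>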
You must put those configurations on the first full Splunk Enterprise instance on the path from the source to the indexers. If you have a separate HF which is the endpoint for those UDP feeds, then they must go there. If that is a UF and you then send events through an IHF, then add the conf there. And if there isn't any HF before the indexers, then add those configurations on all indexers. And as said, it's better to use a real syslog server to terminate syslog feeds and then use e.g. a UF to collect the events from files, or use SC4S for that.
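For reference, a minimal inputs.conf sketch for terminating a UDP syslog feed directly on the HF (port, index, and sourcetype are example values):

[udp://514]
sourcetype = syslog
index = network
connection_host = ip
# keep the original event intact instead of prepending a timestamp/host header
no_appending_timestamp = true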
The best instruction for real-time alerts is: never, ever use them! Usually they generate more issues inside and outside of Splunk, e.g. in email systems, when there are some mistakes in the configuration, or even without any mistakes. Instead of real-time alerts you should use scheduled alerts. Just select a suitable schedule for each individual alert. When you are creating those, check whether there is regularly some latency when indexing events, and if so, adjust earliest and latest based on that. For sending emails, you could add the needed configuration to the base Splunk email settings, or add some alert actions to do it. Personally, I prefer to add a link to the alert in its body and never put real data into it. From time to time there could be some static or similar content, but never send real events outside of Splunk. More instructions can be found in community/answers and also in the Alerting manual.
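A minimal savedsearches.conf sketch of that pattern (stanza name, search, and addresses are made-up examples; the offset on earliest/latest leaves room for indexing latency):

[Errors in app logs]
search = index=app_logs log_level=ERROR | stats count by host
enableSched = 1
# run every 15 minutes over a 15-minute window shifted back 5 minutes
cron_schedule = */15 * * * *
dispatch.earliest_time = -20m@m
dispatch.latest_time = -5m@m
# trigger when the search returns anything
counttype = number of events
relation = greater than
quantity = 0
# email a link to the results rather than the raw events
action.email = 1
action.email.to = oncall@example.com
action.email.sendresults = 0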
Hello, I am creating an alert and want to make sure that the scheduled or real-time setup sends an email out once the query finds a match. What is the best configuration for an alert to send an email as soon as the criteria of the query match? Thank you!
Here are instructions on how to back up and restore the deployer: https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/BackuprestoreSHC I think that you could just replicate the old deployer to the new one and change its name, IP, and GUID. I'm not sure if there is something in the bundles or state files that needs to change, etc.? Then change the replication URL on all those SHC nodes which you are moving to this new cluster so that it points to the new deployer. After that, just bring those nodes up one by one and wait until each one is up with no errors before starting the next. Before you start this, take offline backups, including the KV store, and stop all the nodes which belong to the SHC that you are moving away from the original deployer. If there is no need to keep those nodes in an SHC anymore, then just remove them from the SHC and use them as individual SHs. Note: I haven't tried this myself, so you take the risk yourself!
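The member-side step could look roughly like this (hostnames and credentials are placeholders, and as noted above this is untested):

# on the old deployer: back up the configuration bundle source
tar -czf shcluster-etc-backup.tar.gz -C /opt/splunk/etc shcluster

# on each SHC member being moved: point it at the new deployer
/opt/splunk/bin/splunk edit shcluster-config \
    -conf_deploy_fetch_url https://new-deployer.example.com:8089 \
    -auth admin:changeme

# restart and verify this member before moving on to the next one
/opt/splunk/bin/splunk restart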
Could you explain a bit more what you mean by this? @gcusello wrote: "The only way is, if it's wrong, to modify the timestamp format to take the second one and not the one added by the syslog receiver." Is the approach of having

EXTRACT-extracted_time = \b(?P<extracted_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+(-\d{2}:\d{2})?)
EVAL-_time = strptime(extracted_time, "%Y-%m-%dT%H:%M:%S.%6N%z")

in props.conf right?
That doesn't help. _time still represents the current time, not the time in the event.
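Worth noting: EXTRACT and EVAL in props.conf are search-time settings, while _time is assigned when the event is parsed at index time, so this is usually fixed on the parsing tier (HF or indexer) instead. A hedged sketch, assuming the embedded timestamp looks like 2025-01-01T12:00:00.123456-05:00 and follows a receiver-added syslog header (the stanza name and the TIME_PREFIX regex are assumptions about your data):

[my:syslog:sourcetype]
# skip over the receiver-added timestamp and host to reach the embedded timestamp
TIME_PREFIX = ^\S+\s+\S+\s+
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 40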
By default, this is configured to look only at the SPLUNK_HOME mount point. I don't know if there is a way to add additional mount points there. If you need to monitor other mount points and other Linux statistics, I think you should use e.g. the *nix TA to collect logs and metrics: https://splunkbase.splunk.com/app/833
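If you go that route, the add-on ships a df scripted input that reports all mounted file systems; enabling it in a local inputs.conf might look like this (interval and index are example values):

[script://./bin/df.sh]
sourcetype = df
interval = 300
index = os
disabled = 0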
Ahh... thanks, this was killing me. I was also having trouble with the eval statement checking an array value (it kept erroring out), but it seems spath was the key there as well. This ended up working for me:

index=someindex
| spath output=sentSubject "Item.Subject"
| spath output=receivedSubject "AffectedItems{}.Subject"
| eval subject = if(isnull(sentSubject), receivedSubject, sentSubject)
| table UserId, subject, Operation, _time
There has never been a requirement for a specific client cert, as HEC was designed to be "open" from that point of view. The real "authentication" is done with the given token. That way there is no need to share any TLS client certificates with the source side.
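For reference, a typical HEC call looks like this (host, port, and the token value are placeholders); the token in the Authorization header is the only client-side credential:

curl -k https://splunk.example.com:8088/services/collector/event \
    -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
    -d '{"event": "hello world", "sourcetype": "manual"}'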
Introspection seems to give me data.mount_point only for "/" and not for the other file systems that I can see via the Linux "df -kh" command. How come?