All Posts

Is the forwarder using indexer discovery?
@PickleRick @richgalloway Can we make changes to the Splunk TA Windows app's inputs.conf on the deployment server? Some configs in the inputs.conf got messed up; how can we restore the previous configs? Thanks...
Thanks a lot, I will try to configure it with the LB and test it out. So the forwarders will also send data to the LB URL? Dhana
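Yes - with a load balancer in front of the indexers, the forwarders' outputs.conf points at the LB address instead of individual indexers. A minimal sketch (the hostname, port, and group name below are hypothetical placeholders, not from this thread):

```
# Hypothetical outputs.conf on the universal forwarder.
# "splunk-lb.example.com:9997" stands in for your LB's VIP/DNS name.
[tcpout]
defaultGroup = via_lb

[tcpout:via_lb]
server = splunk-lb.example.com:9997
```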
Hi @ITWhisperer - I added a common log line to all the events so they can be picked up consistently, which resolved it.
So, in my case, if I need to put that "long search" in a drop-down list, I should just use a regular search, not a base search? Thanks
I have already found a solution: Data().DokumentId
You can't have more. You might be able to work around this limitation by writing a custom command - it's bound by different rules than a scripted lookup.
Base searches _should_ be transforming searches. You can get away with normal event-returning searches, but only when they return a reasonably small set of results. It's not a good idea to search for 100k events in your base search (an even worse idea if every event is a huge 100-field JSON) and then have refining searches transform that base set of events.

For example, say you have users logging in to your system, and you want stats with counts aggregated per day in one table and split by username in another. You could use two separate searches, but with a base search you spawn just one search over your source index. The proper approach here is _not_ to do a raw search for all logins and then run two separate refining stats searches (one by _time, the other by username). Instead, the base search should do a stats count by both of those fields, and the refining searches then sum per user or per day.
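A sketch of that pattern (the index, sourcetype filter, and field names are made up for illustration):

```
Base search:
    index=auth action=login
    | bin _time span=1d
    | stats count by _time, user

Post-process search (per day):
    | stats sum(count) as logins by _time

Post-process search (per user):
    | stats sum(count) as logins by user
```

The base search already aggregates down to one row per day per user, so the post-process searches only re-sum a small result set instead of re-reading raw events.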
These are two separate problems. One is auditing commands on your local system (which can be done using several methods, each with their pros and cons; the typical method is auditd). The other is sending the data from your box to Splunk.

If you want to use auditd, you need to create auditd rules to track the spawning of new processes. This is out of scope for this forum, and there are many documents around the internet on how to do that, depending on what data you want logged and in which situations (logging every single process creation might be overkill).

Typically auditd writes its logs to /var/log/audit/audit.log, so you need to set up rsyslog to read that file (define an input of type imfile). To send the data to the HF, you can either use a simple tcp or udp output with a corresponding input defined on your HF (not a recommended way), or you can create a HEC input on your HF and send to it via an omhttp output in rsyslog (provided you have a sufficiently new version - not all distro-shipped versions contain that output module, especially older ones). If you want to be selective, you can tie a ruleset containing that HF output to the audit.log file input.
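Roughly, the imfile/omhttp/ruleset wiring could look like this (an untested sketch - the hostname and token are placeholders, and omhttp parameter names can vary by rsyslog version, so check the imfile and omhttp docs for your release):

```
# Hypothetical /etc/rsyslog.d/audit-to-hf.conf
module(load="imfile")
module(load="omhttp")

# Read auditd's log file and bind it to a dedicated ruleset
input(type="imfile"
      file="/var/log/audit/audit.log"
      tag="auditd"
      ruleset="send_to_hf")

# Forward only this input's messages to the HF's HEC endpoint
ruleset(name="send_to_hf") {
    action(type="omhttp"
           server="my-hf.example.com"        # hypothetical HF hostname
           serverport="8088"
           restpath="services/collector/raw"
           httpheaderkey="Authorization"
           httpheadervalue="Splunk <your HEC token>")
}
```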
You asked for a regex so that is what I gave you.  Add quotes and other text as necessary.
What format are you pulling the logs in? Traditional or XML?

* $XmlRegex: Use this key for filtering when you render Windows Event Log events in XML by setting the 'renderXml' setting to "true". Search the online documentation for "Filter data in XML format with the XmlRegex key" for details.
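If you are rendering XML, the filter stanza would look roughly like this (the channel and regex are placeholders; check the inputs.conf spec and the documentation page mentioned above for the exact $XmlRegex syntax before using it):

```
# Hypothetical inputs.conf stanza - sketch only
[WinEventLog://Security]
renderXml = true
# Filter against the XML-rendered event text
blacklist1 = $XmlRegex{<EventID>4688</EventID>}
```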
Synchronous processing is the norm for a streaming command.  Perhaps you want a reporting command.  See the bottom of https://docs.splunk.com/DocumentationStatic/PythonSDK/1.7.4/index.html for command types.
@richgalloway, is this with quotes?

blacklist5 = EventCode="4688" Message="(ParentProcessName.+Microsoft Monitoring Agent\\Agent\\MonitoringHost\.exe)"
I've got a new deployment of 9.1.1, upgraded from a prior version (I can't remember which off the top of my head). I am running Windows 2019, if that is relevant.

When I log in I get the following message:

Failed to upgrade KV Store to the latest version. KV Store is running an old version, service(36). Resolve upgrade errors and try to upgrade KV Store to the latest version again. Learn more. 11/20/2023, 12:04:48 PM

If I shut down splunkd and then run splunk.exe migrate migrate-kvstore -v, I get the following error:

[App Key Value Store migration] Starting migrate-kvstore.
Started standalone KVStore update, start_time="2023-11-20 12:00:29".
failed to add license to stack enterprise, err - stack already has this license, cannot add again
[App Key Value Store migration] Checking if migration is needed. Upgrade type 1. This can take up to 600 seconds.
2023-11-20T17:00:30.187Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2023-11-20T17:00:30.193Z F CONTROL [main] Failed global initialization: InvalidSSLConfiguration: CertAddCertificateContextToStore Failed The object or property already exists.
mongod exited abnormally (exit code 1, status: exited with code 1) - look at mongod.log to investigate.
KV Store process terminated abnormally (exit code 1, status exited with code 1). See mongod.log and splunkd.log for details.
WARN: [App Key Value Store migration] Service(40) terminated before the service availability check could complete. Exit code 1, waited for 0 seconds.
App Key Value Store migration failed, check the migration log for details. After you have addressed the cause of the service failure, run the migration again, otherwise App Key Value Store won't function.

No entries are ever posted to mongod.log. Just to verify, I cleared out the var/log/splunk directory. After moving the folder and running the command, the folders are generated, but the mongod.log file is never created.
Any advice on how to get the KV Store to migrate?
There's no Venn option, but you can use the Sankey diagram to visualize relations between categories, where the thickness of the links between bars reflects the overlap. Note that Dashboard Studio and the classic version of the chart render differently.
ParentProcessName.+Microsoft Monitoring Agent\\Agent\\MonitoringHost\.exe
Hi, can you please give me an example of the above regex without groups? Thanks
First count the number of messages per recipient, then filter out those with fewer than 10.

| eventstats dc(emailID) as msg_count by recip
| where msg_count >= 10
| stats dc(emailID) as email_count
Use the distinct_count function of the stats command.

| stats dc(recip) as recip_count by attachment_type
I don't know why Splunk is not matching that event. The regex looks good to me. Perhaps try without the groups? It shouldn't matter, but perhaps it will, and the groups are not necessary.
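Concretely, "without the groups" just means dropping the parentheses from the stanza shown earlier in the thread - something like this (an untested sketch):

```
# inputs.conf - same filter with the capture group removed
blacklist5 = EventCode="4688" Message="ParentProcessName.+Microsoft Monitoring Agent\\Agent\\MonitoringHost\.exe"
```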