All Posts


The Machine Agent is starting but it is not reporting %CPU, %Busy, Free Memory, etc. It does show the processes running on the server, but not the metrics. Please suggest.
Hi all, I have 2 multiselect dropdowns, one dependent on the other. The first dropdown has groups and the second has subgroups. I am having a problem appending the subgroup value to its respective group. For example, let's assume the group input has values a, b, c, and only c has a subgroup with values x and y. I want to append that subgroup as c_x and c_y and pass it to the query. I tried adding a suffix in the dropdown itself, but when the tokens are selected in any order it adds the subgroup to the whole token; that is, if I select b, c, a it produces the subgroups b,c,a_x and b,c,a_y. Any suggestions on how I can correctly append the subgroup to its respective group and use it in the query?
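(A minimal sketch of one possible approach, assuming a Simple XML dashboard; the token name and choices below are hypothetical. Simple XML multiselects support valuePrefix/valueSuffix, which are applied to each selected value individually, unlike prefix/suffix, which wrap the whole token string.)

  <input type="multiselect" token="subgroup_tok">
    <label>Subgroup</label>
    <choice value="x">x</choice>
    <choice value="y">y</choice>
    <valuePrefix>c_</valuePrefix>
    <delimiter>,</delimiter>
  </input>

With this, selecting x and y yields c_x,c_y in $subgroup_tok$ regardless of selection order, because the prefix is attached per value rather than to the joined token.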
25/10/2023    6000
31/10/2023    0
6/11/2023     2500
6/11/2023     500
12/11/2023    -7800
16/11/2023    500

I have the table above and I'm trying to create a line chart that starts at 6000, then runs as a straight line until it hits the date 6/11/2023, at which point it turns 90 degrees and goes up to 8500, and so on: going up at 90 degrees for positive values and down at 90 degrees for negative values, keeping a running total. Thanks,
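(A minimal SPL sketch of one way to build that running total, assuming the two columns are fields named date and value; both field names are assumptions.)

  | eval _time=strptime(date, "%d/%m/%Y")
  | sort 0 _time
  | streamstats sum(value) as running_total
  | table _time running_total

Plotted as a line chart, streamstats carries the cumulative sum forward, so the line steps up on positive values and down on negative ones while staying flat in between.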
Hi, I am fairly new to AppDynamics and I am a bit puzzled by some behaviour with Node.js Transaction Snapshots. Could anyone explain the following? An HTTP request comes into a Node.js application, which then makes another HTTP request to an external service. All the calls are async and there is no specific correlation set up. I am expecting one outbound request for each inbound request; however, I sometimes see many outbound request calls. Is this because AppD is just sampling the process at the time of the snapshot and showing all outbound calls occurring at that time? Many thanks, H
Is the forwarder using indexer discovery?
@PickleRick @richgalloway, can we make changes to the Splunk TA for Windows app's inputs.conf on the deployment server? Some configs were messed up in the inputs.conf; how can we restore the previous configs? Thanks...
Thanks a lot, I will try to configure it with the LB and test it out. So the forwarders will also be sending data to the LB URL? Dhana
Hi @ITWhisperer, I added a log to all the events so that they can be picked up commonly, which resolved it.
So, in my case, if I need to put that "long search" in a dropdown list, I should just use a regular search, not a base search? Thanks
I have already found a solution: Data().DokumentId
You can't have more. You might be able to work around this limitation by writing a custom command - it's bound by different rules than a scripted lookup.
Base searches _should_ be transforming searches. You can get away with normal event-returning searches, but only when they return a reasonably small set of results. It's not a good idea to search for 100k events in your base search (an even worse idea if every event is a huge 100-field JSON) and then have refining searches transform the base set of events. So, for example, say you have users logging in to your system and you want counts aggregated per day in one table and split by username in another. You could use two separate searches, or you can use a base search to spawn just one search over your source index. But the proper approach here would _not_ be a raw search for all logins followed by two separate refining stats searches (one by _time, the other by username); instead, the base search itself should do a stats count by both of those fields, and your refining searches then just sum per user or per day.
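(A minimal sketch of that pattern; the index, sourcetype, and field names are assumptions.)

  Base search:
  index=auth sourcetype=login | bin _time span=1d | stats count by _time, user

  Refining search, per day:
  | stats sum(count) as count by _time

  Refining search, per user:
  | stats sum(count) as count by user

The base search reduces the raw events to one row per day/user pair, so each refining search only re-aggregates a small result set instead of re-reading events.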
These are two separate problems. One is auditing commands on your local system (which can be done using several methods, each with its pros and cons; the typical method is auditd). The other is sending the data out from your box to Splunk. If you want to use auditd, you need to create auditd rules to track the spawning of new processes. That is out of scope for this forum, and there are many documents all around the internet on how to do it depending on what data you want logged and in which situations (logging every single process creation might be overkill). Typically auditd writes its logs to /var/log/audit/audit.log, so you need to set up your rsyslog to read that file (you need to define an input of the imfile type). And to send the events to the HF, you can either use a simple tcp or udp output with a corresponding input defined on your HF (not the recommended way), or you can create a HEC input on your HF and send to it via an omhttp output in your rsyslog (provided you have a sufficiently new version - not all distro-shipped versions contain that output module, especially older ones). If you want to be selective, you might tie a ruleset containing that HF output to the audit.log file input.
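(A minimal rsyslog sketch of the imfile input tied to a dedicated ruleset, using the plain tcp output for brevity; the HF hostname and port are assumptions, and the HEC variant would swap the action for an omhttp one per your rsyslog version's docs.)

  # load the file-input module and tail the audit log
  module(load="imfile")
  input(type="imfile"
        File="/var/log/audit/audit.log"
        Tag="auditd:"
        ruleset="toSplunkHF")

  # everything read from that file goes only to the HF
  ruleset(name="toSplunkHF") {
      action(type="omfwd" target="hf.example.com" port="5514" protocol="tcp")
  }

Binding the input to its own ruleset keeps the audit events out of your default syslog outputs, which is the selectivity mentioned above.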
You asked for a regex so that is what I gave you.  Add quotes and other text as necessary.
What format are you pulling the logs in? Traditional or XML?

* $XmlRegex: Use this key for filtering when you render Windows Event log events in XML by setting the 'renderXml' setting to "true". Search the online documentation for "Filter data in XML format with the XmlRegex key" for details.
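(A minimal inputs.conf sketch of that key, assuming XML rendering is enabled; the channel and pattern here are hypothetical, and the exact quoting should be verified against the inputs.conf spec for your version.)

  [WinEventLog://Security]
  renderXml = true
  blacklist1 = $XmlRegex="<EventID>4688</EventID>.*MonitoringHost\.exe"

With renderXml = true the plain EventCode/Message keys no longer match, since the event body is XML; the $XmlRegex key matches against that XML text instead.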
Synchronous processing is the norm for a streaming command.  Perhaps you want a reporting command.  See the bottom of https://docs.splunk.com/DocumentationStatic/PythonSDK/1.7.4/index.html for command types.
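(A minimal sketch of a reporting command built on the Python SDK's splunklib.searchcommands; the command class and the summed field name are hypothetical.)

  #!/usr/bin/env python
  import sys
  from splunklib.searchcommands import dispatch, ReportingCommand, Configuration

  @Configuration(requires_preop=False)
  class TotalCommand(ReportingCommand):
      @Configuration()
      def map(self, records):
          # optional distributed pre-op at the indexers; here a pass-through
          for record in records:
              yield record

      def reduce(self, records):
          # runs on the search head over the complete set of input records,
          # rather than streaming them through one at a time
          total = 0
          for record in records:
              total += int(record.get("count", 0))
          yield {"total": total}

  dispatch(TotalCommand, sys.argv, sys.stdin, sys.stdout, __name__)

Because reduce() receives the full result set, a reporting command can wait for all events before emitting output, which is the behaviour a streaming command deliberately avoids.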
@richgalloway, is this with quotes?

blacklist5 = EventCode="4688" Message="(ParentProcessName.+Microsoft Monitoring Agent\\Agent\\MonitoringHost\.exe)"
I've got a new deployment of 9.1.1, upgraded from a prior version; I can't remember which off the top of my head. I am running Windows 2019, btw, if there is any relevance. When I log in I get the following message:

Failed to upgrade KV Store to the latest version. KV Store is running an old version, service(36). Resolve upgrade errors and try to upgrade KV Store to the latest version again. Learn more. 11/20/2023, 12:04:48 PM

If I shut down splunkd, then run

splunk.exe migrate migrate-kvstore -v

I get the following error:

[App Key Value Store migration] Starting migrate-kvstore.
Started standalone KVStore update, start_time="2023-11-20 12:00:29".
failed to add license to stack enterprise, err - stack already has this license, cannot add again
[App Key Value Store migration] Checking if migration is needed. Upgrade type 1. This can take up to 600 seconds.
2023-11-20T17:00:30.187Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2023-11-20T17:00:30.193Z F CONTROL [main] Failed global initialization: InvalidSSLConfiguration: CertAddCertificateContextToStore Failed The object or property already exists.
mongod exited abnormally (exit code 1, status: exited with code 1) - look at mongod.log to investigate.
KV Store process terminated abnormally (exit code 1, status exited with code 1). See mongod.log and splunkd.log for details.
WARN: [App Key Value Store migration] Service(40) terminated before the service availability check could complete. Exit code 1, waited for 0 seconds.
App Key Value Store migration failed, check the migration log for details. After you have addressed the cause of the service failure, run the migration again, otherwise App Key Value Store won't function.

No entries are ever posted to mongod.log. Just to verify, I cleaned out the var/log/splunk directory: I moved the folder, and upon running the command the folders are regenerated, but the mongod.log file is never created. Any advice on how to get the KV Store to migrate?
There's no Venn option, but you can use the Sankey to visualize relations between different categories, where the thickness of the links between bars reflects the overlap. Note that Dashboard Studio and the classic version of the chart render differently.