All Posts


There are many ways to look at search performance, particularly of Windows event log data. You should get comfortable with the search job properties page. In particular, look at the phase1 search and the scanCount. scanCount is the number of events that were scanned to return the results.

With Windows event log data particularly, you should understand that in order for the search to know whether the process_name field is not what you want, it has to look at all events, because process_name is a field that is mapped at search time by the Windows TA. Minimising the number of events you look at (scanCount) will always help performance.

Look at this presentation, which shows how to use TERM() effectively; it can be a significant benefit to your searches: https://conf.splunk.com/files/2020/slides/PLA1089C.pdf

For example, your initial search

index=windows source=XmlWinEventLog:Security process_name=ipconfig.exe

can most likely be significantly improved just by writing

index=windows source=XmlWinEventLog:Security TERM(ipconfig) process_name=ipconfig.exe

because instead of pulling every event out to see whether the Windows TA has mapped a piece of the raw event to the process_name field, Splunk will ONLY look at the events that contain the term ipconfig in the raw event. Given that ipconfig is a less frequently used command, your scanCount will drop significantly. In the search log from the Inspect Job page, search for LISPY and you can see how the parser has interpreted your search.

In your other example of != vs NOT, take a look at the phase0 search in the job properties. You will no doubt see a significant difference in the expanded search.

There are other forms of "filters", such as subsearches and lookups, but I would say that there is not often a one-size-fits-all approach to optimising your searches. It frequently depends on your data, the event count, and the cardinality of the values you get back for the fields you're trying to exclude.
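As a rough way to see the indexed-term effect for yourself, a hedged sketch using tstats, which reads only indexed terms and never pulls full events (index and source names are taken from the example above; adjust for your environment):

```spl
| tstats count
    where index=windows source="XmlWinEventLog:Security" TERM(ipconfig)
    by _time span=1d
```

Because tstats works only on indexed fields and terms, it cannot split by the search-time process_name field, but its count gives you a quick upper bound on how many events the TERM(ipconfig) filter would let through to the slower field extraction.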
Lookups are often a good way to filter data, particularly while your data is still being searched on the indexer tier, i.e. before a transforming command has sent the data to the search head. So it can be more efficient to do this type of logic:

index=windows source=XmlWinEventLog:Security
| lookup process_names.csv process_name OUTPUT is_this_one_i_want
| where isnotnull(is_this_one_i_want)

which keeps only the events whose process_name is included in your lookup (use isnull() instead if you want to drop the matches). Note that this is a poor example, as it would grab all events and then filter, but the point is that it can be more efficient to first limit your data set in the primary search and then filter with a lookup to remove other events, rather than writing a really complex set of up-front conditions.
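For completeness, a minimal sketch of what the lookup file in that example might contain (the file name and the is_this_one_i_want flag field come from the example above and are hypothetical):

```csv
process_name,is_this_one_i_want
ipconfig.exe,1
whoami.exe,1
netstat.exe,1
```

Upload it via Settings > Lookups > Lookup table files and make sure it is shared to the app where the search runs.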
Thanks for the clarification, this helps a lot. Do you happen to know how I can get in touch with the developer to request access? I couldn’t find a contact listed on Splunkbase. Appreciate your help!
I found some more information. When I go: Apps -> DBX -> search -> save as alert, I get the Output Name field. But if I go: Apps -> other app (like Search & Reporting) -> search -> save as alert, I don't get the Output Name field.

Any ideas what that could be?

Kind Regards, Andre
I have tried to write a query that outputs the transaction counts and response times, but I am not sure how to group it by API and date. Here is what I have written so far:

index=my_app sourcetype=my_logs:hec (source=my_Logger) msgsource="*" msgtype="*MyClient*" host=*
    [| inputlookup My_Application_Mapping.csv | search Client="SomeBank" | table appl ]
| rex field=elapsed "^(?<minutes>\\d+):(?<seconds>\\d+)\\.(?<milliseconds>\\d+)"
| eval total_seconds = (tonumber(seconds) * 1000)
| eval total_milliseconds = (tonumber(minutes) * 60 * 1000) + (tonumber(seconds) * 1000) + (tonumber(milliseconds))
| timechart span=1m cont=f usenull=f useother=f
    count(total_milliseconds) as AllTransactions,
    avg(total_milliseconds) as AvgDuration
    count(eval(total_milliseconds<=1000)) as "TXN_1000",
    count(eval(total_milliseconds>1000 AND total_milliseconds<=2000)) as "1sec-2sec"
    count(eval(total_milliseconds>2000 AND total_milliseconds<=5000)) as "2sec-5sec"
    count(eval(total_milliseconds>5000)) as "5sec+"
| timechart span=1d sum(AllTransactions) as "Total" avg(AvgDuration) as AvgDur sum(TXN_1000) sum(1sec-2sec) sum(2sec-5sec) sum(5sec+)

`msgsource` has my API name. The output of the above query is:

_time | Total | AvgDur | sum(TXN_1000) | sum(1sec-2sec) | sum(2sec-5sec) | sum(5sec+)
2025-07-10 | 10000 | 162.12312322 | 1000 | 122 | 1

I want the final output to be:

_time | API | Total | AvgDur | sum(TXN_1000) | sum(1sec-2sec) | sum(2sec-5sec) | sum(5sec+)
2025-07-10 | RetrievePay2 | 10000 | 162.12312322 | 1000 | 122 | 1
2025-07-10 | RetrievePay5 | 2000 | 62.12131244 | 333 | 56 | 2
2025-07-09 | RetrievePay2 | 10000 | 162.12312322 | 1000 | 122 | 1
2025-07-09 | RetrievePay5 | 2000 | 62.12131244 | 333 | 56 | 2

Any help is appreciated. Thanks!
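One hedged sketch of the per-API, per-day grouping this post asks for, assuming msgsource holds the API name as stated: replace the two timechart calls with bin plus stats by. The leading ... stands for the unchanged base search and eval lines from the query above.

```spl
...
| bin _time span=1d
| stats count as Total
        avg(total_milliseconds) as AvgDur
        count(eval(total_milliseconds<=1000)) as TXN_1000
        count(eval(total_milliseconds>1000 AND total_milliseconds<=2000)) as "1sec-2sec"
        count(eval(total_milliseconds>2000 AND total_milliseconds<=5000)) as "2sec-5sec"
        count(eval(total_milliseconds>5000)) as "5sec+"
    by _time, msgsource
| rename msgsource as API
```

Unlike timechart, stats can split by several fields at once, which produces one row per day per API, matching the desired output shape.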
@laura  It looks like this app isn't restricted by region, organization type, or license level, but rather by explicit download permissions set by the developer. The developer of the app has decided to make it restricted, so only approved users can download it.
Hello, We’re trying to access the H3 SIEM Logs and Events Compliance Tool (https://splunkbase.splunk.com/app/7928), but are encountering download restrictions even with admin credentials. Can someone confirm if the app is limited by region, organization type, or specific licensing? Thanks in advance!
Here https://conf.splunk.com/files/2019/slides/FN1003.pdf is something to read. It explains how Splunk sees and uses fields when searching events from buckets.
I think I tried that, but I found a solution with mvmap and if/match...
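For anyone landing here later, a runnable sketch of that mvmap-plus-if approach (field names assumed from this thread; the sample values are hypothetical):

```spl
| makeresults
| eval Field1="value1", Field2=split("value1,value2,value3", ",")
``` inside mvmap, the field name refers to the current multivalue item; ```
``` null() drops the item from the result ```
| eval Field2=mvmap(Field2, if(Field2 != Field1, Field2, null()))
```

match() could replace the != comparison if you need pattern-based filtering rather than exact equality.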
Thanks, that's helpful! I was hoping they would be watching here for their app being tagged.
You are probably not aware, but the flow from using the app to discussing issues leads here, to this forum; that is the workflow according to the prompts and UI. You also might not have noticed that that app is the tagged association. Perhaps the creator watches the forums for their own app? I would if I were them. I hope you never reply to one of my questions again. You're as helpful and as welcome as a rotten egg.
| foreach mode=multivalue Field2 [| eval new=if(<<ITEM>>!=Field1,mvappend(new,<<ITEM>>),new)]
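A self-contained way to try that foreach approach (mode=multivalue requires Splunk 9.0 or later; the sample values below are hypothetical):

```spl
| makeresults
| eval Field1="value1", Field2=split("value1,value2,value3", ",")
| foreach mode=multivalue Field2
    [| eval new=if(<<ITEM>> != Field1, mvappend(new, <<ITEM>>), new)]
```

The filtered values land in new rather than overwriting Field2, so rename or reassign afterwards if needed.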
I have this table, for example:

Field1 | Field2
Value1 | value1 value2 value3

Field2 is mv. I want to remove the value that already exists in Field1, so the result would be like this:

Field1 | Field2
Value1 | value2, value3

I didn't see that mvfilter supports this.
We have changed the desired alert to see if another private clone is triggering the message, but we won't know until Tuesday. Let's wait.
Hi @ielshahrori

The issue you have is likely due to a mismatch between the default self-signed certificate's Common Name (CN), which is typically set to the Splunk server's hostname or localhost, and the public IP address used for access. This causes SSL/TLS handshake failures in browsers when attempting HTTPS connections (e.g., https://publicIP:8000), even though basic TCP connectivity (like telnet) succeeds on port 8000. Browsers enforce strict certificate validation, and self-signed certs with CN mismatches often result in "unable to reach" errors without an option to proceed unless explicitly overridden.

If you are using the public IP address over HTTPS, then I assume you do not have a valid trusted certificate that the clients can validate for connecting to Splunk? It is not typical to have an SSL certificate that matches an IP address; instead you should have a hostname with DNS that resolves to the IP address of your Splunk server. Then you will need either a publicly trusted SSL cert, or a self-signed cert whose root CA the clients have trusted on their systems. You can then configure the custom SSL cert in Splunk by updating web.conf (typically in $SPLUNK_HOME/etc/system/local/) with:

[settings]
enableSplunkWebSSL = true
serverCert = <path_to_your_new_cert.pem>
privKeyPath = <path_to_your_new_key.pem>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi livehybrid, Thank you for your response, we are going to work with webhooks. Having a splunk cloud, we are trying to create the URL for our HEC token to configure it in the webhook of Jira. We ... See more...
Hi livehybrid,

Thank you for your response; we are going to work with webhooks. Having Splunk Cloud, we are trying to create the URL for our HEC token to configure it in the webhook of Jira, but we are not sure how to configure the URL with the tokens. As far as I know it is not recommended to put the HEC token in the URL, and even if that were the answer, we are not sure how to do it. If we have more than one HEC token in Splunk, we have to specify the token somehow, no? There could be a mix-up between tokens if we don't specify? Example of HEC URL: "https://http-inputs-mycompany.splunkcloud.com/services/collector/event"

Kindest Regards
I am currently facing an issue accessing the Splunk Web interface over HTTPS. When I configure enableSplunkWebSSL = true in web.conf, the Splunk Web service appears to start normally and port 8000 is open. However, users are unable to reach the interface via the public IP using HTTPS. When I change the configuration to enableSplunkWebSSL = false and use HTTP instead, everything works fine: users can successfully access the Splunk Web interface on the public IP and port 8000.

Additional details:
- There is full network connectivity; telnet to the public IP and port 8000 works.
- The issue is reproducible across different browsers and devices.
- The certificate used is the default self-signed certificate provided by Splunk.
- The Splunk Web service log does not show any fatal errors.
- I need to maintain HTTPS access for security compliance.

Could you please assist in identifying the root cause and provide guidance on how to ensure HTTPS access works properly over the public IP?
The easiest way is to copy your indexes.conf from your cluster manager. Just ensure that it doesn't contain any SmartStore or other unknown targets etc., then create a new app which contains only this indexes.conf and any other files the app needs, and install this app on your HF.

But as said, you should use a real syslog server instead of Splunk tcp/udp inputs for getting a syslog feed into Splunk. Even though Splunk can do it, there are some side effects. Probably the biggest is that you will lose all syslog events while you are restarting the HF, and a restart can take several minutes, unlike with a syslog server or a clustered syslog implementation. You can easily find some old posts where we have discussed this and given hints on how to do it.
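As a rough sketch of what such an app could look like (the app name and index stanza below are placeholders, not taken from this thread; on a real HF you would paste the stanzas copied from the cluster manager instead):

```ini
# $SPLUNK_HOME/etc/apps/hf_indexes/local/indexes.conf
# Copied from the cluster manager, with SmartStore/remote
# storage settings removed.
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thawedb
```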
@phamanh1652  Have you created the index called "trellix"? Also check the Splunk internal logs on your Splunk Cloud search head. You can use this add-on to integrate your Trellix MVISION; it supports both Splunk Cloud and Splunk Enterprise: https://splunkbase.splunk.com/app/7022
The allowedDomainList setting can be in any alert_actions.conf file on your search head(s).  Precedence rules apply, however.  See https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Wheretofind... See more...
The allowedDomainList setting can be in any alert_actions.conf file on your search head(s).  Precedence rules apply, however.  See https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Wheretofindtheconfigurationfiles
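For reference, a minimal sketch of where that setting lives (the domains below are placeholders):

```ini
# alert_actions.conf, e.g. in $SPLUNK_HOME/etc/system/local/
# or an app's local/ directory (precedence rules apply)
[email]
allowedDomainList = example.com, example.org
```

With this in place, email alert actions can only send to recipient addresses in the listed domains.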
@ws  In that case, from what I remember, you mainly need to update these .confs:
- server.conf: update serverName = splunk.test2.com
- inputs.conf: update host = splunk.test2.com if it was set with the old name
- web.conf: update mgmtHostPort if it references the old name
- SSL certs: regenerate certs with the new hostname if HTTPS is used

Also check any network devices configured with this hostname/IP, and update DNS records and firewall rules if applicable.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
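As a minimal sketch of the first item (the hostname comes from the reply above; the file path is the usual default location):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = splunk.test2.com
```

A Splunk restart is needed for the change to take effect.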