All Topics


Hi All, I have an input lookup file with two fields: the first field contains a path and the second field is an HTTP code for that path. Example: /s/a/list 403 ; /s/b/list 504. I need help forming a search query that excludes events matching the path/HTTP code pairs in this lookup file. When I run a query like index=a sourcetype=*b*, it should exclude the paths and their specific HTTP codes listed in the lookup and display output for all other paths and HTTP codes. Please help.

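A minimal sketch of one way to do this, assuming the lookup is saved as path_exclusions.csv with header fields path and httpcode, and assuming the events expose fields named uri_path and status (rename to whatever your field extractions actually produce):

index=a sourcetype=*b*
    NOT [ | inputlookup path_exclusions.csv | rename path as uri_path, httpcode as status | fields uri_path status ]
| table uri_path status

The subsearch expands into an OR of (uri_path, status) pairs, and the NOT excludes any event matching one of those pairs.
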
I have tried to write a query that outputs the transaction counts and response times, but I am not sure how to group it by API and date. Here is what I have written so far:

index=my_app sourcetype=my_logs:hec (source=my_Logger) msgsource="*" msgtype="*MyClient*" host=*
    [ | inputlookup My_Application_Mapping.csv | search Client="SomeBank" | table appl ]
| rex field=elapsed "^(?<minutes>\\d+):(?<seconds>\\d+)\\.(?<milliseconds>\\d+)"
| eval total_seconds = (tonumber(seconds) * 1000)
| eval total_milliseconds = (tonumber(minutes) * 60 * 1000) + (tonumber(seconds) * 1000) + (tonumber(milliseconds))
| timechart span=1m cont=f usenull=f useother=f count(total_milliseconds) as AllTransactions, avg(total_milliseconds) as AvgDuration count(eval(total_milliseconds<=1000)) as "TXN_1000", count(eval(total_milliseconds>1000 AND total_milliseconds<=2000)) as "1sec-2sec" count(eval(total_milliseconds>2000 AND total_milliseconds<=5000)) as "2sec-5sec" count(eval(total_milliseconds>5000)) as "5sec+"
| timechart span=1d sum(AllTransactions) as "Total" avg(AvgDuration) as AvgDur sum(TXN_1000) sum(1sec-2sec) sum(2sec-5sec) sum(5sec+)

`msgsource` has my API name. The output of the above query is:

_time      | Total | AvgDur       | sum(TXN_1000) | sum(1sec-2sec) | sum(2sec-5sec) | sum(5sec+)
2025-07-10 | 10000 | 162.12312322 | 1000          | 122            | 1              |

I want the final output to be:

_time      | API          | Total | AvgDur       | sum(TXN_1000) | sum(1sec-2sec) | sum(2sec-5sec) | sum(5sec+)
2025-07-10 | RetrievePay2 | 10000 | 162.12312322 | 1000          | 122            | 1              |
2025-07-10 | RetrievePay5 | 2000  | 62.12131244  | 333           | 56             | 2              |
2025-07-09 | RetrievePay2 | 10000 | 162.12312322 | 1000          | 122            | 1              |
2025-07-09 | RetrievePay5 | 2000  | 62.12131244  | 333           | 56             | 2              |

Any help is appreciated. Thanks!

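timechart can only split by one field, so it cannot give you both the day and the API. A common pattern is bin plus a single stats. A sketch based on your search, assuming msgsource holds the API name (replace the two timechart steps with this):

...
| bin _time span=1d
| stats count as Total
    avg(total_milliseconds) as AvgDur
    count(eval(total_milliseconds<=1000)) as "TXN_1000"
    count(eval(total_milliseconds>1000 AND total_milliseconds<=2000)) as "1sec-2sec"
    count(eval(total_milliseconds>2000 AND total_milliseconds<=5000)) as "2sec-5sec"
    count(eval(total_milliseconds>5000)) as "5sec+"
    by _time, msgsource
| rename msgsource as API
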
Hello, we're trying to access the H3 SIEM Logs and Events Compliance Tool (https://splunkbase.splunk.com/app/7928), but we are encountering download restrictions even with admin credentials. Can someone confirm whether the app is limited by region, organization type, or specific licensing? Thanks in advance!

I have this table, for example:

Field1 | Field2
Value1 | value1 value2 value3

Field2 is multivalue. I want to remove the value that already exists in Field1, so the result looks like this:

Field1 | Field2
Value1 | value2, value3

I didn't see that mvfilter supports this.

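That's right: mvfilter's predicate can only reference the multivalue field itself. mvmap, however, can compare each element of Field2 against Field1. A minimal sketch:

| eval Field2=mvmap(Field2, if(Field2==Field1, null(), Field2))

Note the comparison is case-sensitive; your example mixes Value1 and value1, so you may want lower() on both sides.
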
I am currently facing an issue accessing the Splunk Web interface over HTTPS. When I configure enableSplunkWebSSL = true in web.conf, the Splunk Web service appears to start normally and port 8000 is open. However, users are unable to reach the interface via the public IP using HTTPS. When I change the configuration to enableSplunkWebSSL = false and use HTTP instead, everything works fine: users can successfully access the Splunk Web interface on the public IP and port 8000.

Additional details:
- There is full network connectivity; telnet to the public IP on port 8000 works.
- The issue is reproducible across different browsers and devices.
- The certificate used is the default self-signed certificate provided by Splunk.
- The Splunk Web service log does not show any fatal errors.
- I need to maintain HTTPS access for security compliance.

Could you please assist in identifying the root cause and provide guidance on how to ensure HTTPS access works properly over the public IP?

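Before digging further into Splunk, it may be worth confirming from an outside client whether anything answers TLS on that port at all; two standard checks (replace <public_ip> with yours):

curl -vk https://<public_ip>:8000/
openssl s_client -connect <public_ip>:8000 </dev/null

If the TLS handshake fails or hangs only from outside the network, that points at an intermediate device (a firewall or NAT doing protocol inspection on port 8000) rather than at web.conf.
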
Hi, I would like to request further assistance regarding the following. If I intend to change the domain of my existing all-in-one Splunk Enterprise server, what are the key areas I should be aware of, and which configuration files need to be updated?

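A sketch of the places a host/domain name typically appears on an all-in-one instance; these are the usual settings, but your deployment may differ, so verify with btool:

$SPLUNK_HOME/etc/system/local/server.conf   # serverName = <host>
$SPLUNK_HOME/etc/system/local/inputs.conf   # host = <host> (default host value stamped on events)
$SPLUNK_HOME/etc/system/local/web.conf      # TLS certificate CN/SAN, if SSL is enabled

splunk btool server list --debug | grep -i serverName
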
I am looking for the best approach, in terms of performance, to filtering out certain events in security rules. A security rule normally starts off with quite a large scope, for example:

index=windows source=XmlWinEventLog:Security process_name=ipconfig.exe

Then, in your environment, you often have to filter out benign processes and behaviors. Currently, this is how I am writing filters:

index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe
| search NOT process_command_line="ipconfig /all"
| search NOT process_parent_path=*benign.exe host=BENIGN_HOSTS

This gives the best readability, but I am looking for the best performance. What is the best way to write filters?

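The usual advice is to push the exclusions into the initial search clause so the indexers can discard events before streaming them to the search head, rather than filtering afterwards with | search. A sketch that keeps your exact conditions (how much it helps depends on how heavily the filters rely on search-time fields):

index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe
    NOT process_command_line="ipconfig /all"
    NOT process_parent_path=*benign.exe host=BENIGN_HOSTS
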
Good morning all, I have been trying to figure out how I can create a data input on a heavy forwarder that forwards data to a specific index located on an indexer cluster. I have three indexers organised in a cluster. The indexers and the heavy forwarder are managed by a management node. I have used a Windows universal forwarder to forward events to a particular index on the indexer group (cluster), but I'm struggling to find a way of configuring a similar thing on the Linux-based HF. Basically, what I'm trying to achieve is to configure a syslog port (a custom port, let's say 1514) to receive syslog data from a particular syslog host and forward it to a custom index created on the indexer group (cluster). When adding a port in Data Inputs, I can specify a local index, but not a remote, clustered index. On the HF, in the Data Forwarding section, I can see that all data is forwarded to the indexer cluster. Would anyone know how I can achieve this? Any help would be much appreciated. Kind regards, Mike.

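The index assigned to an input on a heavy forwarder does not need to exist locally: it travels in the event metadata and is honored by whichever clustered indexer receives the forwarded data, as long as the index has been created on the peers. A sketch of inputs.conf on the HF (port, index name, and sourcetype are examples):

# inputs.conf on the heavy forwarder
[udp://1514]
sourcetype = syslog
index = custom_syslog
connection_host = ip

Your existing outputs.conf forwarding to the cluster stays as-is.
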
I'm trying to forward logs and events from Trellix EPO SaaS to Splunk Cloud for monitoring purposes. To do this, I've installed the Trellix EPO SaaS Connector add-on in Splunk. During the setup, the connector requires API credentials to establish communication between Splunk and Trellix. However, even after completing the configuration, I'm not seeing any logs being ingested into Splunk. Additionally, I'm not entirely sure what each field in the configuration tab represents, which makes troubleshooting difficult. So I just configured:
+ IAM URL = Token Endpoint URL in Client Credentials Management
+ API Gateway URL = https://api.manage.trellix.com
I am using a Trellix MVISION trial and a Splunk Cloud trial for testing purposes.

An alert is configured with a scheduled cron trigger using the expression 0 11 * * 1,4, but it is triggering on days other than those specified, such as Tuesday and Wednesday. What could be the problem here? I checked the expression with crontab.guru and it validates correctly. Thanks in advance.

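For reference, this is how the expression parses under standard cron semantics:

# minute hour day-of-month month day-of-week
  0      11   *            *     1,4      -> 11:00 on Monday (1) and Thursday (4)

One thing worth checking (an assumption about your setup): the scheduler evaluates cron in the server's local time zone, so a time-zone offset between the search head and whoever observes the triggers can make runs appear to land on a neighboring day. Also confirm there isn't a second copy of the alert with a different schedule.
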
I need to onboard Cisco Catalyst 8500 router logs into Splunk. When I was looking for add-ons, I found the ones below that seem relevant:
- Cisco Catalyst Add-on for Splunk - preferred, as it is Cisco-built and supported: https://splunkbase.splunk.com/app/7538
- Add-on for Cisco Network Data - https://splunkbase.splunk.com/app/1467 - but it is unsupported.
The instructions in the Cisco-built add-on are not very clear on how to onboard the router logs. Can someone please help?

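Whichever add-on you settle on, syslog onboarding generally comes down to a network input plus the sourcetype that the add-on's parsing expects. A sketch (port, index, and sourcetype are assumptions; check the chosen add-on's documentation for the sourcetype it actually handles):

# inputs.conf on a forwarder receiving the router's syslog
[udp://514]
sourcetype = cisco:ios
index = network
no_appending_timestamp = true
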
Hello, in Splunk I have a query that I use to show data with an xyseries. The output should be displayed as a column chart in Dashboard Studio, but when I save the dashboard I receive this error: "Dashboard Studio only supports Trellis layout for single value visualizations."

Example query:

| makeresults
| eval Displayname="DSP10", Month="2505-06", duration=100
| append [ | makeresults | eval Displayname="DSP10", Month="2505-07", duration=200 ]
| append [ | makeresults | eval Displayname="DSP20", Month="2505-06", duration=50 ]
| append [ | makeresults | eval Displayname="DSP20", Month="2505-07", duration=90 ]
| table Month Displayname duration
| xyseries Month Displayname duration

Are there any other options to display this in Studio in a Trellis layout as a column chart or a line chart? Regards, Harry

HTTP event data is not being received at the index, though the log says "HttpInputDataHandler - handled token name=xyz". How do I debug this? I checked splunkd.log and could not find anything fishy:

07-16-2025 16:14:39.809 +0800 DEBUG HttpInputDataHandler - handled token name=embedded, channel=n/a, source_IP=x.y.z.a, reply=0, events_processed=1, http_input_body_size=10338, parsing_err="", body_chunk="{"action": "queued", "workflow_job": {"id": 46075907488, "run_id": 16313804135, "workflow_name": "linux-ci-pipeline", "head_branch": "dts_changes", "run_url": "https://api.github.com/repos/org/repo-name/actions/runs/16313804135", "run_attempt": 1, "node_id": "CR_kwDOHHhjyM8AAAAKulaNoA", "head_sha": "9fd419d2fcd5fc775c4b61a5392133630d5763b8", "url": "https://api.github.com/repos/org/repo-name/actions/job"
07-16-2025 16:14:39.809 +0800 DEBUG UTF8Processor - Done key received for: source::/infrastructure/da_infra/splunk/tarball/splunk_instance/splunk/var/log/splunk/metrics.log|host::baip052|splunkd|2532
07-16-2025 16:14:39.809 +0800 INFO UTF8Processor - Converting using CHARSET="UTF-8" for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to metrics_log_clone::s
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted metrics_log_clone::s
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - Using truncation length 10000 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to _metrics
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - LB_CHUNK_BREAKER uses truncation length 2000000 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - Using lookbehind 100 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted _metrics
07-16-2025 16:14:39.809 +0800 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 10338 - data_source="http:embedded", data_host="10.244.215.89:8088", data_sourcetype="httpevent"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to group::pipeline
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted group::pipeline
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to name::dev-null
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted name::dev-null
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to processor::nullqueue
07-16-2025 16:14:39.809 +0800 DEBUG UTF8Processor - Done key received for: source::http:embedded|host::10.244.215.89:8088|httpevent|

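Two things stand out in the excerpt. First, the WARN line shows the event being truncated at 10,000 bytes, so the JSON payload is being cut short (raising TRUNCATE in props.conf for the sourcetype would address that, if the payloads are legitimately that large). Second, the DEBUG lines mentioning name::dev-null and processor::nullqueue may just be interleaved metrics.log processing, but if a transform really is routing your HEC sourcetype to the null queue, a transforms.conf stanza of this general shape (stanza name taken from the log; the regex is illustrative) would produce exactly this symptom of "handled but never indexed":

# transforms.conf -- events matching REGEX are silently discarded
[dev-null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

It is worth checking props.conf for a TRANSFORMS-* entry that applies something like this to the httpevent sourcetype.
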
Hello, we are using Splunk Observability, migrating from another solution. We have certain scripts that validate aggregated metrics (namely the average of a p99). Working with Splunk Observability, we are having difficulty finding an API/method that will give us this information as a single metric value for a given timeframe. This is what we want to achieve: from X to Y, give me the average of the p99 for "latency_metric". The expected result should be a single data point, the average p99 of the latency metric over that timeframe, for example 300ms. Any idea what we can use?

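One candidate is the SignalFlow execute API, where you POST a small program with start/stop set to your X and Y. A heavily hedged sketch; the method names and parameters below should be verified against the SignalFlow reference, and latency_metric is taken from your example:

data('latency_metric').percentile(pct=99).mean(over='1h').publish()

With the over window sized to the full X-to-Y range (or the job resolution set to the whole window), this should collapse the p99 into a single averaged value for the timeframe.
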
Hi all, I have a requirement to create a dashboard that fetches the expiry dates of the certificates used on multiple Windows servers. There are load balancers in front of these servers, and the app cannot be accessed via the internet, meaning the app URL cannot be reached from these servers. Is there any utility in Splunk, or a script, through which we can create such a dashboard?

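Since the servers cannot be reached from the internet, one common approach is a scripted input that runs on (or near) the servers, prints each certificate's expiry date, and lets Splunk index and chart the output. A sketch using openssl (host/port are placeholders; on Windows the equivalent can be done in PowerShell against Cert:\LocalMachine\My):

# prints e.g. "notAfter=Oct  1 12:00:00 2025 GMT"
openssl s_client -connect <server>:443 -servername <server> </dev/null 2>/dev/null \
  | openssl x509 -noout -enddate
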
Given this search result:

Company A | Visa | 15
          | MC   | 5
          | AmEx | 2
Company B | Visa | 19
          | MC   | 8
          | AmEx | 3

How can I generate a total row like this?

Total     | Visa | 34
          | MC   | 13
          | AmEx | 5

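appendpipe can compute the totals from the rows already in the result set and append them. A sketch, assuming the rows carry fields named Company, Card, and count (adjust to your actual field names):

... | appendpipe [ stats sum(count) as count by Card | eval Company="Total" ]
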
Hi, difficult question... We have some problems with search performance. Looking at the job inspector, I noticed that within the slow jobs, command.search.kv is taking a lot of time. What is this, and where is this part of the search command executed (indexer or search head)? I notice that Windows event logs in particular account for a lot of this kv time. I created a blank SH with no apps at all, timed some searches against different indexes, and installed some different apps. I noticed when command.search.kv takes more time; sometimes this correlates with the app/event match when looking at props.conf. Turning the right app off makes command.search.kv drop to almost zero, but with Windows events, no go: it stays high. Also, even without the field extractions etc. installed on this blank SH, most fields are extracted. If those fields were extracted at index time, I can imagine no command.search.kv time would be wasted (wild guess). Does the indexer extract these fields at search time (strange, strange), and would this be the command.search.kv? So, is it possible that command.search.kv also runs on the indexers, and does this lookup/field extraction cost most of the time? Thanks in advance, greets, Jari

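As far as I'm aware, command.search.kv is the automatic search-time key/value field extraction step, and in a distributed search it runs on the indexers as part of the remote search pipeline, which would match what you are seeing. One way to test whether auto-kv is the cost for Windows events is to override KV_MODE for the sourcetype (a test sketch; the sourcetype name is an assumption, and XML-rendered events may depend on KV_MODE = xml for their fields, so expect fields to disappear during the test):

# props.conf on the search head -- test only
[XmlWinEventLog]
KV_MODE = none
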
Hi Splunk gurus, I'm working on a script to programmatically check whether logs from a specific host are available in Splunk. For this, I'm using token-based authentication. I've created a role and a user with the necessary permissions, and generated a token for that user. However, when I run the following curl command against my Splunk Cloud instance:

curl -k -H "Authorization: Bearer <your_token>" \
  https://<your_splunk_instance>.splunkcloud.com:443/services/server/info

I receive a 303 status code, and I'm not sure what I might be doing wrong. I've checked multiple forums but haven't been able to find a clear solution. Could you please help me understand what might be causing this and how I can resolve it? Thank you in advance!

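A 303 is a redirect, and curl does not follow redirects unless you pass -L. It may also be worth trying the management port, which for the Splunk REST API is usually 8089 rather than 443. Both points are assumptions about your environment; on Splunk Cloud the management endpoint may additionally require IP allowlisting before it is reachable:

curl -k -L -H "Authorization: Bearer <your_token>" \
  "https://<your_splunk_instance>.splunkcloud.com:8089/services/server/info?output_mode=json"
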
We have been having some strange performance issues with some of our dashboards, and we would like some advice on how to troubleshoot and fix them. Despite the underlying searches being extremely fast, results sometimes take upwards of 30 seconds to be displayed in the corresponding dashboard panels.

Infrastructure and dashboard details

We are running a distributed on-prem Splunk environment with one search head and a cluster of three indexers. All instances are on version 9.2.2, although we have been able to replicate these issues with a 9.4.2 search head as well. We have six core dashboards, ranging from simple and static to considerably dynamic and complex. About 95% of the searches in this app's dashboards are metric-based and use mstats. Each individual search is quite fast, with most searches running in under 0.5s, even in the presence of joins/appends. Most of these searches have a 10s refresh time by default.

Problem

We have been facing a recurring issue where certain panels sometimes do not load for several seconds (usually 10-30 seconds). This tends to happen in some of the more complex dashboards, particularly after drilldowns/input interactions, which often leads to "Waiting for data" messages displayed inside the panels. One of two things tends to happen:
1. The underlying search jobs run successfully, but the panels do not display data until the next refresh, which causes the search to re-run; panels behave as normal afterwards.
2. The pending searches start executing but do not fetch any results for several seconds, which can lead to the same search taking variable amounts of time to execute. We have seen the same search take significantly different amounts of time when run just 27s apart (screenshots not reproduced here).

Whenever a search takes long to run, the component that takes the longest is, by far, dispatch.stream.remote.<one_of_the_indexers>, which, to the best of our knowledge, represents the time the search head spends waiting for data streamed back from an indexer during a distributed search.

We have run load tests consisting of opening our dashboards in several tabs simultaneously for prolonged periods while monitoring system metrics such as CPU, network, and memory. We were not able to detect any hardware bottlenecks, only a modest increase in CPU usage and load average on the search head and indexers, which is expected. We have also upgraded the hardware the search head runs on (96 cores, 512 GB RAM), and despite the noticeable performance increase, the problem still occurs occasionally.

We would greatly appreciate the community's assistance in helping us troubleshoot these issues.

Hi everyone, and thanks in advance. I'm trying to collate all our SOCKS traffic on our network over the last 90 days. Our IPs rotate, and as a result I can't run this search for all time; I have to run it one day at a time across the 90 days. Which is where I got to here:

index=*proxy* SOCKS earliest=-1d latest=-0d
| eval destination=coalesce(dest, dest_port), userid=coalesce(user, username)
| rex field=url mode=sed "s/^SOCKS:\/\/|:\d+$//g"
| eval network=case(match(src_ip,"<REDACTED>"),"user",1=1,"server")
| stats values(domain) as Domain values(userid) as Users values(destination) as Destinations by url, src_ip, network
| convert ctime(First_Seen) ctime(Last_Seen)
| sort -Event_Count
| join type=left max=0 src_ip
    [ search index=triangulate earliest=-1d latest=-0d
    | stats count by ip,username
    | rename username AS userid
    | rename ip as src_ip ]
| join type=left max=0 src_ip
    [ search index=windows_events EventID=4624 NOT src_ip="-" NOT user="*$" earliest=-1d latest=-0d
    | stats count by IpAddress, user
    | rename IpAddress as src_ip
    | rename user as win_userid
    | fields - count ]
| eval userid=coalesce(userid, win_userid)
| join type=left max=0 userid
    [ search index="active_directory" earliest=-1d latest=-0d
    | stats count by username,fullname,title,division,mail
    | rename username as userid ]

Then a colleague suggested I do it slightly differently and run it over the 90 days but link it together, which is where we got to here:

index=*proxy* SOCKS
| eval destination=coalesce(dest, dest_port)
| rex field=url mode=sed "s/^SOCKS:\/\/|:\d+$//g"
| eval network=case(match(src_ip,"<Redacted>"),"user",1=1,"server")
| eval Proxy_day = strftime(_time, "%d-%m-%y")
| join type=left max=0 src_ip
    [ search index=windows_events EventID=4624 NOT src_ip="-" NOT user="*$"
    | stats count by IpAddress, user
    | rename IpAddress as src_ip
    | rename user as win_userid
    | fields - count ]
| eval userid=coalesce(userid, win_userid)
| join type=left max=0 userid
    [ search index="active_directory"
    | stats count by username, fullname, title, division, mail
    | rename username as userid ]
| rename src_ip as "Source IP"
| stats values(mail) as "Email Address" values(username) as "User ID" values(destination) as Destination values(network) as Network values(Proxy_day) as Day values(url) as URL by "Source IP"

However, the problem I'm running into now is that in the data produced there can be hundreds of URLs/emails/days associated with one source IP, which makes the data unactionable and actually starts to break the CSV when exported. Would anyone be able to help? Ideally I'd just like the top, say, 5 results, but I've had no luck with that or a few other methods I've tried. Even SplunkGPT is failing me. Is it even possible?
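If capping each multivalue field at its first few entries is acceptable, mvindex with a range can be applied after the final stats. A sketch keeping your field names; 5 is arbitrary:

| eval URL=mvindex(URL, 0, 4), "Email Address"=mvindex('Email Address', 0, 4), Day=mvindex(Day, 0, 4)

Note that values() returns de-duplicated values in lexicographic order, so "first five" means alphabetical rather than most frequent; a true top 5 by count would need a separate stats count by src_ip, url followed by a sort and re-aggregation.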