All Topics

Hello everyone, I am in the process of installing a Java agent on Linux (RHEL 8) for webMethods. The documentation makes it look straightforward, but there is a discrepancy between the AppDynamics documentation and the webMethods one.

The AppDynamics docs say (quoting from "webMethods Startup Settings"): For webMethods servers that use the Tanuki Java service wrapper for start-up, you need to configure the agent in the wrapper.conf file. See Tanuki Service Wrapper Settings.

Yet the webMethods documentation ("My webMethods Server Webhelp") says: There are some parameters that do not relate to My webMethods Server but to the JVM itself. You set custom JVM parameters in the custom_wrapper.conf file for My webMethods Server, using the following syntax: wrapper.java.additional.n=parameter

Which configuration method is correct, and if both are correct, which one is recommended? Could the AppDynamics documentation also be updated to include the default paths/locations of the .conf files in webMethods?
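For context, both documents describe the same Tanuki mechanism (wrapper.java.additional.n entries); they differ only in which wrapper file the property goes into. A minimal sketch of what that could look like, where the install path, property index numbers, and system-property names are assumptions for illustration, not taken from either doc:

    # custom_wrapper.conf (or wrapper.conf) -- paths and index numbers are assumptions
    wrapper.java.additional.200=-javaagent:/opt/appdynamics/javaagent/javaagent.jar
    wrapper.java.additional.201=-Dappdynamics.agent.applicationName=MyWebMethodsApp
    wrapper.java.additional.202=-Dappdynamics.agent.tierName=MyTier

The index numbers just need to be unique among the existing wrapper.java.additional.* entries in the file.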
We are currently using the Config Explorer app to update configurations across our deployments. My question is how to run a CLI command in Config Explorer: I need to run a CLI command on the deployer to deploy apps across the SH cluster members, and we don't have backend server access at the moment. Is it possible to run CLI commands through Config Explorer, or do we definitely need backend server access for that?
Hi, I have JSON data structured as follows:

    {
      "payload": {
        "status": "ok"    <- or "degraded"
      }
    }

I'm trying to use the stats command to count the "ok" and "degraded" events separately. I am using the following query:

    index=whatever
    | eval is_ok=if(payload.status=="ok", 1, 0)
    | stats count as total, count(is_ok) as ok_count

I have tried passing it through spath, using "=" in the if condition, and several other approaches. What always happens is that both counts contain all events, despite there being different numbers of them. Please help!
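Two things in that query are worth noting: in eval, a field name containing dots must be wrapped in single quotes ('payload.status'), otherwise it is treated as a literal and the comparison never matches; and count(is_ok) counts non-null values, so it counts every event, because is_ok is always 0 or 1. A minimal sketch of one way this could work, assuming the JSON is auto-extracted into a payload.status field:

    index=whatever
    | stats count AS total,
            count(eval('payload.status'=="ok")) AS ok_count,
            count(eval('payload.status'=="degraded")) AS degraded_count

If the field is not auto-extracted, adding | spath path=payload.status output=status before the stats (and counting on status) would make it available.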
Is it possible to execute a script through a button click and display the script's output on a Splunk dashboard? Has anyone implemented something similar before? Any guidance would be greatly appreciated, as I am currently stuck on this. Thank you!
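One common pattern is to wrap the script as a custom search command, which a dashboard panel can then run like any other search. A minimal sketch, assuming a hypothetical script named runstatus.py placed in an app's bin directory (the command and file names are illustrative, not from the original post):

    # commands.conf (in your app's local directory)
    [runstatus]
    filename = runstatus.py
    chunked = true

A panel can then run | runstatus, and its visibility can be tied to a token set by an input, which gives button-like behavior.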
Hello everyone! I would like to ask about the Splunk Heavy Forwarder Splunk-side config: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/ With those settings, it sends the metadata key-value pairs in the key::value format. Is it possible to reconfigure it to send the metadata with some other key-value separator instead of "::"? If yes, how exactly?
I'm trying to create a simple status page visualization that mimics the style used by Atlassian Statuspage; you can see it on the status pages for Discord and Wiz. Currently I have a timechart where status=1 means up and status=0 means down. When the app is down, there is simply no bar on the graph. How do I "force" a value so the bar appears, and then color each bar based on the status value? I think I'm missing something really simple and am hoping someone can point me in the right direction.

Current SPL:

    index=main app="myApp"
    | eval status=if(isnull(status), "0", status)
    | timechart span=1m max(status) by app

Current XML:

    <dashboard version="1.1" theme="light">
      <label>Application Status</label>
      <row>
        <panel>
          <chart>
            <search>
              <query>index=main app="myApp" | eval status=if(isnull(status), "0", status) | timechart span=1m max(status) by app</query>
              <earliest>-60m@m</earliest>
              <latest>now</latest>
              <sampleRatio>1</sampleRatio>
            </search>
            <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
            <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
            <option name="charting.axisLabelsY.majorUnit">1</option>
            <option name="charting.axisTitleX.visibility">collapsed</option>
            <option name="charting.axisTitleY.visibility">collapsed</option>
            <option name="charting.axisTitleY2.visibility">visible</option>
            <option name="charting.axisX.abbreviation">none</option>
            <option name="charting.axisX.scale">linear</option>
            <option name="charting.axisY.abbreviation">none</option>
            <option name="charting.axisY.maximumNumber">1</option>
            <option name="charting.axisY.minimumNumber">0</option>
            <option name="charting.axisY.scale">linear</option>
            <option name="charting.axisY2.abbreviation">none</option>
            <option name="charting.axisY2.enabled">0</option>
            <option name="charting.axisY2.scale">inherit</option>
            <option name="charting.chart">column</option>
            <option name="charting.chart.bubbleMaximumSize">50</option>
            <option name="charting.chart.bubbleMinimumSize">10</option>
            <option name="charting.chart.bubbleSizeBy">area</option>
            <option name="charting.chart.columnSpacing">0</option>
            <option name="charting.chart.nullValueMode">gaps</option>
            <option name="charting.chart.showDataLabels">none</option>
            <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
            <option name="charting.chart.stackMode">default</option>
            <option name="charting.chart.style">shiny</option>
            <option name="charting.drilldown">none</option>
            <option name="charting.seriesColors">[0x459240]</option>
            <option name="charting.layout.splitSeries">0</option>
            <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
            <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
            <option name="charting.legend.mode">standard</option>
            <option name="charting.legend.placement">none</option>
            <option name="charting.lineWidth">2</option>
            <option name="trellis.enabled">0</option>
            <option name="trellis.scales.shared">1</option>
            <option name="trellis.size">medium</option>
          </chart>
        </panel>
      </row>
    </dashboard>
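A sketch of one approach: the gap appears because time buckets with no events produce null before the eval runs. Filling nulls after the timechart and then splitting the value into an "up" series and a "down" series lets each series get its own color (the series names and the red color value below are my choices, not from the post):

    index=main app="myApp"
    | timechart span=1m max(status) AS status
    | fillnull value=0 status
    | eval up=if(status==1, 1, 0), down=if(status==0, 1, 0)
    | fields _time up down

with chart options along the lines of:

    <option name="charting.chart.stackMode">stacked</option>
    <option name="charting.fieldColors">{"up": 0x459240, "down": 0xB22222}</option>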
Hi, I have two indexes - "cart" and "purchased" . In "cart" index there is a field "cart_id" and in "purchased" there is a field "pur_id".  If  payment will be successfully for a cart then the card_id values will be stored as a pur_id in the "purchased" index. cart purchased  cart_id 123 payment received  pur_id   123 cart_id 456   no payment  no record for 456 Now I want to display the percentage of cart for which payment is done. I wonder if anyone can help here.   Thank you so much 
Is it possible to create a button in a Splunk dashboard that, when clicked, runs a script to export logs from Zabbix and display them on the dashboard? The dashboard should only be visible after the button is clicked. Has anyone implemented something like this before? Please help, as I’m really stuck on this!
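Simple XML has no native button, but a link input that sets a token can act like one, and the depends attribute hides panels until that token exists. A sketch, assuming the Zabbix export has already been wrapped as a hypothetical custom search command called zabbixlogs (all names here are illustrative):

    <input type="link" token="show">
      <label></label>
      <choice value="go">Load Zabbix logs</choice>
    </input>
    <row depends="$show$">
      <panel>
        <table>
          <search>
            <query>| zabbixlogs</query>
          </search>
        </table>
      </panel>
    </row>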
We have a deployment server which deploys apps (containing configs) to a search head cluster (3 SHs). I am not sure whether the DS distributes apps directly to the SH members, or whether they go to the deployer and the deployer then distributes them to the SH members. Please clarify.

We created a role in an app on the DS which restricts access to a specific index. When we try to push it, that role is not reflected on the SH members. When we check the deployer, the app is present under shcluster/apps and the role is updated there, but it does not show up in the SH UI. What is the problem?

Do we need to manually push the config from the deployer to the SH members every time? We have deployer_push_mode=merge_to_default configured on the deployer; does that mean distribution is automated? If not, how do we push config from the deployer to the SH members through Splunk Web? We don't have access to the backend server to run CLI commands.
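For reference, the deployer does not push bundles automatically; a push is normally triggered by running apply shcluster-bundle on the deployer (the target host and credentials below are placeholders):

    splunk apply shcluster-bundle -target https://<sh_member>:8089 -auth admin:<password>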
I've got to be close, but I'm having issues trying to figure out how to get a distinct count of user sessions to show up in a bar chart with a trendline. I'd like to see a distinct count of users for the last year by month, with a trendline added.

    <My Search>
    | stats dc(userSesnId) as moving_avg
    | timechart span=30d dc(userSesnId) as count_of_user_sessions
    | trendline sma4(moving_avg) as "Moving Average"
    | rename count_of_user_sessions AS "Distinct Count of User Sessions"
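For what it's worth, the initial stats consumes all events into a single row, so the timechart after it has nothing to chart, and trendline can only reference a field that exists at that point in the pipeline. A sketch of a reordered version, assuming userSesnId is the session field:

    <My Search>
    | timechart span=1mon dc(userSesnId) AS dc_sessions
    | trendline sma4(dc_sessions) AS "Moving Average"
    | rename dc_sessions AS "Distinct Count of User Sessions"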
Working on a dashboard in Dashboard Studio to display data in two different tables using a single dropdown. The issue I have is that all my data is keyed by the "username" field, but I want the dropdown to display "Lastname, Firstname" for better readability.

The first table pulls records from a lookup table with user demographics and such. The second table pulls the respective Windows log data tracking various user activity.

In my dropdown, I am currently using the lookup table and an eval to join "user_last" and "user_first" into a "fullname" variable, displayed as "Lastname, Firstname". I then use "fullname" as the pass-on token for my first table. However, for my second table I need "username" as the token, because the data I am querying only has "username" in the logs, not the user's first or last name as in my first table.

My question: can I set my dropdown to display "user_last, user_first" but set the token value to "username"? Or can I assign multiple tokens from an SPL query in Dashboard Studio to use in the respective tables? Or can I do both, for the sake of knowledge? Here is what I am working with; I appreciate any assistance.

Lookup table:

    Name:   system_users.csv
    Fields: username, name_last, name_first....

Dashboard dropdown field values:

    Data source name: lookup_users

SPL query:

    | inputlookup bpn_system_users.csv
    | eval fullname= name_last.", ".name_first
    | table fullname
    | sort fullname

Source code:

    {
      "type": "ds.search",
      "options": {
        "queryParameters": {
          "earliest": "$SearchTimeLine.earliest$",
          "latest": "$SearchTimeLine.latest$"
        },
        "query": " | inputlookup system_users.csv\n | eval fullname= name_last.\", \".name_first\n | table fullname\n | sort fullname"
      },
      "name": "lookup_users"
    }
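Dashboard Studio dropdowns can take their label and value from different fields of the data source, so one sketch (assuming the same lookup) is to have the search return both columns and map the input's Label to fullname and its Value to username in the input configuration:

    | inputlookup system_users.csv
    | eval fullname = name_last.", ".name_first
    | table fullname username
    | sort fullname

The token then carries username for both tables; since the lookup itself contains username, the first table can filter on it just as well as on fullname.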
Hello Everyone, I am hoping someone can help me out, as I have exhausted everything I can think of and cannot get anything to work. Essentially, I am looking to pull results to get a total based on an ID. The issue is that each ID can have between 1 and 4 events associated with it, and these events relate to the status. I only want results for IDs that are Open or Escalated, but my search is pulling all of the events, including those for IDs whose status has since changed to Closed or another status. I want to exclude all events for IDs whose status has changed to anything other than Open or Escalated. The other trouble is that this status event occurs in the metadata of the whole transaction. I have the majority of my query built out; where I am struggling is removing the initial Open and Escalated events for the alerts whose status was later changed. The field the status changes in is under "logs", as "logs{}.action".
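A sketch of one approach: determine the most recent status per ID first, then keep only the IDs still in the wanted states. The field names here come from the post, but the assumption that latest() captures the current status may need adjusting to the actual data; if logs{}.action is multivalued per event, take its last element with mvindex before the stats:

    index=... sourcetype=...
    | stats latest("logs{}.action") AS last_status BY id
    | where last_status IN ("Open", "Escalated")

The surviving IDs could then be fed back to the raw events (for example via a subsearch) if the full event detail is needed.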
Hello Splunkers, I need some help understanding the minimum specs required for a Splunk Enterprise installation used purely as a heavy forwarder, where it will only receive logs from one source over syslog and forward them to the indexers. Can I just use 2 CPUs, 8 GB RAM, and storage based on an estimate of the log file sizes? I'm asking because the official guide says the minimum is 12 GB RAM and 4 CPU cores. Please advise if someone can. Thanking you in advance, Moh....
I am using StatsD to send metrics to a receiver, but I am encountering an issue where timing metrics (|ms) are not being captured, even though counter metrics (|c) work fine in Splunk Observability Cloud.

Example of a working metric. The following command works and is processed correctly by the StatsD receiver:

    echo "test_Latency:42|c|#key:val" | nc -u -w1 localhost 8127

Example of a non-working metric. This command does not result in any output or processing:

    echo "test_Latency:0.082231|ms" | nc -u -w1 localhost 8127

Current StatsD configuration. Here is the configuration I am using for the receiver, following the doc at https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver:

    receivers:
      statsd:
        endpoint: "localhost:8127"
        aggregation_interval: 30s
        enable_metric_type: true
        is_monotonic_counter: false
        timer_histogram_mapping:
          - statsd_type: "histogram"
            observer_type: "gauge"
          - statsd_type: "timing"
            observer_type: "histogram"
            histogram:
              max_size: 100
          - statsd_type: "distribution"
            observer_type: "summary"
            summary:
              percentiles: [0, 10, 50, 90, 95, 100]

Why are timing metrics (|ms) not being captured while counters (|c) are working? Can you please help check this? The statsdreceiver GitHub document says it supports timer-related metrics: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/statsdreceiver/README.md#timer Any help or suggestions would be greatly appreciated. Thank you.
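One diagnostic worth trying: the README also documents "gauge" and "summary" as observer types for timing, and backends differ in how they accept the histogram representation, so mapping timing to a gauge first would confirm whether the datapoints are arriving at all (a sketch for testing, not a recommended final config):

    timer_histogram_mapping:
      - statsd_type: "timing"
        observer_type: "gauge"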
After upgrading Splunk to 9.4.0 and Splunk DB Connect to 3.18.1, all inputs show the error: "Checkpoint not found. The input in rising mode is expected to contain a checkpoint." None of them are pulling in data. Looking over the logs, I see:

    2025-01-10 12:16:00.298 +0000 Trace-Id=1d3654ac-86c1-445f-97c6-6919b3f6eb8c [Scheduled-Job-Executor-116] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
    com.splunk.dbx.server.exception.ReadCheckpointFailException: Error(s) occur when reading checkpoint.
        at com.splunk.dbx.server.dbinput.task.DbInputCheckpointManager.load(DbInputCheckpointManager.java:71)
        at com.splunk.dbx.server.dbinput.task.DbInputTask.loadCheckpoint(DbInputTask.java:133)
        at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.executeQuery(DbInputRecordReader.java:82)
        at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:55)
        at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140)
        at org.easybatch.core.job.BatchJob.call(BatchJob.java:97)
        at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.runTask(InputServiceImpl.java:321)
        at com.splunk.dbx.server.api.resource.InputResource.lambda$runInput$1(InputResource.java:183)
        at com.splunk.dbx.logging.MdcTaskDecorator.run(MdcTaskDecorator.java:23)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)

I'm unable to edit the config and update the checkpoint value. Even though Execute Query works, when I try to save the update it gives: "Error(s) occur when reading checkpoint." Has anybody else successfully upgraded to 9.4.0 and 3.18.1?
Trying to check and set values conditionally, but the query below is giving an error.

Error:

    Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr]).
    The search job has failed due to an error. You may be able view the job in the

Query:

    index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
    | eval ssoType = if(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType == "HYBRID", message.incomingRequest.inboundSsoType)
    | stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"
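Two likely issues, for what it's worth: if() takes exactly three arguments, so the four-argument form here is what case() is for; and field names containing dots must be single-quoted in eval, or they are read as literals. A sketch of a corrected eval under those assumptions:

    | eval ssoType = case(
        'message.incomingRequest.inboundSsoType' == "5-KEY",  'message.incomingRequest.deepLink',
        'message.incomingRequest.inboundSsoType' == "HYBRID", 'message.incomingRequest.inboundSsoType'
      )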
01-09-2025 17:01:37.725 -0500 WARN  TcpOutputProc [4940 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=sbdcrib.splunkcloud.com inside output group default-autolb-group from host_src=CRBCITDHCP-01 has been blocked for blocked_seconds=1800. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-09-2025 17:30:30.169 -0500 INFO  PeriodicHealthReporter - feature="TCPOutAutoLB-0" color=red indicator="s2s_connections" due_to_threshold_value=70 measured_value=100 reason="More than 70% of forwarding destinations have failed.  Ensure your hosts and ports in outputs.conf are correct.  Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct." node_type=indicator node_path=splunkd.data_forwarding.splunk-2-splunk_forwarding.tcpoutautolb-0.s2s_connections
What are some reasons why a Linux UF would get quarantined by the deployment manager on port 8089?
I have two log messages, "%ROUTING-LDP-5-NSR_SYNC_START" and "%ROUTING-LDP-5-NBR_CHANGE", which usually accompany each other whenever a peer flaps, so "%ROUTING-LDP-5-NBR_CHANGE" is followed by "%ROUTING-LDP-5-NSR_SYNC_START" almost every time. I am trying to find the cases where a device produces only "%ROUTING-LDP-5-NSR_SYNC_START" without "%ROUTING-LDP-5-NBR_CHANGE". I am using transaction but have not been able to figure it out:

    index=test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE")
    | transaction maxspan=5m startswith="%ROUTING-LDP-5-NSR_SYNC_START" endswith="%ROUTING-LDP-5-NBR_CHANGE"
    | search eventcount=1 startswith="%ROUTING-LDP-5-NSR_SYNC_START"
    | stats count by host
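A transaction-free sketch that may be simpler: classify each event, then keep host/time buckets that saw a sync-start but no neighbor change (the searchmatch() classification and the 5-minute bucketing are my additions, mirroring the maxspan above):

    index=test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE")
    | bin _time span=5m
    | stats count(eval(searchmatch("%ROUTING-LDP-5-NSR_SYNC_START"))) AS sync_starts,
            count(eval(searchmatch("%ROUTING-LDP-5-NBR_CHANGE"))) AS nbr_changes
            BY host, _time
    | where sync_starts > 0 AND nbr_changes == 0
    | stats sum(sync_starts) AS lone_sync_starts BY host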