All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@finchgdx Hello, do you have that usage field in your tutorial data or CSV?
Hi, I have a problem with log parsing in the Splunk Add-on for Check Point Log Exporter. I have installed it on both the SH and the HF, but the logs from Check Point are not parsing properly. I tried changing REGEX to ([a-zA-Z0-9_-]+)[:=]+([^|]+) and changing DEPTH_LIMIT to 200000 as the troubleshooting guide suggests, but it still does not work. Can you give me some advice? Thank you so much!
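For reference, the two settings mentioned are transforms.conf settings, and a change like this is usually made in a local overlay of the add-on so it survives upgrades. A sketch only; the stanza name below is a placeholder, not the add-on's actual stanza name:

```ini
# $SPLUNK_HOME/etc/apps/<the add-on>/local/transforms.conf
# Placeholder stanza name -- override the add-on's actual extraction stanza
[example_kv_extraction]
REGEX = ([a-zA-Z0-9_-]+)[:=]+([^|]+)
FORMAT = $1::$2
DEPTH_LIMIT = 200000
```

Note that edits made only on the SH will not change what a HF has already parsed at index time, so where the change lands matters.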
Hello, I am trying to ingest snmptrapd logs with file monitoring on a single Splunk instance in my environment. Here is the log format (three sample traps, each starting with <UNKNOWN>):

<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDiskFreeSpaceNotification CYBER-ARK-MIB::osDiskDrive "C:\\" CYBER-ARK-MIB::osDiskPercentageFreeSpace "71.61" CYBER-ARK-MIB::osDiskFreeSpace "58221" CYBER-ARK-MIB::osDiskTrapState "Alert"
<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13524732 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3660968 CYBER-ARK-MIB::osMemoryTrapState "Alert"
<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osSwapMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13524732 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3660968 CYBER-ARK-MIB::osMemoryTrapState "Alert"

I tried to use "<UNKNOWN>" as the line breaker, but it does not work reliably and events break in a strange way (sometimes it works, most of the time it doesn't). Please find the props.conf settings below:

[cyberark:snmplogs]
LINE_BREAKER = \<UNKNOWN\>
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
pulldown_type = 1
BREAK_ONLY_BEFORE = \<UNKNOWN\>
MUST_NOT_BREAK_BEFORE = \<UNKNOWN\>
disabled = false
LINE_BREAKER_LOOKBEHIND = 2000

Line-breaking result in Splunk:
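One common pitfall here: LINE_BREAKER requires a capture group marking the text to be discarded between events; a pattern without one is not valid. A minimal sketch of an alternative stanza, assuming the stanza name from the post and that each event starts at <UNKNOWN> (the TIME_* lines are suggestions based on the sample timestamps, not tested against this data):

```ini
[cyberark:snmplogs]
# Capture group 1 (the newline run) is consumed; a new event starts at <UNKNOWN>
LINE_BREAKER = ([\r\n]+)(?=<UNKNOWN>)
# With a working LINE_BREAKER, line merging should be off;
# BREAK_ONLY_BEFORE / MUST_NOT_BREAK_BEFORE only apply when it is on
SHOULD_LINEMERGE = false
TIME_PREFIX = <UNKNOWN>\s-\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S
NO_BINARY_CHECK = true
```

Mixing LINE_BREAKER with BREAK_ONLY_BEFORE, as in the original stanza, tends to produce exactly the intermittent behavior described, since the two mechanisms belong to different line-merging modes.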
I am watching the training for the Core User certification path on STEP, and they are using an index that has the usage field. I have uploaded the tutorial data from the community site, but it doesn't have the usage field. I don't know how to rectify this, and I cannot replicate the activity in the learning material. Does anyone have a suggestion? EDIT - I just made up my own CSV and imported that data. ggwp
Your application logs are stored in Splunk Cloud. Splunk Observability Cloud does not store any application logs; it uses the Log Observer Connect integration to read them from Splunk Cloud and display them.
Two problems with the search.

1. In an evaluation function, the deep path payload.status needs to be single-quoted (i.e., 'payload.status') to dereference its value. Otherwise the bare word payload.status evaluates to null.
2. If you want to use count(is_ok), you should make the "other" value disappear, i.e., make it null, not a "real" value of 0. If you think 0 is a better representation for "other", use sum as @ITWhisperer suggests.

In other words, on this mock event sequence:

_raw                                        payload.status  seq
{"seq":1,"payload":{"status":"ok"}}         ok              1
{"seq":2,"payload":{"status":"degraded"}}   degraded        2
{"seq":3,"payload":{"status":"ok"}}         ok              3

either

| eval is_ok=if('payload.status'=="ok", 1, null())
| stats count as total, count(is_ok) as ok_count

or

| eval is_ok=if('payload.status'=="ok", 1, 0)
| stats count as total, sum(is_ok) as ok_count

or even

| eval is_ok=if('payload.status'=="ok", 1, 0)
| stats count as total, count(eval(is_ok = 1)) as ok_count

should give you

total  ok_count
3      2

This is an emulation you can play with and compare with real data:

| makeresults format=json data="[{\"seq\": 1, \"payload\": {\"status\": \"ok\"}}, {\"seq\": 2, \"payload\": {\"status\": \"degraded\"}}, {\"seq\": 3, \"payload\": {\"status\": \"ok\"}}]"
| fields - payload, seq, _time
| spath ``` data emulation above ```
Worked like a charm. This line seems to make all the difference: | spath path=payload.status output=status
It still fails: the if(payload.status==...) appears to always evaluate to false, despite there being both "ok" and "degraded" events, so the sum equals the count of all events.
@splunklearner The Config Explorer app itself does not directly support running CLI commands; it is primarily designed for managing and validating configurations within Splunk's web UI. This is an older link, and there isn't any official Splunk documentation for this specific topic, but you can give it a try: https://community.splunk.com/t5/Deployment-Architecture/What-is-the-curl-command-used-on-the-deployer-to-apply-shcluster/td-p/202735 If this reply helps you, Karma would be appreciated.
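For context, the CLI command the linked thread is about is the one below; it is normally run from the deployer's host shell, so it does imply backend access (hostname and credentials are placeholders):

```
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

The -target can be any one SH cluster member; the bundle is then replicated to the rest of the cluster.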
Hello everyone, I am in the process of installing a Java agent on Linux (RHEL 8) for webMethods. It's pretty straightforward in the documentation. However, there is a difference between the AppDynamics documentation and the webMethods one. The AppDynamics docs say (quoting from "webMethods Startup Settings"): For webMethods servers that use the Tanuki Java service wrapper for start-up, you need to configure the agent in the wrapper.conf file. See Tanuki Service Wrapper Settings. Yet the webMethods documentation (My webMethods Server Webhelp) says: There are some parameters that do not relate to My webMethods Server but to the JVM itself. You set custom JVM parameters in the custom_wrapper.conf file for My webMethods Server, using the following syntax: wrapper.java.additional.n=parameter Which configuration method is correct, and if both are correct, which one is recommended? Could the AppDynamics documentation also be updated to include the default paths/locations of the .conf files in webMethods?
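Both documents describe the same underlying mechanism: the agent is registered as an extra JVM argument in the Tanuki wrapper configuration, whichever wrapper file the product actually reads. A sketch of what such an entry might look like; the paths, property value, and index n are placeholders, not defaults:

```ini
# custom_wrapper.conf (or wrapper.conf) -- pick an unused additional-property index n
wrapper.java.additional.40=-javaagent:/opt/appdynamics/javaagent/javaagent.jar
wrapper.java.additional.41=-Dappdynamics.agent.nodeName=mws-node-1
```

The indexes only need to be unique within the file; the wrapper passes each wrapper.java.additional.n value to the JVM as a command-line argument.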
Try using sum rather than count:

index=whatever
| eval is_ok=if(payload.status=="ok", 1, 0)
| stats count as total, sum(is_ok) as ok_count
We are currently using the Config Explorer app to update configurations across our deployments. My question is: how can I run a CLI command in Config Explorer? I need to run a CLI command on the deployer to deploy apps across the SH cluster members, and we don't have backend server access at the moment. Is it possible to run a CLI command through Config Explorer, or do we definitely need backend server access for that?
@AFKunc Kindly verify that the JSON data has been onboarded correctly. I tested it using the same data you provided. Could you confirm this is the data you were expecting? I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply. Thanks.
Hi bud, have you tried creating a <p> on the dashboard and telling the script to write into it?

<dashboard version="1.1" script="my_js.js">
  <label>My JS Output</label>
  <row>
    <panel>
      <html>
        <input type="button" value="Submit" id="submitButton" />
        <h1>This is my output</h1>
        <p id="my_script_output"></p>
      </html>
    </panel>
  </row>
</dashboard>

Then your JS:

document.getElementById("submitButton").addEventListener("click", output);

function output() {
  let script_output = "This is my Output.";
  document.getElementById("my_script_output").innerHTML = script_output;
}

It does depend on how complex you want it to be... Let me know how you get on with this. Regards
@rohithvr19 Please check my answer below: https://community.splunk.com/t5/Dashboards-Visualizations/Can-I-call-a-Python-script-from-a-dashboard-and-output-its/m-p/398088 Thanks, KV. An upvote would be appreciated if any of my replies helps you solve the problem or gain knowledge.
Currently there seem to be too many open questions about how your environment and its deployments are set up. For that reason you should sit down with someone who manages this environment and go through and document the correct way to deploy changes to the different servers. I suppose they already have some automation or defined processes for how this is done. Of course we could offer some technical tips and tricks for making an individual change, but I'm quite sure that would not be the correct, long-lived solution to your problem!
Hello @isoutamo, we are trying to create a role (using authorise.conf) in a DS app under etc/deployment-apps, and it gets pushed to the deployer under shcluster/apps. From there, how do I push it to the search head cluster members? There are 3 SHs. We don't have access to the backend, so I need to achieve this from Splunk Web. When I check the Roles section on a SH, the created role is not showing, but on the deployer under shcluster/apps authorise.conf is updated when I push it from the DS. Please help me with this.
Hi, I have JSON data structured as follows:

{
  "payload": {
    "status": "ok"  # or "degraded"
  }
}

I'm trying to use the stats command to count the "ok" and "degraded" events separately. I am using the following query:

index=whatever
| eval is_ok=if(payload.status=="ok", 1, 0)
| stats count as total, count(is_ok) as ok_count

I have tried passing it through spath, using "=" in the if condition, and several other changes. What always happens is that both counts contain all events, despite there being different numbers of them. Please help!
Is it possible to execute a script through a button click and display the script's output on a Splunk dashboard? Has anyone implemented something similar before? Any guidance would be greatly appreciated, as I am currently stuck on this. Thank you!
Hello everyone! I would like to ask about the Splunk Heavy Forwarder Splunk-side config: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/ With those settings it will send the metadata in the format of key::value. Is it possible to reconfigure it to send metadata key-value pairs with some other key-value separator instead of "::"? If yes, how exactly?