All Topics

I am running into an issue where I am attempting to import data from a SQL Server database. One of the columns, entitled message, contains a message payload with the character '{' in it. When Splunk processes the data from DB Connect, it inappropriately truncates the message when it sees the '{' bracket in the document. Are there solutions for overriding this line-breaking behavior? We currently have to go into the raw data and extract the information using regex to preserve it, and we would rather store this message as a Splunk key-value pair.
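A minimal props.conf sketch of one common approach, applied on the indexer or heavy forwarder: disable line merging and automatic structured interpretation so the '{' is treated as ordinary text. The sourcetype name mydb:messages is a placeholder for whatever your DB Connect input assigns:

# props.conf (sourcetype name is hypothetical)
[mydb:messages]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# search-time: don't auto-interpret the payload as JSON
KV_MODE = none

If the truncation happens at index time, it is also worth checking whether the input or sourcetype sets INDEXED_EXTRACTIONS = json; removing that setting is often the fix when a partial JSON-looking payload gets cut off.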
I'm trying to do a condition-based action on my chart. I want to create a situation where, when a legend entry is clicked, form.Tail changes to take click.name2 (which is a number like 120, 144, 931, etc.), and when a column inside the chart is clicked, a custom search opens (in a new window if possible; if not, the same window is fine), based on checking whether click.name is a number (and it shouldn't be, as it should be the name of the source /mnt/support_engineering...).

This is my current chart:

<chart>
  <title>amount of warning per file per Tail</title>
  <search base="basesearch">
    <query>| search "WARNNING: " | rex field=_raw "WARNNING: (?&lt;warnning&gt;(?s:.*?))(?=\n\d{5}|$)" | search warnning IN $warning_type$ | search $project$ | search $platform$ | chart count over source by Tail</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <eval token="form.Tail">if($click.name2$=$form.Tail$, "*", $click.name2$)</eval>
  </drilldown>
</chart>

The main problem is that whenever I even try a condition inside the drilldown, a new search is opened instead of the tokens being managed, no matter what the condition is or what I do inside it. This is what I've tried so far:

<drilldown>
  <!-- Handle clicks on Tail (Legend) -->
  <condition match="tonumber($click.name$) != $click.name$">
    <eval token="form.Tail">if($click.name2$ == $form.Tail$, "*", $click.name2$)</eval>
  </condition>
  <!-- Handle clicks on Source (Chart) -->
  <condition match="tonumber($click.name$) == $click.name$">
    <link>
      <param name="target">_blank</param>
      <param name="search">index=myindex | search "WARNNING: "</param>
    </link>
  </condition>
</drilldown>

click.name should be the name of the source, as those are the columns of my chart.

Thanks in advance to helpers.
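Two things stand out: match expressions compare strings, so tonumber($click.name$) != $click.name$ never behaves as a numeric test (isnum(tonumber(...)) is the usual idiom), and <link> takes its target as an attribute with the search URL as the element body, not <param> children. A hedged sketch under the assumption, taken from your description, that a column click puts the non-numeric source path in click.name while a legend click does not; index=myindex is a placeholder, and since click-token behavior on legend clicks varies by version, it may help to temporarily <set> debug tokens to inspect what each click populates:

<drilldown>
  <!-- column click: click.name carries the source path (not a number) -->
  <condition match="NOT isnum(tonumber($click.name$))">
    <link target="_blank">search?q=search index=myindex source="$click.name$" "WARNNING: "</link>
  </condition>
  <!-- otherwise (legend click on a Tail number): toggle the token -->
  <condition>
    <eval token="form.Tail">if($click.name2$ == $form.Tail$, "*", $click.name2$)</eval>
  </condition>
</drilldown>

Note that an unconditional fallback <condition> (no match attribute) is what prevents the default drilldown search from opening when none of the explicit conditions match.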
I am getting the following error message whenever I try to log in to my Splunk test environment: user=************** is unavailable because the maximum user count for this license has been exceeded. I think this is because of a new license I recently uploaded to this box. As the old license was due to expire, I recently got a new free Splunk license (10GB Splunk Developer License). I received and uploaded it to the test box on Friday, 3 days before the old one was due to expire, and then deleted the old license that day despite it having a few additional days left. On Sunday (the day the old license was due to expire), I started getting this login issue. As I can't get past the login screen, I can't try to re-upload a different license, etc. Any suggestions?
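A hedged sketch of a way around the locked web UI, assuming shell access to the test box and that CLI authentication still works (the license path and credentials are placeholders):

# see which license files are currently installed
ls $SPLUNK_HOME/etc/licenses/*/

# add the developer license from disk, then restart so the license stack is re-read
$SPLUNK_HOME/bin/splunk add licenses /tmp/splunk_dev.lic -auth admin:yourpassword
$SPLUNK_HOME/bin/splunk restart

If the CLI rejects the login for the same reason, removing the stale .lic file from $SPLUNK_HOME/etc/licenses/enterprise/ and restarting should drop the instance back to a state from which a license can be uploaded again.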
Hi there, I'm looking to set up an automated email that will trigger any time a new alert comes into Incident Review in Splunk ES (on Splunk Enterprise). The idea is for the team to be notified without having the Incident Review page open, improving response time. I know I can set emails individually when an alert triggers, but this would be for every 'new' alert that comes in (some alerts are auto-closing), or, depending on volume, with an option to target only high-urgency alerts. Any advice would be appreciated!
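One hedged approach: a single scheduled alert over the notable index with an email action, instead of per-correlation-search emails. A minimal sketch, assuming the ES `notable` macro is available and the urgency/status fields are populated as usual:

| `notable`
| search status_label="New" urgency="high"
| table _time rule_name urgency owner

Scheduling it every 5 minutes over the last 5 minutes with a "Send email" alert action (trigger once per search, or per result) approximates real-time notification; drop the urgency filter to cover all new notables.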
How do I download the add-on "Splunk IT Service Intelligence"? I'm receiving an error, as seen in the picture below.
Hi everyone, I’ve been receiving a lot of helpful responses regarding this topic, and I truly appreciate the support. However, I’m currently stuck on how to execute a Python script via a button in Splunk and display the results on a dashboard.

Here’s the Python script I’m using:

import json
import requests
import logging

class ZabbixHandler:
    def __init__(self):
        self.logger = logging.getLogger('zabbix_handler')
        self.ZABBIX_API_URL = "http://localhost/zabbix/api_jsonrpc.php"  # Replace with your Zabbix API URL
        self.ZABBIX_USERNAME = "user"  # Replace with your Zabbix username
        self.ZABBIX_PASSWORD = "password"  # Replace with your Zabbix password
        self.SPLUNK_HEC_URL = "http://localhost:8088/services/collector"  # Replace with your Splunk HEC URL
        self.SPLUNK_HEC_TOKEN = "myhectoken"  # Replace with your Splunk HEC token
        self.HEC_INDEX = "summary"  # Splunk index for the logs
        self.HEC_SOURCETYPE = "zabbix:audit:logs"  # Splunk sourcetype

    def authenticate_with_zabbix(self):
        payload = {
            "jsonrpc": "2.0",
            "method": "user.login",
            "params": {
                "username": self.ZABBIX_USERNAME,
                "password": self.ZABBIX_PASSWORD,
            },
            "id": 1,
        }
        response = requests.post(self.ZABBIX_API_URL, json=payload, verify=False)
        response_data = response.json()
        if "result" in response_data:
            return response_data["result"]
        else:
            raise Exception(f"Zabbix authentication failed: {response_data}")

    def fetch_audit_logs(self, auth_token):
        payload = {
            "jsonrpc": "2.0",
            "method": "auditlog.get",
            "params": {
                "output": "extend",
                "filter": {
                    "action": 0  # Fetch specific actions if needed
                }
            },
            "auth": auth_token,
            "id": 2,
        }
        response = requests.post(self.ZABBIX_API_URL, json=payload, verify=False)
        response_data = response.json()
        if "result" in response_data:
            return response_data["result"]
        else:
            raise Exception(f"Failed to fetch audit logs: {response_data}")

    def send_logs_to_splunk(self, logs):
        headers = {
            "Authorization": f"Splunk {self.SPLUNK_HEC_TOKEN}",
            "Content-Type": "application/json",
        }
        for log in logs:
            payload = {
                "index": self.HEC_INDEX,
                "sourcetype": self.HEC_SOURCETYPE,
                "event": log,
            }
            response = requests.post(self.SPLUNK_HEC_URL, headers=headers, json=payload, verify=False)
            if response.status_code != 200:
                self.logger.error(f"Failed to send log to Splunk: {response.status_code} - {response.text}")

    def handle_request(self):
        try:
            auth_token = self.authenticate_with_zabbix()
            logs = self.fetch_audit_logs(auth_token)
            self.send_logs_to_splunk(logs)
            return {"status": "success", "message": "Logs fetched and sent to Splunk successfully."}
        except Exception as e:
            self.logger.error(f"Error during operation: {str(e)}")
            return {"status": "error", "message": str(e)}

if __name__ == "__main__":
    handler = ZabbixHandler()
    response = handler.handle_request()
    print(json.dumps(response))

My restmap.conf:

[script:zabbix_handler]
match = /zabbix_handler
script = zabbix_handler.py
handler = python
output_modes = json

Current status: The script is working correctly, and I am successfully retrieving data from Zabbix and sending it to Splunk. The logs are being indexed in Splunk’s summary index, and I can verify this via manual execution of the script.

Requirements: I want to create a button in a Splunk dashboard that, when clicked, executes the above Python script. The script (zabbix_handler.py) is located in the /opt/splunk/bin/ directory. The script extracts logs from Zabbix, sends them to Splunk’s HEC endpoint, and stores them in the summary index.
After the button is clicked and the script is executed, I would like to display the query results from index="summary" on the same dashboard.

Questions:
1. JavaScript for the button: How should I write the JavaScript code for the button to execute this script and display the results?
2. Placement of the JavaScript code: Where exactly in the Splunk app directory should I place it?
3. Triggering the script: How can I integrate this setup with Splunk’s framework to ensure the Python script is executed and the results are shown in the dashboard?

@kamlesh_vaghela, can you help me with this task? I'm kind of stuck on this, and your videos have helped me a lot!
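Not a definitive implementation, but a minimal SplunkJS sketch under a few assumptions: the endpoint is exposed at /services/zabbix_handler (matching the restmap.conf above), the dashboard has an HTML panel containing <button id="run_zabbix">, the file is saved as $SPLUNK_HOME/etc/apps/<your_app>/appserver/static/zabbix_button.js, and the dashboard's root tag references it with script="zabbix_button.js". The token name zabbix_refresh is made up; the summary-index search must reference it somewhere for the token change to re-dispatch it:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc) {
    // service handle authenticated as the current dashboard user
    var service = mvc.createService();

    $('#run_zabbix').on('click', function() {
        // hit the custom REST endpoint declared in restmap.conf
        service.get('/services/zabbix_handler', {}, function(err, response) {
            if (err) {
                console.error('zabbix_handler failed:', err);
                return;
            }
            // nudge the dashboard search over index="summary" to re-run
            // (the search must contain $zabbix_refresh$ for this to work)
            var tokens = mvc.Components.getInstance('default');
            tokens.set('zabbix_refresh', Date.now());
        });
    });
});

After changing files under appserver/static, a bump of Splunk's static assets (or a restart) is usually needed for the script to load.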
I cannot edit the index settings; it shows the error "Argument "coldPath_expanded" is not supported by this handler". Splunk Enterprise version: 8.2.4
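A hedged workaround sketch, assuming the index stanza lives in $SPLUNK_HOME/etc/system/local/indexes.conf or an app's local/indexes.conf: edit the file directly and restart, bypassing the UI handler that rejects the field. The index name and size are placeholders; note that coldPath_expanded is a read-only, derived value, and only the plain coldPath belongs in the file:

# indexes.conf
[myindex]
homePath   = $SPLUNK_DB/myindex/db
coldPath   = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb
maxTotalDataSizeMB = 500000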
We have a Windows host that sends us Stream data but suddenly stopped streaming. Upon checking the _internal logs, we found sniffer.running=false at the exact time it stopped sending logs; before that it was true. I am trying to find out where I can set this flag back to true and restart streamfwd.exe, and whether that would fix the issue. We doubt it, though, since we didn't touch any conf file that would have changed it. I'm attaching the internal logs for this host for clarity, in case the fix I have in mind isn't the right one and something else needs to be done. Thanks in advance for any help.
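A hedged diagnostic search for narrowing down why the sniffer stopped, assuming the forwarder's streamfwd log reaches _internal (the host name is a placeholder and the field names depend on your extractions):

index=_internal host="problem-host" source="*streamfwd*" (sniffer OR ERROR OR WARN*)
| sort 0 _time
| table _time _raw

Errors logged immediately before sniffer.running flipped to false (for example a capture-interface or Npcap/WinPcap failure) usually point at the root cause more reliably than forcing the flag back to true.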
I have increased the Max Size of the "main" index on the indexer cluster master node. I tried to push it to the peer nodes; it showed successful, and I have also restarted the peer node (Server Controls --> Restart Splunk). The Max Size of the "main" index is still not updated. Splunk Enterprise version: 8.2
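A hedged sketch of where the setting has to live for clustered peers, assuming a default install: for replicated indexes, the stanza must sit in the master-apps bundle on the manager and be pushed with apply cluster-bundle, since the pushed bundle takes precedence over a peer's local settings (the size value is a placeholder):

# on the cluster master:
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[main]
maxTotalDataSizeMB = 1000000

# then push the bundle (peers restart in a rolling fashion if needed):
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes

Editing a peer directly, or restarting it, won't help if the bundle still carries the old value.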
I have to display a field called Info, which has value A, and color it based on range (low, severe, high) as in Splunk Classic, but in Splunk Dashboard Studio. How can I achieve that?
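A hedged sketch of the Dashboard Studio pattern for a single value visualization, using the data-selector DSL's rangeValue against a context array; here the ranges stand for low (below 30), severe (30-70), and high (above 70), and the thresholds, colors, and context name infoColorConfig are all placeholders:

"options": {
    "majorColor": "> majorValue | rangeValue(infoColorConfig)"
},
"context": {
    "infoColorConfig": [
        {"value": "#118832", "to": 30},
        {"value": "#CBA700", "from": 30, "to": 70},
        {"value": "#D41F1F", "from": 70}
    ]
}

If Info is shown in a table rather than a single value, the same rangeValue approach applies through the table's columnFormat option.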
I have an indexer, a search head, and a heavy forwarder for a small installation. How do I configure them to communicate correctly?
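A hedged sketch of the basic wiring, assuming the indexer is at 10.0.0.10 and listens on the default forwarding port 9997 (all addresses and credentials are placeholders):

# on the heavy forwarder (and optionally the search head): outputs.conf
[tcpout]
defaultGroup = primary

[tcpout:primary]
server = 10.0.0.10:9997

# on the indexer: inputs.conf (enable receiving)
[splunktcp://9997]
disabled = 0

# on the search head: add the indexer as a search peer
$SPLUNK_HOME/bin/splunk add search-server https://10.0.0.10:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

The search peer can also be added via Settings > Distributed search in the web UI; forwarding the search head's own internal logs to the indexer with the same outputs.conf is a common best practice.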
I'm in the process of creating a small Splunk installation, and I would like to know where to download syslog-ng for Ubuntu Linux 20.x.
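For what it's worth, on Ubuntu 20.04 syslog-ng is available straight from the standard package repositories, so no separate download should be needed (a syslog-ng OSE PPA also exists if a newer version is required):

sudo apt-get update
sudo apt-get install syslog-ng
syslog-ng --version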
In my logs I am getting 4 events for 1 id:

1) Updating DB record with displayId=ABC0000000; type=TRANSFER
2) Updating DB record with displayId=ABC0000000; type=MESSAGES
3) Updating DB record with displayId=ABC0000000; type=POSTING
4) Sending message to topic ver. 2.3 with displayId=ABC0000000

Sample logs:

[13.01.2025 15:45.50] [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] [XXXXXXXXXXXXXXXXXXXXXX] [INFO ] [Application_name]- Updating DB record with displayId=ABC0000000; type=TRANSFER

I want to get the list of all ids that have all 3 "Updating DB record..." events but are missing the "Sending message to topic..." event.
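A hedged SPL sketch, with index/sourcetype as placeholders and displayId extracted from _raw; it counts each event type per id and keeps the ids where the "Sending" event never appeared:

index=myindex ("Updating DB record with displayId=" OR "Sending message to")
| rex field=_raw "displayId=(?<displayId>[A-Z0-9]+)"
| stats count(eval(if(searchmatch("Updating DB record"), 1, null()))) as updates,
        count(eval(if(searchmatch("Sending message to"), 1, null()))) as sent
        by displayId
| where updates>=3 AND sent=0

Using updates>=3 rather than =3 tolerates duplicate "Updating" events; tighten it to =3 if the three types are guaranteed to occur exactly once each.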
Hi All, I have a main search where the name1 field will have multiple values, and I need to run a subsearch based on the value of name1. The structure goes like this:

main search, which has name1=a

subsearch:
if name1=a then run search1
if name1=b then run search2

I have tried this with the following code:

| makeresults
| eval name1="a"
| eval condition=case(
    name1="a", "index=_internal | head 1 | eval val=\"Query for a1\" | table val",
    name1="b", "index=_internal | head 1 | eval val=\"Query for b\" | table val",
    1=1, "search index=_internal | head 1 | eval val=\"Default query\" | table val")
| table condition
| map search=$condition$

I am getting the following error:

Unable to run query '"index=_internal | head 1 | eval val=\"Query for a1\" | table val"'.
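A hedged sketch of one way around this. Two issues are visible: the first two case() branches lack the leading "search" generating command that map requires, and the substituted value appears to arrive wrapped in literal quotes (the '"index=..."' wrapper in the error), which breaks the query. Keeping the query text literal inside map and substituting only the varying value sidesteps both:

| makeresults
| eval name1="a"
| map search="search index=_internal | head 1 | eval val=\"Query for $name1$\" | table val"

If the branches need genuinely different searches rather than different labels, another pattern is one saved search per value and map search="| savedsearch query_for_$name1$" (the saved-search names here are hypothetical), which keeps the query text out of eval strings entirely. Token-quoting behavior in map varies across Splunk versions, so test on your release.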
Hi, I have a requirement to mask any sensitive data, such as credit card numbers or Social Security numbers, that might be ingested into Splunk. I can write the props to handle data masking, but the challenge is that I do not know where, or if, the sensitive data will appear. Although the data we currently have doesn't contain any sensitive information, compliance mandates require us to implement controls that detect and mask such data before it is ingested into Splunk. Essentially, the props need to be dynamic. Is there a way to achieve this? Thanks.
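There is no truly dynamic props mechanism at ingest time, but a hedged approximation is pattern-based SEDCMD masking applied broadly. A sketch, with the sourcetype as a placeholder and the regexes illustrative rather than exhaustive (real card detection would also want a Luhn check, which SEDCMD cannot do):

# props.conf on the indexer or heavy forwarder
[my_sourcetype]
# mask SSN-shaped values
SEDCMD-mask_ssn = s/\b\d{3}-\d{2}-\d{4}\b/XXX-XX-XXXX/g
# mask 13-16 digit runs that look like card numbers
SEDCMD-mask_cc  = s/\b\d{13,16}\b/CC-MASKED/g

To cover all sourcetypes you would repeat the stanza or script its generation; heavier-weight options include Ingest Actions rulesets (Splunk 9.x) or an external pipeline such as Edge Processor, where masking rules can be managed centrally and applied regardless of sourcetype.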
Hi everyone, recently we had a use case where we had to use the scheduled PNG export function of a Dashboard Studio dashboard (Enterprise 9.4). Unfortunately it only works with some limitations (a bug?). If you change the custom input fields of the export for subject and message, they are not reflected in the mail. In the Dev Tools you will find something like action.email.subject for the subject and action.email.message for the message, populated with the information written into the export schedule, which seems fine so far. But when the export runs, only the following fields are used in the email, and they appear to be predefined and not changeable at all via the GUI:

action.email.message.view: "Splunk Dashboard: api_modify_dashboard_backup"
action.email.subject.view: "A PDF was generated for api_modify_dashboard_backup" (even if you select the PNG export)

Has anyone else experienced this, or better yet, found a solution? Thanks to all.
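A hedged, unverified workaround sketch: scheduled dashboard exports are stored as saved searches, so the .view variants might be overridable on disk. The stanza name below is a guess modeled on how Classic scheduled views are stored (_ScheduledView__<dashboard>); check savedsearches.conf in the owning app for the real name, and test on a non-production schedule first:

# savedsearches.conf in the app that owns the schedule
[_ScheduledView__api_modify_dashboard_backup]
action.email.subject.view = Custom subject here
action.email.message.view = Custom message here

If these keys get regenerated on every edit, that would suggest the UI is writing the .view keys instead of honoring the plain action.email.subject/message ones, which would be worth raising as a support case.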
Hi, I have a problem with parsing logs in the Splunk Add-on for Check Point Log Exporter. I have installed it on both the SH and the HF, but logs from Check Point are not parsing properly. I have tried changing REGEX to ([a-zA-Z0-9_-]+)[:=]+([^|]+) and changing DEPTH_LIMIT to 200000, as the troubleshooting guide says, but it is still not working. Can you give me some advice? Thank you so much!
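For reference, a hedged sketch of where such an override would go; the stanza name is a placeholder, since the real transform name has to be taken from the add-on's default/transforms.conf (overrides belong in local/, and search-time extractions take effect on the search head, not the HF):

# $SPLUNK_HOME/etc/apps/<checkpoint_addon>/local/transforms.conf
[<name_of_the_addons_kv_transform>]
REGEX = ([a-zA-Z0-9_-]+)[:=]+([^|]+)
FORMAT = $1::$2
DEPTH_LIMIT = 200000

It is also worth confirming the events carry the exact sourcetype the add-on's props.conf stanzas expect, since a mismatched sourcetype silently disables all of its parsing.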
Hello, I was trying to ingest snmptrapd logs by monitoring the file locally (there is only one Splunk instance in my environment). Here is the log format:

<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDiskFreeSpaceNotification CYBER-ARK-MIB::osDiskDrive "C:\\" CYBER-ARK-MIB::osDiskPercentageFreeSpace "71.61" CYBER-ARK-MIB::osDiskFreeSpace "58221" CYBER-ARK-MIB::osDiskTrapState "Alert"
<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13524732 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3660968 CYBER-ARK-MIB::osMemoryTrapState "Alert"
<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osSwapMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13524732 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3660968 CYBER-ARK-MIB::osMemoryTrapState "Alert"

I tried to use "<UNKNOWN>" as the line breaker, but it does not work, and the events break in a weird way (sometimes it works; most of the time it doesn't). Please find my props.conf settings below:

[cyberark:snmplogs]
LINE_BREAKER = \<UNKNOWN\>
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
pulldown_type = 1
BREAK_ONLY_BEFORE = \<UNKNOWN\>
MUST_NOT_BREAK_BEFORE = \<UNKNOWN\>
disabled = false
LINE_BREAKER_LOOKBEHIND = 2000

Line breaking result in Splunk: [screenshot]
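A hedged corrected sketch. The main problem is that LINE_BREAKER must contain a capturing group (whatever the group matches is discarded as the event boundary; your pattern has none), and SHOULD_LINEMERGE = true plus BREAK_ONLY_BEFORE conflicts with explicit LINE_BREAKER-based breaking. Breaking on the newline before each <UNKNOWN> keeps the marker in the event:

[cyberark:snmplogs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<UNKNOWN>)
TIME_PREFIX = <UNKNOWN>\s+-\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
NO_BINARY_CHECK = true

With SHOULD_LINEMERGE = false, the BREAK_ONLY_BEFORE, MUST_NOT_BREAK_BEFORE, and LINE_BREAKER_LOOKBEHIND settings are unnecessary and can be dropped.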
I am watching the training for the core user certification path on STEP, and they are using an index that has the usage field. I have uploaded the tutorial data from the community site, but it doesn't have the usage field. I don't know how to rectify this, and I cannot replicate the activity in the learning material. Does anyone have a suggestion?

EDIT - I just made up my own CSV and imported that data. ggwp