All Posts

First, map is usually not the solution to the problem you are trying to solve. Secondly, could you explain the relationship between the values "a", "b" and the searches "index=_internal | head 1 | eval val=\"Query for a1\" | table val" and "index=_internal | head 1 | eval val=\"Query for b\" | table val"? Confusingly, every one of the three searches will result in a predetermined string value of a single field. Why bother with index=_internal? If you are just trying to make a point about map, you can compose them with makeresults just as easily.

If you really want to use map, study the syntax and examples in the map documentation. The whole idea of map is to NOT use the case function. To produce the result you intended, here is a proper construct:

| makeresults
| eval name1 = mvappend("c", "b", "a")
| mvexpand name1
| map search="search index=_internal | head 1 | eval val=if(\"$name1$\" IN (\"a\", \"b\"), \"Query for $name1$\", \"Default query\") | table val"

This is the output no matter what data you have in _internal:

val
-------------
Default query
Query for b
Query for a

However, there are often much easier and better ways to do this. To illustrate, forget val="Query for a". Let's pick more realistic mock values "info" and "warn". This is a construct using map:

| makeresults
| eval searchterm = mvappend("info", "warn", "nosuchterm")
| mvexpand searchterm
| map search="search index=_internal log_level=\"$searchterm$\" | stats count by log_level | eval val=if(\"$searchterm$\" IN (\"info\", \"warn\"), \"Query for $searchterm$\", \"Default query\")"

If you examine _internal events, you will know that, even though searchterm is given three values, the above should give only two rows, like:

log_level   count    val
INFO        500931   Query for info
WARN        17262    Query for warn

However, the syntax of map makes the search much harder to maintain. Here is an alternative using a subsearch. (There are other alternatives based on actual search terms and data characteristics.)

index=_internal
  [makeresults
   | eval searchterm = mvappend("info", "warn", "nosuchterm")
   | fields searchterm
   | rename searchterm as log_level]
| stats count by log_level
| eval val = if(log_level IN ("INFO", "WARN"), "Query for " . log_level, "Default query")

If you apply this to the exact same time interval, it will give you exactly the same output. Hope this helps.
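P.S. On the makeresults point above: a mock search like "index=_internal | head 1 | eval val=\"Query for a1\" | table val" reduces to a one-liner that reads no index at all, for example:

| makeresults
| eval val = "Query for a1"
| table val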
I set the following in inputs.conf and it seems to be working fine now:

multiline_event_extra_waittime = true
time_before_close = 120

I will monitor it for a while and see if the event breaking remains stable. Thank you for your help!
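For anyone finding this later: these are per-input settings, so they go in the monitor stanza for the source in question. A minimal sketch, assuming a hypothetical log path and sourcetype:

[monitor:///var/log/myapp/multiline.log]
sourcetype = my_multiline_sourcetype
multiline_event_extra_waittime = true
time_before_close = 120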
Thanks for clarifying. You helped a lot! That means there are two options for me:
- do this conversion on the syslog-ng side, and that won't hurt the Splunk side of things
- forward the logs to yet another Splunk instance that will only do this conversion, thereby isolating the "production" Splunk instance from these transforms
Hi @manhdt , Splunk ITSI is a Premium App, so you cannot download it from Splunkbase. You can have it only if you paid for a license, in which case you'll receive an email with the link to download it, or if you are a Splunk Partner, using your Not For Resale License. If you want to see it, you can access Splunk Show, if you are enabled (a Sales Engineer 2 certification is required), or watch a video on the Splunk YouTube channel. Ciao. Giuseppe
It depends on your overall process, but as a general rule the pipeline works like this:

input -> transforms -> output(s)

So if you modify an event and its metadata, it will get to the outputs that way. There is an ugly way to avoid it: use CLONE_SOURCETYPE to make a copy of your event and process it independently, but it's both a performance hit and a maintenance nightmare in the future.
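For reference, a minimal sketch of the CLONE_SOURCETYPE approach (the stanza and sourcetype names here are placeholders):

props.conf:

[original_sourcetype]
TRANSFORMS-clone = clone_for_secondary

transforms.conf:

[clone_for_secondary]
REGEX = .
CLONE_SOURCETYPE = original_sourcetype_copy

The cloned copy then runs through the pipeline again under the new sourcetype, which is where the extra processing cost comes from.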
How do I download the add-on "Splunk IT Service Intelligence"? I'm receiving an error, as seen in the picture below.
Do you mean that separate lines are written at 10-minute intervals, or that every 10 minutes a whole multiline event is written? Anyway, if it's a UF, it might help to add EVENT_BREAKER_ENABLE=true and set EVENT_BREAKER to the same value as LINE_BREAKER.
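In props.conf on the UF that would look roughly like this (the sourcetype name and the timestamp-based breaker regex are placeholders; reuse whatever your LINE_BREAKER already is):

[my_multiline_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}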
Hi @danielbb , is it mandatory to use syslog-ng? You should already have rsyslog on your system; it's an alternative to syslog-ng and works in almost the same way. Ciao. Giuseppe
Okay, I reverted to using INGEST_EVAL; that works as well. On the other hand, I have an additional question: if a given Splunk node is already forwarding logs to another node over S2S or S2S over HEC, and I want to add this configuration to send the logs to yet another destination (a node running syslog-ng), will this configuration break the log format for the other pre-existing destinations? Or is it safe to use from this perspective?
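To make the question concrete, the outputs.conf layout would be roughly along these lines (group and host names are placeholders):

[tcpout]
defaultGroup = existing_indexers

[tcpout:existing_indexers]
server = indexer1.example.com:9997

[syslog:syslog_ng_out]
server = sysloghost.example.com:514
type = tcp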
Hi Tom, can you share a screenshot of your rule setup under BT Detection, please? The whole config. Just want to check something.
How do I fix this?
@All-done-steak  You need to create an `indexes.conf` file in either `/opt/splunk/etc/manager-apps` or `/opt/splunk/etc/master-apps`. Afterward, push the configuration, and it will appear on the indexers under `/opt/splunk/etc/peer-apps`.
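A minimal stanza for a clustered index might look like this (the index name is a placeholder; `repFactor = auto` is what enables replication across the peers):

[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor = auto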
@All-done-steak  The main index serves as the default index. I recommend creating a new index and applying the desired settings. Then, navigate to the cluster master and push the changes using the following command:

/opt/splunk/bin/splunk apply cluster-bundle

Afterward, check the bundle status. For an indexer cluster, use the CLI on the cluster master to run:

/opt/splunk/bin/splunk show cluster-bundle-status
How did you verify that the parameter was not updated? Have you checked the changed indexes.conf on a peer node? If not, please check it and execute

$SPLUNK_HOME/bin/splunk btool indexes list main --debug

to check the parameter and its app location. It's possible that another indexes.conf file takes precedence over your modified configuration.
Check out the Dashboard Example Hub on your Splunk instance. There you will find a bunch of examples of how to visualize your data:

http://<your_splunk_url>:8000/en-GB/app/splunk-dashboard-studio/example-hub-nav-visualizations#Single%20Value

Example source definition:

{
    "type": "splunk.singlevalue",
    "dataSources": {
        "primary": "ds_search1"
    },
    "options": {},
    "context": {}
}
Hi everyone, I’ve been receiving a lot of helpful responses regarding this topic, and I truly appreciate the support. However, I’m currently stuck on how to execute a Python script via a button in Splunk and display the results on a dashboard. Here’s the Python script I’m using:

import json
import requests
import logging

class ZabbixHandler:
    def __init__(self):
        self.logger = logging.getLogger('zabbix_handler')
        self.ZABBIX_API_URL = "http://localhost/zabbix/api_jsonrpc.php"  # Replace with your Zabbix API URL
        self.ZABBIX_USERNAME = "user"  # Replace with your Zabbix username
        self.ZABBIX_PASSWORD = "password"  # Replace with your Zabbix password
        self.SPLUNK_HEC_URL = "http://localhost:8088/services/collector"  # Replace with your Splunk HEC URL
        self.SPLUNK_HEC_TOKEN = "myhectoken"  # Replace with your Splunk HEC token
        self.HEC_INDEX = "summary"  # Splunk index for the logs
        self.HEC_SOURCETYPE = "zabbix:audit:logs"  # Splunk sourcetype

    def authenticate_with_zabbix(self):
        payload = {
            "jsonrpc": "2.0",
            "method": "user.login",
            "params": {
                "username": self.ZABBIX_USERNAME,
                "password": self.ZABBIX_PASSWORD,
            },
            "id": 1,
        }
        response = requests.post(self.ZABBIX_API_URL, json=payload, verify=False)
        response_data = response.json()
        if "result" in response_data:
            return response_data["result"]
        else:
            raise Exception(f"Zabbix authentication failed: {response_data}")

    def fetch_audit_logs(self, auth_token):
        payload = {
            "jsonrpc": "2.0",
            "method": "auditlog.get",
            "params": {
                "output": "extend",
                "filter": {
                    "action": 0  # Fetch specific actions if needed
                }
            },
            "auth": auth_token,
            "id": 2,
        }
        response = requests.post(self.ZABBIX_API_URL, json=payload, verify=False)
        response_data = response.json()
        if "result" in response_data:
            return response_data["result"]
        else:
            raise Exception(f"Failed to fetch audit logs: {response_data}")

    def send_logs_to_splunk(self, logs):
        headers = {
            "Authorization": f"Splunk {self.SPLUNK_HEC_TOKEN}",
            "Content-Type": "application/json",
        }
        for log in logs:
            payload = {
                "index": self.HEC_INDEX,
                "sourcetype": self.HEC_SOURCETYPE,
                "event": log,
            }
            response = requests.post(self.SPLUNK_HEC_URL, headers=headers, json=payload, verify=False)
            if response.status_code != 200:
                self.logger.error(f"Failed to send log to Splunk: {response.status_code} - {response.text}")

    def handle_request(self):
        try:
            auth_token = self.authenticate_with_zabbix()
            logs = self.fetch_audit_logs(auth_token)
            self.send_logs_to_splunk(logs)
            return {"status": "success", "message": "Logs fetched and sent to Splunk successfully."}
        except Exception as e:
            self.logger.error(f"Error during operation: {str(e)}")
            return {"status": "error", "message": str(e)}

if __name__ == "__main__":
    handler = ZabbixHandler()
    response = handler.handle_request()
    print(json.dumps(response))

My restmap.conf:

[script:zabbix_handler]
match = /zabbix_handler
script = zabbix_handler.py
handler = python
output_modes = json

Current status:
- The script is working correctly, and I am successfully retrieving data from Zabbix and sending it to Splunk.
- The logs are being indexed in Splunk’s summary index, and I can verify this via manual execution of the script.

Requirements:
- I want to create a button in a Splunk dashboard that, when clicked, executes the above Python script.
- The script (zabbix_handler.py) is located in the /opt/splunk/bin/ directory.
- The script extracts logs from Zabbix, sends them to Splunk’s HEC endpoint, and stores them in the summary index.
After the button is clicked and the script is executed, I would like to display the query results from index="summary" on the same dashboard.

Questions:
1. JavaScript for the button: How should I write the JavaScript code for the button to execute this script and display the results?
2. Placement of the JavaScript code: Where exactly in the Splunk app directory should I place the JavaScript code?
3. Triggering the script: How can I integrate this setup with Splunk’s framework to ensure the Python script is executed and the results are shown in the dashboard?

@kamlesh_vaghela , can you help me with this task? I’m kind of stuck on this, and your videos helped me a lot!
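P.S. With the restmap.conf stanza above in place, the endpoint should be reachable outside the dashboard via the splunkd management port, e.g. (credentials are placeholders):

curl -k -u admin:changeme https://localhost:8089/services/zabbix_handler

What I'm missing is how to wire the dashboard button to that call and then refresh the index="summary" search.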
Cannot edit the index settings, as it shows an error saying "Argument "coldPath_expanded" is not supported by this handler".

Splunk Enterprise version: 8.2.4
We have a Windows host that sends us stream data but suddenly stopped the stream. Upon checking the _internal logs, we found sniffer.running=false at the exact time it stopped sending logs; before that, it was true. I am trying to find out where I can set this flag back to true and restart streamfwd.exe, and whether that would fix the issue. My doubt is that we didn't actually touch any conf file that would have changed it. I am attaching the internal logs for this host for more clarity, in case the solution I am thinking of is not the right one and something else needs to be done.

Thanks in advance for any help.
There is an option to export the dashboard where the details are clear and legible. One thing I noticed is the size difference between the instant download and the scheduled one: the scheduled one is smaller (almost half), so it looks like the PDF/PNG is getting compressed while being sent to mail.

The only solution I can come up with is to split the dashboard into two and schedule those.

Edit: the above still gave the same results.

If anyone has any other idea, please share.