All Posts

Hi @danielbb, is it mandatory to use syslog-ng? You should already have rsyslog on your system; it is a modern syslog daemon comparable to syslog-ng and almost the same to configure. Ciao. Giuseppe
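For reference, a minimal rsyslog listener could look like the sketch below; the port, file path, and template name are assumptions, not values from this thread.

# /etc/rsyslog.d/10-splunk-feed.conf -- minimal sketch, assuming UDP/514 and a per-host file layout
module(load="imudp")                # load the UDP listener module
input(type="imudp" port="514")      # accept incoming syslog on UDP/514

# write everything received from the network into one file per sending host (path is an assumption)
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/messages.log")
if ($fromhost-ip != "127.0.0.1") then {
    action(type="omfile" dynaFile="PerHostFile")
    stop
}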
Okay, I reverted to using INGEST_EVAL, and that works as well. I have an additional question, though: if a given Splunk node is already forwarding logs to another node over S2S, or S2S over HEC, and I add this configuration to send the logs to yet another destination (a node running syslog-ng), will this configuration break the log format for the pre-existing destinations? Or is it safe from that perspective?
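For the routing part, a separate syslog output group is usually defined in outputs.conf and selected per sourcetype via props/transforms, which leaves the existing S2S/HEC outputs untouched. A rough sketch with placeholder names (syslog_group, your_sourcetype, and the host are assumptions); whether the format changes for the other destinations still depends on where any index-time rewrite happens in your pipeline:

outputs.conf

[syslog:syslog_group]
server = <syslog-ng-host>:514
type = udp

props.conf

[your_sourcetype]
TRANSFORMS-route_syslog = send_to_syslog

transforms.conf

[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_group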
Hi Tom, can you share a screenshot of your rule setup under BT Detection, please? The whole config. I just want to check something.
How do I fix this?
@All-done-steak  You need to create an `indexes.conf` file in either `/opt/splunk/etc/manager-apps` or `/opt/splunk/etc/master-apps`. Afterward, push the configuration, and it will appear on the indexers under `/opt/splunk/etc/peer-apps`.
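For illustration, a minimal sketch of such an indexes.conf, assuming you are raising the size of main and that the app folder is called cluster_indexes (both names and the value are assumptions):

$SPLUNK_HOME/etc/manager-apps/cluster_indexes/local/indexes.conf

[main]
# total size of hot + warm + cold buckets, in MB (value is only an example)
maxTotalDataSizeMB = 1000000

After the bundle push, btool on a peer (as suggested further down) will show which copy of indexes.conf actually wins.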
@All-done-steak  The main index serves as the default index. I recommend creating a new index and applying the desired settings. Then, navigate to the cluster master and push the changes using the following command:

/opt/splunk/bin/splunk apply cluster-bundle

Afterward, check the bundle status. For an indexer cluster, run the following on the Cluster Master via the CLI:

/opt/splunk/bin/splunk show cluster-bundle-status
How did you verify that the parameter was not updated? Have you checked the changed indexes.conf on a peer node? If not, please check it and run

$SPLUNK_HOME/bin/splunk btool indexes list main --debug

to check the parameter and the app it comes from. It’s possible that another indexes.conf file takes precedence over your modified configuration.
Check out the Dashboard Examples Hub on your Splunk instance. There you will find a bunch of examples of how to visualize your data: http://<your_splunk_url>:8000/en-GB/app/splunk-dashboard-studio/example-hub-nav-visualizations#Single%20Value

Example source definition:

{
  "type": "splunk.singlevalue",
  "dataSources": {
    "primary": "ds_search1"
  },
  "options": {},
  "context": {}
}
Hi everyone, I’ve been receiving a lot of helpful responses regarding this topic, and I truly appreciate the support. However, I’m currently stuck on how to execute a Python script via a button in Splunk and display the results on a dashboard.

Here’s the Python script I’m using:

import json
import requests
import logging

class ZabbixHandler:
    def __init__(self):
        self.logger = logging.getLogger('zabbix_handler')
        self.ZABBIX_API_URL = "http://localhost/zabbix/api_jsonrpc.php"  # Replace with your Zabbix API URL
        self.ZABBIX_USERNAME = "user"  # Replace with your Zabbix username
        self.ZABBIX_PASSWORD = "password"  # Replace with your Zabbix password
        self.SPLUNK_HEC_URL = "http://localhost:8088/services/collector"  # Replace with your Splunk HEC URL
        self.SPLUNK_HEC_TOKEN = "myhectoken"  # Replace with your Splunk HEC token
        self.HEC_INDEX = "summary"  # Splunk index for the logs
        self.HEC_SOURCETYPE = "zabbix:audit:logs"  # Splunk sourcetype

    def authenticate_with_zabbix(self):
        payload = {
            "jsonrpc": "2.0",
            "method": "user.login",
            "params": {
                "username": self.ZABBIX_USERNAME,
                "password": self.ZABBIX_PASSWORD,
            },
            "id": 1,
        }
        response = requests.post(self.ZABBIX_API_URL, json=payload, verify=False)
        response_data = response.json()
        if "result" in response_data:
            return response_data["result"]
        else:
            raise Exception(f"Zabbix authentication failed: {response_data}")

    def fetch_audit_logs(self, auth_token):
        payload = {
            "jsonrpc": "2.0",
            "method": "auditlog.get",
            "params": {
                "output": "extend",
                "filter": {
                    "action": 0  # Fetch specific actions if needed
                }
            },
            "auth": auth_token,
            "id": 2,
        }
        response = requests.post(self.ZABBIX_API_URL, json=payload, verify=False)
        response_data = response.json()
        if "result" in response_data:
            return response_data["result"]
        else:
            raise Exception(f"Failed to fetch audit logs: {response_data}")

    def send_logs_to_splunk(self, logs):
        headers = {
            "Authorization": f"Splunk {self.SPLUNK_HEC_TOKEN}",
            "Content-Type": "application/json",
        }
        for log in logs:
            payload = {
                "index": self.HEC_INDEX,
                "sourcetype": self.HEC_SOURCETYPE,
                "event": log,
            }
            response = requests.post(self.SPLUNK_HEC_URL, headers=headers, json=payload, verify=False)
            if response.status_code != 200:
                self.logger.error(f"Failed to send log to Splunk: {response.status_code} - {response.text}")

    def handle_request(self):
        try:
            auth_token = self.authenticate_with_zabbix()
            logs = self.fetch_audit_logs(auth_token)
            self.send_logs_to_splunk(logs)
            return {"status": "success", "message": "Logs fetched and sent to Splunk successfully."}
        except Exception as e:
            self.logger.error(f"Error during operation: {str(e)}")
            return {"status": "error", "message": str(e)}

if __name__ == "__main__":
    handler = ZabbixHandler()
    response = handler.handle_request()
    print(json.dumps(response))

My restmap.conf:

[script:zabbix_handler]
match = /zabbix_handler
script = zabbix_handler.py
handler = python
output_modes = json

Current status: The script is working correctly, and I am successfully retrieving data from Zabbix and sending it to Splunk. The logs are being indexed in Splunk’s summary index, and I can verify this by running the script manually.

Requirements:
- I want to create a button in a Splunk dashboard that, when clicked, executes the above Python script.
- The script (zabbix_handler.py) is located in the /opt/splunk/bin/ directory.
- The script extracts logs from Zabbix, sends them to Splunk’s HEC endpoint, and stores them in the summary index.
After the button is clicked and the script is executed, I would like to display the query results from index="summary" on the same dashboard.

Questions:
1. JavaScript for the button: How should I write the JavaScript code for the button to execute this script and display the results?
2. Placement of the JavaScript code: Where exactly in the Splunk app directory should I place the JavaScript file?
3. Triggering the script: How can I integrate this setup with Splunk’s framework so the Python script is executed and the results are shown in the dashboard?

@kamlesh_vaghela Can you help me with this task? I'm kind of stuck on this, and your videos have helped me a lot!
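Not a definitive answer, but here is a rough sketch of the kind of Simple XML extension usually used for this. It assumes the custom endpoint is reachable at /services/zabbix_handler (as in your restmap.conf), that the JS file is referenced from the dashboard XML via script="zabbix_button.js", and that the dashboard contains an HTML button with id="run_zabbix" and a search with id="summary_search"; all of those names are placeholders.

// appserver/static/zabbix_button.js -- hypothetical sketch, names are placeholders
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    // Button defined in the dashboard XML, e.g. <html><button id="run_zabbix">Fetch Zabbix logs</button></html>
    $('#run_zabbix').on('click', function () {
        // Call the custom REST endpoint registered in restmap.conf through splunkd
        var service = mvc.createService();
        service.get('/services/zabbix_handler', {}, function (err, response) {
            if (err) {
                console.error('zabbix_handler failed', err);
                return;
            }
            console.log('zabbix_handler response', response.data);
            // Re-run the dashboard search (id="summary_search") so the newly indexed events show up
            var search = mvc.Components.get('summary_search');
            if (search) {
                search.startSearch();
            }
        });
    });
});

The file would go under $SPLUNK_HOME/etc/apps/<your_app>/appserver/static/. Note that custom JavaScript like this only works in Simple XML dashboards; Dashboard Studio does not support custom JS, so this sketch assumes a Simple XML dashboard.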
I cannot edit the index setting, as it shows an error saying "Argument "coldPath_expanded" is not supported by this handler". Splunk Enterprise version: 8.2.4
We have a Windows host that sends us stream data but suddenly stopped streaming. Upon checking the _internal logs, we found sniffer.running=false at the exact time it stopped sending the logs; previously it was true. I am trying to find out where I can set this flag back to true and restart streamfwd.exe, and whether that would fix the issue. My doubt is that we didn't touch any conf file that would have changed it. I'm attaching the internal logs for this host for more clarity, in case the solution I am thinking of is not the right one and something else needs to be done. Thanks in advance for any help.
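If it helps narrow things down, a search along these lines over the forwarder's internal logs may show what changed right before the sniffer stopped; the host name is a placeholder, and this assumes the Stream forwarder's streamfwd.log is reaching the _internal index:

index=_internal host=YOUR_WINDOWS_HOST source="*streamfwd*" (sniffer OR error OR warn)
| sort - _time
| table _time, source, _raw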
There is an option to export the dashboard where the details are clear and legible. One thing I noticed is the size difference between the instant download and the scheduled one: the scheduled file is smaller (almost half), so it looks like the PDF/PNG is being compressed while it is sent to mail. The only solution I could come up with was to split the dashboard into two and schedule those. Edit: the above still gave the same results. If anyone has any other ideas, please share.
I have increased the Max Size of the "main" index on the indexer clustering master node. I pushed it to the peer node, it showed successful, and I have also restarted the peer node (Server Control --> Restart Splunk). The Max Size of the "main" index is still not updated. Splunk Enterprise version: 8.2
@PickleRick @isoutamo @kiran_panchavat Thank you for the replies! I think I should provide more information about the log. It comes from SNMP traps, and I have a script that exports the traps line by line to the log file that is monitored by Splunk. The props.conf that @PickleRick helped to amend works well if I use 'Add Data' to add a static log file, but if I use file monitoring (new lines of SNMP traps are written roughly every 10 minutes), the line breaking goes wrong. So I was wondering whether the problem is due to the file being updated? The SNMP traps are written almost at the same time (as seen in the timestamps). If I wanted to fix this, which configurations should I change?
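Not a definitive fix, but when a monitored file is written in bursts, the tail reader can break an event too early; the settings below are the ones usually involved. The sourcetype name, monitor path, and timestamp pattern are assumptions, so adjust them to your trap format:

props.conf (on the instance that parses the data):

[your_snmp_sourcetype]
SHOULD_LINEMERGE = false
# break only where a new trap starts, i.e. before a leading timestamp (pattern is an assumption)
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

inputs.conf (on the machine monitoring the file):

[monitor:///path/to/snmp_traps.log]
sourcetype = your_snmp_sourcetype
# wait a bit longer before treating a partially written multi-line event as complete
time_before_close = 5
multiline_event_extra_waittime = true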
I can confirm that this is fixed in 9.4.0:

| makeresults format=csv data="field
a
a:b"
| eval field = split(field, ":"), count = mvcount(field), map = mvmap(field, "1")

In 9.4.0, it returns:

count   field   map
1       a       1
2       a       1
        b       1

Before the fix, it would return the following, with an incorrect first row:

count   field   map
1       a       a
2       a       1
        b       1
I have to display a field called Info, which has value A, and color it based on a range (low, severe, high), as was possible in Splunk Classic, but in Splunk Dashboard Studio. How can I achieve that?
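A sketch in the style of the Examples Hub single-value definitions; the data source name, thresholds, and colors are placeholders, and it assumes the value being colored is numeric:

{
  "type": "splunk.singlevalue",
  "dataSources": {
    "primary": "ds_info_search"
  },
  "options": {
    "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)"
  },
  "context": {
    "majorColorEditorConfig": [
      { "to": 30, "value": "#118832" },
      { "from": 30, "to": 70, "value": "#D94E17" },
      { "from": 70, "value": "#D41F1F" }
    ]
  }
}

If Info is a string such as low/severe/high rather than a number, swapping rangeValue(...) for matchValue(...) and giving each entry a "match" key instead of from/to is the usual approach.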
EVAL is a search-time configuration, so it will not work at index time (I'm not even sure it's correct syntax in your example).
I can confirm, this type of setup does not work for the Windows logs:

[sanitize_metadata]
EVAL-EEEE = replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = EEEE
REGEX = (?ims)(.*)
FORMAT = $1__-__$0
DEST_KEY = _raw

The problem is that with this, the Windows logs only contain the eventlog message part, as if they did not have any metadata attached.
I have played around a bit more... This is what seems to be working for me:

[sanitize_metadata]
EVAL-_meta = replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = _meta
REGEX = (?ims)(.*)
FORMAT = $1__-__$0
DEST_KEY = _raw

Note: __-__ is just a placeholder for a separator.

I found an article that aims at something marginally similar to what I am doing: https://zchandikaz.medium.com/alter-splunk-data-at-indexing-time-a10c09713f51 There, the author uses EVAL instead of INGEST_EVAL. Is there any significant difference?

Also, I changed your example because it behaved differently when I did not use _meta as the target variable of the INGEST_EVAL. I noticed that with your version, the logs originating from the Windows machine with the UF on it were missing the metadata assigned there. When I use my version, all the metadata set on the UF (static key-value pairs) is present in the log. Any idea why that might be?

Either way, thanks so much for your effort to help me! I really appreciate it!
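On the EVAL vs INGEST_EVAL question: EVAL- in props.conf is a search-time setting, so it only creates fields when you search and cannot change what is written into _raw or _meta, whereas INGEST_EVAL lives in transforms.conf and runs at index time. A sketch of the usual index-time layout, with the sourcetype name as a placeholder (not necessarily what the linked article does):

transforms.conf:

[sanitize_metadata]
INGEST_EVAL = _meta=replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = _meta
REGEX = (?ims)(.*)
FORMAT = $1__-__$0
DEST_KEY = _raw

props.conf:

[your_windows_sourcetype]
TRANSFORMS-sanitize = sanitize_metadata, metadata_meta

Listing sanitize_metadata before metadata_meta in the TRANSFORMS- line makes the rewrite run before the _raw rebuild.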