All Posts


Hi, here is how I did it. I actually migrated the whole distributed multisite environment from one service provider to another: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538069/highlight/true#M4823 r. Ismo
@Cccvvveee0235 Please check these: https://community.splunk.com/t5/Security/quot-Server-Error-quot-for-a-fresh-Splunk-install/m-p/447283 https://community.splunk.com/t5/Splunk-Search/Why-are-we-seeing-a-quot-Server-Error-quot-message-after-each/m-p/131524
Hi @kamlesh_vaghela, I did not observe any errors in python.log, but I did notice errors in splunkd.log. Here are the relevant log entries:

01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': Traceback (most recent call last):
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': File "/opt/splunk/bin/runScript.py", line 72, in <module>
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': os.chdir(scriptDir)
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': FileNotFoundError: [Errno 2] No such file or directory: ''

This error occurs because the scriptDir variable is empty or invalid, so the os.chdir(scriptDir) call attempts to change to a directory that does not exist. Could you assist in identifying why the scriptDir value might be undefined or improperly set in this context?
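For what it's worth, that exact error can be reproduced outside Splunk: os.chdir raises FileNotFoundError with errno 2 when handed an empty string, which is consistent with scriptDir being empty. A minimal Python sketch (not Splunk's actual runScript.py, just an illustration):

```python
import os

def chdir_error(path):
    """Return the exception raised by os.chdir(path), or None on success."""
    try:
        os.chdir(path)
        return None
    except OSError as exc:
        return exc

# os.chdir("") fails the same way the traceback shows:
# FileNotFoundError: [Errno 2] No such file or directory: ''
err = chdir_error("")
print(type(err).__name__, err.errno)  # FileNotFoundError 2
```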
@Cccvvveee0235  Kindly try logging in using a different browser and check if it works.
I am referencing the following example to create a custom command: https://github.com/splunk/splunk-app-examples/tree/master/custom_search_commands/python/reportingsearchcommands_app I downloaded the app and ran it. When I use makeresults to generate 200,000 rows and run the command, only 1 result comes out. However, if I put the same content in an index or a lookup and run it, I get 7-10 results. The desired result is 1, but multiple results come out. Is there a way to make it return only one?
I see you want to determine the full paths of the values in the input list. You have a second requirement that the input be a JSON array, ["Tag3", "Tag4"], and a third that the code needs to run in 8.0, which precludes the JSON functions introduced in 8.1. Note that each path{} array has multiple values. Without the help of JSON functions, you need to handle that first. The most common way to do this is with mvexpand. (The input array also needs this.)

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| spath ``` data emulation above ```
| eval Tags = "[\"Tag3\", \"Tag4\"]"
| foreach *Tags{} [mvexpand <<FIELD>>]
| spath input=Tags
| mvexpand {}
| foreach *Tags{} [eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower('{}'), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)

If your dataset is large, note that mvexpand has some limitations.
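Outside SPL, the matching logic itself (collect every value under any *Tags array that case-insensitively equals one of the input tags) is easy to sanity-check. A small Python sketch using the sample JSON from the emulation above (the function name is mine):

```python
import json

doc = json.loads("""{ "Info": { "Apps": {
  "ReportingServices": { "ReportTags": ["Tag1"], "UserTags": ["Tag2", "Tag3"] },
  "MessageQueue":      { "ReportTags": ["Tag1", "Tag4"], "UserTags": ["Tag3", "Tag4", "Tag5"] },
  "Frontend":          { "ClientTags": ["Tag12", "Tag47"] } } } }""")

def matching_tags(node, wanted):
    """Walk the JSON tree; collect values from any *Tags list that
    case-insensitively match one of the wanted tags."""
    wanted_lc = {w.lower() for w in wanted}
    found = set()
    if isinstance(node, dict):
        for key, val in node.items():
            if key.endswith("Tags") and isinstance(val, list):
                found |= {v for v in val if v.lower() in wanted_lc}
            else:
                found |= matching_tags(val, wanted)
    return found

print(sorted(matching_tags(doc, ["Tag3", "Tag4"])))  # ['Tag3', 'Tag4']
```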
Hello! I am getting this error when I am trying to authenticate to Splunk Enterprise. Could someone help me with this error? Below putting screenshot.  
@rohithvr19  It looks like there is some error in the endpoint.  Can you please check logs in "splunk/var/log/splunk/python.log"?  Sharing my sample code.  KV
I would like to understand whether the following scenario is possible:
1. Security detection queries/analytics relying on sysmon logs are onboarded and enabled.
2. When the logs of a certain endpoint match the security analytic, an alert is created and sent to a case management system for an analyst to investigate.
3. At this point, the analyst is not able to view the sysmon logs of that particular endpoint. He will need to manually trigger the sysmon logs to be indexed from the case management platform; only then will he be able to search the sysmon logs in Splunk for the past X number of days.
4. However, the analyst will not be able to search the sysmon logs of other, unrelated endpoints.

In summary, is there a way to deploy the security detection analytics to monitor and detect across all endpoints, while only allowing the security analyst to search the sysmon logs of the endpoint that triggered the alert, based on an ad hoc request via the case management system?
Try something like this:

| eval Tag = split("Tag3,Tag4",",")
| mvexpand Tag
| spath
| foreach *Tags{} [| eval tags=if(mvfind(lower('<<FIELD>>'), "^".lower(Tag)."$") >= 0, mvappend(tags, "<<FIELD>>"), tags)]
| stats values(tags)

Note that mvfind uses regex, so you may get some odd results if your tags contain special characters.
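To see why special characters matter: the pattern is interpreted as a regex, so a tag containing "." will match more than intended unless it is escaped. Python's re module shows the same pitfall and the fix (an illustration, not SPL):

```python
import re

tags = ["Tag.3", "TagX3"]

# Unescaped: "." matches any character, so the pattern for "Tag.3"
# wrongly matches "TagX3" as well
loose = [t for t in tags if re.fullmatch("tag.3", t.lower())]

# Escaped: the literal dot only matches itself
strict = [t for t in tags if re.fullmatch(re.escape("tag.3"), t.lower())]

print(loose)   # ['Tag.3', 'TagX3']
print(strict)  # ['Tag.3']
```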
...I think the original poster is asking about getting Power BI activity logs into Splunk, not about letting Power BI interact with Splunk via an ODBC connector. I need to ingest Power BI activity logs from Power BI to Splunk. Does anyone have any experience with that?
What's the difference between "Splunk VMware OVA for ITSI" and "Splunk OVA for VMware"?   The Splunk OVA for VMware appears to be more recent. Do they serve the same function? Can the "Splunk OVA for VMware" be used with ITSI? 
Hi, the cluster master is also our license manager. And by replacing a CM in place, do you mean keeping the IPs and DNS of the CM? Copying from /var/run is listed in https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Handlemanagernodefailure
I'm running Splunk on Windows and don't have the tcpdump command.
Is there a command or app that will decode base64 and detect the correct charset for the output? Currently, I'm unable to decode to UTF-16LE; Splunk wants to decode as UTF-8. In my current role, I cannot edit any .conf files; those are administered by a server team. If there is an app, I can request that it be installed; otherwise I'm working solely in SPL.
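I'm not aware of a built-in SPL command that auto-detects the charset, but the detection logic itself is small. A hedged Python sketch of one approach (a BOM check plus a NUL-byte heuristic; the function name is mine, and a real app would need more robust detection):

```python
import base64

def decode_b64_guess_charset(s):
    """Decode base64, then guess UTF-16 vs UTF-8 from a BOM or NUL pattern."""
    raw = base64.b64decode(s)
    if raw.startswith(b"\xff\xfe"):          # UTF-16LE byte-order mark
        return raw[2:].decode("utf-16-le"), "utf-16-le"
    if raw.startswith(b"\xfe\xff"):          # UTF-16BE byte-order mark
        return raw[2:].decode("utf-16-be"), "utf-16-be"
    # Heuristic: ASCII-ish UTF-16LE text has NUL in most odd byte positions
    if raw and raw[1::2].count(0) > len(raw) // 4:
        return raw.decode("utf-16-le"), "utf-16-le"
    return raw.decode("utf-8"), "utf-8"

encoded = base64.b64encode("whoami /all".encode("utf-16-le")).decode("ascii")
print(decode_b64_guess_charset(encoded))  # ('whoami /all', 'utf-16-le')
```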
So if it's forwarding, there should be a recent splunkd.log?
Hi all, my issue is that I have Logstash data coming into Splunk. The sourcetype is HTTP Events and the logs arrive in JSON format. I need to know how I can use this data to find something meaningful. Also, with Windows forwarders we get event codes, so I can block unwanted event codes that give repeated information; what can I do if I want to do something similar with the Logstash data? How do I extract information we can use in Splunk?
Does anyone know how to do this on Splunk v8.0.5?
I have an existing search head that is peered to 2 cluster managers. This SH has the ES app on it. I am looking to add additional data from remote indexers. Do I just need to add the remote cluster manager as a peer to my existing SH so that I can access the data in ES?
I know this was a while ago now, but it may be helpful to others: try using the "hidden" dimension `_timeseries`. This is a JSON string that is an amalgamation of all of the dimensions for each datapoint. Take care: the results may be (very) high arity, and splunkd doesn't (yet?) have very strong protections for itself (in terms of RAM used while searching) when using this code path, so it is (IMHO) easy to crush your indexer tier's memory and cause lots of thrashing.
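To illustrate what "high arity" means here, a hypothetical sketch (the dimension names and JSON shape are invented for the example; check your own data for the actual `_timeseries` format):

```python
import json

# Hypothetical _timeseries values, one JSON string per datapoint
datapoints = [
    '{"host": "web-01", "region": "us-east-1"}',
    '{"host": "web-02", "region": "us-east-1"}',
    '{"host": "web-01", "region": "us-east-1"}',
]

# Arity = number of distinct dimension combinations; a high count is
# what can exhaust indexer memory on this code path
combos = {tuple(sorted(json.loads(d).items())) for d in datapoints}
print(len(combos))  # 2
```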