All Posts

I have been using the Splunk API from within a Python script to retrieve information about saved searches, using a call to the endpoint:

hxxps://<splunk_server>/-/-/saved/searches/<name_of_saved_search>?output_mode=json

The <name_of_saved_search> has been URL-encoded to deal with some punctuation (including '/'), using the Python function:

name_of_saved_search = urllib.parse.quote(search_name, safe='')

This has worked so far, but recently I encountered an issue when the name of the saved search contains square brackets (e.g. "[123] My Search"). Even after URL encoding, Splunk's API does not accept the call at the endpoint:

hxxps://<splunk_server>/-/-/saved/searches/%5B123%5D%20My%20Search?output_mode=json

and returns a response with HTTP status code 404 (Not Found). I am not sure what else I should be doing to handle the square brackets in the name of the saved search to make the API call work.
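For reference, here is a minimal sketch of such a call with the name fully percent-encoded. The host, port, credentials and the servicesNS/-/- path are illustrative assumptions rather than details from the post, and the requests library is just one possible HTTP client:

import urllib.parse
import requests  # assumed available; any HTTP client works

splunk_server = "splunk_server.example.com"  # placeholder host
search_name = "[123] My Search"

# safe='' percent-encodes everything, including '/', '[' and ']'
encoded_name = urllib.parse.quote(search_name, safe="")

url = ("https://{0}:8089/servicesNS/-/-/saved/searches/{1}?output_mode=json"
       .format(splunk_server, encoded_name))

# verify=False only for a lab instance with a self-signed certificate
response = requests.get(url, auth=("admin", "changeme"), verify=False)
print(response.status_code)
print(response.text)

Comparing the request sent by the script with the same URL issued from curl can help isolate whether the client decodes the %5B/%5D sequences before sending, or whether the 404 has another cause such as the saved search living in a different user/app namespace.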
At the moment the servers are monitored in Splunk, but only WinEventLog:Security logs are being piped in. I want to increase the monitoring capability to include Sysmon and PowerShell logging, but I do not want Sysmon logs from ALL servers to be indexed and searchable unless a security event warrants a particular server having its Sysmon indexed. i.e.:

1. All servers have Sysmon enabled.
2. Splunk's security analytics and detection queries run in the background to monitor the Sysmon data; if there are any hits, an alert is created in Splunk and the alert log is indexed.
3. The alert is sent to a case management system.
4. At the request of the security analyst, he can view the Sysmon of that particular server, and that server's Sysmon will be indexed in Splunk for the past 5 days.
5. The analyst will not be able to view Sysmon data in Splunk for the rest of the servers that are not indexed, as it is unrelated to the security event. He can only index the Sysmon of a particular server if he triggers that action from the case management system.
Hi @desmando,
good for you, see you next time!
Ciao and happy splunking.
Giuseppe
P.S.: Karma Points are appreciated by all the contributors.
Not exactly. If splunkd is running, it generates events into splunkd.log, but that is not a 100% indicator that it is also forwarding. However, you can look for the events in that file which tell you this; they look like "connected/forwarding server 1.2.3.4:9997" or something similar (I cannot check the exact lines right now). Is it possible for you to check that information from the server side instead? Just search for it in the _internal logs, or even from the MC.
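As a rough illustration of that server-side check (the connection details, the forwarder hostname and the use of splunklib are assumptions, not details from the thread), one option is to confirm from a script that the forwarder's own internal events are arriving at the indexers:

# Rough sketch: confirm a forwarder's _internal events reach the indexers.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="search-head.example.com",  # placeholder
    port=8089,
    username="admin",
    password="changeme",
)

# If the UF's own splunkd logs show up in _internal, forwarding is working.
query = 'search index=_internal host="my-forwarder" earliest=-15m | head 5'
reader = results.JSONResultsReader(
    service.jobs.oneshot(query, output_mode="json")
)
for item in reader:
    if isinstance(item, dict):
        print(item.get("_raw", ""))

JSONResultsReader assumes a reasonably recent splunklib; the same search can of course be run directly in Search or the Monitoring Console.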
I encountered an issue while trying to integrate a Python script into my Splunk dashboard to export Zabbix logs to a Splunk index. When I click the button on the dashboard, the following error is logged in splunkd.log:

01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': Traceback (most recent call last):
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': File "/opt/splunk/bin/runScript.py", line 72, in <module>
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': os.chdir(scriptDir)
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': FileNotFoundError: [Errno 2] No such file or directory: ''

Setup:
- Python script: /opt/splunk/etc/apps/search/bin/zabbix_handler.py (exports Zabbix logs to a Splunk index using the HEC endpoint).
- JavaScript code: /opt/splunk/etc/apps/search/appserver/static/ (adds a button to the dashboard, which triggers the Python script).

Observed behavior: when the button is clicked, the error indicates that the scriptDir variable in runScript.py is empty, leading to the failure of the os.chdir(scriptDir) call.

Questions:
1. Why might scriptDir be empty when runScript.py is executed?
2. Is there a specific configuration required in the Splunk dashboard or app structure to ensure the ScriptPath is correctly passed to the ScriptRunner?
3. How can I debug or fix this issue to ensure the Python script is executed properly?

Any help or guidance would be greatly appreciated. Thank you!
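While the root cause of the empty scriptDir lies in how runScript.py is invoked by the ScriptRunner, one defensive pattern worth considering in the handler script itself is to resolve all paths from the script's own location instead of relying on the caller's working directory. This is only a sketch; the config file name is hypothetical:

# Defensive-path sketch for a handler like zabbix_handler.py:
# never depend on the process working directory.
import os
import sys

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

def main():
    # Change into the script's own directory explicitly, so an empty or
    # unexpected working directory cannot break relative path lookups.
    os.chdir(SCRIPT_DIR)
    config_path = os.path.join(SCRIPT_DIR, "zabbix_handler.conf")  # hypothetical
    print("running from %s" % SCRIPT_DIR, file=sys.stderr)
    # ... build and send the HEC payload from here ...

if __name__ == "__main__":
    main()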
It seems that I understood your question a little differently. So you have a separate system (not Splunk) which is currently monitoring those source systems, and when it finds something it creates an alert. Only after that is log collection started from the source system? How can you be sure that you will get all the old logs needed for the analysis if you only start collection after the event has happened? Some related activity could have happened long before the event which created the alert!
There are a couple of apps which can manage e.g. base64 encoding. Here is one which I have used: https://splunkbase.splunk.com/app/5565
When you have issues with Windows character sets, you must add the character set information to the UF's props.conf. There are some examples in the community, and this is also described in the docs.
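For example, a minimal props.conf stanza might look like the following; the sourcetype name and code page are assumptions, so pick the charset that actually matches the source system:

[my:windows:sourcetype]
CHARSET = CP1252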
The reason for the above is that I have hundreds of servers, and if every server has its Sysmon log indexed it will take up a lot of bandwidth, storage space and cost. Hence I am looking for a possible solution where Splunk security detection analytics can run across all servers and trigger an alert for any positive hits, and only at the request of the security analyst will the Sysmon of a particular endpoint of interest be indexed, for the past 5 days for example.
Theoretically yes; practically it's very, very hard to do, and it needs a lot of writing of dashboards, reports, etc. I really don't suggest it. What is the issue you are trying to solve with this approach?
You should try the following:
1. Reload the page, then try again.
2. Clear the browser cache and try again.
3. Try the browser's private mode.
4. Try another browser.
The var/run directory contains some status information. Everything works more smoothly with it, but it's not a catastrophe without it.
@MikeMakai I think you can use WinDump/Wireshark. You can ask your network team for help.
https://wiki.wireshark.org/WinDump
Hi, here is how I did it; I actually migrated a whole distributed multisite environment from one service provider to another:
https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538069/highlight/true#M4823
r. Ismo
@Cccvvveee0235 Please check these:
https://community.splunk.com/t5/Security/quot-Server-Error-quot-for-a-fresh-Splunk-install/m-p/447283
https://community.splunk.com/t5/Splunk-Search/Why-are-we-seeing-a-quot-Server-Error-quot-message-after-each/m-p/131524
Hi @kamlesh_vaghela
I did not observe any errors in the python.log file, but I noticed errors in the splunkd.log file. Here is the relevant log entry:

01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': Traceback (most recent call last):
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': File "/opt/splunk/bin/runScript.py", line 72, in <module>
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': os.chdir(scriptDir)
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': FileNotFoundError: [Errno 2] No such file or directory: ''

This error occurs because the scriptDir variable is empty or invalid, which leads to the os.chdir(scriptDir) call attempting to change to a directory that does not exist. Could you assist in identifying why the scriptDir value might be undefined or improperly set in this context?
@Cccvvveee0235  Kindly try logging in using a different browser and check if it works.
I am referencing the following to create a custom command:
https://github.com/splunk/splunk-app-examples/tree/master/custom_search_commands/python/reportingsearchcommands_app
I downloaded the app and ran it. With makeresults, even if I generate 200,000 rows, only 1 result comes out. However, if I put the content into an index or a lookup and run it, the number of results is 7 to 10 or so. The desired result is 1, but multiple results come out. Is it not possible to make it so that only one row is shown?
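If the intent is a single aggregated row regardless of how the input arrives, one pattern is to do only partial work in map() and collapse everything in reduce(), since input read from an index or a lookup can reach the command in several chunks. The sketch below assumes the app bundles splunklib as in the example repo; the class, file and field names are illustrative:

#!/usr/bin/env python
# Sketch of a reporting command that always emits exactly one row.
import sys
from splunklib.searchcommands import dispatch, ReportingCommand, Configuration

@Configuration()
class SingleRowReportCommand(ReportingCommand):

    @Configuration()
    def map(self, records):
        # The map phase may run once per chunk of input events,
        # so emit one partial count per chunk.
        count = 0
        for _ in records:
            count += 1
        yield {"partial_count": count}

    def reduce(self, records):
        # The reduce phase sees all partial results and must do the final
        # aggregation; otherwise one row per chunk leaks through.
        total = 0
        for record in records:
            total += int(record.get("partial_count", 0))
        yield {"total_count": total}

dispatch(SingleRowReportCommand, sys.argv, sys.stdin, sys.stdout, __name__)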
I see you want to determine the full paths of the values in the input list. You have a second requirement that the input be a JSON array, ["Tag3", "Tag4"], and a third that the code needs to run in 8.0, which precludes the JSON functions introduced in 8.1. Note that each of the *Tags{} arrays has multiple values. Without the help of JSON functions, you need to handle that first. The most common way to do this is with mvexpand. (The input array also needs this.)

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| spath
``` data emulation above ```
| eval Tags = "[\"Tag3\", \"Tag4\"]"
| foreach *Tags{} [mvexpand <<FIELD>>]
| spath input=Tags
| mvexpand {}
| foreach *Tags{} [eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower('{}'), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)

If your dataset is large, note that mvexpand has some limitations.
Hello! I am getting this error when I try to authenticate to Splunk Enterprise. Could someone help me with it? Screenshot attached below.
@rohithvr19 It looks like there is some error in the endpoint. Can you please check the logs in "splunk/var/log/splunk/python.log"? Sharing my sample code.
KV