All Topics


Hi Splunkers, I am trying to configure REST API monitoring via the Splunk Add-on Builder, but when I test the input during configuration I receive an SSL error.
Splunk Add-on Builder version: 4.3.0
Splunk Enterprise version: 9.1.1
What can be done to mitigate this SSL error? Awaiting quick help and response. Pasting the error herewith:

2024-09-16 15:28:49,569 - test_rest_api - [ERROR] - [test] HTTPError reason=HTTP Error HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)'))) when sending request to url=https://endpoints.office.com/version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 method=GET
Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 722, in urlopen
    chunked=chunked,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 1060, in _validate_conn
    conn.connect()
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connection.py", line 429, in connect
    tls_in_tls=tls_in_tls,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
    session=session
  File "/splunk/lib/python3.7/ssl.py", line 878, in _create
    self.do_handshake()
  File "/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/adapters.py", line 497, in send
    chunked=chunked,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/connectionpool.py", line 802, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/urllib3/util/retry.py", line 594, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 230, in _retry_send_request_if_needed
    uri=uri, body=body, method=method, headers=headers
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 219, in _send_internal
    verify=self.requests_verify,
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/requests/adapters.py", line 517, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/engine.py", line 308, in _send_request
    response = self._client.send(request)
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 296, in send
    url, request.method, request.headers, request.body
  File "/splunk/etc/apps/TA-o365_rest_api/bin/ta_o365_rest_api/aob_py3/cloudconnectlib/core/http.py", line 243, in _retry_send_request_if_needed
    raise HTTPError(f"HTTP Error {err}") from err
cloudconnectlib.core.exceptions.HTTPError: HTTP Error HTTPSConnectionPool(host='endpoints.office.com', port=443): Max retries exceeded with url: /version?clientrequestid=b10c5ed1-bad1-445f-b386-b919946339a7 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

2024-09-16 15:28:49,570 - test_rest_api - [INFO] - [test] This job need to be terminated.
2024-09-16 15:28:49,570 - test_rest_api - [INFO] - [test] Job processing finished
2024-09-16 15:28:49,571 - test_rest_api - [INFO] - [test] 1 job(s) process finished
2024-09-16 15:28:49,571 - test_rest_api - [INFO] - [test] Engine executing finished
I'm upgrading Splunk Enterprise to 9.3 using the RPM file, but when I run rpm -U splunk-9.3.0-51ccf43db5bd.x86_64.rpm it installs all the folders but removes the bin directory, so I can't then start Splunk. I've searched through the communities, and a few people seem to have hit this issue on Windows, but not on Linux. How can I get around this issue? Thanks, Dabbsy
Hi, I have an app with a set of icons that work fine in light mode, but if I switch to dark mode they become invisible. If I add the lighter icons for dark mode, then the icons become invisible in light mode. Is there a way to have both sets of icons and have them switch based on the active mode?
Good day, I'm trying to set up a heavy forwarder (HF) to forward to an additional syslog target that expects RFC 5424 (Grafana Alloy). So far the HF reaches the syslog target, but the target then complains about a missing priority, and I'm not sure whether this is an RFC 5424 vs. RFC 3164 issue. I've tried the following outputs.conf options:

[syslog:my_syslog_group]
disabled = false
server = grafana-alloy.svc.cluster.local:51898
type = tcp
# other tested variants
priority = <NO_PRI>
priority = <34>
# tested with or without timestampformat
timestampformat = %b %e %H:%M:%S

How can I make sure that the HF syslog forwarding uses the RFC 5424 format?
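A minimal outputs.conf sketch under those constraints (the stanza name and server come from the post; the <134> priority value and the ISO-style timestamp format are assumptions). A stanza should carry only one priority line, with the syslog PRI integer inside angle brackets; also note that Splunk's built-in syslog output is oriented toward an RFC 3164-style header, so if Grafana Alloy strictly requires full RFC 5424 framing (version digit and structured data), relaying through rsyslog or syslog-ng that rewrites messages into 5424 may still be the more reliable path.

[syslog:my_syslog_group]
disabled = false
server = grafana-alloy.svc.cluster.local:51898
type = tcp
# exactly one priority setting; the value is the syslog PRI integer in angle brackets
priority = <134>
# an ISO 8601-style timestamp is closer to what RFC 5424 receivers expect than %b %e %H:%M:%S
timestampformat = %Y-%m-%dT%H:%M:%S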
I have the Slack integration hooked up to Splunk On-Call. I would like to trigger a Splunk On-Call alert when the Slack usergroup is used. How do I go about setting that up, if anyone knows? Thank you in advance.
Hi all, We are getting the error message below from the ITSI rules_engine: ErrorMessage="One or more fields are missing to create episode state." This is stopping episode creation for some of the events. However, when we check the search results, there are no null or empty values for the respective fields. Please help us fix this as soon as possible, ideally with detailed steps. Thanks in advance to all.
Hi Splunk, I created a dashboard with various panels. Some of the panels are tables with drilldown searches that let you click on a value and open a new tab with the clicked value ($row.user.value$) inserted into the new search. However, for some reason the drilldown on one panel opens the search without populating the variable $row.user.value$. All the other panels' drilldown searches work. Source code of the panel:

{
  "type": "splunk.table",
  "options": {
    "count": 100,
    "dataOverlayMode": "none",
    "drilldown": "none",
    "showRowNumbers": false,
    "showInternalFields": false
  },
  "dataSources": {
    "primary": "ds_aaaa"
  },
  "title": "Panel One (Last 30 Days)",
  "eventHandlers": [
    {
      "type": "drilldown.linkToSearch",
      "options": {
        "query": "index=\"winlog\" EventCode=4625 user=$row.user.value$",
        "earliest": "auto",
        "latest": "auto",
        "type": "custom",
        "newTab": true
      }
    }
  ],
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

The SPL after clicking on the table value:

index="winlog" EventCode=4625 user=$row.user.value$

Why does $row.user.value$ not populate?
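One thing worth checking, offered as an assumption since the search behind ds_aaaa is not shown: in Dashboard Studio, $row.user.value$ only resolves when the table's result set contains a column literally named user. A minimal sketch of a data-source search that would satisfy the token (the index and aggregation are placeholders):

index="winlog" EventCode=4625
| stats count AS failures BY user

If the working panels' searches end in something like ... by user while this one renames or drops that field, that difference alone would leave the token empty.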
I am trying to install the Splunk App for SOAR and the Splunk App for SOAR Export, but I am facing the issue shown below. I am using the soar_local_admin user account and have added this user to the phantom role as well, but I still get the same result. Any suggestions would be highly appreciated.
As a Splunk newcomer, I need guidance on sending logs to a Disaster Recovery (DR) environment. I have one Heavy Forwarder (HF) and one Deployment Server (DS) on-premises. What steps should I take with my HF and DS to ensure smooth log ingestion into the DR Splunk Cloud instance? I have considered replicating the VMs (HF and DS) as a possible solution, but I am still not sure about the best approach. Please advise on the following:
- Are there any specific licensing requirements or restrictions for replicating Splunk instances?
- What are the potential performance implications of replicating a Splunk VM, especially considering the data volume and real-time or near-real-time requirements?
- Are there any recommended best practices or configurations for replicating HF and DS VMs to a DR environment?
Thanks for your help.
Hi, I'm working with .NET and using the 'services/search/jobs/' API. After successfully connecting through the 'services/auth/login' API, I receive a SessionKey, which I add to the headers of subsequent requests as follows:

oRequest.Headers.Authorization = new AuthenticationHeaderValue("Splunk", connectionInfo.AccessToken);

When I receive a 401 error code after calling 'services/search/jobs/', I attempt to reconnect by calling 'services/auth/login' up to three times to retrieve a new session key and update the header accordingly. Despite this, the session key sometimes remains unchanged (is this expected behavior?), and regardless of whether the token changes or not, I continue to receive the 401 Unauthorized error:

Response:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN">call not properly authenticated</msg>
  </messages>
</response>

Error from System.Net.Http: System.Net.Http.HttpRequestException: Response status code does not indicate success: 401 (Unauthorized).

The URL I'm using starts with https and the port is 8089. Can you assist with this issue?
Dear all, We have a Splunk index whose data follows the pattern below, and this pattern was recently changed:

{"Feild1":"DATA1","Feild2":"DATA2","Feild3":"DATA3","Feild4":"DATA4"}

We have several dashboards built on the previous data pattern, which looked like this:

DATA1,DATA2,DATA3,DATA4

We are looking for a way to filter out or suppress the {"Feild1": ..., "Feild2": ...} wrapper using Splunk queries and feed the output to the dashboards. Kindly suggest how this can be done. Thanks
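A minimal SPL sketch of one way to bridge the two formats, assuming the JSON fields are extracted at search time (the index name is a placeholder and the field names are taken from the sample, including their spelling): pull the values out of the JSON and rebuild the old comma-separated string for the existing dashboard panels.

index=my_index
| spath
| eval legacy_line=Feild1.",".Feild2.",".Feild3.",".Feild4
| table _time legacy_line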
Hi, I have a saved search which is cron scheduled, but it is not showing on the saved searches panel (Settings -> Searches, reports, and alerts). What could be the reason?
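A quick check, sketched with a placeholder search name: the search may exist but be owned by another user or scoped to another app, in which case the Searches, reports, and alerts page filters it out depending on the app and owner selectors. Listing saved searches over REST shows where it actually lives.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="my_scheduled_search"
| table title eai:acl.app eai:acl.owner eai:acl.sharing is_scheduled cron_schedule disabled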
Yes, I'm new to this Splunk stuff and I'm trying to learn. I have a red health status, I'm not sure how to fix it, and I don't know if it's causing me other issues. Here is the list of items that are marked in red:

splunkd
Data Forwarding
Splunk-2-Splunk Forwarding
TCPOutAutoLB-0
File Monitor Input
Ingestion Latency
Real-time Reader-0

Also, I'm currently taking some online courses from Udemy. Would anyone recommend anything else, or somewhere else to learn? I'm only asking because these courses are out of date.
Hello, When I write data to a summary index, the timestamp (_time) always follows the earliest time of the search. For example, if my daily scheduled search runs at 1 a.m. today, 9/15/2024, and writes the last 24 hours of data to a summary index, the timestamp (_time) will be 9/14/2024. When I then search the summary index over the last 24 hours, the result is empty because it's always 24 hours behind, so I have to extend the search window to the last 2 days to see the data. Is it best practice to keep the timestamp as the earliest time, or do you modify the timestamp to the search time? In my example, if I modified the timestamp to the search time, it would be 9/15/2024 1 a.m. Please suggest. Thank you so much for your help.
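A common compromise, sketched below with placeholder index names and span: instead of one timestamp per run (earliest time or search time), bin _time inside the summarized window so every summary row keeps the time of the data it describes. A "last 24 hours" search of the summary index then lines up with the underlying events.

index=web earliest=-24h@h latest=@h
| bin _time span=1h
| stats count by _time, host
| collect index=summary_web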
For example, I have these fields and values. stats count by username gives me:

username
root | 102
admin | 71
yara | 34

and the same for src:

src
168.172.1.1 | 132
10.10.0.1 | 60
168.0.8.1 | 12

I want to see this in one table, but I want it to check all fields, like dst, port, mail... it could be anything in the event. The goal is to get, for each field in the events, the top value, i.e. the value that is repeated most often.
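One way to get this in a single table, sketched with a placeholder base search: fieldsummary with maxvals=1 emits one row per field found in the events, and its values column carries the single most frequent value (with its count) for that field.

index=main
| fieldsummary maxvals=1
| table field count distinct_count values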
I am facing an issue: when I try to create an Automation User, the option is not available. I need to create a server, but for that the authorization configuration is required. To reach this authorization configuration, we need the options that were available in previous versions of Splunk; however, I am only getting the options shown below. Please suggest what I am doing wrong. Best regards
Sample logs:

Mon Sep 04 13:23:40 2024 -- (eMonitoringLATN_install.sh) Your current directory is [/d/onedemand/etc]
Mon Sep 05 12:21:30 2024 -- (eMonitoringLATN_install.sh) Final Destination reached logs.
Mon Sep 06 12:21:30 2024 -- (eMonitoringLATN_install.sh) logs ingestion started.

We tried the props below for the sample logs above, but line breaking is not happening correctly. Please also let us know how to specify the time format.

SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\w+\s\w+\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4}
TIME_PREFIX=^
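A props.conf sketch for this format (the sourcetype name is a placeholder, and the lookahead form of the breaker is just one common way to anchor events on the leading timestamp). The TIME_FORMAT line answers the timestamp question: "Mon Sep 04 13:23:40 2024" corresponds to %a %b %d %H:%M:%S %Y.

[my_install_logs]
SHOULD_LINEMERGE = false
# break before every line that opens with a "Mon Sep 04 13:23:40 2024"-style timestamp
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4})
# the timestamp sits at the very start of each event
TIME_PREFIX = ^
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 25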
Hi guys, I wrote the correlation search query and added the adaptive response actions (notable, risk analysis, and send to SOAR), but when the event goes to SOAR there is no event_id in the artifacts.

SPL query:

[ | tstats summariesonly=true values("Authentication.user") as "user", values("Authentication.dest") as "dest", count("Authentication.dest") as "dest_count", count("Authentication.user") as "user_count", count from datamodel="Authentication"."Authentication" where nodename="Authentication.Failed_Authentication" by "Authentication.src", "Authentication.app"
| rename "Authentication.src" as "src", "Authentication.app" as "app"
| where 'count'>5 ]
Hi guys. I'd like to know if there is a way to schedule an input script to run multiple times even if the previous run has not yet finished with an exit code. Let me explain: I have some scripts which need to wait some time before exiting with an output and/or an exit code. At the same time, I need to rerun the same script even if the previous run is still running in the background. Splunk can't do this, since it monitors the launched script and waits for an exit code before running the new one. Example:

[script://./bin/my_script.sh]
index = blablabla
sourcetype = blablabla
source = blablabla
interval = 60
...

Let's say "my_script.sh" contains simply (it's only an example):

#!/bin/bash
date
sleep 90

Now, with all the methods I have tried, including running it with

[script://./bin/my_script.sh &]

or with a launcher.sh which detaches a child process with "bash -c 'my_script.sh &' &" or "exec my_script.sh &", the "sleep 90" prevents splunkd from rerunning the script every 60 seconds, since it needs 90 s for the previous script's sleep to finish. So in my indexed data I only get data every 2 minutes, because of the 90 s sleep:

10:00 splunkd launches "my_script.sh" and waits for its exit code to index data
10:01 splunkd tries to launch a new "my_script.sh", but stops because of the previous "sleep 90"
10:02 splunkd indexes the previous 10:00 data and reschedules a new "my_script.sh"
10:03 as 10:01
10:04 as 10:02
... and so on...

Is there a way to force a re-run, even if a previous script PID is still running, and get data like this?

10:00 output from "my_script.sh"
10:01 output from "my_script.sh"
10:02 output from "my_script.sh"
10:03 output from "my_script.sh"
...

Thanks.
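One workaround, offered as a sketch rather than the only option: take the long-running work out of the scripted input entirely. In practice splunkd keeps a scripted-input run open until the script's stdout closes, which is why a backgrounded child that inherits stdout still blocks the next run. If the script instead appends its output to a log file (launched from cron every minute, or from a tiny wrapper that redirects the child's stdout and stderr to that file and exits immediately), a plain monitor input picks up every run regardless of how long the previous one takes. The path, index, and sourcetype below are placeholders.

[monitor:///opt/myscripts/output/my_script.out]
index = blablabla
sourcetype = blablabla
disabled = false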