All Posts

@phanTom  We ended up using a custom function with phantom.merge() as this fit our needs and was very simple (a few lines of code). We also found out how to use phantom.collect() to read everything back in, after some trial and error. Thank you for pointing out the add-on app.
Hello, we've encountered a problem with the TA-crowdstrike-falcon-event-streams TA, which was functional in the past.

Splunk Enterprise on-prem: VERSION=9.1.2 BUILD=b6b9c8185839 PRODUCT=splunk PLATFORM=Linux-x86_64

When opening the UI to configure the CrowdStrike auth, we get error 500. The same happens for the other views. I've tried reinstalling the TA, but it didn't change anything. Splunkd logs the following:

01-26-2024 16:13:29.817 +0100 ERROR AdminManagerExternal [3102377 TcpChannelThread] - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 706, in urlopen\n chunked=chunked,\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 382, in _make_request\n self._validate_conn(conn)\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn\n conn.connect()\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connection.py", line 421, in connect\n tls_in_tls=tls_in_tls,\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 453, in ssl_wrap_socket\n ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 495, in _ssl_wrap_socket_impl\n return ssl_context.wrap_socket(sock)\n File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket\n session=session\n File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create\n self.do_handshake()\n File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File 
"/opt/splunk/lib/python3.7/site-packages/requests/adapters.py", line 449, in send\n timeout=timeout\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 756, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/retry.py", line 574, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/TA-crowdstrike-falcon-event-streams/configs/conf-ta_crowdstrike_falcon_event_streams_settings/_reload (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunktaucclib/rest_handler/handler.py", line 124, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunktaucclib/rest_handler/handler.py", line 162, in get\n self.reload()\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunktaucclib/rest_handler/handler.py", line 259, in reload\n action="_reload",\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 320, in wrapper\n return request_fun(self, *args, **kwargs)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 79, in new_f\n val = f(*args, **kwargs)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 727, in get\n response = self.http.get(path, all_headers, **query)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 1254, in get\n return self.request(url, { 'method': "GET", 'headers': headers 
})\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 1316, in request\n response = self.handler(url, message, **kwargs)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/solnlib/splunk_rest_client.py", line 147, in request\n **kwargs,\n File "/opt/splunk/lib/python3.7/site-packages/requests/api.py", line 61, in request\n return session.request(method=method, url=url, **kwargs)\n File "/opt/splunk/lib/python3.7/site-packages/requests/sessions.py", line 542, in request\n resp = self.send(prep, **send_kwargs)\n File "/opt/splunk/lib/python3.7/site-packages/requests/sessions.py", line 655, in send\n r = adapter.send(request, **kwargs)\n File "/opt/splunk/lib/python3.7/site-packages/requests/adapters.py", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/TA-crowdstrike-falcon-event-streams/configs/conf-ta_crowdstrike_falcon_event_streams_settings/_reload (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))\n". See splunkd.log/python.log for more details.

inputs.conf

[splunktcp-ssl:8089]
disabled = 0
requireClientCert = false
sslVersions = *
[...]
[SSL]
serverCert = <path>
requireClientCert = true
allowSslRenegotiation = true
sslCommonNameToCheck = <others> 127.0.0.1,SplunkServerDefaultCert

server.conf

[sslConfig]
enableSplunkdSSL = true
sslVersions = tls1.2
serverCert = /opt/splunk/etc/auth/<path>.pem
sslRootCAPath = /opt/splunk/etc/auth/<path>.pem
requireClientCert = true
sslVerifyServerName = true
sslVerifyServerCert = true
sslCommonNameToCheck = <FQDNs>
cliVerifyServerName = false
sslPassword = <pw>

We're looking forward to your help! Thank you!
I have a couple of scheduled reports that I SCP off of our Splunk Enterprise server. Both reports are in /opt/splunk/etc/apps/search/lookups. One of the reports I set up a while ago; its permissions look right and I can SCP it (file1.csv). The new report gives me "permission denied" when I try to copy it (file2.csv).

-rw-r-----. 1 splunk splunk  306519 Jan 26 05:00 file1.csv
-rw-------. 1 splunk splunk 1177070 Jan 26 03:00 file2.csv

Not sure how to get file2.csv group-readable so I can copy it off.
Hello everyone,

I'm currently trying to optimize Splunk disk space and index usage. I read about:
- changing the parameter "Pause indexing if free disk space (in MB) falls below"
- never modifying the indexes.conf parameters
- some other posts from the community

But I'm not quite sure about the solution for my problem: the coldToFrozenDir/Script parameters are empty.

Kind regards,
Tybe
Quoting the docs:

How Splunk software determines time zones
To determine the time zone to assign to a timestamp, Splunk software uses the following logic in order of precedence:
1. Use the time zone specified in raw event data (for example, PST, -0800), if present.
2. Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
3. If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
4. Use the time zone of the host that indexes the event.

From my experience with Windows (I see Windows event format), the most common error is someone forgetting to set the system time zone at install time; as a result, the whole server really is in the wrong time zone and effectively uses the wrong time. Otherwise Windows events are properly ingested and parsed (I assume you have TA_windows on your receiving indexers or HF).
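Given that precedence, an explicit TZ in props.conf (step 2) only takes effect when the raw event carries no zone indicator of its own. A minimal, hypothetical props.conf stanza forcing a zone for one sourcetype (the stanza name and zone are assumptions, not values from this thread):

```
[my_windows_sourcetype]
TZ = US/Eastern
```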
Hello everybody,

I'm asking this question because I'm trying to generate detections for a Google Workspace invader, like this post does for M365: https://www.splunk.com/en_us/blog/security/hunting-m365-invaders-blue-team-s-guide-to-initial-access-vectors.html. But I cannot find Google Workspace login logs in our current ingest. We installed the add-on and the newest apps available on Splunkbase and could not find them. Browsing Splunk Web, we couldn't find searches equivalent to the ones in the linked post.

Has anybody had the same problem? How can I solve it?
Review how the data is ingested.  By default, Splunk Cloud presumes all event times are UTC.  That means all non-UTC timestamps must be identified as such.  The TIME_FORMAT setting in props.conf should include the time zone if the event timestamp does (your sample event does not).  Other events should use the TZ setting in props.conf to specify the time zone. Every sourcetype onboarded should have props.conf settings to avoid having Splunk make incorrect assumptions about the data.
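A hedged props.conf sketch of the two cases described above (the sourcetype names and zone are hypothetical, not from this thread):

```
# Case 1: the event timestamp carries its own offset, so TIME_FORMAT parses it.
[sourcetype_with_offset]
TIME_FORMAT = %Y-%m-%d %H:%M:%S %z

# Case 2: the timestamp has no zone, so TZ tells Splunk how to interpret it.
[sourcetype_without_offset]
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
TZ = Europe/Rome
```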
Hi @ITWhisperer, thanks for your answer. It gives me the root cause, and that is fine; now the question is: how should I fix this?
_indextime is not represented in your screenshot. It looks like your event, which contains the text "01/24/2023 09:42:07 AM" (without any time zone information), is being interpreted as UTC, i.e. GMT+0. This is converted to a UTC epoch time (a number of seconds) and stored in _time. _time is then displayed in the event view in your local time, i.e. GMT+1, so 09:42 UTC is shown as 10:42 local, hence your "hour difference".
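The mechanism can be reproduced in a few lines of Python; the timestamp is the one from the thread, and the fixed +1 offset stands in for the GMT+1 user preference:

```python
from datetime import datetime, timedelta, timezone

# Raw event text with no time zone information.
raw = "01/24/2023 09:42:07 AM"

# With no zone indicator, the timestamp is assumed to be UTC,
# which yields the epoch value stored in _time.
parsed_utc = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p").replace(tzinfo=timezone.utc)

# The event view then renders _time in the user's zone (GMT+1 here),
# shifting the displayed clock time by one hour.
shown = parsed_utc.astimezone(timezone(timedelta(hours=1)))
print(shown.strftime("%m/%d/%Y %I:%M:%S %p"))  # 01/24/2023 10:42:07 AM
```

Both values refer to the same instant (identical epoch seconds); only the rendering differs.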
It looks to me like the dry run completed successfully.  There were no buckets that could not be merged and no failed peers.  Have you tried running with dryrun=false?
Hi Splunkers,

I have a problem with timestamps on our platform. Here are some assumptions and acquired knowledge.

Knowledge:
- _time is the event time (the time present in the event; in other words, the time when the event was generated).
- _indextime is the index time or, if you prefer, the time when the event was indexed.
- Issues with the time zone shown can be related to user settings, which can be changed under username -> Preferences -> Timezone.

Environment: a Splunk Cloud SaaS platform with logs ingested in different ways:
- Forwarder (both UF and HF)
- API
- Syslog
- File monitoring

Issue: if I expand an event and examine the _time field, why are the event time and the time shown different in my case?

Important additional info:
- Our users' time zone settings are all set to GMT+1 (since we are in Italy).
- You see a Windows event as a sample, but the problem is present on all logs: it doesn't matter which log source I consider or how it sends events to Splunk. Every log shows a time difference.
- The difference between _time and the time shown is always 1 hour, for every event from every log source.

I searched here on the community and found other topics about this issue; some of them were very useful for gaining basic knowledge, like Difference Between Event Time and _time. But since we are on cloud (with limited ability to set the files and parameters involved) and the issue affects all events, I'm still stuck on this problem.
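Since the offset is a constant hour everywhere, one way to narrow it down is to render _time and _indextime with explicit offsets side by side. A generic SPL sketch (the index name is a placeholder, not from this thread):

```
index=<your_index>
| eval event_time=strftime(_time, "%F %T %z"), index_time=strftime(_indextime, "%F %T %z")
| table _raw event_time index_time
```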
Just set host and port and let splunklib handle the rest.
Hi Team,

We are trying to onboard AWS CloudWatch metrics and events data to Splunk, and we decided to go with the Splunk Add-on for AWS pull mechanism. I am trying to configure a custom namespace and metrics created in AWS, but I am unable to see the metrics in Splunk. I edited the default AWS namespaces and added my custom namespace. Is this the right method to add my custom metrics? Can someone guide me here?
Hi @PickleRick  we already use the base API URL for curl commands and would like to use it in Python.
Hi @VatsalJagani  no, we need to use the base API URL. Thanks.
@addOnGuy - I don't think there is any direct way to get the alert description, so you would need to make a REST API call to the saved/searches endpoint to find all the details about the alert. https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#saved.2Fsearches.2F.7Bname.7D

https://<host>:<mPort>/services/saved/searches/{name}

Since you already have the alert name, substitute it for {name} and you will get the other details, including the description.

I hope this helps!!!
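A hedged Python sketch of that call using only the standard library; the host, port, credentials, and alert name below are placeholders, not values from this thread:

```python
import base64
import urllib.parse
import urllib.request

def saved_search_url(base_url: str, name: str) -> str:
    # saved/searches/{name} returns the saved search (alert) definition,
    # including its description field; output_mode=json avoids XML parsing.
    quoted = urllib.parse.quote(name, safe="")
    return f"{base_url}/services/saved/searches/{quoted}?output_mode=json"

def fetch_saved_search(base_url: str, name: str, user: str, password: str) -> bytes:
    # Basic-auth GET against splunkd. In real use, make sure the server's
    # TLS certificate is trusted (splunkd often runs with a self-signed cert).
    req = urllib.request.Request(saved_search_url(base_url, name))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example URL for a hypothetical alert named "my alert":
print(saved_search_url("https://localhost:8089", "my alert"))
```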
Theoretically, of course, a reverse proxy could rewrite the URI path of a request, redirecting it somewhere else on the backend, but that would have to be explicitly configured. In any case, see the https://docs.splunk.com/Documentation/Splunk/latest/RESTUM/RESTusing document and the remark about namespaces in https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTREF/RESTlist
The recommendation would be to get a decent security team. This "finding" is completely false. Firstly, the current OpenSSL version for UF 9.1 is at least 1.0.2zg-fips. Secondly, the UF doesn't contain the c_rehash script, so even with the "vulnerable" version the UF as a whole was not vulnerable. Sending out "findings" based only on recognized software versions is really very low-effort vulnerability "management".
@splunkreal - Did you get to resolve your issue?
@selvam_sekar - Here is the query with a slight modification, though in my case even the original query gives the right count for today and yesterday.

basesearch earliest=-4d@d latest=now
| bin span=1d _time
| search NOT (date_wday="saturday" OR date_wday="sunday")
| stats count by Name, _time
| streamstats current=f window=1 last(count) as Yesterday by Name
| rename count as Today
| where strftime(_time, "%F")==strftime(now(), "%F")
| stats first(*) as * by Name
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Today Yesterday percentage_variance

I hope this helps!!! Kindly upvote & accept the answer if it does!!!
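The percentage_variance eval can be sanity-checked outside Splunk; a small Python equivalent (the sample numbers are made up, not from the thread):

```python
def percentage_variance(yesterday: float, today: float) -> float:
    # Mirrors: eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
    return abs(round((yesterday - today) / yesterday * 100, 2))

print(percentage_variance(200, 150))  # 25.0
print(percentage_variance(100, 130))  # 30.0
```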