All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunkers, after I upgraded to Splunk Enterprise version 9.4, the client names under Forwarder Management on the deployment server show up as GUIDs rather than the actual hostnames. Prior to 9.4 I remember it showing the actual hostnames, so I'm not sure whether additional configuration is required here. Has anyone experienced the same and does anyone know what needs to be done? Please advise. Regards,
We have a search head cluster consisting of 3 search heads, with Splunk Enterprise Security installed; the notable index in the Enterprise Security app is where all the notable events are stored. The problem is that the notable index data does not appear consistently across all 3 search heads: each member shows different notable data than the other two.
Hi All, I'm building the query below as a single correlation search, combining "forwarder delayed phone home for 2 hours" and "forwarder not sending data to indexers for more than 15 minutes" via the append command. However, the query is not working with append where it calculates the time since data was last sent and since the last phone-home connection. Kindly suggest any change to the query that would fix the calculation.

index=_internal host=index1 source=*metrics.log* component=Metrics group=tcpin_connections kb>1
| eval os=os+" "+arch
| eval ip=sourceIp
| eval type="Datasent"
| stats max(_time) as _time values(hostname) as hostname values(fwdType) as fwdType values(version) as version values(os) as os by sourceIp
| append
    [ search index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log" "/services/broker/phonehome/connection"
    | rex field=uri "_(?<fwd_name>[^_]+)_(?<fwd_id>[-0-9A-Z]+)$"
    | eval type="Deployment"
    | dedup fwd_name
    | stats max(_time) as lastPhoneHomeTime values(fwd_name) as hostname values(useragent) as fwdType values(version) as version values(type) as types by clientip
    | convert ctime(lastPhoneHomeTime)
    | table clientip lastPhoneHomeTime hostname fwdType version ]
| stats dc(type) as num_types values(type) as types values(hostname) as hostname values(fwdType) as fwdType values(version) as version values(os) as os max(_time) as most_recent_data values(lastPhoneHomeTime) as most_recent_settings by ip
| eval data_minutes_ago=round((now()-most_recent_data)/60, 1), settings_minutes_ago=round((now()-most_recent_settings)/60, 1)
| search settings_minutes_ago>120 OR data_minutes_ago>15
| convert ctime(most_recent_data) ctime(most_recent_settings)
| sort types data_minutes_ago settings_minutes_ago
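For what it's worth, the eval arithmetic and the alerting threshold that the correlation search relies on can be sanity-checked outside SPL. A minimal sketch in Python (the function names are illustrative, not part of Splunk):

```python
# Mirror of: eval data_minutes_ago=round((now() - most_recent_data) / 60, 1)
def minutes_ago(event_epoch, now_epoch):
    """Minutes elapsed since an epoch timestamp, rounded to one decimal."""
    return round((now_epoch - event_epoch) / 60, 1)

# Mirror of: search settings_minutes_ago>120 OR data_minutes_ago>15
def should_alert(data_minutes_ago, settings_minutes_ago):
    """Alert if data is stale >15 min OR phone home is stale >120 min."""
    return settings_minutes_ago > 120 or data_minutes_ago > 15

now = 1_700_000_000
print(minutes_ago(now - 930, now))   # 15.5 -> data threshold breached
print(should_alert(15.5, 30.0))      # True
print(should_alert(10.0, 60.0))      # False: both within limits
```

If either stats branch fails to carry `most_recent_data` or `most_recent_settings` through, the subtraction yields null and the event silently drops out of the `search` filter, which is worth checking first.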
I created my own app using Splunk Add-On Builder that captures some events via an API. I'm using a Python input. After a few hours I get an authentication error in some of the code automatically generated by the Splunk Add-on Builder. I'll put the error below. Can anyone help me? Thank you.

2025-01-27 10:49:55,230 log_level=ERROR pid=49602 tid=MainThread file=base_modinput.py:log_error:309 |
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 321, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 76, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 737, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1272, in get
    return self.request(url, {'method': "GET", 'headers': headers})
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1344, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- b'{"messages":[{"type":"WARN","text":"call not properly authenticated"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 262, in _handle_auth_error
    yield
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 330, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 76, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 737, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1272, in get
    return self.request(url, {'method': "GET", 'headers': headers})
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1344, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- b'{"messages":[{"type":"WARN","text":"call not properly authenticated"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 154, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 173, in _parse_input_args_from_global_config
    ucc_inputs = global_config.inputs.load(input_type=self.input_type)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunktaucclib/global_config/configuration.py", line 277, in load
    input_item["name"], input_item["entity"]
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunktaucclib/global_config/configuration.py", line 189, in _load_endpoint
    RestHandler.path_segment(self._endpoint_path(name)), **query
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 330, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 265, in _handle_auth_error
    raise AuthenticationError(msg, he)
splunklib.binding.AuthenticationError: Authentication Failed! If session token is used, it seems to have been expired.
Hi All, upgrading on-prem from 9.3 to 9.4 and getting this error from mongod, which I've never had before:

The server certificate does not match the host name. Hostname: 127.0.0.1 does not match SAN(s):

This makes sense, since I am using a custom cert and 127.0.0.1 isn't on it. The cert is a wildcard cert I use internally, so messing with the hosts file won't work. Is there a way to get mongod to either ignore the cert SANs, or to change the connect string for mongo so that it connects to the FQDN rather than 127.0.0.1?
Hi, can anyone please let me know the latest sub-version (maintenance release) of Splunk 9.3? Regards, Poojitha NV
Hi, We need to implement Observability in our PHP 7.3.33 application. Can you please us the way to do so. As open telemetry requires PHP version higher than 8.  Currently, it is difficult for us to... See more...
Hi, we need to implement Observability in our PHP 7.3.33 application. Can you please suggest a way to do so? OpenTelemetry requires a PHP version higher than 8, and it is currently difficult for us to upgrade the version. Any help will be appreciated.
What do I need to change in order to convert HEC on HTTP to HEC on HTTPS?
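HEC's transport is controlled in the inputs.conf of the splunk_httpinput app (or via Settings > Data Inputs > HTTP Event Collector > Global Settings, "Enable SSL"). A minimal sketch, assuming a standalone instance and default locations; the certificate path is illustrative, so verify the key names against the inputs.conf spec for your version:

```
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
port = 8088
enableSSL = 1
# Optional: use your own certificate instead of Splunk's default server cert
serverCert = /opt/splunk/etc/auth/mycerts/hec_cert.pem
sslPassword = <private_key_password>
```

After restarting Splunk, clients must send to https://<host>:8088/services/collector instead of http, and any client that validates certificates needs to trust the cert you configured.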
Hi guys, I am unable to install Splunk Enterprise on my Windows 11 machine. When I run the installer, it says it ended prematurely. Please help.
Good day, I'm having an issue with an email dashboard I'm attempting to create in Splunk. This dashboard filters on the various email header fields such as sender, recipient, subject, etc. One of these fields is the attachments field. The issue is that there is *always* a sender, recipient, and subject, but not all emails have attachments, nor do I always want to filter by it. In the dashboard, I'm using a text field with a default value of '*'. The problem with this is shown in the extract below.

index=email source=/var/email_0.log attachments=$file$ OR sha256=$hash$

This search will find all emails with attachments, but filter out emails without any. However, what if I want to search for an email just by its subject while ignoring attachments? I'd love to be able to change the dashboard so that filtering by these fields could be turned on and off, but I haven't found a way to do that. I thought I could use isnotnull(attachments) inside a case() or if() function to test whether the field exists, but those expressions don't appear to work in the base search. Does anyone have any insight into how I could change the search (or dashboard) so that I'm not always filtering by attachments? Perhaps by changing the default values? Or perhaps the regex command?
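One common workaround (a sketch; the token name is illustrative) is to let the text input hold an entire optional search clause rather than just a field value, with a default of `*`. The default then matches every event, including events with no attachments field at all:

```
index=email source=/var/email_0.log $attachment_clause$ subject=$subject$
```

Here the dashboard input behind $attachment_clause$ defaults to *, and the user types a complete clause such as attachments="*.pdf" OR sha256=abc123 only when they actually want to filter on it. The trade-off is that the user must type the field name, but the filter disappears cleanly when left at its default.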
I'm wondering, is it possible to mask/anonymize data at index time for the _internal index? I have an alert action configured with a webhook, and I'm looking to mask the URI of the request in the internal logs. I'm able to mask the value at search time with this SPL:

index=_internal action=webhook
| rex field=url mode=sed "s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g"
| table url

I tried to port this configuration to /opt/splunk/etc/system/local/ by creating a props.conf with the following:

[sourcetype::_internal]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

and

[splunkd]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g

Neither works. This is a standalone instance of Splunk running on an EC2 instance. So my question is: is it even possible to filter Splunk-generated logs? Should I funnel these through transforms.conf and do it there? Is that possible? Any help or insight would be greatly appreciated.
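As a side note, the sed expression itself can be sanity-checked as an ordinary regex substitution before debugging the .conf placement. A quick equivalent in Python (pattern reproduced from the post, with the dots properly escaped; the sample URL is made up):

```python
import re

# Equivalent of:
#   s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g
pattern = r"https?://www\.domain\.com/.*"
replacement = "https://www.domain.com/XXXX-XXXX-XXXX"

url = "https://www.domain.com/hooks/secret-token-12345"
masked = re.sub(pattern, replacement, url)
print(masked)  # https://www.domain.com/XXXX-XXXX-XXXX
```

If the pattern is sound, the remaining question is placement: SEDCMD is an index-time setting applied to _raw for a matching sourcetype (e.g. [splunkd]) on the instance that parses the data, so it cannot retroactively change events that are already indexed.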
Hey guys, I'm trying to ingest HAProxy logs into Splunk UBA. My issue is that I'm getting eventHasNoEntities for all events, even though they are parsed. What does this error mean exactly? Does it mean the event has no device or user associated with it, or that it's missing some fields? My main event key includes the whole HAProxy log line.
I always get 403 Forbidden when logging in to www.splunk.com. However, when I log in from the office network, it is fine. This is very frustrating: I cannot access the UF or the latest Splunk Enterprise downloads from my desktop. The funny thing is, after I get the 403 Forbidden, if I go to docs.splunk.com I can see that I am actually logged in. But when I try to go to other pages, I get 403 Forbidden again.
We have a SH cluster with 3 search heads, collecting data from an indexer cluster with 3 indexers. The problem is that the data in the indexers is not showing up consistently on all 3 SHs: for example, if we check the last 15 minutes of _internal data on each SH, the event counts differ by 1k to 5k. Dashboards created on one SH replicate properly between the SH members. Because of this issue, Enterprise Security notables show up differently on each SH.
Hello, I am building a Splunk app where I want my own custom aggregate function for the stats command. Below is my use case:

| makeresults count=10
| eval event_count=random()%10
| stats mysum("event_count") as total_count

Does anyone know what my Python code should look like, if it is feasible to create a mysum function? Thanks!
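As far as I know, stats itself cannot be extended with new aggregate functions; the supported route is a custom search command (a reporting command built with the splunklib SDK) invoked as its own pipeline stage, e.g. `... | mysum event_count`, rather than inside stats. Stripped of the SDK plumbing, the reduce step is plain Python. A sketch under those assumptions (field and function names are illustrative):

```python
def mysum(field, events):
    """Aggregate one field across a stream of event dicts, skipping
    events where the field is missing or non-numeric (mirroring how
    stats sum() ignores null values)."""
    total = 0.0
    for event in events:
        try:
            total += float(event[field])
        except (KeyError, TypeError, ValueError):
            continue
    return total

events = [{"event_count": "3"}, {"event_count": "7"}, {"other": "x"}]
print(mysum("event_count", events))  # 10.0
```

In a real custom command, this loop would live in the reduce method of a ReportingCommand subclass, registered via commands.conf in your app; the SDK delivers the events and emits the single aggregated result row for you.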
Hi Splunkers! The issue I am having is that alerts return different results than a manual run of the same query over the same time frame. This is a repeated issue across different search queries using different functions: an alert triggers, and when I view the results of the alert it shows, for example, 3000 events scanned and 2 results in the statistics section, while when I run the same search manually it shows 3500 events scanned and 0 results. I can't find any solution online, and this issue is causing several of my alerts to fire falsely. Here is an example query that exhibits the issue, in case that is helpful:

index="index" <search> earliest=-8h@h
| stats count(Field) as Counter earliest(Field) as DataOld by FieldA, FieldB
| where DataNew!=DataOld OR isnull(DataOld)
| table Counter, DataOld, FieldA, FieldB

Any help is very appreciated!
I am taking the SPLK-5001 Cybersecurity Defense Analyst exam. Where can I find useful and accurate practice exams to prepare? I find that some available online are AI-generated, unrealistic, or too hard or too easy. Any general study tips would also be very helpful.
We have a SH cluster of 3 search heads, where the Enterprise Security notables are not the same on all 3 members. Furthermore, when we check the last 15 minutes of _internal data, each member's event count varies significantly (5k to 10k) from the other 2 SH members.
Hi, I'm struggling to figure out what I'm doing wrong. I have the following SPL:

| inputlookup append=t kvstore
| eval _time = strptime(start_date, "%Y-%m-%d")
| eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")

start_date is YYYY-MM-DD. When I modify _time, I can see it has changed via readable_time, but the time picker still ignores the change: I can search the last 30 days and still get rows whose _time is before the range in the time picker. Any ideas? Thanks!
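The strptime/strftime pair itself looks right; the likely catch is that the time range picker filters indexed events on their stored _time, and rows generated by inputlookup are not indexed events, so an eval'd _time is not constrained by the picker on its own (you would filter explicitly, e.g. with a where clause on _time). The conversion can be checked outside Splunk; a Python equivalent (dates illustrative, treating the parsed time as UTC):

```python
from datetime import datetime

# SPL: eval _time = strptime(start_date, "%Y-%m-%d")  -> epoch seconds
epoch = datetime(1970, 1, 1)
t = datetime.strptime("2024-06-01", "%Y-%m-%d")
print((t - epoch).total_seconds())       # seconds since 1970-01-01

# SPL: eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
print(t.strftime("%Y-%m-%d %H:%M:%S"))   # 2024-06-01 00:00:00
```

So seeing a correct readable_time only proves the eval worked, not that the picker will honor it for lookup rows.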
I have a base query which yields the field result; result can be either "Pass" or "Fail" (a sample query result is attached). How can I create a column chart with the counts of passes and fails as different-colored columns? Here is my current search, which yields a column chart with two columns of the same color:

index="sampleindex" source="samplesource"
| search test_name="IR Test"
| search serial_number="TC-7"
| spath result
| stats count by result
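One way to get each value of result into its own series (and hence its own color) is to split the chart by result as well as charting over it. A sketch based on the posted search:

```
index="sampleindex" source="samplesource" test_name="IR Test" serial_number="TC-7"
| spath result
| chart count over result by result
```

With result as both the x-axis field and the split-by field, Pass and Fail become separate series and the chart assigns each its own color; the colors can also be pinned per series in the panel's format options (charting.fieldColors).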