All Posts

There is a DBX 4.0 version coming which already contains an HA feature. I'm not sure when it will reach GA, but you could check the current discussions on Slack: https://splunk-usergroups.slack.com/archives/C22R341NG/p1736956361104829
Hello Splunkers, After I upgraded to Splunk Enterprise version 9.4, the client names under Forwarder Management on the deployment server are showing up as GUIDs rather than the actual hostnames. Prior to version 9.4 I remember it showed the actual hostnames. I'm not sure if an additional configuration is required here. Has anyone experienced the same and knows what needs to be done? Please advise. Regards,
SELinux is disabled, so no alerts there, and I haven't gotten anything from the second point yet. But when I comment out the NoNewPrivileges=Yes line in /etc/systemd/system/SplunkForwarder.service, it works. I'm just not sure why this Splunk service privilege setting is stopping it.
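If it helps, here's a minimal sketch of making that override persistent with a systemd drop-in instead of editing the unit file in place (assuming a systemd-managed SplunkForwarder service; the override file name is just an example), so a later enable boot-start or package update doesn't put the setting back:

sudo mkdir -p /etc/systemd/system/SplunkForwarder.service.d
sudo tee /etc/systemd/system/SplunkForwarder.service.d/override.conf <<'EOF'
[Service]
# Drop-ins are parsed after the unit file, so this later value wins over NoNewPrivileges=Yes
NoNewPrivileges=no
EOF
sudo systemctl daemon-reload
sudo systemctl restart SplunkForwarder

It's still worth understanding why NoNewPrivileges blocks the script, since relaxing it does widen what the forwarder's child processes are allowed to do.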
Hi, thanks for your answer! We manually instrument our code and include span links, and we can see that it works when using Jaeger instead of Splunk. When trying your example via curl, the span link didn't show up in the waterfall view either, so maybe our collector isn't set up correctly? We are using the docker container (quay.io/signalfx/splunk-otel-collector:latest) with the default configuration. However, I could not find any hints in the docs on what to configure for span links to work.
You already asked a question about this issue. Please read the answers you got there.
That is puzzling, because assuming that you're running it with the same user your forwarder runs as, you should have the same environment. The things I'd check would be: 1) SELinux alerts (if anything that should work doesn't, it's often SELinux ;-)) 2) dump the environment to a file at the beginning of your script and compare the version you get from running it with "splunk cmd" with the output from when it's actually run by the forwarder.
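For point 2, a rough sketch of what that environment dump could look like (the dump file path is just an example):

# near the top of the scripted input: record when it ran, who launched it, and its environment
{ echo "=== run at $(date) user=$(id -un) ppid=$PPID ==="; env | sort; } >> /tmp/crio_env_dump.txt

Run the script once via "splunk cmd" and once via the forwarder's schedule, then diff the two sections of the file; PATH, HOME and the SPLUNK_* variables are the usual suspects.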
Hi @isoutamo , I proposed this solution to Splunk for a project and I'm waiting for an answer that probably will not arrive! In my opinion it should work. Ciao. Giuseppe
Hi @AShwin1119 , usually indexes aren't on the Search Heads, but on the Indexers! So you shouldn't have any replication issue for the notable index. What's the issue you're finding? Ciao. Giuseppe
We have a Search Head cluster consisting of 3 Search Heads. Splunk Enterprise Security has a notable index in the Enterprise Security app where all the notable logs are getting stored. The problem is that the notable index data is not replicating to the other 2 Search Heads.
Hi @Nraj87 , what do you want to discover: missing forwarders? If this is your requirement, you could follow two ways: use a lookup containing the perimeter of the hosts to monitor (called e.g. perimeter.csv and containing at least one field: host) and run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

or, if you don't want to manually manage the perimeter.csv lookup, you could search e.g. for the hosts that sent logs in the last 30 days and didn't send logs in the last hour:

| tstats latest(_time) AS _time count WHERE index=* earliest=-30d latest=now BY host
| eval period=if(_time<now()-3600,"Previous","Latest")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="Previous"

I prefer the first solution because it gives you more control, but it requires more work to manage. Ciao. Giuseppe
Hi All, I'm building the query below to combine "Forwarder phone home delayed more than 2 hours" and "Forwarder not sending data to indexers for more than 15 minutes" into a single correlation search using the append command. However, the query is not working with the append command when calculating the time since data was last sent and the last phone home connection. Kindly suggest any change to the query that can fix the calculation.

index=_internal host=index1 source=*metrics.log* component=Metrics group=tcpin_connections kb>1
| eval os=os+" "+arch
| eval ip=sourceIp
| eval type="Datasent"
| stats max(_time) as _time values(hostname) as hostname values(fwdType) as fwdType values(version) as version values(os) as os by sourceIp
| append [ search index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log" "/services/broker/phonehome/connection"
  | rex field=uri "_(?<fwd_name>[^_]+)_(?<fwd_id>[-0-9A-Z]+)$"
  | eval type="Deployment"
  | dedup fwd_name
  | stats max(_time) as lastPhoneHomeTime values(fwd_name) as hostname values(useragent) as fwdType values(version) as version values(type) as types by clientip
  | convert ctime(lastPhoneHomeTime)
  | table clientip lastPhoneHomeTime hostname fwdType version ]
| stats dc(type) as num_types values(type) as types values(hostname) as hostname values(fwdType) as fwdType values(version) as version values(os) as os max(_time) as most_recent_data values(lastPhoneHomeTime) as most_recent_settings by ip
| eval data_minutes_ago=round((now()-most_recent_data)/60, 1), settings_minutes_ago=round((now()-most_recent_settings)/60, 1)
| search settings_minutes_ago>120 OR data_minutes_ago>15
| convert ctime(most_recent_data) ctime(most_recent_settings)
| sort types data_minutes_ago settings_minutes_ago
I created my own app using Splunk Add-On Builder that captures some events via an API. I'm using Python input. After a few hours I get an authentication error in some of the code automatically generated by the Splunk add-on builder. I'll put the error below. Can anyone help me? Thank you

2025-01-27 10:49:55,230 log_level=ERROR pid=49602 tid=MainThread file=base_modinput.py:log_error:309 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 321, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 76, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 737, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1272, in get
    return self.request(url, {'method': "GET", 'headers': headers})
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1344, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- b'{"messages":[{"type":"WARN","text":"call not properly authenticated"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 262, in _handle_auth_error
    yield
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 330, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 76, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 737, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1272, in get
    return self.request(url, {'method': "GET", 'headers': headers})
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1344, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- b'{"messages":[{"type":"WARN","text":"call not properly authenticated"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 154, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 173, in _parse_input_args_from_global_config
    ucc_inputs = global_config.inputs.load(input_type=self.input_type)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunktaucclib/global_config/configuration.py", line 277, in load
    input_item["name"], input_item["entity"]
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunktaucclib/global_config/configuration.py", line 189, in _load_endpoint
    RestHandler.path_segment(self._endpoint_path(name)), **query
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 330, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 265, in _handle_auth_error
    raise AuthenticationError(msg, he)
splunklib.binding.AuthenticationError: Authentication Failed! If session token is used, it seems to have been expired.
Try something along these lines

| rex field=msg "(?<action>added|removed)"
| eval added_time=if(action="added",_time,null())
| eval removed_time=if(action="removed",_time,null())
| sort 0 _time
| streamstats max(added_time) as added_time latest(removed_time) as removed_time by host slot_number
| eval downtime=if(action="added",added_time-removed_time,null())
Hi All, Upgrading on-prem from 9.3 to 9.4 and getting this error on mongod which I've never had before: The server certificate does not match the host name. Hostname: 127.0.0.1 does not match SAN(s): This makes sense since I am using a custom cert and 127.0.0.1 isn't on it. The cert is a wildcard cert I use internally, so messing with the hosts file won't work. Is there a way to get mongod to either ignore the cert SANs, or to change the connect string for mongo so that it's connecting to the FQDN rather than 127.0.0.1?
Hi, you could check it from here https://www.splunk.com/en_us/download/previous-releases.html or alternatively check what version docs.splunk.com shows. r. Ismo
Hi, Please can anyone let me know what the latest sub-version of Splunk 9.3 is? Regards, Poojitha NV
Hi, We need to implement Observability in our PHP 7.3.33 application. Can you please show us the way to do so, as OpenTelemetry requires PHP 8 or higher? Currently, it is difficult for us to upgrade the version. Any help will be appreciated.
Yes, the script is working with "splunk cmd" too: splunk cmd ./crio_simple_ps.sh
regex101.com is an excellent place to try out whether these rex expressions work or not. It also shows the execution cost of different versions and contains some other help.
You must turn on TLS/SSL in the HEC configuration. I think that in Cloud only HTTPS is available, but on-prem you can use either, though not both at the same time on one node! In this doc, look at the Splunk Enterprise part and enable SSL: https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/UsetheHTTPEventCollector When you are using it, just use https instead of http as the protocol. Earlier there were some requirements that the TLS certificates must be official, at least with some senders? I don't know if this is still a valid requirement or whether you can use private certificates too.
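Once SSL is enabled on HEC, a quick sanity check from the sending side could look like this (host, port and token are placeholders; add -k only while testing against a private/self-signed certificate):

curl -k "https://your-splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec tls test"}'

If the certificate is private, the sender either needs to trust the issuing CA or skip verification, which is probably what the note about "official" certificates was getting at.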