All Posts

Got it, I'll try to explain better. This is the actual base search:

(index=email source=/var/logs/esa_0.log attachments=$file$ sha256=$hash$) OR (index=cyber source=/varlogs/fe01.log) (suser="$sender$" OR sender="$sender$") (duser="$recipient$" OR recipient="$recipient$") (subject="$subject$" OR msg="$subject$") (id="'<$email_id$>'" OR message-id="$email_id$") (ReplyAddress="$reply_add$" OR from-header="$reply_add$")

If you look, you'll see the various fields I have set up to filter by: sender, recipient, subject, etc. Part of what I'm doing is consolidating email information from two different sourcetypes; that's why each filter is matched against the equivalent field in the other sourcetype. For instance, the part 'suser="$sender$" OR sender="$sender$"' filters emails by sender, keeping only the events in both sourcetypes where the sender is somebody@gmail.com, for example. However, the default value for this field (and the rest) is a wildcard * to match everything, so even if I don't fill in a value to filter by, it defaults to that. As a result, the search becomes 'suser="*" OR sender="*"' at search time. You see the problem? This kind of filter *requires* the suser or sender field to be present in the events, lest they get filtered out, even though I'm not trying to filter by them. Now, for fields like sender, recipient, subject, and even email_id, this is okay, because *every* email has to have these fields; they're not optional. For email attachments, however, that isn't the case. Not all emails have attachments, so not all events have an 'attachments' field. But because the search ultimately defaults to 'attachments=*', it requires the field. This is the problem: it makes it impossible to search for emails without attachments.
Ideally, I'd love to be able to simply tell Splunk not to filter by that field at all unless I fill it with something other than a wildcard, but that doesn't appear to be possible. Does this clear up any confusion?
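One common workaround (a sketch only, using the attachments field from the base search above; not tested against this data) is to give events that lack the field a placeholder value before the token filter is applied, so the default * still matches them:

```spl
index=email source=/var/logs/esa_0.log
| fillnull value="-" attachments
| search attachments="$file$"
```

With the default token "*", attachments="*" now matches the "-" placeholder too, so attachment-less emails survive; when a real value is supplied, only matching events remain. The trade-off is that the attachments filter moves out of the initial search clause, so it no longer narrows the search at index time.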
Hi all Does anyone know if there is a built-in visualisation similar to that provided by Graphistry (https://www.splunk.com/en_us/blog/tips-and-tricks/visualising-network-patterns-with-splunk-and-graphistry.html)? Thanks
Hi everyone, I'm a new Splunk Enterprise administrator. I'm about to delete the previous administrator's account and create a new one for myself. However, I have a few questions before I proceed. The previous administrator created numerous saved searches, lookup files, and scheduled tasks. Before deleting the account, I would like to: Verify account assets: Is there a way to view all the saved searches, lookup files, dashboards, and other assets owned by the account that I'm about to delete? Assign assets: How can I transfer ownership of these assets to my new account or configure my new account to access them? I'm concerned that deleting the account without taking these precautions might disrupt ongoing scheduled tasks. Any advice or experience you can share would be greatly appreciated. Thank you.
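For the inventory step, one option (a sketch; "old_admin" is a placeholder for the outgoing account name) is to query Splunk's REST endpoints, which list knowledge objects together with their owners:

```spl
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="old_admin"
| table title eai:acl.app eai:acl.owner eai:acl.sharing
```

Similar queries against /servicesNS/-/-/data/lookup-table-files and /servicesNS/-/-/data/ui/views cover lookup files and dashboards. Ownership can then be reassigned under Settings > All configurations (or via the object's acl endpoint) before the old account is deleted, so scheduled searches keep running under a valid owner.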
DBX 4.0 is coming, and it already contains an HA feature. I'm not sure when it goes GA, but you could check the current discussion on Slack: https://splunk-usergroups.slack.com/archives/C22R341NG/p1736956361104829
Hello Splunkers, After I upgraded to Splunk Enterprise version 9.4, the client names under Forwarder Management on the deployment server are showing up as GUIDs instead of the actual hostnames. Prior to version 9.4, I remember it showed the actual hostnames. I'm not sure if an additional configuration is required here. Has anyone experienced the same and knows what needs to be done? Please advise.   Regards,
SELinux alerts are disabled, and I'm not getting anywhere on the second point. But when I comment out the NoNewPrivileges=Yes line in /etc/systemd/system/SplunkForwarder.service, it works. I'm just not sure why this Splunk service privilege setting is stopping it.
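For reference, rather than editing the unit file in place (where an upgrade can overwrite the change), the usual systemd approach is a drop-in override; a sketch, assuming the unit is named SplunkForwarder.service as above:

```ini
# /etc/systemd/system/SplunkForwarder.service.d/override.conf
# (created via: systemctl edit SplunkForwarder)
[Service]
NoNewPrivileges=no
```

Then run systemctl daemon-reload and restart the service. NoNewPrivileges=yes prevents the service and all its child processes from gaining privileges, e.g. through setuid binaries like sudo or ping, which is a common reason a script that works interactively fails when run by the forwarder under systemd.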
Hi, thanks for your answer! We manually instrument our code and include span links, and we can see that it works when using Jaeger instead of Splunk. When trying your example via curl, the span link didn't show up in the waterfall view either, so maybe our collector isn't set up correctly? We are using the Docker container (quay.io/signalfx/splunk-otel-collector:latest) with the default configuration. However, I could not find any hints in the docs on what to configure for span links to work.
You already asked a question about this issue. Please read the answers you got there.
That is puzzling, because assuming you're running it with the same user your forwarder runs as, you should have the same environment. The things I'd check would be: 1) SELinux alerts (if anything that should work doesn't, it's often SELinux ;-)) 2) dump the environment to a file at the beginning of your script and compare the version you get from running it with "splunk cmd" against the output from when it's actually run by the forwarder.
Hi @isoutamo , I proposed this solution to Splunk for a project and I'm waiting for an answer that probably will not arrive! In my opinion it should work. Ciao. Giuseppe
Hi @AShwin1119 , usually indexes aren't in the Search Heads, but in the Indexers! So you shouldn't have any replication issue for the notable index. What's the issue you're finding? Ciao. Giuseppe
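A quick way to sanity-check where notable events are actually stored (a sketch; this assumes the default notable index name used by Enterprise Security):

```spl
index=notable
| stats count by splunk_server
```

If the events are landing on the indexer tier, splunk_server will list indexer names, and every search head in the cluster will see the same results when searching, so no search head replication of the index itself is involved.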
We have a Search Head cluster consisting of 3 search heads. Splunk Enterprise Security has a notable index in the Enterprise Security app, where all the notable logs are stored. The problem is that the notable index data is not being replicated to the other 2 search heads.
Hi @Nraj87 , what do you want to discover: missing forwarders? If this is your requirement, you could follow two ways: use a lookup containing the perimeter of the hosts to monitor (called e.g. perimeter.csv and containing at least one field: host) and run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

or, if you don't want to manually manage the perimeter.csv lookup, you could search e.g. for the hosts that sent logs in the last 30 days but didn't send logs in the last hour:

| tstats latest(_time) AS _time count WHERE index=* earliest=-30d latest=now BY host
| eval period=if(_time<now()-3600,"Previous","Latest")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="Previous"

I prefer the first solution because it gives you more control, but it requires more work to manage. Ciao. Giuseppe
Hi All, I'm building the below query as a single correlation search, using the append command, for two conditions: a forwarder that is delayed on phone home for 2 hours, and a forwarder not sending data to the indexers for more than 15 minutes. However, the query is not working with the append command when calculating the time since data was last sent and the last phone-home connection. Kindly suggest any change to the query that can fix the calculation.

index=_internal host=index1 source=*metrics.log* component=Metrics group=tcpin_connections kb>1
| eval os=os+" "+arch
| eval ip=sourceIp
| eval type="Datasent"
| stats max(_time) as _time values(hostname) as hostname values(fwdType) as fwdType values(version) as version values(os) as os by sourceIp
| append
    [ search index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log" "/services/broker/phonehome/connection"
    | rex field=uri "_(?<fwd_name>[^_]+)_(?<fwd_id>[-0-9A-Z]+)$"
    | eval type="Deployment"
    | dedup fwd_name
    | stats max(_time) as lastPhoneHomeTime values(fwd_name) as hostname values(useragent) as fwdType values(version) as version values(type) as types by clientip
    | convert ctime(lastPhoneHomeTime)
    | table clientip lastPhoneHomeTime hostname fwdType version ]
| stats dc(type) as num_types values(type) as types values(hostname) as hostname values(fwdType) as fwdType values(version) as version values(os) as os max(_time) as most_recent_data values(lastPhoneHomeTime) as most_recent_settings by ip
| eval data_minutes_ago=round((now()-most_recent_data)/60, 1), settings_minutes_ago=round((now()-most_recent_settings)/60, 1)
| search settings_minutes_ago>120 OR data_minutes_ago>15
| convert ctime(most_recent_data) ctime(most_recent_settings)
| sort types data_minutes_ago settings_minutes_ago
I created my own app using Splunk Add-On Builder that captures some events via an API. I'm using a Python input. After a few hours I get an authentication error in some of the code automatically generated by the Splunk Add-On Builder. I'll put the error below. Can anyone help me? Thank you

2025-01-27 10:49:55,230 log_level=ERROR pid=49602 tid=MainThread file=base_modinput.py:log_error:309 |
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 321, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 76, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 737, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1272, in get
    return self.request(url, {'method': "GET", 'headers': headers})
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1344, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- b'{"messages":[{"type":"WARN","text":"call not properly authenticated"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 262, in _handle_auth_error
    yield
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 330, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 76, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 737, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1272, in get
    return self.request(url, {'method': "GET", 'headers': headers})
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 1344, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- b'{"messages":[{"type":"WARN","text":"call not properly authenticated"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 154, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/modinput_wrapper/base_modinput.py", line 173, in _parse_input_args_from_global_config
    ucc_inputs = global_config.inputs.load(input_type=self.input_type)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunktaucclib/global_config/configuration.py", line 277, in load
    input_item["name"], input_item["entity"]
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunktaucclib/global_config/configuration.py", line 189, in _load_endpoint
    RestHandler.path_segment(self._endpoint_path(name)), **query
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 330, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/opt/splunk/etc/apps/myapp/bin/myapp/aob_py3/splunklib/binding.py", line 265, in _handle_auth_error
    raise AuthenticationError(msg, he)
splunklib.binding.AuthenticationError: Authentication Failed! If session token is used, it seems to have been expired.
Try something along these lines

| rex field=msg "(?<action>added|removed)"
| eval added_time=if(action="added",_time,null())
| eval removed_time=if(action="removed",_time,null())
| sort 0 _time
| streamstats max(added_time) as added_time latest(removed_time) as removed_time by host slot_number
| eval downtime=if(action="added",added_time-removed_time,null())
Hi All, Upgrading on-prem from 9.3 to 9.4 and getting this error on mongod, which I've never had before: "The server certificate does not match the host name. Hostname: 127.0.0.1 does not match SAN(s):" This makes sense, since I am using a custom cert and 127.0.0.1 isn't on it. The cert is a wildcard cert I use internally, so messing with the hosts file won't work. Is there a way to get mongod to either ignore the cert SANs, or to change the connect string for mongo so that it's connecting to the FQDN rather than 127.0.0.1?
Hi, you could check it here https://www.splunk.com/en_us/download/previous-releases.html or alternatively check what version docs.splunk.com shows. r. Ismo
Hi, Please can anyone let me know what the latest sub-version of Splunk 9.3 is?  Regards, Poojitha NV
Hi, We need to implement Observability in our PHP 7.3.33 application. Can you please show us a way to do so, given that OpenTelemetry requires a PHP version higher than 8? Currently, it is difficult for us to upgrade the version. Any help will be appreciated.