All Posts

Wait. What is that [splunkd] stanza? You have an input called splunkd? What is it ingesting?
I am trying to use the Splunk Add-on for Tomcat for the first time. When I try Add Account, it fails with the error message below. I think the add-on expects Java to be available somewhere. Java is installed on my all-in-one Splunk server, where the add-on is also installed. How do I make Java available to this add-on?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/handler.py", line 142, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/handler.py", line 107, in wrapper
    self.endpoint.validate(
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 85, in validate
    self._loop_fields("validate", name, data, existing=existing)
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in _loop_fields
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in <listcomp>
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/field.py", line 56, in validate
    res = self.validator.validate(value, data)
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/bin/Splunk_TA_tomcat_account_validator.py", line 85, in validate
    self._process = subprocess.Popen(  # nosemgrep false-positive : The value java_args is
  File "/opt/splunk/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/splunk/lib/python3.9/subprocess.py", line 1837, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'java'
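The FileNotFoundError at the bottom means the splunkd process cannot find a java executable on its PATH. A minimal sketch of two common fixes, assuming Java is installed at /usr/bin/java (check with `which java`; the paths here are illustrative, not from the add-on docs):

  # Option 1: symlink java into a directory that is already on splunkd's PATH
  sudo ln -s /usr/bin/java /usr/local/bin/java

  # Option 2: extend the environment splunkd starts with, in
  # $SPLUNK_HOME/etc/splunk-launch.conf (environment variables set there
  # are inherited by splunkd), then restart Splunk:
  #   PATH=/usr/bin:$PATH

Either way, restart Splunk afterwards so the validator subprocess picks up the new environment.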
I still feel like a dummy. I set this in the UF's inputs.conf:

[splunkd]
_meta = my_field::abc

Then I set this in fields.conf on the SH:

[my_field]
INDEXED = true
INDEXED_VALUE = false

Of course, I applied the config on both instances, and I still can't find the new field created via _meta.
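For reference, a minimal sketch of how an indexed field set via _meta is usually wired up, assuming the _meta line sits under a real input stanza ([splunkd] is not a standard input type, which may be the actual problem here; the monitor path below is just an example):

inputs.conf on the UF:

[monitor:///var/log/myapp.log]
_meta = my_field::abc

fields.conf on the search head:

[my_field]
INDEXED = true

Then verify against the indexed field directly:

index=* my_field::abc

or

| tstats count where index=* my_field=abc by sourcetype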
I'll give you karma for this; I forgot my client uses Palo Alto to detect it.
You should add \s* on both sides of the = so you also find definitions like "index =*". Also, there could be a macro or an eventtype where this definition lives. Even a role's default indexes or search filters could contain it. As you can see, it's not as easy as just looking for "index=*" in saved searches or even in _audit. There are lots of similar questions and answers in the community where you can find more information about this subject.
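A sketch of what that could look like, extending the | rest saved-searches approach from the earlier reply to macros and eventtypes as well (the endpoints exist in core Splunk, but treat the exact regex as an assumption to tune):

| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| regex search="index\s*=\s*\*"
| table title search disabled author eai:acl.app

| rest "/servicesNS/-/-/admin/macros" splunk_server=*
| regex definition="index\s*=\s*\*"
| table title definition eai:acl.app

| rest "/servicesNS/-/-/saved/eventtypes" splunk_server=*
| regex search="index\s*=\s*\*"
| table title search eai:acl.app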
I think this endpoint just took some time to deploy. It's working as expected now.
Have these instructions changed since this was posted? The Splunk Cloud URL doesn't resolve for me. I am on the free trial of Splunk Cloud with HEC enabled.
https://docs.splunk.com/Documentation/ES/8.0.1/Install/InstallSplunkESinSHC#Differences_between_deploying_on_a_search_head_and_a_search_head_cluster_environment I think the rest of the SHC ES installation instructions were mistakenly copied from the single-instance ones and never adjusted. This manual could use some feedback.
Two things.

1. Stanzas in props.conf are (unless explicitly written for source:: or host::) matched by sourcetype. Don't put "sourcetype::" in the stanza name.

2. If your idea was to cast the sourcetype from A to B and then use transforms defined for sourcetype B, that won't work. The list of operations to perform on an event is decided at the beginning of the ingestion pipeline. The only way to change it "midflight" is the CLONE_SOURCETYPE transform, but it's more complicated than a simple sourcetype rewrite; see the sketch below.
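A minimal sketch of both points against the configs in the question, reusing the stanza names from there (treat this as illustrative: the clone approach creates a copy, so the original event still exists and may need its own nullQueue routing):

props.conf, point 1: the stanza name is just the sourcetype, no prefix:

[syslog]
TRANSFORMS-setnull = setnull_syslog_test

transforms.conf, point 2: clone events into a second sourcetype, whose own pipeline then applies that sourcetype's transforms from the start:

[clone_as_syslog]
REGEX = .
CLONE_SOURCETYPE = syslog

props.conf:

[source::journald:///var/log/journal]
TRANSFORMS-clone = clone_as_syslog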
Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. When the data comes in, I set the sourcetype dynamically based on the value of the journald TRANSPORT field. This works fine. After that, I would like to apply other transforms to logs with certain sourcetypes, e.g. drop a log if it contains a certain phrase. Unfortunately, for some reason, the second transform is not working. Here are the props and transforms I'm using.

Here is my transforms.conf:

[set_new_sourcetype]
SOURCE_KEY = field:TRANSPORT
REGEX = ([^\s]+)
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[setnull_syslog_test]
REGEX = (?i)test
DEST_KEY = queue
FORMAT = nullQueue

Here is my props.conf:

[source::journald:///var/log/journal]
TRANSFORMS-change_sourcetype = set_new_sourcetype

[sourcetype::syslog]
TRANSFORMS-setnull = setnull_syslog_test

Any idea why the setnull_syslog_test transform is not working?
Sure. As a user you can't directly change anything that happens during data ingestion, but a properly maintained environment should allow for feedback on data quality. Badly ingested data is flawed data, and often simply useless data. Data with a wrongly assigned timestamp is simply not searchable in its proper place in time.
I understand it's not an auth issue. The error is very clear that it cannot resolve the host. The question pertains to the host/DNS entry. Thank you.
This is _not_ a problem with authentication. It means that the host you're running your curl command on cannot resolve the name you've provided as the host for the HEC endpoint. Whether you've provided a wrong name or there's a problem with your network setup, we can't tell from here. BTW, trial Splunk Cloud instances don't use TLS on HEC inputs, as far as I remember.
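A quick sketch for narrowing down which step fails, assuming a shell with standard tools (the hostname is the same placeholder as in the question):

  # Does the name resolve at all?
  nslookup http-inputs-<stack_url>.splunkcloud.com

  # Verbose curl separates DNS, TCP/TLS, and HTTP failures;
  # the /services/collector/health endpoint needs no token
  curl -v -k https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/health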
I cannot get auth to work for the HTTP input in the Splunk trial.

curl -H "Authorization: Splunk <HEC_token>" -k https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/event -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'

My Splunk URL is https://<stack_url>.splunkcloud.com. I've scoured the forums and the web trying a number of combinations. The HTTP input is in the Enabled state on the Splunk console. Any help is appreciated. Thank you.
You could calculate the current hour at alert execution time, then adjust the threshold at the end. Converting the hour to a number keeps the comparisons numeric:

<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| eval hour = tonumber(strftime(now(),"%H"))
| where (AvgDuration > 1000 AND hour >= 8 AND hour < 17)
    OR (AvgDuration > 500 AND (hour < 8 OR hour >= 17))
I want to set up a Splunk alert that can have two thresholds:

1. If the time is between 8 AM and 5 PM, alert if AvgDuration is greater than 1000 ms.
2. If the time is between 5 PM and 8 AM the next day, alert if AvgDuration is greater than 500 ms.

How do I implement this? The query I am working on:

<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| where AvgDuration > 1000
The fix for this will be SPL-266957.
It's a search head cluster environment.
Try the API:

| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| regex search="index=\*"
| table title search disabled author eai:acl.app
Hello, in case it helps: another user indicated that "this setting was not enabled on our deployer, hence the ES upgrade still proceeded without its enablement."