All Posts


I'm giving you karma for this - I forgot that my client is using Palo Alto to detect it.
You should add \s* on both sides of the = to also catch definitions like "index =*". There could also be a macro or an eventtype containing this definition, and even a role's default indexes or search filters could contain it. As you can see, it's not as easy as just looking for "index=*" in saved searches or even in _audit. There are a lot of similar questions and answers in the community where you can find more information about this subject.
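
A rough, untested sketch of how those hints could be combined into a single check over saved searches and macros (eventtypes and role-level search filters would need their own endpoints on top of this). Only the \s*-tolerant regex comes directly from this reply; the rest is an assumption about which REST endpoints and fields are available in your environment:

| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| regex search="index\s*=\s*\*"
| table title eai:acl.app search
| append
    [| rest "/servicesNS/-/-/admin/macros" splunk_server=*
    | regex definition="index\s*=\s*\*"
    | table title eai:acl.app definition]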
I think this endpoint just took some time to deploy. It's working as expected now.
Have these instructions changed since this was posted? The Splunk Cloud URL doesn't resolve for me. I am on the free version of Splunk Cloud with HEC enabled.
https://docs.splunk.com/Documentation/ES/8.0.1/Install/InstallSplunkESinSHC#Differences_between_deploying_on_a_search_head_and_a_search_head_cluster_environment I think the rest of the SHC ES installation instructions were mistakenly copied from the single-instance procedure and never adjusted. This manual could use some feedback.
Two things.
1. Stanzas are (unless explicitly set for source or host) based on sourcetype, so don't put "sourcetype::" in the stanza specification.
2. If your idea was to cast the sourcetype from A to B and then use transforms defined for sourcetype B, it won't work. The list of operations to be performed on an event is decided at the beginning of the ingestion pipeline. The only way to change it "mid-flight" is to use the CLONE_SOURCETYPE transform, but that's more complicated than a simple sourcetype rewrite.
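
For what it's worth, a minimal, untested sketch of the workaround this implies for the journald question in this thread: attach the drop rule to the original source:: stanza (which is what the pipeline keys on), not to the rewritten sourcetype. Stanza and transform names are taken from that question; note that this drops matching events from every transport collected under that source, not just syslog, so a transport-specific filter would instead need something like CLONE_SOURCETYPE or an ingest-time eval.

props.conf:
[source::journald:///var/log/journal]
TRANSFORMS-change_sourcetype = set_new_sourcetype
TRANSFORMS-setnull = setnull_syslog_test

transforms.conf (the drop transform itself is unchanged):
[setnull_syslog_test]
REGEX = (?i)test
DEST_KEY = queue
FORMAT = nullQueue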
Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. When the data comes in, I set the sourcetype dynamically based on the value of the journald TRANSPORT field. This works fine. After that, I would like to apply other transforms to the logs with certain sourcetypes, e.g. remove a log if it contains a certain phrase. Unfortunately, for some reason, the second transform is not working. Here are the props and transforms configs that I'm using.

Here is my transforms.conf:

[set_new_sourcetype]
SOURCE_KEY = field:TRANSPORT
REGEX = ([^\s]+)
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[setnull_syslog_test]
REGEX = (?i)test
DEST_KEY = queue
FORMAT = nullQueue

Here is my props.conf:

[source::journald:///var/log/journal]
TRANSFORMS-change_sourcetype = set_new_sourcetype

[sourcetype::syslog]
TRANSFORMS-setnull = setnull_syslog_test

Any idea why the setnull_syslog_test transform is not working?
Sure. As a user you can't directly change anything that happens during data ingestion, but a properly maintained environment should allow for feedback on data quality. Badly ingested data is flawed data and often simply useless data. Data with a wrongly assigned timestamp is simply not searchable in the proper "space in time".
I understand it's not an auth issue. The error is very clear that it cannot resolve the host. The question pertains to the host/DNS entry. Thank you
This is _not_ a problem with authentication. It means that the host you're trying to run your curl command on cannot resolve the name you've provided as the host for the HEC endpoint. Whether that's because you've provided a wrong name or because you have problems with your network setup, we don't know. BTW, trial Splunk Cloud instances don't use TLS on HEC inputs, as far as I remember.
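
If it helps, here is roughly what the call tends to look like against a trial stack, under the assumption (worth verifying against the current HEC documentation) that free trials expose HEC on inputs.<stack_url> rather than http-inputs-<stack_url>; and if the trial stack really does serve HEC without TLS, the scheme would be http:// instead of https://.

curl -k \
  -H "Authorization: Splunk <HEC_token>" \
  "https://inputs.<stack_url>.splunkcloud.com:8088/services/collector/event" \
  -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'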
I cannot get auth to work for the HTTP input in the Splunk trial.

curl -H "Authorization: Splunk <HEC_token>" -k https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/event -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'

My Splunk URL is https://<stack_url>.splunkcloud.com. I've scoured the forums and the web trying a number of combinations here. The HTTP input is in the Enabled state on the Splunk console. Any help is appreciated. Thank you
You could calculate the current hour at alert execution time, then adjust the threshold at the end.

<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| eval hour = strftime(now(),"%H")
| where (AvgDuration > 1000 AND hour >= 8 AND hour < 17) OR (AvgDuration > 500 AND (hour < 8 OR hour >= 17))
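
A variant of the same idea, in case it's easier to maintain: compute the threshold itself with case() and compare once. The tonumber() wrapper is an extra precaution added here (it's not in the answer above) so the hour is compared numerically rather than as a string.

<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| eval hour = tonumber(strftime(now(),"%H"))
| eval threshold = case(hour >= 8 AND hour < 17, 1000, true(), 500)
| where AvgDuration > threshold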
I want to set up a Splunk alert that can have two thresholds:
1. If the time is between 8 AM and 5 PM - alert if AvgDuration is greater than 1000 ms
2. If the time is between 5 PM and 8 AM the next day - alert if AvgDuration is greater than 500 ms
How do I implement this? The query I am working on:

<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| where AvgDuration > 1000
Fix for this will be SPL-266957.
It's a Search Head Cluster environment.
Try the API:

| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| regex search="index=\*"
| table title search disabled author eai:acl.app
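
Since the question asks about alerts specifically, one possible refinement is to narrow the saved-search list to scheduled searches with an alert condition before applying the regex. Treat the filter fields (is_scheduled, alert_type, alert.track) as a common heuristic to verify in your own environment; the \s*-tolerant regex from the other reply is folded in as well:

| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| search is_scheduled=1 AND (alert_type!="always" OR alert.track=1)
| regex search="index\s*=\s*\*"
| table title eai:acl.app cron_schedule disabled search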
Hello, in case this helps: I found that another user indicated "this setting was not enabled on our deployer, hence the ES upgrade still proceeded without its enablement."
Can someone please help me with a dashboard search query that will look for all alerts configured in Splunk and list only those alerts that contain index=*?
I'm accepting this as the solution since this line was the key to creating my final search, as seen below. It looks for empty indexes, which can be used in an alert with "When number of results is greater than 0" and "Trigger for each result".

| rest splunk_server=local /services/data/indexes
| where title IN ("index1", "index2", "index3", "index4", "index5", "index6")
| table title
| rename title AS index
| join type=left index
    [| tstats count where index IN ("index1", "index2", "index3", "index4", "index5", "index6") BY index]
| fillnull
| where count=0
I got it with a variation on your solution. Below is the final search for an alert that looks for empty indexes (count=0) from a given list of known indexes ("index1", "index2", "index3", "index4", "index5", "index6", ...etc.).

| rest splunk_server=local /services/data/indexes
| where title IN ("index1", "index2", "index3", "index4", "index5", "index6")
| table title
| rename title AS index
| join type=left index
    [| tstats count where index IN ("index1", "index2", "index3", "index4", "index5", "index6") BY index]
| fillnull
| where count=0

That first line with the rest command was the key. Thank you!
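
For completeness, a join-free sketch of the same check, using the same placeholder index list; one behavioral difference is that this version also flags list entries that don't exist as indexes at all, since it never consults /services/data/indexes:

| tstats count where index IN ("index1", "index2", "index3", "index4", "index5", "index6") BY index
| append
    [| makeresults
    | eval index=split("index1,index2,index3,index4,index5,index6", ",")
    | mvexpand index
    | eval count=0
    | fields index count]
| stats sum(count) AS count BY index
| where count=0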