All Posts



Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. When the data comes in, I set the sourcetype dynamically based on the value of the journald TRANSPORT field. This works fine. After that, I would like to apply other transforms to logs with certain sourcetypes, e.g. drop a log if it contains a certain phrase. Unfortunately, for some reason, the second transform is not working. Here are the props and transforms configs that I'm using.

Here is my transforms.conf:

```
[set_new_sourcetype]
SOURCE_KEY = field:TRANSPORT
REGEX = ([^\s]+)
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[setnull_syslog_test]
REGEX = (?i)test
DEST_KEY = queue
FORMAT = nullQueue
```

Here is my props.conf:

```
[source::journald:///var/log/journal]
TRANSFORMS-change_sourcetype = set_new_sourcetype

[sourcetype::syslog]
TRANSFORMS-setnull = setnull_syslog_test
```

Any idea why the setnull_syslog_test transform is not working?
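In case it helps, two things are worth checking here. First, sourcetype stanzas in props.conf are written as plain `[syslog]`, not `[sourcetype::syslog]` (the `source::` prefix exists for source stanzas, but sourcetype stanzas take the bare name). Second, props stanzas are matched against the identity the event has when parsing starts, so a sourcetype assigned by an index-time transform generally does not re-trigger the new sourcetype's TRANSFORMS in the same pass. A sketch of a workaround under those assumptions is to chain both transforms from the source stanza, in listed order (note the null-queue transform as written would then apply to every transport, so it would need its own scoping, e.g. a stricter REGEX):

```
# props.conf -- hypothetical workaround: run both transforms from the
# stanza matching the event's original identity; transforms within one
# TRANSFORMS class execute in the order listed
[source::journald:///var/log/journal]
TRANSFORMS-change_sourcetype = set_new_sourcetype, setnull_syslog_test
```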
Sure. As a user you can't directly change anything that's happening during data ingestion, but a properly maintained environment should allow for feedback on data quality. Badly ingested data is flawed data, and often simply useless data. Data with a wrongly assigned timestamp is simply not searchable in the proper "space in time".
I understand it's not an auth issue. The error is very clear that it cannot resolve the host. The question pertains to the host/DNS entry. Thank you.
This is _not_ a problem with authentication. It means that the host you're running your curl command on cannot resolve the name you've provided as the host for the HEC endpoint. Whether that's because you've provided a wrong name or because you have problems with your network setup, we don't know. BTW, trial Splunk Cloud instances don't use TLS on HEC inputs, as far as I remember.
I cannot get auth to work for the HTTP Event Collector input in the Splunk trial.

```
curl -H "Authorization: Splunk <HEC_token>" -k https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/event -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'
```

My Splunk URL is https://<stack_url>.splunkcloud.com. I've scoured the forums and web trying a number of combinations here. The HTTP input is in the Enabled state in the Splunk console. Any help is appreciated. Thank you.
You could calculate the current hour at alert execution, then adjust the threshold at the end.

```
<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| eval hour = strftime(now(),"%H")
| where (AvgDuration > 1000 AND hour >= 8 AND hour < 17) OR (AvgDuration > 500 AND (hour < 8 OR hour >= 17))
```
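If the search window can contain events from both periods, you may want the threshold to follow each event's own timestamp rather than the alert's execution time. A variant sketch (wrapping the hour in tonumber() so it compares numerically rather than as a zero-padded string like "08"):

```
<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| eval hour = tonumber(strftime(_time,"%H"))
| where (AvgDuration > 1000 AND hour >= 8 AND hour < 17) OR (AvgDuration > 500 AND (hour < 8 OR hour >= 17))
```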
I want to set up a Splunk alert that has two thresholds:
1. If the time is between 8 AM and 5 PM - alert if AvgDuration is greater than 1000ms.
2. If the time is between 5 PM and 8 AM the next day - alert if AvgDuration is greater than 500ms.

How do I implement this? The query I am working on:

```
<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| where AvgDuration > 1000
```
Fix for this will be SPL-266957.
It's a Search Head Cluster environment.
Try the REST API:

```
| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| regex search="index=\*"
| table title search disabled author eai:acl.app
```
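Since saved searches include reports as well as alerts, you may want to narrow the list to scheduled alerts only. A sketch adding filters on fields the saved/searches endpoint exposes (the exact field values are worth verifying on your version, e.g. with `| table title alert_type is_scheduled`):

```
| rest "/servicesNS/-/-/saved/searches" splunk_server=*
| search is_scheduled=1 alert_type!="always"
| regex search="index=\*"
| table title search disabled author eai:acl.app
```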
Hello, in case this helps: another user indicated that "this setting was not enabled on our deployer hence the ES upgrade still proceeded without it's enablement."
Can someone please help me with a dashboard search query that will look for all alerts configured in Splunk and list only those alerts having index=* in their search?
I'm accepting this as the solution since this line was the key to creating my final search, as seen below. It looks for empty indexes, which can be used in an alert with "When number of results is greater than 0" and "Trigger for each result".

```
| rest splunk_server=local /services/data/indexes
| where title IN ("index1", "index2", "index3", "index4", "index5", "index6")
| table title
| rename title AS index
| join type=left index
    [| tstats count where index IN ("index1", "index2", "index3", "index4", "index5", "index6") BY index]
| fillnull
| where count=0
```
I got it with a variation on your solution. Below is the final search for an alert that looks for empty indexes (count=0) from a given list of known indexes ("index1", "index2", "index3", "index4", "index5", "index6", ...etc.).

```
| rest splunk_server=local /services/data/indexes
| where title IN ("index1", "index2", "index3", "index4", "index5", "index6")
| table title
| rename title AS index
| join type=left index
    [| tstats count where index IN ("index1", "index2", "index3", "index4", "index5", "index6") BY index]
| fillnull
| where count=0
```

That first line with the rest command was the key. Thank you!
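For what it's worth, a join-free variant of the same idea (a sketch, using the same hypothetical index list): append a zero-count row per index from REST to the tstats results, then keep only indexes whose maximum count is still 0.

```
| tstats count where index IN ("index1", "index2", "index3", "index4", "index5", "index6") BY index
| append
    [| rest splunk_server=local /services/data/indexes
     | where title IN ("index1", "index2", "index3", "index4", "index5", "index6")
     | rename title AS index
     | eval count=0]
| stats max(count) AS count BY index
| where count=0
```

Indexes with events get both their real count and the appended 0, so max(count) filters them out; truly empty indexes only ever get the 0 row and survive the final where.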
Can you give some sample data to help us understand this better and find a suitable solution? Please anonymise the data if/when needed!
You can/should always give notice to the people who onboarded that data so the real issue can be fixed! As said, with wrong timestamps this data is not as useful as it should be.
Hi, I am using the Splunk OTel Collector to send logs to Splunk Enterprise. For different sourcetypes, I want to do different things, like adding fields or removing fields. Can you guide me? Thanks a lot.

The below works:

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          statements:
          - set(attributes["johnaddkey"], "johnaddvalue")
```

The below does not work:

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          statements:
          - set(attributes["johntestwhere"], "johnvaluewhere") where attributes["sourcetype"] == "kube:container:istio-proxy"
```

The below also does not work:

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          conditions:
          - attributes["sourcetype"] == "kube:container:istio-proxy"
          statements:
          - set(attributes["johnaddkeyc"], "johnaddvaluec")
```
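One thing worth checking: in the Splunk OTel Collector's Kubernetes distribution, the sourcetype is typically carried as a resource attribute named com.splunk.sourcetype rather than as a log record attribute, so attributes["sourcetype"] would be empty and the condition would never match. A sketch under that assumption (the actual attribute name and placement can be confirmed with the debug exporter):

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          conditions:
          - resource.attributes["com.splunk.sourcetype"] == "kube:container:istio-proxy"
          statements:
          - set(attributes["johnaddkeyc"], "johnaddvaluec")
```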
I am trying to get a response based on the status codes coming back from message.outgoingResponse.istURL. The problem is that there is no field from which to read the returned status code for the URL in the shared screenshot. Is there any way we can get the status code, like we would get from any URL in a browser?

Query:

```
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound"
| spath "message.incomingRequest.partner"
| rename message.incomingRequest.partner as "SSO Partner"
| search "SSO Partner"=sso_SSOPartner_Amwell
| stats count by message.outgoingResponse.istURL
```
Gentlemen, thanks for the follow-up about when to fix such issues. As a pure user, I have no influence on anything that happens during indexing. I can just take the data which is there and work with what I get.
Gentlemen, you're right. The search code:

```
| where my_time>=relative_time(now(),"-1d@d") AND _time<=relative_time(now(),"@d")
```

updated to

```
| where my_time &gt;=relative_time(now(),"-1d@d") AND _time &lt;=relative_time(now(),"@d")
```

will work in a dashboard. I started from scratch with all my trials and was not able to reproduce my issues with the HTML tags. This must be a classic PIBCAK (Problem Is Between Chair and Keyboard).
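For reference, an alternative to entity-escaping the comparison operators in Simple XML is wrapping the whole query in a CDATA section, which the XML parser treats as literal text and which leaves the SPL readable. A sketch with a hypothetical search element:

```
<search>
  <query><![CDATA[
| where my_time>=relative_time(now(),"-1d@d") AND _time<=relative_time(now(),"@d")
  ]]></query>
</search>
```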