All Posts


Hi @dorHerbesman - I would recommend opening a case with support, they’ll be able to help you troubleshoot what’s going on!
How long are you prepared to wait for the service to come up again? Are you looking to alert if all the servers don't come back up within a certain time, or if any one of them doesn't come back up? Are events generated when the service is up, and how regularly do these events occur? Can there be periods when no events are generated but the service is still to be considered up?
In Advanced XML, all tokens are global in scope so there is no need to "pass" them.
Hi @marnall
I tried your suggestion, but _time is always set to info_min_time. I ran the search this morning, Mar 11 2024 10:12:00 EDT, with a time range of last 30 days, so:
info_min_time (start time): Feb 10 2024 00:00:00 EST
info_max_time (end time): Mar 11 2024 00:00:00 EDT
_time is set to info_min_time, as seen below.
Can you set your search time range to last 30 days, run the collect command with testmode=true, and share your results?
| collect index=summary testmode=true addtime=true file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""
You should have a _raw field that contains all the fields, including _time, before the event is pushed by the collect command. See my output below and please share yours. Thanks!
I thought I would pop in and let you all know the resolution from Splunk: use :\d{2}\s+(?P<Successful>\d+)\s+(?P<Failed>\d+)\s+(?P<Percentage>\S+) in bodyPreview.
As I understand it, you want the tstats command to look only for process names in the lookup file. You can do that with a subsearch:
| tstats `summariesonly` count from datamodel=Endpoint.Processes where [| inputlookup is_windows_system_file | fields filename | rename filename as "Processes.process_name" | format] by Processes.aid Processes.dest Processes.process_name Processes.process _time
Hey there @elizabethl_splu, after reading this thread I tried this setting on my Splunk 9.1.2 environment and it doesn't work. I created a file named web-features.conf with the stanza
[feature:dashboards_csp]
enable_dashboards_redirection_restriction=false
under /opt/splunk/etc/shcluster/apps/ADMIN_CONF (a folder I created to distribute conf files and updates), and I am still getting this warning. Can you think of anything I'm doing wrong? Thanks in advance!
I have a Windows service called "ess". Due to network glitches, the service enters the stopped state and then the running state again, and since Windows generates an event for each transition, these events are recorded in Splunk. But when the ess service is really down and never re-enters the running state, we need to be alerted. I want to write a Splunk alert that fires only when the ess service entered the stopped state but never re-entered the running state. The same service runs on 25 hosts, and all of them have network glitches.
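One way to sketch such an alert (assuming the service state arrives in a field named state with values "stopped"/"running" and the service name in service_name; both field names and the index are assumptions, so adjust them to your actual Windows sourcetype):

```
index=wineventlog service_name="ess" state IN ("stopped","running")
| stats latest(state) as last_state latest(_time) as last_change by host
| where last_state="stopped" AND last_change < relative_time(now(), "-15m")
```

Run this on a schedule over a window longer than your typical glitch duration; it returns only hosts whose most recent ess event is "stopped" and which have stayed stopped past the grace period, so brief stop/start flaps do not fire the alert.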
Thanks @ITWhisperer - It worked for me
Thanks!  I'd say Splunk is way behind on updating its Python.
Ok. Thanks.  Splunk is way behind on updating its Python.
Hi @Harish2, no, it's the easiest and most flexible way. But why don't you want to use the hours and minutes in the search? You could also create a macro to call instead of adding all the conditions to your searches. Ciao. Giuseppe
Thanks
@dnavara please have a look at my explanation here: https://community.splunk.com/t5/Getting-Data-In/Why-has-the-index-process-paused-data-flow-How-to-handle-too/m-p/631226/highlight/true#M108187
Events are retrieved based on the value of _time, so depending on how your event is parsed, it may appear in the index retrospectively. For example, Apache httpd log entries are usually timestamped with the time the request came in, e.g. 05:26, but the entry is only written to the log when the request completes, e.g. at 05:28. This means it was not in the log at 05:27, but did appear "later".
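You can confirm this late-arrival behaviour with a quick diagnostic search (a sketch; replace the index and sourcetype with your own) that compares the event timestamp with the time Splunk actually indexed the event:

```
index=your_index sourcetype=your_sourcetype
| eval index_lag_s = _indextime - _time
| stats min(index_lag_s) max(index_lag_s) avg(index_lag_s)
```

_indextime is Splunk's internal field recording when the event was written to the index; a consistently positive lag of a couple of minutes explains why a dashboard over "last 15 minutes" only shows the newest events on a later refresh.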
The case function does not support wildcards natively, but you can use them in like (as you have), or you can use the equivalent regular expression with match.
| eval Status=case(
    like('message',"%Exchange Rates Process Completed. File sucessfully sent to Concur%"),"SUCCESS",
    match('message',"(TEST|DEV|PRD)\(SUCCESS\): Exchange Rates OnDemand Interface Run Report - Concur"),"SUCCESS",
    like('TracePoint',"%EXCEPTION%"),"ERROR")
There are a few ways to onboard data into Splunk.
- Install a universal forwarder on the server to send log files to Splunk
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog
- Use the server's API to extract data for indexing
- Use Splunk DB Connect to pull data from the server's SQL database
- Have the application send data directly to Splunk using HTTP Event Collector (HEC)
This answer may be useful: https://community.splunk.com/t5/Other-Usage/Splunk-integration-with-thousandeyes/m-p/361387
See also https://www.thousandeyes.com/blog/data-observability-backend-opentelemetry
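For the universal-forwarder option, the file to watch is declared in inputs.conf on the forwarder. A minimal sketch (the path, index, and sourcetype here are illustrative placeholders, not values from this thread):

```
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp:log
disabled = false
```

After editing inputs.conf, restart the forwarder (or deploy the change via a deployment server) for the monitor stanza to take effect.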
Using props.conf I'm able to extract the fields, but on the Splunk dashboard the data is not visible for 05:26 pm while data is visible for 05:27 pm; if I check again after 2-3 minutes, the entry at 05:26 pm becomes visible. On the dashboard the default time range is last 15 minutes.
Hi @ITWhisperer
Here is the raw format: {"message_type": "INFO", "processing_stage": "XXXXX", "message": "XXXXXX", "correlation_id": "XXXXXX", "error": "", "invoker_agent": "XXXXXX", "invoked_component": "XXXXXX", "request_payload": "", "response_details": "", "invocation_timestamp": "XXXXX", "response_timestamp": "XXXXX", "original_source_app": "XXXX", "AAAA": "", "retry_attempt": "1", "custom_attributes": {"entity-internal-id": ["12345678", "9876543", "2341234"], "root-entity-id": "3", "campaign-id": "XXXX", "campaign-name": "XXXXX", "marketing-area": "CCCC", "lead-id": ["000000", "1111111", "3333333"], "record_count": "", "country": ""}}
Hi, we see the same issue on Splunk 9.1.2. What was the reason for lowering this to 1 from the default of 6?
maxConcurrentOptimizes=1