All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


If the correlation search is set to run in continuous mode (as opposed to real-time), then yes, Splunk will attempt to re-run the skipped search intervals. Change it to real-time mode to avoid that. See https://docs.splunk.com/Documentation/ES/7.1.2/Admin/Configurecorrelationsearches#Change_correlation_search_scheduling for more information.
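For reference, the scheduling mode can also be set directly in savedsearches.conf; a minimal sketch, assuming a correlation search named "My Correlation Search":

```ini
# savedsearches.conf
[My Correlation Search]
# 0 = continuous scheduling: skipped intervals are rescheduled and re-run
# 1 = real-time scheduling: skipped intervals are dropped
realtime_schedule = 1
```

With `realtime_schedule = 1`, the scheduler only runs the search at its next scheduled time and does not try to catch up on missed intervals.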
In this case, review the sourcetype in inputs.conf and change it if you are using a default pretrained one: https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Listofpretrainedsourcetypes The documentation notes: "The source types marked with an asterisk ( * ) use the INDEXED_EXTRACTIONS attribute, which sets other attributes in props.conf to specific defaults and requires special handling to forward to another Splunk platform instance. See Forward fields extracted from structured data files."
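For example, setting an explicit sourcetype in inputs.conf overrides the automatically assigned pretrained one (the monitor path and sourcetype name here are hypothetical):

```ini
# inputs.conf on the forwarder
[monitor:///var/log/myapp/app.log]
index = main
# override the default pretrained sourcetype with a custom one
sourcetype = myapp:custom
```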
Hello, if you are using _TCP_ROUTING and an index rename on the target platform, logs may go to the "last chance index".
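_TCP_ROUTING is typically set via a props.conf/transforms.conf pair on the forwarding instance; a minimal sketch with hypothetical sourcetype, group, and server names:

```ini
# props.conf
[myapp:custom]
TRANSFORMS-route = route_to_target

# transforms.conf
[route_to_target]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = target_group

# outputs.conf
[tcpout:target_group]
server = target-indexer:9997
```

If the index the events are routed to does not exist on the target platform, they can end up in the last chance index (or be dropped, depending on configuration).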
Hello, please see the tables below. There are results for components A, B and C:

_time             component_hostname  uptime
2024-11-11 15:00  Host A              0.00000 1.00000 5.00000
2024-11-11 15:00  Host B              0.00000 1.00000
2024-11-11 15:00  Host C              0.00000

If I apply where uptime=0, my results will look as follows:

_time             component_hostname  uptime
2024-11-11 15:00  Host A              0.00000
2024-11-11 15:00  Host B              0.00000
2024-11-11 15:00  Host C              0.00000

But this is not what I need, because component A also showed the uptime values 1.00000 and 5.00000 during my span. The same applies to component B, which showed 0.00000 and 1.00000. This means that components A and B were up during my span, and that is ok. I am only interested in components that showed no value other than 0 during the span, e.g. component C. That way I know that components A and B are responding during my span, but component C is not, because its uptime is always 0.
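One way to do this (a sketch, assuming the events carry the `component_hostname` and numeric `uptime` fields from the tables above, and a hypothetical 15-minute span) is to aggregate per component over each span and keep only the components whose maximum uptime is 0:

```
... your base search ...
| bin _time span=15m
| stats max(uptime) AS max_uptime BY _time component_hostname
| where max_uptime=0
```

Components that reported any non-zero uptime during the span (A and B) are filtered out; only components like C, which never reported anything other than 0, remain.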
You need access to the search head to confirm the data has been received properly.  Coordinate that with your Splunk admin(s)
I found that I had an error in one of my correlation searches because I saw it in the Cloud Monitoring Console. When I fixed the error, I suddenly saw that the latency for this specific correlation search was more than 4 million seconds. Looking into the actual events that the Cloud Monitoring Console is looking at, I see a scheduled_time of more than a month ago. Did I do something dumb, or is Splunk actually just trying to run all those missed scheduled tasks now, and I just need to wait it out? Or is there a way to stop them from running? I have already disabled the correlation search and did a restart from the server controls.
Hi @mana_pk123, can you please share the link to the application that you installed and are trying to integrate? Is it a Splunkbase application or a custom one?
Hi there, I am experiencing the same issue here; how did you resolve it? Kind regards, gift
Hello, is there a possibility of obtaining a Splunk Cloud license for development and integration purposes? Our company is actively working with the Splunk APIs, and I'm trying to determine if there is a license or partnership program we could leverage to support this work. Many thanks in advance!
Hello all, I have a timeline chart and I would like to add zooming to it: when we drag and select some lines, it should zoom in. Can anyone help me find out how to do this? Thanks in advance!
Were you able to solve this? I am facing the same issue
Hi @Athira, you should check the presence in both of the searches, something like this:

index=source "status for : *" "Not available"
| rex "status for : (?<ORDERS>.*?)"
| eval type="one"
| append
    [ search Message="Request for : *"
    | rex "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""
    | eval type="two" ]
| stats dc(type) AS type_count values(UNIQUEID) AS UNIQUEID BY ORDERS
| where type_count=2

Ciao. Giuseppe
In my case, from each query I'm retrieving a few fields from the log using regex and the makemv/mvexpand commands, so I'm not sure how I can do it with those changes.
Hi @gcusello, I tried your approach and I'm getting results for all ORDERS. I want only the ORDERS and UNIQUEID from the subquery to be displayed which match the ORDERS (in the outer query) that are Not available.
I am trying to integrate Splunk into my project. Currently, I have the following .properties file:

mySplunk.level = INFO
mySplunk.handlers = com.splunk.logging.HttpEventCollectorLoggingHandler

# Configure the com.splunk.logging.HttpEventCollectorLoggingHandler
com.splunk.logging.HttpEventCollectorLoggingHandler.url = myUrl
com.splunk.logging.HttpEventCollectorLoggingHandler.level = INFO
com.splunk.logging.HttpEventCollectorLoggingHandler.token = myToken
com.splunk.logging.HttpEventCollectorLoggingHandler.source = mySource
com.splunk.logging.HttpEventCollectorLoggingHandler.disableCertificateValidation = true

Note: the real url and token are not put into this file but are available and access is granted.

My SplunkTestLogger.java:

import java.io.FileInputStream;
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class Main {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("mySplunk");
        try {
            // readConfiguration resets the LogManager and applies the file's settings
            FileInputStream fis = new FileInputStream("C:\\Users\\myUser\\logging.properties");
            LogManager.getLogManager().readConfiguration(fis);
            fis.close();
            logger.setLevel(Level.INFO);
            logger.addHandler(new ConsoleHandler());
            logger.setUseParentHandlers(false);
            logger.info("starting myApp");
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Exception occurred", e);
        }
    }
}

This class is not able to send any log messages to Splunk. Why? I already tried to connect and send events manually with

URL url = new URL(SPLUNK_HEC_URL + "/services/collector/event");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setRequestProperty("Authorization", "Splunk " + SPLUNK_HEC_TOKEN);
connection.setDoOutput(true);
//....

and it was successful, but I want to make it work with the .properties approach.
Hi @ITWhisperer, as you can see in the result, StartTime is sorted, but the business date in front of it is coming as 11/07/2024 and is not sorted.
Splunk is not good at finding what isn't there - you have to tell Splunk (by creating an event in some way) what the expected data is and compare that to the actual data that is received. For example, you could have a lookup file of expected accounts, or in your case, since you seem to know which accounts you are interested in and there are only a few, you could use makeresults to generate corresponding events. You would then append this list to the list of accounts you are finding in the logs and discount those which are in the logs, leaving you just the accounts which aren't in the logs.
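A sketch of the makeresults approach described above (the index name `main` and the account names are placeholders): generate one zero-count event per expected account, append it to the counts found in the logs, and keep only the accounts whose total stays at 0:

```
index=main account IN ("acct_a", "acct_b", "acct_c")
| stats count BY account
| append
    [| makeresults
     | eval account=split("acct_a,acct_b,acct_c", ",")
     | mvexpand account
     | eval count=0]
| stats sum(count) AS count BY account
| where count=0
```

Accounts present in the logs get a non-zero sum and are discarded; only the accounts that never appeared survive the final where.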
I use Splunk Enterprise 9.0.4 and I tried adding the _meta field, which didn't work. I also tried adding INGEST_EVAL to transforms.conf and sending the data again; still no luck identifying the source.
Hi @hieuba6868, I am sending OpenTelemetry data to a heavy forwarder, and the HF forwards the data to the indexers. When I look at the field 'splunk_server', I can see only the names of the indexers. When I look at the data, I can see the name of the OTel source. In my current scenario, I want to know which HF is sending the data. Regards, Pravin
Check out:

[..] The search head must be at the same or a higher level than the search peers. See the note later in this section for a precise definition of "level" in this context. [..]

System requirements and other deployment considerations for distributed search - Splunk Documentation