All Posts



Hello, is there a possibility of obtaining a Splunk Cloud license for development and integration purposes? Our company is actively working with Splunk APIs, and I'm trying to determine if there's a license or partnership program we could leverage to support this work. Many thanks in advance!
Hello All, I have a timeline chart and I would like to add zoom to it: when we drag and select some lines, it should zoom in. Can anyone help me find out how to do this? Thanks in advance!
Were you able to solve this? I am facing the same issue
Hi @Athira , you should check the presence in both the searches, something like this: index=source "status for : *" "Not available" | rex "status for : (?&lt;ORDERS&gt;.*?)" | eval type="one" | append [ search Message="Request for : *" | rex "data=[A-Za-z0-9-]+\|(?P&lt;ORDERS&gt;[\w\.]+)" | rex "\"unique\"\:\"(?P&lt;UNIQUEID&gt;[A-Z0-9]+)\"" | eval type="two" ] | stats dc(type) AS type_count values(UNIQUEID) AS UNIQUEID BY ORDERS | where type_count=2 Ciao. Giuseppe
In my case, from each query I'm retrieving a few fields from the log using regex and the makemv and mvexpand commands, so I'm not sure how I can do it with those changes.
hi @gcusello I tried your approach; I'm getting results for all ORDERS. I want only the ORDERS and UNIQUEID from the subquery to be displayed which match the ORDERS (in the outer query) that are Not available.
I am trying to integrate Splunk into my project. Currently, I have the following .properties file:

mySplunk.level = INFO
mySplunk.handlers = com.splunk.logging.HttpEventCollectorLoggingHandler
# Configure the com.splunk.logging.HttpEventCollectorLoggingHandler
com.splunk.logging.HttpEventCollectorLoggingHandler.url = myUrl
com.splunk.logging.HttpEventCollectorLoggingHandler.level = INFO
com.splunk.logging.HttpEventCollectorLoggingHandler.token = myToken
com.splunk.logging.HttpEventCollectorLoggingHandler.source = mySource
com.splunk.logging.HttpEventCollectorLoggingHandler.disableCertificateValidation = true

Note: the real url and token are not put into this file, but they are available and access is granted. My SplunkTestLogger.java:

import java.io.FileInputStream;
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class Main {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("mySplunk");
        try {
            FileInputStream fis = new FileInputStream("C:\\Users\\myUser\\logging.properties");
            LogManager.getLogManager().readConfiguration(fis);
            logger.setLevel(Level.INFO);
            logger.addHandler(new ConsoleHandler());
            logger.setUseParentHandlers(false);
            logger.info("starting myApp");
            fis.close();
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Exception occurred", e);
        }
    }
}

This class is not able to send any log messages to Splunk. Why? I already tried to connect and send events manually with

URL url = new URL(SPLUNK_HEC_URL + "/services/collector/event");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setRequestProperty("Authorization", "Splunk " + SPLUNK_HEC_TOKEN);
connection.setDoOutput(true);
// ...

and it was successful, but I want to make it work with the .properties approach.
HI @ITWhisperer As you can see in the result, StartTime is sorted but the business date is coming as 11/07/2024 in front of it. It is not sorted.
Splunk is not good at finding what isn't there - you have to tell Splunk (by creating an event in some way) what the expected data is and compare that to the actual data that is received. For example, you could have a lookup file of expected accounts, or in your case, since you seem to know which accounts you are interested in and there are only a few, you could use makeresults to generate corresponding events. You would then append this list to the list of accounts you are finding in the logs and discount those which are in the logs, leaving you just the accounts which aren't in the logs.
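A hedged sketch of that approach (the index, sourcetype, and account names here are hypothetical, adapt them to your data): build the expected accounts with makeresults, append the accounts actually seen in the logs, then keep only accounts that never appear with a "log" origin:

```
| makeresults count=3
| streamstats count AS n
| eval account=case(n=1, "svc_backup", n=2, "svc_etl", n=3, "svc_report")
| eval origin="expected"
| fields account origin
| append
    [ search index=main sourcetype=auth_logs
      | stats count BY account
      | eval origin="log" ]
| stats values(origin) AS origin BY account
| where isnull(mvfind(origin, "log"))
```

With a longer expected list, a lookup file plus inputlookup would replace the makeresults block.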
I use Splunk Enterprise 9.0.4 and I tried adding a _meta field, which didn't work. I also tried adding INGEST_EVAL to transforms and sending the data again; still no luck identifying the source.
Hi @hieuba6868 , I am sending OpenTelemetry data to a heavy forwarder and the HF forwards the data to the indexers. When I look at the field 'splunk_server' I can see only the names of the indexers. If I look at the data I can see the name of the otel source. In my current scenario I want to know which HF is sending the data. Regards, Pravin
Check out: [..] The search head must be at the same or a higher level than the search peers. See the note later in this section for a precise definition of "level" in this context. [..] System requirements and other deployment considerations for distributed search - Splunk Documentation
@hiepdao whilst on-prem it should be fine, but you may need to check if the lib ever needs an update. The best practice, especially if you ever move to Cloud SOAR, would be to create an app for the actions requiring pandas and then package the pandas .whl file as a dependency to make it more portable.
Please share some anonymised sample events in code blocks (using the </> button above) so we can see what you are dealing with.
Please give a detailed example of what you want showing why where uptime=0 doesn't work for you.
Please show your raw event in a codeblock (using the </> button)
Please show the results, not the search.
Hi @Athira , try to follow my approach using stats instead of join, applied to your conditions: index=source "status for : *" "Not available" | rex "status for : (?&lt;ORDERS&gt;.*?)" | append [ search Message="Request for : *" | rex "data=[A-Za-z0-9-]+\|(?P&lt;ORDERS&gt;[\w\.]+)" | rex "\"unique\"\:\"(?P&lt;UNIQUEID&gt;[A-Z0-9]+)\"" ] | stats values(UNIQUEID) AS UNIQUEID BY ORDERS If you have more values for UNIQUEID and you want a row for each one, you can add the statement | mvexpand UNIQUEID. As I said, this solution has only one limit: the subsearch must return a maximum of 50,000 results. Ciao. Giuseppe
Update: since Splunk 9.2, regex_cpu_profiling in limits.conf defaults to true. regex_cpu_profiling = &lt;boolean&gt; * Enable CPU time metrics for RegexProcessor. Output will be in the metrics.log file. Entries in metrics.log will appear as per_host_regex_cpu, per_source_regex_cpu, per_sourcetype_regex_cpu, per_index_regex_cpu. * Default: true
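To actually look at those metrics, a search along these lines could surface the heaviest regex consumers per sourcetype (the cpu field name is an assumption here, verify it against your own metrics.log entries before relying on it):

```
index=_internal source=*metrics.log* group=per_sourcetype_regex_cpu
| stats sum(cpu) AS total_regex_cpu BY series
| sort - total_regex_cpu
```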
Hello ITW, thank you for the reply. Where Uptime=0 won't resolve it, because during a 1 day span some component_hostnames have been up for a few seconds, e.g. 1.0000 or 5.0000. This means they can't be counted as permanent downtime. My query should look only for component_hostnames which had no Uptime value other than 0 in a span of 1 day. Stives
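One possible sketch for that requirement (the index name is assumed, the field names are taken from the thread): if the maximum Uptime for a host over the day is 0, then every sample for that host was 0, which is exactly the "no value other than 0" condition:

```
index=myindex earliest=-1d@d latest=@d
| stats max(Uptime) AS max_uptime BY component_hostname
| where max_uptime=0
```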