I'll try to explain it with a basic example. As the output of a stats command I have:

detection | query
search1   | google.com yahoo.com
search2   | google.com bing.com

I want to find which queries are not detected by both search1 and search2. Alternatively, getting rid of the queries that appear in both searches; either way works. For example, search1 detects yahoo.com whereas search2 doesn't, and vice versa with bing.com. I thought about grouping by query instead of by search, but the problem is I have dozens or even hundreds of queries. Any thoughts? Cheers
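One common pattern for this kind of set comparison (a sketch, assuming the stats output has fields named detection and query, with one row per detection/query pair) is to group by query and count distinct detections:

```
| stats values(detection) as detections dc(detection) as detection_count by query
| where detection_count < 2
```

Rows that survive the where clause are queries seen by only one of the two searches; flipping the condition to detection_count=2 would instead keep (or let you discard) the overlap.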
Hi Splunkers, I have a doubt about underscores in paths in props.conf. Suppose, in my props.conf, I have: [source::/aaa/bbb/ccc_ddd] As you can see, the path name contains an underscore. Could this be a problem? I mean: can I use the underscore as-is, or do I have to escape it with a backslash?
I've had more consistent results by putting the trigger condition in the search and having the alert trigger if the number of results is not zero.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=_time
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1
Hi @LuanNguyen , yes, you can use an HF as an intermediate forwarder between UFs and IDXs. The number matters only for correctly dimensioning the reference hardware. First, I suggest engaging a Splunk Architect for this job. Second, avoid a single point of failure by using at least two or three HFs. There isn't a reference hardware spec for the HF; in my experience we started with the default hardware reference (12 CPUs, 12 GB RAM, 300 GB disk), and then, after analyzing the use of these resources, we decided to add some CPUs. In addition, you should define whether these HFs are only concentrators or whether they also perform the parsing, merging and typing phases; the parsing phase especially matters, because many transformations require more resources. Also, if you have many UFs, you may prefer three or four HFs instead of two larger ones, to avoid the network interfaces becoming the bottleneck. As I said, this design requires at least a Splunk Architect or Splunk PS. Ciao. Giuseppe
Yes, a heavy forwarder can be used in that manner.  Having only one HF, however, is a single point of failure that could lead to data loss if it is unavailable.  Be sure to set up at least 2 intermediate forwarders. See https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat/ for how to configure the HFs for better performance in this situation. What specific questions do you have about the configuration?
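On the UF side, spreading load across multiple intermediate HFs is just a matter of listing them in outputs.conf (a minimal sketch; the host names and port are placeholders, not from this thread):

```
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
# Auto load balancing is the default; this controls how often
# the forwarder switches between the listed targets (seconds).
autoLBFrequency = 30
```

With two or more servers listed, the UF automatically fails over if one HF becomes unavailable, which addresses the single-point-of-failure concern above.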
I wonder whether a Heavy Forwarder can be the intermediate instance between 1000 Universal Forwarders and 1000 Indexers? Hardware resources are assumed to be unlimited; the problem is only the configuration. Any documentation or references would be a big help. Thank you very much!
That's correct, because the label has to be unique, and in this case it will not generate a unique label. I would suggest setting the label with the host field as well, because the host name already tells you whether it's QA, Prod, or Dev. I hope this helps!!!
``` Set a flag based on sourcetype ```
| eval flag=if(sourcetype="ma",1,2)
``` Get single event for each ParentOrderID by sourcetype (dedup) ```
| stats values(flag) as flag by ParentOrderID sourcetype
``` Add flags from both sourcetypes ```
| stats sum(flag) as flags by ParentOrderID
``` Count each type of flag ```
| stats count by flags
``` flags is 1 for ma only, 2 for cs only, 3 for both ma and cs ```
Hi yuanliu Thank you for the feedback. It's perfect!    
Okay, I guess then nullQueue will even work with /event endpoint.   Thanks @PickleRick 
@ITWhisperer thank you. I am trying to get the total execution id count across the two different sourcetypes where the parent id is equal. By design, the sourcetype=ma execution count will be higher than sourcetype=cs. But I want to get the execution count of sourcetype=ma events that have been sent to sourcetype=cs.
@gcusello , any inputs from your end? I can still see events being ingested with the password information present in them.
@KothariSurbhi , thank you for your prompt response. Actually it needs to be updated for each and every search, and all users want the default to be 20 instead of 5. Our Search Head is hosted in Cloud, and I tried to create an app with ui-prefs.conf, but most of the time I got an error during the app vetting process. At some point the app was deployed successfully and we restarted the Search Head, but when we navigated back and checked the max lines, it was still the same. display.events.maxLines = 20 I am able to set it in the default directory, whereas when I do it from local I get an error. So kindly let me know how to achieve this.
Hello All, we have the following log flow from FortiGate to Splunk: FortiGate Analyzer > Syslog server with UF > Deployment server > Search Head / Indexer. Kindly suggest how I can get logs using the Fortinet add-on on the indexer. Will I have to install the Fortinet add-on on the syslog server UF as well? And which data source needs to be selected on the indexer?
Hello, yes, it seems I have run into the same problem as well. It says it is using Python v2 as opposed to version 3. It gives two options: remove the application called Splunk Visual Exporter, or update Python to version 3. Since this is a SaaS service, this is usually handled by the vendor (Splunk), since we don't manage the backend. Is there a way to update the existing application to a higher version? I'm not sure whether removing the application would break something. Todd
Because when I try it in a Python program I still get the same error.
What's that function for? And how do I add it to the Python program? Is it like this?
Sub-searches, e.g. those used by join, are limited, so you could try combining the initial searches like so:

index=india (sourcetype=ma NOT (source=*OPT* OR app_instance=MA_DROP_SESSION OR "11555=Y-NOBK" OR fix_applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) stream=Outgoing app_instance=UPSTREAM "clientid=XAC*") OR (sourcetype=cs NOT (source=*OPT* OR "11555=Y-NOBK" OR applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) app_instance=PUBHUB stream=Outgoing "clientid=XAC" "sourceid=AX_DN_XAC")

Next you have to work out what is meant by your dedup. For example, if you rename fix_execID as execID, you could do your dedup like this:

| stats count by execID ParentOrderID sourcetype

The next problem is your join. Apart from the fact that joins are best avoided in the first place (hence the combined initial search), your two searches do not return ParentOrderID since they both end with stats count, so the only field you have to join on is count, and I suspect this is not what you require?
Hi, I am trying to get the execution count based on the parentIDs over two different data sets. Please could you review and suggest ?  I would like to see what's execution count  between (sourcetype=cs, sourcetype=ma) , only the field ParentOrderID is common between cs, ma sourcetype. Note: daily close to ~10Million events are loaded  into splunk and unique execution will be 4Million.Also, sometime the join query is getting auto-canceled. SPL: index=india sourcetype=ma NOT (source=*OPT* OR app_instance=MA_DROP_SESSION OR "11555=Y-NOBK" OR fix_applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) stream=Outgoing app_instance=UPSTREAM "clientid=XAC*" | dedup fix_execID,ParentOrderID | stats count | join ParentOrderID [ search index=india sourcetype=cs NOT (source=*OPT* OR "11555=Y-NOBK" OR applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) app_instance=PUBHUB stream=Outgoing "clientid=XAC" "sourceid=AX_DN_XAC" | dedup execID,ParentOrderID | stats count] Thanks, Selvam.
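Since ParentOrderID is the only common field, one join-free sketch (assuming the combined base search shown in the reply above, and that the exec ID field is fix_execID in ma and execID in cs — the placeholder filters are not literal) would be:

```
index=india ((sourcetype=ma <ma filters>) OR (sourcetype=cs <cs filters>))
| eval execID=coalesce(fix_execID, execID)
| stats dc(execID) as exec_count by ParentOrderID sourcetype
| xyseries ParentOrderID sourcetype exec_count
```

This gives one row per ParentOrderID with the distinct execution count from each sourcetype side by side, so the ma and cs counts can be compared without a join or its auto-cancellation issues.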
Hi, try preview=true. It should look like this:

curl -k -u admin:pass "https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results?preview=true"