All Posts


I am getting this error:

Error in 'EvalCommand': Type checking failed. '/' only takes numbers.

Here are the lines of SPL:

| stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin
| eventstats sum("SumBalances") as total_balance
| eval percentage_in_bin = round(("SumBalances" / total_balance) * 100, 2)

What could be causing this? Is there a way to solve this without the / symbol?
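In SPL eval expressions, double-quoted names are string literals rather than field references, which can produce exactly this type-checking error. A possible fix, as an untested sketch, is to use single quotes for field names in eval (needed for names containing spaces):

```
| stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin
| eventstats sum('SumBalances') as total_balance
| eval percentage_in_bin = round(('SumBalances' / total_balance) * 100, 2)
```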
My mv field is named errortype. In errortype, the counts show "file not found" as 4 and empty as 2. I want to exclude the empty values from the mv field.
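Assuming the multivalue field is literally named errortype, one common approach is mvfilter, which keeps only the values matching a predicate (a sketch, not tested against your data):

```
| eval errortype=mvfilter(errortype!="")
```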
Thank you! Same issue here on Splunk 9.2.1. Splunk was NOT starting at boot-start (with init.d) but was starting correctly when launched manually. After commenting out the mentioned line, it now boots properly with the VM (Oracle Linux). I am going to open a case with support to inform them about it.
Solved it myself; underscores cause no problem.
I'll try to explain it with a basic example. As the output of a stats command I have:

detection   query
search1     google.com yahoo.com
search2     google.com bing.com

I want to get the queries that are not detected by both search1 and search2. Or alternatively, get rid of the queries that appear in both searches; either way works. For example, search1 is detecting yahoo.com whereas search2 isn't, and vice versa with bing.com. I thought about grouping by query instead of by search, but the problem is I have dozens or even hundreds of queries. Any thoughts? Cheers
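One way to sketch this, assuming each event carries fields named detection and query as in the example above: group by query and keep only the queries seen by fewer than two detections:

```
| stats values(detection) as detections, dc(detection) as detection_count by query
| where detection_count < 2
```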
Hi Splunkers, I have a doubt about underscores in paths in props.conf. Suppose, in my props.conf, I have: [source::/aaa/bbb/ccc_ddd] As you can see, the path name contains an underscore. Could this be a problem? I mean: can I use the underscore as-is, or do I have to escape it with a backslash?
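For what it's worth, underscores are matched literally in source:: stanzas; only * and ... act as wildcards there, so no escaping should be needed. A minimal sketch (the setting shown is just an illustrative placeholder):

```
# props.conf - the underscore is taken literally, no backslash needed
[source::/aaa/bbb/ccc_ddd]
SHOULD_LINEMERGE = false
```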
I've had more consistent results by putting the trigger condition in the search and having the alert trigger if the number of results is not zero.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=_time
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1
Hi @LuanNguyen , yes, you can use an HF as an intermediate forwarder between UFs and IDXs. The number is relevant only for correctly dimensioning the reference hardware. First of all, I suggest engaging a Splunk Architect for this job. Then I suggest avoiding a single point of failure by using at least two or three HFs. There isn't a hardware reference for the HF; in my experience we started with the default HW reference (12 CPUs, 12 GB RAM, 300 GB disk), and then, after analyzing the use of these resources, we decided to add some CPUs. In addition, you should define whether these HFs are only concentrators or whether they also perform the parsing, merging and typing phases, especially the parsing phase: many transformations require more resources. Also, if you have many UFs, you might prefer three or four HFs instead of two with more resources, to avoid the network interfaces becoming the bottleneck. As I said, this design requires at least a Splunk Architect or a Splunk PS. Ciao. Giuseppe
Yes, a heavy forwarder can be used in that manner.  Having only one HF, however, is a single point of failure that could lead to data loss if it is unavailable.  Be sure to set up at least 2 intermediate forwarders. See https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat/ for how to configure the HFs for better performance in this situation. What specific questions do you have about the configuration?
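As a rough illustration of the two-intermediate-forwarder setup (hostnames here are hypothetical), the outputs.conf on each UF might look like:

```
# outputs.conf on each UF - load-balance across two intermediate HFs
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
```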
I wonder if a Heavy Forwarder can be the intermediate instance between 1000 Universal Forwarders and 1000 Indexers? The hardware resources are assumed to be unlimited; the problem will only be about the configuration. Any documentation or references would be a big help. Thank you very much!
That's correct, because the label has to be unique; in this case it will not generate a unique label. I would suggest setting the label with the host field as well, because the host name already tells you whether it's QA, Prod, or Dev. I hope this helps!
``` Set a flag based on sourcetype ```
| eval flag=if(sourcetype="ma",1,2)
``` Get a single event for each ParentOrderID by sourcetype (dedup) ```
| stats values(flag) as flag by ParentOrderID sourcetype
``` Add flags from both sourcetypes ```
| stats sum(flag) as flags by ParentOrderID
``` Count each type of flag ```
| stats count by flags
``` flags is 1 for ma only, 2 for cs only, 3 for both ma and cs ```
Hi yuanliu Thank you for the feedback. It's perfect!    
Okay, I guess then nullQueue will even work with /event endpoint.   Thanks @PickleRick 
@ITWhisperer thank you. I am trying to get the total execution id count across the different sourcetypes where the parent id is equal. By design, the sourcetype=ma execution count will be higher than sourcetype=cs. But I want to get the execution count of sourcetype=ma that has been sent to sourcetype=cs.
@gcusello , any inputs from your end, since I can still see the events being ingested with the password information present in them.
@KothariSurbhi , thank you for your prompt response. But actually it needs to be updated for each and every search, and all users want the default to be 20 instead of 5. Our Search Head is hosted in Cloud, and I have tried to create an app with ui-prefs.conf, but most of the time I got an error during the app vetting process. At some point the app was deployed successfully and we restarted the Search Head, but when we navigated and checked the max lines, it was still the same. display.events.maxLines = 20 I am able to do it in the default directory, whereas when I do it from local it gets an error. So kindly let me know how to achieve it.
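For reference, a minimal ui-prefs.conf sketch of the setting being discussed (Splunk Cloud app vetting rules may still constrain where this can live):

```
# ui-prefs.conf in the app's default directory
[default]
display.events.maxLines = 20
```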
Hello All, we have log flow from FortiGate to Splunk as follows: FortiGate Analyzer > Syslog server with UF > Deployment server > Search Head / Indexer. Kindly suggest how I can get logs using the Fortinet add-on on the indexer. Will I have to install the Fortinet add-on on the syslog server UF as well? And what data source needs to be selected on the indexer?
Hello, yes, it seems I have run into the same problem as well. It says it is using Python v2 as opposed to version 3. It gives two options: 1) remove the application called Splunk Visual Exporter, or 2) update Python to version 3. Since this is a SaaS service, this is usually handled by the vendor (Splunk), since we don't manage the backend. Is there a way to update the existing application to a higher version? I'm not sure whether removing the application would break something. Todd
Because when I try it in a Python program I still get the same error.