All Posts


Hey! I still get the same error, but thank you for trying! Let me know if something else clicks.
The IN operator only works in the search command. In the where command you must use the in() function:
| loadjob savedsearch="name:search:cust_info" | where in(AccountType,$AccountType$)
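For example, with literal values substituted for the token (a minimal sketch; the account type values are hypothetical):
| loadjob savedsearch="name:search:cust_info"
| where in(AccountType, "Savings", "Checking")
Note that the values passed to in() need to be quoted strings; if a multiselect token supplies them unquoted, in() will treat them as field names rather than values.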
Hi @mbozbura, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the  visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you! 
So I am creating a dashboard and I keep getting this error:
Error in 'where' command: The expression is malformed. Expected ).
This is what I have:
| loadjob savedsearch="name:search:cust_info" | where AccountType IN ($AccountType$)
I created a multiselect filter on AccountType and I want the SPL to filter on the values selected there. What could I be missing, or is there another way to filter on AccountType?
I have the same issue, and I have a valid STIX2. Did you find a solution for this?
Thank you so much! That worked! 
The eval is trying to divide a string literal ("SumBalances") by a field, which won't work. Replace the double quotes with single quotes or remove the double quotes.
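Applied to the SPL in the question below, the corrected eval would look like this (a sketch; only the quoting changes):
| stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin
| eventstats sum("SumBalances") as total_balance
| eval percentage_in_bin = round(('SumBalances' / total_balance) * 100, 2)
In eval, single quotes refer to a field name, while double quotes create a string literal.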
I am getting this error:
Error in 'EvalCommand': Type checking failed. '/' only takes numbers.
Here are the relevant lines of SPL:
| stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin
| eventstats sum("SumBalances") as total_balance
| eval percentage_in_bin = round(("SumBalances" / total_balance) * 100, 2)
What could be causing this? Is there a way to solve this without the / symbol?
My multivalue field is named errortype. In errortype, the counts show "file not found" as 4 and empty as 2. I want to exclude the empty values from the mv field.
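One possible approach (a sketch, assuming the multivalue field is named errortype as described): use mvfilter in an eval to drop the empty values before counting.
| eval errortype = mvfilter(errortype != "")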
Thank you! Same issue here on Splunk 9.2.1. Splunk was NOT starting at boot-start (with init.d) but was starting correctly when launched manually. After commenting out the mentioned line, it now boots properly with the VM (Oracle Linux). I am going to open a case with support to inform them about it.
Solved it myself; underscores cause no problem.
I'll try to explain it with a basic example. As the output of a stats command I have:

detection    query
search1      google.com
             yahoo.com
search2      google.com
             bing.com

I want to get the queries that are not being detected by both search1 and search2. Or else, getting rid of the queries that are in both searches; either way works. For example, search1 is detecting yahoo.com whereas search2 isn't, and vice versa with bing.com. I thought about grouping by query instead of by search; the problem is I have dozens or even hundreds of queries. Any thoughts? Cheers
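A possible approach (a sketch, assuming the fields are named detection and query as in the table above): group by query, count the distinct detections, and keep only the queries that were not seen by both searches.
| stats dc(detection) as detection_count values(detection) as detections by query
| where detection_count < 2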
Hi Splunkers, I have a doubt about underscores and paths in props.conf. Suppose, in my props.conf, I have: [source::/aaa/bbb/ccc_ddd] As you can see, the path contains an underscore. Could this be a problem? I mean: can I use the underscore without problems, or do I have to escape it with a backslash?
I've had more consistent results by putting the trigger condition in the search and having the alert trigger if the number of results is not zero.
| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=_time
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1
Hi @LuanNguyen, yes, you can use an HF as an intermediate forwarder between UFs and IDXs. The number is relevant only for correctly dimensioning the reference hardware. First, I suggest engaging a Splunk Architect for this job. Then, I suggest avoiding a single point of failure by using at least two or three HFs. There isn't a reference hardware spec for the HF; in my experience we started with the default hardware reference (12 CPUs, 12 GB RAM, 300 GB disk) and then, after analyzing the use of these resources, we decided to add some CPUs. In addition, you should define whether these HFs are only concentrators or whether they also perform the parsing, merging, and typing phases; the parsing phase especially matters, because many transformations require more resources. Then, if you have many UFs, you might prefer three or four HFs instead of two larger ones, so that the network interfaces do not become the bottleneck. As I said, this design requires at least a Splunk Architect or Splunk PS. Ciao. Giuseppe
Yes, a heavy forwarder can be used in that manner.  Having only one HF, however, is a single point of failure that could lead to data loss if it is unavailable.  Be sure to set up at least 2 intermediate forwarders. See https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat/ for how to configure the HFs for better performance in this situation. What specific questions do you have about the configuration?
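For reference, a minimal outputs.conf sketch for each universal forwarder, load-balancing across two intermediate heavy forwarders (the host names and port are hypothetical):
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997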
I wonder if a Heavy Forwarder can be the intermediate instance between 1000 Universal Forwarders and 1000 Indexers? The hardware resources are supposed to be unlimited; the problem will only be about the configuration. Any documentation or references would be a big help. Thank you very much!
That's correct, because the label has to be unique, and in this case it will not generate a unique label. I would suggest setting the label with the host field as well, because the host name already tells you whether it's QA, Prod, or Dev. I hope this helps!
``` Set a flag based on sourcetype ```
| eval flag=if(sourcetype="ma",1,2)
``` Get single event for each ParentOrderID by sourcetype (dedup) ```
| stats values(flag) as flag by ParentOrderID sourcetype
``` Add flags from both sourcetypes ```
| stats sum(flag) as flags by ParentOrderID
``` Count each type of flag ```
| stats count by flags
``` Flags is 1 for ma only, 2 for cs only, 3 for both ma and cs ```
Hi yuanliu, thank you for the feedback. It's perfect!