All Topics



After configuring the Content Pack for VMware, I repeatedly get "duplicate entity aliases found". We are also collecting data with TA-Nix. How can I fix the duplicate entity alias issue? I am running ITE 4.18.1 and the Splunk App for Content Packs 2.10.
So I am creating a dashboard and I keep getting this error: Error in 'where' command: The expression is malformed. Expected ). This is what I have: | loadjob savedsearch="name:search:cust_info" | where AccountType IN ($AccountType$) I created a multiselect filter on AccountType and I want the SPL to query on the selected values. What could I be missing, or is there another way to filter on AccountType?
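A common cause of the "Expected )" error with a multiselect token is that the selected values reach the search unquoted, so the expression inside `IN (...)` is malformed once the strings are substituted. A minimal sketch of one usual fix, assuming Simple XML: let the token quote and comma-separate the values itself, then filter with `search`, which accepts the `IN` operator directly (the populating search here is an assumption for illustration):

```
<input type="multiselect" token="AccountType">
  <label>Account Type</label>
  <!-- wrap every selected value in quotes and join them with commas -->
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
  <fieldForLabel>AccountType</fieldForLabel>
  <fieldForValue>AccountType</fieldForValue>
  <search>
    <query>| loadjob savedsearch="name:search:cust_info" | stats count by AccountType</query>
  </search>
</input>
```

The panel search then becomes `| loadjob savedsearch="name:search:cust_info" | search AccountType IN ($AccountType$)`.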
I am getting this error: Error in 'EvalCommand': Type checking failed. '/' only takes numbers. Here are the lines of SPL: | stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin | eventstats sum("SumBalances") as total_balance | eval percentage_in_bin = round(("SumBalances" / total_balance) *100, 2) What could be causing this? Is there a way to solve this without the / symbol?
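In `eval` (and `eventstats`), double-quoted names are string literals, not field references, so `"SumBalances" / total_balance` tries to divide a string by a number. Field names created with quoted aliases must be referenced in single quotes. A sketch of the corrected pipeline:

```
| stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin
| eventstats sum('SumBalances') as total_balance
| eval percentage_in_bin = round(('SumBalances' / total_balance) * 100, 2)
```

Renaming the stats output to an alias without spaces (e.g. `SumBalances`) also avoids the quoting issue entirely.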
My mv field is named errortype. In that field the counts show "file not found" as 4 and empty as 2. I want to exclude the empty values from the mv field.
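One common way to drop empty entries from a multivalue field is `mvfilter`, which keeps only the values for which the expression is true. A minimal sketch, assuming the field is literally named `errortype`:

```
| eval errortype=mvfilter(errortype!="" AND isnotnull(errortype))
```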
I'll try to explain it with a basic example. As the output of a stats command I have:

detection   query
search1     google.com yahoo.com
search2     google.com bing.com

I want to get the queries that are not detected by both search1 and search2. Alternatively, getting rid of the queries that are in both searches works too. For example, search1 detects yahoo.com whereas search2 doesn't, and vice versa for bing.com. I thought about grouping by query instead of by search, but the problem is I have dozens or even hundreds of queries. Any thoughts? Cheers
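One approach that scales to hundreds of queries is to group by the query and count how many distinct searches detect it; any query seen by fewer than both searches is missed by at least one. A sketch, assuming the stats output has fields named `detection` and `query`:

```
| stats values(detection) as detected_by dc(detection) as num_searches by query
| where num_searches < 2
```

On the example above this keeps yahoo.com (search1 only) and bing.com (search2 only) and drops google.com; inverting the `where` instead keeps only the queries detected by both.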
Hi Splunkers, I have a question about underscores in paths in props.conf. Suppose my props.conf contains: [source::/aaa/bbb/ccc_ddd] As you can see, the path name contains an underscore. Could this be a problem? That is, can I use the underscore as-is, or do I have to escape it with a backslash?
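An underscore has no special meaning in a `source::` stanza, so it matches literally and needs no escaping; the characters that do get special treatment in these stanzas are the wildcards `*` and `...`. For example (the sourcetype name below is just a placeholder):

```
[source::/aaa/bbb/ccc_ddd]
sourcetype = my_custom_sourcetype
```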
I wonder whether a Heavy Forwarder can be the intermediate instance between 1000 Universal Forwarders and 1000 Indexers. Assume hardware resources are unlimited; the question is only about the configuration. Any documentation or references would be a big help. Thank you very much!
Hello All, our log flow from FortiGate to Splunk is as follows: FortiGate Analyzer > syslog server with UF > deployment server > search head/indexer. Kindly suggest how I can ingest these logs using the Fortinet add-on on the indexer. Will I have to install the Fortinet add-on on the syslog server's UF as well? And which data source should be selected on the indexer?
Hi, I am trying to get the execution count based on ParentOrderIDs across two different data sets. Could you please review and suggest? I would like to compare the execution count between sourcetype=cs and sourcetype=ma; only the field ParentOrderID is common to the cs and ma sourcetypes. Note: close to ~10 million events are loaded into Splunk daily, and unique executions will be 4 million. Also, the join query sometimes gets auto-canceled. SPL:

index=india sourcetype=ma NOT (source=*OPT* OR app_instance=MA_DROP_SESSION OR "11555=Y-NOBK" OR fix_applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) stream=Outgoing app_instance=UPSTREAM "clientid=XAC*"
| dedup fix_execID,ParentOrderID
| stats count
| join ParentOrderID
    [ search index=india sourcetype=cs NOT (source=*OPT* OR "11555=Y-NOBK" OR applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) app_instance=PUBHUB stream=Outgoing "clientid=XAC" "sourceid=AX_DN_XAC"
    | dedup execID,ParentOrderID
    | stats count]

Thanks, Selvam.
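At ~10M events/day, `join` will routinely hit its subsearch limits and get auto-canceled; the usual alternative is a single search over both sourcetypes followed by `stats` on the shared key. A simplified sketch, with the per-sourcetype filters omitted for brevity (they would need to be re-added with the appropriate `sourcetype=` qualifiers):

```
index=india (sourcetype=ma OR sourcetype=cs) msgType=8 (execType=1 OR execType=2 OR execType=F)
| stats dc(sourcetype) as sourcetype_count count as execution_count by ParentOrderID
| where sourcetype_count=2
```

This keeps only the ParentOrderIDs present in both data sets without any subsearch.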
Why do I get empty results when I call the REST API results endpoint from Python? The same happens when I use the REST API events endpoint in Python. For your information, the SID is already successfully retrieved by the Python program, and when I use a curl command to fetch the job's results (curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results) the results show on screen without any error. Can you help me with this case? Thank you
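If curl against `/results` works but the Python call comes back empty, the usual culprits are requesting results before the job is finished, or parsing the default XML body while expecting JSON. A minimal sketch of building the request correctly; the base URL and port are assumptions matching the curl command above, and the polling advice is in the comment:

```python
import urllib.parse

BASE = "https://localhost:8089"  # assumed management host/port, as in the curl example

def results_url(sid: str, output_mode: str = "json", count: int = 0) -> str:
    """Build the v2 results URL for a finished search job.

    output_mode=json makes the body easy to parse in Python;
    count=0 requests all result rows instead of the default page size.
    """
    params = urllib.parse.urlencode({"output_mode": output_mode, "count": count})
    return f"{BASE}/services/search/v2/jobs/{urllib.parse.quote(sid)}/results?{params}"

# Before GETting this URL, poll /services/search/v2/jobs/<sid> until the
# job reports isDone=1 (dispatchState DONE); asking for results earlier
# returns an empty result set even though the SID itself is valid.
```

Fetch the built URL with the same `-k -u admin:pass` credentials used in the curl command.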
In an indexer cluster environment, I set the following stanza in the deployment server's serverclass.conf: [Server class: splunk_indexer_master_cluster] stateOnClient = noop Whitelist = <ClusterManagerA> But the _cluster folder under manager-apps disappeared, along with the indexes.conf inside it. Fortunately, indexes.conf remained in the cluster peers' apps, so this was not a problem. If I want to use stateOnClient = noop, how should I maintain the indexes.conf deployed to the cluster on the cluster manager?
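For reference, a hedged sketch of the usual serverclass.conf shape (the class and app names here are placeholders): `stateOnClient` is typically scoped per app, and the deployment server manages apps under etc/apps/ on the client, not etc/manager-apps/, which is why content in that path can be removed out from under it:

```
[serverClass:splunk_indexer_master_cluster]
whitelist.0 = <ClusterManagerA>

[serverClass:splunk_indexer_master_cluster:app:my_manager_base]
stateOnClient = noop
```

A common pattern is to keep indexes.conf in a custom app under etc/manager-apps/ maintained directly on the cluster manager (or via a configuration management tool), and let the manager push it to the peers with the configuration bundle, leaving the deployment server out of the manager-apps path entirely.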
Requirement: the alert needs to trigger only outside the maintenance window, even if a server is down during the maintenance window.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=_time
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
| eval is_server_down=if((host="xx.xx.xxx.xxx" AND count == 0) OR (host="xx.xx.xxx.xxx" AND count == 0) 1, 0 )

Trigger condition:

|search is_maintenance window = 0 AND is_server_down=1

The alert is not getting triggered outside the maintenance window even though one of the servers is down. Can you help me find what is wrong in the query, or suggest another possible solution?
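Two things stand out as likely causes. First, the `if()` in `is_server_down` is missing the comma before its true-branch: `if(<condition> 1, 0)` should be `if(<condition>, 1, 0)` (and the trigger condition writes `is_maintenance window` with a space instead of `is_maintenance_window`). Second, and more fundamentally, `tstats ... by host` only emits rows for hosts that have events, so a fully down host never produces a `count == 0` row at all. A sketch of one workaround, appending a zero-count row per expected host (the host list in `split()` is a placeholder):

```
| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| append [| makeresults | eval host=split("host1,host2", ","), count=0 | mvexpand host]
| stats max(count) as count by host
| eval is_server_down=if(count==0, 1, 0)
```

Hosts with events keep their real count via `max(count)`, while a silent host survives only as its synthetic zero row; the maintenance-window evals from the original query can then follow unchanged.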
Hello, I am facing the same issue as you: I am not receiving email alerts from Splunk. Instead of localhost, what name should I use for the mail server host name? Could you please suggest?
Hello, when using sitimechart instead of timechart, the data changes. I would like to calculate an error percentage, but the system shows 0 or field counts instead. Thanks!
hello, We upgraded our Red Hat 7 to 9 this past Monday, and Splunk stopped sending emails. We were inexperienced and unprepared for this, so we upgraded our Splunk Enterprise from 9.1 to 9.1.3 to see if that would fix it. It did not. Then we upgraded to 9.2; that did not fix it either. I started adding debug mode to everything and found that Splunk would send the emails to Postfix, and the Postfix logs would state the emails were sent. However, after looking at it closer, I noticed the From field of the emails generated by Splunk sendemail looked like splunk@prod, not splunk@prod.mydomain.com (as it was before we upgraded to Red Hat 9). When we use mailx, the From field is constructed correctly, e.g. splunk@prod.mydomain.com. Extra Python debugging does not show the From field but only the user and the domain: 'from': 'splunk', 'hostname': 'prod.mydomain.com'. My stanza in /opt/splunk/etc/system/local/alert_actions.conf: [email] hostname = prod.mydomain.com Does anyone know how to fix this? Is there a setting in Splunk that would make sure the email From field is constructed correctly? It is funny that if you add an incorrect To address Splunk complains, but if Splunk creates an incorrect From address in sendemail, it just hands it to Postfix and lets it sort it out.
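The sender defaults to splunk@<hostname>, so when the short hostname wins the address becomes splunk@prod. alert_actions.conf has an explicit `from` setting that overrides the constructed value; a minimal sketch for /opt/splunk/etc/system/local/alert_actions.conf, with the domain taken from the question:

```
[email]
hostname = prod.mydomain.com
from = splunk@prod.mydomain.com
```

The same value can be set in the UI under Settings > Server settings > Email settings; a restart (or reload of the alert actions endpoint) picks it up.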
Hello, I am not getting emails for the alerts I have set up in Splunk. Can anyone please help?
Can this be done, or is the official Splunk guidance to use an indexer cluster? I'm curious whether there's any currently (potentially) possible method to achieve high availability with only 2 indexers. My reading on indexer clusters has me thinking one needs at minimum 3 licensed Splunk instances; at least, that's what I got from Splunk's documentation: you need one manager node and at least 2 dedicated indexer peers. Where the search head fits into all of that and how it would be supported, I have no clue. I'm sure everyone can think of a very green reason why one would want a pair of indexers to provide high availability without being forced into an indexer cluster deployment. I can see older posts where apparently this used to be supported, but my understanding now is that the only Splunk-supported high-availability deployment is via indexer clusters. Can anyone confirm?
Hello, I am trying to troubleshoot sendemail.py, since after an upgrade to Red Hat 9 our Splunk stopped sending emails. I understand the command to use the Splunk Python interpreter on the CLI is: splunk cmd python /opt/splunk/etc/apps/search/bin/sendemail.py However, how do I combine the above with the _internal search results below so I can see what the interpreter would provide as feedback (such as errors)?

_raw results of a sendemail: subject="old: : $: server-prod - AlertLog_Check - 4 Log(s) ", encoded_subject="old: : $: server-prod - AlertLog_Check - 4 Log(s) ", results_link="https://MyWebsite:8080/app/search/@go?sid=scheduler__nobody__search__RMD50fd7c7e5334fc616_at_1712993040_1213", recipients="['sysadmin@MyWebsite.com']", server="localhost"

Any examples would be greatly appreciated. Thanks, a totally blind Splunker with a mission
<input type="dropdown" token="envtoken">
  <label>env</label>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=aaa (source="/var/log/testd.log")
| stats count by host
| eval label=case(match(host, ".*tv*."), "Test", match(host, ".*qv*."), "QA", match(host, ".*pv*."), "Prod")
| dedup label</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>

The dropdown list is bound with Test, QA, and Prod. QA and Prod each have 3 hosts. If I select QA from the dropdown list, will the search include all three hosts? Could you please confirm?
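As written, selecting QA passes a single host value (whichever one survived `dedup label`), not all three, because `fieldForValue` is `host`. One sketch of a fix is to make the token's value a wildcard that matches every host in the environment; this assumes the host names really contain the substrings tv/qv/pv, and the panels would then filter with `host=$envtoken$`:

```
<input type="dropdown" token="envtoken">
  <label>env</label>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>value</fieldForValue>
  <search>
    <query>index=aaa source="/var/log/testd.log"
| stats count by host
| eval label=case(match(host, "tv"), "Test", match(host, "qv"), "QA", match(host, "pv"), "Prod")
| eval value=case(label="Test", "*tv*", label="QA", "*qv*", label="Prod", "*pv*")
| dedup label</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>
```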
Hi Team, I need to extract the values of fields that have multiple values, so I used commands like mvzip, mvexpand, mvindex, and eval. However, the output of my SPL query does not match the count of the interesting field. Could you please assist with this? Here is my SPL query, with output screenshots below.

index="xxx" sourcetype="xxx" source=xxx events{}.application="xxx" userExperienceScore=FRUSTRATED
| rename userActions{}.application as Application, userActions{}.name as Action, userActions{}.targetUrl as Target_URL, userActions{}.duration as Duration, userActions{}.type as User_Action_Type, userActions{}.apdexCategory as useractions_experience_score
| eval x=mvzip(mvzip(Application,Action),Target_URL), y=mvzip(mvzip(Duration,User_Action_Type),useractions_experience_score)
| mvexpand x
| mvexpand y
| dedup x
| eval x=split(x,","), y=split(y,",")
| eval Application=mvindex(x,0), Action=mvindex(x,1), Target_URL=mvindex(x,2), Duration=mvindex(y,0), User_Action_Type=mvindex(y,1), useractions_experience_score=mvindex(y,2)
| eval Duration_in_Mins=Duration/60000
| eval Duration_in_Mins=round(Duration_in_Mins,2)
| table _time, Application, Action, Target_URL, Duration_in_Mins, User_Action_Type, useractions_experience_score
| sort - _time
| search useractions_experience_score=FRUSTRATED
| search Application="*"
| search Action="*"

Query output with the statistics count: [screenshot in original post]
Expected count: [screenshot in original post]
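The count mismatch is very likely the double `mvexpand`: expanding `x` and then `y` separately produces the cross-product of the two zipped fields (every row of `x` paired with every row of `y`), and the subsequent `dedup x` then throws legitimate rows away. A sketch of the usual fix is to zip all six fields into one multivalue field with an explicit delimiter and expand once (field names as in the question):

```
| eval zipped=mvzip(mvzip(mvzip(mvzip(mvzip(Application, Action, "|"), Target_URL, "|"), Duration, "|"), User_Action_Type, "|"), useractions_experience_score, "|")
| mvexpand zipped
| eval parts=split(zipped, "|")
| eval Application=mvindex(parts,0), Action=mvindex(parts,1), Target_URL=mvindex(parts,2), Duration=mvindex(parts,3), User_Action_Type=mvindex(parts,4), useractions_experience_score=mvindex(parts,5)
```

Each event then expands into exactly as many rows as it has user actions, so the totals should line up with the interesting-field count.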