All Posts

Wildcards don't work everywhere, and the eval function may be one of those places. Try using isnotnull() instead:

index=* *jupiter* | stats count as "Total Traffic" count(eval(isnotnull(attack_type))) as "Attack Traffic"

On the subject of wildcards, avoid index=* except in special circumstances. Also, a leading wildcard in the search command (as in "*jupiter*") is very inefficient.
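If a ratio would also be useful in the panel, a hedged sketch extending that stats call (the index name is a placeholder; the original post's index=* is kept out per the advice above):

```
index=your_index *jupiter*
| stats count as "Total Traffic" count(eval(isnotnull(attack_type))) as "Attack Traffic"
| eval "Attack %" = round('Attack Traffic' / 'Total Traffic' * 100, 1)
```

Note the single quotes around field names on the right-hand side of eval, which SPL requires for field names containing spaces.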
I am trying to build a Splunk query for total traffic vs. attack traffic so I can keep it in a dashboard panel. We have a field called attack_type which contains all the attacks, and those are dynamic (new ones arrive daily). For the last 24 hours we have 1000 total events and 400 attack_type events. How can I show this in a single dashboard panel? I tried this query:

index=* *jupiter* | stats count as "Total Traffic" count(eval(attack_type="*")) as "Attack Traffic"

but I am getting this error: Error in 'stats' command: The eval expression for dynamic field 'attack_type=*' is invalid. Error='The expression is malformed. An unexpected character is reached at '*'.'. Please help me in this regard.
Where are you seeing the 400 error? The HEC client said timeout in this post; it didn't seem to mention 400 Bad Request. If it is a format problem, then something is fundamentally wrong with the payload, or you are sending to the wrong URL, etc. Anyway, I suggest you create your own post with any info and config, especially the HEC URL config and any Splunk internal logs that align. Otherwise, try support or the GitHub issues. I found this issue that sounded somewhat similar, but it is hard to tell without you providing config details or logs: https://github.com/splunk/splunk-connect-for-syslog/issues/1329
No, it is not busy; it is getting basically a 400 error, which suggests a format problem. If I set the only default destination to d_hec_default or d_hec_other (basically, if I send to only one of the Splunk HEC sites), it is fine. It only gets the busy or format error on the second site when both are configured, so if I switch them, whichever is the first site works and whichever is the second site doesn't. So no, it is not that it is busy or that it cannot ingest the logs; it is that SC4S doesn't seem to work well with multiple destinations. I am looking for help from someone who has done that.
Please provide some sample events which demonstrate the issue you have with your search
I am working on Splunk Enterprise Security. | savedsearch "Traffic  - Total Count" works fine and gives me the desired output in search, but when I call it in the dashboard source code, no results show:

{
    "type": "ds.savedSearch",
    "options": {
        "query": "'| savedsearch \"Traffic - Total Count\"'",
        "ref": "Traffic - Total Count"
    },
    "meta": {
        "name": "Traffic - Total Count"
    }
}

Do I need to do any configuration to get output on this dashboard?
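For comparison, a Dashboard Studio ds.savedSearch data source normally references the saved search by name via the "ref" option alone, without an inline "query" string. A minimal sketch, assuming the saved search name matches exactly (including spaces) and the search is shared with the dashboard's app:

```
{
    "type": "ds.savedSearch",
    "options": {
        "ref": "Traffic - Total Count"
    },
    "meta": {
        "name": "Traffic - Total Count"
    }
}
```

Note that the original post names the saved search with two spaces ("Traffic  - Total Count") but the JSON uses one; a name mismatch like that would also return no results.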
Hi, by default, if no timestamp exists in an event, Splunk falls back to the timestamp of the previous event. On one hand, I do want Splunk to do this, but on the other hand I don't want Splunk to flag it as a "Timestamp Parsing Issue" in Data Quality. Is there any way to explicitly tell Splunk to do this? I just don't want Splunk to treat it as an error. Thanks
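One way to reduce spurious timestamp warnings is to pin down exactly where and how the timestamp appears, so Splunk only attempts extraction where you expect it. A hedged props.conf sketch; the stanza name and format string are placeholders for your actual sourcetype and data:

```
# props.conf -- sourcetype name and TIME_FORMAT are placeholders
[my_sourcetype]
# Anchor the timestamp at the start of the event
TIME_PREFIX = ^
# Exact strptime format of the timestamp, when present
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# Don't scan past the expected timestamp width
MAX_TIMESTAMP_LOOKAHEAD = 25
```

This does not change the fallback behavior for events with no timestamp at all (they still inherit the previous event's time), but it stops Splunk from misreading other text as a malformed timestamp, which is a common source of the Data Quality messages.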
Thank you for providing the link. Let me confirm once again. My client requires all nodes to be kept in a private subnet. So, by using indexer discovery, I can place both the manager node and the peer nodes in the private subnet, then set up an NLB in the public subnet in front of the manager node, with TLS communication encryption enabled. In that case, in the forwarders' configuration, I only need to set this NLB address as the manager_uri, correct?
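As a hedged sketch of what the forwarder-side outputs.conf could look like in that setup (the NLB hostname, group names, and key are placeholders, not values from this thread):

```
# outputs.conf on each forwarder -- names below are placeholders
[indexer_discovery:cluster1]
# The NLB fronting the manager node's management port
manager_uri = https://nlb.example.com:8089
pass4SymmKey = <key shared with the manager node>

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group
```

The forwarder asks the discovery endpoint for the current peer list, so only the NLB address needs to be reachable in the forwarder's config.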
Your stated requirement does not completely match your examples. For example, some expected outputs have fewer "words" than are available in the "field". Also, is there an unwritten requirement that your "words" begin with a letter but may contain numbers? Making some assumptions derived from your written requirement and expected outputs, you could try something like this:

| makeresults format=csv data="raw 00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations 001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details - 001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1; 00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z 01hg34hgh44hghg4 - Exception while calling System A - null"
| rex field=raw " (?<dozenwords>([A-Za-z][A-Za-z0-9]*[^A-Za-z0-9]+){0,11}[A-Za-z][A-Za-z0-9]*)"
Hi, when using email templates, how do I capture the currently configured threshold value and the value that was observed? With the template below, I could not get the threshold value or the actual observed value:

Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
Threshold Value: ${latestEvent.threshold}
Actual Observed Value: ${latestEvent.observedValue}

Output:
Hi, I had the same issue. I deleted these files:

/opt/splunk/bin/python2.7
/opt/splunk/bin/jp.py

and restarted Splunk. Then the messages disappeared.
We could close this topic. Thanks to Mario 
The issue is with the DB schema. The Appleid field was created earlier and deleted, but due to indexing it might not have actually been removed. The new field we see now is AppleId (note the capital I in Id), but two fields were showing in the backend. We performed a rollover/reindexing to have the old field deleted, and that resolved the issue.
I am using JS because I need to embed the input in an HTML section to perform further customizations, and the Splunk input doesn't allow me to do them. Regarding the JS, the problem has something to do with the button and the setToken function: if I comment out the line setToken(`${id}_token`, value); // <--- the button responds to each click without problems, but when I uncomment the line it triggers only on the first button click and then no more. I cannot understand why.
I need to track the number of alerts configured under index=idx_sensors by hostname
Hi @KT1, if you're speaking of a very long search, try executing it in the background, so you'll avoid auto-cancellation. Ciao. Giuseppe
Hi @arunsoni, as you can read at https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/AboutSHC, in a SH cluster the Deployer is relevant only for deploying new apps and updates. During normal running, the Deployer doesn't participate in the activities, because the SHs replicate configurations and lookup data among themselves, coordinated by the SH they elect as "Captain". Ciao. Giuseppe
One comment that could help you with IP-based lookups: when you create lookups that contain IP addresses and you need to match against them, you should/could use CIDR-based searches. When you create the lookup, just define that it contains IPs and that the match method is CIDR-based. One example: https://community.splunk.com/t5/Splunk-Search/Using-CIDR-in-a-lookup-table/m-p/35787
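A minimal sketch of that CIDR match setup; the lookup name, CSV file, and field names below are placeholders:

```
# transforms.conf -- lookup, file, and field names are placeholders
[ip_ranges_lookup]
filename = ip_ranges.csv
# Match the cidr_range column (e.g. 10.0.0.0/8) against an IP value
match_type = CIDR(cidr_range)
```

With that in place, a search can resolve an event's IP to its range:

| lookup ip_ranges_lookup cidr_range AS src_ip OUTPUT zone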
Hi @SN1, as @isoutamo said, the index is defined in the Add-on used. Anyway, Splunk isn't a database, so field definitions are independent of the index where logs are stored, and you can see the fields wherever the logs are stored. If you don't see the fields, check the add-on you used. Ciao. Giuseppe
Splunk has an internal load balancer for UF/HF -> HF/indexer traffic. There are two options to use it. If you have static IPs on your indexers, you can just create an outputs.conf that lists them. But if the IPs on your indexers are not static (e.g. they are in the cloud, or you frequently add indexers), you could use the indexer discovery feature. This keeps the list of indexers on the manager node; UFs/HFs query it and can then adjust their output targets on the fly. https://docs.splunk.com/Documentation/Splunk/latest/Indexer/indexerdiscovery
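The first option (static indexer IPs) can be sketched as a plain outputs.conf list; the hostnames below are placeholders:

```
# outputs.conf on the UF/HF -- hostnames are placeholders
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Splunk's built-in load balancing rotates across the listed peers
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```

With several servers in one tcpout group, the forwarder switches targets periodically on its own, which is the internal LB behavior described above.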