All Posts

Please provide some sample events which demonstrate the issue you have with your search
I am working on Splunk Enterprise Security. | savedsearch "Traffic - Total Count" works fine and gives me the desired output in Search, but when I call it in the dashboard source code it shows no results.

{
    "type": "ds.savedSearch",
    "options": {
        "query": "'| savedsearch \"Traffic - Total Count\"'",
        "ref": "Traffic - Total Count"
    },
    "meta": {
        "name": "Traffic - Total Count"
    }
}

Do I need to do any configuration to get output on this dashboard?
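For reference: in Dashboard Studio, a ds.savedSearch data source normally takes only a ref (the saved report's name) and optionally an app in its options; a query string is not expected there. A minimal sketch of what the data source definition could look like (the key ds_traffic_total is a made-up name):

"dataSources": {
    "ds_traffic_total": {
        "type": "ds.savedSearch",
        "name": "Traffic - Total Count",
        "options": {
            "ref": "Traffic - Total Count"
        }
    }
}

It is also worth checking that the report's permissions are shared so the dashboard can actually see it.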
Hi, By default, if no timestamp exists in an event, Splunk falls back to the timestamp of the previous event. On one hand I do want Splunk to do this, but on the other hand I don't want Splunk to flag it under "Timestamp Parsing Issues" in Data Quality. Is there any way to explicitly tell Splunk to do this? I just don't want Splunk to treat it as an error. Thanks
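A minimal props.conf sketch of one possible way to avoid the warning, assuming a hypothetical sourcetype name and that losing the previous-event fallback is acceptable:

# props.conf (sketch; my_sourcetype is a placeholder)
[my_sourcetype]
# CURRENT skips timestamp extraction entirely and stamps events with
# the current index time, so no Timestamp Parsing Issues are reported.
DATETIME_CONFIG = CURRENT

Note this trades the previous-event fallback for index time, so it only fits if that behaviour is acceptable for your data.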
Thank you for providing the link. Let me confirm once again. My client requires all nodes to be kept in a private subnet. So, by using indexer discovery, I can place both the manager node and the peer nodes in the private subnet, then set up an NLB in the public subnet in front of the manager node, with TLS encryption enabled. In this case, in the forwarders' configuration, I only need to set this NLB address as the manager_uri, correct?
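A minimal outputs.conf sketch on the forwarders under that assumption (the NLB hostname, stanza names, and key are placeholders):

# outputs.conf (sketch)
[indexer_discovery:cluster1]
# Hypothetical NLB DNS name fronting the manager node
manager_uri = https://nlb-manager.example.com:8089
pass4SymmKey = <your_key>

[tcpout:cluster1_indexers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_indexers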
Your stated requirement does not match completely with your examples. For example, some expected outputs have fewer "words" than are available in the "field". Also, is there an unwritten requirement that your "words" begin with a letter but could contain numbers? Making some assumptions derived from your written requirement and expected outputs, you could try something like this:

| makeresults format=csv data="raw
00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details -
001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
01hg34hgh44hghg4 - Exception while calling System A - null"
| rex field=raw " (?<dozenwords>([A-Za-z][A-Za-z0-9]*[^A-Za-z0-9]+){0,11}[A-Za-z][A-Za-z0-9]*)"
Hi, when using email templates, how do I capture the currently configured threshold value and the value that was observed? With the template below, I couldn't get the threshold value or the actual observed value.

Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
Threshold Value: ${latestEvent.threshold}
Actual Observed Value: ${latestEvent.observedValue}

Output:
Hi, same issue for me. I deleted these files:

/opt/splunk/bin/python2.7
/opt/splunk/bin/jp.py

and restarted Splunk. Then the messages disappeared.
We could close this topic. Thanks to Mario 
The issue is with the DB schema. The Appleid field was created earlier and deleted, but due to indexing it might not have been deleted. The new field we are seeing now is AppleId (note the capital I in Id), but there were two fields showing in the backend. We performed a rollover/indexing to have the old field deleted, and that resolved the issue.
I am using JS because I need to embed the input in an HTML section to perform further customizations, and the Splunk input doesn't allow me to do them. Regarding the JS, the problem has something to do with the button and the setToken function: if I comment out the line

setToken(`${id}_token`, value); // <---

the button responds to each click without problems, but when I uncomment the line it triggers only on the first button click and then no more. I cannot understand why.
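One common cause in Simple XML dashboards is that setting a token re-renders the panel containing the button, destroying the element the click handler was bound to. A minimal sketch of a workaround under that assumption (element and token IDs are hypothetical): bind the handler with event delegation so it survives re-renders:

// Minimal sketch for a Simple XML JS extension; #my_button, #my_input,
// and my_token are placeholder names standing in for your own.
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    var tokens = mvc.Components.get('default');
    // Delegated handler: bound to document, so it keeps firing even if
    // the token change causes the panel (and the button) to re-render.
    $(document).on('click', '#my_button', function () {
        var value = $('#my_input').val();
        tokens.set('my_token', value);
    });
});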
I need to track the number of alerts configured under index=idx_sensors by hostname
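It isn't clear whether "alerts configured" means saved alerts whose search targets that index, or alert events stored in it. A rough SPL sketch for the first reading, assuming each alert's search string contains a host=... clause (the rex pattern and the alert_type filter are assumptions):

| rest /servicesNS/-/-/saved/searches
| search search="*index=idx_sensors*" alert_type!=always
| rex field=search "host(?:name)?=(?<hostname>\S+)"
| stats count AS configured_alerts by hostname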
Hi @KT1, if you're speaking of a very long search, try to execute it in the background, so you'll avoid auto-cancellation. Ciao. Giuseppe
Hi @arunsoni, as you can read at https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/AboutSHC, in a SH Cluster the Deployer is relevant only for deploying new apps and updates. During normal running, the Deployer doesn't participate in the activities, because the SHs replicate configurations and lookup data among themselves, coordinated by the SH they elect as "Captain". Ciao. Giuseppe
One comment that could help you with IP-based lookups. When you are creating lookups which contain IP addresses and you need to find something there, you should/could use CIDR-based searches. When you create the lookup, just define that it contains IPs and that the match method is CIDR-based. One example: https://community.splunk.com/t5/Splunk-Search/Using-CIDR-in-a-lookup-table/m-p/35787
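A minimal transforms.conf sketch of such a lookup definition (the stanza, file, and field names are placeholders):

# transforms.conf (sketch)
[whitelist_by_cidr]
# Hypothetical lookup file with a "cidr_range" column holding values
# like 10.0.0.0/8; match_type makes Splunk match IPs against the ranges.
filename = ip_ranges.csv
match_type = CIDR(cidr_range)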
Hi @SN1, as @isoutamo said, the index is defined in the Add-on used. Anyway, Splunk isn't a database, so the field definitions are independent from the index where the logs are stored, and you can see the fields wherever they are stored. If you don't see the fields, check the add-on you used. Ciao. Giuseppe
Splunk has an internal load balancer for UF/HF -> HF/indexer traffic. There are two options to use it. If you have static IPs on your indexers, then you can just create an outputs.conf which contains those. But if the IPs on your indexers are not so static (e.g. they are in the cloud, or you need more indexers frequently), then you could use the indexer discovery feature. This keeps the list of indexers on the manager node; UFs/HFs ask it and can then modify their output targets on the fly. https://docs.splunk.com/Documentation/Splunk/latest/Indexer/indexerdiscovery
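A minimal outputs.conf sketch of the static-IP option (group name and addresses are placeholders); the forwarder's built-in auto load balancing then rotates across the listed indexers:

# outputs.conf (sketch)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Hypothetical indexer addresses
server = 10.0.1.10:9997, 10.0.1.11:9997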
Hi @rahulkumar, as I said, you have to extract the Splunk metadata (host, timestamp, etc.) from the json fields, if present. Then you have to identify the sourcetype from the content of one of the json fields. Finally, you have to remove all but the _raw, which is usually in one field called message or msg, or something else. In this way, you'll have all the metadata to associate to your events and the original raw events to parse using the standard add-ons. When you choose the sourcetype, remember to use the one defined in the related add-on. Ciao. Giuseppe
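A minimal props.conf/transforms.conf sketch of one way to do this with ingest-time eval, assuming the wrapper sourcetype is json_wrapper, the payload lives in a message field, and a Splunk version where json_extract is available in INGEST_EVAL (all names are placeholders):

# props.conf
[json_wrapper]
TRANSFORMS-unwrap = set_host_from_json, set_raw_from_json

# transforms.conf
[set_host_from_json]
# Copy the host value out of the JSON wrapper into Splunk's host metadata
INGEST_EVAL = host := json_extract(_raw, "host")

[set_raw_from_json]
# Replace _raw with the original event carried in the "message" field
INGEST_EVAL = _raw := json_extract(_raw, "message")

Rewriting the sourcetype itself would need a further regex-based transform with DEST_KEY = MetaData:Sourcetype.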
Hi @Karthikeya, this is an inclusion condition; to have an exclusion condition you only need to add NOT before the subsearch:

<your_search> NOT [ | inputlookup whitelisted_ips.csv | fields IP ] | ...

or

<your_search> NOT [ | inputlookup whitelisted_ips.csv | rename ip AS query | fields query ] | ...

You can use this search to exclude the IPs in your lookup from your messages. If you want a search that automatically populates the lookup (if possible), I cannot help you because I don't know your data: you have to create a search that extracts the IP list, saves it in the lookup, and is scheduled, something like this:

<your_search>
| dedup IP
| table IP
| outputlookup whitelisted_ips.csv

Ciao. Giuseppe
We have a big application whose data, together with data from its smaller applications, comes into Splunk. Currently we map FQDNs to index names for the other applications. But this big application wants a single index for all their FQDNs, and they want to differentiate their application data based on sourcetype. As of now we have only one sourcetype which receives data from all the other applications. Example: there is a Fruits application, and within it there are apple, orange, and pineapple applications. They want a single index for the Fruits application and want to differentiate by using sourcetype=apple, sourcetype=orange, and so on. For the remaining applications we simply map FQDN to index name in transforms.conf using lookups and INGEST_EVAL. I can map all Fruits application FQDNs to a single index, but then all logs will be mixed (apple, orange, and so on). How can we differentiate them by using sourcetype? Where and how do I need to write the logic?
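A minimal transforms.conf sketch of one possible approach, assuming the sub-application is recognizable from the FQDN in the host metadata (stanza names and the regex are hypothetical); a TRANSFORMS- entry in props.conf on the incoming sourcetype references one such stanza per sub-application:

# transforms.conf (sketch; repeat per sub-application)
[set_sourcetype_apple]
# Host metadata values carry a "host::" prefix
SOURCE_KEY = MetaData:Host
REGEX = ^host::apple[^.]*\.fruits\.example\.com
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::apple

# props.conf (fruits_raw is the hypothetical incoming sourcetype)
[fruits_raw]
TRANSFORMS-set_fruit_st = set_sourcetype_apple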
Hi, those are always your organization's decisions. Usually there are naming standards which define those index names. The best option is to ask your Splunk admin or look at your internal documentation. One option is to try

| metadata type=hosts index=*

which shows which hosts have sent events to which indexes in your selected time range. r. Ismo