Thank you for providing the link. Let me confirm once again: my client requires all nodes to be kept in a private subnet. So, by using indexer discovery, I can place both the manager node and the peer nodes in the private subnet, then set up an NLB in the public subnet in front of the manager node, with TLS encryption enabled for the communication. In this case, in the forwarders' configuration, I only need to set manager_uri to this NLB, correct?
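If it helps, a minimal sketch of the forwarder-side outputs.conf under that assumption (the NLB DNS name, group names, and key are hypothetical placeholders):

[indexer_discovery:prod_cluster]
manager_uri = https://nlb.example.internal:8089
pass4SymmKey = <same key as configured on the manager node>

[tcpout:prod_indexers]
indexerDiscovery = prod_cluster
useSSL = true

[tcpout]
defaultGroup = prod_indexers

Note that the forwarders only contact the manager (here via the NLB) to fetch the peer list; they still send data directly to the peers on their receiving port, so the peers must also be reachable from the forwarders.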
Your stated requirement does not completely match your examples. For example, some expected outputs have fewer "words" than are available in the "field". Also, is there an unwritten requirement that your "words" begin with a letter but could contain numbers? Making some assumptions derived from your written requirement and expected outputs, you could try something like this:
| makeresults format=csv data="raw
00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details - 001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
01hg34hgh44hghg4 - Exception while calling System A - null"
| rex field=raw " (?<dozenwords>([A-Za-z][A-Za-z0-9]*[^A-Za-z0-9]+){0,11}[A-Za-z][A-Za-z0-9]*)"
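For reference, in the rex above each "word" is a letter followed by letters or digits; the pattern captures the first such word after a space plus up to eleven more, i.e. at most a dozen words, which is why the extracted output can hold fewer words than the field contains.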
Hi
When using email templates, how do I capture the currently configured threshold value and the value that was observed?
With the template below, I couldn't get the threshold value or the actual observed value:
Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
Threshold Value: ${latestEvent.threshold}
Actual Observed Value: ${latestEvent.observedValue}
Output:
The issue is with the DB schema. The Appleid field was created earlier and deleted, but due to indexing it might not have been fully removed. The new field we are seeing now is AppleId (note the capital I in Id), but there are two fields showing in the backend. We performed a rollover/reindex to have the old field deleted, and that resolved the issue.
I am using JS because I need to embed the input in an HTML section to perform further customizations, and a Splunk input doesn't allow me to do them. Regarding the JS, the problem has something to do with the button and the setToken function: if I comment out the line setToken(`${id}_token`, value); the button responds to each click without problems, but when I uncomment the line it triggers only on the first button click and then no more. I cannot understand why.
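For reference, a minimal sketch of the setup described above, assuming setToken wraps the SplunkJS default token model (all element IDs and the token name are hypothetical):

require(['jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function($, mvc) {
    var defaultTokens = mvc.Components.get('default');

    // Hypothetical helper mirroring the setToken call in the post
    function setToken(name, value) {
        defaultTokens.set(name, value);
    }

    // Delegated binding: the handler survives even if the panel's DOM is
    // re-rendered after a token change, which a .on('click') bound directly
    // to the button element would not.
    $(document).on('click', '#my_button', function() {
        var value = $('#my_input').val();
        setToken('example_token', value);
    });
});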
Hi @arunsoni , as you can read at https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/AboutSHC , in a SH Cluster the Deployer is relevant only for deploying new apps and updates. During normal running, the Deployer doesn't participate in the activities, because the SHs replicate configurations and lookup data among themselves, coordinated by one of the SHs elected as "Captain". Ciao. Giuseppe
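As a concrete illustration (the hostname is hypothetical), the only standing link from a member to the Deployer is the fetch URL in each member's server.conf:

[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089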
One comment that could help you with IP-based lookups: when you create a lookup that contains IP addresses and you need to find something there, you should/could use CIDR-based searches. When you create the lookup, just define that it contains an IP field and that the match method is CIDR-based. One example: https://community.splunk.com/t5/Splunk-Search/Using-CIDR-in-a-lookup-table/m-p/35787
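A minimal sketch of what that looks like (lookup name, filename, and field names are hypothetical): in transforms.conf

[ip_zone_lookup]
filename = ip_zones.csv
match_type = CIDR(cidr_range)

and in a search

| lookup ip_zone_lookup cidr_range AS src_ip OUTPUT zone

where ip_zones.csv has a cidr_range column (e.g. 10.0.0.0/8) and a zone column, so one row covers the whole range.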
Hi @SN1 , as @isoutamo said, the index is defined in the Add-on you used. Anyway, Splunk isn't a database, so the field definitions are independent of the index where the logs are stored, and you can see the fields wherever they are stored. If you don't see the fields, check the add-on you used. Ciao. Giuseppe
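If it helps, a quick way to check which fields are actually being extracted (the index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| fieldsummary
| table field count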
Splunk has internal load balancing on the UF/HF -> HF/indexer path. There are two options to use it. If you have static IPs on your indexers, then you can just create an outputs.conf which contains those. But if the indexer IPs are not so static (e.g. they are in the cloud, or you frequently need more indexers), then you can use the indexer discovery feature. This keeps the list of indexers on the manager node; the UFs/HFs ask it and can then modify their output targets on the fly. https://docs.splunk.com/Documentation/Splunk/latest/Indexer/indexerdiscovery
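For completeness, enabling indexer discovery on the manager node is a small server.conf stanza like this sketch (the key is a placeholder, and must match what the forwarders put in their [indexer_discovery:<name>] outputs.conf stanza):

[indexer_discovery]
pass4SymmKey = <yourSecretKey>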
Hi @rahulkumar , as I said, you have to extract the Splunk metadata (host, timestamp, etc.) from the json fields, if present. Then you have to identify the sourcetype from the content of one of the json fields. Lastly, you have to remove everything but the _raw, which is usually in one field called message or msg, or something else. In this way, you'll have all the metadata to associate to your events and the original raw events to parse using the standard add-ons. When you choose the sourcetype, remember to use the one defined in the related add-on. Ciao. Giuseppe
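A rough sketch of one way to do this with INGEST_EVAL, assuming the wrapper json has host and message fields (the stanza names and json paths are hypothetical; order matters, so pull the metadata out before overwriting _raw):

props.conf:
[wrapper_json]
TRANSFORMS-unwrap = set_meta_from_json, set_raw_from_json

transforms.conf:
[set_meta_from_json]
INGEST_EVAL = host := json_extract(_raw, "host")

[set_raw_from_json]
INGEST_EVAL = _raw := json_extract(_raw, "message")

The sourcetype can be rewritten the same way once you know which json field identifies it.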
Hi @Karthikeya , this is an inclusion condition; to have an exclusion condition you only need to add NOT before the subsearch:

<your_search> NOT [ | inputlookup whitelisted_ips.csv | fields IP ]
| ...

or

<your_search> NOT [ | inputlookup whitelisted_ips.csv | rename ip AS query | fields query ]
| ...

You can use this search to exclude the IPs from your lookup from your messages. If you want the search to automatically populate the lookup (if possible), I cannot help you because I don't know your data: you have to create a search that extracts the IP list, saves it to the lookup, and schedule it, something like this:

<your_search>
| dedup IP
| table IP
| outputlookup whitelisted_ips.csv

Ciao. Giuseppe
We have a big application which contains small applications whose data comes into Splunk. Currently we map FQDNs to index names for the other applications. But this big application wants a single index for all of its FQDNs, and they want to differentiate their application data based on sourcetype. As of now we have only one sourcetype which receives data from all the other applications. Example: there is a Fruits application, and there are apple, orange, and pineapple applications in it. They want a single index for the Fruits application and want to differentiate by using sourcetype=apple, sourcetype=orange, and so on. For the remaining applications we simply map FQDN to index name in transforms.conf by using lookups and INGEST_EVAL. I can map all Fruits application FQDNs to a single index, but then all the logs will be mixed (apple, orange, and so on). How can we differentiate by using sourcetype? Where and how do I need to write the logic?
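Purely as an illustration of where such logic could live (the host patterns and sourcetype names are invented), the same INGEST_EVAL mechanism can rewrite sourcetype in transforms.conf, wired up from props.conf like the existing FQDN-to-index mapping:

[set_fruit_sourcetype]
INGEST_EVAL = sourcetype := case(match(host, "apple"), "apple", match(host, "orange"), "orange", match(host, "pineapple"), "pineapple", true(), sourcetype)

Whether a sourcetype rewritten this late in the pipeline still picks up the parsing you expect is something to verify for your setup.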
Hi, those are always your organization's decisions. Usually there are some naming standards which define those index names. The best option is to ask your Splunk admin or look at your internal documentation. One option is to try
| metadata type=hosts index=*
which shows which hosts have sent events to your indexes in your selected time slot. r. Ismo
Other than debugging the JS, I can only question if you need to do this in JS - Splunk inputs do this out of the box. Are you using JS for a specific reason that the out of the box stuff does not handle?
You are trying to get some kind of unstructured learning going on - the cluster command will give you what it thinks are common repetitions of events; how you then take its output to manage your requirement is really beyond the scope of this type of forum.

If you run the search you specified and then cluster it accordingly, you will get some results - I assume you are not getting nothing at all. Given that output, it's really impossible for us to say how you can massage what you are getting into some kind of lookup for the following day.

But the general shape would be: search at midnight for your error/warn/timeout events, cluster the responses, massage that data, and store it as a lookup file; then in the searches you subsequently run, you will also have to cluster those results, massage that data, and perform a lookup against the previously generated lookup (see the sketch after this post).

Without knowing why the results are not what you require - perhaps you could give an example of what is given and why it does not match your requirements - it's hard to say where to go from here. Anyway, if you can make some concrete progress with the suggestions so far, I am sure we can continue to help you get to where you are trying to get to.
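For what it's worth, a hypothetical sketch of that midnight job (the index, search terms, and the similarity threshold t are all placeholders):

index=app_logs (ERROR OR WARN OR timeout) earliest=-1d@d latest=@d
| cluster showcount=true t=0.8
| table cluster_count _raw
| outputlookup known_error_clusters.csv

The follow-up searches would then cluster their own results the same way and compare them (via lookup or a subsearch) against known_error_clusters.csv to flag which clusters are new.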
I want to know in which index the Microsoft Defender logs are getting stored. I know some important fields which are there in Microsoft Defender, and now I want to find whether they are getting stored or not.
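One hypothetical way to look for them (the sourcetype pattern is a guess; adjust it to whatever the Defender add-on in use defines):

| tstats count WHERE index=* sourcetype=*defender* BY index, sourcetype

This lists which indexes and sourcetypes have matching events over the selected time range; the fields themselves can then be checked with fieldsummary against a matching index.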