All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @rahulkumar, as I said, you have to extract the Splunk metadata (host, timestamp, etc.) from the JSON fields, if present. Then you have to identify the sourcetype from the content of one of the JSON fields. Finally, you have to remove everything except the _raw, which is usually in one field called message or msg, or something similar. In this way, you'll have all the metadata to associate with your events and the original raw events to parse using the standard add-ons. When you choose the sourcetype, remember to use the one defined in the related add-on. Ciao. Giuseppe
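A minimal props.conf/transforms.conf sketch of that approach (field names like hostname and message are assumptions here; adjust the regexes to your actual JSON):

```
# props.conf -- hypothetical sourcetype assigned to the wrapping JSON input
[my:json:wrapper]
TRANSFORMS-json_meta = set_host_from_json, rewrite_raw_from_message

# transforms.conf
# Assumes the JSON contains "hostname":"..." -- change to your field name
[set_host_from_json]
REGEX = "hostname"\s*:\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

# Assumes the original event is in "message":"..." -- keeps only that as _raw
[rewrite_raw_from_message]
REGEX = "message"\s*:\s*"(.*?)(?<!\\)"
DEST_KEY = _raw
FORMAT = $1
```

These run at index time (heavy forwarder or indexer tier), so the sourcetype chosen from the add-on must be applied before or alongside the _raw rewrite.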
Hi @Karthikeya, this is an inclusion condition; to make it an exclusion condition you only need to add NOT before the subsearch:

<your_search> NOT [ | inputlookup whitelisted_ips.csv | fields IP ] | ...

or

<your_search> NOT [ | inputlookup whitelisted_ips.csv | rename ip AS query | fields query ] | ...

You can use this search to exclude the IPs in your lookup from your messages. If you want a search that automatically populates the lookup (if possible), I cannot help you because I don't know your data: you have to create a search that extracts the list of IPs, saves it in the lookup, and schedule it, something like this:

<your_search> | dedup IP | table IP | outputlookup whitelisted_ips.csv

Ciao. Giuseppe
We have a big application which contains several small applications whose data comes into Splunk. Currently we map FQDNs to index names for the other applications. But this big application wants a single index for all of its FQDNs, and they want to differentiate their application data by sourcetype. As of now we have only one sourcetype which receives data from all other applications. Example: there is a Fruits application, and inside it there are apple, orange, and pineapple applications. They want a single index for the Fruits application and want to differentiate by using sourcetype=apple, sourcetype=orange, and so on. For the remaining applications we simply map FQDN to index name in transforms.conf using lookups and INGEST_EVAL. I can map all Fruits application FQDNs to a single index, but then all the logs will be mixed (apple, orange, and so on). How can we differentiate using sourcetype? Where and how do I need to write the logic?
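One possible sketch, under the assumption that the sub-application can be recognized from the sending host's FQDN (the stanza names and patterns below are hypothetical; this is not a confirmed answer for this environment):

```
# props.conf (heavy forwarder / indexer tier) -- attach to the current sourcetype
[your_current_sourcetype]
TRANSFORMS-fruits_st = set_fruits_sourcetype

# transforms.conf -- rewrite sourcetype based on the FQDN in the host field
[set_fruits_sourcetype]
INGEST_EVAL = sourcetype := case(match(host, "apple"), "fruits:apple", match(host, "orange"), "fruits:orange", match(host, "pineapple"), "fruits:pineapple", true(), sourcetype)
```

For a long FQDN list, the same INGEST_EVAL mapping could be driven by a lookup instead of a case() chain, similar to how the existing FQDN-to-index mapping works.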
Hi, those are always your organization's decisions. Usually there are naming standards which define those index names. The best option is to ask your Splunk admin or check your internal documentation. One option is to try | metadata type=hosts index=* which shows which hosts have sent events to which indexes in your selected time range. r. Ismo
Other than debugging the JS, I can only question if you need to do this in JS - Splunk inputs do this out of the box. Are you using JS for a specific reason that the out of the box stuff does not handle?
You are trying to get some kind of unstructured learning going on - the cluster command will give you what it thinks are common repetitions of events; how you then take its output to meet your requirement is really beyond the scope of this type of forum. If you run the search you specified and then cluster it accordingly, you will get some results - I assume you are getting something. So, given that output, it's really impossible for us to say how you can massage what you are getting to save it into some kind of lookup for the following day. But if you search at midnight for your error/warn/timeout events, cluster the responses, massage that data, and store it as a lookup file, then in the searches you subsequently run you will also have to cluster the results, massage that data, and perform a lookup against the previously generated lookup. Without knowing why the results are not what you require - perhaps you could give an example of what is produced and why it does not match your requirements - it's hard to say where to go from here. Anyway, if you can make some concrete progress with the suggestions given so far, I am sure we can continue to help you get to where you are trying to get to.
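A sketch of that midnight workflow (the index, search terms, and similarity threshold are assumptions; tune them to your events):

```
index=app_logs (ERROR OR WARN OR timeout) earliest=-1d@d latest=@d
| cluster showcount=true t=0.8 field=_raw
| table cluster_count _raw
| outputlookup yesterdays_error_clusters.csv
```

Scheduled shortly after midnight, this stores yesterday's clustered error patterns; the daytime searches would then cluster their own results the same way and compare against yesterdays_error_clusters.csv with a lookup.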
I want to know in which index the Microsoft Defender logs are getting stored. I know some important fields which are in Microsoft Defender, and now I want to find out whether they are getting stored or not.
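One way to start looking (a sketch - the sourcetype wildcard is a guess, since the Defender add-on in use may name its sourcetypes differently):

```
| tstats count where index=* sourcetype=*defender* by index, sourcetype
```

If that returns nothing, dropping the sourcetype filter and scanning `| tstats count where index=* by index, sourcetype` over a short time range shows everything being ingested, which can then be checked for Defender data.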
Thank you for the detailed  answer its really helpful
As @bowesmana exemplifies, putting your complete set of values in a lookup is one way to count "missing" values. Another way is to put them in a multivalued field and use this field for counting. Here is an example:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005" OR field2="20006" OR field2="20007" OR field2="666")
| eval field2prime = mvappend("20005", "20006", "20007", "666")
| mvexpand field2prime
| eval field2match = if(field2 == field2prime, 1, 0)
| stats sum(field2match) as count by field2prime
| rename field2prime as field2

Here is an emulation you can play with and compare with real data:

| makeresults count=16
| streamstats count as _count
| eval field2 = case(_count <= 3, "20005", _count <= 11, "20006", true(), "20007")
``` the above emulates index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005" OR field2="20006" OR field2="20007" OR field2="666") ```

Mock data looks like this:

field2
20005
20005
20005
20006
20006
20006
20006
20006
20006
20006
20006
20007
20007
20007
20007
20007

If you count this against field2 directly, you get

field2 count
20005 3
20006 8
20007 5

Using the above search, the result is

field2 count
20005 3
20006 8
20007 5
666 0
Getting the following static errors in a Splunk SOAR PR review from the bot.

1.

{ "minimal_data_paths": { "description": "Checks to make sure each action includes the minimal required data paths", "message": "One or more actions are missing a required data path", "success": false, "verbose": [ "Minimal data paths: summary.total_objects_successful, action_result.status, action_result.message, summary.total_objects", " action one is missing one or more required data path", " action two is missing one or more required data path", " action three is missing one or more required data path" ] } }

I have provided all the data paths in the output array in the <App Name>.json file. Is there any other place where I have to provide the data paths?

2.

{ "repo_name_has_expected_app_id": { "description": "Validates that the app ID in the app repo's JSON file matches the recorded app ID for the app", "message": "Could not find an app id for <App Name>. Please add the app id for <App Name> to data/repo_name_to_appid.json", "success": false, "verbose": [ "Could not find an app id for <App Name>. Please add the app id for <App Name> to data/repo_name_to_appid.json" ] } }

How do we resolve this issue? Did I miss any file?
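For reference, the minimal data paths named by the first check would normally appear in each action's "output" array in the app JSON, along the lines of this hypothetical fragment (verify the exact shape against the SOAR app JSON schema for your connector):

```
"output": [
    { "data_path": "action_result.status", "data_type": "string" },
    { "data_path": "action_result.message", "data_type": "string" },
    { "data_path": "summary.total_objects", "data_type": "numeric" },
    { "data_path": "summary.total_objects_successful", "data_type": "numeric" }
]
```

Every action listed by the bot needs these entries in its own "output" array, not just one action in the file.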
Thank you for your reply. Could you tell me how to set up indexers in a private subnet without using an NLB, and how to configure forwarders?
What field does your data contain that holds the sensor value? Did you change the query as needed to pick up that field?
I can manually count and see that there are x # of sensors setup per hostname.

You need to show volunteers here HOW you count the number of sensors from logs (without using SPL). Here are four commandments to help you ask answerable questions in this forum:

1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between illustrated data and desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Yes, it returned 0s
Did you try the query I posted?
You can't count the non-existence of a field value unless you know what values are expected - that is generally termed 'proving the negative' in these forums. You would typically have a lookup file of the expected values for field2. For example, if you have a CSV where field2 has two values, 666 and 999, and in your search field2=999 has N results but there are no 666 results, then this at the end will add a 0 for all missing expected values:

| inputlookup append=t field2.csv
| stats max(count) as count by field2
| fillnull count
Let's say I have a dashboard set up with 5 hosts (serverA, serverB, serverC, serverD, serverE); for each host there are 5-10 queries set up to pull data using the same index=idx_sensors. I can manually count and see that there are x # of sensors set up per hostname. How would I create a query to check how many sensors are being monitored, by hostname? (I've got 7 different dashboards with multiple hosts monitoring X number of sensors. I need metrics showing which host has how many sensors currently being monitored.)
You can use rex, but your example is not entirely clear - are you expecting the -, |, and / characters in your output? See the rex statement in this example with your data:

| makeresults format=csv data="raw
00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details -
001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
01hg34hgh44hghg4 - Exception while calling System A - null"
| rex field=raw max_match=0 " (?<words>[A-Za-z]+)"
| eval words = mvjoin(words, " ")
What is the JSON syntax? The documentation is not clear.
Use dc:

index=idx_sensors sourcetype=sensorlog
| stats dc(sensor_field) as sensors by host