
All Posts

Other than debugging the JS, I can only question if you need to do this in JS - Splunk inputs do this out of the box. Are you using JS for a specific reason that the out of the box stuff does not handle?
You are trying to get some kind of unstructured learning going on - the cluster command will give you what it thinks are common repetitions of events; how you then take its output to manage your requirement is really beyond the scope of this type of forum. If you run the search you specified and then cluster it accordingly, you will get some results - I assume you are not getting nothing at all. Given that output, it's really impossible for us to say how you can massage what you are getting into some kind of lookup for the following day.

But broadly: if you search at midnight for your error/warn/timeout events, cluster the responses, massage that data and store it as a lookup file, then in the searches you subsequently run you will also have to cluster those results, massage that data in the same way, and then perform a lookup against the previously generated lookup file.

Without knowing why the results are not what you require - perhaps you could give an example of what is produced and why it does not match your requirements - it's hard to say where to go from here. Anyway, if you can make some concrete progress with the suggestions given so far, I am sure we can continue to help you get to where you are trying to get to.
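A minimal sketch of that nightly-plus-daily pattern, purely as an illustration - the index name, the ERROR/WARN/timeout terms, the clustering threshold and the lookup file name are all assumptions, and in practice you would massage/normalise the representative event text before writing it out, exactly as described above.

Nightly scheduled search (store yesterday's cluster representatives):

index=app_logs ("ERROR" OR "WARN" OR "timeout") earliest=-1d@d latest=@d
| cluster showcount=true t=0.8
| table cluster_count _raw
| rename _raw as sample_event
| outputlookup known_error_clusters.csv

Following day (cluster again and compare against what was stored):

index=app_logs ("ERROR" OR "WARN" OR "timeout") earliest=@d
| cluster showcount=true t=0.8
| rename _raw as sample_event
| lookup known_error_clusters.csv sample_event OUTPUT cluster_count as seen_yesterday
| where isnull(seen_yesterday)   ``` keep only clusters with no match in yesterday's lookup ```

The exact-match lookup on sample_event will only hit when the representative text is identical day to day, which is rarely true for raw events - that massaging step is where the real work is.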
I want to know in which index the Microsoft Defender logs are getting stored. I know some important fields that exist in Microsoft Defender, and now I want to find out whether or not they are getting stored.
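One way to narrow this down - a sketch only, since the sourcetype pattern and the field names below are assumptions that will differ per environment - is to first find where the Defender data lands, then check whether the expected fields are present there:

| tstats count where index=* sourcetype=*defender* by index, sourcetype

index=your_defender_index sourcetype=your_defender_sourcetype
| fieldsummary
| search field IN ("DeviceName", "AlertId", "Severity")   ``` replace with the fields you actually expect ```

The first search lists which index/sourcetype combinations contain anything matching the pattern; the second lists which of the expected fields actually appear in that data.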
Thank you for the detailed answer, it's really helpful.
As @bowesmana exemplifies, putting your complete set of values in a lookup is one way to count "missing" values.  Another way is to put them in a multivalued field and use this field for counting.  Here is an example:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005") OR (field2="20006") OR (field2="20007") OR (field2="666")
| eval field2prime = mvappend("20005", "20006", "20007", "666")
| mvexpand field2prime
| eval field2match = if(field2 == field2prime, 1, 0)
| stats sum(field2match) as count by field2prime
| rename field2prime as field2

Here is an emulation you can play with and compare with real data:

| makeresults count=16
| streamstats count as _count
| eval field2 = random() % 3 + 20005
``` the above emulates index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005") OR (field2="20006") OR (field2="20007") OR (field2="666") ```

Mock data looks like this:

field2
20005
20005
20005
20006
20006
20006
20006
20006
20006
20006
20006
20007
20007
20007
20007
20007

If you count this against field2 directly, you get

field2   count
20005    3
20006    8
20007    5

Using the above search, the result is

field2   count
20005    3
20006    8
20007    5
666      0
Getting the following static errors in the Splunk SOAR PR review from the bot.

1.
{
  "minimal_data_paths": {
    "description": "Checks to make sure each action includes the minimal required data paths",
    "message": "One or more actions are missing a required data path",
    "success": false,
    "verbose": [
      "Minimal data paths: summary.total_objects_successful, action_result.status, action_result.message, summary.total_objects",
      " action one is missing one or more required data path",
      " action two is missing one or more required data path",
      " action three is missing one or more required data path"
    ]
  }
},

I have provided all the data paths in the output array in the <App Name>.json file. Is there any other place where I have to provide the data paths?

2.
{
  "repo_name_has_expected_app_id": {
    "description": "Validates that the app ID in the app repo's JSON file matches the recorded app ID for the app",
    "message": "Could not find an app id for <App Name>. Please add the app id for <App Name> to data/repo_name_to_appid.json",
    "success": false,
    "verbose": [
      "Could not find an app id for <App Name>. Please add the app id for <App Name> to data/repo_name_to_appid.json"
    ]
  }
}

How do we resolve this issue - did I miss any file?
Thank you for your reply. Could you tell me how to set up indexers in a private subnet without using an NLB, and how to configure forwarders?
What field does your data contain that holds the sensor value? Did you change the query as needed to pick up that field?
"I can manually count and see that there are x # of sensors setup per hostname."

You need to show volunteers here HOW you count the number of sensors from the logs (without using SPL). Here are four commandments to help you ask answerable questions in this forum:

1. Illustrate the data input (in raw text, anonymised as needed), whether that is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Yes, it returned 0s
Did you try the query I posted?
You can't count the non-existence of a field value unless you know what values are expected - that is generally termed 'proving the negative' in these forums. You would typically have a lookup file of the expected values for field2, e.g. if you have a CSV with field2 having the two values 666 and 999, and in your search field2=999 has N results but there are no 666 results, then adding this at the end will add a 0 for all missing expected values:

| inputlookup append=t field2.csv
| stats max(count) as count by field2
| fillnull count
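Put together, the whole pattern might look like this - a sketch only, reusing the index/host/source filters from the example above and assuming field2.csv is a one-column lookup listing the expected values (666 and 999 here):

index=idx1 host=host1 OR host=host2 source=*filename*.txt field2 IN (666, 999)
| stats count by field2
| inputlookup append=t field2.csv
| stats max(count) as count by field2
| fillnull count   ``` expected values with no matching events now appear with count=0 ```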
Let's say I have a dashboard set up with 5 hosts (serverA, serverB, serverC, serverD, serverE); for each host there are 5-10 queries set up to pull data using the same index=idx_sensors. I can manually count and see that there are x # of sensors set up per hostname.

How would I create a query to check how many sensors are being monitored by hostname? (I've got 7 different dashboards with multiple hosts monitoring X number of sensors. I need to get metrics for which host has how many sensors that are currently being monitored.)
You can use rex, but your example is not entirely clear - are you expecting the -, | and / characters in your output? See the rex statement in this example with your data.

| makeresults format=csv data="raw
00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details -
001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
01hg34hgh44hghg4 - Exception while calling System A - null"
| rex field=raw max_match=0 " (?<words>[A-Za-z]+)"
| eval words = mvjoin(words, " ")
What is the JSON syntax? The documentation is not clear.
Use dc

index=idx_sensors sourcetype=sensorlog
| stats dc(sensor_field) as sensors by host
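If you also want to see which sensors sit behind each count, to sanity-check it against what you counted manually, the same search can be extended - sensor_field here is a placeholder for whatever field in your events actually holds the sensor name:

index=idx_sensors sourcetype=sensorlog
| stats dc(sensor_field) as sensors, values(sensor_field) as sensor_list by host
``` sensors = distinct count per host, sensor_list = the actual sensor names ```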
Can you give a sample of your events? You could add one or more further fields after the by clause on stats if there is something you could use.
Calculating metrics. I need to count the number of sensors that are created and monitored for each host. I have the index and sourcetype. I created about 7 different dashboards with multiple hosts on each dashboard, and I need to get a count of the number of sensors that are being monitored by each host.

index=idx_sensors sourcetype=sensorlog
| stats count by host

The above query is giving me all the hostnames that are being monitored, but the count is giving me all the events... I just need the # of sensors per host.
@jkamdar Please follow this  https://docs.splunk.com/Documentation/Forwarder/9.4.0/Forwarder/Installanixuniversalforwarder 
@jkamdar Yes, please replace the user while using chown. If you still face issues, it might be necessary to check with the OS team to determine if there are any permission-related problems.