Splunk Search

How to count values based on a dynamic field in the same Splunk event, look up the event in a CSV file, and display as a table

New Member

I am new to Splunk; can someone please help me with the case below?
My log looks like this:

Name="ABCD"
Config Name="XYZ"
Count_of_Date_Chunks="2"
Dates_1="2017-01-01,2017-01-02,2017-01-03,2017-01-05,2017-01-07"
Dates_2="2016-01-01,2016-01-02,2016-01-03,2016-01-05,2016-01-07"
Count_of_Missing_Date_Chunks="2"
Missing_dates_1="2017-01-04,2017-01-06"
Missing_dates_2="2016-01-01,2016-02-06"
Msg="SUCCESS" or "FAIL"

I have a lookup file with configuration names. What I have to do is: if I find a log for the Config, it needs to be marked as Found, otherwise Not Found. Then I have to count the missing dates (considering Missing_dates_1, Missing_dates_2, and so on; the number of these fields is dynamic and based on the field Count_of_Missing_Date_Chunks) and display the count as a table.

As of now, I have a query for non-dynamic chunks without the lookup file. Can someone please help me form the Splunk query for dynamic chunks and the file lookup? In the query below I am counting values using a comma as the delimiter:

| where Msg="SUCCESS" 
| eval count_of_missing_dates = mvcount(split(missing_dates, ",")) 
| table Name Config count_of_missing_dates Msg

The CSV file is in the format below:

Config_name
XYZ
PQR
AAA

The final table will have: Name, Config, count_of_missing_dates, Msg, Event_Status (Found / Not Found from the lookup).


SplunkTrust

Here is some run-anywhere code that shows you one way to do it. Since you just want to know how many total values are in the fields named Missing_dates_*, we can completely ignore the other fields and compute that total with the Splunk | foreach command.

This produces some test data; one config will be in the lookup and one will not:

| makeresults 
| eval Name="ABCD",Msg="SUCCESS"
| eval Count_of_Date_Chunks="2", Dates_1="2017-01-01,2017-01-02,2017-01-03,2017-01-05,2017-01-07", Dates_2="2016-01-01,2016-01-02,2016-01-03,2016-01-05,2016-01-07"
| eval Count_of_Missing_Date_Chunks="2", Missing_dates_1="2017-01-04,2017-01-06", Missing_dates_2="2016-01-01,2016-02-06" 
| eval Config_name=mvappend("XYZ","XZZ") 
| mvexpand Config_name 

This part strips it down to the needed fields, sets the count to zero, and then adds up the number of missing dates in each of the fields that start with Missing_dates*...

| table Name Config_name Missing_dates* Msg 
| eval count_of_missing_dates=0 
| foreach Missing_dates* [eval count_of_missing_dates=count_of_missing_dates+mvcount(split(<<FIELD>>,","))] 
| table Name Config_name count_of_missing_dates Msg 

This part joins with your lookup table to get the status...

| join type=left Config_name 
    [ |makeresults | eval Config_name=mvappend("XYZ","PQR","AAA") 
    |mvexpand Config_name 
    | eval Event_Status = "Found" 
    | table Config_name Event_Status] 
| eval Event_Status=coalesce( Event_Status,"Not Found")

You would probably use | inputlookup (to read your actual lookup file) instead of the | makeresults stanza I used to fake the lookup table here. You can also use | lookup instead of | join if you prefer.
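For example, if the CSV has been uploaded as a lookup file (the name config_names.csv here is hypothetical), the whole left join and coalesce could collapse into a single | lookup, since a lookup that finds no match simply leaves the output field null:

```spl
| lookup config_names.csv Config_name OUTPUT Config_name AS lookup_match
| eval Event_Status=if(isnotnull(lookup_match), "Found", "Not Found")
| fields - lookup_match
| table Name Config_name count_of_missing_dates Msg Event_Status
```

This avoids the subsearch limits that | join is subject to on large result sets.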


SplunkTrust

I'm missing the underlying "why" for this, and the "how" for the log record you are looking at.

When you refer to this as a "log" what do you mean?

I also don't understand why Dates_1 and Dates_2 would always have exactly the same number of missing dates, when the dates that are missing are not exactly the same (they are off by one year).

What, EXACTLY, is Count of Date chunks? Why is the value in this case 2, rather than 5 or 7 when there are two values missing in a 7-day period?

What does SUCCESS or FAIL mean, in real terms?


New Member

Thank you DalJeanis for looking into the big question.

This log file is generated by a Unix bash process; indirectly, the log is nothing but an event in Splunk. SUCCESS or FAIL indicates the bash process status.

Count_of_Date_Chunks gives how many date parts we have in the current log/event. For example, if the bash process generates 2 date parts then Count_of_Missing_Date_Chunks="2" and we get Missing_dates_1 and Missing_dates_2; if it is 3, then Missing_dates_1, Missing_dates_2, and Missing_dates_3.
The main reason behind this splitting is that a field value is automatically truncated in a Splunk event after 10,000 characters (for very huge logs/events). To avoid this we split the value into multiple chunks, and the bash process takes care of that.

The Missing_dates_* fields were generated from the missing dates in the Unix bash process: the first 100 lines are sent to Missing_dates_1, lines 101 to 200 to Missing_dates_2, and so on.
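Given that chunking scheme, the chunks can also be recombined into a single multivalue field on the Splunk side before counting. A sketch, using the field names from the log above (mvappend skips null arguments, so the accumulator field does not need to be initialized first):

```spl
| foreach Missing_dates_* [eval all_missing_dates=mvappend(all_missing_dates, split(<<FIELD>>, ","))]
| eval count_of_missing_dates=mvcount(all_missing_dates)
```

This gives the same total as summing per-chunk counts, but also leaves the full list of missing dates available in all_missing_dates if you ever want to display it.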
