Splunk Search

Why am I unable to filter out events from payload with my current search?

New Member

I have an event from which I want to filter this string:

\\\"name\\\":\\\"experience\\\",\\\"status\\\":\\\"FAILURE\\\" 

Search:

"pxc" "fail*" | rex  max_match=20 "((?[\w*])\W*\w*\W*(?[\w*]))" | where SOR="Experience" AND status="FAILURE" 

But this also gives me events where SOR = abc and status = FAILURE, which is unwanted.

Payload:

[{\\\"name\\\":\\\"xyz\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:48Z\\\"},\\\"name\\\":\\\"gya\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\"
:\\\"2016-05-04T22:11:50Z\\\"},{\\\"name\\\":\\\"abc\\\",\\\"status\\\":\\\"FAILURE\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:51Z\\\"},{\\\"name\\\":\\\"guest\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:51Z\\\"},{\\\"name\\\":\\\"Experience\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:12:02Z\\\"},{\\\"name\\\":\\\"Name\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:58Z\\\"},{\\\"name\\\":\\\"Ajax\\\",\\\"status\\\":\\\"SUCCESS\\\"

SplunkTrust

Assuming the format above is the final one, you can try this (ignore the top 3 lines that I used to replicate your use case):

| stats count
| fields - count
| eval eventraw = "
[{\\\"name\\\":\\\"XBMS\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:48Z\\\"},{\\\"name\\\":\\\"GXP\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:50Z\\\"},{\\\"name\\\":\\\"ATS\\\",\\\"status\\\":\\\"FAILURE\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:51Z\\\"},{\\\"name\\\":\\\"GUESTLINK\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:51Z\\\"},{\\\"name\\\":\\\"EXPERIENCEMEDIA\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:12:02Z\\\"},{\\\"name\\\":\\\"CHARGEACCOUNT\\\",\\\"status\\\":\\\"SUCCESS\\\",\\\"updatedTimestamp\\\":\\\"2016-05-04T22:11:58Z\\\"}]
"
| eval eventraw = replace(eventraw, "\\\\", "")
| spath input=eventraw
| fields - eventraw
| rename {}.name as Name, {}.status AS Status, {}.updatedTimestamp AS Timestamp
| eval temp = mvzip(Name, Status, "|")
| eval temp = mvzip(temp, Timestamp, "|")
| fields - Name, Status, Timestamp
| mvexpand temp
| rex field=temp "^(?<Name>[^\|]+)\|(?<Status>[^\|]+)\|(?<Timestamp>[^\|]+)$"
| fields - temp
| search Name="EXPERIENCEMEDIA" AND Status="SUCCESS"

Output:

Name    Status  Timestamp
EXPERIENCEMEDIA     SUCCESS     2016-05-04T22:12:02Z 
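For readers who want to see what this pipeline does end to end, here is a rough Python sketch of the same transformation: strip the escaping backslashes, parse the JSON array, and filter on name/status. This is an illustration only, not Splunk code, and the sample uses single-level escaping rather than the triple backslashes in the raw event.

```python
import json

# Simplified sample payload with escaped quotes, as it might appear in the raw event
raw = r'[{\"name\":\"ATS\",\"status\":\"FAILURE\"},{\"name\":\"EXPERIENCEMEDIA\",\"status\":\"SUCCESS\"}]'

# Step 1: remove the escaping backslashes (the SPL `replace` step)
clean = raw.replace("\\", "")

# Step 2: parse the JSON array (the SPL `spath` step)
records = json.loads(clean)

# Step 3: filter, as the final `search` command does
matches = [r for r in records
           if r["name"] == "EXPERIENCEMEDIA" and r["status"] == "SUCCESS"]
print(matches)
```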

New Member

@javiergn
Sir, I am trying to use the query you gave, but it is not working. I have been trying to correct this for a long time. Finally I got a hint about the issue:
rename {}.name as Name, {}.status AS Status, {}.updatedTimestamp AS Timestamp is catching the line {\\"name\\":\\"abc\\",\\"status\\":\\"FAILURE\\",\\"updatedTimestamp\\":\\"2016-05-04T22:11:51Z\\"},

But the thing is that name in the sample string is not a field, so I guess that's the reason it cannot be renamed.
Can we work on an alternate solution if possible?

Thanks in Advance!!


SplunkTrust

Hi,

I'm not quite sure what you mean I'm afraid. Name is just a string but it will be used as a field name when you pipe your raw data to the spath command (which expects data in JSON format).

I'll tell you what, if you could post

  • Sample data
  • Expected output

all in a very simplified way, making sure you format any code using the code sample button above. Then I could try to take another look.

Hope that makes sense


New Member

For some reason it's not fetching anything. Do you need any more information to correct your query?


SplunkTrust

Sure thing.

So the first 3 lines:

 | stats count
 | fields - count
 | eval eventraw = "..... bla bla bla

Are just for me to be able to generate sample data. You can replace these with your base search. For instance:

index=foo sourcetype=bar "pxc" "fail*"

This is simply deleting double back slashes from the sample:

| eval eventraw = replace(eventraw, "\\\\", "")
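In Python terms, this step just removes every literal backslash character (in SPL, the pattern "\\\\" is an escaped regex that matches a single backslash). A tiny illustrative stand-in:

```python
# Simplified sample with escaped quotes (illustrative, not the full payload)
raw = r'{\"name\":\"ATS\",\"status\":\"FAILURE\"}'

# Drop every literal backslash, like the SPL replace() above
clean = raw.replace("\\", "")
print(clean)
```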

The spath command parses XML- or JSON-formatted events. eventraw looks like JSON, hence my using this command to save some time.

| spath input=eventraw
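A rough Python analogue of what spath does with JSON input: parse the cleaned string and expose each key as a multivalue field like {}.name (sample values are hypothetical):

```python
import json

# Cleaned JSON array, as produced by the replace step
clean = '[{"name":"ATS","status":"FAILURE"},{"name":"GXP","status":"SUCCESS"}]'
records = json.loads(clean)

# spath over a JSON array yields one multivalue field per key
names = [r["name"] for r in records]
statuses = [r["status"] for r in records]
print(names, statuses)
```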

Once parsed I don't need that field anymore:

 | fields - eventraw

Now, in order to make things easier, I'm going to rename those horrible field names with curly brackets to something more readable:

 | rename {}.name as Name, {}.status AS Status, {}.updatedTimestamp AS Timestamp

The following mvzip lines are simply merging our Name, Status and Timestamp fields and separating them by a pipe. I'll explain why below.

 | eval temp = mvzip(Name, Status, "|")
 | eval temp = mvzip(temp, Timestamp, "|")
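mvzip pairs up the values of two multivalue fields positionally, joining each pair with the given delimiter. A Python sketch of the two steps above, with hypothetical sample values:

```python
names = ["ATS", "GXP"]
statuses = ["FAILURE", "SUCCESS"]
timestamps = ["2016-05-04T22:11:51Z", "2016-05-04T22:11:50Z"]

# First mvzip: Name|Status
temp = ["|".join(pair) for pair in zip(names, statuses)]
# Second mvzip: (Name|Status)|Timestamp
temp = ["|".join(pair) for pair in zip(temp, timestamps)]
print(temp)
```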

We don't need the old field names anymore:

 | fields - Name, Status, Timestamp

Now, because temp is a multivalue field, we can't filter by individual values. We need to expand it so that each value becomes its own event. mvexpand can only be applied to one field at a time; if I hadn't merged Name, Status and Timestamp into temp first, I would have had to run mvexpand three times, which would have triplicated your events.

 | mvexpand temp
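mvexpand turns one event holding a multivalue field into one event per value, copying the other fields along. In Python it is like flattening a list column into rows (field names and values here are hypothetical):

```python
# One event with a multivalue field "temp"
event = {"host": "web01", "temp": ["ATS|FAILURE", "GXP|SUCCESS"]}

# One output row per value of the multivalue field, other fields copied
rows = [{**{k: v for k, v in event.items() if k != "temp"}, "temp": t}
        for t in event["temp"]]
print(rows)
```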

Now that we have one event per line, let's extract those fields again:

 | rex field=temp "^(?<Name>[^\|]+)\|(?<Status>[^\|]+)\|(?<Timestamp>[^\|]+)$"
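The rex step is a plain regex with named capture groups. The Python equivalent uses the same pattern (input value is a hypothetical sample):

```python
import re

temp = "ATS|FAILURE|2016-05-04T22:11:51Z"
# Same three capture groups as the SPL rex above
m = re.match(r"^(?P<Name>[^|]+)\|(?P<Status>[^|]+)\|(?P<Timestamp>[^|]+)$", temp)
fields = m.groupdict()
print(fields)
```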

I don't need this field anymore:

 | fields - temp

Now filter and return only those events I'm interested in:

 | search Name="EXPERIENCEMEDIA" AND Status="SUCCESS"

Let me know if that helps.
If not, please post your exact query you are running here (using the code button) and let me know what is missing.

Thanks,
J

New Member

@javiergn,
Sir,

I am very new to Splunk.
I am trying to understand the query you provided. Can you please explain how it works?
That would be of great help. 🙂


SplunkTrust

Hi, can you edit your question and convert both query and payload to Code by using the button above (the one with 1s and 0s)?
Otherwise some special characters will be escaped and we can't be 100% sure your log format is right.
