How to auto-extract fields from this log?

abhisplunk1
Explorer

 

Hi, this is the log:

{"time":"2023-06-13 20:35:02.046 +00:00", "level":"Information", "client":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.12.1.0Safari/537.36 Edg/xx.x.xx.x.x", "environment":"deduction", "user":"CORP\NBSWWUK", "clientIp":"xxx.xxx.xxx.xx", "processId":"24560", "processName":"w3wp", "machine":"mymachine", "version":"", "message":"", "log":"", "requestURL":"/request/v1/Application/getorganizations", "exception":"", "requestBody":"", "requestParam":"", "exceptionStack":""}


caiosalonso
Path Finder

Hi,

This seems to be a valid JSON event. Have you already sent this log record to Splunk? Which sourcetype settings are you using?

I'd expect the default "_json" sourcetype to parse this log and extract the fields correctly.
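For example, assuming the event comes from a monitored file (the path below is just an illustration, not your actual input), you could assign the sourcetype in inputs.conf:

[monitor:///var/log/myapp/app.log]
sourcetype = _json

The "_json" pretrained sourcetype enables Splunk's built-in JSON parsing, so each key in the event should become a field automatically.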


gcusello
SplunkTrust

Hi @abhisplunk1,

as @caiosalonso said, this seems to be a JSON event, so you could use the _json sourcetype.

Otherwise, you could add this option:

INDEXED_EXTRACTIONS = json

to your sourcetype in props.conf.
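A minimal sketch of such a stanza, assuming a custom sourcetype named my_app_json (the name and the timestamp settings are illustrative, based on your sample event, so adjust them to your data):

# props.conf (applied where the file is read, e.g. on the forwarder)
[my_app_json]
INDEXED_EXTRACTIONS = json
# Timestamp format in the sample: 2023-06-13 20:35:02.046 +00:00
TIME_PREFIX = "time":"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %:z
# Avoid extracting the same fields a second time at search time
KV_MODE = none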

You can find more information at https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Aboutindexedfieldextraction or https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf 

Alternatively, you could try the spath command (https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Spath) at search time.
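A quick sketch, assuming the events are already indexed (the index and sourcetype here are placeholders for your own):

index=main sourcetype=my_app_json
| spath
| table _time user clientIp requestURL

With no arguments, spath parses the JSON in _raw and extracts every key as a search-time field, so no props.conf change is needed.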

Ciao.

Giuseppe
