<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ingest json logs - no extractions in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/ingest-json-logs-no-extractions/m-p/700923#M116035</link>
    <description>&lt;P class=""&gt;Thank you for sharing your detailed process and the issue you're encountering with JSON log ingestion. Your testing approach was thorough, but there are a few key points to address:&lt;/P&gt;&lt;OL class=""&gt;&lt;LI&gt;Props.conf location: The primary parsing settings should be on the indexers, not the search heads. For JSON data, you typically only need minimal settings on the search head.&lt;/LI&gt;&lt;LI&gt;Search head settings: On the search head, you can simplify your props.conf to just:&lt;DIV class=""&gt;&lt;DIV class=""&gt;[yoursourcetype]&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV class=""&gt;&lt;SPAN&gt;KV_MODE = JSON&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;This tells Splunk to parse the JSON at search time, which should give you the field extractions you're looking for.&lt;/LI&gt;&lt;LI&gt;In order to onboard this properly you can also set MAGIC6 props on your indexers.&lt;BR /&gt;&lt;A href="https://community.splunk.com/t5/Getting-Data-In/props-conf/m-p/426134" target="_blank" rel="noopener"&gt;https://community.splunk.com/t5/Getting-Data-In/props-conf/m-p/426134&lt;/A&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Try to run the below search to figure out what app is taking precedence.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| rest splunk_server=local /services/configs/conf-props/YOURSOURCEYPE| transpose | search column=eai:acl.app&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Please UpVote/Solved if this helps.&lt;/P&gt;</description>
    <pubDate>Thu, 03 Oct 2024 16:35:15 GMT</pubDate>
    <dc:creator>sainag_splunk</dc:creator>
    <dc:date>2024-10-03T16:35:15Z</dc:date>
    <item>
      <title>ingest json logs - no extractions</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/ingest-json-logs-no-extractions/m-p/700917#M116033</link>
      <description>&lt;P&gt;This is my first time ingesting JSON logs, and I need help figuring out why my JSON log ingestion is not auto-extracting fields.&lt;BR /&gt;&lt;BR /&gt;Environment: SHC, IDX cluster, typical management servers.&lt;BR /&gt;&lt;BR /&gt;I first tested a manual upload of a log sample by going to a SH, then Settings -&amp;gt; Add Data -&amp;gt; Upload. When I uploaded a log, the sourcetype _json was automatically selected. In the preview panel everything looked good, so I saved the sourcetype as foo and completed the upload into index=test. Looking at the data, everything was good: the "Interesting Fields" pane on the left showed the completed auto-extractions.&lt;BR /&gt;&lt;BR /&gt;In ../apps/search/local/props.conf, an entry was created:&lt;BR /&gt;&lt;BR /&gt;[foo]&lt;BR /&gt;KV_MODE = none&lt;BR /&gt;LINE_BREAKER = ([\r\n]+)&lt;BR /&gt;NO_BINARY_CHECK = true&lt;BR /&gt;category = Structured&lt;BR /&gt;description = JavaScript Object Notation format. 
For more information, visit &lt;A href="http://json.org/" target="_blank" rel="noopener"&gt;http://json.org/&lt;/A&gt;&lt;BR /&gt;disabled = false&lt;BR /&gt;pulldown_type = true&lt;BR /&gt;BREAK_ONLY_BEFORE =&lt;BR /&gt;INDEXED_EXTRACTIONS = json&lt;BR /&gt;&lt;BR /&gt;I noticed it used INDEXED_EXTRACTIONS, which is not what I wanted (I have never used indexed extractions before), but these are just occasional scan logs, literally a few kilobytes every now and then, so it wasn't a big deal.&lt;BR /&gt;&lt;BR /&gt;I copied the sourcetype stanza above into an app in the cluster manager's apps folder (an app where I keep a bunch of one-off props.conf sourcetype stanzas), then pushed it out to the IDX cluster. Then I created an inputs.conf and a server class on the DS to push to the particular forwarder that monitors the folder for the JSON scan logs. As expected, the scan logs eventually started being indexed and were viewable on the search head.&lt;BR /&gt;&lt;BR /&gt;Unfortunately, the auto-extractions were not being parsed. The Interesting Fields panel on the left had only the default fields. In the right panel, where the events are, the field names were highlighted in red, which I guess means Splunk recognizes the field names? Either way, the issue is that I had no interesting fields.&lt;BR /&gt;&lt;BR /&gt;I figured maybe the issue was that I had INDEXED_EXTRACTIONS set on the search heads, and that it's probably an indexer setting, so I commented it out and tried KV_MODE = json in its place, saved the .conf file, and restarted the SH. But the issue remains: 
no interesting fields.&lt;BR /&gt;&lt;BR /&gt;The test upload worked just fine and I had interesting fields in the test index; however, when the logs started coming through from the UF, I no longer had interesting fields despite using the same sourcetype.&lt;BR /&gt;&lt;BR /&gt;What am I missing? Is there more to ingesting a JSON file than simply using KV_MODE or INDEXED_EXTRACTIONS? And if so, why does my test upload work?&lt;/P&gt;&lt;P&gt;Here is a sample log:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{"createdAt": "2024-09-04T15:23:12-04:00", "description": "bunch of words.", "detectorId": "text/hardcoded-credentials@v1.0", "detectorName": "Hardcoded credentials", "detectorTags": ["secrets", "security", "owasp-top10", "top25-cwes", "cwe-798", "Text"], "generatorId": "something", "id": "LongIDstring", "remediation": {"recommendation": {"text": "a bunch of text.", "url": "a url"}}, "resource": {"id": "oit-aws-codescan"}, "ruleId": "multilanguage-password", "severity": "Critical", "status": "Open", "title": "CWE-xxx - Hardcoded credentials", "type": "Software and Configuration Checks", "updatedAt": "2024-09-18T10:54:02.916000-04:00", "vulnerability": {"filePath": {"codeSnippet": [{"content": "    ftp_site = 'something.com'", "number": 139}, {"content": "    ftp_base = '/somesite/'", "number": 140}, {"content": "    ftp_filename_ext = '.csv'", "number": 111}, {"content": "    ", "number": 111}, {"content": "    ftp_username = 'anonymous'", "number": 111}, {"content": "    ftp_password = 'a****'", "number": 111}, {"content": "", "number": 111}, {"content": "    # -- DOWNLOAD DATA -----", "number": 111}, {"content": "    # Put all of the data pulls within a try-except case to protect against crashing", "number": 111}, {"content": "", "number": 148}, {"content": "    email_alert_sent = False", "number": 111}], "endLine": 111, "name": "somethingsomething.py", "path": 
"something.py", "startLine": 111}, "id": "LongIDstring", "referenceUrls": [], "relatedVulnerabilities": ["CWE-xxx"]}}&lt;/LI-CODE&gt;&lt;P&gt;I appreciate any guidance.&lt;/P&gt;</description>
      <pubDate>Thu, 03 Oct 2024 16:12:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/ingest-json-logs-no-extractions/m-p/700917#M116033</guid>
      <dc:creator>Darthsplunker</dc:creator>
      <dc:date>2024-10-03T16:12:47Z</dc:date>
    </item>
    <item>
      <title>Re: ingest json logs - no extractions</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/ingest-json-logs-no-extractions/m-p/700923#M116035</link>
      <description>&lt;P class=""&gt;Thank you for sharing your detailed process and the issue you're encountering with JSON log ingestion. Your testing approach was thorough, but there are a few key points to address:&lt;/P&gt;&lt;OL class=""&gt;&lt;LI&gt;Props.conf location: The primary parsing settings should be on the indexers, not the search heads. For JSON data, you typically only need minimal settings on the search head.&lt;/LI&gt;&lt;LI&gt;Search head settings: On the search head, you can simplify your props.conf to just:&lt;DIV class=""&gt;&lt;DIV class=""&gt;[yoursourcetype]&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV class=""&gt;&lt;SPAN&gt;KV_MODE = JSON&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;This tells Splunk to parse the JSON at search time, which should give you the field extractions you're looking for.&lt;/LI&gt;&lt;LI&gt;In order to onboard this properly you can also set MAGIC6 props on your indexers.&lt;BR /&gt;&lt;A href="https://community.splunk.com/t5/Getting-Data-In/props-conf/m-p/426134" target="_blank" rel="noopener"&gt;https://community.splunk.com/t5/Getting-Data-In/props-conf/m-p/426134&lt;/A&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Try to run the below search to figure out what app is taking precedence.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| rest splunk_server=local /services/configs/conf-props/YOURSOURCEYPE| transpose | search column=eai:acl.app&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Please UpVote/Solved if this helps.&lt;/P&gt;</description>
      <pubDate>Thu, 03 Oct 2024 16:35:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/ingest-json-logs-no-extractions/m-p/700923#M116035</guid>
      <dc:creator>sainag_splunk</dc:creator>
      <dc:date>2024-10-03T16:35:15Z</dc:date>
    </item>
  </channel>
</rss>

