<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: transform ingested HEC JSON log as regular log in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701989#M20510</link>
    <pubDate>Tue, 15 Oct 2024 18:59:59 GMT</pubDate>
    <dc:creator>sainag_splunk</dc:creator>
    <dc:date>2024-10-15T18:59:59Z</dc:date>
    <item>
      <title>transform ingested HEC JSON log as regular log</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701966#M20507</link>
      <description>&lt;P&gt;Looking for props.conf / transforms.conf configuration guidance.&lt;BR /&gt;&lt;BR /&gt;The aim is to search logs from an HTTP Event Collector the same way we search regular logs. We don't want to be searching raw JSON on the search heads.&lt;/P&gt;&lt;P&gt;We're in the process of migrating from Splunk Forwarders to logging-operator in k8s. The thing is, the Splunk Forwarder reads log files and uses standard indexer discovery, whereas logging-operator reads stdout/stderr and must output to an HEC endpoint, meaning the logs arrive at the heavy forwarder as JSON.&lt;BR /&gt;&lt;BR /&gt;We want to keep using Splunk the way we have over the years and avoid adapting alerts/dashboards, etc., to the new JSON source.&lt;BR /&gt;&lt;BR /&gt;OLD CONFIG AIMED AT THE INDEXERS (with the following config we get environment/site/node/team/pod as index-time extracted fields):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[vm.container.meta]
# source: /data/nodes/env1/site1/host1/logs/team1/env1/pod_name/localhost_access_log.log
CLEAN_KEYS = 0
REGEX = \/.*\/.*\/(.*)\/(.*)\/(.*)\/.*\/(.*)\/.*\/(.*)\/
FORMAT = environment::$1 site::$2 node::$3 team::$4 pod::$5
SOURCE_KEY = MetaData:Source
WRITE_META = true&lt;/LI-CODE&gt;&lt;P&gt;SAMPLE LOG USING logging-operator:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
"log": "ts=2024-10-15T15:22:44.548Z caller=scrape.go:1353 level=debug component=\"scrape manager\" scrape_pool=kubernetes-pods target=http://1.1.1.1:8050/_api/metrics msg=\"Scrape failed\" err=\"Get \\\"http://1.1.1.1:8050/_api/metrics\\\": dial tcp 1.1.1.1:8050: connect: connection refused\"\n",
"stream": "stderr",
"time": "2024-10-15T15:22:44.548801729Z",
"environment": "env1",
"node": "host1",
"pod": "pod_name",
"site": "site1",
"team": "team1"
}&lt;/LI-CODE&gt;</description>
      <pubDate>Tue, 15 Oct 2024 15:28:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701966#M20507</guid>
      <dc:creator>PT_crusher</dc:creator>
      <dc:date>2024-10-15T15:28:21Z</dc:date>
    </item>
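The migration described in the question could also be handled at ingest time on the heavy forwarder, rather than at search time. The following is a sketch only, not a configuration from the thread: the sourcetype and stanza names are assumptions, and `json_extract` inside INGEST_EVAL requires a reasonably recent Splunk version (8.1 or later). The idea is to copy the JSON metadata keys to indexed fields and then replace _raw with the inner log line, approximating the pre-migration search experience:

```ini
# props.conf -- the sourcetype name is hypothetical
[kube:container:json]
TRANSFORMS-restore = k8s_meta_to_indexed, k8s_log_to_raw

# transforms.conf
# Copy the JSON metadata keys to indexed fields.
# Must run before _raw is overwritten, since it reads the JSON from _raw.
[k8s_meta_to_indexed]
INGEST_EVAL = environment:=json_extract(_raw,"environment"), site:=json_extract(_raw,"site"), node:=json_extract(_raw,"node"), team:=json_extract(_raw,"team"), pod:=json_extract(_raw,"pod")

# Replace _raw with the inner "log" value so the event indexes as plain text
[k8s_log_to_raw]
INGEST_EVAL = _raw:=json_extract(_raw,"log")
```

Transforms listed in a single TRANSFORMS- setting run left to right, which is why the metadata extraction is listed before _raw is replaced; and the stanzas need to be deployed where the events are parsed, i.e. the heavy forwarder in this setup.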
    <item>
      <title>Re: transform ingested HEC JSON log as regular log</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701989#M20510</link>
      <description>&lt;P&gt;The only way to ingest these events in their original format is the &lt;STRONG&gt;/services/collector/raw&lt;/STRONG&gt; endpoint.&lt;/P&gt;&lt;P&gt;While I understand the desire to maintain your existing Splunk setup, I would advise against using the raw endpoint (/services/collector/raw) to transform the JSON logs back into regular log format. This approach would unnecessarily increase system load and complexity.&lt;/P&gt;&lt;P&gt;Instead, the best practice is to use the existing event endpoint (/services/collector/event) for ingesting data into Splunk. It is optimized for handling structured data like JSON and is more efficient.&lt;/P&gt;&lt;P&gt;I recommend adjusting your alerts and dashboards to work with the new JSON structure from logging-operator. While this may require some initial effort, it's a more sustainable approach in the long run:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Update your search queries to use JSON-aware commands such as spath, or set KV_MODE = json in props.conf, to extract fields.&lt;/LI&gt;&lt;LI&gt;Modify dashboards to reference the new JSON field names.&lt;/LI&gt;&lt;LI&gt;Adjust alerts to use the appropriate JSON fields and structure.&lt;/LI&gt;&lt;/OL&gt;</description>
      <pubDate>Tue, 15 Oct 2024 18:59:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701989#M20510</guid>
      <dc:creator>sainag_splunk</dc:creator>
      <dc:date>2024-10-15T18:59:59Z</dc:date>
    </item>
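The search-time route suggested in the reply above might look like the following SPL sketch (the index and sourcetype names are assumptions, not from the thread). spath extracts the JSON fields, and evaluating _raw from the extracted log field lets existing field extractions and dashboard panels see the original log line:

```spl
index=k8s sourcetype="kube:container:json"
| spath
| eval _raw='log'
```

Alternatively, setting KV_MODE = json on the sourcetype in props.conf makes the JSON fields available automatically at search time, so existing searches would only need the `eval _raw='log'` step.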
    <item>
      <title>Re: transform ingested HEC JSON log as regular log</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701998#M20512</link>
      <description>&lt;P&gt;The &lt;STRONG&gt;raw&lt;/STRONG&gt; endpoint is not an option because it &lt;A href="https://github.com/splunk/fluent-plugin-splunk-hec/issues/17" target="_self"&gt;is not supported by the logging-operator&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2024 20:08:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/701998#M20512</guid>
      <dc:creator>PT_crusher</dc:creator>
      <dc:date>2024-10-15T20:08:59Z</dc:date>
    </item>
    <item>
      <title>Re: transform ingested HEC JSON log as regular log</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/702000#M20513</link>
      <description>&lt;P&gt;I don't think that's the issue here. The same payload sent to the /raw endpoint would end up looking the same. It's the source formatting the data differently than before.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Oct 2024 20:32:28 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/transform-ingested-HEC-JSON-log-as-regular-log/m-p/702000#M20513</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-10-15T20:32:28Z</dc:date>
    </item>
  </channel>
</rss>

