<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Problem with connection_host = dns setting in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689490#M114760</link>
    <description>&lt;P&gt;For this situation, we have a weekly alert that shows "missing hosts"&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| tstats latest(_time) as latest where NOT index=main AND NOT index="*-summary" earliest=-30d by index, host
| eval DeltaSeconds = now() - latest
| where DeltaSeconds&amp;gt;604800
| eval LastEventTime = strftime(latest,"%Y-%m-%d %H:%M:%S")
| eval DeltaHours = round(DeltaSeconds/3600)
| eval DeltaDays = round(DeltaHours/24)
| join index
    [| inputlookup generated_file_with_admins_mails.csv]
| table index, host, LastEventTime, DeltaHours, DeltaDays, email_to&lt;/LI-CODE&gt;&lt;P&gt;&lt;BR /&gt;Using the sendresults app, Splunk alerts the responsible employee(s) about these hosts.&lt;BR /&gt;Currently this search shows only hosts that haven't sent syslog for more than 7 days, and that's OK for us.&lt;BR /&gt;In most cases, this alert shows only hosts that we have removed from our infrastructure &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;BR /&gt;But if necessary, I can run this alert more frequently or split it into several searches with different "missing" conditions.&lt;BR /&gt;I understand that this approach cannot handle, for example, intermittent network or software lags, but I have used it for about a year and everything has been quite fine, excluding some rare cases (like this topic)&lt;/P&gt;</description>
    <pubDate>Tue, 04 Jun 2024 06:53:59 GMT</pubDate>
    <dc:creator>NoSpaces</dc:creator>
    <dc:date>2024-06-04T06:53:59Z</dc:date>
    <item>
      <title>Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689180#M114718</link>
      <description>&lt;P&gt;Hello everyone&lt;BR /&gt;We have more than 300 hosts sending syslog messages to the indexer cluster.&lt;BR /&gt;The cluster runs on Windows Server.&lt;BR /&gt;All settings across the indexer cluster that relate to syslog ingestion look like this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[udp://port_number]
connection_host = dns
index = index_name
sourcetype = sourcetype_name&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So I expected to see no IP addresses in the host field when I ran searches.&lt;BR /&gt;I created an alert to let me know if any message has an IP in the host field.&lt;BR /&gt;But a couple of hosts have this problem.&lt;/P&gt;&lt;P&gt;I know that PTR records are required for this setting, but we checked that the records exist.&lt;BR /&gt;When I run "dnslookup *host_ip* *dns_server_ip*" I see that everything is OK.&lt;BR /&gt;I also cleared the DNS cache across the indexer cluster, but I still see this problem.&lt;/P&gt;&lt;P&gt;Does Splunk have internal logs that can help me identify where the problem is?&lt;BR /&gt;Or is my only option to capture a network traffic dump with the DNS queries?&lt;/P&gt;</description>
      <pubDate>Fri, 31 May 2024 09:27:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689180#M114718</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-05-31T09:27:41Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689294#M114728</link>
      <description>&lt;P&gt;Hello, you should check the DNS records on your server; I'm not sure the internal logs can help.&lt;/P&gt;
&lt;P&gt;In the worst case, use this example:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;props.conf

[host::&amp;lt;IP address&amp;gt;]
TRANSFORMS-&amp;lt;hostname&amp;gt;=&amp;lt;hostname&amp;gt;_override

transforms.conf

[&amp;lt;hostname&amp;gt;_override]
REGEX = (.*)
DEST_KEY = MetaData:Host
FORMAT = host::&amp;lt;FQDN&amp;gt;&lt;/LI-CODE&gt;</description>
      <pubDate>Sat, 01 Jun 2024 15:53:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689294#M114728</guid>
      <dc:creator>splunkreal</dc:creator>
      <dc:date>2024-06-01T15:53:12Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689308#M114733</link>
      <description>&lt;P&gt;I would expect Splunk to keep some form of cache (you can't expect it to query DNS for every single incoming UDP packet; that would be silly). Nor would I bet my money that it doesn't have its own resolver independent of the OS (like Java does, for example).&lt;/P&gt;&lt;P&gt;Having said that:&lt;/P&gt;&lt;P&gt;1. Identifying hosts by names is usually more error-prone than using IPs.&lt;/P&gt;&lt;P&gt;2. With syslog sources you often have transforms overwriting the host field with a value parsed from within the event (and that might affect your case as well).&lt;/P&gt;&lt;P&gt;3. It's not a good idea to receive syslog directly on your indexers (or even forwarders). It's better to use an intermediate syslog daemon that writes to files or sends to HEC (SC4S, or a properly configured "raw" syslog-ng or rsyslog).&lt;/P&gt;&lt;P&gt;4. As you're saying that you're sending syslog to an "indexer cluster", I suspect you have some kind of LB in front of those indexers. That's usually not a good idea. Typical load balancers don't handle syslog traffic (especially UDP) well.&lt;/P&gt;</description>
      <pubDate>Sun, 02 Jun 2024 08:56:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689308#M114733</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-06-02T08:56:04Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689387#M114740</link>
      <description>&lt;P&gt;I agree with you and also suspect that Splunk has an internal resolver or cache, but I can't find any docs or Q&amp;amp;A that can help me find out more.&lt;BR /&gt;&lt;BR /&gt;1. I understand that, but we need to see hostnames instead of IPs because we are using Splunk as a log collector for different parts of our internal infrastructure. Using hostnames is more convenient because they are human-readable.&lt;BR /&gt;2. If I understand Splunk correctly, it has a pre-defined [syslog] stanza in props.conf and a related [syslog-host] stanza in transforms.conf. But in my particular situation, none of the sourcetypes match the syslog pattern because they all have names like *_syslog. My transforms.conf also doesn't have entries related to hostname overrides.&lt;BR /&gt;3 and 4. I know, but we decided against using a dedicated syslog server for various reasons, such as fault tolerance and the desire to make the log-ingestion system less complicated. Thank you for your advice.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 08:55:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689387#M114740</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-06-03T08:55:59Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689388#M114741</link>
      <description>&lt;P&gt;I have checked the DNS records many times.&lt;BR /&gt;Also, thank you for your advice, but that is not a solution, just a workaround &lt;span class="lia-unicode-emoji" title=":grinning_face_with_big_eyes:"&gt;😃&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 08:58:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689388#M114741</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-06-03T08:58:29Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689389#M114742</link>
      <description>&lt;P&gt;Yes, the default &lt;EM&gt;syslog&lt;/EM&gt; sourcetype calls the transform you mention, but as far as I remember there are more apps that bring similar extractions with them.&lt;/P&gt;&lt;P&gt;And I still advocate for an external syslog receiver. That way you can easily (compared to doing it with transforms) control what you index from which source, and so on. Also, "fault tolerance" in the case of a not-syslog-aware LB is... debatable. But hey, it's your environment &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 09:22:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689389#M114742</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-06-03T09:22:34Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689392#M114743</link>
      <description>&lt;P&gt;I also understand that apps can do similar extractions, but there are no apps related to the sourcetypes we are talking about.&lt;/P&gt;&lt;P&gt;As for an external syslog receiver, maybe in the future &lt;span class="lia-unicode-emoji" title=":grinning_face_with_big_eyes:"&gt;😃&lt;/span&gt;&lt;BR /&gt;At present, we ingest and index literally everything, simply because we don't know what information we will really need to resolve a problem.&lt;/P&gt;&lt;P&gt;Can you tell me a little more about a "not-syslog-aware" LB? What do you mean?&lt;BR /&gt;Our LB does the following:&lt;BR /&gt;- monitors the indexers via the health API endpoint of each indexer&lt;BR /&gt;- if one or more are down for any reason, the LB selects another healthy instance&lt;BR /&gt;- spreads syslog messages across all IDXC members to avoid "data imbalance" - our approach is debatable but works &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;BR /&gt;- for various reasons, we also override the source port and protocol (some systems do not support UDP, so we convert their traffic to UDP to avoid return TCP traffic)&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 09:52:36 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689392#M114743</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-06-03T09:52:36Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689394#M114744</link>
      <description>&lt;P&gt;The first sin is "monitor by health API" - it doesn't tell you anything about the availability of the syslog input.&lt;/P&gt;&lt;P&gt;But from your description it seems that your LB is at least somewhat syslog-aware (if you're able to extract the payload and resend it as UDP, that's something). Which one is it, if you can share that information?&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 10:18:11 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689394#M114744</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-06-03T10:18:11Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689405#M114745</link>
      <description>&lt;P&gt;When we built our Splunk environment, I checked the Splunk docs for information about the proper functioning of a single indexer.&lt;BR /&gt;I may be mistaken, but in this case I chose the indexer color status.&lt;BR /&gt;The API endpoint is "bla bla bla/services/server/info/health_info".&lt;BR /&gt;If an indexer has a green or yellow status, the LB decides that the node is OK.&lt;BR /&gt;If an indexer has a red status, the LB decides that the node is not OK and selects another one.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 11:55:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689405#M114745</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-06-03T11:55:05Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689457#M114752</link>
      <description>&lt;P&gt;What if someone mistakenly disables the UDP input? Just the first example off the top of my head.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 21:55:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689457#M114752</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-06-03T21:55:59Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689490#M114760</link>
      <description>&lt;P&gt;For this situation, we have a weekly alert that shows "missing hosts"&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| tstats latest(_time) as latest where NOT index=main AND NOT index="*-summary" earliest=-30d by index, host
| eval DeltaSeconds = now() - latest
| where DeltaSeconds&amp;gt;604800
| eval LastEventTime = strftime(latest,"%Y-%m-%d %H:%M:%S")
| eval DeltaHours = round(DeltaSeconds/3600)
| eval DeltaDays = round(DeltaHours/24)
| join index
    [| inputlookup generated_file_with_admins_mails.csv]
| table index, host, LastEventTime, DeltaHours, DeltaDays, email_to&lt;/LI-CODE&gt;&lt;P&gt;&lt;BR /&gt;Using the sendresults app, Splunk alerts the responsible employee(s) about these hosts.&lt;BR /&gt;Currently this search shows only hosts that haven't sent syslog for more than 7 days, and that's OK for us.&lt;BR /&gt;In most cases, this alert shows only hosts that we have removed from our infrastructure &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;BR /&gt;But if necessary, I can run this alert more frequently or split it into several searches with different "missing" conditions.&lt;BR /&gt;I understand that this approach cannot handle, for example, intermittent network or software lags, but I have used it for about a year and everything has been quite fine, excluding some rare cases (like this topic)&lt;/P&gt;</description>
      <pubDate>Tue, 04 Jun 2024 06:53:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689490#M114760</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-06-04T06:53:59Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689491#M114761</link>
      <description>&lt;P&gt;Sure. Whatever floats your boat &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;But seriously - it's like ITIL - adopt and adapt. If something works for you and you are aware of your approach's limitations - go ahead.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Jun 2024 07:11:18 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689491#M114761</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2024-06-04T07:11:18Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with connection_host = dns setting</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689492#M114762</link>
      <description>&lt;P&gt;I really appreciate your advice. Thank you for the discussion &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Jun 2024 07:16:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Problem-with-connection-host-dns-setting/m-p/689492#M114762</guid>
      <dc:creator>NoSpaces</dc:creator>
      <dc:date>2024-06-04T07:16:45Z</dc:date>
    </item>
  </channel>
</rss>

