<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: opinion needed for forwarder management in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283887#M10757</link>
    <description>&lt;P&gt;Thank you, MuS.&lt;BR /&gt;
Yeah, we were thinking that too, but we would need to bring in more CPU power to handle the indexing.&lt;BR /&gt;
Thanks for the info.&lt;/P&gt;</description>
    <pubDate>Tue, 20 Oct 2015 20:55:23 GMT</pubDate>
    <dc:creator>s0rbeto</dc:creator>
    <dc:date>2015-10-20T20:55:23Z</dc:date>
    <item>
      <title>opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283880#M10750</link>
      <description>&lt;P&gt;We just deployed Splunk into our enterprise environment.  We have 3000 clients, all with the UF installed and the simple built-in apps "Splunk_TA_windows" and "Splunk_TA_Linux".&lt;BR /&gt;
Now we are pushing logs/data from tier 1 (mission-critical) applications, about 4 million log events every day, and we have a 1 TB/day license.&lt;BR /&gt;
Our current challenge is to differentiate our data by application and other information.&lt;BR /&gt;
Currently, our data is indexed with "index", "host", and "sourcetype", but we realized we need to be more specific about our data, which leads to two approaches.&lt;/P&gt;</description>

&lt;P&gt;1) Add more fields, but then we would need more license:&lt;BR /&gt;
160 bytes * 40 billion events = 6.4 terabytes&lt;/P&gt;

&lt;P&gt;2) Utilize clientName, but then we would need to push a script to each machine and edit deploymentclient.conf to change the "clientName" field.  Then we would need to build a lookup table ... At this point we don't even know whether clientName is searchable in the "Search &amp; Reporting" app.&lt;/P&gt;
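
&lt;P&gt;For illustration, option 2 boils down to a stanza like this in each forwarder's &lt;CODE&gt;deploymentclient.conf&lt;/CODE&gt; (the &lt;CODE&gt;clientName&lt;/CODE&gt; and &lt;CODE&gt;targetUri&lt;/CODE&gt; values below are hypothetical, not settings from this deployment):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# deploymentclient.conf on each universal forwarder
# clientName is a free-form label the deployment server can filter on;
# the values here are hypothetical examples
[deployment-client]
clientName = prod_dc1_app42

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089&lt;/CODE&gt;&lt;/PRE&gt;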

&lt;P&gt;What do you guys think?  Please feel free to drop any comments or suggestions.&lt;/P&gt;

&lt;P&gt;Thank you guys!&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 07:38:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283880#M10750</guid>
      <dc:creator>s0rbeto</dc:creator>
      <dc:date>2020-09-29T07:38:08Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283881#M10751</link>
      <description>&lt;P&gt;Hi s0rbeto,&lt;/P&gt;

&lt;P&gt;if you only add those fields as index-time fields (not recommended, btw: &lt;A href="http://docs.splunk.com/Documentation/Splunk/6.3.0/Data/Configureindex-timefieldextraction"&gt;http://docs.splunk.com/Documentation/Splunk/6.3.0/Data/Configureindex-timefieldextraction&lt;/A&gt;) or as search-time fields, and not into the source log files themselves, it will not need more license, because license usage is based on the amount of &lt;CODE&gt;_raw&lt;/CODE&gt; data sent into Splunk.&lt;BR /&gt;
So, if you want to add field extractions based on the existing log source, take a look at the docs (&lt;A href="http://docs.splunk.com/Documentation/Splunk/6.3.0/Knowledge/Createandmaintainsearch-timefieldextractionsthroughconfigurationfiles"&gt;http://docs.splunk.com/Documentation/Splunk/6.3.0/Knowledge/Createandmaintainsearch-timefieldextractionsthroughconfigurationfiles&lt;/A&gt;) to learn more about search-time field extraction.&lt;/P&gt;
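
&lt;P&gt;As a minimal sketch, a search-time extraction is a single stanza in &lt;CODE&gt;props.conf&lt;/CODE&gt; on the search head; the sourcetype and field name below are hypothetical:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# props.conf on the search head (or in an app's local/ directory)
# hypothetical sourcetype and field name, for illustration only;
# EXTRACT-* rules run at search time, so _raw (and license usage) is untouched
[my_app:log]
EXTRACT-vlan = vlanid=(?&amp;lt;vlanid&amp;gt;\d+)&lt;/CODE&gt;&lt;/PRE&gt;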

&lt;P&gt;After reading and learning about &lt;CODE&gt;field extractions&lt;/CODE&gt;, I don't think you will need the second option ...&lt;/P&gt;

&lt;P&gt;Hope this helps ...&lt;/P&gt;

&lt;P&gt;cheers, MuS&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 08:50:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283881#M10751</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2015-10-20T08:50:43Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283882#M10752</link>
      <description>&lt;P&gt;Why would you need more fields to create differentiation?  Can you not use &lt;CODE&gt;tags&lt;/CODE&gt; and &lt;CODE&gt;eventtypes&lt;/CODE&gt; for this, combined with your site-specific knowledge of what each &lt;CODE&gt;host&lt;/CODE&gt; "is"?  Do you not have a CMDB that you can query to create a &lt;CODE&gt;lookup&lt;/CODE&gt; to help you differentiate hosts?  What kind of information do you think you need to add to each event?&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 13:32:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283882#M10752</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2015-10-20T13:32:42Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283883#M10753</link>
      <description>&lt;P&gt;We have tags and eventtypes, but at some point we need to extract specific data in an easier way; that is why we need additional fields.&lt;BR /&gt;
Yes, we do have a CMDB. Are you talking about the second option? I didn't know a CMDB could integrate with Splunk.&lt;BR /&gt;
We are adding additional fields like vlanid, ip address, datacenter_location, etc.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 18:48:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283883#M10753</guid>
      <dc:creator>s0rbeto</dc:creator>
      <dc:date>2015-10-20T18:48:59Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283884#M10754</link>
      <description>&lt;P&gt;You would do well to back up and FIRST explain your problem as clearly and completely as you can (without confusing the issue by discussing any kind of a solution).  The problem is that you are too deep into your preferred solution for anybody else to understand what the real problem is.  What is the real problem?  What is it that is "no longer easy"?   What is it that you &lt;EM&gt;really&lt;/EM&gt; need (and don't say "more fields")?  You need some way to do exactly what?&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 19:05:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283884#M10754</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2015-10-20T19:05:21Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283885#M10755</link>
      <description>&lt;P&gt;We have 3000 clients, and all forwarders are pushing data to Splunk; currently the events are indexed with "index", "sourcetype", and "host" and nothing more.  What we would like is to look up that data by additional fields, e.g. "environment: Production/nonprod", "vlanid", "location", "applications".&lt;BR /&gt;
The challenge is the license: we aren't sure whether adding more fields like those mentioned above would increase the volume of data pushed into Splunk.&lt;BR /&gt;
We are open to all solutions; we haven't implemented anything yet.&lt;BR /&gt;
What do you think?&lt;BR /&gt;
Thanks!&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 19:32:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283885#M10755</guid>
      <dc:creator>s0rbeto</dc:creator>
      <dc:date>2015-10-20T19:32:32Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283886#M10756</link>
      <description>&lt;P&gt;OK, I am doing exactly this for a client, using a nightly extract from the CMDB.  This DB already contains fields like &lt;CODE&gt;status&lt;/CODE&gt;, &lt;CODE&gt;environment&lt;/CODE&gt;, etc.  We just schedule a &lt;CODE&gt;dbquery&lt;/CODE&gt; and save the results to a lookup file with &lt;CODE&gt;outputcsv&lt;/CODE&gt;, and whenever we need to, we use that lookup file to augment our dataset.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 20:12:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283886#M10756</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2015-10-20T20:12:05Z</dc:date>
    </item>
    <item>
      <title>Re: opinion needed for forwarder management</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283887#M10757</link>
      <description>&lt;P&gt;Thank you, MuS.&lt;BR /&gt;
Yeah, we were thinking that too, but we would need to bring in more CPU power to handle the indexing.&lt;BR /&gt;
Thanks for the info.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2015 20:55:23 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/opinion-needed-for-forwarder-management/m-p/283887#M10757</guid>
      <dc:creator>s0rbeto</dc:creator>
      <dc:date>2015-10-20T20:55:23Z</dc:date>
    </item>
  </channel>
</rss>

