<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Deployment Server: Best practices for scaling in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266120#M10116</link>
<description>&lt;P&gt;We have tested with 5000 clients and a 30-minute polling interval; it works smoothly.&lt;BR /&gt;
The only problem is the time taken to deploy configuration changes to all the clients (which may be acceptable for most customers), but you need to plan for it. A good reference is &lt;A href="https://docs.splunk.com/Documentation/Splunk/6.5.2/Updating/Calculatedeploymentserverperformance"&gt;this&lt;/A&gt;. For 5000 clients and a 50MB app, it takes about 50 minutes to apply the changes.&lt;BR /&gt;
Also, as you scale out, serverclass.conf may become unmanageable. We script it from CSV files rather than using wildcards, to prevent conflicts.&lt;/P&gt;</description>
    <pubDate>Wed, 22 Feb 2017 21:41:30 GMT</pubDate>
    <dc:creator>koshyk</dc:creator>
    <dc:date>2017-02-22T21:41:30Z</dc:date>
    <item>
      <title>Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266114#M10110</link>
      <description>&lt;P&gt;We've currently got just one DS in our environment handling about 1100 forwarders. Performance has been pretty stable at this level, but I'm wondering what the cap is. We may be tasked with adding another 5000-6000 more UFs to capture logs from our workstations (thanks, Splunk, for removing Win7 support in 6.5+ by the way /s). I'm wondering how other admins balance their clients vs multiple (if necessary) deployment servers.&lt;/P&gt;

&lt;P&gt;I'm wondering if it may be more feasible to configure Windows Event Collector and stand up a couple 2012 boxes, then collect using the UF/HF from there for this new set of clients.&lt;/P&gt;</description>
      <pubDate>Tue, 31 Jan 2017 19:59:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266114#M10110</guid>
      <dc:creator>coltwanger</dc:creator>
      <dc:date>2017-01-31T19:59:50Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266115#M10111</link>
      <description>&lt;P&gt;Hello coltwanger,&lt;BR /&gt;
we have tested DS performance on a 2-CPU box with 10,000 deployment clients (10 apps, 10 server classes). You should be fine, but a lot depends on how many apps and server classes you have and what your expectations are with respect to deployment times.&lt;BR /&gt;
Many larger customers also tune their phoneHomeInterval to a larger number to help with Deployment server load.&lt;/P&gt;
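&lt;P&gt;As a sketch, a longer phone-home interval is set in deploymentclient.conf on each client (the 30-minute value below is illustrative, not a recommendation):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# deploymentclient.conf on each deployment client
[deployment-client]
# default is 60 seconds; raising it reduces load on the deployment server
phoneHomeIntervalInSecs = 1800
&lt;/CODE&gt;&lt;/PRE&gt;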

&lt;P&gt;We don't generally recommend going the WEC server route, because it requires custom processing to preserve the source hostname and causes some of our TAs for Windows-related apps to not work properly. I would stay away from that if you can. &lt;/P&gt;

&lt;P&gt;HTH! &lt;/P&gt;</description>
      <pubDate>Tue, 31 Jan 2017 22:57:27 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266115#M10111</guid>
      <dc:creator>s2_splunk</dc:creator>
      <dc:date>2017-01-31T22:57:27Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266116#M10112</link>
<description>&lt;P&gt;If you use intermediate heavy forwarders, you can also leverage them as intermediate deployment servers.&lt;BR /&gt;
See the picture below for a high-level overview:&lt;/P&gt;

&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="alt text"&gt;&lt;img src="https://community.splunk.com/t5/image/serverpage/image-id/2406i0C0BB4B15C390863/image-size/large?v=v2&amp;amp;px=999" role="button" title="alt text" alt="alt text" /&gt;&lt;/span&gt;&lt;BR /&gt;
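&lt;/P&gt;

&lt;P&gt;In practice the chaining is just each tier's deploymentclient.conf pointing at the next hop (host names below are hypothetical):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# deploymentclient.conf on each UF behind the intermediate tier
[target-broker:deploymentServer]
targetUri = intermediate-hf.example.com:8089

# deploymentclient.conf on the intermediate HF itself,
# which is in turn a client of the central deployment server
[target-broker:deploymentServer]
targetUri = central-ds.example.com:8089
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;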
If you need additional details, let me know.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 14:04:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266116#M10112</guid>
      <dc:creator>mirkoneverstops</dc:creator>
      <dc:date>2017-02-01T14:04:25Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266117#M10113</link>
      <description>&lt;P&gt;This is a good point, and this is how we forward logs out of the DMZ, but I hadn't considered using them as Deployment Servers in our internal network (duh). Thanks for the additional info!&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 17:14:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266117#M10113</guid>
      <dc:creator>coltwanger</dc:creator>
      <dc:date>2017-02-01T17:14:59Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266118#M10114</link>
<description>&lt;P&gt;Off-topic side note: if you do use intermediary forwarders, make sure you have at least twice as many forwarders as indexers, to prevent uneven event distribution across your indexers. Intermediary forwarding tiers often introduce such issues when not properly architected.&lt;BR /&gt;
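&lt;/P&gt;

&lt;P&gt;As a sketch, parallel pipelines are enabled in server.conf on the intermediary forwarder (the value 2 is illustrative):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# server.conf on the intermediary forwarder
[general]
# each pipeline maintains its own connection to the indexers,
# so one forwarder load-balances like two
parallelIngestionPipelines = 2
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;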
You can employ parallel ingestion pipelines on your forwarders to make each behave like multiple instances.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 19:22:23 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266118#M10114</guid>
      <dc:creator>s2_splunk</dc:creator>
      <dc:date>2017-02-01T19:22:23Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266119#M10115</link>
      <description>&lt;P&gt;Would you still experience the uneven distribution of events even with forceTimebasedAutoLB enabled in outputs.conf on the intermediate forwarder? &lt;/P&gt;
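&lt;P&gt;For reference, the settings in question sit in outputs.conf on the intermediate forwarder (group name, hosts, and values below are hypothetical):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# outputs.conf on the intermediate forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# switch indexers on the interval even mid-stream
forceTimebasedAutoLB = true
autoLBFrequency = 30
&lt;/CODE&gt;&lt;/PRE&gt;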

&lt;P&gt;We're rolling 6 indexers and have one IF in the DMZ for ACL reasons, and one internally as a syslog relay (with rsyslog). It doesn't appear to be an issue at this point (ingesting 250GB-300GB/day) but that doesn't necessarily mean it won't cause us issues as we expand. &lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 19:27:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266119#M10115</guid>
      <dc:creator>coltwanger</dc:creator>
      <dc:date>2017-02-01T19:27:29Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266120#M10116</link>
<description>&lt;P&gt;We have tested with 5000 clients and a 30-minute polling interval; it works smoothly.&lt;BR /&gt;
The only problem is the time taken to deploy configuration changes to all the clients (which may be acceptable for most customers), but you need to plan for it. A good reference is &lt;A href="https://docs.splunk.com/Documentation/Splunk/6.5.2/Updating/Calculatedeploymentserverperformance"&gt;this&lt;/A&gt;. For 5000 clients and a 50MB app, it takes about 50 minutes to apply the changes.&lt;BR /&gt;
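&lt;/P&gt;

&lt;P&gt;At this scale, one way to keep serverclass.conf manageable is explicit, script-generated whitelist entries instead of wildcards (class, app, and host names below are hypothetical):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# serverclass.conf, generated by script from a CSV of hosts
[serverClass:linux_uf]
whitelist.0 = host0001.example.com
whitelist.1 = host0002.example.com
whitelist.2 = host0003.example.com

[serverClass:linux_uf:app:outputs_all]
restartSplunkd = true
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;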
Also, as you scale out, serverclass.conf may become unmanageable. We script it from CSV files rather than using wildcards, to prevent conflicts.&lt;/P&gt;</description>
      <pubDate>Wed, 22 Feb 2017 21:41:30 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266120#M10116</guid>
      <dc:creator>koshyk</dc:creator>
      <dc:date>2017-02-22T21:41:30Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266121#M10117</link>
<description>&lt;P&gt;Can you please explain a bit how to use an intermediate deployment server?&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Which directories on the heavy forwarder are used? (only $SPLUNK_HOME/etc/apps for both, receiving and pushing apps?)&lt;/LI&gt;
&lt;LI&gt;Usage of repositoryLocation and/or targetRepositoryLocation setting (only repositoryLocation = $SPLUNK_HOME/etc/apps?)&lt;/LI&gt;
&lt;LI&gt;Can targetRepositoryLocation be used at the serverClass level?
According to the documentation this is not possible, but it would be needed when connecting both heavy and universal forwarders to the central deployment server (so only repositoryLocation can be used in this case?)&lt;/LI&gt;
&lt;LI&gt;stateOnClient is used to enable the app contents only on the target universal forwarder?&lt;/LI&gt;
&lt;LI&gt;The serverclass.conf for the heavy forwarder is deployed by a separate app?&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 30 Jan 2018 12:44:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/266121#M10117</guid>
      <dc:creator>ics_ernst</dc:creator>
      <dc:date>2018-01-30T12:44:16Z</dc:date>
    </item>
    <item>
      <title>Re: Deployment Server: Best practices for scaling</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/562484#M24671</link>
<description>&lt;P&gt;Try the following answer:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.splunk.com/t5/All-Apps-and-Add-ons/Deployment-Server-scalability-best-practices/m-p/562482/thread-id/74731#M74732" target="_blank"&gt;https://community.splunk.com/t5/All-Apps-and-Add-ons/Deployment-Server-scalability-best-practices/m-p/562482/thread-id/74731#M74732&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 07 Aug 2021 04:46:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Deployment-Server-Best-practices-for-scaling/m-p/562484#M24671</guid>
      <dc:creator>hrawat</dc:creator>
      <dc:date>2021-08-07T04:46:34Z</dc:date>
    </item>
  </channel>
</rss>

