
Using DFS File replication for Knowledge Bundle syncing

Explorer

As the title states, has anyone tried this for Windows Search Head Pools?

Our file server is across the WAN from our search heads, and I fear we'll run into performance issues using a UNC path for the knowledge bundle. Curious if anybody has tried DFS Replication for knowledge bundle syncing across search heads?


Re: Using DFS File replication for Knowledge Bundle syncing

New Member

I just tried this (DFS mounted on CentOS 6.5 via Samba 4).

Although I'm able to add/remove content with full rights, search head pooling is not working. I always receive:
"Failed to lock /mnt/DFS/splunk/etc/pooling/pooling.ini with code -1, possible reason: File exists"

Perhaps it works on Windows...
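One way to narrow this down is to probe whether the mount honors exclusive file locks at all, which CIFS/Samba mounts often handle poorly. This is only a sketch and does not reproduce Splunk's exact locking code; the path is a placeholder you'd point at a file on the DFS mount:

```python
# Sketch: probe whether a filesystem supports exclusive advisory locks.
# Splunk's own locking may differ; this just rules out a mount that
# cannot lock at all. Replace the path with a file on your DFS mount.
import fcntl
import os

def can_lock(path):
    """Return True if an exclusive, non-blocking flock succeeds on path."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.flock(fd, fcntl.LOCK_UN)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)

# True on a local filesystem; the interesting result is on the mount.
print(can_lock("/tmp/locktest.ini"))
```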


Re: Using DFS File replication for Knowledge Bundle syncing

Explorer

We successfully implemented DFS replication in place of the shared mounted path.

We have 3 search heads, all configured with search head pooling enabled. Each SH has this configured in $SPLUNK_HOME/etc/system/local/server.conf:

[pooling]
state = enabled
storage = c:\Splunk

We ran into an issue where saved search alerts were being run from multiple search heads, so we dedicated a single SH as our "job" server. The other 2 search heads have the pipeline scheduler disabled via this setting in $SPLUNK_HOME/etc/system/local/default-mode.conf:

[pipeline:scheduler]
disabled = true
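If you want to sanity-check which pool members actually have the scheduler pipeline off, a rough check like the one below can parse a copy of each host's default-mode.conf (a sketch, not Splunk tooling; Splunk's layered config via btool is the authoritative view):

```python
# Sketch: check whether a default-mode.conf disables the scheduler
# pipeline. This reads a single file; Splunk's effective config is
# layered, so treat the result as a hint, not the final word.
from configparser import ConfigParser

def scheduler_disabled(conf_path):
    """True if [pipeline:scheduler] sets disabled = true in this file."""
    cp = ConfigParser()
    cp.read(conf_path)  # a missing file simply yields the fallback
    return cp.getboolean("pipeline:scheduler", "disabled", fallback=False)
```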

The only other configuration necessary is DFS Replication itself: each search head replicates the c:\Splunk directory to the other 2 search heads.

This setup has worked very well for us for about a year now. One thing to note, though: the C:\Splunk\var\run\splunk\dispatch directory is extremely active, constantly creating, updating, and deleting files. Because of this, on a very busy system with lots of users, DFS may have trouble keeping up with all the changes.
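Before committing to this, it may be worth sampling how fast the dispatch directory actually churns. A minimal sketch (the path is an example; point it at your pooled dispatch directory):

```python
# Sketch: estimate dispatch-directory churn by listing it twice and
# counting entries created and removed in between. High numbers over a
# short interval suggest DFS Replication will struggle to keep up.
import os
import time

def dispatch_churn(path, interval=5.0):
    """Return (created, removed) top-level entry counts over `interval` seconds."""
    before = set(os.listdir(path))
    time.sleep(interval)
    after = set(os.listdir(path))
    return len(after - before), len(before - after)

# Example (path is an assumption for a pooled Windows SH):
# print(dispatch_churn(r"C:\Splunk\var\run\splunk\dispatch", interval=10))
```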


Re: Using DFS File replication for Knowledge Bundle syncing

New Member

Regarding http://wiki.splunk.com/Community:Deploy:How_To_Set_Up_Search_Head_Pooling_and_Shared_Bundle
you should put your SHs behind an LVS load balancer, with all of them sharing the same "pooling" folder. That way, if your job SH goes down, you won't need to enable the pipeline scheduler manually (and you will only be replicating the data across dedicated DFS servers instead of the Splunk servers themselves).

That should avoid the DFSR issues.
