Getting Data In

How to set up usage of 2 auto-load-balanced indexers

mfrost8
Builder

We have 2 production auto-load-balanced indexers that currently receive all of our production data. Both run 4.2.1 on identical hardware (2 quad-core CPUs, 24GB RAM, 1TB RAID10 storage).

Up to this point, we'd been setting up the first server of the pair as the one people log in to and run searches from. That is, it's where we populate the apps, saved searches for users to run, dashboards, etc. It's also where user accounts are created for non-admins.

We'd been designating the second server as the one to run scheduled searches on, and in theory we wouldn't create user accounts there. That's partly because we didn't want users running searches there too, but also because we hadn't planned on copying the apps to both servers, to avoid the inevitable "why don't I see X when I go to this box?" question from users.

The problem, of course, is that if we let the scheduled searches include links, the URL always refers to the second server, because that's where the scheduled search runs (and where the cached search results are stored). So we end up having to create accounts for users on the second server anyway.

So I'm thinking this plan isn't going to work. We could just mix everything together -- apps synced on both machines, user accounts created on both machines, and scheduled searches running from either machine. That doesn't really sound appealing to me. Or we could run all the scheduled searches that users might receive from the first server, where they already have accounts, but then we're under-utilizing the second server.

Unfortunately, the users and data here are varied; there isn't a single type of app, data, or user.

Any thoughts or suggestions are appreciated.

Thanks


gadjet
New Member

Sorry to revive an old post, but I was doing some searches and came across this question...

You could just set the URL that alert links point to by editing /opt/splunk/etc/system/local/alert_actions.conf:

[email]
from = Splunk Platform
hostname = http://indexer1:8000
reportPaperSize = a4
reportServerURL =
subject = Analytics Alert: $name$
mailserver = x.y.z.a
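Worth noting, assuming standard Splunk behavior: changes to alert_actions.conf generally don't take effect until you restart Splunk ($SPLUNK_HOME/bin/splunk restart). The hostname value is what gets substituted into the links in alert emails, so pointing it at the server where users already have accounts should avoid the "links go to the second server" problem.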

Simeon
Splunk Employee

The latest version of Splunk (4.2.1) has search head pooling. This means that you can have multiple instances of Splunk share search and app configuration data across both systems. Even the scheduled searches are synchronized.
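For reference, a minimal sketch of what enabling pooling looks like, assuming shared storage is already mounted at /mnt/splunk-pool on both servers (the path is a placeholder, not from this thread):

# Run on each pool member with Splunk stopped (placeholder path):
$SPLUNK_HOME/bin/splunk pooling enable /mnt/splunk-pool

# This writes a stanza along these lines into $SPLUNK_HOME/etc/system/local/server.conf:
[pooling]
state = enabled
storage = /mnt/splunk-pool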

It is important to note that the indexer is what typically performs the heavy lifting in searches. Therefore, if your intent is to have two search heads distributing to a single indexer, you will not gain any performance by having an additional search head. Instead, you should have a single search head distributing to many (additional) indexers so our map reduce technology is leveraged.

UPDATE:

Conceptually, any Splunk instance (except for the universal forwarder) can act as a search head. Therefore, you could have two servers that perform both indexing and searching that peer to each other. When search head pooling is configured, the core user configs are shared as well as the search jobs. So from a standpoint of managing scheduled searches, Splunk will maintain the schedule between both systems (no duplicates).
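To make the two-box peering concrete, here's a minimal sketch with placeholder hostnames and credentials (not from the thread). Each server lists the other as a search peer; the splunk add search-server CLI also takes care of the authentication key exchange between peers:

# On indexer1, add indexer2 as a search peer (do the mirror image on indexer2):
$SPLUNK_HOME/bin/splunk add search-server -host indexer2:8089 \
    -auth admin:changeme -remoteUsername admin -remotePassword changeme

# The resulting entry lands in $SPLUNK_HOME/etc/system/local/distsearch.conf:
[distributedSearch]
servers = indexer2:8089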


mfrost8
Builder

So for search head pooling, what happens if (when) the NAS share between the pooled servers goes down? Since that share would hold only a subset of the full set of configuration files, I'd expect Splunk could still function. If those configuration files are loaded into memory, that might be OK -- in particular, the config files that control how incoming data is indexed. Thanks.


Simeon
Splunk Employee
Splunk Employee

There is no preference for which server runs any given search; I believe it comes down to which server gets to it first. For example, if one server is slower (loaded), the other will run the job, and vice versa.


mfrost8
Builder

Interesting. I must have missed this as a new feature. Note that these are both indexers; we don't run separate search heads. I'm assuming search head pooling would still work in this case, even though it's on indexers rather than dedicated search heads?

I didn't see anything in the docs indicating how the saved searches run. While they'd be known on both servers, which server would run a given saved search?
