For quite some time, I too have been looking for a way to take any user, regardless of how many roles they inherit, and say, "This is exactly how this user should expect their experience to be." (Administering an adopted environment can be tough.)
The closest I have come, without making assumptions about how Splunk handles inheritance, is to create a new role with no properties set and configure it to inherit the same roles that the specific user has. Then you can run the following search (replacing [search_head] and [role] appropriately):
| rest splunk_server=[search_head] /services/authorization/roles/[role] | fields imported_*
The only thing this doesn't address is how the "Role-level concurrent..." settings affect the user when those settings overlap across multiple roles.
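As a rough illustration of why those overlapping settings are awkward, here is a minimal Python sketch of merging inherited role settings. The attribute names (capabilities, srchIndexesAllowed, srchJobsQuota, rtSrchJobsQuota) are modeled on Splunk's role attributes, but the role dicts are made-up examples and the merge logic is my assumption, not Splunk's documented inheritance behavior — list-valued settings are unioned, while conflicting scalar quotas are surfaced instead of resolved:

```python
# Hypothetical sketch: merging the settings a user inherits from several
# roles, WITHOUT assuming how Splunk resolves quota overlaps.

def effective_settings(roles):
    """Union list-valued settings (capabilities, indexes) across roles;
    collect conflicting scalar quotas instead of guessing a winner."""
    capabilities = set()
    indexes = set()
    quotas = {}  # setting name -> set of distinct values seen
    for role in roles:
        capabilities.update(role.get("capabilities", []))
        indexes.update(role.get("srchIndexesAllowed", []))
        for key in ("srchJobsQuota", "rtSrchJobsQuota"):
            if key in role:
                quotas.setdefault(key, set()).add(role[key])
    return {
        "capabilities": sorted(capabilities),
        "srchIndexesAllowed": sorted(indexes),
        # Settings where the inherited roles disagree -- exactly the
        # "Role-level concurrent..." ambiguity described above.
        "quota_conflicts": {k: sorted(v) for k, v in quotas.items() if len(v) > 1},
    }

# Made-up role definitions for illustration only:
role_a = {"capabilities": ["schedule_search"], "srchIndexesAllowed": ["main"], "srchJobsQuota": 10}
role_b = {"capabilities": ["rest_properties_get"], "srchIndexesAllowed": ["main", "notable"], "srchJobsQuota": 6}

merged = effective_settings([role_a, role_b])
print(merged["quota_conflicts"])  # conflicting srchJobsQuota values surface here
```

The point of the sketch is that the list-valued settings merge cleanly, while the concurrent-search quotas have no obvious single answer — which is exactly what the REST trick above can't tell you.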
So, in any scenario where all data has searchable copies at the site local to a search head, disabling affinity on that search head actually introduces (at least some) performance overhead? And based on that, disabling search affinity should be reserved for scenarios where the search head is not local to searchable copies of all the data it would be searching?
Thank you very much for this info. It helps a lot!
I also have a follow-up question, if you'd oblige. To squeeze every bit of performance out of our build-out, which includes a low-latency, high-bandwidth DR site, and considering the information you have provided here, it seems I COULD gain performance by:
1. Distributing indexing across both sites
2. Configuring a replication factor that ensures data redundancy
3. NOT configuring a multisite search factor
4. Disabling affinity on the search-heads
In this way, I would expect to get the full benefit of distributing search and index load across all indexers, and the search head would never receive duplicate data that it would have to dedup. The downside is that if a site were to fail, searchable copies would have to be rebuilt at the surviving site before the data could be searched again. Does this all sound correct?
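For concreteness, a hedged sketch of what steps 1-4 might look like in server.conf. The setting names (multisite, available_sites, site_replication_factor, site_search_factor, site) are real Splunk clustering settings, but the values are assumptions about this hypothetical two-site layout, not a tested recommendation:

```ini
# On the cluster master -- multisite replication, but only one
# searchable copy per bucket (no cross-site search factor):
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:1
```

```ini
# On each search head -- site0 disables search affinity (Splunk 6.3+):
[general]
site = site0
```

With site_search_factor at total:1, each bucket is searchable in exactly one place, so a site0 search head should never see duplicate results; the trade-off is the post-failure rebuild described above.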
I want to start with a couple of statements that I'd like to be corrected on if I'm interpreting them incorrectly.
In a single-site indexer cluster, data can be replicated, but only one "active"/searchable copy serves searches at any time.
In a multisite indexer cluster, the search affinity allows for replication of the active portion of the searchable data, so that each site can have an actively searchable copy.
In Splunk 6.3, a search head can be configured with a site value of "site0", which disables its search affinity.
So, my questions are:
Given an exaggerated example: a 2-site multisite cluster where site1 collects ALL (100%) of the data, and site2 acts as a replicated store with a searchable copy, for DR or continuity or whatever.
1. I expect a search head using site0 to distribute its search load across both sites, but would it receive data back from both sites, or just from the site where the data was first indexed?
2. If it's returned from both sites, will both sites return the full amount of data that is searchable on them, duplicating the returned data? And if so, would the search head deduplicate that data, or would the indexers return an intelligent portion of the data?
3. If it's returned from only the origination site, I guess that means it misses the performance boost it might get if that search load were balanced over the indexers at both sites?
(#3 isn't really a question, as much as an observation)
Any insight would be helpful, as it affects an active build-out that could change based on how this all actually works behind the scenes.
Thank you in advance,
I'm commenting here hoping that my experience might help someone else experiencing my variation of this problem. It initially manifested the same way the OP illustrated (which brought me to this post), but was only partially fixed by force-reloading the broken pages. The part the cookie-clearing method didn't fix was the broken menus, which persisted.
After I upgraded, I initially had the same broken images along with broken menus, in both Firefox and Chrome, on the instances where I did NOT have an Apache proxy handling authentication; but the broken-images portion of the problem didn't last long. I believe it was fixed because I have a habit of force-reloading a webpage (Ctrl+F5) anytime I see broken images or menus.
With that said, even though the broken images were fixed, there continued to be certain pages/locations where the navbars and appbars were just broken; and to confuse me even more, it magically started working once, for about 2 minutes, only to break again shortly afterwards. The fastest repeatable test I found was navigating to the indexer clustering page on an instance which didn't have clustering configured yet. I tried clearing cookies, not using the FQDN, using the IP instead of the DNS name, navigating to/from different pages with different contexts, modifying Cookie.py as instructed by rbal, and a few other random things. Eventually I created a new host with a fresh install, thinking that running browsers side by side with a working instance would give me some clues to how I could fix it, but I was bummed to find that the fresh install exhibited the same behavior.
When thinking about my working instances, I figured that Apache handling the auth was somehow the fix, so I was going to install and configure it on my test instance. But before I did, I realized I had overlooked one Splunk setting that differed regardless of the proxy: the non-proxied systems weren't using HTTPS. Finally! Enabling Splunk Web SSL and logging in with the https prefix fixed the menus... disabling it and logging in with the http prefix broke them again. I toggled back and forth between SSL and non-SSL a few times with consistent results.
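For anyone wanting to reproduce the toggle, this is the web.conf setting involved. enableSplunkWebSSL is a real Splunk Web setting, though the snippet below is my sketch of the change, not an excerpt from my actual config (a restart of Splunk Web is needed for it to take effect):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# true  -> log in via https://... (menus worked for me)
# false -> log in via http://...  (menus broke again)
enableSplunkWebSSL = true
```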
Anyways, I know this was a bit long-winded, but I hope it helps someone.