All Posts



@richgalloway Thank you so much for the clear answer!
The candidate always votes for itself.  Therefore, there are three votes for the new captain.
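To make the arithmetic concrete, here is a minimal sketch you can paste into a search bar (the field names are illustrative, not part of any clustering API):

```
| makeresults
| eval members=4, down=1, candidates=members-down
| eval majority=floor(members/2)+1
| eval votes=1+(candidates-1)
```

With four members, the majority is floor(4/2)+1 = 3; the candidate's own vote plus the two votes from its peers gives exactly 3, so the election succeeds even with one member down.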
Thanks for the sample. I opted to add a column "key" to my CSV file, with a wildcard before and after the color key (*blue*, for example), then add a lookup to the search after the inputlookup section: | lookup keywords.csv key as "String1" OUTPUT key. I'm not sure of the performance ramifications; I don't see any difference in run times.
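For reference, wildcard matching in a file-based lookup normally requires a lookup definition with match_type set in transforms.conf rather than a bare CSV filename; a sketch, assuming a lookup definition named "keywords" backed by keywords.csv:

```
# transforms.conf sketch (stanza name "keywords" is illustrative)
[keywords]
filename = keywords.csv
match_type = WILDCARD(key)
max_matches = 1
```

Without WILDCARD(key), entries like *blue* would only match the literal string including the asterisks.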
Hello. I have a question about the captain selection process. Let me ask using the example below.
1. In a cluster of four search heads, the captain goes down.
2. Among the remaining three, the search head whose timer expires earliest asks the remaining two to vote.
3. The remaining two search heads vote for the search head whose timer ended earliest.
4. Although two votes were received, the captain election fails because three votes are required under the majority rule.
This is the captain selection process as I understand it. However, when I tried it myself, even when one of the four search heads was down, a new captain was automatically elected from the three remaining search heads. How is this possible when there are not enough votes?
We are using an /api base URL; is that correct for .splunkrc, since it asks for a host and in our environment we use a URL? Thanks for your help!

.splunkrc
# Splunk host (default: localhost)
host=splunkurl/api
# Splunk admin port (default: 8089)
port=443
# Splunk username
username=
# Splunk password
password=
# Access scheme (default: https)
scheme=https
# Your version of Splunk (default: 6.3)
version=9.0.4
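For comparison, here is a sketch of a .splunkrc that keeps the pieces separate; the host value is normally a bare hostname, with scheme and port given on their own lines (splunk.example.com is an illustrative placeholder, not your real host):

```
# .splunkrc sketch: host is a hostname only, no scheme or URL path
host=splunk.example.com
port=443
scheme=https
username=admin
version=9.0.4
```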
Thanks, I can work with this!
The answer depends on your use case. One approach, which you seem to be alluding to, is to run a daily report to populate the summary index (with the results from the search, not the raw events). Your dashboard could then read from the summary index and append results from the raw index to cover the gap between the end of the previous day and the end of your time period. So, to answer your final question, the logs are not saved twice (unless the report populating your summary index is saving the raw events, but why would you do that? It provides no benefit).
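A sketch of that pattern (the index, sourcetype, and source names are illustrative assumptions). The scheduled daily report aggregates yesterday's events and writes the results, not the raw events, to the summary index:

```
index=web sourcetype=access_combined earliest=-1d@d latest=@d
| stats count BY status
| collect index=summary source=daily_status
```

The dashboard search then reads the summary for past days and appends an aggregation over the raw index to cover today's gap:

```
index=summary source=daily_status
| append
    [ search index=web sourcetype=access_combined earliest=@d
    | stats count BY status ]
| stats sum(count) AS count BY status
```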
Hello, thanks for your reply. I have a few heavy dashboards, most of which use the same base search, so I thought a summary index could be the right way to reduce the running time. As I understood from the documentation, I need to create a report that runs the base search, schedule it to run once a day, and send the results to the summary index; is that right? If yes, should I run the dashboards against both the summary index and the "regular" index? Also, if the report results are saved in the summary index, does that mean the logs are saved twice, once in the "regular" index and once in the summary index?
Thanks, I was reading the same page ^^ I'll keep you updated. I just want to verify before pushing the modification to the CM's server.conf (only) and restarting the CM daemon:

[clustering]
mode = manager
constrain_singlesite_buckets = false

Do you know how to perform this step: "To see how many buckets will require conversion to multisite, use services/cluster/manager/buckets?filter=multisite_bucket=false&filter=standalone=false before changing the manager node configuration." Thanks
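One way to hit that endpoint is over the management port with curl; a sketch, where the manager hostname and credentials are placeholders you would substitute:

```
curl -k -u admin:changeme \
  "https://manager.example.com:8089/services/cluster/manager/buckets?filter=multisite_bucket=false&filter=standalone=false&count=0"
```

The number of entries returned is the number of buckets that would require conversion to multisite.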
Can someone assist with it?
I haven't personally done it, but these docs describe migrating buckets to multisite: https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Migratetomultisite#How_the_cluster_migrates_and_maintains_existing_buckets
OK. You downloaded and installed the UF. I assume you started it as well. But as you are apparently using a Deployment Server, did you configure your UF to connect to that DS?
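For reference, the UF-to-DS connection is a small piece of configuration on the forwarder; a sketch, where the deployment server hostname is an illustrative placeholder:

```
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the UF
[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```

After a UF restart, the client should appear under Forwarder Management on the deployment server.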
I want to connect Splunk to a Linux server, so I downloaded the UF on the Linux server to get its security logs. After I created the server class and added clients to it, I installed the UF and made 2 apps (one for nix and one for main) to receive logs. When I searched on the search head, no logs appeared. I think the error is in the nix app. Does anyone know what modifications need to be made to the nix app so that I can collect the security logs?
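For context, the relevant piece of a nix-style app is usually a monitor stanza in inputs.conf; a sketch, assuming the RHEL-family path (Debian/Ubuntu systems use /var/log/auth.log instead), with the index and sourcetype as illustrative choices:

```
# inputs.conf in the app deployed to the UF (paths are illustrative)
[monitor:///var/log/secure]
sourcetype = linux_secure
index = main
disabled = 0
```

The splunk user on the UF also needs read permission on the monitored file, which is often the actual cause of "no logs appear".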
Hi @maede_yavari, this isn't a Splunk issue: if you want more security by running the Splunk Universal Forwarder as a non-LOCAL SYSTEM user, you have to give that user the permissions to read your Event Log. You'll need a Windows technician, or you could accept running Splunk as SYSTEM. Ciao. Giuseppe
@meshorer "<tag name>" in container:tags should work. The tag name is case-sensitive and has to be an exact match.
Hi, and thanks for the support.
>>> If your cluster has been recently migrated from single site to multisite: NO
>>> Restarting CM _might_ resolve your issue: Already done (CM only, and a rolling restart, with no effect on RP/SF)
The operation was to add new indexers and then decommission the old ones. It has been a multisite cluster since everything was built. But I have constrain_singlesite_buckets=true on the CM and on the indexers in etc/system/default/server.conf. Maybe these are buckets from the beginning of my infrastructure, from a time when the multisite cluster was not yet built and operational? Do you know the impact of changing constrain_singlesite_buckets to false? Many thanks
If your cluster has been recently migrated from single site to multisite, there might be issues with "dangling" non-multisite buckets, especially if you have constrain_singlesite_buckets=true. Restarting the CM _might_ resolve your issue, but it doesn't have to. In case it doesn't, it's probably a case for support.
You can export the data from the searches (used by panels) in CSV format, which Excel can read, but I am not aware of any standard feature to export a complete dashboard in XLS format. What would that even look like (when you could potentially have base searches, hidden panels, saved searches, etc.)?