We are using a /api base URL. Is that correct for .splunkrc? It asks for a host, but in our environment we use a URL.
Thanks for your help!
.splunkrc
# Splunk host (default: localhost)
host=splunkurl/api
# Splunk admin port (default: 8089)
port=443
# Splunk username
username=
# Splunk password
password=
# Access scheme (default: https)
scheme=https
# Your version of Splunk (default: 6.3)
version=9.0.4
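For what it's worth, the Splunk SDKs generally expect host to be a bare hostname or IP with no scheme and no path, and port to be the management port (8089 by default), so a hedged sketch of what the file would normally look like is below; the hostname is a placeholder, and whether 443 or 8089 is right depends on how your environment fronts the management interface:

```
# Splunk host: a bare hostname or IP, no scheme and no /api path
host=splunk.example.com
# Splunk management port (default: 8089)
port=8089
scheme=https
```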
The answer depends on your use case. One approach, which you seem to be alluding to, is to run a daily report to populate the summary index (with the results of the search, not the raw events). Your dashboard could then read from the summary index and append results from the raw index to cover the gap between the end of the previous day and the end of your time period. So, to answer your final question, the logs are not saved twice (unless the report populating the summary index is saving the raw events, but why would you do that, as it provides no benefit).
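As a hedged sketch of that "summary plus gap" pattern, assuming a summary index named summary, a raw index named main, and a nightly report source name that are all placeholders for your own names:

```
index=summary source="nightly_report" earliest=-7d@d latest=@d
| append
    [ search index=main sourcetype=access_combined earliest=@d latest=now
      | stats count by host ]
| stats sum(count) as count by host
```

The subsearch covers only the period since the last summary run, so each event is counted exactly once.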
Hello, thanks for your reply. I have a few heavy dashboards, most of which use the same base search, so I thought a summary index could be the right way to reduce the running time. As I understood from the documentation, I need to create a report that runs the base search, schedule it to run once a day, and send the results to the summary index. Is that right? If yes, should the dashboards search both the summary index and the "regular" index? Also, if the report results are saved in the summary index, does that mean the logs are saved twice, once in the "regular" index and once in the summary index?
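For reference, a scheduled report can write its results to a summary index either through the report's summary indexing setting or explicitly with the collect command; a minimal sketch, where the index and search names are placeholders:

```
index=main sourcetype=access_combined earliest=-1d@d latest=@d
| stats count by host
| collect index=my_summary
```

Only the stats rows land in my_summary, not the raw events, which is why the data is not duplicated in full.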
Thanks, I was reading the same page ^^ I'll keep you updated. I just want to verify before pushing the modification to the CM's server.conf (only) and restarting the CM daemon:
[clustering]
mode = manager
constrain_singlesite_buckets = false
Do you know how to perform this step: "To see how many buckets will require conversion to multisite, use services/cluster/manager/buckets?filter=multisite_bucket=false&filter=standalone=false before changing the manager node configuration."? Thanks
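One way to run that check without leaving the search bar is the rest command; a sketch, assuming you run it on the manager node with sufficient permissions, and that the field values match the filter names the docs use (treat both as assumptions to verify):

```
| rest splunk_server=local /services/cluster/manager/buckets count=0
| search multisite_bucket=false standalone=false
| stats count AS singlesite_buckets
```

Alternatively, the same endpoint with the filter parameters from the docs can be queried directly over the management port with any HTTP client.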
I haven't personally done it, but these docs describe migrating buckets to multisite: https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Migratetomultisite#How_the_cluster_migrates_and_maintains_existing_buckets
OK. You downloaded and installed the UF, and I assume you started it as well. But since you are apparently using a Deployment Server, did you configure your UF to connect to that DS?
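If not, the usual way is a deploymentclient.conf on the UF, followed by a forwarder restart; the host and port below are placeholders for your DS:

```
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

The equivalent CLI shortcut is `splunk set deploy-poll ds.example.com:8089`.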
I want to connect Splunk to a Linux server, and I downloaded the UF on the Linux server to get its security logs. After I created the server class and added clients to it, I deployed the UF to it and made two apps (one for nix and one for main) to receive logs. When I searched on the search head, no logs appeared; I think the error is in the nix app. Does anyone know what modifications need to be made to the nix app so that I can collect the security logs?
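In case it helps, collecting Linux security logs with the nix add-on typically comes down to enabling a monitor stanza in the app's local/inputs.conf; a hedged sketch, where the index name is an assumption and the file path varies by distro (/var/log/secure on RHEL-like systems, /var/log/auth.log on Debian-like ones):

```
[monitor:///var/log/secure]
disabled = false
sourcetype = linux_secure
index = os
```

The splunk user on the UF also needs read permission on that file, and the target index must exist on the indexers.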
Hi @maede_yavari, this isn't a Splunk issue: if you want more security by running the Splunk Universal Forwarder as a non-LOCAL SYSTEM user, you have to grant that user the rights to read your event log. You need a Windows technician for that, or you could accept running Splunk as SYSTEM. Ciao. Giuseppe
Hi, and thanks for the support. >>> If your cluster has been recently migrated from single site to multisite: NO >>> Restarting CM _might_ resolve your issue: Already done (CM only, plus a rolling restart, with no effect on RF/SF). The operation was to add new indexers and then decommission the old ones. It has been a multisite cluster since everything was first built. But I have constrain_singlesite_buckets=true on the CM and on the indexers, in etc/system/default/server.conf. Maybe these are buckets from the beginning of my infrastructure, from a time when the multisite cluster was not yet built and operational? Do you know the impact of changing constrain_singlesite_buckets to false? Many thanks
If your cluster has been recently migrated from single site to multisite, there might be issues with "dangling" non-multisite buckets, especially if you have constrain_singlesite_buckets=true. Restarting the CM _might_ resolve your issue, but it doesn't have to. In case it doesn't, it's probably a case for support.
You can export the data from the searches (used by panels) in CSV format, which Excel can read, but I am not aware of any standard feature to export a complete dashboard in XLS format. What would that even look like (when you could potentially have base searches, hidden panels, saved searches, etc.)?
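If you need the files generated server-side rather than through the browser's export button, each panel's search can also be scheduled with outputcsv at the end; a sketch, where the search and file name are placeholders:

```
index=main sourcetype=access_combined
| stats count by status
| outputcsv panel1_results.csv
```

The file lands under $SPLUNK_HOME/var/run/splunk/csv on the search head, from where it can be picked up and opened in Excel.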
Hi, we have a cloud instance, and we would like to do predictive storage analysis for future requirements. As part of this, I was trying to find the most accurate options available. While looking into it, I noticed that our daily ingestion is around 450-500 GB, but the Searchable storage (DDAS) has only increased by around 60 GB compared to the previous day. Could you please let me know whether I'm missing anything in this calculation? Secondly, is there a way to do predictive SVC and storage analysis (DDAS and DDAA) for future requirements?
Hello! Tell me, is there a ready-made solution in Splunk that makes it possible to save data from dashboards to Excel? This functionality is needed for all existing dashboards. It would be nice if an XLS item appeared on the export button. Thank you!
Hello, I'm encountering an issue with Splunk Forwarder on a Windows Server OS. When it runs under the "SplunkForwarder" user, it fails to send Sysmon logs. Surprisingly, the forwarding works correctly when the forwarder is configured to run as the "SYSTEM" user. While this resolves the immediate problem, I'm hesitant to continue using the "SYSTEM" user due to its extensive access to system resources. I'm seeking a better solution that allows the Splunk Forwarder to send Sysmon logs without compromising security. Any guidance on this matter would be greatly appreciated. Thank you.
The command was run in the search bar of my Monitoring Console; I have all 17 of my indexers "Up" with the right site repartition. I don't see the decommissioned indexer, which no longer has any splunkd running (splunkd has been disabled). Thanks