All Posts


This is actually an MSI quirk. I've seen it happen with various software packages over the last 20+ years. And yes, it's frustrating.
Hi @gcusello , I am currently using a Splunk Enterprise license, but at the moment I do not have the email address that was used to purchase the license.
Ultimately, it was time for a new machine. Upshot: I'm not installing Splunk on the new machine, to keep this from happening again.
I'm trying to take a single-node Splunk Enterprise system and expand it to a cluster with an additional search head and indexers. I copied the existing install to a new system and that worked perfectly. Then I added the cluster manager and indexers, and all of the settings from the old system that had been copied to the search head were gone. I'm assuming that I put the copy of the single node into the wrong role, but I'm not sure which role I should have picked.
Yes, the replication_host setting only applies to search head clusters.  That is what the second sentence in the description is trying to say. Have you tried KVStore backup/restore?  It will transfer data from one SH to another, but is not suitable for keeping the two in sync - that is what replication is for.
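If it helps, the backup/restore round trip is roughly this (a sketch only - the archive name kvstore_copy is arbitrary, and the default backup location under var/lib/splunk/kvstorebackup is from memory, so verify it on your version):

# On search1: create the archive
/opt/splunk/bin/splunk backup kvstore -archiveName kvstore_copy

# Copy the resulting archive to search2
scp /opt/splunk/var/lib/splunk/kvstorebackup/kvstore_copy.tar.gz search2:/opt/splunk/var/lib/splunk/kvstorebackup/

# On search2: restore it
/opt/splunk/bin/splunk restore kvstore -archiveName kvstore_copy

Keep in mind this is a point-in-time copy, not continuous replication.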
I want to replicate search1's KV store to search2. The two search heads are standalone, not clustered. I found the replication_host option in the [kvstore] stanza of server.conf. Does this option only work in a clustered environment?
This is the error I get when trying to start SOAR on the warm standby:

Splunk SOAR is in standby. Starting all services except automation daemons
Starting Database server (PostgreSQL): [ OK ]
Starting Connection pooler (PgBouncer): [ OK ]
Checking database connectivity: [ OK ]
Checking component versions: [ OK ]
Starting Supervisord: [ OK ]
Starting Splunk SOAR daemons: [ OK ]
Checking Supervisord processes: [FAILED]
add_to_es_index failed to start. Check /opt/soar/var/log/phantom/add-es-index-stderr.log for more info.
Splunk SOAR startup failed.

I saw there is a known issue for this: https://docs.splunk.com/Documentation/SOARonprem/6.3.1/ReleaseNotes/KnownIssues
2024-12-03 PSAAS-20901 supervisord failing to start on warm standby instance

Does anyone have a workaround or fix for this issue?
1. Why would you fiddle with the license manager? (Unless it's on the CM, which is not a very good idea.)
2. Why copy anything from var/run?
3. Switching indexers between CMs is asking for trouble. I'd replace the CM in place.
You can download IT Essentials Work, which is essentially ITSI without a license. You can upgrade it to full ITSI later by installing an ITSI license. https://splunkbase.splunk.com/app/5403
There are some apps on splunkbase which might help you: https://splunkbase.splunk.com/app/7384 https://splunkbase.splunk.com/app/6724  
Hi all, I would like to migrate our current cluster master to a new server. Here's what I gather the process to be. If someone can take a look and let me know if there's anything missing, that'll be much appreciated. Thank you! Additionally, should I enable cluster maintenance mode on the old cluster master prior to the migration?

======================== Migrate the Cluster Master ====================

- Stop the splunk service on both the old and new cluster master:
/opt/splunk/bin/splunk stop

- On the old Cluster Master, find the encrypted passwords, decrypt them to clear text, and save these:
find /opt/splunk/etc -name '*.conf' -exec grep -inH '\$[0-9]\$' {} \;
/opt/splunk/bin/splunk show-decrypted --value '$encryptedpassword'

- Copy files to the new Cluster Master:
scp -r /opt/splunk/var/run/splunk/cluster/remote-bundle/ new_splunkmaster:/opt/splunk/var/run/splunk/cluster/remote-bundle/
scp -r /opt/splunk/etc/master-apps/ new_splunkmaster:/opt/splunk/etc/
scp -r /opt/splunk/etc/system/local/server.conf new_splunkmaster:/opt/splunk/etc/system/local/

- Make sure the above decrypted the two main passwords below, and replace them in the copied server.conf, in clear text, on the new Cluster Master; they will be re-encrypted when it is restarted.
[general]
sslPassword=
[clustering]
pass4SymmKey=

- Start splunk on the new Cluster Master:
/opt/splunk/bin/splunk start

- Point the indexers to the new Cluster Master:
/opt/splunk/bin/splunk edit cluster-config -mode peer -manager_uri https://new_splunkmaster:8089 -replication_port 9887 -secret new_splunkmaster

- Point the search heads to the new Cluster Master:
/opt/splunk/bin/splunk edit cluster-config -mode searchhead -manager_uri https://new_splunkmaster:8089 -secret new_splunkmaster

======================== Migrate the License Manager ====================

- Promote a license peer to be the manager:
On the peer, navigate to Settings > Licensing. Click Switch to local manager. On the Change manager association page, choose Designate this Splunk instance as the manager license server. Click Save. Restart the Splunk Enterprise services. On the new license manager, install your licenses. See Install a license.

- Configure the license peers to use the new license manager:
On each peer (indexer / search head / deployer), navigate to Settings > Licensing. Click Switch to local manager. Update the Manager license server URI to point at the new license manager. Click Save. Restart the Splunk Enterprise services.

- Demote the old license manager to be a peer:
On the old license manager, navigate to Settings > Licensing. Click Change to peer. Click Designate a different Splunk instance as the manager license server. Update the Manager license server URI to point at the new license manager. Click Save. Stop the Splunk Enterprise services. Using the CLI, delete any license files under $SPLUNK_HOME/etc/licenses/enterprise/. Start the Splunk Enterprise services.
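I'm also planning to sanity-check afterwards with the following, assuming these are still the right commands on current versions:

# On the new Cluster Master: confirm all peers have re-registered and the cluster is complete
/opt/splunk/bin/splunk show cluster-status --verbose

# On an indexer or search head: confirm it now points at the new Cluster Master
/opt/splunk/bin/splunk list cluster-config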
Settings in the [default] stanza apply if they are not mentioned in a specific stanza. IOW, if you have

[default]
frozenTimePeriodInSecs = 36000000
[ubuntu]
frozenTimePeriodInSecs = 72000000
[rhel]
someOtherSetting = foo

the ubuntu index will have a retention period of 72,000,000 seconds and the rhel index will have a retention period of 36,000,000 seconds.
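If you want to confirm what actually applies to a given index once all the layering is done, btool shows the merged result (using the ubuntu stanza from the example above):

/opt/splunk/bin/splunk btool indexes list ubuntu --debug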
You can use the API to perform normal searches. Theoretically, you could retrieve indexed events and reingest them on the receiving side, but that is far from convenient and can cause loads of problems.
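For reference, the plain REST search flow looks something like this (a rough sketch - host, credentials, and the search itself are placeholders):

# Run a search and stream the results back as JSON
curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs/export \
  -d search="search index=main earliest=-15m" \
  -d output_mode=json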
You can't do it directly, since when you do timechart by a field, the results get split. So you have to improvise. EDIT: Missed the fact that it was avg(), not sum(). Of course summing averages is not the way to go, so @ITWhisperer 's solution is the one to go for. The obvious solution already provided is timechart | addtotals. You could also manually bin _time and use stats, but it boils down to the same thing. Several caveats:
1) Be careful with rounding.
2) Do fillnull if you expect the by-field to be empty sometimes; otherwise your total will be wrong.
3) Use either limit=0 or useother=t - without them you'll lose data from the sum.
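For the sum() case, the pattern with those caveats applied would look something like this (a sketch with made-up index and field names):

index=web sourcetype=access_combined
| timechart span=1h limit=0 useother=f sum(bytes) BY host
| fillnull value=0
| addtotals fieldname=Total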
Yes, and I don't think that's what I want. That seems to sum the split values; I want the non-split (effectively average) value. If there were a similar avgtotals command, that would probably be what I'm looking for.
The goal is to calculate an overhead value over a span of 1 second. Overhead is calculated as the difference between totaltime and routingtime. Then, for each host as identified by hostname, create a line chart that shows the overhead for each host, and include another line on the chart that shows the average overhead across all hosts. Here are a few anonymized sample records:

{"severity":"Audit","hostname":"ahost02","received":"2025-01-14T19:12:44.623Z","protocol":"http","routingtime":189,"totaltime":234}
{"severity":"Audit","hostname":"ahost01","received":"2025-01-14T19:12:44.650Z","protocol":"https","routingtime":27,"totaltime":78}
{"severity":"Audit","hostname":"ahost01","received":"2025-01-14T19:12:44.634Z","protocol":"http","routingtime":36,"totaltime":74}
{"severity":"Audit","hostname":"ahost02","received":"2025-01-14T19:12:44.427Z","protocol":"http","routingtime":205,"totaltime":220}
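One approach I'm considering, assuming _time is already parsed from the received field and that the index/sourcetype names below are placeholders:

index=audit sourcetype=audit_json severity=Audit
| eval overhead = totaltime - routingtime
| bin _time span=1s
| stats avg(overhead) AS overhead BY _time hostname
| appendpipe
    [ stats avg(overhead) AS overhead BY _time
    | eval hostname="average (all hosts)" ]
| xyseries _time hostname overhead

The appendpipe adds one extra row per second with the average of the per-host values, so it shows up as its own line on the chart.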
I cannot see how this could be done without any configuration on the on-prem side. Usually clients approve some configuration changes if they really want this, once those options have been explained to them.
And if the client does not accept any type of configuration, is it possible to extract the information or events using Splunk's APIs?
@isoutamo Sorry, my bad. Not sure how I ended up finding that post. I will keep it in mind.
What does your expected output look like?