All Posts

Hi Splunkers, I have some doubts about using SSL for S2S communication. First, let me restate what I believe is certain:
1. SSL provides a better compression ratio than the default one: roughly 1:8 vs 1:2.
2. SSL compression does NOT affect the license. In general, no compression in Splunk affects the license. This means that if the data is X + Y before compression and X after it, the license consumed is still X + Y, not X.
3. From a security perspective, if I have multiple Splunk components, the best practice is to encrypt all of the flows. For example, with UF -> HF -> IDX, it is best to encrypt both the UF -> HF flow and the HF -> IDX flow.
Now, for a customer we have the following data flow: Log sources -> Intermediate forwarders -> Heavy forwarders -> Indexers. I know that we should avoid HFs and IFs where possible but, for various reasons, we need them in this particular environment. Two doubts arise.
Suppose we apply SSL only between the IF and the HF:
1. Data arrives compressed at the HF. When it leaves the HF for the indexers, is it still compressed? For example, suppose the original data is 800 MB in total: SSL is in place between the IF and the HF, so the HF has a tcp-ssl input on port 9997; SSL compression is applied, so the data is now 100 MB and arrives at the HF as 100 MB. When it leaves the HF for the indexers, is it still 100 MB?
2. Suppose instead we apply SSL across the entire S2S data flow: between the IF and the HF, and between the HF and the indexers. Besides a better security posture, what other advantages would we gain by going in this direction?
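For reference, a minimal sketch of how we would enable SSL on the IF -> HF hop described above (the hostname, port and certificate paths below are placeholders, not our real values):

# outputs.conf on the intermediate forwarder (placeholder values)
[tcpout:to_heavy_forwarders]
server = hf01.example.local:9997
clientCert = $SPLUNK_HOME/etc/auth/if_client.pem
sslPassword = <certificate password>
useClientSSLCompression = true

# inputs.conf on the heavy forwarder
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/hf_server.pem
sslPassword = <certificate password>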
OK, so we have 2 search heads and we want to migrate the Enterprise Security app from one search head to another. How should we do that, step by step, so that we don't face any issues?
I am pretty sure that I left a reply for you yesterday, but I cannot find it. Anyway, I have already tried what you suggested, thanks. Looking at metrics.log I found a line that I think demonstrates that Splunk does in some way receive the data (ev=21) but discards it:
02-19-2025 10:49:29.584 +0100 INFO Metrics - group=per_source_thruput, series="/opt/splunk/etc/apps/adsmart_summary/bin/getcampaigndata.py", kbps=0.436, eps=0.677, kb=13.525, ev=21, avg_age=-3600.000, max_age=-3600
host = splunkidx01 source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd
Maybe the issue is that avg_age and max_age are negative, so it is something about the timestamps the script produces for the events.
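If the negative ages really mean the script's timestamps land one hour in the future (which would match the +0100 offset), something like the props.conf sketch below might be a way to test it. The sourcetype name is only a placeholder for whatever sourcetype the scripted input writes to:

# props.conf on the instance that parses the scripted input
# ("adsmart:campaign" is a placeholder - use the input's real sourcetype)
[adsmart:campaign]
# Option 1: declare the timezone the script writes its timestamps in
TZ = Europe/Rome
# Option 2 (alternative): ignore the script's timestamps and use index time
# DATETIME_CONFIG = CURRENT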
Yes, this is my localhost, so I use the local admin user (the default one) and it has the maximum permissions.
@SHEBHADAYANA  Did you implement this? In the Advanced Options, configure the throttling settings and select the duration (in seconds) to suppress alerts. Throttling prevents this correlation search from generating duplicate notable events or alerts for the same issue every time it runs. If you apply grouping to one or more fields, throttling will be enforced on each unique combination of field values. For example, setting throttling by host once per day ensures that only one notable event of this type is generated per server per day. How can we suppress notable events in Splunk ITSI? https://community.splunk.com/t5/Splunk-ITSI/How-to-suppress-Notable-Events-in-ITSI/m-p/610503 
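For reference, the throttling options set in the UI end up as savedsearches.conf keys on the correlation search. A minimal sketch of throttling per host for 24 hours, assuming the correlation search behaves like a standard saved search (the stanza name is a placeholder):

# savedsearches.conf (stanza name is a placeholder)
[Example Correlation Search]
alert.suppress = 1
alert.suppress.period = 24h
alert.suppress.fields = host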
Hi @seiimonn! I noticed that every time I start AME Events, I get the following error. I appreciate your help. A.
2/20/25 8:40:34.605 AM   127.0.0.1 - splunk-system-user [20/Feb/2025:08:40:34.605 +0100] "GET /servicesNS/nobody/alert_manager_enterprise/messages/ame-index-resilience-default-error HTTP/1.0" 404 177 "-" "splunk-sdk-python/1.7.3" - - - 0ms
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving ale... See more...
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving alerts. What could be the possible reasons for this, and how can we troubleshoot and resolve the issue?"
Hi @KKuser
1. For a user to use the Splunk support portal, should the user be granted access to the support portal? Don't they get the access inherently?
If you have multiple cloud instances within your organisation, these will be covered by individual "entitlements" within the Splunk Support Portal. This means that the operational and organisational contacts can be different for each, and that a contact on one entitlement is not automatically granted access to the other unless requested. For example, your IT department might be an operational contact for both, while a user in Department A might be a contact on one and a user in Department B a contact on the other. Users who are configured to use your Splunk instance do not automatically get added to the support portal; this must be done by request to Splunk Support/your account team, or by configuration by an existing portal admin in your organisation.
2. Company has 2 different instances of Splunk. Will the dashboard created in one be visible in another as well? Are the 2 instances independent of each other? Can you paint a picture for me, how they'd be related?
No. If you have 2 different instances of Splunk, then dashboards and other knowledge objects/searches etc. created in one will not automatically replicate to the other, as these are two different instances and not part of a Search Head Cluster. Data can be sent to one independently of the other, or you can have data sent to both. In other words, you might find that different data sources are sent to each, but some could be the same.
3. In order for me to know the answers to these questions, what concepts/topics should I know well?
Check out the Splunk Cloud Admin training, which gives details on how Splunk Cloud works, from getting data in to configuration of users/roles and apps.
Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @anooshac  Please be aware that if you are using Splunk >9.0.1 then you should use the V2 endpoints rather than the un-versioned endpoints mentioned in older answers, which are now deprecated.
Here is a sample cURL request to run a search and export the results as CSV using Splunk's /services/search/v2/jobs/export endpoint:
curl -k -u "admin:yourpassword" \
  -X POST \
  https://splunk_server:8089/services/search/v2/jobs/export \
  -d search="search index=_internal | head 10" \
  -d output_mode=csv
Explanation of the command:
curl: command-line tool used for making HTTP requests.
-k: allows connections to SSL sites without verified certificates (useful if Splunk uses self-signed certificates). This should be omitted if you have a valid SSL certificate on your management port.
-u "admin:yourpassword": the authentication credentials (username:password). Replace admin and yourpassword with your actual Splunk credentials. Alternatively, you can use token-based auth.
-X POST: specifies that this is a POST request, as required by Splunk's search export API.
https://splunk_server:8089/services/search/v2/jobs/export: the Splunk API endpoint for executing a search and exporting results immediately. Replace splunk_server with the actual Splunk host (e.g. splunk.yourcompany.com); 8089 is the default Splunk REST API management port.
-d search="search index=_internal | head 10": the actual Splunk search query. index=_internal queries the internal Splunk logs; head 10 limits the output to the first 10 results.
-d output_mode=csv: specifies that the output format should be CSV (other options include JSON and XML).
Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
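As an addendum: if you prefer the token-based auth mentioned above, a rough sketch of the same export using a bearer token instead of username/password (<your_token> is a placeholder for a token you create in Splunk Web under Settings > Tokens):
curl -k -H "Authorization: Bearer <your_token>" \
  -X POST \
  https://splunk_server:8089/services/search/v2/jobs/export \
  -d search="search index=_internal | head 10" \
  -d output_mode=csv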
Right now we have an online solution: in the central observability setup, the OTel collector sends data to Splunk.
Now I am looking for an offline solution, for the case where internet connectivity may be lost for 14+ hours. The on-premises cluster should send to both the FDR (factory data room) and central observability.
I need a solution like the above that also works offline: how can this be achieved when there is no internet connectivity for a period of time?
You must be running in FIPS mode. The settings will work fine in non-FIPS, but in FIPS mode, if you add even the one setting 'sslRootCAPath', your kvstore will fail. The only way to get the kvstore to run in FIPS mode with your own certs is to rename your certs to the default filepaths, which are $SPLUNK_HOME/etc/auth/server.pem and cacert.pem. You will also have to cat 'appsCA.pem' onto the end of your cacert.pem so that the Manage Apps UI works. Note that the kvstore is only needed on Search Heads, so you could disable or ignore it on non-SHs. The other problem you will still have is that you cannot set 'requireClientCert=true' in FIPS mode; whether the same applies in non-FIPS is something I still have to confirm.
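To make the rename approach above concrete, a rough sketch of the file operations involved (my_server_cert.pem and my_ca_chain.pem are placeholders for your own certificates; back up the original files first):
# keep [sslConfig] in server.conf at its defaults and swap the files instead
cp my_server_cert.pem $SPLUNK_HOME/etc/auth/server.pem
cp my_ca_chain.pem $SPLUNK_HOME/etc/auth/cacert.pem
# append appsCA.pem so the Manage Apps UI keeps working
cat $SPLUNK_HOME/etc/auth/appsCA.pem >> $SPLUNK_HOME/etc/auth/cacert.pem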
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
@anooshac  Please check these documentation links for a solution.
https://community.splunk.com/t5/Installation/What-is-a-good-way-to-export-data-from-Splunk-via-rest-API/m-p/551542
https://docs.splunk.com/Documentation/Splunk/9.4.0/Search/ExportdatausingRESTAPI
https://stackoverflow.com/questions/67525334/splunk-data-export-using-api
@anooshac  have you seen this post? export search results using curl - Splunk Community
@KKuser
Scenario 1: Independent Instances
If the two Splunk instances are completely independent:
Data ingested into one instance will not be visible in the other.
Dashboards, reports, alerts, and searches are all tied to the data present in their respective instances.
A dashboard created in Instance A will not be visible in Instance B unless manually exported and imported.
Use Case: This setup is common when different teams or departments need isolated environments, or when there are compliance or security boundaries.
Scenario 2: Connected Instances (Search Head Clustering)
If the instances are connected:
Search Head Clustering: If both instances are part of a search head cluster, dashboards can be shared across the cluster.
You could manually or automatically sync apps, including dashboards, between the instances.
@KKuser  There are 3 ways to file (or raise) a Support Case with Splunk. However, you will need to be assigned to your Company's active Enterprise or Global Entitlement before doing so, otherwise your case may be routed into our Community Queue, which is not often monitored by our Engineers.
1. Support Portal (the best way to file a case): Log in to "splunk.com", navigate to "Support & Services" at the top left of the page, and then click "Support Portal" in the drop-down list.
2. Email: You can email us at "support@splunk.com", which will file a default P3 case. Please note your email address must be associated with your account's entitlement.
3. Phone: You can call our Support line on (855) SPLUNK-S, or you can find local numbers at Contact Information & Splunk Locations | Splunk.
Thanks @livehybrid, not sure if it's related to the mail I sent or an automatic process, but at 3:00 AM I received an email that the app had been reinstated.
@KKuser  Working with Splunk Support https://www.splunk.com/en_us/pdfs/support/working-with-support.pdf 
@KKuser
1. For a user to use the Splunk support portal, should the user be granted access to the support portal? Don't they get the access inherently?
Users need to be granted access to the Splunk Support Portal. Access is not inherent and typically requires an active support entitlement. Users must log in with their credentials to access support resources. The support portal includes license management, support cases, downloads, and other resources that may contain sensitive or licensed information, so access is usually restricted to certain roles within an organization.
2. Company has 2 different instances of Splunk. Will the dashboard created in one be visible in another as well? Are the 2 instances independent of each other? Can you paint a picture for me, how they'd be related?
Yes, Splunk instances are generally independent of each other unless explicitly configured to share data. A dashboard created in one instance will not be visible in another unless: you export and import the dashboard manually, or you set up search head clustering or data replication between instances.
@zksvc I suggest raising a Splunk support ticket.