All Posts

Hi, why do we have two fields, yearmonthday and yearmonth, in this query?
I am using the following query to display a result on a dashboard (the sample data resembles the data I actually use):

| makeresults
| eval zip="Test-10264,Production;Test-10262,Production;Test-102123,Production;MGM-1,Development;MGM-2,Development;MGM-3,Development;MGM-4,Development"
| makemv delim=";" zip
| mvexpand zip
| table zip _time
```End of sample data```
| rex field=zip "(?<ticket>.+?),(?<Status>.+$)"
| stats values(ticket) as tickets by Status
| stats count(tickets) as amount by Status

The issue I'm facing is that both columns in the resulting chart have the same color, but I want each column to have its own unique color (this doesn't have to be predefined; it would be fine if Splunk itself chose random colors). Thanks in advance!

Edit: typo
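One common workaround, as a sketch rather than the only approach: the query above produces a single series, "amount", so every column is drawn in that one series' color. Transposing the result so that each Status value becomes its own series lets the chart assign each one a distinct color:

| stats count(tickets) as amount by Status
``` turn each Status value into its own column/series ```
| transpose header_field=Status
| fields - column

After the transpose, Development and Production are separate series, so the chart colors them independently.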
@livehybrid Thank you, your solution almost works. I have a saved search "dashboard_linux_logs_table_caddy":

host="$select_hosts$" program="$select_daemon$" priority="$select_log_level$" | fields _time,program

And this Dashboard Studio dataSource:

"dataSources": {
    "ds_dashboard_linux_logs_table": {
        "type": "ds.search",
        "options": {
            "query": "| savedsearch \"dashboard_linux_logs_table_$select_daemon$\" select_hosts=\"$select_hosts$\" select_daemon=\"$select_daemon$\" select_log_level=\"$select_log_level$\""
        },
        "name": "dashboard_linux_logs_table"
    }
}

When I change the drop-down value (token: select_daemon), the table does pick up the right saved search. The only problem is that the "| fields _time,program" part of the saved search's query is ignored. Still looking for a solution.
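One possible workaround, sketched under the assumption that the field filtering is simply lost when the saved search is expanded: apply the fields clause in the dataSource query itself, after the savedsearch call, so it runs regardless of what the saved search returns:

"query": "| savedsearch \"dashboard_linux_logs_table_$select_daemon$\" select_hosts=\"$select_hosts$\" select_daemon=\"$select_daemon$\" select_log_level=\"$select_log_level$\" | fields _time, program"

This doesn't explain why the clause inside the saved search is ignored, but it keeps the table limited to the intended columns.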
@SN1 The procedure was:
- Deploy Splunk Enterprise to the new server, using the same version you have on the existing server.
- Tar the entire $SPLUNK_HOME/etc folder on the existing Splunk Enterprise Security server. I recommend stopping the Splunk service first, just to avoid any changes from users while you copy.
- Stop the Splunk service on the new server.
- Copy the tar file to the new server and extract it into $SPLUNK_HOME/etc.
- Stop the Splunk service on the current Splunk Enterprise server.
- Copy the bundle file from $SPLUNK_HOME/var/run on the existing server to the same path on the new one. The bundle file should look something like servername-1570745614.bundle.
- Start the Splunk service on the new server.
- Monitor for any error messages or signs of missing configuration.
Before you run this procedure, stop the existing Splunk server and run a full backup of etc, so that you have the latest configuration and apps; if anything goes wrong, you can recover to the point where everything was working properly in the current Splunk environment.
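A minimal shell sketch of the copy steps above (hostnames are illustrative; adjust $SPLUNK_HOME for your install):

# On the existing (old) server, with Splunk stopped:
tar -czf /tmp/splunk_etc.tar.gz -C $SPLUNK_HOME etc

# Ship the archive and the bundle file to the new server:
scp /tmp/splunk_etc.tar.gz new-server:/tmp/
scp $SPLUNK_HOME/var/run/servername-1570745614.bundle new-server:$SPLUNK_HOME/var/run/

# On the new server, with Splunk stopped:
tar -xzf /tmp/splunk_etc.tar.gz -C $SPLUNK_HOME
$SPLUNK_HOME/bin/splunk start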
@SN1 It is pretty easy. If you're speaking of a non-clustered SH, you only have to copy the Enterprise Security apps from the old SH to the new one. The easiest way is to install the same Splunk and ES versions on the new SH and copy the entire Splunk etc folder from the old SH to the new one; at the end you can upgrade Splunk. Copy the entire $SPLUNK_HOME/etc/* and $SPLUNK_HOME/var/run directory space, then restart Splunk. This all presumes that you set up Splunk and ES correctly the first time (i.e. all indexes and summaries are on your indexers).
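If the two search heads can reach each other over SSH, a hedged one-liner version of that copy (run on the new SH, with Splunk stopped on both; old-sh is a placeholder hostname, and /opt/splunk an assumed $SPLUNK_HOME):

rsync -a old-sh:/opt/splunk/etc/ /opt/splunk/etc/
rsync -a old-sh:/opt/splunk/var/run/ /opt/splunk/var/run/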
Hi Splunkers, I have some doubts about SSL use for S2S communication. First, let us restate what is certain:
1. SSL provides a better compression ratio than the default one: 1:8 vs 1:2.
2. SSL compression does NOT affect license usage. In general, NO compression in Splunk affects license usage. This means that if data is X + Y before compression and X after it, the consumed license is X + Y, not X.
3. From a security perspective, if I have multiple Splunk components, the best practice is to encrypt all flows between them. For example, with UF -> HF -> IDX, the best option for security is to encrypt both the UF -> HF flow and the HF -> IDX one.
Now, for a customer we have the following data flow: Log sources -> Intermediate Forwarders -> Heavy Forwarders -> Indexers. I know that we should avoid HFs and IFs when possible but, for various reasons, we need them in this particular environment. Two doubts arise.
Suppose we apply SSL only between IF and HF.
1. Data arrives compressed at the HF. When it leaves the HF for the indexers, is it still compressed? For example, suppose the original data has a total size of 800 MB. SSL exists between IF and HF, so the HF has a tcp-ssl input on port 9997, and SSL compression is applied: the data now has a size of 100 MB and arrives at the HF as 100 MB. When it leaves the HF for the indexers, is it still 100 MB?
2. Suppose now we apply SSL on the entire S2S data flow: between IF and HF, and between HF and the indexers. Besides a better security posture, what other advantages would we gain by going in this direction?
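For reference, a minimal sketch of the settings involved on each hop (not a full SSL setup; certificate paths and the group name are placeholders). Compression is negotiated per connection, so each hop compresses or not according to its own outputs/inputs settings, independently of the previous hop:

# outputs.conf on the sending side of a hop
[tcpout:to_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
useClientSSLCompression = true

# inputs.conf on the receiving side
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem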
OK, so we have 2 search heads and we want to migrate the Enterprise Security app from one search head to the other. How should we do that, step by step, so that we don't face any issues?
I am pretty sure that I left a reply to you yesterday, but I can't find it. Anyway, I already tried what you suggested, thanks. Looking at metrics.log I found a line that I think demonstrates that Splunk does in some way get the data (ev=21) but discards it:

02-19-2025 10:49:29.584 +0100 INFO Metrics - group=per_source_thruput, series="/opt/splunk/etc/apps/adsmart_summary/bin/getcampaigndata.py", kbps=0.436, eps=0.677, kb=13.525, ev=21, avg_age=-3600.000, max_age=-3600
host = splunkidx01
source = /opt/splunk/var/log/splunk/metrics.log
sourcetype = splunkd

Maybe the issue is that avg_age and max_age are negative, so it is something about the timestamps the script produces for the events.
Yes, this is my localhost, so I use the local admin user (the default one) and it has maximum permissions.
@SHEBHADAYANA Did you implement this? In the Advanced Options, configure the throttling settings and select the duration (in seconds) to suppress alerts. Throttling prevents the correlation search from generating duplicate notable events or alerts for the same issue every time it runs. If you apply grouping to one or more fields, throttling is enforced on each unique combination of field values. For example, throttling by host once per day ensures that only one notable event of this type is generated per server per day.
Related: How to suppress Notable Events in ITSI: https://community.splunk.com/t5/Splunk-ITSI/How-to-suppress-Notable-Events-in-ITSI/m-p/610503
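Under the hood, these UI options map to suppression settings in savedsearches.conf; a hedged sketch (the stanza name is a placeholder for your correlation search):

# savedsearches.conf
[My Correlation Search]
alert.suppress = 1
# suppress further alerts for 24 hours...
alert.suppress.period = 24h
# ...per unique value of these fields
alert.suppress.fields = host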
Hi @seiimonn! I noticed that every time I start AME Events, I get the following error. I appreciate your help. A.

2/20/25 8:40:34.605 AM
127.0.0.1 - splunk-system-user [20/Feb/2025:08:40:34.605 +0100] "GET /servicesNS/nobody/alert_manager_enterprise/messages/ame-index-resilience-default-error HTTP/1.0" 404 177 "-" "splunk-sdk-python/1.7.3" - - - 0ms
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving ale... See more...
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving alerts. What could be the possible reasons for this, and how can we troubleshoot and resolve the issue?"
Hi @KKuser
1. For a user to use the Splunk support portal, should the user be granted access to the support portal? Don't they get the access inherently?
If you have multiple cloud instances within your organisation, these will be covered by individual "entitlements" within the Splunk Support Portal. This means that the operational and organisational contacts can be different for each, and access shouldn't be granted to both unless specifically requested. For example, your IT department might be an operational contact for both, while a user in Department A might be a contact on one and a user in Department B a contact on the other. Users who are configured to use your Splunk instance do not automatically get added to the support portal; this must be done by request to Splunk Support/your account team, or by an existing portal admin in your organisation.
2. The company has 2 different instances of Splunk. Will a dashboard created in one be visible in the other as well? Are the 2 instances independent of each other? Can you paint a picture for me of how they'd be related?
No. If you have 2 different instances of Splunk, dashboards and other knowledge objects/searches created in one will not automatically replicate to the other, as these are two different instances and not part of a Search Head Cluster. Data can be sent to one independently of the other, or you can have data sent to both. In other words, you might find that different data sources are sent to each, but some could be the same.
3. In order for me to know the answers to these questions, what concepts/topics should I know well?
Check out the Splunk Cloud Admin training, which covers how Splunk Cloud works, from getting data in to configuring users, roles, and apps.
Please let me know how you get on, and consider accepting this answer or adding karma if it has helped.
Regards
Will
Hi @anooshac
Please be aware that if you are using Splunk >9.0.1 then you should use the V2 endpoints rather than the un-versioned endpoints mentioned in older answers, which are now deprecated.
Here is a sample cURL request to run a search and export the results as a CSV using Splunk's /services/search/v2/jobs/export endpoint:

curl -k -u "admin:yourpassword" \
  -X POST \
  https://splunk_server:8089/services/search/v2/jobs/export \
  -d search="search index=_internal | head 10" \
  -d output_mode=csv

Explanation of the command:
- curl: command-line tool used for making HTTP requests.
- -k: allows connections to SSL sites without certificate verification (useful if Splunk uses self-signed certificates). Omit this if you have a valid SSL certificate on your management port.
- -u "admin:yourpassword": specifies the authentication credentials (username:password). Replace admin and yourpassword with your actual Splunk credentials. Alternatively, you can use token-based auth.
- -X POST: specifies that this is a POST request, as required by Splunk's search export API.
- https://splunk_server:8089/services/search/v2/jobs/export: the Splunk API endpoint for executing a search and exporting results immediately. Replace splunk_server with the actual Splunk host (e.g., splunk.yourcompany.com). 8089 is the default Splunk REST API management port.
- -d search="search index=_internal | head 10": the actual Splunk search query. index=_internal queries the internal Splunk logs; head 10 limits the output to the first 10 results.
- -d output_mode=csv: specifies that the output format should be CSV (other options include JSON and XML).
Please let me know how you get on, and consider accepting this answer or adding karma if it has helped.
Regards
Will
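For the token-based alternative mentioned above, a hedged sketch (the token value is a placeholder; create one under Settings > Tokens in Splunk Web):

curl -k \
  -H "Authorization: Bearer <your-auth-token>" \
  -X POST \
  https://splunk_server:8089/services/search/v2/jobs/export \
  -d search="search index=_internal | head 10" \
  -d output_mode=csv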
Right now we have an online solution: in central observability, an OTel Collector sends data to Splunk. Now I'm looking for an offline solution for cases where internet connectivity may be lost for 14+ hours. The on-premises cluster should send to both the FDR (factory data room) and central observability. How can something like the above be achieved offline, when there is no internet for an extended period?
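One pattern that may fit, sketched under assumptions (OpenTelemetry Collector with the contrib file_storage extension; the endpoint and directory are placeholders): enable the exporter's persistent sending queue so data buffers to disk during the outage and drains when connectivity returns. Queue size and retry limits must be tuned to cover 14+ hours of your ingest rate:

receivers:
  otlp:
    protocols:
      grpc:

extensions:
  file_storage:
    directory: /var/lib/otelcol/buffer

exporters:
  otlphttp:
    endpoint: https://central-observability.example.com:4318
    sending_queue:
      enabled: true
      storage: file_storage   # persist the queue to disk via the extension

service:
  extensions: [file_storage]
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]

Sending to both the FDR and central observability would then just be a second exporter listed on the same pipeline.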
You must be running in FIPS mode. The settings will work fine in non-FIPS mode, but in FIPS mode, if you add even the one setting 'sslRootCAPath', your kvstore will fail. The only way to get kvstore to run in FIPS mode with your own certs is to rename your certs to the default file paths, which are $SPLUNK_HOME/etc/auth/server.pem and cacert.pem. You will also have to append 'appsCA.pem' to the end of your cacert.pem so that the Manage Apps UI works. Note that the kvstore is only needed on search heads, so you could disable or ignore it on non-SHs. The other problem you will still have is that you cannot set 'requireClientCert=true' in FIPS mode; possibly also in non-FIPS, but I have to confirm that again.
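A hedged shell sketch of that workaround (my_server.pem and my_ca.pem are placeholders for your own cert files, and appsCA.pem is assumed to be in its usual $SPLUNK_HOME/etc/auth location; back up the originals first):

# Replace the defaults with your certs, keeping the default file names
cp my_server.pem $SPLUNK_HOME/etc/auth/server.pem
cp my_ca.pem     $SPLUNK_HOME/etc/auth/cacert.pem

# Append Splunk's apps CA so the Manage Apps UI keeps working
cat $SPLUNK_HOME/etc/auth/appsCA.pem >> $SPLUNK_HOME/etc/auth/cacert.pem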
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
@anooshac Please check these documentation links for a solution:
https://community.splunk.com/t5/Installation/What-is-a-good-way-to-export-data-from-Splunk-via-rest-API/m-p/551542
https://docs.splunk.com/Documentation/Splunk/9.4.0/Search/ExportdatausingRESTAPI
https://stackoverflow.com/questions/67525334/splunk-data-export-using-api
@anooshac Have you seen this post? export search results using curl - Splunk Community
@KKuser
Scenario 1: Independent Instances
If the two Splunk instances are completely independent:
- Data ingested into one instance will not be visible in the other.
- Dashboards, reports, alerts, and searches are all tied to the data present in their respective instances.
- A dashboard created in Instance A will not be visible in Instance B unless manually exported and imported.
Use case: this setup is common when different teams or departments need isolated environments, or when there are compliance or security boundaries.
Scenario 2: Connected Instances (Search Head Clustering)
If the instances are connected:
- Search Head Clustering: if both instances are part of a search head cluster, dashboards can be shared across the cluster.
- You could manually or automatically sync apps, including dashboards, between the instances.
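For the manual export/import path in Scenario 1, a hedged sketch (the app and dashboard names are placeholders): a Simple XML dashboard is just a file under its owning app, so copying it to the same path on the other instance and restarting (or reloading) Splunk there makes it available:

scp $SPLUNK_HOME/etc/apps/search/local/data/ui/views/my_dashboard.xml \
    instance-b:/opt/splunk/etc/apps/search/local/data/ui/views/

Note that sharing/permissions metadata lives separately in the app's metadata/local.meta, so permissions may need to be recreated on the target instance.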