All Posts


It is not clear what your issue is here, nor what exactly you are doing and what is perhaps not working for you. Can you please provide more details and examples of what you are doing and the results you are getting, and explain why they are not what you expected/wanted?
Hi @kiran_panchavat, thanks for the reply. Is it possible to automate this process? I have a tool which is scheduled to run every hour, so the data needs to be imported every hour.
@TallBear For the Bar Chart panel we have written the option as: <option name="charting.fieldColors">{"status":#00FF00,"date_hour":#FFF700,"count":#00009C}</option> Here status, date_hour and count are the field names of the bar chart. You also have to specify a hexadecimal color code for every field you want to color; you can use any hexadecimal color codes you wish. After adding the option in the source code, click Save to save the changes. NOTE: change the field names in the code to match your own.
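As a fuller sketch, the option above sits inside the <chart> element of the panel's Simple XML. The search and field names below are placeholders, and note that Splunk's documented syntax for charting.fieldColors uses 0xRRGGBB values rather than #RRGGBB:

```xml
<dashboard>
  <row>
    <panel>
      <chart>
        <!-- placeholder search; replace with your own -->
        <search>
          <query>index=main | chart count by status</query>
        </search>
        <!-- map each field/series name to a hex color (0xRRGGBB) -->
        <option name="charting.fieldColors">{"status":0x00FF00,"date_hour":0xFFF700,"count":0x00009C}</option>
      </chart>
    </panel>
  </row>
</dashboard>
```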
@TallBear Green: you can use a hexadecimal color code to change it.
@TallBear If you want to change the color, please use this. In the source code you have to add an option inside the chart tag. For the Column Chart panel we have written the option as: <option name="charting.fieldColors">{"status":#66FF00,"date_hour":#FF0000,"amount":#00009C}</option> Here status, date_hour and amount are the field names of the column chart. You also have to specify a hexadecimal color code for every field you want to color; you can use any hexadecimal color codes you wish.
Charts are coloured by series, i.e. each series has a different colour (until you have lots of series and the colours recycle). In your case, you only have one series, Status, which has two values. To get different colours, you need different series.

| makeresults
| eval zip=split("Test-10264,Production;Test-10262,Production;Test-102123,Production;MGM-1,Development;MGM-2,Development;MGM-3,Development;MGM-4,Development",";")
| mvexpand zip
| table zip _time
```End of sample data```
| rex field=zip "(?<ticket>.+?),(?<Status>.+$)"
| chart count(ticket) as tickets by _time Status
I replaced | fields _time,program with | table _time,program and everything is working as expected. I really appreciate your help @livehybrid!
Hello all, I have been using the rest command | rest /servicesNS/-/MYAPP/saved/searches | table title to list my saved searches. We have now created a dashboard to update this saved search manually, but the scenario is to paste the entire query dynamically and run it, so that when the saved search is updated in the backend, the same is reflected here. Thanks in advance and best regards.
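One way to approach this (a sketch; "my_saved_search" is a placeholder name) is to let Splunk resolve the current definition of the saved search at run time with the savedsearch command, so any backend update is picked up automatically on the next run:

```
| savedsearch "my_saved_search"
```

Alternatively, the rest endpoint you are already using returns the SPL text itself in the search field, which you could display alongside the title:

```
| rest /servicesNS/-/MYAPP/saved/searches
| search title="my_saved_search"
| table title search
```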
Hi, why do we have two fields, yearmonthday and yearmonth, in this query?
I am using the following query to display a result on a dashboard (query with sample data which resembles the data I use):

| makeresults
| eval zip="Test-10264,Production;Test-10262,Production;Test-102123,Production;MGM-1,Development;MGM-2,Development;MGM-3,Development;MGM-4,Development"
| makemv delim=";" zip
| mvexpand zip
| table zip _time
```End of sample data```
| rex field=zip "(?<ticket>.+?),(?<Status>.+$)"
| stats values(ticket) as tickets by Status
| stats count(tickets) as amount by Status

And this is being returned by the visualization: The issue I'm facing is that both columns have the same color, but I want each column to have its own unique color (this doesn't have to be predefined; it would be okay if Splunk itself chose random colors). Thanks in advance! Edit: typo
@livehybrid thank you, your solution almost works. I have saved search "dashboard_linux_logs_table_caddy":

host="$select_hosts$" program="$select_daemon$" priority="$select_log_level$" | fields _time,program

And the Dashboard Studio dataSource:

"dataSources": {
  "ds_dashboard_linux_logs_table": {
    "type": "ds.search",
    "options": {
      "query": "| savedsearch \"dashboard_linux_logs_table_$select_daemon$\" select_hosts=\"$select_hosts$\" select_daemon=\"$select_daemon$\" select_log_level=\"$select_log_level$\""
    },
    "name": "dashboard_linux_logs_table"
  }
}

When changing the drop-down value (token: select_daemon), the table does pick up the right savedsearch. The only problem is that the query part "| fields _time,program" of the savedsearch is ignored. Still looking for a solution.
@SN1 The procedure was:
- Deploy Splunk Enterprise to the new server, using the same version you have on the existing server.
- Tar the entire $SPLUNK_HOME/etc folder from the existing Splunk Enterprise Security server; I recommend stopping the Splunk service first, just to avoid any changes from customers.
- Stop the Splunk service on the new server.
- Copy the tar file to the new server and extract it at $SPLUNK_HOME/etc.
- Stop the Splunk service on the current Splunk Enterprise server.
- Copy the bundle file from $SPLUNK_HOME/var/run on the existing server to the same path on the new one. The bundle file should be something like servername-1570745614.bundle.
- Start the Splunk service on the new server.
- Monitor for any error messages or configuration issues.
Before you run this procedure, stop the existing Splunk server and run a full backup of etc, just to make sure you have the last updated configuration/apps; in case you have any issues, you can recover to the point where everything was working properly on the current Splunk environment.
@SN1 It is pretty easy: if you're speaking of a non-clustered SH, you only have to copy the Enterprise Security apps from the old SH to the new one. The easiest way is to install the same Splunk and ES versions on the new SH, copy the entire Splunk etc folder from the old SH to the new one, and at the end upgrade Splunk. Copy the entire `$SPLUNK_HOME/etc/*` and `$SPLUNK_HOME/var/run` directory space, then restart Splunk. This all presumes that you set up Splunk and ES correctly the first time (i.e. all indexes and summaries are on your indexers).
Hi Splunkers, I have some doubts about SSL use for S2S communication. First, let us state what is certain:
1. SSL provides a better compression ratio than the default one: 1:8 vs 1:2.
2. SSL compression does NOT affect the license. In general, NO compression in Splunk affects the license. This means that if data has dimension X + Y before compression and X after it, the consumed license will be X + Y, not X.
3. From a security perspective, if I have multiple Splunk components, the best way to configure the flows is to encrypt all of them. For example, if I have UF -> HF -> IDX, the best for security is to encrypt both the UF -> HF flow and the HF -> IDX one.
Now, for a customer we have the following data flow: Log sources -> Intermediate Forwarders -> Heavy Forwarders -> Indexers. I know that, when possible, we should avoid HFs and IFs but, for different reasons, we need them in this particular environment. Here, two doubts arise.
Suppose we apply SSL only between IF and HF:
1. Data arrives compressed at the HF. When it leaves the HF and goes to the IDXs, is it still compressed? For example, suppose we have original data with a total dimension of 800 MB. SSL exists between IF and HF, so the HF has a tcp-ssl input on port 9997, and SSL compression is applied: the data has a dimension of 100 MB when it arrives at the HF. When it leaves the HF to go to the IDXs, does it still have a 100 MB dimension?
2. Suppose now we apply SSL on the entire S2S data flow: between IF and HF and between HF and IDXs. In addition to a better security posture, what other advantages would we gain by going in this direction?
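For reference, an SSL-compressed S2S hop between an IF and an HF can be sketched with the following configuration fragments. This is a minimal sketch only: the hostnames, certificate paths and passwords are placeholders, and your certificate setup will differ.

On the HF, a tcp-ssl input in inputs.conf:

```
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate password>
```

On the IF, the matching output group in outputs.conf:

```
[tcpout:hf_group]
server = hf1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <certificate password>
useClientSSLCompression = true
```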
OK, so we have two search heads and we want to migrate the Enterprise Security app from one search head to the other. How should we do that, step by step, so that we don't face any issues?
I am pretty sure that I left a reply to you yesterday, but I cannot find it. Anyway, I already tried what you suggested, thanks. Looking at metrics.log I found a line that I think demonstrates that Splunk in some way gets the data (ev=21) but discards it:

02-19-2025 10:49:29.584 +0100 INFO Metrics - group=per_source_thruput, series="/opt/splunk/etc/apps/adsmart_summary/bin/getcampaigndata.py", kbps=0.436, eps=0.677, kb=13.525, ev=21, avg_age=-3600.000, max_age=-3600
host = splunkidx01 source = /opt/splunk/var/log/splunk/metrics.log sourcetype = splunkd

Maybe the issue is that avg_age and max_age are negative, so it is something about the timestamps that the script produces for the events.
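A negative avg_age of exactly -3600 suggests the events are timestamped one hour in the future, which often points to a timezone mismatch between the script's output and the parsing configuration. A minimal sketch of a fix in props.conf (the sourcetype name, time zone and time format here are assumptions; adjust them to the script's actual output):

```
[getcampaigndata]
TZ = Europe/Rome
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```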
Yes, this is my localhost, so I use the local admin user (the default one) and it has the maximum permissions.
@SHEBHADAYANA  Did you implement this? In the Advanced Options, configure the throttling settings and select the duration (in seconds) to suppress alerts. Throttling prevents this correlation search from generating duplicate notable events or alerts for the same issue every time it runs. If you apply grouping to one or more fields, throttling will be enforced on each unique combination of field values. For example, setting throttling by host once per day ensures that only one notable event of this type is generated per server per day. How can we suppress notable events in Splunk ITSI? https://community.splunk.com/t5/Splunk-ITSI/How-to-suppress-Notable-Events-in-ITSI/m-p/610503 
Hi @seiimonn! I noticed that every time I start AME Events, I get the following error. I appreciate your help. A.

2/20/25 8:40:34.605 AM 127.0.0.1 - splunk-system-user [20/Feb/2025:08:40:34.605 +0100] "GET /servicesNS/nobody/alert_manager_enterprise/messages/ame-index-resilience-default-error HTTP/1.0" 404 177 "-" "splunk-sdk-python/1.7.3" - - - 0ms
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving ale... See more...
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving alerts. What could be the possible reasons for this, and how can we troubleshoot and resolve the issue?"