All Topics

I need help with the below Splunk query:

index=XXX_XXX_XXX | eval job_status=if( 'MSGTXT' = "*ABEND*","ko","ok") | where job_status="ko"

If I change it to job_status="ok" it works, but not for the above condition. I'd appreciate any suggestions. Regards
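In eval, the = operator is an exact string comparison; it does not expand wildcards, so 'MSGTXT' = "*ABEND*" only matches the literal text *ABEND*. A minimal sketch of the usual fix, using match() to apply a regex instead (assuming MSGTXT is the extracted field):

index=XXX_XXX_XXX
| eval job_status=if(match('MSGTXT', "ABEND"), "ko", "ok")
| where job_status="ko"

like('MSGTXT', "%ABEND%") would also work if SQL-style wildcards are preferred.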
Receiving a "Search auto-canceled" error while executing a one-month episode review. Please let us know if there is any quick solution.
Hello Splunkers, after I upgraded to version 9.4 the KV store does not start. I generated a new certificate by renaming server.pem and restarting Splunk, and now I see the following error in mongod.log:

[conn937] SSL peer certificate validation failed: self signed certificate in certificate chain
NETWORK [conn937] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:38268 (connection id: 937)

Does anyone have any idea what could be missing? I appreciate your input. Thank you, Moh
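The mongod error suggests the KV store is validating the regenerated server.pem against a CA bundle that does not contain its signer. As a hedged first check (the paths shown are the defaults, not necessarily your values), compare the [sslConfig] settings in server.conf:

[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem

If sslRootCAPath points at a custom CA while the regenerated server.pem was signed by Splunk's default CA (or vice versa), chain validation fails exactly like this.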
I have a lookup table with a bunch of IP addresses (ipaddress.csv) and a blank column called hostname. I would like to search in Splunk to find which hostname each IP address has. I can find the hostnames in index=fs sourcetype=inventory; I'm just having a hard time with the query logic of using the lookup table IPs to output a table in Splunk with their corresponding hostnames. Ideally I want to output the results in a table and write the hostnames back to the lookup table in the hostname column. Any help would be greatly appreciated! Thank you
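A hedged sketch, assuming the CSV column is named ipaddress and the inventory events carry ipaddress and hostname fields (rename to match your actual field names):

index=fs sourcetype=inventory [| inputlookup ipaddress.csv | fields ipaddress ]
| stats latest(hostname) as hostname by ipaddress
| outputlookup ipaddress.csv

The subsearch expands to (ipaddress=a OR ipaddress=b OR ...), so only events for IPs in the lookup are searched; outputlookup then overwrites the CSV with the hostname column filled in.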
Hello, I'm looking for some guidance on "Auto Retry" for HTTP / Browser tests. I have a scheduled test running every 30 minutes (9:00, 9:30). When the test fails at 9:00 I see the result updated as "Failed (Auto Retry)", but the next run occurs only at 9:30 a.m. Is this expected? (Does Auto Retry kick in only at the scheduled run time?)
At the moment our tiny indexer has very little disk space, and _introspection consumes roughly GB of storage a day. Is there a way to minimize the space consumed by this index besides making the retention very short?
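A hedged sketch in indexes.conf on the indexer: cap the index by total size as well as by age, so it self-prunes on whichever limit hits first (the numbers are placeholders):

[_introspection]
maxTotalDataSizeMB = 2000
frozenTimePeriodInSecs = 604800

Reducing what gets collected is the other lever; the introspection generator stanzas in server.conf (e.g. [introspection:generator:disk_objects]) can be disabled if you don't need those metrics.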
Is it possible to connect Azure storage to a Splunk Cloud instance? Our client wants to store data from their Splunk Cloud instance in Azure to eliminate their Splunk Cloud storage overage.
I am trying to install Splunk_TA_nix on my UFs. I am in an air-gapped environment, so I can't copy errors and paste them here. I followed the steps below:

cd $SPLUNK_HOME/etc/apps/
tar xzvf $TMP/Splunk_TA_nix-4.7.0-156739.tgz
mkdir $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local
cp $SPLUNK_HOME/etc/apps/Splunk_TA_nix/default/inputs.conf $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/.
vi $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
chown -R splunkfwd:splunkfwd $SPLUNK_HOME/etc/apps/Splunk_TA_nix

and restarted Splunk. I was able to get it working on 2 machines, but on the next couple of machines I am seeing:

-0500 ERROR Configwatcher [32904 SplunkConfigChangeWatcherThread] - File =/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/splunk_TA_nix/default/app.conf not available in baseline directory
-0500 ERROR Configwatcher [32904 SplunkConfigChangeWatcherThread] - Unable to log the changes for path=/opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/app.conf

There are similar errors for other file names as well, like ._tags.conf and eventtypes.conf. It seems like a permission issue, but I have compared permissions on the add-on folder, and all files/dirs seem to be just like the other UFs where the same add-on is working. Any help would be appreciated.
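A hedged guess based on the ._tags.conf in the errors: the tarball appears to carry macOS AppleDouble metadata files (._*), which Configwatcher then tries to track. It is also worth noting the baseline path in the error says splunk_TA_nix while the app directory is Splunk_TA_nix, so a directory-name case mismatch could produce the same symptom. A sketch of the cleanup:

find $SPLUNK_HOME/etc/apps/Splunk_TA_nix -name "._*" -delete
$SPLUNK_HOME/bin/splunk restart

Restarting lets the forwarder re-take its configuration baseline without the stray files.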
Hi, I wanted to test some possibilities for indexing data using TLS/SSL certificates.
1. I configured TLS only on the indexer, not on the heavy forwarder, and data stopped indexing. Why? The same happened in the opposite direction.
2. Is it possible to configure TLS/SSL certificates on a universal forwarder and make a connection with the indexer? Will it work?
3. Can we index data using two different ports, for example 9997 without TLS and 9998 with TLS?
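On question 1: both ends of a connection must agree, since a plain splunktcp listener can't complete a TLS handshake and vice versa, which is why indexing stopped either way. On 2: yes, universal forwarders support TLS in outputs.conf. On 3: yes, two listeners can coexist; a hedged sketch of inputs.conf on the indexer (cert path and password are placeholders):

[splunktcp://9997]
disabled = 0

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/indexer.pem
sslPassword = your_cert_password

Each forwarder's outputs.conf then points at 9997 with a plain [tcpout] group, or at 9998 with clientCert/sslPassword set, but not a mix within the same group.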
Hello, we have a field called client_ip which contains different IP addresses, and the events contain various threat messages. The ask is to exclude the IP addresses associated with threat messages. The IPs are dynamic (different IPs daily) and the threat messages are dynamic too. Normally we would exclude them with NOT (IP) NOT (IP)..., but here there are hundreds of IPs and it would be a huge query. What can be done in this case? My thought: can I create a lookup table that a user manually updates on a daily basis, and exclude the IP addresses present in that lookup, like NOT (lookup table name)? If that is a good approach, please help me with the workaround and the query to use. Thanks in advance.
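Yes, a lookup-driven exclusion is the usual pattern. A hedged sketch, assuming a lookup file named exclude_ips.csv with a single column named client_ip (both names are placeholders):

index=your_index sourcetype=your_sourcetype NOT [| inputlookup exclude_ips.csv | fields client_ip ]

The subsearch expands to (client_ip=a OR client_ip=b OR ...) and the NOT negates it. One caveat: subsearch results are capped (10,000 by default), so this is fine for hundreds of IPs but worth re-checking if the list grows into the tens of thousands.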
The user has been removed from Splunk, and I am unable to locate any orphaned searches, reports, or alerts that were assigned to him.
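A hedged sketch for listing knowledge objects still owned by a removed account over REST (replace deleted_user with the actual username):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="deleted_user"
| table title eai:acl.app eai:acl.owner

If nothing comes back, the objects may already have been reassigned or deleted; the Reassign Knowledge Objects page in Splunk Web shows similar information in the UI.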
Hi, we recently migrated from a standalone Search Head to a clustered one. However, we are having issues running some search commands. For example, this query is not working on the new SH cluster:

sourcetype=dataA index=deptA | where critC > 25

On the old search head this query runs fine and we see the results as expected, but on the SH cluster it doesn't yield anything. I have run the "sourcetype=dataA index=deptA" search by itself, and both see the same events. I am not sure why the search with "| where critC > 25" would work on the standalone SH and not on the cluster. Any help would be appreciated. Thank you.
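A hedged diagnostic: if the critC field isn't extracted on the cluster (e.g. the app carrying its extraction wasn't migrated) or arrives as a string, the numeric comparison silently drops every event. This sketch shows whether the field exists and what type it evaluates to:

sourcetype=dataA index=deptA
| eval critC_type=typeof(critC)
| stats count by critC_type

If critC_type comes back "Invalid", the extraction is missing on the SHC; if the values are strings with stray characters, | where tonumber(critC) > 25 is a workaround.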
Hi. Would it be possible for us to regularly read the statistics from the Protection Group Runs via a Splunk add-on? These fields, which are also available via Helios, are of interest to us: Start Time, End Time, Duration, Status, Sla Status, Snapshot Status, Object Name, Source Name, Group Name, Policy Name, Object Type, Backup Type, System Name, Logical Size Bytes, Data Read Bytes, Data Written Bytes, Organization Name. This would make it much easier for us to create the necessary reports in Splunk. Thank you very much.
Right now I have a table where one process_name repeats across multiple hosts with the same EventID:

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| table Computer, UtcTime, timestampType, activity, Channel, attck, process_name

I want a total count per host and process_name, with all the activity (or target file names) listed under it, e.g. one row per Computer and process_name with columns UtcTime, timestampType, activity (the file list), and count.
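A hedged sketch: stats collapses the rows, values() keeps the distinct activities as a multivalue list, and count gives the per-pair total (keep or drop the latest() column as needed):

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| stats values(activity) as activity latest(UtcTime) as UtcTime count by Computer, process_name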
I want to configure the dashboard I made so that it is displayed here.
Hello all, consider that application X requested onboarding onto Splunk. We created an index for application X, a new role (restricted to the X index), and assigned this role to the X AD group. Likewise for applications Y and Z; we do the same for each. But now the requirement is that applications X, Y, and Z come under an umbrella application 'A', and they want all 'A' team members (probably X, Y, Z combined) to view the X, Y, and Z applications. How can we achieve this? We can't create a single index for all of X, Y, and Z because the logs should not be mixed.
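A hedged sketch in authorize.conf: keep the per-application roles and add an umbrella role that inherits all three, then map the 'A' AD group to it (role and index names are placeholders):

[role_app_x]
srchIndexesAllowed = index_x

[role_app_y]
srchIndexesAllowed = index_y

[role_app_z]
srchIndexesAllowed = index_z

[role_app_a]
importRoles = role_app_x;role_app_y;role_app_z

Inherited roles union their index access, so members of role_app_a can search all three indexes while the individual teams stay restricted to their own.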
We are migrating the Splunk 9.0.3 Search Head from a virtual box to a physical box. Splunk services were up and running on the new physical box, but in Splunk Web I was unable to log in using my authorized credentials, and found the below error in splunkd.log:

01-21-2025 05:18:05.218 -0500 ERROR ExecProcessor [3275615 ExecProcessor] - message from "/apps/splunk/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator
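Hedged first steps: the 503 here is DB Connect failing because the KV store itself didn't come up, so the KV store status and mongod's own log usually name the real cause (after a host migration it is often a server.pem/hostname mismatch or stale KV store data copied from the old box):

$SPLUNK_HOME/bin/splunk show kvstore-status
tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log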
I have a search that works fine, but not in a dashboard!

index=unis
| search *sarch*
| eval name = coalesce(C_Name, PersonName)
| eval "DoorName"=if(sourcetype=="ARX:db", $Door$, $DoorName$)

When I use this in a dashboard, it treats Door and DoorName as tokens, while they are values of those fields. What should I do to make it work in the dashboard? The error I get: "Set token value to render visualization $Door$ $DoorName$". Edit: if I remove all the $ signs it still works the same in search, but it still doesn't work in the dashboard (without any error); it returns results, but the DoorName field is empty.
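In SPL, $...$ is not field syntax; field names in eval are written bare or in single quotes, while dashboards reserve $...$ for tokens (a literal dollar sign in a dashboard search must be escaped as $$). A hedged sketch of the eval as presumably intended:

index=unis *sarch*
| eval name = coalesce(C_Name, PersonName)
| eval DoorName = if(sourcetype=="ARX:db", 'Door', 'DoorName')

If DoorName still comes back empty in the dashboard, check that the Door field actually exists at that point in the pipeline for the ARX:db events (e.g. with | fieldsummary), since if() yields null when the chosen field is absent.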
I want to access the lookup editing app using Python. How can I do that?
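A hedged sketch that reads a lookup over Splunk's standard REST search API with requests; the Lookup Editor app also exposes its own endpoints, but this works against any instance. The host, credentials, and lookup name are placeholders:

import requests

BASE = "https://localhost:8089"  # management port (placeholder)
AUTH = ("admin", "changeme")     # placeholder credentials

# Export the lookup contents as CSV via a one-shot export search
resp = requests.post(
    f"{BASE}/services/search/jobs/export",
    auth=AUTH,
    verify=False,  # only for self-signed certs in a lab
    data={"search": "| inputlookup ipaddress.csv", "output_mode": "csv"},
)
resp.raise_for_status()
print(resp.text)

Writing changes back can go through a search that ends in | outputlookup, posted the same way.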
My requirement: my start time is January 1, 2024 and my end time is January 7, 2024. In addition to placing the start and end times in a multivalue field, each date in the interval should also be included (January 2, 3, 4, 5, and 6, 2024), so the final field content should be January 1 through January 7, 2024. The SPL statement is as follows:

| makeresults
| eval start_date = "2024-01-01", end_date = "2024-01-07"
| eval start_timestamp = strptime(start_date, "%Y-%m-%d")
| eval end_timestamp = strptime(end_date, "%Y-%m-%d")
| eval num_days = round((end_timestamp - start_timestamp) / 86400)
| eval range = mvrange(1, num_days)
| eval intermediate_dates = strftime(relative_time(start_timestamp, "+".tostring(range)."days"), "%Y-%m-%d")
| eval all_dates = mvappend(start_date, intermediate_dates)
| eval all_dates = mvappend(all_dates, end_date)
| fields all_dates
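A hedged note on why the original doesn't produce the intermediate dates: tostring()/relative_time() don't iterate over the multivalue that mvrange() returns, so intermediate_dates collapses instead of expanding. mvmap() (Splunk 8.0+) applies an expression to each value; a sketch:

| makeresults
| eval start_date = "2024-01-01", end_date = "2024-01-07"
| eval start_ts = strptime(start_date, "%Y-%m-%d"), end_ts = strptime(end_date, "%Y-%m-%d")
| eval offsets = mvrange(0, round((end_ts - start_ts) / 86400) + 1)
| eval all_dates = mvmap(offsets, strftime(start_ts + offsets * 86400, "%Y-%m-%d"))
| fields all_dates

mvrange(0, 7) yields 0 through 6, so all seven dates from 2024-01-01 to 2024-01-07 appear once each.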