All Topics


Hi, please help me build cron expressions. Thanks in advance.
- Alert runs every 15 min from 8am to 6pm, every day
- Alert runs every 15 min from 4am to 6pm, weekdays only
- Alert runs every 15 min from 8am to 6pm, weekdays only
- Alert runs every 15 min from 9am to 5pm, weekdays only
- Alert runs every 15 min from 8am to 6:45pm, weekdays only
- Alert runs every 15 min from 11:01pm to 6:59pm, every day
- Alert runs every 15 min from 12am to 12:59am and 6am to 6:59am, every day
- Alert runs every 15 min from 8am to 8:59am and 1pm to 1:59pm, every day
- Alert runs every 15 min from 10am to 6:59am, every day
- Alert runs every 15 min from 7am to 11:59pm, every day
- Alert runs every 15 min from 8am to 10:59am, every day
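Assuming these are Splunk scheduled-alert cron schedules (standard five fields: minute, hour, day-of-month, month, day-of-week, with 1-5 meaning Monday through Friday), a sketch of expressions for several of the schedules above. Note that an hour range like 8-18 fires through 18:45; if the last run must stop earlier, trim the hour list.

```
*/15 8-18 * * *       # every 15 min, 08:00-18:45, every day
*/15 4-18 * * 1-5     # every 15 min, 04:00-18:45, weekdays
*/15 9-17 * * 1-5     # every 15 min, 09:00-17:45, weekdays
*/15 0,6 * * *        # every 15 min, 12:00-12:59am and 6:00-6:59am, every day
*/15 8,13 * * *       # every 15 min, 8:00-8:59am and 1:00-1:59pm, every day
*/15 0-6,10-23 * * *  # every 15 min, 10:00am through 6:59am next day (wraps midnight)
*/15 7-23 * * *       # every 15 min, 07:00-23:45, every day
*/15 8-10 * * *       # every 15 min, 08:00-10:59am, every day
```

Schedules with an odd start minute (e.g. 11:01pm) need the minutes spelled out, e.g. `1,16,31,46` in the minute field combined with the matching hours.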
Hi, I am not receiving data from the Universal Forwarders. What could the possible reasons be? Thanks
I want to be able to perform a search across a list of internal IPs making http/https GET and POST requests to external sources AFTER, or at the SAME TIME as, a specific external IP making inbound connection attempts of any kind to them.
Hi, I am modifying the logging in my application (Java Spring Boot) to include a key/value pair list and a JSON string of relevant data I want to log, in order to trigger Splunk's Automatic Field Extraction.

Key/value pair list: [key1=val1, key2=val2, key3=val3, etc]
JSON string: { "field1" : "value1", "field2" : null }

Can anyone tell me how to generate some dummy events like this, so I can test that Automatic Field Extraction is indeed extracting each KEY from the list and assigning the correct VALUE? Similarly for the JSON string, if that's possible. If not, I suppose I can just use spath.
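A sketch for producing dummy events at search time with `makeresults` (no real data needed; the literal strings mirror the formats in the question), then checking what the extraction pulls out. Run each pipeline separately:

```
| makeresults
| eval _raw="key1=val1, key2=val2, key3=val3"
| extract pairdelim="," kvdelim="="

| makeresults
| eval _raw="{ \"field1\" : \"value1\", \"field2\" : null }"
| spath input=_raw
```

This exercises search-time extraction on a synthetic `_raw` only; to test the end-to-end index-time path, uploading a small sample file through Add Data would be closer to production behavior.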
I am using the Splunk DB Connect app 3.6.0. When I first installed it, it ran fine and dbxquery was very fast against the same MySQL database. But I don't know why dbxquery has since become very slow. The database itself is fine, because I can query the data very quickly by other means, but through DB Connect it spends a long time waiting. I checked the job inspector and found that the "dispatch.evaluate.dbxquery" phase takes a very long time on my search head, always 48.13 seconds. I don't know what causes this and I want to know how to solve it.
We have logs where the first few lines need to be omitted from ingesting. We only need to on-board the events that start with a date/time in the following format: "%m/%d/%Y@%H:%M". Appreciate all ideas and suggestions. Here is a log example (there are also empty lines before the first "#-----------------------------------------" and after the last "#-----------------------------------------"):

#-----------------------------------------
#DATE CREATED:  11/02/2021@04:16
#SUBJECT:       REPORT ON THE GENERAL STATUS OF AUTOSYS JOBS
#ENVIRONMENT:   CBA
#-----------------------------------------
11/02/2021@04:16,CBA,OTHER,CBA_CLIENT_REPORT_BOX,OI
11/02/2021@04:16,CBA,OTHER,CBA_copy_file_job,OI
11/02/2021@04:16,CBA,OTHER,CBA_ABC_SCHEDULER_BOX,OI
11/02/2021@04:16,CBA,OTHER,CBA_ABC_REPORT_BOX,OI
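A sketch of one approach, assuming the data can be routed through a nullQueue transform on the parsing tier. The sourcetype name `autosys_report` is hypothetical; the negative lookahead discards every event that does not begin with the `%m/%d/%Y@%H:%M` pattern:

```
# props.conf
[autosys_report]
TRANSFORMS-keep_dated = drop_non_dated

# transforms.conf
[drop_non_dated]
REGEX = ^(?!\d{2}/\d{2}/\d{4}@\d{2}:\d{2})
DEST_KEY = queue
FORMAT = nullQueue
```

These settings must live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.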
The problem is a simple one: I have a base search from which I want to exclude a subset based on criteria determined in a different dataset, but I cannot find an efficient way to do this. So far, what I am doing is

basesearch
| join joinkey
    [ set diff
        [ basesearch | stats count by joinkey | fields - count ]
        [ criteria | stats count by joinkey | fields - count ] ]

While the logic works, it feels immensely inefficient. Even setting aside that set operations are themselves expensive, basesearch is performed twice with no change. What is the proper way of doing this simple exclusion?
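One common pattern (a sketch; it assumes the excluded set fits within subsearch limits, roughly 10,000 results by default) is to feed the criteria search into the base search as a NOT subsearch, so `basesearch` runs only once:

```
basesearch NOT
    [ criteria
      | stats count by joinkey
      | fields joinkey ]
```

The subsearch expands to `NOT (joinkey=... OR joinkey=...)` before the base search executes. For larger exclusion sets, materializing the criteria into a lookup and filtering with `lookup`/`where isnull(...)` avoids the subsearch result limit.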
The doc mentions IT Essentials Work. I tried to download IT Essentials Work, but I get an error message during installation: "App Installation failed" / "invalid app contents: archive contains more than one immediate subdirectory: and DA-ITSI-DATABASE". I still have the "Splunk Add-on for Amazon Web Services" version 5.1.0 running on a heavy forwarder collecting the data and sending it to my indexes on Splunk Cloud... (5.2.0 is broken)
It seems there are many ways to edit panel styles, but is it possible to edit the dashboard's actual title section? That area at the very top of the board seems very static. I'm looking to add a background image to that section (not the main body section of the dashboard).
We have logs where the first few lines start with "#", and we don't need to ingest those lines. We tried different methods that didn't work. Would appreciate help/ideas from Splunkers.

1st idea: use PREAMBLE_REGEX = ^#.* in props.conf on the heavy forwarders where the data is parsed.

2nd idea: use TRANSFORMS-null = setnull in props.conf, with the following transforms.conf on the heavy forwarders where the data is parsed:

[setnull]
REGEX = ^#.*
DEST_KEY = queue
FORMAT = nullQueue

Example log:

#-----------------------------------------
#DATE CREATED:  11/02/2021@04:16
#SUBJECT:       REPORT ON THE GENERAL STATUS OF AUTOSYS JOBS
#ENVIRONMENT:   CBA
#-----------------------------------------
11/02/2021@04:16,CBA,OTHER,CBA_CLIENT_REPORT_BOX,OI
11/02/2021@04:16,CBA,OTHER,CBA_copy_file_job,OI
11/02/2021@04:16,CBA,OTHER,CBA_ABC_SCHEDULER_BOX,OI
11/02/2021@04:16,CBA,OTHER,CBA_ABC_REPORT_BOX,OI
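For what it's worth, the transform only fires if a props.conf stanza that actually matches the incoming data (by sourcetype, source, or host) references it, and only on the first full Splunk instance that parses the stream. A minimal pairing sketch, with a hypothetical sourcetype name:

```
# props.conf
[autosys_report]
TRANSFORMS-setnull = setnull

# transforms.conf
[setnull]
REGEX = ^#
DEST_KEY = queue
FORMAT = nullQueue
```

One caveat: transforms run against whole events after line merging, so if the report is being merged into one multi-line event, `^#` only matches the event's first line; line-level filtering requires LINE_BREAKER/SHOULD_LINEMERGE settings that make each line its own event first.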
Hello, has anyone tried configuring alerts to play an audio file whenever a condition is met? I have looked for an app or add-on on Splunkbase but haven't found any. Please help me with your thoughts. Thanks
Hi, I am using Splunk to monitor HTTP status code responses from a server, and I want to be alerted when a request to the server takes too long to return to the client. The client I am using has a timeout window of 55 seconds, so when the server takes more than 55 seconds to respond, the client raises a timeout error. I want to be alerted when the percentage of requests taking more than 55 seconds exceeds 10%.
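A sketch of an alert search, assuming each event carries a numeric request-duration field; the index, sourcetype, and the field name `response_time` (in seconds) are hypothetical placeholders:

```
index=web sourcetype=access_combined
| eval slow=if(response_time > 55, 1, 0)
| stats sum(slow) as slow_count count as total
| eval slow_pct=round(100 * slow_count / total, 2)
| where slow_pct > 10
```

Scheduled with a trigger condition of "number of results > 0", this fires only when more than 10% of the requests in the search window exceeded 55 seconds.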
Hi, I am using Splunk to monitor HTTP status code responses from a server, and I want to be alerted when the percentage of 504 or 500 responses exceeds 10%.
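Similarly to the question above, a sketch assuming the HTTP status lives in a field called `status` (index, sourcetype, and field names are placeholders):

```
index=web sourcetype=access_combined
| eval error=if(status=500 OR status=504, 1, 0)
| stats sum(error) as error_count count as total
| eval error_pct=round(100 * error_count / total, 2)
| where error_pct > 10
```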
Hello all, kindly help with a regex. I am seeing the messages below in the splunkd logs. Although the values are actually being extracted properly, these messages are annoying and I want to get rid of them. I need urgent help constructing a better regex to avoid them. I have increased MATCH_LIMIT in transforms.conf, but I am still seeing these messages.

11-17-2021 17:12:34.927 +1100 ERROR Regex - Failed in pcre_exec: Error PCRE_ERROR_MATCHLIMIT for regex
11-17-2021 17:12:34.927 +1100 WARN regexExtractionProcessor - Regular expression for stanza security_index exceeded configured PCRE match limit. One or more fields might not have their values extracted, which can lead to incorrect search results. Fix the regular expression to improve search performance or increase the MATCH_LIMIT in props.conf to include the missing field extractions.

Part of my regex looks like this; I have repeated the same regex for different VM and file combinations. Can someone please help with a better regex?

REGEX=(\VNFCs\"\:\".*(ops|opslb|ntf|ntfsync|telemetry|diagnostic|db|lb).*class\"\:\"(/var/log/auth.log|/var/log/messages.log|/var/log/syslog.log|/var/log/firewall/firewall.log|/var/log/audit/audit.log)\")

followed by

\VNFCs\"\:\".*(ops|opslb|ntf).*class\"\:\"(/opt/function/applicatio.log)\")

etc... My sample events are below. There are more than 10 types of VMs (like ops, db, sync, etc.), more than 300 VMs, and 250 different files, and I made a single regex. I need to create the regex considering these two criteria:
1) The VMs: "xyz001vm002400-ops-vm01", "xyz002vm002400-db-vm01", etc.
2) The log file, identified by the "class" key: "class":"/var/log/syslog.log", "class":"/opt/cat/audit.log", etc.
Sample Event 1: {"VNFType":"xyz","VNFs":"xyz001vm002400","VNFCType":"ops","VNFCs":"xyz001vm002400-ops-vm01","event":{"log_message":"2021-11-18T02:45:01.085777+10:00, xyz001vm002400, xyz001vm002400-ops-vm01, ops, ops, info, cron, xyz001vm002400-ops-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz001vm002400, xyz001vm002400-ops-vm01, ops, ops, info, cron, xyz001vm002400-ops-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz001vm002400, xyz001vm002400-ops-vm01, ops, ops, info, cron, xyz001vm002400-ops-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz001vm002400, xyz001vm002400-ops-vm01, ops, ops, info, cron, xyz001vm002400-ops-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz001vm002400, xyz001vm002400-ops-vm01, ops, ops, info, cron, xyz001vm002400-ops-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz001vm002400, xyz001vm002400-ops-vm01, ops, ops, info, cron, xyz001vm002400-ops-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n","class":"/var/log/syslog.log","log_event_time_stamp":"2021-11-18T02:45:08+10:00"}} Sample Event 2: {"VNFType":"xyz","VNFs":"xyz002vm002400","VNFCType":"db","VNFCs":"xyz002vm002400-db-vm01","event":{"log_message":"2021-11-18T02:45:01.085777+10:00, xyz002vm002400, xyz002vm002400-db-vm01, db, db, info, cron, xyz002vm002400-db-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz002vm002400, xyz002vm002400-db-vm01, db, db, info, cron, xyz002vm002400-db-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz002vm002400, xyz002vm002400-db-vm01, db, db, info, cron, xyz002vm002400-db-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz002vm002400, 
xyz002vm002400-db-vm01, db, db, info, cron, xyz002vm002400-db-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz002vm002400, xyz002vm002400-db-vm01, db, db, info, cron, xyz002vm002400-db-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n2021-11-18T02:45:01.085777+10:00, xyz002vm002400, xyz002vm002400-db-vm01, db, db, info, cron, xyz002vm002400-db-vm01, CROND[16311]:, (root) CMD (/usr/sbin/KillIdleSessions)\n","class":"/opt/cat/audit.log","log_event_time_stamp":"2021-11-18T02:45:08+10:00"}} @chrisyounger @harsmarvania57  - Any help is much appreciated.
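PCRE match-limit errors like the ones above usually come from greedy, unanchored `.*` terms backtracking across very long events such as these samples. A sketch of a tighter pattern, under the assumption that `VNFCs` and `class` are JSON string values as shown: `[^"]*` confines each wildcard to a single quoted value, the alternation is pinned between the literal hyphens that surround the VM type, and a lazy `.*?` bridges the gap to the `class` key, all of which sharply reduces backtracking (the file list is abbreviated):

```
# transforms.conf (sketch)
REGEX = "VNFCs":"[^"]*-(ops|opslb|ntf|ntfsync|telemetry|diagnostic|db|lb)-[^"]*".*?"class":"(/var/log/(?:auth|messages|syslog)\.log|/var/log/firewall/firewall\.log|/var/log/audit/audit\.log)"
```

Whether this fully eliminates the MATCHLIMIT warnings depends on event length, but removing the nested greedy `.*` around the alternations is normally the decisive change.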
When pushing the Splunk Add-on for Windows using a deployment server, the inputs.conf files on my clients are not updating. The clients are regularly checking in with the deployment server, and Splunk has been restarted on both the deployment server and the client servers several times. This is creating an issue because updates to the inputs.conf stored in the local folder are not being propagated to my clients. If anyone has any further troubleshooting ideas to get the clients to reliably sync to the proper inputs.conf from the deployment server, please let me know.

If it matters: the specific changes (simply enabling the scripted inputs below by changing disabled=1 to 0) were made to them. The timestamp on inputs.conf on the client is much older than the changes, and it is still left at disabled=1.

###### Scripted Input (See also wmi.conf)
[script://.\bin\win_listening_ports.bat]
disabled = 0
## Run once per hour
interval = 3600
sourcetype = Script:ListeningPorts

[script://.\bin\win_installed_apps.bat]
disabled = 0
## Run once per day
interval = 86400
A while ago I set up the Monitoring Console. However, some screens work well and others are just blank. For example, CPU and disk information I can access fine. But if I go to Monitoring Console -> Search -> Search activity: Instance to see the skipped searches, it's all blank. Is there something else I need to do? I am on 1 SH, a 3-indexer cluster, + 1 MN.
So there is a query on my Splunk Cloud instance, shown below:

index=windows EventCode=4688
    [| inputlookup "lotl_commands.csv"
    | rename suscmd as search ]
    NOT Account_Name=*$
    NOT (net "use ")
    NOT InteractionScripter.NET.exe
    NOT (Account_Name=itreports sqlcmd.exe)
    NOT (Account_Name=SRV_EtlProd winscp.exe OR MSSQLSERVER OR SQLSERVERAGENT)
    NOT (Account_Name=SRV_EDW_SQLEngine sqlcmd.exe conhost.exe OR sqldiag.exe)
    NOT (Creator_Process_Name="C:\\Windows\\System32\\net.exe" New_Process_Name="C:\\Windows\\System32\\conhost.exe")
    NOT (New_Process_Name=C:\\Windows\\System32\\conhost.exe)
    NOT (Creator_Process_Name="*\\MicroStrategy Services.exe" New_Process_Name=C:\\Windows\\System32\\cscript.exe)
    NOT Account_Name="SVCBTSCAN" `comment(INC0036469)`
    NOT Account_Name="SVCBTFUNC" `comment(INC0036469)`
    NOT Account_Name="SRV_Vulscanning" `comment(INC0036582)`
    NOT (Account_Name="SRV_Lansweep_4Server" csc.exe)
| table _time EventCode ComputerName Account_Name Creator_Process_Name New_Process_Name Process_Command_Line
| sort _time

Whenever it runs, it triggers an alert for these file paths:

C:\Program Files (x86)\MySQL\MySQL Notifier 1.1\MySQLNotifier.exe
C:\Windows\SysWOW64\schtasks.exe

These paths are running legitimately, and I am trying to exempt them from matching again so another alert does not trigger. Under the 10th line, the one that starts with "NOT (Creator_Process_Name=", I created another line like it and inserted both file paths, but when I run a 24-hour search the result still comes up, which means the file path is still not being exempted. Please, I need help exempting that file path from the search. Thanks.
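A sketch of the two exclusion clauses, following the escaping convention the query already uses (backslashes doubled, the whole value quoted because the path contains spaces and parentheses); whether `New_Process_Name` or `Creator_Process_Name` is the right field depends on which one carried the path in the triggering event:

```
NOT (New_Process_Name="C:\\Program Files (x86)\\MySQL\\MySQL Notifier 1.1\\MySQLNotifier.exe")
NOT (Creator_Process_Name="C:\\Program Files (x86)\\MySQL\\MySQL Notifier 1.1\\MySQLNotifier.exe" New_Process_Name="C:\\Windows\\SysWOW64\\schtasks.exe")
```

One caveat: the `inputlookup` subsearch injects its terms as raw search keywords, which can match `Process_Command_Line`; if that is what matched, a NOT on the process-name fields alone will not suppress the event.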
I tried upgrading the Splunk Add-on for Amazon Web Services on my heavy forwarder 3 times, and each time I have the same issue when upgrading to 5.2.0:

- Started with version 5.0.3; disabled all inputs - no pycache dir
- Upgraded to 5.2.0 and restarted
- No visible errors until I try to launch - the Account tab spins forever and never opens - all other tabs fine
- Rolled back to the tar backup of 5.0.3
- Tried again - same result
- Rolled back to the tar backup of 5.0.3
- Downloaded 5.0.4 and upgraded, restarted - all tabs in the add-on fine
- Downloaded 5.1.0 and upgraded, restarted - all tabs in the add-on fine
- Downloaded 5.2.0 and upgraded, restarted - same issues as before...

Has anyone else encountered this and knows the fix? I have opened a ticket with support but am still waiting...
|eval SNOW_Description=case(EMGC_ADMINSERVER_Status!="k1","Java Process EMGC_ADMINSERVER data not available in splunk on host", EMGC_ORACLE_Status!="k2","Java Process EMGC_ORACLE data not available in splunk on host")

Here I am trying to use multiple fields inside a case statement, but I am not getting the correct output. How can this be achieved?
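Note that case() returns only the value for the first condition that evaluates to true. If both processes can be missing at once and both messages are wanted, a sketch using if() plus mvappend() (field names taken from the query above):

```
| eval msg1=if(EMGC_ADMINSERVER_Status!="k1", "Java Process EMGC_ADMINSERVER data not available in splunk on host", null())
| eval msg2=if(EMGC_ORACLE_Status!="k2", "Java Process EMGC_ORACLE data not available in splunk on host", null())
| eval SNOW_Description=mvappend(msg1, msg2)
```

mvappend() skips null arguments, so SNOW_Description ends up with one value per failed check, or stays null when both statuses are healthy.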
I have a Splunk query:

index=my_index cf_app_name=$app_name$ msg!="*Hikari*" $log_type$ | sort -_time | table msg

It populates Splunk with results. Now, the msg field has a log type of INFO, ERROR, or WARNING. Example:

2021-11-17 15:03:34.921 INFO 22 --- [ taskExecutor-1] c.c.p.r.e.EventService : Event sent to event ID: 2111 - REPRICING has finished
2021-11-16 22:23:54.905 ERROR 22 --- [ taskExecutor-1] c.c.p.r.service.SftpService : Could not delete file: /-/PCS.P.KSZ4750J.TRIG.FILE - 4: Failure
2021-11-16 22:23:54.905 WARNING 22 --- [ taskExecutor-1] c.c.p.r.service.SftpService : Could not delete file: /-/PCS.P.KSZ4750J.TRIG.FILE - 4: Failure

My goal is to color the log type within "msg": green if it's INFO, red if it's ERROR, and yellow if it's WARNING. I don't want to color the entire msg field; just the words INFO, ERROR and WARNING should be turned those specific colors. @scelikok @somesoni2
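Splunk's built-in table formatting colors whole cells, not substrings, so coloring just one word inside msg isn't supported without custom dashboard CSS/JS. A common workaround (a sketch) is to pull the level out into its own column and color that column instead:

```
index=my_index cf_app_name=$app_name$ msg!="*Hikari*" $log_type$
| sort -_time
| rex field=msg "^\S+\s+\S+\s+(?<level>INFO|ERROR|WARNING)\b"
| table level msg
```

In the dashboard's Simple XML, a `<format type="color" field="level">` element with a map-type colorPalette can then assign green to INFO, red to ERROR, and yellow to WARNING for that column.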