All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Has anyone gotten Splunk Analytics for Hadoop to work with MapR 6.x? If so, what are you using for the JobTracker node, and on which port? Was any special configuration needed to get it working? Thanks!
Hi, I am trying to set up a generic warning dialog box in Splunk. I want to alert users about certain changes to Splunk, either as a banner or as a pop-up that users can acknowledge.
I have a log file in a table-structured form like this:

Code  send_id  dest_id
AW    96       45
BX    65       78

Now I have to change the IDs in the send_id column to names (e.g. 96 = Alex and 65 = James) and regenerate the log file in the format below:

Code  send_id  dest_id
AW    Alex     45
BX    James    78

How do I extract the field and regenerate the file after changing it?
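A sketch of one possible approach (not from the original post) is to keep the ID-to-name mapping in a lookup file; the file name id_to_name.csv, its columns id and name, and the index/sourcetype are all assumptions here:

```spl
index=mylogs sourcetype=mysource
| lookup id_to_name.csv id AS send_id OUTPUT name AS sender_name
| eval send_id = coalesce(sender_name, send_id)
| fields - sender_name
| outputcsv regenerated_log.csv
```

The coalesce() keeps the original ID for any send_id value that has no entry in the lookup, and outputcsv writes the regenerated table to a file.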
I have a lookup "Consumer_Lookup.csv" (approx. 30 rows):

Consumer  Restricted
A         Y
B         Y
C         N

Search:

index="xyz" | stats count(Status) AS Total by ClientId | table ClientId Total

ClientId  Total
A         500
C         200

My requirement is to find any Consumer in the lookup with Restricted = "Y" that does not appear in the search results. Can you please advise how to proceed? Should I use join or some other alternative? Thanks for your help!!
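A sketch of one way to do this without join, assuming the lookup's Consumer values correspond to the events' ClientId values (that correspondence is implied by the post but not stated):

```spl
| inputlookup Consumer_Lookup.csv
| where Restricted="Y"
| search NOT
    [ search index="xyz"
      | stats count by ClientId
      | rename ClientId AS Consumer
      | fields Consumer ]
| table Consumer
```

The subsearch builds the list of consumers that did appear in the events, and the outer search keeps only the restricted lookup rows not on that list.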
Hi Team, As part of an integration from Splunk ES into a ticketing system, we're trying to monitor the notable_events KV Store with a scheduled search that detects when the status field changes and sends an email when it notices the change. Has anyone else tried this, or could this logic work? I appreciate this is a clunky way to do it, but it would be great to get some ideas.
Hi everyone, Recently we have been trying to connect to a DB2 database on z/OS. I downloaded the JDBC 4.0 drivers and placed them into /opt/splunk/etc/apps/splunk_app_db_connect/drivers/db2jcc4-libs as per the suggestion in this thread: https://community.splunk.com/t5/All-Apps-and-Add-ons/DB-Connect-using-DB2-zOS-invalid-license/td-p/354081 But we still get this error:

[jcc][t4][10509][13454][4.27.25] Connection to the data server failed. The IBM Data Server for JDBC and SQLJ license was invalid or was not activated for the DB2 for z/OS subsystem. If you are connecting directly to the data server and using DB2 Connect Unlimited Edition for System z, perform the activation step by running the activation program in the license activation kit. If you are using any other edition of DB2 Connect, obtain the license file, db2jcc_license_cisuz.jar, from the license activation kit, and follow the installation directions to include the license file in the class path. ERRORCODE=-4230, SQLSTATE=42968

I have tried different versions of the JDBC driver (4.16, 4.25, and 4.27). From my understanding, if we are using a JDBC 4.0 driver, we should not need to provide a license file, correct? I also tried adding a new database type as mentioned in this thread: https://community.splunk.com/t5/Archive/SPLUNK-DB-CONNECT-DB2/m-p/194145

[db2zOS]
displayName = DB2 on zOS
jdbcDriverClass = com.ibm.db2.jcc.DB2Driver
defaultPort = 3750
connectionUrlFormat = jdbc:db2://{0}:{1}/{2}
testQuery = SELECT 1 FROM SYSIBM.SYSDUMMY1;

Still no luck. Any ideas? Cheers, S
Hi, I'm adding a dropdown input to my dashboard. This is my query:

index="prod_super_cc" source="InventorySnapshot"
| spath input=data.InventoryData
| search "{}.NodeId"="*"
| stats values({}.NodeId) as Stores

Or this:

index="prod_super_cc" source="InventorySnapshot"
| spath input=data.InventoryData output=Stores path={}.NodeId
| table Stores
| mvexpand Stores
| stats values(Stores) as Stores

In the end both queries work, with the same result, e.g.:

Stores
1053 1075 1118 1159 1160 1196 1248 1408 1492 2766 2829 2830 3025 3034 3140 3177 3199 3223

But when filling the dropdown, I got all of these lumped into one entry. How can I get a separate dropdown value for each Store? Thanks!
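A sketch of a populating search that returns one row per store; dropdown inputs expect separate result rows, while `stats values()` collapses everything into a single multivalue cell:

```spl
index="prod_super_cc" source="InventorySnapshot"
| spath input=data.InventoryData output=Stores path={}.NodeId
| mvexpand Stores
| stats count by Stores
| fields Stores
| sort Stores
```

Ending with `stats count by Stores` (then dropping the count) deduplicates while keeping each store on its own row, which is what the dropdown needs.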
Hi, I want to generate a new dashboard from the Splunk logs. I want all the fields that are present in the raw data, not only the ones extracted by Splunk. I have this criteria: index=abc ns=xyz app_name=gateway* I want all the fields present in the raw data for this query. Can someone guide me on how to get all the fields? Thanks in advance.
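A sketch of one way to enumerate every field visible at search time; `fieldsummary` reports only extracted fields, so forcing key/value extraction first can help, though the delimiters shown here are assumptions and depend on the raw event format:

```spl
index=abc ns=xyz app_name=gateway*
| extract pairdelim=" ," kvdelim="="
| fieldsummary
| table field count distinct_count
```

Each row of the output names one field found in the results, along with how many events carry it.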
I have installed the Windows infrastructure app on a Splunk search head (which is a server). The app requires multiple indexes (msad, perfmon, wineventlog) and all indexes are receiving data except for msad. This is my inputs.conf file:

# Copyright (C) 2019 Splunk Inc. All Rights Reserved.
# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
# To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default
# into ../local and edit there.
#
###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
index= wineventlog

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true
index= wineventlog

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
index= wineventlog

###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml=true
host=WinEventLogForwardHost
index= wineventlog

###### WinEventLog Inputs for Active Directory ######
## Application and Services Logs - DFS Replication
[WinEventLog://DFS Replication]
disabled = 0
renderXml=true
index= wineventlog

## Application and Services Logs - Directory Service
[WinEventLog://Directory Service]
disabled = 0
renderXml=true
index= wineventlog

## Application and Services Logs - File Replication Service
[WinEventLog://File Replication Service]
disabled = 0
renderXml=true
index= wineventlog

## Application and Services Logs - Key Management Service
[WinEventLog://Key Management Service]
disabled = 0
renderXml=true
index= wineventlog

###### WinEventLog Inputs for DNS ######
[WinEventLog://DNS Server]
disabled=1
renderXml=true
index= wineventlog

###### DHCP ######
[monitor://$WINDIR\System32\DHCP]
disabled = 0
whitelist = DhcpSrvLog*
crcSalt = <SOURCE>
sourcetype = DhcpSrvLog
index = windows

###### Windows Update Log ######
## Enable below stanza to get WindowsUpdate.log for Windows 8, Windows 8.1, Server 2008R2, Server 2012 and Server 2012R2
[monitor://$WINDIR\WindowsUpdate.log]
disabled = 0
sourcetype = WindowsUpdateLog
index = windows

## Enable below powershell and monitor stanzas to get WindowsUpdate.log for Windows 10 and Server 2016
## Below stanza will automatically generate WindowsUpdate.log daily
[powershell://generate_windows_update_logs]
script = ."$SplunkHome\etc\apps\Splunk_TA_windows\bin\powershell\generate_windows_update_logs.ps1"
schedule = 0 */24 * * *
disabled = 0
index = windows

## Below stanza will monitor the generated WindowsUpdate.log in Windows 10 and Server 2016
[monitor://$SPLUNK_HOME\var\log\Splunk_TA_windows\WindowsUpdate.log]
disabled = 0
sourcetype = WindowsUpdateLog
index = windows

###### Monitor Inputs for Active Directory ######
[monitor://$WINDIR\debug\netlogon.log]
sourcetype=MSAD:NT6:Netlogon
disabled=0
index = msad

###### Monitor Inputs for DNS ######
[MonitorNoHandle://$WINDIR\System32\Dns\dns.log]
sourcetype=MSAD:NT6:DNS
disabled=0
index = msad

###### Scripted Input (See also wmi.conf)
[script://.\bin\win_listening_ports.bat]
disabled = 0
## Run once per hour
interval = 3600
sourcetype = Script:ListeningPorts

[script://.\bin\win_installed_apps.bat]
disabled = 0
## Run once per day
interval = 86400
sourcetype = Script:InstalledApps
index = windows

[script://.\bin\win_timesync_status.bat]
disabled = 0
## Run once per hour
interval = 3600
sourcetype = Script:TimesyncStatus
index = windows

[script://.\bin\win_timesync_configuration.bat]
disabled = 0
## Run once per hour
interval = 3600
sourcetype = Script:TimesyncConfiguration
index = windows

[script://.\bin\netsh_address.bat]
disabled = 0
## Run once per day
interval = 86400
sourcetype = Script:NetworkConfiguration
index = windows

###### Scripted/Powershell Mod inputs Active Directory ######
## Replication Information NT6
[script://.\bin\runpowershell.cmd nt6-repl-stat.ps1]
source = Powershell
sourcetype = MSAD:NT6:Replication
interval = 300
disabled = 0
index = msad

## Replication Information 2012r2 and 2016
[powershell://Replication-Stats]
script = & "$SplunkHome\etc\apps\Splunk_TA_windows\bin\Invoke-MonitoredScript.ps1" -Command ".\powershell\2012r2-repl-stats.ps1"
schedule = 0 */5 * ? * *
source = Powershell
sourcetype = MSAD:NT6:Replication
disabled = 0
index = msad

## Health and Topology Information NT6
[script://.\bin\runpowershell.cmd nt6-health.ps1]
source=Powershell
sourcetype = MSAD:NT6:Health
interval = 300
disabled = 0
index = msad

## Health and Topology Information 2012r2 and 2016
[powershell://AD-Health]
script = & "$SplunkHome\etc\apps\Splunk_TA_windows\bin\Invoke-MonitoredScript.ps1" -Command ".\powershell\2012r2-health.ps1"
schedule = 0 */5 * ? * *
source = Powershell
sourcetype = MSAD:NT6:Health
disabled = 0
index = msad

## Site, Site Link and Subnet Information NT6
[script://.\bin\runpowershell.cmd nt6-siteinfo.ps1]
source = Powershell
sourcetype = MSAD:NT6:SiteInfo
interval = 3600
disabled = 0
index = msad

## Site, Site Link and Subnet Information 2012r2 and 2016
[powershell://Siteinfo]
script = & "$SplunkHome\etc\apps\Splunk_TA_windows\bin\Invoke-MonitoredScript.ps1" -Command ".\powershell\2012r2-siteinfo.ps1"
schedule = 0 15 * ? * *
source = Powershell
sourcetype = MSAD:NT6:SiteInfo
disabled = 0
index = msad

##### Scripted Inputs for DNS #####
## DNS Zone Information Collection
[script://.\bin\runpowershell.cmd dns-zoneinfo.ps1]
source = Powershell
sourcetype = MSAD:NT6:DNS-Zone-Information
interval = 3600
disabled = 0
index = msad

## DNS Health Information Collection
[script://.\bin\runpowershell.cmd dns-health.ps1]
source = Powershell
sourcetype = MSAD:NT6:DNS-Health
interval = 3600
disabled = 0
index = msad

###### Host monitoring ######
[WinHostMon://Computer]
interval = 600
disabled = 0
type = Computer
index = windows

[WinHostMon://Process]
interval = 600
disabled = 0
type = Process
index = windows

[WinHostMon://Processor]
interval = 600
disabled = 0
type = Processor
index = windows

[WinHostMon://NetworkAdapter]
interval = 600
disabled = 0
type = NetworkAdapter
index = windows

[WinHostMon://Service]
interval = 600
disabled = 0
type = Service
index = windows

[WinHostMon://OperatingSystem]
interval = 600
disabled = 0
type = OperatingSystem
index = windows

[WinHostMon://Disk]
interval = 600
disabled = 0
type = Disk
index = windows

[WinHostMon://Driver]
interval = 600
disabled = 0
type = Driver
index = windows

[WinHostMon://Roles]
interval = 600
disabled = 0
type = Roles
index = windows

###### Print monitoring ######
[WinPrintMon://printer]
type = printer
interval = 600
baseline = 1
disabled = 0
index = windows

[WinPrintMon://driver]
type = driver
interval = 600
baseline = 1
disabled = 0
index = windows

[WinPrintMon://port]
type = port
interval = 600
baseline = 1
disabled = 0
index = windows

###### Network monitoring ######
[WinNetMon://inbound]
direction = inbound
disabled = 0
index = windows

[WinNetMon://outbound]
direction = outbound
disabled = 0
index = windows

###### Splunk 5.0+ Performance Counters ######
## CPU
[perfmon://CPU]
disabled = 0
instances = *
interval = 10
mode = single
object = Processor
useEnglishOnly=true
index = perfmon

## Logical Disk
[perfmon://LogicalDisk]
disabled = 0
instances = *
interval = 10
mode = single
object = LogicalDisk
useEnglishOnly=true
index = perfmon

## Physical Disk
[perfmon://PhysicalDisk]
disabled = 0
instances = *
interval = 10
mode = single
object = PhysicalDisk
useEnglishOnly=true
index = perfmon

## Memory
[perfmon://Memory]
disabled = 0
interval = 10
mode = single
object = Memory
useEnglishOnly=true
index = perfmon

## Network
[perfmon://Network]
disabled = 0
instances = *
interval = 10
mode = single
object = Network Interface
useEnglishOnly=true
index = perfmon

## Process
[perfmon://Process]
disabled = 0
instances = *
interval = 10
mode = single
object = Process
useEnglishOnly = true
index = perfmon

## ProcessInformation
[perfmon://ProcessorInformation]
counters = % Processor Time; Processor Frequency
disabled = 0
instances = *
interval = 10
mode = single
object = Processor Information
useEnglishOnly = true
index = perfmon

## System
[perfmon://System]
disabled = 0
instances = *
interval = 10
mode = single
object = System
useEnglishOnly = true
index = perfmon

###### Perfmon Inputs from TA-AD/TA-DNS ######
[perfmon://Processor]
instances = *
interval = 10
disabled = 0
mode = single
useEnglishOnly = true
index = perfmon

[perfmon://Network_Interface]
object = Network Interface
instances = *
interval = 10
disabled = 0
mode = single
useEnglishOnly = true
index = perfmon

[perfmon://DFS_Replicated_Folders]
object = DFS Replicated Folders
instances = *
interval = 30
disabled = 0
mode = single
useEnglishOnly = true
index = perfmon

[perfmon://NTDS]
object = NTDS
interval = 10
disabled = 0
mode = single
useEnglishOnly = true
index = perfmon

[perfmon://DNS]
object = DNS
counters = Total Query Received; Total Query Received/sec; UDP Query
interval = 10
disabled = 0
mode = single
useEnglishOnly = true
index = perfmon

[admon://default]
disabled = 0
monitorSubtree = 1
index = perfmon

[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create
index = perfmon

[WinRegMon://hkcu_run]
disabled = 0
hive = \\REGISTRY\\USER\\.*\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
proc = .*
type = set|create|delete|rename
index = perfmon

[WinRegMon://hklm_run]
disabled = 0
hive = \\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
proc = .*
type = set|create|delete|rename
index = perfmon
I have installed the Windows infrastructure app on a Splunk search head (which is a server). The app requires multiple indexes (msad, perfmon, wineventlog) and all indexes are receiving data except for msad. Attached is my indexes.conf file:

[msad]
coldPath = $SPLUNK_DB/msad/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/msad/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/msad/thaweddb

[perfmon]
coldPath = $SPLUNK_DB/perfmon/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/perfmon/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/perfmon/thaweddb

[wineventlog]
coldPath = $SPLUNK_DB/wineventlog/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/wineventlog/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/wineventlog/thaweddb

[windows]
coldPath = $SPLUNK_DB\windows\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\windows\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\windows\thaweddb

S
I want to track user logins and logouts on computers with a time-based lookup. I have logon and logoff times, for example, in timebased-lookup:

_time,user,host,type
09:00AM,someuser1,ComputerA,logon
10:00AM,someuser1,ComputerA,logoff
10:00PM,otheruser2,ComputerA,logon
11:00PM,otheruser2,ComputerA,logoff

If I then do another search, I want to see which user was logged in during a time range using just the host name. The other raw log looks like this, for example:

09:00AM host=ComputerA type=infection file=malware.exe
11:00AM host=ComputerA type=scanning
11:34PM host=ComputerA type=cleaning

How do I add the username someuser1 only to events between 9 o'clock and 10 o'clock on ComputerA with the time-based lookup? Thank you for helping.
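A sketch of the lookup definition this would need; the stanza name is an assumption, and the time_format here assumes the CSV's _time column really holds times like "09:00AM". In a time-based lookup, transforms.conf declares which column is the timestamp and how far an event may drift from it:

```
# transforms.conf (stanza name timebased_logon is hypothetical)
[timebased_logon]
filename = timebased-lookup.csv
time_field = _time
time_format = %I:%M%p
max_offset_secs = 3600
```

A search could then apply it with something like `index=av_logs | lookup timebased_logon host OUTPUT user`, so each event picks up the user whose logon entry most recently precedes it on that host.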
One of our teams runs a Java script that uses the REST API to fetch data from Splunk Cloud using a search. They run it around 1am EST, once per day. Usually it runs successfully, but sometimes, for no clear reason, the script fails with the following error: "Events might not be returned in sub-second order due to search memory limits. See search.log for more information. Increase the value of the following limits.conf setting:[search]:max_rawsize_perchunk." Here is the more detailed output from the script log:

2020-09-09 01:59:17,172 [INFO ] [mcsplunk] Connecting to splunk
2020-09-09 01:59:17,173 [INFO ] [mcsplunk] Setting up proxy to establish connection
2020-09-09 01:59:17,234 [INFO ] [mcsplunk] Successfully established connection with Splunk on host "our_host" using User "our_user" in app conext "our_app_name"
2020-09-09 01:59:17,234 [INFO ] [mcsplunk] Starting to fetch data from Splunk
2020-09-09 01:59:29,740 [INFO ] [mcsplunk] Splunk query execution has completed. Scanned:723492, Matched:723036, Results:1, Run Duration:11.146
09-09-2020 05:01:28.899 +0000 ERROR SearchMessages - orig_component="CursoredSearch" sid="1234567.123jobid" peer_name="indexer01.ourdomain.com" message=[indexer01.ourdomain.com] Events might not be returned in sub-second order due to search memory limits. See search.log for more information. Increase the value of the following limits.conf setting:[search]:max_rawsize_perchunk.

But when we attempt to run the same Splunk search in the Splunk Cloud GUI, it works fine with no errors. Any idea what could be the cause of such behavior?
Hi, Does anyone know if there is a way to ingest Windows Event Logs into Log Analytics?
I want to split the TID field into two new fields (Ingress_TID and Egress_TID) by correlating against OMSID, which is the same ID number for both. Example data:

TID            PORTAID  OMSID
BLTNMNFSWP001  1/7      oms-100483
MNNTMNICWP001  3/13     oms-100483

In the end I want it to transform into:

Ingress_TID    Egress_TID     PORTAID  OMSID
BLTNMNFSWP001                 1/7      oms-100483
               MNNTMNICWP001  3/13     oms-100483

Any help would be great. Thank you!!
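A sketch of one approach, assuming the first event seen per OMSID is the ingress side and the second is the egress side; the post does not say how the two are distinguished, so that ordering rule (and the index name) is an assumption:

```spl
index=network_inventory
| streamstats count AS seq BY OMSID
| eval Ingress_TID = if(seq=1, TID, null())
| eval Egress_TID  = if(seq=2, TID, null())
| table Ingress_TID Egress_TID PORTAID OMSID
```

streamstats numbers the rows within each OMSID group, and the evals place the TID into one of the two new columns based on that sequence number.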
I use the Alert Manager data model to keep track of all the invoked alerts month over month, using the following:

(index=* OR index=_*) (eventtype="alert_metadata")
| where label != "NULL"
| chart count over label by Time
| rename label AS Alert
| addcoltotals labelfield=Alert label=TOTAL

It was working great; I was able to see trends of how many times each alert was triggered per month for the past 3 or 5 months. Then the numbers I saw last month stopped matching what I see this month. For example:

When the SPL above ran on Aug. 3rd, it showed the number of alerts fired for July as 171.
When the SPL above ran on Sept. 3rd, it showed the number of alerts fired for July as 145 ... missing 26.
When the SPL above ran on Sept. 9th, it showed the number of alerts fired for July as 123 ... missing 22.

It seems the alerts keep disappearing from the Alert Manager data model. Has anybody seen this type of behavior before? Is there any suggestion to fix this problem? Thanks.
Hi, I have been using Meta Woot! to keep track of assets and their logging times. This search in particular shows a single value of hosts checking in, but I'm unable to use timechart correctly to show trend indicators. I've tried adding the following to no avail. Any suggestions would be greatly appreciated.

| timechart span=1d count by host

Search string:

| inputlookup meta_woot where index=* sourcetype=* host=*
| where recentTime>(now()-86400)
| eval latency= round((recentTime-lastTime)/60,2)
| eval latency_type=if(latency<0,"Logging Ahead","Logging Behind")
| eval latency=abs(latency)
| eval latency_type=if(latency="0.00","No Latency",latency_type)
| where latency>=0
| stats dc(host) as "Total Hosts"

Thanks!
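A sketch of one way to get a daily trend out of the lookup: timechart needs a _time field, which rows from inputlookup do not have, so this derives it from recentTime. It also widens the window (the 7-day span is an arbitrary assumption), since filtering to the last 24 hours leaves only one daily bucket to chart:

```spl
| inputlookup meta_woot where index=* sourcetype=* host=*
| where recentTime>(now()-86400*7)
| eval _time = recentTime
| timechart span=1d dc(host) AS "Total Hosts"
```

This assumes recentTime is stored as epoch seconds, which the `now()-86400` comparison in the original search suggests.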
Greetings Splunkers, I have a lookup file that has a list of scheduled jobs with a frequency timestamp (e.g. Mon-Fri @ 3:30) of when the job should be seen within Splunk. I want to create an eval that will allow me to match the index time of an event/job with its frequency timestamp. The dilemma I'm having is incorporating a +/- 5 minute window into the matching criteria. Any assistance in figuring this out would be greatly appreciated.
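A sketch of the +/- 5 minute comparison, with heavy assumptions: the lookup file name job_schedule.csv, the join key job_name, and a sched_time column holding a plain "H:MM" time of day are all hypothetical stand-ins for the real lookup. The idea is to normalize both the event's index time and the scheduled time to seconds since midnight, then compare within 300 seconds:

```spl
index=jobs
| lookup job_schedule.csv job_name OUTPUT sched_time
| eval event_secs = tonumber(strftime(_indextime, "%H")) * 3600 + tonumber(strftime(_indextime, "%M")) * 60
| eval sched_secs = tonumber(mvindex(split(sched_time, ":"), 0)) * 3600 + tonumber(mvindex(split(sched_time, ":"), 1)) * 60
| eval on_time = if(abs(event_secs - sched_secs) <= 300, "match", "off-schedule")
```

Handling the day-of-week part of the frequency (Mon-Fri) would need an additional check, e.g. comparing strftime(_indextime, "%a") against the lookup's schedule.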
I have created a lookup test123.csv, owned by me. When a user runs a query against it, he gets the error "User has insufficient permissions to perform this operation. list_workload_pools and select_workload_pools capabilities are required." When I run the same query I don't get any error, but it also returns 0 events. I don't understand the error; is it because I own the file?

index=123 AND organizationId=00TY00000005677 AND logRecordType=ailtn (Lightning_Console)
| dedup sessionKey
| lookup test123.csv UserId AS userId OUTPUT UserId AS userId
| table userId, sessionKey, _time

Would highly appreciate thoughts and suggestions on this.
Hello, I was taking a look at "Deep Learning Toolkit for Splunk" and was wondering if someone could point me to the docs for securing the different services (i.e. Jupyter, TensorBoard, etc.) used by the app? (By securing, I mean locking them down to users coming from Splunk.) From what I saw in the code, the different services just run in the Docker environment on different ports, and the "Deep Learning Toolkit for Splunk" app knows which ports they run on and simply opens a new browser tab linking out to each service directly. Is that correct, or does it integrate with Splunk more closely so that auth is handled there? Or do you have to build your own containers to lock things down? Thanks!
I am experiencing an issue where the rules in place are firing as expected but have, in the past 2 weeks, suddenly stopped sending email alerts. While this wouldn't be difficult to troubleshoot if it were ALL alerts, it's only a select few. The configuration of the affected email alerts is the same as for the alerts that are working and emailing as expected. Has anyone experienced this issue before?