All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We need to capture Oozie workflow failure events in Splunk via the HTTP Event Collector (example: https://qa.splunk.organization.com/services/collector). To achieve this, we created a separate Oozie Java action to log the failure event to Splunk. The problem with this approach is that we have more than 100 Oozie workflows, and adding a new workflow action for Splunk to each one is not feasible. Is there a better approach to capture Oozie workflow failures in the Splunk HTTP Event Collector?
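Rather than adding a Java action to every workflow, one common pattern is to have a Splunk forwarder monitor the Oozie server log centrally, so no per-workflow change is needed. A minimal inputs.conf sketch — the log path and index name here are assumptions for illustration, not the poster's configuration:

```
# inputs.conf on a forwarder installed on the Oozie server
# (path and index are hypothetical -- adjust to your environment)
[monitor:///var/log/oozie/oozie.log]
sourcetype = oozie
index = hadoop_ops
```

Failures could then be alerted on with a search such as `index=hadoop_ops sourcetype=oozie "KILLED" OR "FAILED"`, again treating the exact failure strings as environment-specific.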
Splunk is installed on a Windows server, and we are getting the following errors in the web UI:

KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
KV Store changed status to failed.
KVStore process terminated.
Failed to start KV Store process. See mongod.log and splunkd.log for details.

Checking mongod.log shows the following entries:

[initandlisten] Detected unclean shutdown - C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\mongod.lock is not empty.
I JOURNAL [initandlisten] journal dir=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal
I JOURNAL [initandlisten] recover begin
I JOURNAL [initandlisten] info no lsn file in journal/ directory
I JOURNAL [initandlisten] recover lsn: 0
I JOURNAL [initandlisten] recover C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal\j._0
F CONTROL [initandlisten] CreateFileW for C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal\j._0 failed with Access is denied. (file size is 8192) in MemoryMappedFile::map
F - [initandlisten] Fatal Assertion 16334 at src\mongo\db\storage\mmap_v1\mmap.cpp 129
F - [initandlisten] ***aborting after fassert() failure

Any ideas?
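The "Access is denied" on the journal file usually points at file permissions or a leftover lock after an unclean shutdown rather than data corruption. A hedged sketch of the Splunk CLI steps often suggested for this situation (run from the Splunk bin directory as the account Splunk runs under; back up the kvstore directory first, since `clean kvstore` deletes local KV Store data):

```
splunk stop
REM Verify the Splunk service account has full control of
REM C:\Program Files\Splunk\var\lib\splunk\kvstore, then try a plain restart.
REM Only if the restart still fails, reset the local KV Store:
splunk clean kvstore --local
splunk start
```

This is a recovery sketch, not a guaranteed fix; if the permission error persists, the service account and any antivirus exclusions on the kvstore path are worth checking.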
Hello Splunk team, I have a question about my searches in Splunk. I have three indexes and I want to search them and compare some information. But when I run my search, Tiempo_Ejecutado is wrong and I don't know what is happening!

(index="inlooxtt" StatusName!=Completed StatusName!=Cancelled PerformedByName!=Donado* CreatedDate>2020-05-30 ProjectName!="Capac* General" ProjectName!="Preventas*") OR (index="inlooxtasks" ProjectStatusName!=Completed ProjectStatusName!=Cancelled ContactDisplayName!=Donado* ContactDisplayName!="null" ProjectName!="Capac* General" ProjectName!="Preventas*") OR (index="inlooxprojects" StatusName!="Completed" StatusName!="Cancelled" StatusName!="Pausado" IsRecycled!="true" FirstTeamMember!="Inloox - Alejandro Donado (deleted)" Name!="Capacit* General" Name!=Preventas*) | eval Proyectos=coalesce(ProjectName, Name) | eval Tiempo_Ejecutado=(DurationMinutes/60), Tiempo_Planeado=WorkAmount, Tiempo_Vendido=Ventas | stats dedup_splitvals=true sum(Tiempo_Ejecutado) as Tiempo_Ejecutado, sum(Tiempo_Planeado) as Tiempo_Planeado, sum(Tiempo_Vendido) as Tiempo_Vendido by Proyectos | eval Tiempo_Ejecutado=round(Tiempo_Ejecutado,2) | eval Tiempo_Planeado=round(Tiempo_Planeado,2) | sort Proyectos

Index1 has ProjectName, Index2 has ProjectName, and Index3 has Name. Thanks all!
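One likely cause is that DurationMinutes exists only in events from one of the three indexes, so Tiempo_Ejecutado is null for the others and the per-project sums come out wrong or empty. A sketch — field availability per index is an assumption here — that computes each metric only where its source field exists and defaults the rest to 0:

```
... | eval Proyectos=coalesce(ProjectName, Name)
| eval Tiempo_Ejecutado=if(isnotnull(DurationMinutes), DurationMinutes/60, 0)
| eval Tiempo_Planeado=coalesce(WorkAmount, 0), Tiempo_Vendido=coalesce(Ventas, 0)
| stats sum(Tiempo_Ejecutado) as Tiempo_Ejecutado, sum(Tiempo_Planeado) as Tiempo_Planeado, sum(Tiempo_Vendido) as Tiempo_Vendido by Proyectos
| eval Tiempo_Ejecutado=round(Tiempo_Ejecutado,2), Tiempo_Planeado=round(Tiempo_Planeado,2)
| sort Proyectos
```

With the defaults in place, every event contributes a number to each sum, so a project that appears only in one index still gets a complete row.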
Hello,

My output is:

signature, count
BitTorrent DHT ping request, 896
Bittorrent P2P Client User-Agent (uTorrent), 350
BitTorrent DHT announce_peers request, 296
BitTorrent announce request, 201
BitTorrent DHT nodes reply, 121
Observed DNS Query to .biz TLD, 53586
Observed DNS Query to .cloud TLD, 24277
DYNAMIC_DNS Query to, 5896
DynDNS CheckIp External IP Address Server Response, 2894
OpenDNS DNSCrypt, 577

I want to unite similar events, so the output should be:

signature, count
Torrent, 1864
DNS, 87230

Can someone help me with a search pattern that will solve my issue? One of the main criteria is that it should be easy to scale and work without creating a new field. The transformation should happen before I use the stats command.
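One approach that scales without a lookup is to rewrite signature in place with case(match(...)) before stats, so a new signature family only needs one more regex alternative. A sketch — the regexes are assumptions based on the sample values shown:

```
... | eval signature=case(
        match(signature, "(?i)torrent"), "Torrent",
        match(signature, "(?i)dns"),     "DNS",
        true(),                          signature)
| stats sum(count) as count by signature
```

This rewrites the existing field rather than adding a new one; any signature matching neither pattern falls through unchanged thanks to the true() branch.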
Is there a way to read logs generated by Splunk via an API or SDK? I would be very grateful if someone could share their thoughts on this.
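Splunk's own logs are indexed into _internal, so one option is simply to search them; any such search can also be run programmatically through the REST API (/services/search/jobs) or the language SDKs, which wrap that endpoint. A sketch of the search side (the source and field names are the standard ones, but worth verifying in your environment):

```
index=_internal source=*splunkd.log* log_level=ERROR
| stats count by component
```

The same SPL string can be POSTed to /services/search/jobs to retrieve results over HTTPS.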
For the two indexes.conf volume settings below, would one take precedence over the other if they use the same path?

[volume:hotwarm]
path = /splunk-data/hot-warm
# ~5.8 TB
maxVolumeDataSizeMB = 5800000

[volume:_splunk_summaries]
path = /splunk-data/hot-warm
# ~100 GB
maxVolumeDataSizeMB = 100000

I am observing that the volume with the lower value appears to be taking precedence over the volume with the higher value. Every index that is configured to use the "hotwarm" volume looks to be getting capped at 100 GB when looking at my MC (Indexing -> Indexes and Volumes -> Volume Detail: Instance). Below is the sample data I see within the MC for one indexer for hotwarm.

Volume Usage
Volume       Volume Usage (GB)    Volume Capacity (GB)    Volume Path
hotwarm      948.96               5664.06                 /splunk-data/hot-warm

Index Directories Using This Volume
Index:Directory            Disk Usage (GB)    Data Age (Days)
paloalto:home              95.53              11
asa:home                   91.57              15
wsa:home                   87.06              96
extrahop:home              86.46              157
firepower:home             85.50              188
adaudit:home               80.08              539
phantom_container:home     78.44              497
sep:home                   75.07              394
waf:home                   64.47              489

For the sample above, the volume usage for hotwarm never goes past 1 TB, which leads me to believe the lower setting does take precedence, because the individual indexes don't go past 100 GB each.
After a ton of digging (btool, SPL, Linux commands, etc.), the same setting keeps showing up as the possible culprit (maxVolumeDataSizeMB = 100000). If this behavior is expected, then I assume I'd need to create another volume for my summaries? If not, then I am truly stumped, as I see no other comparable setting anywhere in the system that would cause the limitation. By the way, the indexers are clustered (10) with SF=2 RF=3 on Red Hat 7.3, running Splunk 7.3.1. Also, I saw this type of configuration shown as an example in the 7.3.1 admin manual (https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf), which leads me to believe it is possible to have a configuration that shares volumes. Example subset from the admin manual below (using the same path):

### Indexes may be allocated space in effective groups by sharing volumes ###
# perhaps we only want to keep 100GB of summary data and other
# low-volume information
[volume:small_indexes]
path = /mnt/splunk_indexes
maxVolumeDataSizeMB = 100000

# and this is our main event series, allowing 50 terabytes
[volume:large_indexes]
path = /mnt/splunk_indexes
maxVolumeDataSizeMB = 50000000
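One explanation often given for this symptom (worth verifying against the indexes.conf spec for your version) is that each volume meters the data stored under its path, so when two volumes share a path, the combined usage counts against both caps and the smaller cap trips first — which would match the observed ~100 GB ceiling. Under that assumption, the usual fix is to give summaries their own directory so their usage is metered independently; a sketch with illustrative paths:

```
[volume:hotwarm]
path = /splunk-data/hot-warm
maxVolumeDataSizeMB = 5700000

[volume:_splunk_summaries]
# separate directory so summary usage is metered on its own
path = /splunk-data/summaries
maxVolumeDataSizeMB = 100000
```

The admin-manual example is internally consistent because there the indexes sharing the path are intended to be capped as one group.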
How can I compare the average value of a field across two different time frames, i.e., the same time window today versus the same time window yesterday?
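The timewrap command is built for exactly this: compute the average per time bucket, then overlay today against the prior day. A sketch — the index, field, and span are placeholders:

```
index=my_index earliest=-1d@d latest=now
| timechart span=1h avg(response_time) as avg_value
| timewrap 1d
```

This produces one column per day, aligned on time-of-day, so today's and yesterday's averages can be charted side by side or compared with a further eval.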
Hello, I am new to using rex, which I believe is what I need. I have a field whose data looks like this:

10.231.247.162--WTLDNDAA001--Can't ping DSLAM
10.44.69.250--TCSUAZMS--VisibilityOnly--Can't ping DSLAM
10.44.69.250--TCSUAZMS--Can't ping DSLAM--VisibilityOnly
172.31.247.148--CLSPCO32H01.2--Can't ping DSLAM
172.31.166.155--RSBGORBU--Can't ping DSLAM

I want my table to ONLY show what's between the first two sets of hyphens. For example, I want to get:

WTLDNDAA001
TCSUAZMS
CLSPCO32H01.2
RSBGORBU

Can anybody help me with creating a rex that removes everything not between the two sets of hyphens? It would be greatly appreciated!
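Assuming the device name is always the second double-hyphen-delimited segment, a sketch (swap field=_raw for your actual field name if the data lives in an extracted field):

```
... | rex field=_raw "^\S+--(?<device>.+?)--"
| table device
```

The non-greedy .+? stops at the first "--" after the device name, so it also tolerates names containing dots such as CLSPCO32H01.2.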
Greetings,

I have wireless logs from which I am attempting to create a timechart, split by area and displayed stacked by access point. Here is my base search:

index="wlan" EventType=Authentication | rex field=AP "(?<=AP-)(?<area>.*?)(?=-)" | eval area = upper(area)

Ideally, I would like to create a trellis of ...

| timechart dc(Mac) AS count by AP

grouped by the field area. I see loads of examples on Answers; this is the closest I get: https://community.splunk.com/t5/Splunk-Search/Timechart-count-by-ip-by-fw-name-over-time-with-trellis/m-p/498737

With this, I get:

index="wlan" EventType=Authentication | rex field=AP "(?<=AP-)(?<area>.*?)(?=-)" | eval area = upper(area) | bucket span=30m _time | stats dc(Mac) AS count by _time area AP | eval {AP}=count | fields - AP count

which is VERY close. My trellis is grouped by area, and it displays my bar chart over time. However, it 1) does not support stacking, and 2) does not seem to filter the legend. In my case, I have around 10 areas and 160 APs; each chart in the trellis by area displays correctly, but each one also displays all 160 APs in the legend. Any help is appreciated.
Hi all,

We are currently running a distributed instance of Splunk on-prem. Is there a way to monitor Splunk manually through search rather than the DMC? The reason is that I would like to alert teams of faults (like an indexer not being seen by the cluster master, etc.). I've also had an issue where we didn't notice that a data source had stopped ingesting. Has anyone been able to successfully implement a search to detect when a log source stops ingesting? All ideas are welcome! Thank you in advance.
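For detecting a host that has stopped sending data, a common pattern is the metadata command, comparing each host's most recent event time against now. A sketch — the one-hour threshold is an arbitrary assumption to tune:

```
| metadata type=hosts index=*
| eval minutes_since=round((now()-recentTime)/60, 0)
| where minutes_since > 60
| convert ctime(recentTime) as last_seen
| table host last_seen minutes_since
```

Cluster health can similarly be watched by searching index=_internal for clustering errors, or by querying | rest /services/cluster/master/peers on the cluster master and alerting on peer status.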
Hi Splunkers,

Has anyone seen this or something similar? We are collecting Windows event logs from Windows servers via a logging tool, and they are being forwarded to a Splunk HF. Due to the logging system's format, Splunk is not parsing the fields automatically. Here is a sample event:

Jun 11 12:00:08 LOGGING-SERVER.domain.com 1 2020-06-11T19:00:03.529Z WINSVR001 - - - [Originator@6876 eventid="4624" task="Logon" keywords="Audit Success" level="Information" channel="Security" opcode="Info" eventrecordid="383694869" providername="Microsoft-Windows-Security-Auditing"] An account was successfully logged on. Subject: Security ID: S-1-5-20 Account Name: WINSVR001-V$ Account Domain: AD Logon ID: 0x3e4 Logon Type: 8 New Logon: Security ID: S-1-5-21-503695880-123456789-3595387526-4510 Account Name: jdoe Account Domain: AD Logon ID: 0x139c67d30 Logon GUID: {F325D620-6114-0657-01BF-F25C4AD21656} Process Information: Process ID: 0x3f8 Process Name: D:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\PopImap\Microsoft.Exchange.Imap4.exe Network Information: Workstation Name: WINSVR001-V Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: Advapi Authentication Package: Negotiate Transited Services: - Package Name (NTLM only): - Key Length: 0

We are using Splunk Add-on for Microsoft Windows 8.0. Is it possible to modify the existing conf files to have the fields parsed? Using the add-on with all the defined fields would integrate nicely with CIM and ES. I'm trying to avoid reinventing the wheel by doing a brute-force regex on the whole event. If you're up to the challenge, I'm looking for:

- Is it possible to modify the Splunk Add-on for Microsoft Windows 8.0 to recognize the above wineventlog format?
- Can someone help me with the regex to parse all the wineventlog fields and values?

I appreciate the help in advance.

Thanks,
H
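The add-on's extractions are keyed to sourcetypes such as WinEventLog, so one possible approach is a props/transforms pair on the HF that strips the syslog header and rewrites the sourcetype so the add-on's parsing gets a chance to apply. This is an untested sketch: the stanza name is hypothetical, the regex is based only on the one sample event, and whether the add-on's extractions then match depends on how closely the remaining body resembles the native WinEventLog layout.

```
# props.conf (on the HF, applied to the incoming sourcetype)
[my_syslog_wineventlog]
TRANSFORMS-fixup = strip_syslog_header, set_wineventlog_st

# transforms.conf
[strip_syslog_header]
# drop everything up to and including the [Originator@... ] block
REGEX = ^[^\[]+\[Originator@\d+[^\]]*\]\s*(.*)$
DEST_KEY = _raw
FORMAT = $1

[set_wineventlog_st]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::WinEventLog
```

Even with the header stripped, the add-on's search-time extractions may expect the multi-line layout of natively collected events, so treat this as a starting point rather than a drop-in fix.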
We have an email set to run on a schedule from a dashboard. It worked yesterday but not today. I can see in the log where the email is sending. I tested by changing the scheduled time, and again it says the email is sending, but it does not arrive. Another email comes from a scheduled report, and it did not arrive today either.
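When Splunk reports the mail as sent, _internal usually shows what the sendemail action actually did. A sketch for surfacing SMTP-side errors (standard internal sources, but verify the source path on your version):

```
index=_internal source=*python.log* sendemail
| table _time log_level event_message
```

If no errors appear there around the scheduled times, the message most likely left Splunk successfully and died at the mail relay or spam filter, which narrows where to look next.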
Hello Splunk team,

I have a problem with my search because I use two different indexes, and the field I want to compare on is named differently in each. In one index I have ContactDisplayName and in the other index I have PerformedByName. The two fields contain the same data, but when I try to compare them I can't. I tried to rename ContactDisplayName as PerformedByName and run my search again, but it did not work.

I have this right now:

(index="inlooxtt" StatusName!=Completed StatusName!=Cancelled PerformedByName!=Donado* CreatedDate>2020-05-30 ProjectName!="Capac* General" ProjectName!="Preventas*") OR (index="inlooxtasks" ProjectStatusName!=Completed ProjectStatusName!=Cancelled ContactDisplayName!=Donado* ContactDisplayName!="null" ProjectName!="Capac* General" ProjectName!="Preventas*") | rename ContactDisplayName as PerformedByname | eval Tiempo_Ejecutado=(DurationMinutes/60), Tiempo_Planeado=WorkAmount | stats dedup_splitvals=true sum(Tiempo_Ejecutado) as Tiempo_Ejecutado, sum(Tiempo_Planeado) as Tiempo_Planeado by PerformedByname

But Tiempo_Ejecutado does not appear in the results.

Thanks all
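Two things worth checking: SPL field names are case-sensitive, and the rename in the search produces "PerformedByname" (lowercase "n") while the first index supplies "PerformedByName", so the two series never merge; also DurationMinutes may exist in only one of the indexes. A sketch using coalesce instead of rename, assuming the field layout as described:

```
... | eval Person=coalesce(PerformedByName, ContactDisplayName)
| eval Tiempo_Ejecutado=if(isnotnull(DurationMinutes), DurationMinutes/60, 0), Tiempo_Planeado=coalesce(WorkAmount, 0)
| stats sum(Tiempo_Ejecutado) as Tiempo_Ejecutado, sum(Tiempo_Planeado) as Tiempo_Planeado by Person
```

coalesce picks whichever of the two name fields is present in each event, sidestepping the case-sensitivity trap entirely.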
Hi, please suggest methods for freeing some space in /opt/splunkcolddata on an indexer, and how to reduce the retention period for indexed data. Thanks.
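Retention is controlled per index in indexes.conf, chiefly frozenTimePeriodInSecs (age at which buckets roll to frozen, which means deletion unless an archive path is set) and maxTotalDataSizeMB (overall size cap). A sketch — the index name and values are placeholders; lowering either setting causes the oldest cold buckets to be frozen on the next housekeeping pass:

```
[my_index]
# freeze (delete, unless coldToFrozenDir is set) data older than 90 days
frozenTimePeriodInSecs = 7776000
# cap the whole index at ~200 GB
maxTotalDataSizeMB = 200000
```

Because freezing is destructive by default, it is worth confirming compliance requirements before tightening these values.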
All I'm trying to do is show the percentage of used memory (available/total). I have a query that returns the available and the total from one of the dashboards:

index=*index* sourcetype=tss:action host=*host* category=monitoring_wp OR category=monitoring_as measure="memory health status" OR measure=mem OR measure="available bytes" | eval "Mem Health" = case(measure == "memory health status", value) | eval memory = case(measure == "mem", round(value/1024/1024/1024,2)) | eval "Avail Memory" = case(measure == "available bytes",round(value/1024/1024/1024,2)) | bin span=5m _time | timechart avg("Avail Memory") as "Available Memory (Gb)" avg(memory) as "Heap Mem(Gb) Avg" span=5m

This seems pretty straightforward. I thought all I would need to do is add another eval statement to find the fraction for the percentage of used memory:

index=*index* sourcetype=tss:action host=*host* category=monitoring_wp OR category=monitoring_as measure="memory health status" OR measure=mem OR measure="available bytes" | eval "Mem Health" = case(measure == "memory health status", value) | eval memory = case(measure == "mem", round(value/1024/1024/1024,2)) | eval "Avail Memory" = case(measure == "available bytes",round(value/1024/1024/1024,2)) | eval "Percentage" = 'Avail Memory'/memory  <----- ***** Here is what I added ***** | bin span=5m _time | timechart avg("Avail Memory") as "Available Memory (Gb)" avg(memory) as "Heap Mem(Gb) Avg" avg(Percentage) as "Percentage" span=5m

But that doesn't work; the Percentage field always comes back empty (null) in the result set. The eval statement and format work correctly if I replace the expression with only one of the variables:

| eval "Percentage" = 'Avail Memory'

or if I replace one of the variables with a number:

| eval "Percentage" = 'Avail Memory'/2

Any thoughts on what I could do here?
I was thinking the problem might be that "Avail Memory" and "memory" are coming from different events, so when one is not null, the other will be null. Or I just don't exactly understand how Splunk is generating these results.
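That guess matches the symptom: each event carries only one measure, so at the row level one side of the division is always null, and null divided by anything is null. One way around it is to compute the ratio after timechart has aggregated both series onto the same row. A sketch reusing the placeholders from the question:

```
index=*index* sourcetype=tss:action host=*host* measure=mem OR measure="available bytes"
| eval memory = case(measure == "mem", round(value/1024/1024/1024,2))
| eval avail  = case(measure == "available bytes", round(value/1024/1024/1024,2))
| timechart span=5m avg(avail) as "Available Memory (Gb)" avg(memory) as "Heap Mem(Gb) Avg"
| eval Percentage = round('Available Memory (Gb)' / 'Heap Mem(Gb) Avg' * 100, 1)
```

Moving the eval below timechart means both averages exist in every row, so the division succeeds.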
I had Splunk Enterprise 8.0.1 this morning and I installed the most recent version (8.0.4.1). After the upgrade I restarted Splunk, verified in Splunk Web, and saw version 8.0.4.1. But then I modified splunk-launch.conf and web.conf to bind to another IP, following this page: https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/BindSplunktoanIP. I restarted Splunk and all seemed alright, but when I browsed to that same IP it gave me a 500 error, as shown in the screenshot. If someone can help me, please reply.
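For reference, the binding involves two settings; a sketch with a placeholder address (the IP is hypothetical):

```
# splunk-launch.conf
SPLUNK_BINDIP = 10.0.0.5

# web.conf
[settings]
server.socket_host = 10.0.0.5
```

One commonly cited cause of a 500 from Splunk Web after this change is that SPLUNK_BINDIP also rebinds splunkd's management port (8089), while mgmtHostPort in web.conf still points at 127.0.0.1:8089, so the web layer can no longer reach splunkd. Aligning mgmtHostPort with the new bind address is worth trying.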
I've been getting messages saying that some identities are exceeding the field limits. I've increased the limit on some of them, but I'm having difficulty finding the exact field that is causing this issue. Is there a way to find the exact instance where this limit is being exceeded?

Identity: 25 assets are currently exceeding the field limits set in the Asset and Identity Management page. Data truncation will occur unless the field limits are increased. Sources: [merge].
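Since the message cites the merged asset data, one way to hunt for the offending field is to measure the multivalue field sizes in the merged asset lookup. A sketch — the lookup name asset_lookup_by_str is the Enterprise Security default, but verify it for your version:

```
| inputlookup asset_lookup_by_str
| foreach * [ eval size_<<FIELD>> = mvcount('<<FIELD>>') ]
| stats max(size_*) as size_*
```

Comparing the resulting maxima against the limits configured in Asset and Identity Management should reveal which field (and, by sorting the raw rows on that size field, which asset) is being truncated.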
Search Lag Root Cause(s): The number of extremely lagged searches (1) over the last hour exceeded the red threshold (1) on this Splunk instance

Last 50 related messages:

06-12-2020 10:15:28.204 -0400 INFO SavedSplunker - Scheduler Health Report recording a extremely lagged search="Splunk Web Login Attempts" with lag=267 search_period=60
06-11-2020 23:03:00.663 -0400 INFO SavedSplunker - Scheduler Health Report recording a extremely lagged search="Splunk Web Login Attempts" with lag=2100 search_period=60
06-11-2020 22:17:54.510 -0400 INFO SavedSplunker - Scheduler Health Report recording a extremely lagged search="Splunk Web Login Attempts" with lag=354 search_period=60
06-11-2020 18:39:31.208 -0400 INFO SavedSplunker - Scheduler Health Report recording a extremely lagged search="Splunk Web Login Attempts" with lag=1770 search_period=60
06-11-2020 17:09:09.800 -0400 INFO SavedSplunker - Scheduler Health Report recording a extremely lagged search="Splunk Web Login Attempts" with lag=189 search_period=60
06-11-2020 16:15:55.517 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=107.691, result_count=1, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.12", suppressed=1, fired=0, skipped=1, action_time_ms=44, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:50.575 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=102.711, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.11", suppressed=2, fired=0, skipped=2, action_time_ms=52, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:45.572 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=97.714, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.10", suppressed=2, fired=0, skipped=2, action_time_ms=48, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:40.578 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=92.709, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.9", suppressed=2, fired=0, skipped=2, action_time_ms=55, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:35.575 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=87.719, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.8", suppressed=2, fired=0, skipped=2, action_time_ms=43, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:30.520 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=82.709, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.7", suppressed=2, fired=0, skipped=2, action_time_ms=30, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:25.550 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=77.703, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.6", suppressed=2, fired=0, skipped=2, action_time_ms=41, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:20.579 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=72.702, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.5", suppressed=2, fired=0, skipped=2, action_time_ms=67, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:15.563 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=67.707, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.4", suppressed=2, fired=0, skipped=2, action_time_ms=47, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:10.567 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=62.706, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.3", suppressed=2, fired=0, skipped=2, action_time_ms=48, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:05.565 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=57.705, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.2", suppressed=2, fired=0, skipped=2, action_time_ms=47, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:15:00.518 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=52.681, result_count=2, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.1", suppressed=2, fired=0, skipped=2, action_time_ms=62, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 16:14:55.518 -0400 INFO SavedSplunker - savedsearch_id="nobody;search;Splunk Web Login Attempts", search_type="", user="kmill78", app="search", savedsearch_name="Splunk Web Login Attempts", priority=default, status=success, digest_mode=0, scheduled_time=1591906445, window_time=0, dispatch_time=1591906447, run_time=47.676, result_count=1, alert_actions="", sid="rt_scheduler__kmill78__search__RMD52fa94ba1191f811b_at_1591906445_1.0", suppressed=0, fired=1, skipped=0, action_time_ms=50, thread_id="AlertNotifierWorker-0", message="", workload_pool=""
06-11-2020 14:02:46.137 -0400 INFO SavedSplunker - savedsearch_id="nobody;splunk_monitoring_console;DMC Asset - Build Standalone Asset Table", search_type="scheduled", user="nobody", app="splunk_monitoring_console", savedsearch_name="DMC Asset - Build Standalone Asset Table", priority=default, status=success, digest_mode=1, scheduled_time=1591898534, window_time=0, dispatch_time=1591898565, run_time=0.252, result_count=4, alert_actions="populate_lookup", sid="scheduler__nobody_c3BsdW5rX21vbml0b3JpbmdfY29uc29sZQ__RMD54740dfff07b17ef1_at_1591898534_0", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool=""
06-11-2020 14:02:45.291 -0400 INFO SavedSplunker - DCSS: completed reading history for continuous scheduled searches
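The related messages show the lagged search is a real-time scheduled alert (note the rt_scheduler sid) whose run_time of roughly 50-110 seconds exceeds its 60-second period, so lag accumulates by design; converting it to a standard scheduled search or widening the period is the usual remedy. To rank lagged searches in general, a sketch over the scheduler logs:

```
index=_internal sourcetype=scheduler status=success
| eval lag = dispatch_time - scheduled_time
| stats max(lag) as max_lag avg(run_time) as avg_runtime by savedsearch_name
| sort - max_lag
```

Any search whose avg_runtime approaches or exceeds its scheduling interval is a candidate for the same treatment.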
Hi there, any thoughts on how I can get a list/count of all searches, both saved and ad hoc, that ran on all servers in a 24-hour period? It would be good if I could choose between concurrent and total counts. Thanks!
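The audit index records every search dispatch, scheduled or ad hoc. A sketch for the last 24 hours (field names follow the standard audit events; verify savedsearch_name is populated on your version):

```
index=_audit action=search info=completed earliest=-24h
| eval type=if(savedsearch_name=="", "ad hoc", "scheduled")
| stats count by user type
```

For concurrency rather than totals, the Monitoring Console's search activity panels break out concurrent searches over time, which avoids reconstructing overlaps by hand.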
We have a license for only DNS and NetFlow data sources. Is there a way to edit the license to allow additional sourcetypes to be used with it? Currently, only the following sourcetypes are allowed with the license:

<sourcetype>*dns*</sourcetype>
<sourcetype>flowintegrator</sourcetype>
<sourcetype>*netflow*</sourcetype>

Any help would be appreciated.