All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I've noticed an issue with the documentation and configuration for DA-ITSI-OS: https://docs.splunk.com/Documentation/ITSI/4.13.1/IModules/OSmoduleconfiguration

Firstly, the documentation suggests that if using Splunk_TA_nix, I should enable inputs with the following:

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
source = vmstat
# index = os
disabled = 0

[script://./bin/iostat.sh]
interval = 60
sourcetype = iostat
source = iostat
# index = os
disabled = 0

[script://./bin/nfsiostat.sh]
interval = 60
sourcetype = nfsiostat
source = nfsiostat
# index = os
disabled = 0

[script://./bin/ps.sh]
interval = 30
sourcetype = ps
source = ps
# index = os
disabled = 0

[script://./bin/bandwidth.sh]
interval = 60
sourcetype = bandwidth
source = bandwidth
# index = os
disabled = 0

[script://./bin/df.sh]
interval = 300
sourcetype = df
source = df
# index = os
disabled = 0

[script://./bin/cpu.sh]
sourcetype = cpu
source = cpu
interval = 30
# index = os
disabled = 0

[script://./bin/hardware.sh]
sourcetype = hardware
source = hardware
interval = 36000
# index = os
disabled = 0

[script://./bin/version.sh]
disabled = false
# index = os
interval = 86400
source = Unix:Version
sourcetype = Unix:Version

The problem is that these inputs produce event data, while everything else in the module is set up for metrics! In the actual Splunk_TA_nix, the inputs for the metrics versions of those scripts have different stanzas, such as:

cpu_metric
df_metric
interfaces_metric
iostat_metric
ps_metric
vmstat_metric

If I simply change the sourcetype, it breaks the input, so by default all those metrics-based scripts output with the metric name carrying the _metric suffix. Unfortunately, ALL the ITSI OS module searches look for the unsuffixed metric names, e.g. cpu, ps, vmstat! And if I alter the searches to look for the suffixed metric names, I don't get the OS Host Information panel appearing on the entity within the deep dive or entity view. So I don't see how any of this will work with the configured searches unless they are heavily modified, or why the documentation points to event log collection scripts when the module requires metrics indexes, given its use of mstats. What am I missing here?
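For reference, the metric-side inputs in Splunk_TA_nix follow the pattern sketched below. This is a sketch only: the sourcetype/source values are assumptions based on the script names, and your_metrics_index is a placeholder that must point at a metrics index, not an events index.

# inputs.conf (sketch, not the add-on's shipped defaults)
[script://./bin/cpu_metric.sh]
interval = 30
sourcetype = cpu_metric
source = cpu_metric
index = your_metrics_index
disabled = 0

[script://./bin/vmstat_metric.sh]
interval = 60
sourcetype = vmstat_metric
source = vmstat_metric
index = your_metrics_index
disabled = 0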
Based on the last row, which is "Average", check the values of avg_cpu_utilization and avg_mem_usage, and wherever the difference is more than 3, change the cell's colour or mark it in bold.

cluster_name  hypervisor_name  avg_cpu_utilization  avg_mem_usage  max_cpu_readiness  max_cpu_utilization  max_mem_usage
Cluster       Host1            8.2                  29.62          0.18               17.65                29.63
Cluster       Host2            5.5                  26.41          0.08               14.31                26.42
Cluster       Host3            1.7                  30.51          0.01               3.48                 30.52
Average                        3.98                 29.61          0.07               9.39                 29.62

For example, for the avg_cpu_utilization field the average is 3.98, so it should check all the values in that column (8.2, 5.5, 1.7) and wherever the difference from the average is more than 3, mark the value in bold. In this case, comparing 3.98 with the other three values, Host1's 8.2 should be marked in bold or have its colour changed.

The output should be as below, with Host1's avg_cpu_utilization value highlighted:

cluster_name  hypervisor_name  avg_cpu_utilization  avg_mem_usage  max_cpu_readiness  max_cpu_utilization  max_mem_usage
Cluster       Host1            8.2                  29.62          0.18               17.65                29.63
Cluster       Host2            5.5                  26.41          0.08               14.31                26.42
Cluster       Host3            1.7                  30.51          0.01               3.48                 30.52
Average                        3.98                 29.61          0.07               9.39                 29.62
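For reference, a minimal SPL sketch for computing outlier flags, assuming the search already produces the table above with an "Average" row in cluster_name. The actual bolding or colouring would then be driven off the flag fields with table cell formatting in the dashboard.

...existing search producing the table above...
| eventstats max(eval(if(cluster_name=="Average", avg_cpu_utilization, null()))) as cpu_avg
| eventstats max(eval(if(cluster_name=="Average", avg_mem_usage, null()))) as mem_avg
| eval cpu_outlier=if(cluster_name!="Average" AND abs(avg_cpu_utilization-cpu_avg)>3, 1, 0)
| eval mem_outlier=if(cluster_name!="Average" AND abs(avg_mem_usage-mem_avg)>3, 1, 0)
| fields - cpu_avg mem_avg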
I have a syslog-ng configuration that started duplicating events after the Linux box rebooted. Is there any way to avoid it? There are 2 heavy forwarders defined for the same load balancer, and only 1 is duplicating the events in the syslog files created.

[root@ilissplfwd07 syslog-ng]# cat syslog-ng.conf
@version:3.5
@include "scl.conf"
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
# Note: it also sources additional configuration files (*.conf)
# located in /etc/syslog-ng/conf.d/
options {
    flush_lines (0);
    time_reopen (10);
    log_fifo_size (1000);
    chain_hostnames (off);
    use_dns (no);
    use_fqdn (no);
    owner("splunk");
    group("splunk");
    dir-owner("splunk");
    dir-group("splunk");
    create_dirs (yes);
    keep_hostname (yes);
};

## add Default 514 udp/tcp & Filtered based don't modify below line
#############################################
# Syslog 514
#source s_syslog { udp(port(514)); tcp(port(514) keep-alive(yes)); };
source s_syslog518 { udp(port(518)); };
source s_syslog1513 { tcp(port(1513) keep-alive(yes)); };
source s_syslog1514 { tcp(port(1514) keep-alive(yes)); };
source s_syslog1515 { tcp(port(1515) keep-alive(yes)); };
source s_syslog1516 { tcp(port(1516) keep-alive(yes)); };

destination d_1513 { file("/splunksyslog/port1513/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1513); destination(d_1513); };

destination d_1514 { file("/splunksyslog/port1514/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1514); destination(d_1514); };

destination d_1515 { file("/splunksyslog/port1515/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1515); destination(d_1515); };

destination d_1516 { file("/splunksyslog/port1516/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog1516); destination(d_1516); };

# destination d_catch { file("/splunksyslog/catch/$HOST/$YEAR-$MONTH-$DAY-$HOUR-catch.log"); };
# log { source(s_syslog); destination(d_catch); };

destination d_518 { file("/splunksyslog/port518/$HOST/syslog_$FACILITY_$YEAR-$MONTH-$DAY-$HOUR-$(/ $MIN 1).log"); };
log { source(s_syslog518); destination(d_518); };

@include "/etc/syslog-ng/conf.d/*.conf"
I want to combine two search results, where I'm only interested in the last x/y events from each subsearch. Something like this:

| multisearch
    [search index="sli-index" | eval testtype="endp-health" | head 3]
    [search index="sli-index" | eval testtype="enp-system" | head 6]

This leads to the following error:

Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command).

Any idea how this can be achieved?
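For reference, head is a non-streaming command, which is exactly what multisearch rejects. A sketch of one workaround (subject to the usual subsearch limits) is to run the first search directly and append the second:

search index="sli-index" | eval testtype="endp-health" | head 3
| append [search index="sli-index" | eval testtype="enp-system" | head 6]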
Hi Team, I need some information on the Victoria Experience (if possible, advantages and disadvantages) to send a clear report to my client with all the details on why we need to upgrade from the Classic Experience to the Victoria Experience. I have already gone through the Splunk docs on this: https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/Experience but I am not finding enough solid points that I can add to the report and showcase to the client. If anyone has hands-on experience with Victoria and Classic, please provide me with your inputs. It will help me a lot. Thanks in advance.
I am trying to find the failure rate for individual events. Each event has a result which is classified as a success or failure. For this simple run-anywhere example I would like the output to be:

Event    failed_percent
open     .50
close    .66666
lift     .25

|makeresults | eval Event="open", State="success"
|append [|makeresults | eval Event="open", State="locked"]
|append [|makeresults | eval Event="close", State="blocked"]
|append [|makeresults | eval Event="close", State="blocked"]
|append [|makeresults | eval Event="lift", State="too heavy"]
|append [|makeresults | eval Event="lift", State="success"]
|append [|makeresults | eval Event="lift", State="success"]
| eval Success=mvfilter(match(State,"success"))
| eval Failed=mvfilter(match(State,"locked") OR match(State,"blocked") OR match(State,"too heavy"))
| streamstats count(Success) as success_count, count(Failed) as failed_count
| eval failed_percent=failed_count/(success_count+failed_count)
| table Event, success_count, failed_count, failed_percent

This lists each of the 7 events separately, and the counts always add to the total, not by event. I have tried many different ways to achieve this with no success. I started with the simple search below and ended up with the search above. I am not sure how to do an eval(count) on the items in Result. This is obviously not correct SPL, but I tried | eval failure=sum (|where Result="failed"). Plus it would do nothing to group by Event type.

| eval Result=case(like(State,"success"),"success", like(State,"locked"),"failed", like(State,"blocked"),"failed", like(State,"too slow"),"failed", like(State,"too heavy"),"failed", 1=1,"success")
| stats count by Result

I'm not even sure if this is possible. I could do it with a separate search for each event type, but I want a single table in the end. I also thought of doing a lot of joins with different searches, but that seems crazy. Thank you for your help! Using Splunk 8.1.6.
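For reference, a sketch of one way to get the per-Event rate, appended after generating the sample events above. It treats anything that is not "success" as a failure, which is an assumption about the classification:

| eval failed=if(State=="success", 0, 1)
| stats sum(failed) as failed_count, count as total by Event
| eval failed_percent=failed_count/total
| table Event, failed_percent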
Hi, we've set up our Splunk instances to use SAML for sign-on, but are having difficulty configuring an automatic inactivity logout. I've configured it to be 5m in both web.conf and server.conf but still don't get an automatic logout. It does seem to log out automatically every 24 hours (not consistently), but when this happens Splunk redirects to the IdP, which then redirects back to Splunk with a new SAML token. This happens without the IdP even asking the user for login details. Any help would be appreciated.
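For reference, a sketch of the two settings usually involved (the web.conf value is in minutes). Note that with SAML the IdP's own session lifetime also matters: if the IdP session is still valid, Splunk can silently obtain a new SAML token on redirect, which matches the behaviour described.

# web.conf
[settings]
tools.sessions.timeout = 5

# server.conf
[general]
sessionTimeout = 5m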
Hi Team, I am ingesting job logs into Splunk, and below is one of the job logs (the job ran on 27th June) which was ingested with the wrong _time value.

Job log:

(14.2) 06-27-22 10:31:03 (35312:24804) PRINTFN: 2022.06.25
(14.2) 06-27-22 10:31:10 (35312:24804) JOB: Job <ALERTS_MORNING> is completed successfully.

Although the job ran on 27th June, the _time value in Splunk shows 25th June (I assume it was derived from the PRINTFN date in the logs). The date_mday field under _time shows 25 instead of 27. Can someone help with how _time is derived (it should reflect the event timestamp, but in this case it was calculated wrongly) and how to derive the correct timestamp?

Regards, Karthikeyan.SV
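For reference, a props.conf sketch (the sourcetype name is a placeholder) that anchors timestamp extraction on the 06-27-22 10:31:03 portion at the start of each line instead of the later 2022.06.25 date:

# props.conf on the first full Splunk instance the data passes through
[your_job_log_sourcetype]
TIME_PREFIX = ^\(\d+\.\d+\)\s
TIME_FORMAT = %m-%d-%y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20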
Hi Team, we are using the Splunk_TA_nix add-on, version 8.4. While running the three scripts below, we get the following errors. The customer is using Solaris 10.

Splunk_TA_nix/bin/vmstat_metric.sh: awk: record `HARD_DRIVES ssd2262 ...' too long
Splunk_TA_nix/bin/vmstat.sh: awk: record `HARD_DRIVES ssd1038 ...' too long
Splunk_TA_nix/bin/hardware.sh: awk: record `HARD_DRIVES ssd1038 ...' too long

We upgraded the add-on to the latest version, 8.5, but after that the issue below started and no data was ingested:

Splunk_TA_nix/bin/uptime.sh: $(dirname /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/uptime.sh)/common.sh: not found
Splunk_TA_nix/bin/cpu_metric.sh: $(dirname /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh)/common.sh: not found

So to resolve this we rolled the add-on back to version 8.4, but then the awk: record `HARD_DRIVES ssd1038 ...' too long errors started again. Can anyone help solve this issue?
Hello all, I have an event that looks similar to the following:

field_A="US", field_B="true", field_C="AB/CD,XYZ, <>DD,CT", field_D="60"

I am trying to get the count of occurrences of field_C during the past 3 months by using the query below:

field_A="US", field_B="true"
| stats count as ruleFired by field_C

It works fine for all the values that don't have a comma (",") in field_C, but if there is any comma in field_C the count isn't calculated correctly. For example, all of the below are counted as the same group:

field_C="AB/CD,XYZ, <>DD,CT"
field_C="AB/CD,XYZ, DD,CT"
field_C="AB/CD,ABC, <>DD,CT"
field_C="AB/CD,ABC, DD,CT"

The result is:

AB/CD  4

versus the expected:

AB/CD,XYZ, <>DD,CT   1
AB/CD,XYZ, DD,CT     1
AB/CD,ABC, <>DD,CT   1
AB/CD,ABC, DD,CT     1

Any help would be much appreciated.
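For reference, a sketch that works around the comma-truncated automatic extraction by re-extracting the full quoted value with rex before aggregating (field_C_full is just an illustrative name):

field_A="US" field_B="true"
| rex "field_C=\"(?<field_C_full>[^\"]*)\""
| stats count as ruleFired by field_C_full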
I want to know how long it takes to get a result between a request event and its matching response event, using a correlation ID to tie them together. It should also calculate the average duration and the median duration.
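For reference, a sketch using the transaction command, which creates a duration field spanning the first and last matched events. The index, field name, and request/response marker strings here are all assumptions:

index=your_index
| transaction correlation_id startswith="request" endswith="response"
| stats avg(duration) as avg_duration, median(duration) as median_duration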
Dear team, I am unable to access the email settings feature even though my 60-day trial has not expired. In the licensing settings I tried to change the licence from a free license to an enterprise license, and it prompted me to install a license. Can you please check and suggest a fix for this?
I want to create a bar plot which displays the total number of events on the 1st of every month for the last 12 months. I can't query data for the last 12 months directly because the search times out in 5 minutes, as we have billions of events. Is there a way to do this using timechart or some other mechanism? Thanks
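For reference, a tstats-based sketch (the index name is a placeholder). tstats counts from index-time metadata without reading raw events, so it is usually far faster at this scale:

| tstats count where index=your_index earliest=-12mon@mon latest=@mon by _time span=1d
| where strftime(_time, "%d")=="01"
| eval month=strftime(_time, "%b %Y")
| table month, count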
Hi, we've recently tested out a new path for data to flow into our Splunk environment from Universal Forwarders. We have rolled this out to a small portion of our UFs for now, and we've already run into an issue a few times that I can't figure out.

Data flow is as follows: UFs sending logs with [httpout] -> Load Balancer -> a set of Heavy Forwarders receiving data via [http://..] and forwarding data with [tcpout] -> Index cluster receiving data with [tcp://..]. The Heavy Forwarders are there to do some filtering, and also routing of specific data to other Splunk environments. The Heavy Forwarders are also configured with parallelIngestionPipelines=2. I can also mention that all parts of this environment are running on Windows.

The issue: a few times I've suddenly had 100% queue on all four data queues (parsing/aggregation/typing/indexing) on one of the Heavy Forwarders, and only on one of its pipelines, not the other. After looking in splunkd.log, it turns out that one pipeline seems to have simply shut down (I didn't know this was even possible). At first, splunkd.log is filled with the usual over and over:

07-01-2022 12:35:55.724 +0200 INFO TailReader [2092 tailreader1] - Batch input finished reading file='D:\Splunk\var\spool\splunk\tracker.log'
07-01-2022 12:35:59.129 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=1, reuse=0. autoBatch=1
07-01-2022 12:35:59.602 +0200 INFO AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=Y.Y.Y.Y:9997:0, pset=0, reuse=0. autoBatch=1

Until it suddenly logs this one or a few times:

07-01-2022 12:36:15.628 +0200 WARN TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = FSDNhUnXitGgjKJ.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n
07-01-2022 12:36:18.966 +0200 WARN TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = WpC6rhVY9yDblPB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-2\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-2|WinEventLog|\n

Then in the same millisecond this happens:

07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:18.966 +0200 INFO TcpOutputProc [4360 indexerPipe_1] - Waiting for TcpOutputGroups to shutdown
07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:19.968 +0200 INFO TcpOutputProc [4360 indexerPipe_1] - Received shutdown control key.
07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - shutting down: start
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_audit Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_configtracker Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_internal Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_introspection Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_telemetry Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=history Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=main Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=summary Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - request state change from=RUN to=SHUTDOWN_IN_PROGRESS
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_audit Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_configtracker Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_internal Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_introspection Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_telemetry Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=history Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=main Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=summary Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_audit
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_configtracker
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_internal
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_thefishbucket
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_introspection
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics_rollup
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_telemetry
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=history
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=main
07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=summary
07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE
07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - shutting down: end

So it seems like pipeline 1 shut down, and pipeline 0 continues to work (this over and over, with only pset=0):

07-01-2022 12:36:29.527 +0200 INFO AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=0, reuse=0. autoBatch=1

Pipeline 1 seems to still receive data somehow, as I can see the queues for that one pipeline constantly at 100% in metrics.log, and we're also losing logs to our indexers. And then this starts appearing in splunkd.log over and over until I restart Splunk:

07-01-2022 12:36:30.522 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:32.525 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:34.528 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:36.532 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?

I've had this happen at least 3-4 times in the last week. The Heavy Forwarders are running Splunk 9.0.0, but this also happened once a few weeks ago when I ran 8.2.4. I didn't think too much of it at the time, as I was in the process of replacing them with new servers anyway, and with 9.0.0. But the logs and the symptoms are exactly the same on 9.0.0 as they were that one time on 8.2.4.

I tried looking into the "Pipeline data does not have indexKey" part, but as this is coming from UFs with identical config where all input stanzas have an index defined, I couldn't find anything wrong there. That said, even if I were missing an index for some log sources, the Heavy Forwarders should not just randomly shut down a pipeline anyway.

Has anyone seen this happen before, or have any suggestions for what I should try or check? It really seems like a bug to me, but of course I don't know that.

Best Regards
Morten
Hey all, so I have some playbooks which were working fine previously, but I don't know if something has changed on the SOAR side. I'm not sure if this is a new feature or if I've just never noticed it before, but when you add multiple connections to one block, the function name changes to "join_*", where * is the original function name, e.g. join_review_results. My playbooks get to the part where they would call "join_*" but then don't actually run the next function, I guess because they cannot find a function with that name. Not sure why it's not working. Can I prevent SOAR from renaming the functions? Do I have to rebuild the playbook to get it to work again? If I create a test playbook, like the one attached, it seems to work fine. It's only affecting my existing playbooks.
Hi, I am implementing Splunk for the very first time in a new project and need to set it up from scratch:

Multi-site clustered environment
2 TB/day license

How do I calculate the number of indexers and search heads? Please let me know the end-to-end steps to take care of.
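As a rough, hedged starting point only (real sizing depends heavily on search concurrency, premium apps such as ES or ITSI, data mix, and hardware): Splunk's reference-hardware guidance is often summarised as roughly 300 GB/day of ingest per indexer for general workloads, and closer to 100 GB/day per indexer with Enterprise Security. At 2 TB/day that suggests on the order of 2000 / 300 ≈ 7 indexers for a general workload (spread across sites, with extra headroom for replication and searches), or around 20 for an ES-class workload, plus a search head cluster (3 members is the usual minimum for a cluster). Treat these numbers as assumptions to validate with a proper sizing exercise.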
Hi, I need to validate the total number of events received each day from my sources to find gaps during the last 60 days, but this query is too heavy, as we are talking about thousands of millions of logs. Is there any trick to speed up this kind of query? My queries are being stopped by the system as they are too heavy. I just need to count events and print them in a chart.

index=main_index AND source=main_source_*
| timechart span=1d count by source
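For reference, a sketch of the same daily count using tstats, which works against index-time metadata and avoids reading raw events:

| tstats count where index=main_index source=main_source_* earliest=-60d@d by source, _time span=1d
| xyseries _time source count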
Hi, Splunk On Splunk (SOS) app - Is this an old app? May I know the name of the new app?
It should assign a number to each value in the specific field, and if the same query executes a second time, it should start from where the previous run ended (i.e., from 8). This should continue every time the query executes. Consider this search:

index=linux host="*" memUsedPct="*" sourcetype="vmstat" earliest=-60m latest=-1m
| eval host=mvindex(split(host,"."),0)
| stats avg(memUsedPct) AS memUsedPct by host
| eval memUsedPct=round(memUsedPct,1)
| where (memUsedPct>80 AND memUsedPct<90)

This returns a list of hosts, which should be numbered from 1; the next time the query runs, the numbering should continue from the previous value.
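For reference, one common pattern persists the last assigned number in a lookup and continues from it on the next run. This is a sketch: host_counter.csv is a hypothetical lookup that must exist before the first run (seed it once with an empty id column).

index=linux host="*" memUsedPct="*" sourcetype="vmstat" earliest=-60m latest=-1m
| eval host=mvindex(split(host,"."),0)
| stats avg(memUsedPct) AS memUsedPct by host
| eval memUsedPct=round(memUsedPct,1)
| where memUsedPct>80 AND memUsedPct<90
| append [| inputlookup host_counter.csv | stats max(id) as last_id]
| eventstats max(last_id) as last_id
| where isnotnull(host)
| eval last_id=coalesce(last_id, 0)
| streamstats count as offset
| eval id=last_id + offset
| table id, host, memUsedPct
| outputlookup append=t host_counter.csv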