All Topics



Hi Team, we are using the Splunk_TA_nix add-on, version 8.4. The customer is on Solaris 10. While running the three scripts below, we get these errors:
Splunk_TA_nix/bin/vmstat_metric.sh" awk: record `HARD_DRIVES ssd2262 ...' too long
Splunk_TA_nix/bin/vmstat.sh" awk: record `HARD_DRIVES ssd1038 ...' too long
Splunk_TA_nix/bin/hardware.sh" awk: record `HARD_DRIVES ssd1038 ...' too long
We upgraded the add-on to the latest version, 8.5, but after that the errors below started and no data was ingested:
Splunk_TA_nix/bin/uptime.sh: $(dirname /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/uptime.sh)/common.sh: not found
Splunk_TA_nix/bin/cpu_metric.sh: $(dirname /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh)/common.sh: not found
To work around this we rolled the add-on back to version 8.4, but then the awk: record `HARD_DRIVES ssd1038 error started again. Can anyone help solve this issue?
Hello all, I have an event that looks similar to the following:
field_A="US", field_B="true", field_C="AB/CD,XYZ, <>DD,CT", field_D= "60"
I am trying to count the occurrences of field_C during the past 3 months using the query below:
field_A="US", field_B="true" | stats count as ruleFired by field_C
It works fine for all the other values that don't have a comma "," in field_C, but if there is any comma in field_C the count is not calculated correctly. For example, all of the below are counted as the same group:
A --> field_C="AB/CD,XYZ, <>DD,CT"
A --> field_C="AB/CD,XYZ, DD,CT"
A --> field_C="AB/CD,ABC, <>DD,CT"
A --> field_C="AB/CD,ABC, DD,CT"
The result is
AB/CD 4
versus the expected
AB/CD,XYZ, <>DD,CT 1
AB/CD,XYZ, DD,CT 1
AB/CD,ABC, <>DD,CT 1
AB/CD,ABC, DD,CT 1
Any help would be much appreciated.
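One common workaround is to re-extract field_C from the raw event so the full quoted value, commas included, stays in one field, and group on that instead. This is only a sketch, assuming field_C always appears quoted in the raw event as in the sample above:

```
field_A="US" field_B="true"
| rex field=_raw "field_C=\"(?<field_C_full>[^\"]*)\""
| stats count AS ruleFired BY field_C_full
```

If the comma-splitting comes from automatic field extraction, re-extracting from _raw with rex sidesteps it entirely.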
I want to measure how long it takes to match a request event to its response event using a correlation ID. The search should also calculate the average duration and the median duration.
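A hedged sketch of one way to do this, assuming both events carry a shared field called correlationId (the index and field names here are placeholders, not from the original post):

```
index=your_index correlationId=*
| stats earliest(_time) AS request_time latest(_time) AS response_time BY correlationId
| eval duration=response_time-request_time
| stats avg(duration) AS avg_duration median(duration) AS median_duration
```

This assumes exactly one request and one response per correlation ID; if there can be more, you would first need to isolate the matching pair.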
Dear team, I'm unable to access the email settings feature even though my 60-day trial has not expired. In the licensing settings I tried to change the license from a free license to an enterprise license, but it prompted me to install a license. Can you please check and advise?
I want to create a bar chart which displays the total number of events on the 1st of every month for the last 12 months. I can't query data for the last 12 months directly because the search times out after 5 minutes, as we have billions of events. Is there a way to do this using timechart or another mechanism? Thanks
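One possible approach, sketched here with a placeholder index name: use tstats, which counts from index-time metadata rather than reading raw events and is usually far faster, then keep only the buckets that fall on the 1st of each month:

```
| tstats count WHERE index=your_index earliest=-12mon@mon BY _time span=1d
| eval day=strftime(_time, "%d")
| where day="01"
| timechart span=1mon sum(count) AS events_on_first
```

If even this is too slow, a scheduled search writing daily counts to a summary index is the usual fallback.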
Hi, We've recently tested out a new path for data to flow into our Splunk environment from Universal Forwarders. We have rolled this out to a small portion of our UFs for now, and we've already run into an issue a few times that I can't figure out. Data flow is as follows: UFs sending logs with [httpout] -> Load Balancer -> A set of Heavy Forwarders receiving data via [http://..], and forwarding data with [tcpout] -> Index cluster receiving data with [tcp://..]. The heavy forwarders are there to do some filtering, and also routing of specific data to other Splunk environments. The Heavy Forwarders are also configured with parallelIngestionPipelines=2. I can also mention that all parts of this environment are running on Windows. The issue: A few times I've suddenly had 100% fill on all four data queues (parsing/aggregation/typing/indexing) on one of the Heavy Forwarders, and only on one of its pipelines, not the other. After looking in splunkd.log, it turns out that one pipeline seems to have simply shut down (I didn't know this was even possible). At first, splunkd.log is filled with the usual over and over:         07-01-2022 12:35:55.724 +0200 INFO TailReader [2092 tailreader1] - Batch input finished reading file='D:\Splunk\var\spool\splunk\tracker.log' 07-01-2022 12:35:59.129 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=1, reuse=0. autoBatch=1 07-01-2022 12:35:59.602 +0200 INFO AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=Y.Y.Y.Y:9997:0, pset=0, reuse=0. autoBatch=1         Until it suddenly logs this one or a few times:         07-01-2022 12:36:15.628 +0200 WARN TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. 
[_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = FSDNhUnXitGgjKJ.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n 07-01-2022 12:36:18.966 +0200 WARN TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = WpC6rhVY9yDblPB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-2\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-2|WinEventLog|\n         Then in the same millisecond this happens:         07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy 07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished 07-01-2022 12:36:18.966 +0200 INFO TcpOutputProc [4360 indexerPipe_1] - Waiting for TcpOutputGroups to shutdown 07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy 07-01-2022 12:36:18.966 +0200 INFO AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished 07-01-2022 12:36:19.968 +0200 INFO TcpOutputProc [4360 indexerPipe_1] - Received shutdown control key. 
07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - shutting down: start 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_audit Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_configtracker Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_internal Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_introspection Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_telemetry Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=history Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=main Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=summary Sync before shutdown 07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - request state change from=RUN to=SHUTDOWN_IN_PROGRESS 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_audit Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_configtracker Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_internal Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_introspection Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics Handling shutdown or signal, 
reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_telemetry Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=history Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=main Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO IndexWriter [4360 indexerPipe_1] - idx=summary Handling shutdown or signal, reason=1 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_audit 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_configtracker 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_internal 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_thefishbucket 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_introspection 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics_rollup 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_telemetry 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=history 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=main 07-01-2022 12:36:19.968 +0200 INFO HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=summary 07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - request state 
change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE 07-01-2022 12:36:19.968 +0200 INFO IndexProcessor [4360 indexerPipe_1] - shutting down: end         So it seems like pipeline-1 shut down, while pipeline-0 continues to work (this over and over, with only pset=0):         07-01-2022 12:36:29.527 +0200 INFO AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=0, reuse=0. autoBatch=1         Pipeline-1 seems to still receive data somehow, as I can see the queues for that one pipeline are constantly at 100% in metrics.log, and we're also losing logs to our indexers. And then this starts appearing in splunkd.log over and over until I restart Splunk:         07-01-2022 12:36:30.522 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck? 07-01-2022 12:36:32.525 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck? 07-01-2022 12:36:34.528 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck? 07-01-2022 12:36:36.532 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?           I've had this happen at least 3-4 times in the last week. The heavy forwarders are running Splunk 9.0.0, but I also had this happen once a few weeks ago when I ran 8.2.4. I didn't think too much of it at the time, as I was in the process of replacing them with new servers anyway, and with 9.0.0. But the logs and the symptoms are exactly the same on 9.0.0 as they were that one time on 8.2.4. I tried looking into the "Pipeline data does not have indexKey" part, but as this is coming from UFs with identical config where all input stanzas have an index defined, I couldn't find anything wrong there. That said, even if an index were missing for some log sources, the Heavy Forwarders should not just randomly shut down a pipeline anyway. 
Has anyone seen this happen before, or have any suggestions for what I should try or check? It really seems like a bug to me, but of course I don't know that.  Best Regards Morten
Hey all, I have some playbooks which were working fine previously, but I don't know if something has changed on the SOAR side. I'm not sure if this is a new feature, or if I just have never noticed it before, but when you add multiple connections to one block, the function name changes to "join_*" where * is the original function name, e.g. join_review_results. My playbooks get to the part where they would call "join_*", but then the next function doesn't actually run, I guess because it cannot find a function with that name. Not sure why it's not working. Can I prevent SOAR from renaming the functions? Do I have to rebuild the playbook to get it to work again? If I create a test playbook, like the one attached, it seems to work fine. It's only affecting my existing playbooks.
Hi, I'm implementing Splunk for the very first time in a new project and need to set it up from scratch: a multi-site clustered environment with a 2 TB license. How do I calculate the number of indexers and search heads? Please let me know the end-to-end steps to take care of.
Hi, I need to validate the total number of events received each day from my sources to find gaps during the last 60 days, but this query is too heavy as we are talking about thousands of millions of logs. Is there any trick to speed up this kind of query? My queries are being stopped by the system because they are too heavy. I just need to count events and plot them in a chart. index=main_index AND source=main_source_* | timechart span=1d count by source
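One common trick, keeping the index and source names from the query above, is to replace the raw-event timechart with tstats, which counts from index-time metadata and avoids reading the raw events entirely. A sketch:

```
| tstats count WHERE index=main_index source=main_source_* earliest=-60d@d BY source _time span=1d
| timechart span=1d sum(count) BY source
```

tstats can only group by index-time fields (index, source, sourcetype, host), which is exactly what a per-source daily count needs.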
Hi, Splunk On Splunk (SOS) app - Is this an old app? May I know the name of the new app?
I want to assign a sequential number to each value in a specific field. If the same query executes a second time, the numbering should continue from the value where it previously ended (i.e., from 8), and so on every time the query executes. Consider this search:
index=linux host="*" memUsedPct="*" sourcetype="vmstat" earliest=-60m latest=-1m | eval host=mvindex(split(host,"."),0) | stats avg(memUsedPct) AS memUsedPct by host | eval memUsedPct=round(memUsedPct,1) | where (memUsedPct>80 AND memUsedPct<90)
This returns a list of hosts. They should be numbered starting from 1, and the next time the query runs, the numbering should start from the previous value.
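One way to persist the counter between runs is a small lookup holding the last number used. This is only a sketch; the lookup name seq_tracker.csv and its field last_seq are placeholders you would have to create first:

```
index=linux host="*" memUsedPct="*" sourcetype="vmstat" earliest=-60m latest=-1m
| eval host=mvindex(split(host,"."),0)
| stats avg(memUsedPct) AS memUsedPct BY host
| eval memUsedPct=round(memUsedPct,1)
| where memUsedPct>80 AND memUsedPct<90
| streamstats count AS row
| appendcols [| inputlookup seq_tracker.csv | fields last_seq]
| filldown last_seq
| eval seq=row+coalesce(last_seq,0)
| fields - row last_seq
```

A second step (or an appended pipeline) would then save the new maximum back for the next run, e.g. | stats max(seq) AS last_seq | outputlookup seq_tracker.csv.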
Has anyone tried to integrate OTel with SQL Server 2008? I get an error and can't run otel: tls: server selected unsupported protocol version 301. The error says the TLS version is not supported; is this why the integration can't load in the dashboard?
Hi What is the difference between "bin span=5m" and "timechart span=5m"? I mean, is it better to use bin span with stats instead of timechart? Which one is more efficient? What is the difference at all? Thanks,
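For a simple count over time the two approaches bucket _time the same way, so they are essentially equivalent in cost; a sketch of the comparison:

```
... | bin _time span=5m | stats count BY _time

... | timechart span=5m count
```

timechart is roughly bin plus stats plus chart formatting: it also fills in empty time buckets and shapes the output for charting, while bin + stats gives more control, e.g. grouping by additional fields beyond the one split that timechart allows.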
Hi All, I need to know about EUM monitoring. Can we set up End User Monitoring without a reverse proxy server in an on-premises setup or not? If it's not possible, is it necessary that the reverse proxy server be in the DMZ zone?
Hello, I am setting up NetFlow collection in Splunk. I am a very occasional user, and I come to you to ask for help. What interests me are specific dialogs of my network infrastructure: src=net_A to dest=net_B, or src=net_B to dest=net_A. All the rest I don't want Splunk to keep and store, for example net_B to net_B, net_B to net_C, ..... I think I must use CIDRMATCH for my need, to do the filtering, and I think it must be done on the forwarder, but I'm not sure. Is there any possibility of doing this? My Splunk infrastructure: Splunk 8.1.1, 2 Forwarders, 2 Indexers, 2 Search Heads, 1 deployment/license server. Thank you
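One common pattern for this kind of filtering is index-time routing with props.conf and transforms.conf on a heavy forwarder or indexer (it does not work on a universal forwarder, and cidrmatch is a search-time eval function, so the matching here has to be a regex). This is only a sketch; the sourcetype, stanza names, and the 10.1./10.2. prefixes standing in for net_A/net_B are placeholders:

```
# props.conf
[your_netflow_sourcetype]
TRANSFORMS-netfilter = drop_all, keep_a_b_dialogs

# transforms.conf
# First send everything to nullQueue (discard) ...
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ... then route the wanted src/dest pairs back to the index queue
[keep_a_b_dialogs]
REGEX = src=10\.1\..*dest=10\.2\.|src=10\.2\..*dest=10\.1\.
DEST_KEY = queue
FORMAT = indexQueue
```

The transforms run in order, so events matching the second stanza survive while everything else is discarded before indexing.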
I made a column chart and overlaid 2 fields as line charts. Can I display these 2 overlay fields on separate Y-axes? (one field on the Y1 axis and the other field on the Y2 axis)
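In Simple XML, chart overlays and a second Y-axis are controlled with charting options; a minimal sketch, where field_on_y2 is a placeholder for the overlay field name:

```
<option name="charting.chart">column</option>
<option name="charting.chart.overlayFields">field_on_y2</option>
<option name="charting.axisY2.enabled">1</option>
<option name="charting.axisTitleY2.text">Second axis</option>
```

One caveat: Simple XML only offers two Y-axes, and when axisY2 is enabled the overlay fields are placed on it together, so to split two series across Y1 and Y2 you would typically keep one of them as a regular series and overlay only the other.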
Hello Folks, We are using NAS storage to store our Splunk frozen data. I'd like to know: if the NAS storage is not available (say the NAS server is down for a reboot, or there is a network issue between Splunk and the NAS), what will happen to the data rolling to frozen? Will Splunk hold it until the frozen path comes back online, or will Splunk just ignore and drop the data? Your response is much appreciated.
Hi, I need to add a condition to a query where I am checking whether a field equals a specific URL: URL_WITH_GET_PARAM="/support/order-status/en-ca/order-detail?v=i14miCmn%2fGNPxkcL6EcMTz2YFrQ8V6c7cYOovCLWD4jQIQ5u1Trd%2bqtiMhUYGmuT&src=oc&amp;ref=secUrlPurchase&amp;lwp=rt&amp;t=OS_E" However, when I add this to my form, it gives me an invalid character error. I know I need to escape certain characters in XML, but I am unsure which ones. Can you please help? Many thanks!
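In XML the ampersand is a reserved character, so every literal & in a value must be written as &amp;amp; (likewise &lt; for <). A minimal sketch using a shortened placeholder URL rather than the real one:

```
<!-- invalid: raw & in the attribute value -->
<query>URL_WITH_GET_PARAM="/order-detail?v=...&src=oc&ref=..."</query>

<!-- valid: every & escaped as &amp; -->
<query>URL_WITH_GET_PARAM="/order-detail?v=...&amp;src=oc&amp;ref=..."</query>
```

In the URL quoted above, some ampersands are already escaped (&amp;amp;ref=) while at least one is not (&amp;src=oc), which is the likely cause of the invalid character error.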
Hello guys, I am very new to Splunk Enterprise, so please bear with me... I just want some advice or getting-started tips on how I can use Splunk with the company router for event analysis. Is there any specific configuration I should add to my router?
So I have a data set:
User      Vehicle
User_a    Car
User_b    Car
User_a    MotorBike
User_c    MotorBike
User_d    Car
User_c    Bicycle
User_a    Bicycle
User_c    Scooter
User_e    Car
What I need is to be able to run a search against this type of dataset and pull out only one result per username, prioritised as Car, then MotorBike, then Bicycle, then Scooter. I only need ONE result for any given user: if they have all four, they are reported as a car owner based on priority; if they only have two or three of the four, they are reported as the owner of their highest-priority vehicle only. I'm currently doing a search for cars (score 1pt), appending motorbikes (score 2pt), and so on, but that is slow on a big dataset.
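A hedged sketch of one way to do this in a single pass, assuming the field names User and Vehicle from the sample above (the index name is a placeholder): map each vehicle to a numeric priority, keep the minimum per user, then map back:

```
index=your_index
| eval priority=case(Vehicle=="Car",1, Vehicle=="MotorBike",2, Vehicle=="Bicycle",3, Vehicle=="Scooter",4)
| stats min(priority) AS priority BY User
| eval Vehicle=case(priority==1,"Car", priority==2,"MotorBike", priority==3,"Bicycle", priority==4,"Scooter")
| fields User Vehicle
```

This avoids the search/append/append pattern entirely, so the data is only scanned once.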