All Posts


Depending on how those events should be ingested, I'd investigate whether they are being properly sent to Splunk. As there are many ways of getting data into Splunk, you need to verify the particular way used in your case, be it checking UF connectivity, checking syslog traffic, or whatever else. There are no miracles: if your config didn't change and there are no events, they must have stopped "flowing".
I want to identify when the rate at which an index's _indextime changes deviates by a specific amount, with a tolerance that scales with the rate. For example:
1. Index A indexes once every 6 hours and populates the past 6 hours of events. In this case I would want to know if it hasn't indexed for 8 hours or more; the tolerance is therefore relatively small (around 30% extra).
2. Index B indexes every second. Here I may forgive it not indexing for a few seconds, but I'd definitely want to know if it hasn't indexed in 10 minutes; the tolerance is therefore relatively large.
I don't think _time is right to use, as retrospectively backfilled indexes would give false results. I feel that either the _internal index or tstats has the answer, but I've not yet come close.
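A possible starting point is to derive each index's recent indexing cadence from _indextime and flag indexes whose current silence exceeds a multiple of that cadence (the 7-day baseline, the index filter, and the 1.5x tolerance multiplier below are illustrative assumptions, not a tested solution; the multiplier could be made rate-dependent to match the tolerances described above):

```spl
| tstats max(_indextime) as last_indexed count where index=* _index_earliest=-7d by index
| eval avg_interval = (7*24*3600) / count
| eval silence = now() - last_indexed
| where silence > avg_interval * 1.5
| eval silence_hours = round(silence/3600, 2)
| table index last_indexed count avg_interval silence_hours
```

The idea is that a busy index (many events in the window) gets a small avg_interval and therefore a tight threshold, while a slow 6-hourly index gets a correspondingly looser one.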
Hi everyone! I need to capture an endpoint that is requested with the PATCH method. Has anyone found a way to do this? In the detection rules I could only find GET, POST, DELETE, and PUT.
I am working on building an SRE dashboard, similar to https://www.appdynamics.com/blog/product/software-reliability-metrics/. Can anyone help me build a monthly error budget burn chart? Thank you.
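As a rough sketch of the burn chart itself (the index, sourcetype, status field, and the 99.9% SLO below are all hypothetical placeholders for your own data):

```spl
index=web_logs sourcetype=access_combined earliest=-30d@d
| timechart span=1d count as total count(eval(status>=500)) as errors
| eval daily_budget = total * 0.001
| streamstats sum(errors) as errors_cum sum(daily_budget) as budget_cum
| eval budget_burned_pct = round(100 * errors_cum / budget_cum, 1)
| table _time budget_burned_pct
```

Charting budget_burned_pct over the month shows how fast the error budget is being consumed; crossing 100% means the monthly budget is exhausted.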
It is not clear whether you are matching hostname and vulnerability or dev and vulnerability. In either case, your table doesn't appear to have any rows where patch should be NO (according to your logic). Please can you clarify your requirement. If the table was supposed to be the result, rather than the events, please can you share some sample events.
I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need to take the values in 'dev' and 'vulnerability' and check if there are other rows with the same 'hostname' and 'vulnerability'. If there is a match, I write 'NO' in the 'Path' field; otherwise, I write 'YES'.

Hostname  dev  vulnerabilita  patch
A         B    apache         SI
A         B    sql            NO
B         0    apache         NO
B         0    python         NO
C         A    apache         SI
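Taking the stated logic literally (flag NO when another row shares the same hostname and vulnerability), a hedged eventstats sketch, assuming the field names exactly as shown in the table:

```spl
| eventstats count as pair_count by Hostname, vulnerabilita
| eval patch = if(pair_count > 1, "NO", "SI")
| table Hostname dev vulnerabilita patch
```

Note that in the sample rows above no (Hostname, vulnerabilita) pair repeats, so this literal reading would mark every row SI; the intended matching rule may need clarifying.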
Thanks Yuanliu, this is working but not completely. There are 75 records that I should get in the result, as I get 75 rows if I just search for index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*" But when I update the search to the one provided above, I get only 23 rows. Going back to the original requirement: first the search needs to find all the records matching index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*" and fetch _time, clmNumber, confirmationNumber, and name from those events into the table (4 columns). Then check the second line [for the same sessionid] for an exception (Exception from executeScript) and provide whatever comes after it as a fifth column in the table. Maybe I was not clear on the requirements earlier.
Hello, I'm collecting CloudTrail logs by installing the Splunk Add-on for AWS on a Splunk heavy forwarder. The following warnings are appearing for the aws:cloudtrail:log sourcetype in the _internal index:
" ~ level=WARNING pid=3386853 tid=Thread-7090 logger=urllib3.connectionpool pos=connectionpool.py:_put_conn:308 | Connection pool is full, discarding connection: bucket.vpce-abc1234.s3.ap-northeast-2.vpce.amazonaws.com. Connection pool size: 10"
Should the connection pool size be increased in the Splunk Add-on for AWS? How can I increase it? I would like to know how to resolve this warning. Thank you.
I misunderstood your initial question. Fieldformat can, I think, be used to handle X-series values, but Y-series values must be numeric. (You could probably add your own JS to a dashboard (not a report) to dynamically convert the data, or write your own visualization, but that's a completely different story and, frankly, quite an overkill.)
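Since the Y-series must stay numeric, one workaround is to chart the duration in a coarser numeric unit rather than reformatting it, e.g. minutes (a sketch assuming the duration_U field from the question, in seconds):

```spl
| eval duration_min = round(duration_U / 60, 1)
| chart latest(duration_min) over system by date
```

The axis then reads in minutes directly, which is usually legible enough without a duration string.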
Splunk is not good at finding something that isn't there - you need to help it!

| append
    [| makeresults
     | fields - _time
     | eval message.content.country=split("CANADA,USA,UK,FRANCE,SPAIN,IRELAND",",")
     | mvexpand message.content.country
     | eval maxtime=now()]
| stats min(maxtime) as maxtime by message.content.country
Not getting data from universal forwarder (Ubuntu).
1) Installed Splunk UF version 9.2.0 and the credential package from Splunk Cloud, as it should be reporting to Splunk Cloud.
2) There are no error logs in splunkd.log and no metrics logs in the _internal index in Splunk Cloud.
3) Port 9997 connectivity is working fine.
The only logs received in splunkd.log are:
02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.4:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21
02-16-2024 15:54:03.379 +0000 INFO TailReader [156345 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
02-16-2024 15:54:05.447 +0000 INFO BackgroundJobRestarter [156309 DispatchReaper] - inspect_count=0, restart_count=0
So we have a query:

(index="it_ops") source="bank_sys" message.content.country IN ("CANADA","USA","UK","FRANCE","SPAIN","IRELAND") message.content.code <= 399
| stats max(message.timestamp) as maxtime by message.content.country

This returns a two-column result with country and maxtime. However, when there is no hit for a country, that country is omitted. I tried fillnull, but it only adds columns, not rows. How do we set a default maxtime for countries that are not found?
This issue was resolved in version 9.1.2: https://docs.splunk.com/Documentation/Splunk/9.1.2/ReleaseNotes/Fixedissues#Monitoring_Console_issues
I've tried the below with the fieldformat before and after the chart command, with the same results: the duration_U field still shows as a unix date, so the chart is technically correct, but the y-axis information is not human readable. It just shows values ranging from 70,000 to 90,000.

index= source=
| strcat date "000000" BDATE
| eval duration_U=strptime(end_time,"%Y-%m-%d %H:%M:%S.%N") - strptime(BDATE,"%Y%m%d%H%M%S")
| fieldformat duration_U=tostring(duration_U,"duration")
| chart latest(duration_U) over system by date
Hello everyone,

Quick question: I need to forward data from a HF to an indexer cluster. Right now I'm using the S2S tcpout function, with useACK, default load balancing, and maxQueueSize. I'm studying the possibility of using httpout instead of tcpout, due to traffic filtering. The documentation seems a bit light on httpout: is it possible to use indexer load balancing, ack, and the maxQueueSize function with it? Thanks for your help!

Jonas
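For what it's worth, httpout is configured in outputs.conf with an HEC token rather than a receiving port; a minimal sketch (the hostnames and token are placeholders, and whether useACK/maxQueueSize semantics carry over should be checked against the outputs.conf spec for your Splunk version):

```
# outputs.conf on the heavy forwarder (illustrative only)
[httpout]
httpEventCollectorToken = <your-HEC-token>
uri = https://idx1.example.com:8088, https://idx2.example.com:8088
```

Listing several URIs is the httpout counterpart of listing several tcpout servers for load balancing.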
Hello Splunk Community, I'm currently facing an issue with integrating Group-IB threat intelligence feeds into my Splunk environment and could really use some assistance. Here's a brief overview of the problem: 1. Inconsistent Sourcetype Ingestion: Upon integrating the Group-IB threat intel feeds and installing the corresponding app on my Search Head, I've noticed inconsistent behavior in terms of sourcetype ingestion. Sometimes only one sourcetype is ingested, while other times it's five or seven. This variability is puzzling, and I'm not sure what's causing it. 2. Ingestion Interruption: Additionally, after a few days of seemingly normal ingestion, I observed that the ingestion process stopped abruptly. Upon investigating further, I found the following message in the logs: *Health Check msg="A script exited abnormally with exit status 1" input="opt/splunk/etc/apps/gib_tia/bin/gib_tia.py" stanza = "xxx"* This message indicates that the intelligence downloads of a specific sourcetype have failed on the host. This issue is critical for our security operations, and I'm struggling to identify and resolve the root cause. If anyone has encountered similar challenges or has insights into troubleshooting such issues with threat intel feed integrations, I would greatly appreciate your assistance. Thanks in advance,
@rzv424
Solution 1: You can create two alerts with the same logic but different crons.
The first alert runs every 30 minutes on every day except Wednesday and Friday. Cron is: */30 * * * 0,1,2,4,6
The second alert runs every 30 minutes on Wednesday and Friday but skips the hours from 5AM to 8AM. Cron is: */30 0-4,8-23 * * 3,5
Solution 2: You can create one alert with a cron that runs every day of the week at a 30-minute interval. Cron is: */30 * * * *
Then add the filtering to the query itself: use an eval command to output the current day and hour after your logic ends, and filter the results per your exception. Note that strftime's %H is zero-padded ("05"), so compare the hour numerically:
......| eval now_day=strftime(now(), "%a"), now_hour=tonumber(strftime(now(), "%H")) | where NOT (now_day IN ("Wed","Fri") AND now_hour>=5 AND now_hour<8)
You have two options:
1. Duplicate the alert and use a different cron expression for the different days/time periods.
2. Use the now() function to determine when the search is running and modify the results so that the alert isn't triggered.
We want an alert to run every day (Monday-Sunday) at a 30-minute interval, with one exception: it should not run on Wednesday and Friday from 5AM to 8AM. It should still run during the other hours on Wednesday and Friday. One cron expression cannot achieve that, hence we want to handle it in the alert logic.