All Posts


I have seen this error: "The lookup table 'debit.csv' requires a .csv or KV store lookup definition." The issue I encountered was due to the context I was running the query from. Check the lookup table's permissions: edit them and make sure you are granting access to the specific app, to all apps, or keeping it private, as appropriate. When I hit this error it was because the permission was set to private and I was searching from a different app. I changed the permission from private to the app, and now I can view the lookup table and no longer get this error message. Hope this helps.
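If fixing the permissions alone doesn't clear the error, the message can also mean the file is being referenced without a lookup definition. A minimal sketch of such a definition in transforms.conf, assuming a hypothetical stanza name debit_lookup and that debit.csv has already been uploaded as a lookup table file:

    # transforms.conf in the app that owns debit.csv
    # "debit_lookup" is a made-up name; reference it as "| lookup debit_lookup <field>"
    [debit_lookup]
    filename = debit.csv

The same definition can be created in Splunk Web under Settings > Lookups > Lookup definitions.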
Not sure if you've resolved this issue yet, but you could try making sure your CSV file is in this format. The format you have is comma-delimited, but the Splunk lookup table may not like that particular layout, so you may want to fix the lookup file to look like this. Just sharing my suggestion; I encountered something that looks similar to this issue. Hope this helps.
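For reference, a plain Splunk lookup CSV is just a header row of field names followed by one comma-separated row per entry, with no blank lines or stray quoting. The field names below are made up purely for illustration:

    account_id,account_type,amount
    1001,debit,250.00
    1002,credit,1200.50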
I have a timechart that shows the last 30 days, and with the timechart I also have a trendline showing the sma7. The problem is that on the timechart the trendline doesn't show anything for days 1-6, which I understand is because there is no data from the previous days for the sma7 to calculate. I thought the solution could be to change my search to the last 37 days and then only timechart days 7-37 (if that makes sense), but I can't seem to figure out how to implement that, or whether it is even a possible solution. Existing search:

    index=palo eventtype=user_logon earliest=-37d@d
    | bin span=1d _time
    | timechart count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
    | eval compliant=round(((compliant/total)*100),2)
    | trendline sma7(compliant) as compliant7sma
    | eval compliant7sma=round(compliant7sma,2)
    | table _time, compliant, compliant7sma
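One possible approach (a sketch, untested): keep searching back 37 days so the sma7 has earlier data to warm up on, then drop the first seven days from the output with a where clause on _time before the final table:

    index=palo eventtype=user_logon earliest=-37d@d
    | bin span=1d _time
    | timechart count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
    | eval compliant=round(((compliant/total)*100),2)
    | trendline sma7(compliant) as compliant7sma
    | eval compliant7sma=round(compliant7sma,2)
    | where _time >= relative_time(now(), "-30d@d")
    | table _time, compliant, compliant7sma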
This is what I have so far, but it seems way too complex! It does a baseline inner search to work out the average rate over -48h to -24h, and then joins that to the same search run over -24h to now.

    | tstats count WHERE earliest=-24h latest=now() index=* BY index sourcetype _indextime
    | top limit=5 _indextime by index sourcetype
    | streamstats range(_indextime) as range_indextime by sourcetype index
    | stats avg(range_indextime) as observed_avg_range_indextime by index sourcetype
    | join type=inner index sourcetype
        [| tstats count WHERE earliest=-48h latest=-24h index=* BY index sourcetype _indextime
         | top limit=5 _indextime by index sourcetype
         | streamstats range(_indextime) as range_indextime by sourcetype index
         | stats avg(range_indextime) as avg_range_indextime by index sourcetype]
Update: Had a session today with O365 support about this message in python.log (posted before): SendAsDenied; ticket@eremote.nl not allowed to send as Splunk_eRemote@uBDC01;"
Answer: As discussed on the call, the bounceback (NDR report) you received shows that your Office 365 account (ticket@eremote.nl) is not allowed to send email as Splunk_eRemote@uBDC01. This shows that there is a setting in Splunk that is preventing this action. And as mentioned on the call, from the Microsoft 365 perspective email relay is allowed, which is why you can send normal email from the application and it delivers.
The 'Splunk_eRemote@uBDC01' comes from the SMTP server settings in Splunk.
Note:
1) We use O365, and some 4-5 months ago we switched from a personal account to a shared account (but only noticed the defect in the email alerting function last week :-(, as sendemail was working in dashboards and in SPL code as expected).
2) We now use a shared mailbox under O365, which means you cannot use an alias. Before, we could.
3) Using my personal account (with aliases) requires 2FA and is not possible according to O365 support.
4) Last week I tried many things, including different fields A and B, without success. Also tested with a Gmail account, no success either.
5) Yesterday (Sunday) we rebooted our W2029 server as part of our weekly maintenance schedule. Today I found out:
Note: field A (username) must be the same as the SendAs address (in the past we used only the word "Splunk" in field B). Also note that leaving field B empty triggers a discovery and it comes up with "Splunk@uBDC01", so it appears.
Now it is working again (in test) with field A = field B! (What is the use? Apparently it only works in combination with a personal account and a proper alias, I conclude.)
I will close this post now. Thank you for the response, PickeRick! :-)
AshleyP
You can do your calculations based on _indextime, but you still have to select your data with _time. There is no other way with Splunk, since _time is the primary "ordering field". So you can do something like

    index=whatever earliest=-8h | stats max(_indextime)

to find out when the latest indexed event was indexed. You just need to give the initial time range a sufficient margin. If I remember correctly, _indextime can be used with tstats as well (just not as a field you can bin with a given span).
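For example, a rough tstats sketch along those lines (index and field names are placeholders), reporting how long ago each index last indexed anything within the search window:

    | tstats max(_indextime) as last_indexed where index=* earliest=-8h by index
    | eval minutes_since_last_index = round((now() - last_indexed) / 60, 1)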
Depending on how those events should be ingested, I'd investigate whether they are being properly sent to Splunk. As there are many ways of getting data into Splunk, you need to verify the particular way used in your case: verifying UF connectivity, checking syslog traffic, or whatever else applies. There are no miracles. If your config didn't change and there are no events, they must have stopped "flowing".
I want to identify where the rate at which an index's _indextime changes deviates by a specific amount, with a tolerance that increases the faster the rate. For example:
1. Index A - It indexes once every 6 hours and populates the past 6 hours of events. In this case I would want to know if it hasn't indexed for 8 hours or more. The tolerance is therefore relatively small (around 30% extra).
2. Index B - It indexes every second. In this case I may forgive it not indexing for a few seconds, but I'd definitely want to know if it hasn't indexed in 10 minutes. The tolerance is therefore relatively large.
I don't think _time is the right field to use, as indexes that backfill retrospectively would give false results. I feel that either the _internal index or tstats has the answer, but I've not yet come close.
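As a very rough sketch of one way to start (untested; the thresholds are arbitrary): derive each index's typical inter-event gap from its event count over the last day, then flag indexes whose time since the last _indextime exceeds that gap plus a rate-dependent margin:

    | tstats max(_indextime) as last_indexed, count as events_24h where index=* earliest=-24h by index
    | eval avg_gap_sec = 86400 / events_24h
    | eval silence_sec = now() - last_indexed
    | eval tolerance_sec = if(avg_gap_sec <= 60, 600, avg_gap_sec * 1.3)
    | where silence_sec > tolerance_sec

Note that an index with no events at all in the window won't appear in the tstats output, so a lookup of expected indexes may still be needed to catch completely silent ones.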
Hi everyone! I need to capture an endpoint that is requested by the method PATCH. Has anyone found a way to do this? In the detection rules I could only find GET, POST, DELETE, PUT.
I am working on building an SRE dashboard, similar to https://www.appdynamics.com/blog/product/software-reliability-metrics/. Can you help me build a monthly error budget burn chart? Thank you.
It is not clear whether you are matching hostname and vulnerability or dev and vulnerability. In either case, your table doesn't appear to have any rows where patch should be NO (according to your logic). Please can you clarify your requirement? If the table was supposed to be the result, rather than the events, please can you share some sample events?
"I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following information: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need... See more...
"I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following information: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need to take the values in 'dev' and 'vulnerability' and check if there are other rows with the same 'hostname' and 'vulnerability'. If there is a match, I write 'NO' in the 'Path' field; otherwise, I write 'YES'." Hostname  dev vulnerabilita patch A B apache SI A B sql NO B 0 apache NO B 0 python NO C A apache SI
Thanks Yuanliu. This is working, but not completely. There are 75 records that I should get in the result, as I get 75 rows if I just search for

    index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*"

But when I update the search to the one provided above, I get only 23 rows. Going back to the original requirement: first the search needs to find all the records it can with

    index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*"

and fetch _time, clmNumber, confirmationNumber, and name from those events into the table (4 columns). Then check the second line (for the same sessionid) for an exception (Exception from executeScript) and provide whatever comes after it as a fifth column in the table. Maybe I was not clear on the requirements earlier.
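In case it helps, here is a hedged restatement of that requirement as SPL. It is a sketch only, assuming sessionid, clmNumber, confirmationNumber, and name are already extracted fields, and guessing at the rex pattern for the exception text. It pulls both the upload-failure events and the exception events, then rolls them up per sessionid so that sessions without an exception still keep their row:

    index="myindex" "source=Web" ("/app1/service/site/upload failed" OR "Exception from executeScript")
    | rex "Exception from executeScript\s*(?<exception_detail>.+)"
    | stats min(_time) as _time, values(clmNumber) as clmNumber, values(confirmationNumber) as confirmationNumber, values(name) as name, values(exception_detail) as exception_detail by sessionid
    | search confirmationNumber="ND_*"
    | table _time, clmNumber, confirmationNumber, name, exception_detail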
Hello, I'm collecting CloudTrail logs by installing the Splunk Add-on for AWS on a Splunk heavy forwarder. The following logs are occurring for the aws:cloudtrail:log source type in the _internal index:
" ~ level=WARNING pid=3386853 tid=Thread-7090 logger=urllib3.connectionpool pos=connectionpool.py:_put_conn:308 | Connection pool is full, discarding connection: bucket.vpce-abc1234.s3.ap-northeast-2.vpce.amazonaws.com. Connection pool size: 10"
Should the Splunk Add-on for AWS connection pool size be increased? If so, how can I increase it? In any case, I would like to know how to resolve this log message. Thank you.
I misunderstood your initial question. Fieldformat can be used, I think, to handle X-series values; Y-series values must be numeric. (You could probably try adding your own JS to a dashboard (not a report) to dynamically convert the data, or try to write your own visualization, but that's a completely different story and - frankly - quite overkill.)
Splunk is not good at finding something that isn't there - you need to help it!

    | append
        [| makeresults
         | fields - _time
         | eval message.content.country=split("CANADA,USA,UK,FRANCE,SPAIN,IRELAND",",")
         | mvexpand message.content.country
         | eval maxtime=now()]
    | stats min(maxtime) as maxtime by message.content.country
Not getting data from a universal forwarder (Ubuntu).
1) Installed Splunk UF version 9.2.0 and the credentials package from Splunk Cloud, as it should be reporting to Splunk Cloud.
2) There are no error logs in splunkd.log and no metrics logs in the internal Splunk index in Splunk Cloud.
3) Port connectivity on 9997 is working fine.
The only logs received in splunkd.log are:
02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.4:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21
02-16-2024 15:54:03.379 +0000 INFO TailReader [156345 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
02-16-2024 15:54:05.447 +0000 INFO BackgroundJobRestarter [156309 DispatchReaper] - inspect_count=0, restart_count=0
So we have a query:

    (index="it_ops") source="bank_sys" message.content.country IN ("CANADA","USA","UK","FRANCE","SPAIN","IRELAND") message.content.code <= 399
    | stats max(message.timestamp) as maxtime by message.content.country

This returns a two-column result with country and maxtime. However, when there is no hit for a country, that country is omitted. I tried fillnull, but it only adds columns, not rows. How do we set a default maxtime for countries that are not found?
This issue was resolved in version 9.1.2: https://docs.splunk.com/Documentation/Splunk/9.1.2/ReleaseNotes/Fixedissues#Monitoring_Console_issues