All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Perhaps I just need to check when more than 7 days have passed between one VA and the next.
I will explain my issue from the beginning to make it clearer. I have an index that contains vulnerabilities related to an IP, and on Splunk I receive VA data every week. Based on my IP and vulnerabilities, I would like to check for three different cases: which vulnerabilities are new, i.e. those VAs that appear only in the current week; which vulnerabilities have reappeared after being absent for a week (I think I should check when a VA is missing for a week and then reappears, perhaps by looking at when the time between results is greater than 7 days); and which vulnerabilities have disappeared, i.e. when the last week in which we had that VA is not the current one.
I am a bit confused by the guidance here... Does this re-enable the log(s)? We use the file /opt/splunkforwarder/var/log/splunk/metrics.log on our Linux UF deploys to check that /var/log/messages and auditd appear to be sending, with some basic logic in our deploy scripts. With SPL-263518 this is disabled by default now, and we either need to identify another method for a simple local check, or we need to re-enable group=per_source_thruput so we can keep relying on a check like: [ "$(sudo grep -c -e 'INFO Metrics - group=per_source_thruput, series="/var/log/messages", kbps=' /opt/splunkforwarder/var/log/splunk/metrics.log)" -ne 0 ] Is there a full writeup on SPL-263518 with more info than the short blurb in the known issues starting with 9.3.x? I.e., was this removed for a security reason, or simply to reduce local log writes, etc.?
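A small Python sketch may make the local check concrete: it counts per_source_thruput lines for a given series in metrics.log text, mirroring the grep-based test. The function name and the sample lines are illustrative only; the line format is taken from the metrics.log entry quoted above.

```python
import re

def thruput_seen(metrics_text, series):
    """Count metrics.log lines reporting per_source_thruput for a series.

    A nonzero count suggests the monitored input is actually sending data.
    """
    pattern = re.compile(
        r'INFO Metrics - group=per_source_thruput, series="%s", kbps='
        % re.escape(series)
    )
    return sum(1 for line in metrics_text.splitlines() if pattern.search(line))

# Hypothetical metrics.log excerpt (format copied from the post above)
sample = (
    '01-21-2025 14:00:00.000 +0000 INFO Metrics - group=per_source_thruput, '
    'series="/var/log/messages", kbps=0.5\n'
    '01-21-2025 14:00:31.000 +0000 INFO Metrics - group=pipeline, name=typing\n'
)
print(thruput_seen(sample, "/var/log/messages"))  # -> 1
```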
Hi @omcollia , I suppose that you inserted the weeksum extraction with eventstats before the eval. Ciao. Giuseppe
This is just to add some pieces of information. The Windows Event Log data is written to disk at least before and after a reboot or a restart of the "Windows Event Log" service. These files are saved under C:\Windows\System32\winevt\Logs with names such as Application.evtx or Security.evtx. The files use a binary format, but this format is known, and there are tools to extract their data as text; for example, in Python there is a module named "python-evtx". I have not tried using this module on a Linux-based indexer to read the data directly from the files. Doing that is probably a bad idea for the standard Windows Event Logs, as those are best read using the solution provided above, but for "standalone" event files, which other applications might create, such tools are a way to go.
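To illustrate, here is a minimal Python sketch of handling one such rendered record. The Evtx usage in the comment follows python-evtx's documented pattern but is untested here; the event_id helper and the sample XML are purely illustrative.

```python
import xml.etree.ElementTree as ET

# With python-evtx installed, records come out as XML text, roughly:
#   from Evtx.Evtx import Evtx
#   with Evtx("Security.evtx") as log:
#       for record in log.records():
#           xml_text = record.xml()
# (untested sketch of the documented usage - not run here)

# Namespace used by Windows event-log XML renderings
NS = "{http://schemas.microsoft.com/win/2004/08/events/event}"

def event_id(xml_text):
    """Extract the EventID from a single rendered event-log record."""
    root = ET.fromstring(xml_text)
    node = root.find("%sSystem/%sEventID" % (NS, NS))
    return None if node is None else int(node.text)

# Hand-written sample record, trimmed to the fields used above
sample = (
    '<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">'
    "<System><EventID>4624</EventID></System></Event>"
)
print(event_id(sample))  # -> 4624
```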
If I run this command: | eval year=substr(weeksum,1,4) the field remains empty, maybe because my field weeksum comes from an eventstats command: | eventstats values(week) as weeksum by IP,dest_ip,plugin_id - and maybe the multivalue field is in a format that substr cannot read?
Another system is trying to send logs to Splunk over TLS. I was wondering whether I need to create a certificate in Splunk, and I would like to know how to set up TLS reception in Splunk.
You probably wanted to do something like stats count(eval(isnotnull(attack_type))) I must say, though, that I don't like the stats-eval syntax - it can be confusing. I prefer to do things explicitly. Like this: | eval isattack=if(isnotnull(attack_type),1,0) | stats sum(isattack) PS: Oh, and don't search across all your indexes. While it might work tolerably on some small deployments, or for a user with very limited permissions, it's a very bad habit that doesn't scale well. And don't use wildcards at the beginning of your search term (like *juniper*).
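For readers more comfortable outside SPL, the same "explicit flag" idea can be sketched in Python; the events list and field names here are made up for illustration, not taken from any real data.

```python
def count_attacks(events):
    # Equivalent of: | eval isattack=if(isnotnull(attack_type),1,0)
    #                | stats sum(isattack)
    return sum(1 for e in events if e.get("attack_type") is not None)

# Hypothetical events: two have an attack_type field, one does not
events = [{"attack_type": "sqli"}, {"src": "10.0.0.1"}, {"attack_type": "xss"}]
print(count_attacks(events))  # -> 2
```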
When I launch an application, the name of the application no longer appears at the top right.
What exactly are you trying to do? You should never need another party's private key.
If I understand you correctly you want to remove all-empty columns from your original data, right? <your_search> | transpose 0 include_empty=f  
Yes. crcSalt is rarely the way to go. The solution is usually to raise the initCrcLength value so that a constant "header" in your file gets skipped. As for your original question - there can be several different reasons for it. Try checking the output of splunk list monitor and splunk list inputstatus for those problematic files.
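As a rough model of why a constant header causes trouble: if the checksum used to recognize a file covers only its first N bytes, and those bytes are an identical header in every rotated file, all the files look the same. This Python sketch uses zlib.crc32 as a stand-in; Splunk's actual fishbucket CRC details may differ, so treat it as an illustration of the principle, not Splunk internals.

```python
import zlib

# Two hypothetical rotated log files sharing the same constant 256-byte header
header = b"#Fields: date time level message" + b" " * 224
file_a = header + b"2025-01-20 entries...\n"
file_b = header + b"2025-01-21 entries...\n"

def init_crc(data, length=256):
    """CRC over the first `length` bytes - the part a file is recognized by."""
    return zlib.crc32(data[:length])

print(init_crc(file_a) == init_crc(file_b))              # -> True  (collision)
print(init_crc(file_a, 512) == init_crc(file_b, 512))    # -> False (distinct)
```

Raising the checksummed length past the constant header (the initCrcLength idea) is what makes the two files distinguishable again.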
Border-case question (I like those) - how do you know how many weeks a year has? As silly as it sounds, depending on the particular year and on how you're counting, a year can have between 52 and 54 weeks.
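For ISO week numbering at least, this is easy to check in Python: December 28 always falls in the last ISO week of its year, so its week number is the year's week count (52 or 53). The %U-style numbering used elsewhere in this thread starts at week 00, which is how you can end up with as many as 54 distinct week labels.

```python
from datetime import date

def iso_weeks_in_year(year):
    # Dec 28 is always in the last ISO week of its year
    return date(year, 12, 28).isocalendar()[1]

print(iso_weeks_in_year(2020), iso_weeks_in_year(2021))  # -> 53 52
```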
Hi @omcollia , you could compare each value with the previous one and check whether the difference is 1, something like this (autoregress copies the previous row's value into a new field):
<your_search>
| eval year=substr(weeksum,1,4), week=substr(weeksum,5,2)
| sort year week
| autoregress week AS prevweek
| autoregress year AS prevyear
| eval diff=week-prevweek
| search year=prevyear diff>1
| table weeksum year prevyear week prevweek
In this way, if the search has results, there is a gap somewhere. Ciao. Giuseppe
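The same consecutive-weeks check can be sketched in Python. The helper names are hypothetical, the "YYYY:WW" strings follow the format shown in this thread, and the caller is assumed to know how many weeks each year has (per the border-case note about 52- vs 53-week years).

```python
def find_gaps(weeks, weeks_in_year):
    """Return (prev, cur) pairs of "YYYY:WW" strings that are not consecutive.

    weeks: sorted list of "YYYY:WW" strings.
    weeks_in_year: maps each year (except the last) to its week count.
    """
    min_year = int(weeks[0][:4])

    def to_index(w):
        # Flatten year:week into a single running week index
        year, week = int(w[:4]), int(w[5:7])
        return sum(weeks_in_year[y] for y in range(min_year, year)) + week

    gaps = []
    for prev, cur in zip(weeks, weeks[1:]):
        if to_index(cur) - to_index(prev) != 1:
            gaps.append((prev, cur))
    return gaps

# The sequence from the question: complete, then with 2024:51 removed
seq = ["2024:47", "2024:48", "2024:49", "2024:50", "2024:51", "2024:52",
       "2025:01", "2025:02", "2025:03"]
print(find_gaps(seq, {2024: 52}))                              # -> []
print(find_gaps([w for w in seq if w != "2024:51"], {2024: 52}))
# -> [('2024:50', '2024:52')]
```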
Hi @dj064 , my suggestion would be not to use the crcSalt setting for rotated log files. Can you please disable it and restart Splunk to check the status? Also, if you can, share some log files with the important data masked. crcSalt = <SOURCE>
I have a multivalue field called weeksum that contains the following values: 2024:47 2024:48 2024:49 2024:50 2024:51 2024:52 2025:01 2025:02 2025:03. In this case, from the first to the last week, there are no missing weeks. I would like to create a field that indicates whether any weeks are missing from the sequence. For example, if week 2024:51 were missing, the field should indicate that there is a gap in the sequence. Please note that the weeksum multivalue field already consists of pre-converted values, so re-deriving them from _time (using something like | eval week = strftime(_time, "%Y:%U")) does not work.
With the current query, it calculates the downtime between the slot being removed and added, but the real problem is that it also adds in the previous downtime and merges everything into a single event. My point is: I need separate events for every downtime on each server, so I'm looking for a dashboard that shows hostname, date, slot, and the downtime.
Hi - thanks for the idea. Sure, I could build that into the search, true. On the output dashboard, what you end up with is "1 2 3 next..." at the bottom right, so you need to click through to see all possible values from the lookup we have on hand. Often enough there are 4-6 rows of empty fields in the result set, because the data is transposed. I'm looking to make the returned data more compact, if you will.
We are facing a log indexing issue with the log paths mentioned below. Previously, with the same inputs.conf configuration, logs were being ingested without issues, but suddenly it stopped sending logs. Each log file contains logs for a single day, but Splunk reports that it has already read these logs and skips them.

Below is the inputs.conf configuration:

[monitor://C:\Ticker\out\]
whitelist = .*_Mcast2Msg\\logs\\.*log$
index = rtd
disabled = false
followTail = 0
ignoreOlderThan = 3d
recursive = true
sourcetype = rtd_mcast
crcSalt = <SOURCE>

Source paths:

C:\Ticker\out\Equiduct_Mcast2Msg\logs\EquiductTest-01-21-25.log
C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSEtst-01-17-25.log
C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-14-25.log

_internal logs:

01-21-2025 14:48:20.745 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=105 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-21-25.log'.
01-21-2025 14:48:13.586 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=171 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-20-25.log'.
01-21-2025 14:48:06.332 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-21-25.log'.
01-21-2025 14:47:57.650 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-20-25.log'.
01-21-2025 14:47:51.466 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-20-25.log'.
01-21-2025 14:47:45.271 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-21-25.log'.
01-21-2025 14:47:39.644 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-21-25.log'.
01-21-2025 14:47:35.855 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-20-25.log'.
01-21-2025 14:47:35.660 +0000 INFO TailingProcessor [6536 MainTailingThread] - Adding watch on path: C:\Ticker\out.
01-21-2025 14:47:35.659 +0000 INFO TailingProcessor [6536 MainTailingThread] - Parsing configuration stanza: monitor://C:\Ticker\out\.

Issue details:
1) When we update the very first line of a log file, only the updated first line is ingested by Splunk, and the rest of the content is skipped.
2) We have deleted the fishbucket, but the issue persists.
3) Even after reinstalling the Splunk forwarder (version 8.2.12), the problem continues.
Hi. To transfer data over TLS from Deep Security to Splunk, I think a private key, certificate, and certificate chain need to be created in Splunk. Please explain the procedure for creating the private key, certificate, and certificate chain, and tell me what settings I need to configure for Splunk to receive the TCP traffic.