All Posts

This is just to add some pieces of information. The Windows Event Log data is written to disk at least before and after a reboot or a restart of the "Windows Event Log" service. These files are then saved under C:\Windows\System32\winevt\Logs with names such as Application.evtx or Security.evtx. These files are in a binary format, but the format is known and there are tools to extract their data in text form; e.g. in Python there's a module named "python-evtx". I did not try using this module on a Linux-based indexer to read the data directly from the files. Doing this is probably a bad idea for the standard Windows Event Logs, as these are best read using the solution provided above, but in the case of "standalone" event files, which other applications might create, such tools are a way to go.
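The "solution provided above" is not quoted in this feed, but the usual way to read the standard logs is Splunk's native Windows Event Log input on a Windows forwarder; a minimal inputs.conf sketch (the Security channel here is just an example):

[WinEventLog://Security]
disabled = 0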
If I run this command: | eval year=substr(weeksum,1,4) the field remains empty, maybe because my field weeksum comes from an eventstats command: | eventstats values(week) as weeksum by IP,dest_ip,plugin_id and the multivalue field is in a format that substr cannot read?
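That is indeed the likely cause: substr() expects a single string and returns null when handed a multivalue field. A minimal sketch of one way around it, assuming a Splunk version with mvmap (8.0 or later), which applies the expression to each value separately:

| eventstats values(week) as weeksum by IP,dest_ip,plugin_id
| eval year=mvmap(weeksum, substr(weeksum,1,4))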
Another system is trying to send logs to Splunk over TLS. I was wondering if I need to create a certificate in Splunk, and I would like to know how to set up TLS reception in Splunk.
You probably wanted to do something like stats count(eval(isnotnull(attack_type))). I must say though that I don't like the stats-eval syntax - it can be confusing. I prefer to do stuff explicitly. Like this: | eval isattack=if(isnotnull(attack_type),1,0) | stats sum(isattack) PS: Oh, and don't search across all your indexes. While it might work relatively well on some small deployments or for a user with very limited permissions, it's a very bad habit which doesn't scale well. And don't use wildcards at the beginning of your search term (like *juniper*).
When I launch an application, the name of the application no longer appears at the top right.
What exactly are you trying to do? You should never need another party's private key.
If I understand you correctly, you want to remove all-empty columns from your original data, right? <your_search> | transpose 0 include_empty=f
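If the result should keep its original orientation, a second transpose can flip the table back; a sketch, assuming the default "column" field name that transpose produces:

<your_search>
| transpose 0 include_empty=f
| transpose 0 header_field=column
| fields - column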
Yes. crcSalt is rarely the way to go. The solution is usually to raise the initCrcLength value so that if you have a constant "header" in your file it gets skipped. As for your original question - there can be several different reasons for it. Try checking the output of splunk list monitor and splunk list inputstatus for those problematic files.
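A minimal inputs.conf sketch of that change, assuming the monitor path from the original post and that the constant header fits within the first 1024 bytes (the default initCrcLength is 256):

[monitor://C:\Ticker\out\]
initCrcLength = 1024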
Border case question (I like those) - how do you know how many weeks a year has? As silly as it sounds - depending on the particular year and on how you're counting, a year can have between 52 and 54 weeks.
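To make that concrete: strftime's %V (ISO week) and %U (Sunday-based week, which can start at 00) can disagree on the very same date; a quick sketch to try:

| makeresults
| eval d=strptime("2024-12-31", "%Y-%m-%d")
| eval isoweek=strftime(d, "%V"), sundayweek=strftime(d, "%U")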
Hi @omcollia , you could use the delta command to check if the difference between one value and the following one is 1, something like this:

<your_search>
| eval year=substr(weeksum,1,4), week=substr(weeksum,6,2)
| sort year week
| delta weeksum AS prevweeksum
| delta week AS prevweek
| delta year AS prevyear
| eval diff=week-prevweek
| search year=prevyear diff>1
| table weeksum prevweeksum year prevyear week prevweek

This way, if the search has results, there is some error in the sequence. Ciao. Giuseppe
Hi @dj064  My suggestion would be to not use the crcSalt setting for log-rotated files. Can you please disable it and restart Splunk to check the status? Also, if you can, please share some of the log files with the important data masked.   crcSalt = <SOURCE>
I have a multivalue field called weeksum that contains the following values:

2024:47
2024:48
2024:49
2024:50
2024:51
2024:52
2025:01
2025:02
2025:03

In this case, from the first to the last week, there are no missing weeks. I would like to create a field that identifies whether there are any missing weeks in the sequence. For example, if week 2024:51 were missing, the field should indicate that there is a gap in the sequence. Please note that the weeksum multivalue field already consists of pre-converted values, so deriving them again from the event time (using something like | eval week = strftime(_time, "%Y:%U")) does not work.
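Since the field is multivalue, one option is to map each "YYYY:WW" value to a single running week number and compare the count of values against the span. A rough sketch, assuming mvmap is available, that the values are already in ascending order, and a fixed 53 weeks per year (which, as noted elsewhere in this feed, is only an approximation, so year boundaries may need care):

<your_search>
| eval idx=mvmap(weeksum, tonumber(substr(weeksum,1,4))*53 + tonumber(substr(weeksum,6,2)))
| eval span=mvindex(idx,-1) - mvindex(idx,0) + 1
| eval gap=if(mvcount(idx) < span, "gap in sequence", "complete")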
With the current query it is calculating the downtime between the slot being removed and added, but the real problem is that it adds the previous downtime to the current one and merges them into a single event. My point is, I need separate events for every downtime on the servers, so I am looking for a dashboard that shows hostname, date, slot and the downtime.
Hi - thanks for the idea. Sure, I could build that into the search, true. On the output dashboard what you end up with is "1 2 3 next..." on the bottom right, so you need to click through to see all possible values from the lookup that we have on hand. Often enough there are 4-6 rows of empty fields in the result set, because the data is transpose'd. I'm looking to make the returned data more compact, if you will.
We are facing a log indexing issue with the log paths mentioned below. Previously, with the same inputs.conf configuration, logs were being ingested without issues, but suddenly Splunk stopped indexing them. Each log file contains logs for a single day, but Splunk reports that it has already read these logs and skips them. Below is the inputs.conf configuration:

[monitor://C:\Ticker\out\]
whitelist = .*_Mcast2Msg\\logs\\.*log$
index = rtd
disabled = false
followTail = 0
ignoreOlderThan = 3d
recursive = true
sourcetype = rtd_mcast
crcSalt = <SOURCE>

Source paths:

C:\Ticker\out\Equiduct_Mcast2Msg\logs\EquiductTest-01-21-25.log
C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSEtst-01-17-25.log
C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-14-25.log

_internal logs:

01-21-2025 14:48:20.745 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=105 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-21-25.log'.
01-21-2025 14:48:13.586 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=171 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-20-25.log'.
01-21-2025 14:48:06.332 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-21-25.log'.
01-21-2025 14:47:57.650 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-20-25.log'.
01-21-2025 14:47:51.466 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-20-25.log'.
01-21-2025 14:47:45.271 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-21-25.log'.
01-21-2025 14:47:39.644 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-21-25.log'.
01-21-2025 14:47:35.855 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-20-25.log'.
01-21-2025 14:47:35.660 +0000 INFO TailingProcessor [6536 MainTailingThread] - Adding watch on path: C:\Ticker\out.
01-21-2025 14:47:35.659 +0000 INFO TailingProcessor [6536 MainTailingThread] - Parsing configuration stanza: monitor://C:\Ticker\out\.

Issue details:
1) When we update the very first line of a log file, only the updated first line is ingested by Splunk, and the rest of the content is skipped.
2) We have deleted the fishbucket, but the issue persists.
3) Even after reinstalling the Splunk forwarder (version 8.2.12), the problem continues.
Hi. To send TLS traffic from Deep Security to Splunk, I think a private key, a certificate, and a certificate chain should be created in Splunk. Please kindly explain the procedure for creating the private key, certificate, and certificate chain. And what settings do I need to set up to receive the TCP traffic in Splunk?
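For reference, a minimal sketch of the receiving side, assuming the server certificate, its private key, and the chain have been concatenated into one PEM file; the port, path, and sourcetype here are illustrative, not from the thread. In inputs.conf on the receiving Splunk instance:

[tcp-ssl:6514]
sourcetype = deepsecurity

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <private key password>
requireClientCert = false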
Wildcards don't work everywhere, and the eval function may be one of those places. Try using isnotnull() instead: index=* *jupiter* | stats count as "Total Traffic" count(eval(isnotnull(attack_type))) as "Attack Traffic" On the subject of wildcards, avoid using index=* except in special circumstances. Also, a leading wildcard in the search command (as in "*jupiter*") is very inefficient.
I am trying to build a total traffic vs attack traffic Splunk query to put in a dashboard panel. We have a field called attack_type which contains all the attacks, and those are dynamic (new ones arrive daily). For the last 24 hours, we have 1000 total events and 400 attack_type events. How can I show this in a single dashboard panel? I tried to write this query: index=* *jupiter* | stats count as "Total Traffic" count(eval(attack_type="*")) as "Attack Traffic" but I am getting this error: Error in 'stats' command: The eval expression for dynamic field 'attack_type=*' is invalid. Error='The expression is malformed. An unexpected character is reached at '*'.'. Please help me in this regard.
Where are you seeing the 400 error? The HEC client said timeout in this post; it didn't seem to mention 400 Bad Request. If it's a format problem, then something is fundamentally wrong with the payload, or you are sending to the wrong URL, etc. Anyway, I suggest you create your own post with any info and config, especially the HEC URL config and any Splunk internal logs that align. Otherwise, try support or the GitHub issues. I found this issue that sounded kind of similar, but it's hard to tell without you providing config details or logs: https://github.com/splunk/splunk-connect-for-syslog/issues/1329
No, it is not busy - it is basically getting a 400 error, which suggests a format problem. If I make the default destination just d_hec_default or d_hec_other - basically, if I send to only one of the Splunk HEC sites - it is fine. It only gets the busy or format error on the second site when both are configured: whichever is the first site works, and whichever is the second site doesn't. So no, it is not busy, and it is not that it cannot ingest the logs; SC4S doesn't seem to work well with multiple destinations. I am looking for help from someone that has done that.
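For context, a sketch of the SC4S environment settings involved, based on the alternate-destination pattern as I recall it from the SC4S docs; the URLs, token placeholders, and the OTHER identifier are illustrative assumptions, not verified config:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk-site1.example.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<hec-token-1>
SC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk-site2.example.com:8088
SC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=<hec-token-2>
SC4S_DEST_SPLUNK_HEC_OTHER_MODE=GLOBAL

With MODE=GLOBAL, the alternate destination is supposed to receive a copy of all events alongside the default one, which is the behavior the poster seems to be after.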