All Posts



Edge-case question (I like those) - how do you know how many weeks a year has? As silly as it sounds, depending on the particular year and on how you're counting, a year can have between 52 and 54 weeks.
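For ISO-8601 weeks specifically, the count is always 52 or 53. A quick way to check in Python (an illustration of the point above, not something from the thread) uses the fact that December 28th always falls in the last ISO week of its year:

```python
from datetime import date

def iso_weeks_in_year(year: int) -> int:
    # Dec 28 is always in the last ISO week of its year, so its ISO
    # week number equals the number of ISO weeks in that year.
    return date(year, 12, 28).isocalendar()[1]

print(iso_weeks_in_year(2020))  # 53 (a "long" ISO year)
print(iso_weeks_in_year(2021))  # 52
```

Counting partial calendar weeks instead (the way strftime's %U/%W do, starting at week 00) is what can push the total to 53 or even 54 labeled weeks.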
Hi @omcollia , you could use the autoregress command to bring the previous value onto each row and check whether the difference between one week and the next is 1, something like this:

<your_search>
| eval year=substr(weeksum,1,4), week=substr(weeksum,6,2)
| sort year week
| autoregress weeksum AS prevweeksum
| autoregress week AS prevweek
| autoregress year AS prevyear
| eval diff=week-prevweek
| search year=prevyear diff>1
| table weeksum prevweeksum year prevyear week prevweek

This way, if the search returns results, there is a gap somewhere. Ciao. Giuseppe
Hi @dj064 My suggestion would be not to use the crcSalt setting for log-rotated files. Can you please disable it and restart Splunk to check the status? Also, could you share some log files, with any sensitive data masked?

crcSalt = <SOURCE>
  I have a multivalue field called weeksum that contains the following values 2024:47 2024:48 2024:49 2024:50 2024:51 2024:52 2025:01 2025:02 2025:03 In this case, from the first to the last week, there are no missing weeks. I would like to create a field that identifies if there are any missing weeks in the sequence. For example, if week 2024:51 is missing, the field should indicate that there is a gap in the sequence. Please note that the weeksum multivalue field already consists of pre-converted values, so converting them back to epoch (using something like | eval week = strftime(_time, "%Y:%U")) does not work.
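The gap check itself can also be prototyped in plain Python before wiring it into SPL. This sketch assumes weeks run 1..52 and wrap to 01 at a year boundary (a 53-week year would need the weeks_in_year parameter adjusted); find_gaps is a hypothetical helper, not something from the thread:

```python
def find_gaps(weeksums, weeks_in_year=52):
    """Return the (previous, current) pairs where the week sequence jumps."""
    gaps = []
    for prev, cur in zip(weeksums, weeksums[1:]):
        py, pw = map(int, prev.split(":"))
        cy, cw = map(int, cur.split(":"))
        same_year_step = (cy == py and cw == pw + 1)
        year_rollover = (cy == py + 1 and pw == weeks_in_year and cw == 1)
        if not (same_year_step or year_rollover):
            gaps.append((prev, cur))
    return gaps

weeks = ["2024:47", "2024:48", "2024:49", "2024:50", "2024:51",
         "2024:52", "2025:01", "2025:02", "2025:03"]
print(find_gaps(weeks))                                 # [] -> no missing weeks
print(find_gaps([w for w in weeks if w != "2024:51"]))  # [('2024:50', '2024:52')]
```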
With the current query it calculates the downtime between when a slot was removed and added, but the real problem is that it also pulls in the previous downtime, adds the times together, and merges everything into a single event. My point is, I need separate events for every downtime on the servers, so I'm looking for a dashboard that shows hostname, date, slot, and the downtime.
Hi - thanks for the idea. Sure, I could build that into the search, true. On the output dashboard what you end up with is "1 2 3 next..." on the bottom right, so you need to click through to see all possible values from the lookup that we have on hand. Often enough there are 4-6 rows of empty fields in the result set, because the data has been transposed. I'm looking to make the returned data more compact, if you will.
We are facing a log indexing issue with the log paths mentioned below. Previously, with the same inputs.conf configuration, logs were being ingested without issues, but suddenly it stopped sending logs. Each log file contains logs for a single day, but Splunk reports that it has already read these logs and skips them. Below is the inputs.conf configuration:

[monitor://C:\Ticker\out\]
whitelist = .*_Mcast2Msg\\logs\\.*log$
index = rtd
disabled = false
followTail = 0
ignoreOlderThan = 3d
recursive = true
sourcetype = rtd_mcast
crcSalt = <SOURCE>

Source paths:
C:\Ticker\out\Equiduct_Mcast2Msg\logs\EquiductTest-01-21-25.log
C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSEtst-01-17-25.log
C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-14-25.log

_internal logs:
01-21-2025 14:48:20.745 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=105 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-21-25.log'.
01-21-2025 14:48:13.586 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=171 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-20-25.log'.
01-21-2025 14:48:06.332 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-21-25.log'.
01-21-2025 14:47:57.650 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-20-25.log'.
01-21-2025 14:47:51.466 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-20-25.log'.
01-21-2025 14:47:45.271 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-21-25.log'.
01-21-2025 14:47:39.644 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-21-25.log'.
01-21-2025 14:47:35.855 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-20-25.log'.
01-21-2025 14:47:35.660 +0000 INFO TailingProcessor [6536 MainTailingThread] - Adding watch on path: C:\Ticker\out.
01-21-2025 14:47:35.659 +0000 INFO TailingProcessor [6536 MainTailingThread] - Parsing configuration stanza: monitor://C:\Ticker\out\.

Issue details:
1) When we update the very first line of a log file, only the updated first line is ingested by Splunk, and the rest of the content is skipped.
2) We have deleted the fishbucket, but the issue persists.
3) Even after reinstalling the Splunk forwarder (version 8.2.12), the problem continues.
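Some context on why crcSalt is in play here: a monitor input fingerprints each file by a checksum of its first bytes, so rotated logs that begin with an identical header can look like a file that was already read. A rough Python illustration of that idea (a sketch of the concept, not Splunk's actual code):

```python
import os
import tempfile
import zlib

def head_crc(path, nbytes=256):
    # Fingerprint a file by a checksum of its first `nbytes` bytes,
    # similar in spirit to how a monitor input identifies files.
    with open(path, "rb") as f:
        return zlib.crc32(f.read(nbytes))

header = b"# Mcast2Msg log header\n" * 16   # > 256 identical leading bytes
paths = []
for tail in (b"day one events\n", b"day two events\n"):
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(header + tail)
    paths.append(path)

# Identical first 256 bytes -> identical fingerprint, so a naive reader
# would treat the second file as one it has already ingested.
print(head_crc(paths[0]) == head_crc(paths[1]))  # True

for path in paths:
    os.remove(path)
```

The nonzero offsets in the _internal logs above are consistent with this: Splunk believes it has already read part of each file.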
Hi, To forward TLS-encrypted traffic from Deep Security to Splunk, I think a private key, certificate, and certificate chain need to be created on the Splunk side. Please explain the procedure for creating the private key, certificate, and certificate chain. Also, what settings do I need to configure for Splunk to receive the TCP input?
Wildcards don't work everywhere and the eval function may be one of those places.  Try using isnotnull(), instead. index=* *jupiter* | stats count as "Total Traffic" count(eval(isnotnull(attack_type))) as "Attack Traffic"  On the subject of wildcards, avoid using index=*, except in special circumstances.  Also, a leading wildcard in the search command (as in "*jupiter*") is very inefficient.
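The isnotnull() trick is plain conditional counting; in Python terms it looks like this (the sample events are made up for illustration):

```python
events = [
    {"src": "jupiter", "attack_type": "sql_injection"},
    {"src": "jupiter"},                      # no attack_type field on this event
    {"src": "jupiter", "attack_type": "xss"},
    {"src": "jupiter"},
]

total = len(events)
# Count only events where the field exists, mirroring count(eval(isnotnull(...)))
attacks = sum(1 for e in events if e.get("attack_type") is not None)
print(total, attacks)  # 4 2
```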
I am trying to build a "total traffic vs attack traffic" Splunk query to put in a dashboard panel. We have a field called attack_type which contains all the attacks, and those are dynamic (new ones arrive daily). For the last 24 hours we have 1000 total events and 400 attack_type events. How can I show this in a single dashboard panel? I tried this query:

index=* *jupiter* | stats count as "Total Traffic" count(eval(attack_type="*")) as "Attack Traffic"

but I get this error:

Error in 'stats' command: The eval expression for dynamic field 'attack_type=*' is invalid. Error='The expression is malformed. An unexpected character is reached at '*'.'.

Please help me in this regard.
Where are you seeing the 400 error? The HEC client said timeout in this post; it didn't seem to mention 400 Bad Request. If it's a format problem, then something is fundamentally wrong with the payload, or you are sending to the wrong URL, etc. Anyway, I suggest you create your own post with all the relevant info and config, especially the HEC URL config and any Splunk internal logs that line up. Otherwise, try support or the GitHub issues. I found this issue that sounded somewhat similar, but it's hard to tell without your config details or logs: https://github.com/splunk/splunk-connect-for-syslog/issues/1329
No, it is not busy - it is basically getting a 400 error, which means format. If I make the default destination only d_hec_default or d_hec_other, i.e. send to just one of the Splunk HEC sites, it is fine. It only gets the busy or format error on the second site when both are configured: if I swap them, whichever site is first works and whichever is second doesn't. So no, it is not busy, and it is not that it cannot ingest the logs; SC4S just doesn't seem to work well with multiple destinations. I am looking for help from someone who has done that.
Please provide some sample events which demonstrate the issue you have with your search
I am working on Splunk Enterprise Security. | savedsearch "Traffic - Total Count" works fine and gives me the desired output in search, but when I call it from the dashboard source code it shows no result.

{
    "type": "ds.savedSearch",
    "options": {
        "query": "'| savedsearch \"Traffic - Total Count\"'",
        "ref": "Traffic - Total Count"
    },
    "meta": {
        "name": "Traffic - Total Count"
    }
}

Do I need to do any configuration to get output on this dashboard?
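For what it's worth, a ds.savedSearch data source in Dashboard Studio is normally wired up through the ref option alone (the name of the saved report); it does not take a | savedsearch string in options.query. A minimal sketch, assuming the report really is named "Traffic - Total Count":

```json
{
    "type": "ds.savedSearch",
    "options": {
        "ref": "Traffic - Total Count"
    },
    "meta": {
        "name": "Traffic - Total Count"
    }
}
```

The dashboard's viewers also need permission to run the report, so check that it is shared beyond private.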
Hi, By default, if no timestamp exists in an event, Splunk falls back to the timestamp of the previous event. On one hand I do want Splunk to do this, but on the other hand I don't want Splunk to flag it under "Timestamp Parsing Issues" in Data Quality. Is there any way to explicitly tell Splunk to do this? I just don't want Splunk to treat it as an error. Thanks
Thank you for providing the link. Let me confirm once again. My client requires all nodes to be kept in a private subnet. So, by using indexer discovery, I can place both the manager node and the peer nodes in the private subnet, then set up an NLB in the public subnet in front of the manager node, with TLS encryption enabled. In that case, in the forwarders' configuration, I only need to set this NLB as the manager_uri, correct?
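As a sketch of the forwarder side (the hostname and key below are placeholders): with indexer discovery, outputs.conf on each forwarder points only at the manager, so the NLB address would go in manager_uri, e.g.:

```ini
[indexer_discovery:cluster1]
pass4SymmKey = <your_pass4SymmKey>
manager_uri = https://nlb.example.com:8089

[tcpout:cluster1_tcpout]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_tcpout
```

The NLB would need to pass the management port (8089 by default) through to the manager node, and the indexer peers it hands back must also be reachable from the forwarders on their receiving port (9997 by default).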
Your stated requirement does not completely match your examples. For example, some expected outputs have fewer "words" than are available in the "field". Also, is there an unwritten requirement that your "words" begin with a letter but may contain numbers? Making some assumptions derived from your written requirement and expected outputs, you could try something like this:

| makeresults format=csv data="raw
00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details -
001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
01hg34hgh44hghg4 - Exception while calling System A - null"
| rex field=raw " (?<dozenwords>([A-Za-z][A-Za-z0-9]*[^A-Za-z0-9]+){0,11}[A-Za-z][A-Za-z0-9]*)"
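If it helps to iterate on the pattern outside Splunk, the same regex works in Python's re module with the named group converted to (?P<...>) syntax; the sample string here is the first row of the makeresults data:

```python
import re

# Same pattern as the rex: up to 11 "word + separator" repetitions, then a final word.
pattern = re.compile(
    r" (?P<dozenwords>([A-Za-z][A-Za-z0-9]*[^A-Za-z0-9]+){0,11}[A-Za-z][A-Za-z0-9]*)"
)

raw = ("00012243asdsfgh - No recommendations from System A. "
       "Message - ERROR: System A | No Matching Recommendations")
m = pattern.search(raw)

# Count the word tokens actually captured: at most 12.
words = re.findall(r"[A-Za-z][A-Za-z0-9]*", m.group("dozenwords"))
print(len(words), words[:3])  # 12 ['No', 'recommendations', 'from']
```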
Hi, When using email templates, how do I capture the currently configured threshold value and the value that was observed? With the template below, I couldn't get the threshold value or the actual observed value:

Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
Threshold Value: ${latestEvent.threshold}
Actual Observed Value: ${latestEvent.observedValue}

Output:
Hi, same issue for me. I deleted these files:

/opt/splunk/bin/python2.7
/opt/splunk/bin/jp.py

and restarted Splunk. Then the messages disappeared.
We could close this topic. Thanks to Mario