Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Splunk stores that information in the "fishbucket" at /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db. That database cannot be changed or moved, but you should be able to back up and restore it.
We have nearly 700 indexes configured in Splunk and more than 1000 sourcetypes associated with them. I need to find out which indexes and sourcetypes are not used by any user in any saved search, dashboard, macro, ad-hoc search, or alert. I looked into the audit index for the last 90 days but didn't get accurate results. I need a Splunk query to produce a report showing unused indexes and sourcetypes.
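One hedged approach (a sketch, not a complete answer — extracting index names from _audit search strings is lossy): inventory every index with tstats, then subtract the indexes referenced in audited searches over the last 90 days.

```spl
| tstats count where index=* by index
| fields index
| search NOT
    [ search index=_audit action=search info=granted earliest=-90d search=*
    | rex field=search max_match=0 "index\s*=\s*\"?(?<index>[A-Za-z0-9_*-]+)"
    | stats count by index
    | fields index ]
```

Caveats: subsearch result and runtime limits apply, wildcarded references such as index=*proxy* will not literally match the expanded index names, and saved searches that never ran in the 90-day window are invisible to _audit. The same pattern can be repeated with "by index, sourcetype" for sourcetype usage, though sourcetype references are even harder to extract reliably.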
Hello to everyone! One of the source types contains messages with no timestamp:

<172>hostname: -Traceback: 0x138fc51 0x13928fa 0x1399b28 0x1327c33 0x3ba6c07dff 0x7fba45b0339d

To resolve this problem, I created a transform rule that successfully eliminated this "junk" from the index:

[wlc_syslog_rt0]
REGEX = ^<\d+>.*?:\s-Traceback:\s+
DEST_KEY = queue
FORMAT = nullQueue

But even after that, I still get messages indicating that timestamp extraction failed:

01-31-2024 15:08:17.539 +0300 WARN DateParserVerbose [17276 merging_0] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (20) characters of event. Defaulting to timestamp of previous event (Wed Jan 31 15:08:05 2024). Context: source=udp:1100|host=172.22.0.11|wlc_syslog|\r\n 566 similar messages suppressed. First occurred at: Wed Jan 31 15:03:13 2024

All events from this sourcetype look like this:

<172>hostname: *spamApTask0: Jan 31 12:58:47.692: %LWAPP-4-SIG_INFO1: [PA]spam_lrad.c:56582 Signature information; AP 00:57:d2:86:c0:30, alarm ON, standard sig Auth flood, track per-Mac precedence 5, hits 300, slot 0, channel 1, most offending MAC 54:14:f3:c8:a1:b3

Before asking, I tried to find events without a timestamp using the regex and cluster commands but didn't find anything. So, is this normal behavior - does Splunk report the missing timestamp before moving events to the nullQueue - or did I do something wrong?
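For what it's worth, this ordering is expected: timestamp extraction happens in the aggregation stage of the parsing pipeline, before the typing stage where TRANSFORMS routing to nullQueue runs, so the DateParserVerbose warning can be logged even for events that are later dropped. The Traceback events carry no timestamp at all, so warnings for them will persist (harmlessly) regardless. For the kept events, one hedged way to make extraction deterministic is to anchor it on the timestamp they do contain - a sketch, where the stanza name, regex, and lookahead are assumptions based on the samples shown:

```
# props.conf - a sketch, assuming the sourcetype is wlc_syslog and kept
# events carry a "Mon DD HH:MM:SS.mmm" timestamp after the task name
[wlc_syslog]
TIME_PREFIX = ^<\d+>[^:]+:\s\S+:\s
TIME_FORMAT = %b %d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```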
Hi @davidwaugh, as @ITWhisperer said, it isn't always best practice to have an asterisk at the beginning and the end of a field value, but for the index field it isn't a grave sin. I'm curious to understand why you have so many indexes: indexes aren't database tables. Usually in Splunk you use different indexes when you have different retentions or different access grants, so why do you have so many? With many indexes you gain no advantage and create many management problems. So my hint is to redesign your data structure and use fewer indexes. You can differentiate data flows using sourcetype and other fields. Ciao. Giuseppe
It is not clear what you are trying to achieve when _time is from the previous day. Also, note that you could consider using | eval time_difference=tostring(now() - _time, "duration")  
Hello Splunk community, I would like to know if there is a way to change the database location for monitored files in the Splunk universal forwarder, similar to what Fluent Bit allows with the DB property (https://docs.fluentbit.io/manual/pipeline/inputs/tail). My Splunk universal forwarder is running in a container and accesses a shared mount containing my applications' log files, and if the Splunk UF container restarts I would like to prevent the monitored files from being reindexed from the beginning. Is there a config to choose the database location? Cheers in advance
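As far as I know there is no supported setting to relocate the fishbucket itself, but you can get the same effect by persisting it across container restarts. A hedged sketch with Docker Compose - the image tag, paths, and volume names below are illustrative, not prescriptive:

```yaml
# docker-compose.yml - persist the UF's var directory (which contains
# the fishbucket) so restarts do not trigger reindexing from scratch
services:
  uf:
    image: splunk/universalforwarder:latest
    volumes:
      - ./app-logs:/var/log/app:ro        # shared mount with application logs
      - uf-var:/opt/splunkforwarder/var   # keeps the fishbucket between restarts
volumes:
  uf-var:
```

Any mechanism that keeps /opt/splunkforwarder/var on persistent storage (a bind mount, a Kubernetes PersistentVolumeClaim, etc.) achieves the same result.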
@PickleRick  Sorry for the late reply. Thank you for the clear explanation. I understand!!!   
Thank you! It is working
Using leading wildcards in searches is generally not a good idea. However, since this one is on index, it won't scan all events in all indexes to see if the index matches; it will find the matching indexes from the list of indexes and only search those.
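If you want to verify exactly which indexes a wildcard expands to before relying on it, a quick hedged check (this runs against index metadata, not events):

```spl
| eventcount summarize=false index=*proxy*
| dedup index
| table index
```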
Hello, I have a question. We have lots of indexes, and rather than specify each one, I use index=*proxy* to search across index=some_proxy1 and index=some_proxy2. I understand that index=* is obviously a bad thing to do, but does index=*proxy* really cause bad things to happen in Splunk? I've been using syntax like this for several years, and nothing bad has ever happened.

I did a test on one index. With index=*proxy*:

This search has completed and has returned 1,000 results by scanning 117,738 events in 7.115 seconds

With index=some_proxy1:

This search has completed and has returned 1,000 results by scanning 121,162 events in 7.318 seconds

As you can see, using *proxy* over the same time period was actually quicker.
Hi, I have this query that calculates how long the alerts stay open. So far so good, but unfortunately if the rule name repeats (duplicate rule name) in a new event, then the now() function does not know how to calculate the correct time for the first rule that triggered. How can I calculate SLA time without deleting duplicates, keeping the same structure as shown in the picture?
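Without seeing the events it's hard to be precise, but a common pattern is to number each occurrence of a rule so the Nth open pairs with the Nth close instead of collapsing all duplicates together. A sketch, assuming each alert produces an open event and a close event, and that rule_name and status are the field names (both are assumptions):

```spl
... your base search ...
| sort 0 rule_name _time
| streamstats count AS occurrence BY rule_name status
| stats earliest(_time) AS opened latest(_time) AS closed BY rule_name occurrence
| eval open_secs=if(isnull(closed) OR closed==opened, now()-opened, closed-opened)
| eval sla_time=tostring(open_secs, "duration")
```

The streamstats count gives every repeat of a rule its own occurrence number, so the stats split keeps each open/close pair separate while still falling back to now() for alerts that are still open.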
First, surround the token name with dollar signs, i.e., $time_token$. Second, if _time in your second search is Splunk's built-in event time, its value is epoch and will never equal a string like "2024-01-23". Third, when you rename _time in the first search to Date and then use fieldformat on that field, you are only changing the display. The $time_token$ transmitted to the second search is still the original _time value, which is NOT the date value but the precise event time from the first search. As such, chances are extremely slim that the second search will find a match.

You need to rethink what value to send to the second search. The solution depends very much on what you are doing with the Date field in the first search and what exact value you are trying to match in the second search. No one else knows those conditions but yourself, so you will need to describe them very clearly.

I will give you one example. Suppose _time in your second search is event time, Date in the first search is just for display, and you want to match the calendar date between the first and second searches, even though the events' time of day differs. (These are big IFs. Like I said, no one else knows what your use case is and what the data look like.) In this case, you can keep the first search and work on the second search to match the calendar date like this:

| where relative_time(_time, "-0d@d") == relative_time($time_token$, "-0d@d")

This is perhaps the most expressive way to implement the use case I exemplified above, although it is not the most semantic in accordance with your original design. If you want to be semantic, both the first search and the second search need a change.
That's simply how Splunk shows the _time field. The data is consistent; the presentation might indeed be a bit confusing. You can get around it as @ITWhisperer showed already.
In the table, _time is converted into month buckets, but in the chart the X-axis is not showing monthly buckets.
| where _time == strptime("$time_token$","%Y-%m-%d")
The data in the chart is consistent with the data in the table - the issue is that the chart is treating _time as a special case of field. You can get around this by creating a new field called time and removing _time - you would need to ensure that the time field is listed first so that it becomes the x-axis:

| gentimes start=-365
| rename starttime as _time
| fields _time
| eval location=mvindex(split("ABCDEFGH",""),random()%8)
``` the lines above generate random data for testing ```
| timechart span=1mon count by location
| tail 6
| eval time=strftime(_time,"%Y-%m")
| fields - _time
| table time *
What do you mean by "different results"? They seem pretty much consistent.
I am getting different results for the same query when checked in statistics and visualizations. Attaching both screenshots.
For conditional evaluations you can use the if() or case() functions with the eval command. I still don't understand what you want to "not consider". Do you want to return values not matching a filter? Evaluate a field only for some subset of events? Something else?

A multiselect is a widget in a dashboard, yet you're posting this in the Splunk Search section. What's the connection between the two?

Please post a sample of events (anonymized if needed), the desired outcome, and any additional conditions affecting the search (like this multiselect).
Hi @PickleRick, I don't want that condition to be considered. It's a multiselect value: when some other values are passed along with it, it works, but when DMZ alone is passed, it doesn't, because we don't categorize DMZ in type, so we can't use that value. I want to skip the condition only for that value - is that possible?