Yes, that technique works - are you saying it doesn't?  
Ah, so I missed your point somewhat: the list contains all the values you expect, and you want to alert if one is missing from the data. You can do this

| stats count by Time Value
| append
    [| inputlookup lookup.csv
     ``` Filter the entries you expect here, e.g. using addinfo ```
     ``` | where Time is in the range you want ```
    ]
| stats count by Time Value
| where count=1

which adds the rows from the lookup to the end of your found data and then does the stats count again. If count=1, the row has only come from the lookup, i.e. it is missing from your data. The filtering (where...) will need to work out what time range your search covers. Use the addinfo command to get the info_min_time and info_max_time fields, which you can then use to filter the lookup down to the values you want.
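Putting it together, a sketch of the whole thing, assuming your lookup is lookup.csv with Time and Value columns and that Time holds epoch seconds (if it's a formatted string, convert it with strptime before comparing):

| stats count by Time Value
| append
    [| inputlookup lookup.csv
     ``` addinfo adds the time range of the search as fields on each row ```
     | addinfo
     | where Time >= info_min_time AND Time <= info_max_time
     | fields Time Value
    ]
| stats count by Time Value
| where count=1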
Hi, did you resolve this problem? I have now run into it as well; I cannot parse this data.
Thank you, but I'm not very good with CSS. Could you give me an example based on my question?
https://docs.splunk.com/Documentation/Splunk/9.2.0/Data/WhatSplunkcanmonitor
Thanks! I did not know about indexed fields; that sounds interesting. Is there a way to add another field that is always present for all models? For example, in addition to source, sourcetype, host, _raw, and _time, is it possible to add something like source_zone that works for all models? I saw that source, sourcetype, host, etc. are inherited, but I'm unsure where the inheritance comes from.
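For context, an index-time field of that sort is typically defined in transforms.conf and fields.conf on the indexing tier. A minimal sketch, assuming a static value - source_zone, the value dmz, and your_sourcetype are all made-up names for the example:

# transforms.conf
[add_source_zone]
# matches every event; writes a constant indexed field
REGEX = .
FORMAT = source_zone::dmz
WRITE_META = true

# props.conf
[your_sourcetype]
TRANSFORMS-zone = add_source_zone

# fields.conf
[source_zone]
INDEXED = true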
Apologies, my Value field is a combination of two separate field values from my index; it's used to uniquely classify an event. As you mentioned, count will not give a non-numeric value. The command would be

| stats count by Time Value
| fields - count

For using the lookup, should I sort the lookup or the live data from the index before comparing? I ask because even though I can manually confirm that there is a mismatch, the search is unable to locate it. E.g. if the live data has 18 entries and the lookup has 20, the 2 missing entries are not showing up in the results.
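For reference, what I'm effectively running is something like this (fieldA and fieldB are placeholders for my two real fields; the combined Value must be built identically to how it appears in the lookup, or the rows will never pair up):

index=my_index
| eval Value=fieldA.":".fieldB
| stats count by Time Value
| append
    [| inputlookup lookup.csv
     | fields Time Value
    ]
| stats count by Time Value
| where count=1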
My lookup file already has the entry for Time, in the expected format. Should I still proceed with the time formatting?
Courtesy of this post, I renamed "Microsoft-Windows-DNS-Server" to "Microsoft-Windows-DNSServer" and now I am seeing DNS events on my Splunk server. "Microsoft-Windows-DNS-Server" is part of the log name, while "Microsoft-Windows-DNSServer" (no hyphen) is the provider name in the XML event. Go figure.
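For anyone else hitting this, my working inputs.conf stanza now looks something like this (renderXml and index are just my own settings, not required; adjust to your environment):

[WinEventLog://Microsoft-Windows-DNSServer/Audit]
disabled = 0
renderXml = true
index = wineventlog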
Here is the index stanza:

[new_dex]
homePath = volume:hotwarm/new_dex/db
coldPath = volume:cold/new_dex/colddb
thawedPath = $SPLUNK_DB/new_dex/thaweddb
maxTotalDataSizeMB = 2355200
homePath.maxDataSizeMB = 2944000
# I know this is wrong, but need help setting it:
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 15552000
maxDataSize = auto_high_volume
repFactor = auto

Also, should any other key = value pair be added? There are 18 indexers deployed, each with 16 TB of storage. frozenTimePeriodInSecs has been met, but data is not being moved/deleted. What configuration/details am I missing here? I need the data gone!
A bar chart will give you a different colour for each series, so you would need to do something like

| chart max(duration) over _time by ESService
Consequences...

- Poor performance of your dashboard
- Poor performance for other users
- Excessive usage of your SVC licence if using SVC in Splunk Cloud, potentially causing additional licence costs to the organisation
- Skipped searches
- Your application will not be liked by others in your organisation
- Alerts may not fire, and as such you may miss critical security detections that could indicate hackers are attacking your system, or that critical infrastructure is having performance issues, resulting in an outage of your primary web site

These are some, but not all, of the consequences. It all depends on what you are using Splunk for, but I hope you get the picture. I've seen one dashboard like yours with 60 panels, all on auto-refresh, all searching the same data independently, and that one dashboard, out of 1000 others, was using a significant proportion of the compute cost across the search head cluster.
Honestly, I have no idea what they mean by importing the logs here. Anyway, you checked the btool output, which shows the config. Check the inputstatus as well (this shows, as the name says, the status of the inputs).
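On the universal forwarder that would be something like (path taken from your btool example; the Select-String filter is just to trim the output):

PS C:\Program Files\SplunkUniversalForwarder> bin\splunk.exe list inputstatus | Select-String "dns"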
I thought "[WinEventLog://DNS Server]" is the same as "[WinEventLog://Microsoft-Windows-DNS-Server/Audit]". But yes, I am using the explicit log name (path). I also stayed away from [WinEventLog://DNS Server] because of this doc. It says importing the log is needed, which is confusing. Below is the trimmed btool inputs list output:

PS C:\Program Files\SplunkUniversalForwarder> bin\splunk.exe btool inputs list --debug | Select-String "dns"
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf    evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf      [WinEventLog://Microsoft-Windows-DNS-Server/Audit]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf    evt_dns_name =
<snip>
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf    connection_host = dns
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf    evt_dns_name = <snip>
Something like

| rex "<<<\s*(?<LogType>[^\s]*)\s*:[^:]*:[^:]*:[^:]*:(?<Class>[^:]*).*REQS REQUID\s*::\s*(?<ReqsRequid>[^:]*).*SUB REQUID::\s*(?<SubRequid>[^:]*).*Application\s*:(?<Application>[^:]*)::\s*Org\s*:\s*(?<Org>[^:]*)"
Your examples are round seconds, but if you have epoch times to search between, use the epoch with decimal places where required. So your example (which actually has no millisecond component) could be

index=my_app earliest=1710525600.000 latest=1710532800.000 env=production service=my-service
eventstats is a way to get stats without losing fields you want to retain, but it is not an efficient command. If you do use eventstats, make sure you use the fields statement before eventstats: all the data is transferred to the search head before the stats are calculated, so trimming fields will reduce the data transfer from the indexers. Another efficient way to get stats without losing fields is to do

| fields a b c etc... client_ip
| stats count values(*) as * by client_ip
| where count<10

This will do the aggregations but will retain all the values of the other fields in the returned row for that client IP. This may not be how you want to see the data, but from a performance point of view, on large datasets eventstats can be very slow, whereas stats will be fast. You can refine this further by doing something like

| fields a b c d e f client_ip
| stats count values(*) as * by client_ip a b c
| eventstats sum(count) as total by client_ip
| where total<10

where your split collects some other fields as well as the IP, and you then use eventstats on the much smaller dataset to calculate the total count for the IP - this will generally be faster than eventstats at the start. Hope this helps.
Ok. I assume you checked the name for this particular Event Log (the name of the stanza must match the "Full Name" property from the EventLog properties page). The "DNS-Server" alone won't do. Do you have any errors related to this input in your splunkd.log? What does your splunk list inputstatus say?  
Yes, the events were shown in Event Viewer in near real-time.
There is quite a lot going on here and I'm not fully sure you're getting proper results. Anyway.

1. The "base" search is not very well done. Firstly, you're searching for separate short tokens instead of whole strings in some of those parentheses. Secondly, you use wildcards at the beginning of your search terms, which kills performance.
2. The transaction command combines values of the same field across the whole transaction. That's how it works.
3. I'd try to combine the events with stats instead of transaction, of course - transaction is best avoided if you can (a sketch of the stats approach is below).
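A minimal sketch of the stats approach, assuming your events share a correlation field (session_id here is hypothetical, as is the action field) and that you want the transaction's duration and the combined field values:

index=my_index
| stats min(_time) as start_time max(_time) as end_time values(action) as action by session_id
| eval duration=end_time-start_time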
There is quite a lot going on here and I'm not fully sure you're getting proper results. Anyway. 1. The "base" search is not very well done. Firstly, you're searching for separate short tokens instead of whole strings in some of those parentheses. Secondly, you use wildcards at the beginning of your search terms. It kills performance. 2. The transaction command combines values of the same field across the whole transaction. That's how it works. 3. I'd try to combine the events by stats instead of transaction of course. Transaction is best avoided if you can.