All Posts

If you get a limit of '100', you must be picking mind-reading #1, as you didn't pick #2.  That is just the problem with list().  You can increase this limit somewhat (see the [stats|sistats] stanza in limits.conf), but be very careful.  As to BATCH_ID, I still don't know what 134 and 26 mean.  One correlationId?  All events?  Is it because of the 100 limit on list_maxsize?  You should probably post a new question with a proper setup and a detailed explanation.
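For illustration, a minimal limits.conf sketch for raising the list() cap; the value 500 is just a placeholder, and larger values increase memory use per stats job, so test carefully:

[stats]
list_maxsize = 500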
I'm using Splunk Cloud, so I cannot upload CSS and JS files as a self-service thing.  When I use

<dashboard version="1.1" script="table_icons_inline.js" stylesheet="table_decorations.css">

from within the Dashboard Examples app (simple_xml_examples), it works fine. My question is that I want to use those JS and CSS files from the Dashboard Examples app in other apps too. I believe there should be some way to reference those files by providing relative paths, like:

<dashboard version="1.1" script="apps/simple_xml_examples/appserver/static/table_icons_inline.js" stylesheet="apps/simple_xml_examples/appserver/static/table_decorations.css">

Any ideas??
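One pattern that may work here is Simple XML's app-prefixed reference rather than a filesystem path; a hedged sketch, assuming simple_xml_examples is visible to your role and your version supports the app:file form (verify against the Simple XML reference for your release):

<dashboard version="1.1" script="simple_xml_examples:table_icons_inline.js" stylesheet="simple_xml_examples:table_decorations.css">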
I am facing the same issue while using a scripted input. Did you find any way to identify the root cause and fix it? We are receiving data from a scripted input. We also tried putting that data in a CSV file which has all the data, but we are still observing missing data.
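A hedged first check, assuming the forwarder running the input sends its internal logs: scripted-input failures are generally logged by splunkd's ExecProcessor component, so a search like this may surface the cause:

index=_internal sourcetype=splunkd component=ExecProcessor log_level=ERROR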
The Vector splunk_hec_logs sink [1] supports the compression algorithms gzip, snappy, zlib, and zstd.  It seems the Splunk HEC server only supports gzip (I am using docker.io/splunk/splunk 9.2).  Does Splunk HEC support snappy, zlib, or zstd? Is it possible to enable these algorithms besides gzip?

[1] https://vector.dev/docs/reference/configuration/sinks/splunk_hec_logs/#compression
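For what it's worth, a hedged workaround sketch pinning the Vector sink to gzip, the algorithm HEC is known to accept; the endpoint, input name, and token variable are placeholders, and the option names follow the linked Vector docs, so check them against your Vector version:

[sinks.splunk]
type = "splunk_hec_logs"
inputs = ["my_source"]
endpoint = "https://splunk.example.com:8088"
default_token = "${SPLUNK_HEC_TOKEN}"
encoding.codec = "json"
compression = "gzip"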
Yes, that technique works - are you saying it doesn't?  
Ah, so I missed your point somewhat: the list contains all the values you want, and you should alert if one is missing from the data. You can do this

| stats count by Time Value
| append [
    | inputlookup lookup.csv
    ``` Filter the entries you expect here, e.g. using addinfo ```
    ``` | where Time is in the range you want ```
]
| stats count by Time Value
| where count=1

which adds the rows from the lookup to the end of your found data and then does the stats count again. If count=1 then the row has only come from the lookup. The filtering (where...) will need to work out what time range your search covers. Use the addinfo command to get the info_min_time and info_max_time fields, which you can then use to filter the lookup values you want.
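To make the addinfo step concrete, a hedged sketch of the subsearch filter, assuming Time in the lookup is an epoch timestamp (if it is a string, convert it with strptime first):

| append [
    | inputlookup lookup.csv
    | addinfo
    | where Time >= info_min_time AND Time <= info_max_time
    | fields - info_*
]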
Hi, did you resolve this problem?  Now I have also run into it; I cannot parse this data.
Thank you, but I'm not very good at using CSS. Could you give me an example based on my question?
https://docs.splunk.com/Documentation/Splunk/9.2.0/Data/WhatSplunkcanmonitor
Thanks!  I did not know about indexed fields; that would be something interesting. Is there a way to add another field that is always present for all models? For example, in addition to source, sourcetype, host, _raw, and _time, is it possible to add something like source_zone that works for all models? I saw that source, sourcetype, host, etc. are inherited, but I'm unsure where the inheritance comes from.
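As an illustration, a hedged sketch of adding a custom indexed field at index time; the stanza and field names (add_source_zone, source_zone, your_sourcetype, dmz) are placeholders, the transform belongs on the indexer or heavy forwarder, and fields.conf goes on the search head:

# transforms.conf
[add_source_zone]
REGEX = .
FORMAT = source_zone::dmz
WRITE_META = true

# props.conf
[your_sourcetype]
TRANSFORMS-add_zone = add_source_zone

# fields.conf (search head)
[source_zone]
INDEXED = true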
Apologies, my Value field is a combination of two separate field values from my index; it's there to uniquely classify an event. As you mentioned, count will not give a non-numeric value. The command would be

| stats count by Time Value
| fields - count

For using the lookup, should I sort the lookup or the live data from the index before comparing? The reason for asking is that even though I can manually confirm there is a mismatch, the script is unable to locate it. E.g. if the live data has 18 entries and the lookup has 20, the 2 missing entries are not showing up in the script's output.
My lookup file already has the entry for Time, in the expected format. Should I still proceed with the time formatting ?
Courtesy of this post, I renamed "Microsoft-Windows-DNS-Server" to "Microsoft-Windows-DNSServer" and now I am seeing DNS events in my Splunk server. "Microsoft-Windows-DNS-Server" is part of the log name, while "Microsoft-Windows-DNSServer" (no hyphen before "Server") is the provider name in the XML event. Go figure.
Here is the index stanza:

[new_dex]
homePath = volume:hotwarm/new_dex/db
coldPath = volume:cold/new_dex/colddb
thawedPath = $SPLUNK_DB/new_dex/thaweddb
maxTotalDataSizeMB = 2355200
homePath.maxDataSizeMB = 2944000
maxWarmDBCount = 4294967295 // I know this is wrong, but need help setting it
frozenTimePeriodInSecs = 15552000
maxDataSize = auto_high_volume
repFactor = auto

Also, should any other key = value pair be added? There are 18 indexers deployed, each with 16 TB of storage. frozenTimePeriodInSecs has been met, but data is not being moved/deleted. What configuration/details am I missing here? I need the data gone!
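As a starting point, a hedged diagnostic sketch for checking whether any buckets are actually old enough to freeze; a bucket only freezes when its newest event exceeds frozenTimePeriodInSecs, so one recent event can hold back an entire bucket:

| dbinspect index=new_dex
| eval newest_event_age_days = round((now() - endEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch newest_event_age_days
| sort - newest_event_age_days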
A bar chart will give you a different colour for each series, so you would need to do something like

| chart max(duration) over _time by ESService
Consequences...

- Poor performance of your dashboard
- Poor performance for other users
- Excessive usage of an SVC licence if using SVC in Splunk Cloud - potentially causing additional licence costs to the organisation
- Skipped searches
- Your application will not be liked by others in your organisation
- Alerts may not fire, and as such you may miss critical security detections that could indicate hackers are attacking your system, or that critical infrastructure is having performance issues, resulting in an outage of your primary web site.

These are some, but not all, of the consequences. All will depend on what you are using Splunk for, but I hope you get the picture. I've seen one dashboard like yours with 60 panels all on auto refresh, all searching the same data independently, and that one dashboard, out of 1000 others, was using a significant proportion of the compute cost across the search head cluster.
Honestly, I have no idea what they mean by importing the logs here. Anyway, you checked the btool output, which shows the config. Check the inputstatus as well (this shows - as the name says - the status of the inputs).
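For reference, the CLI call on the forwarder looks like this (the path assumes the default install location):

PS C:\Program Files\SplunkUniversalForwarder> bin\splunk.exe list inputstatus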
I thought "[WinEventLog://DNS Server]" is the same as "[WinEventLog://Microsoft-Windows-DNS-Server/Audit]". But yes I am using explicit log name (path). I also stayed away from [WinEventLog://DNS Se... See more...
I thought "[WinEventLog://DNS Server]" is the same as "[WinEventLog://Microsoft-Windows-DNS-Server/Audit]". But yes I am using explicit log name (path). I also stayed away from [WinEventLog://DNS Server] because of this doc . It says importing log is needed, which is confusing.  Below is the trimmed inputsstatus list output PS C:\Program Files\SplunkUniversalForwarder> bin\splunk.exe btool inputs list --debug | Select-String "dns" C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf evt_dns_name = C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf [WinEventLog://Microsoft-Windows-DNS-Server/Audit] C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf evt_dns_name = <snip> C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf connection_host = dns C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf evt_dns_name = <snip>
Something like

| rex "<<<\s*(?<LogType>[^\s]*)\s*:[^:]*:[^:]*:[^:]*:(?<Class>[^:]*).*REQS REQUID\s*::\s*(?<ReqsRequid>[^:]*).*SUB REQUID::\s*(?<SubRequid>[^:]*).*Application\s*:(?<Application>[^:]*)::\s*Org\s*:\s*(?<Org>[^:]*)"
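To show the shape of event this pattern expects, here is a purely hypothetical sample line it would match (every value invented for illustration):

<<< INFO :a:b:c:MyClass ... REQS REQUID :: 123 ... SUB REQUID:: 456 ... Application :MyApp:: Org : MyOrg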
Your examples are round seconds, but if you have epoch times to search between, use the epoch with decimal places where required. So your example (which actually has no millisecond component) could be

index=my_app earliest=1710525600.000 latest=1710532800.000 env=production service=my-service