All Posts

Yes. Verify your sources and their config in Splunk. Without more information we can't tell you anything more than that.
Hi Experts, I need to compare server lists from two different CSV lookups and create a flag based on the comparison results. I have two lookups:
abc.csv - contains the list of servers being monitored in the dashboard
def.csv - contains the list of servers from another source
I need to identify servers that are present in both abc.csv and def.csv, servers not found in the dashboard (i.e. abc.csv), and servers not found in def.csv. How do I compare them and create a flag? Any guidance or example queries would be greatly appreciated. Thank You
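A minimal sketch of one way to approach this, assuming both lookups share a server-name field called host (the field name and flag values are illustrative, not from the original post):

| inputlookup abc.csv
| eval in_abc=1
| append [| inputlookup def.csv | eval in_def=1]
| stats max(in_abc) as in_abc max(in_def) as in_def by host
| fillnull value=0 in_abc in_def
| eval flag=case(in_abc=1 AND in_def=1, "in_both", in_abc=0 AND in_def=1, "missing_from_dashboard", in_abc=1 AND in_def=0, "missing_from_def")

Servers present in both lookups get in_both; the other two flag values mark servers that appear in only one of the files.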
Hi @Splunk-Star, after the table or stats commands, Splunk shows only the output of those commands. This does not mean the other fields are not extracted. If you need to access other fields, add them to the table command.
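For example, a quick sketch reusing a rex extraction like the one being discussed; host and source are built-in fields and are used here only as examples of other fields you might want to keep:

index=* "Unknown message for StatusConsumer"
| rex field=_raw "\"topicId\":\"(?<topicId>\d+)\""
| table topicId host source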
Please let me know the correct data extraction.

index=* "Unknown message for StatusConsumer" topicId marshall
| rex field=_raw "\"topicId\":\"(?<topicId>\d+)\""
| table topicId

The data is not getting parsed after adding the table command to the Splunk query.
Hi community, when using data models, is it possible to remove/exclude this portion of the autoextractSearch: | search (index=* OR index=_*) ?
Now that is a valid use case for modifying segmentation; however, the impact is wide-reaching. You may also want to look at setting INTERMEDIATE_MAJORS = true, although that could result in a significant indexing performance impact. Which access log formats and source types do you most commonly use?
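For reference, a minimal sketch of what that setting might look like in a custom segmenters.conf stanza referenced from props.conf (the stanza name is illustrative; access_combined stands in for whichever source type you choose):

# segmenters.conf
[access_log_segmentation]
INTERMEDIATE_MAJORS = true

# props.conf
[access_combined]
SEGMENTATION = access_log_segmentation

As noted above, INTERMEDIATE_MAJORS grows the index lexicon, so weigh the indexing and storage cost against the search benefit.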
Query:

index=new "application status" AND Condition=Begin OR Condition=Done
| rex field=_raw "DIDS\s+\[(?<data>[^\]]+)"
| dedup data
| timechart span=1d count by application

Result:

_time        application1  application2
2022-01-06   10            20
2022-01-07   12            14
2022-01-08   18            30

I want to include the Condition field as well in the table. How can I do it?
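One possible way to get Condition into the result (a sketch, assuming application and Condition are both already-extracted fields) is to fold it into the series name before the timechart:

index=new "application status" (Condition=Begin OR Condition=Done)
| rex field=_raw "DIDS\s+\[(?<data>[^\]]+)"
| dedup data
| eval series=application.":".Condition
| timechart span=1d count by series

Each column then becomes something like application1:Begin, application1:Done, and so on.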
Is there anything I can do to make the error go away?
Hey folks, breaking news for the TERM/PREFIX enthusiasts! Brace yourselves – our TERM searches cannot find Punycode-encoded domains! xn--bcher-kva.de https://en.m.wikipedia.org/wiki/Punycode
Well, a double hyphen is really a poor man's approximation of an em-dash or en-dash, and I don't recall seeing them outside of TeX sources, so I was pretty surprised to find it in segmenters. Anyway, punctuation is not a part of the script. Many languages using Latin script use (slightly) different punctuation systems, and languages using different scripts (like Cyrillic) use very similar punctuation. But we're drifting heavily off-topic.
It was just a Wikipedia joke: "In Latin script, the double hyphen ⹀ is a punctuation mark that consists of two parallel hyphens. It was a development of the earlier double oblique hyphen ...." I'm assuming an early developer analyzed a suitable corpus of log content and determined a double hyphen or long dash should be considered a major breaker.
Double oblique hyphen is U+2E17 and looks like this: ⸗
Somewhere in Splunk history, there's a developer who did the lexicographically correct thing knowing it would stymie future Splunkers. Let's raise a glass to the double oblique hyphen (thanks, Wikipedia)!
My two cents - most if not all small home routers use just the Linux kernel, some typical Linux networking tools, and a custom WebUI. (Legality here is sometimes questionable.) So it's not "Asus router logs", it's just Linux logs - in this case, normal logs from the LOG module of iptables.
Nice one. I even checked the specs for segmenters.conf, and while I noticed the single dash as a minor segmenter, I completely missed the double dash. (Though it is "hidden" relatively far into the default declaration and surrounded by all those other entities.)
Nice explanation and nice way to get values to work with tstats!
Hi @PavelP,

This isn't an issue with TERM or PREFIX but with how Splunk indexes abc--xyz. We can use walklex to list terms in our index:

| walklex index=main type=term
| table term

We'll find the following:

abc
abc##xyz
abc$$xyz
abc%%xyz
abc..xyz
abc//xyz
abc==xyz
abc@@xyz
abc\\xyz
abc__xyz
xyz

Note that abc--xyz is missing. Let's look at segmenters.conf. The default segmenter stanza is [indexing]:

[indexing]
INTERMEDIATE_MAJORS = false
MAJOR = [ ] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29
MINOR = / : = @ . - $ # % \\ _

Note that -- is a major breaker. If we index abc-xyz with a single hyphen, we should find abc-xyz in the list of terms:

abc
abc##xyz
abc$$xyz
abc%%xyz
abc-xyz
abc..xyz
abc//xyz
abc==xyz
abc@@xyz
abc\\xyz
abc__xyz
xyz

If walklex returns a missing merged_lexicon.lex message, we can force optimization of the bucket(s) to generate the data, e.g.:

$SPLUNK_HOME/bin/splunk-optimize-lex -d $SPLUNK_HOME/var/lib/splunk/main/db/hot_v1_0

We can override major breakers in a custom segmenters.conf stanza and reference the stanza in props.conf. Ensure the segmenter name is unique and remove -- from the MAJOR setting:

# segmenters.conf
[tmp_test_txt]
INTERMEDIATE_MAJORS = false
MAJOR = [ ] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + %21 %26 %2526 %3B %7C %20 %2B %3D %2520 %5D %5B %3A %0A %2C %28 %29
MINOR = / : = @ . - $ # % \\ _

# props.conf
[source::///tmp/test.txt]
SEGMENTATION = tmp_test_txt

Deploy props.conf and segmenters.conf to both search heads and search peers (indexers). With the new configuration in place, walklex should return abc--xyz in the list of terms:

abc
abc##xyz
abc$$xyz
abc%%xyz
abc--xyz
abc..xyz
abc//xyz
abc==xyz
abc@@xyz
abc\\xyz
abc__xyz
xyz

We can now use TERM and PREFIX as expected:

| tstats values(PREFIX(abc--)) as vals where index=main TERM(abc--*) by PREFIX(abc--)

abc--   vals
xyz     xyz

As always, we should ask ourselves if changing the default behavior is both required and desired. Isolating the segmentation settings by source or sourcetype will help mitigate risk.
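If it's useful, one way to confirm which segmenters and props stanzas were actually picked up after deployment is btool (a sketch; paths assume a default $SPLUNK_HOME install and the example stanza names above):

$SPLUNK_HOME/bin/splunk btool segmenters list tmp_test_txt --debug
$SPLUNK_HOME/bin/splunk btool props list "source::///tmp/test.txt" --debug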
@splunkreal, the filters are still there, just at the individual column level; you can use those to apply filters.
Still having issues trying to exclude private IPs. This works for individual IPs:

index="syslog" process="kernel" SRC!=192.168.1.160 SRC!=0.0.0.0

But I still can't exclude blocks. How can I exclude the whole 192.x.x.x range with a wildcard? Tried this:

source="udp:514" index="syslog" sourcetype="syslog" where not like(src, "%192%")

Nooby blues. Google is not my friend today.
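A sketch of two common ways to exclude a whole block, assuming the field is SRC as in the first search (the range shown is just the RFC 1918 192.168.0.0/16 block as an example):

index="syslog" process="kernel" SRC!=192.168.*

or, for proper CIDR matching after the base search:

index="syslog" process="kernel"
| where NOT cidrmatch("192.168.0.0/16", SRC)

The wildcard form is simplest; cidrmatch is handy when the block doesn't fall on a clean octet boundary.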
Hello, thanks for the solution. So the "enhanced" view removes those useful filters, strange...