All Posts

Query:

index=new "application status" AND (Condition=Begin OR Condition=Done)
| rex field=_raw "DIDS\s+\[(?<data>[^\]]+)\]"
| dedup data
| timechart span=1d count by application

Result:

_time       application1  application2
2022-01-06  10            20
2022-01-07  12            14
2022-01-08  18            30

I want to include the Condition field in the table as well. How can I do that?
Is there anything I can do to make the error go away?
Hey folks, breaking news for the TERM/PREFIX enthusiasts! Brace yourselves: our TERM searches cannot find Punycode-encoded domains! Example: xn--bcher-kva.de (https://en.m.wikipedia.org/wiki/Punycode)
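For the curious, the double hyphen comes straight from Punycode's ACE prefix. A quick round-trip using only Python's standard-library idna codec shows why every internationalized label contains a literal "--":

import codecs  # the "idna" codec ships with the standard library

# Punycode (IDNA) form of an internationalized domain. The ASCII form
# starts with the ACE prefix "xn--", so every encoded label contains a
# double hyphen -- exactly the sequence the default major breakers split on.
ascii_form = "xn--bcher-kva.de"

# Decode the ACE form back to Unicode, then re-encode it.
unicode_form = ascii_form.encode("ascii").decode("idna")
print(unicode_form)                 # bücher.de
print(unicode_form.encode("idna"))  # b'xn--bcher-kva.de'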
Well, a double hyphen is really a poor man's approximation of an em dash or en dash, and I don't recall seeing them outside of TeX sources, so I was pretty surprised to find one in segmenters. Anyway, punctuation is not part of the script. Many languages using Latin script use (slightly) different punctuation systems, and languages using different scripts (like Cyrillic) use very similar punctuation. But we're drifting heavily off-topic.
It was just a Wikipedia joke: "In Latin script, the double hyphen ⹀ is a punctuation mark that consists of two parallel hyphens. It was a development of the earlier double oblique hyphen ...." I'm assuming an early developer analyzed a suitable corpus of log content and determined that a double hyphen or long dash should be considered a major breaker.
Double oblique hyphen is U+2E17 and looks like this: ⸗
Somewhere in Splunk history, there's a developer who did the lexicographically correct thing knowing it would stymie future Splunkers. Let's raise a glass to the double oblique hyphen (thanks, Wikipedia)!
My two cents: most if not all small home routers use just the Linux kernel, some typical Linux networking tools, and a custom WebUI. (Legality here is sometimes questionable.) So it's not "Asus router logs"; it's just Linux logs - in this case, normal logs from the LOG module of iptables.
Nice one. I even checked the specs for segmenters.conf, and while I noticed the single dash as a minor segmenter, I completely missed the double dash. (Though it is "hidden" relatively far into the default declaration and surrounded by all those other entities.)
Nice explanation and nice way to get values to work with tstats!
Hi @PavelP,

This isn't an issue with TERM or PREFIX but with how Splunk indexes abc--xyz. We can use walklex to list terms in our index:

| walklex index=main type=term
| table term

We'll find the following:

abc
abc##xyz
abc$$xyz
abc%%xyz
abc..xyz
abc//xyz
abc==xyz
abc@@xyz
abc\\xyz
abc__xyz
xyz

Note that abc--xyz is missing. Let's look at segmenters.conf. The default segmenter stanza is [indexing]:

[indexing]
INTERMEDIATE_MAJORS = false
MAJOR = [ ] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29
MINOR = / : = @ . - $ # % \\ _

Note that -- is a major breaker. If we index abc-xyz with a single hyphen, we should find abc-xyz in the list of terms:

abc
abc##xyz
abc$$xyz
abc%%xyz
abc-xyz
abc..xyz
abc//xyz
abc==xyz
abc@@xyz
abc\\xyz
abc__xyz
xyz

If walklex returns a missing merged_lexicon.lex message, we can force optimization of the bucket(s) to generate the data, e.g.:

$SPLUNK_HOME/bin/splunk-optimize-lex -d $SPLUNK_HOME/var/lib/splunk/main/db/hot_v1_0

We can override major breakers in a custom segmenters.conf stanza and reference the stanza in props.conf. Ensure the segmenter name is unique and remove -- from the MAJOR setting:

# segmenters.conf
[tmp_test_txt]
INTERMEDIATE_MAJORS = false
MAJOR = [ ] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + %21 %26 %2526 %3B %7C %20 %2B %3D %2520 %5D %5B %3A %0A %2C %28 %29
MINOR = / : = @ . - $ # % \\ _

# props.conf
[source::///tmp/test.txt]
SEGMENTATION = tmp_test_txt

Deploy props.conf and segmenters.conf to both search heads and search peers (indexers).
With the new configuration in place, walklex should return abc--xyz in the list of terms:

abc
abc##xyz
abc$$xyz
abc%%xyz
abc--xyz
abc..xyz
abc//xyz
abc==xyz
abc@@xyz
abc\\xyz
abc__xyz
xyz

We can now use TERM and PREFIX as expected:

| tstats values(PREFIX(abc--)) as vals where index=main TERM(abc--*) by PREFIX(abc--)

abc--    vals
xyz      xyz

As always, we should ask ourselves if changing the default behavior is both required and desired. Isolating the segmentation settings by source or sourcetype will help mitigate risk.
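To see intuitively why a -- major breaker makes abc--xyz (and therefore any xn-- label) unfindable as a single term, here's a toy model of index-time segmentation. This is an approximation using a small subset of the default breakers, not Splunk's actual code:

import re

# Toy segmentation: major breakers split raw text into tokens; minor
# breakers additionally emit sub-terms alongside each full token.
MAJOR = re.compile(r"(?:--|[\s\[\]<>(){}|!;,'\"*&?+])+")  # note the -- entry
MINOR = re.compile(r"[/:=@.$#%\\_-]")

def index_terms(raw: str) -> set:
    terms = set()
    for token in MAJOR.split(raw):
        if not token:
            continue
        terms.add(token)  # full token survives, e.g. abc__xyz
        terms.update(t for t in MINOR.split(token) if t)  # minor sub-terms
    return terms

print(sorted(index_terms("abc--xyz")))  # ['abc', 'xyz'] -- no abc--xyz!
print(sorted(index_terms("abc__xyz")))  # ['abc', 'abc__xyz', 'xyz']

Because -- is a major breaker, abc--xyz never enters the lexicon as one term, while abc__xyz (underscore is only a minor breaker) does.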
@splunkreal, the filters are still there, but at the individual column level; you can use those to apply filters.
Still having issues trying to exclude private IPs. This works for individual IPs:

index="syslog" process="kernel" SRC!=192.168.1.160 SRC!=0.0.0.0

But I still can't exclude blocks. How can I exclude the 192.x.x.x range with a wildcard? Tried this:

source="udp:514" index="syslog" sourcetype="syslog" where not like(src, "%192%")

Nooby blues. Google is not my friend today.
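In SPL, the usual tool for this is the cidrmatch() eval function, e.g. ... | where NOT cidrmatch("192.168.0.0/16", SRC), or an indexed-field wildcard such as SRC!=192.168.* (the field name SRC is taken from the search above). As a sanity check of the CIDR logic itself, here's a small sketch using Python's standard library:

import ipaddress

# The RFC 1918 private ranges -- the same membership test that
# SPL's cidrmatch("192.168.0.0/16", SRC) performs for one block.
PRIVATE = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(addr: str) -> bool:
    """Return True if addr falls inside any RFC 1918 block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE)

print(is_private("192.168.1.160"))  # True  -> would be excluded
print(is_private("8.8.8.8"))        # False -> would be kept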
Hello, thanks for the solution. So the "enhanced" view removes those useful filters; strange...
That is interesting. I didn't have the opportunity to test it, but if it is so, it looks like support case material. See my other reply.
As far as I remember, there are two kinds of accounts with names ending in $ in Windows (for other systems it's highly unlikely that an account will be named this way, but it would be nice to account for that): Managed Service Accounts (which @gcusello already mentioned) and computer accounts. Both of those account types are authenticated without using interactive authentication modes, so they're irrelevant to the events you're looking for in this dataset.
Again, that's still a relatively unusual setup, because normally you'd have a single bigger license, set up a License Manager, and split the license "internally" between indexers. If you managed to get two separate smaller licenses "counted" as one, each of them might indeed be non-enforcing. If you open Settings -> Licensing and click "All license details", you'll see whether your installed license has "ConditionalLicensingEnforcement" or not. If it's indeed non-enforcing, it will... well, not enforce license limits. (Remember, though, that if you keep exceeding your license entitlement, it might show up, for example, in the diag package when you create a support case, and that might lead to some uncomfortable questions. ;-))
If you're not sure about the assumptions, consider sharing the inputs.conf stanza so others can check it for you. Can you search for other data sources from the same UF? Is the monitored file being updated? How are you trying to search for the data? Try using earliest=-1y latest=+1y in case the timestamps are incorrect.
Hi @splunkreal, user names ending with $ are Windows service accounts, and usually they aren't relevant in authentication monitoring. Ciao. Giuseppe
To give you some more context, the 50GB is only part of the complete license. We are migrating into a newly built environment: we moved 100+GB from the old to the new environment and started migrating, which is why we have been exceeding the license on the old environment only in the last few days. So it's not one big license but two differently sized ones. Both are indeed non-enforcing.