Activity Feed
- Got Karma for Re: Using props/transforms to assign sourcetype and extract fields?. 01-23-2025 02:54 PM
- Got Karma for Re: Synchronizing the passwd file between Splunk servers with a shared splunk.secret. 01-09-2025 03:29 PM
- Got Karma for Re: Synchronizing the passwd file between Splunk servers with a shared splunk.secret. 12-09-2024 02:08 AM
- Got Karma for Re: I am sending a table in mail as an alert but I want to hide some fields of the table. But after hiding with fields - I am not able to access those fields in mail body with $result.field_name$. 07-03-2024 01:23 PM
- Karma Re: Expand json messages by default for patkujawa_wf. 01-18-2024 02:37 AM
- Got Karma for Re: I am sending a table in mail as an alert but I want to hide some fields of the table. But after hiding with fields - I am not able to access those fields in mail body with $result.field_name$. 11-21-2023 07:55 PM
- Got Karma for Re: How Many Escapes "\" Do I Need in .conf File Regex. 07-07-2023 12:34 AM
- Got Karma for Re: How Many Escapes "\" Do I Need in .conf File Regex. 06-12-2023 10:39 PM
- Got Karma for Re: How Many Escapes "\" Do I Need in .conf File Regex. 06-12-2023 06:45 PM
- Karma Re: How Many Escapes "\" Do I Need in .conf File Regex for emottola. 06-12-2023 01:58 PM
- Got Karma for Re: How Many Escapes "\" Do I Need in .conf File Regex. 06-12-2023 01:05 PM
- Got Karma for Re: How Many Escapes "\" Do I Need in .conf File Regex. 06-12-2023 12:11 PM
- Posted Re: How Many Escapes "\" Do I Need in .conf File Regex on Splunk Enterprise. 06-12-2023 11:09 AM
- Got Karma for Re: Cooked Connection. 05-24-2023 07:52 AM
- Got Karma for Re: Splunk UF: getting error ERROR ExecProcessor. 04-19-2023 01:31 AM
- Got Karma for Re: How to merge two fields values into a single field?. 10-07-2022 10:30 AM
- Got Karma for Re: Wrong time format in Microsoft DNS TA. 07-06-2022 01:10 AM
- Karma Re: Where to place common python libraries for tiagofbmm. 06-01-2022 07:14 AM
- Got Karma for Re: How do you restart splunk?. 02-10-2022 03:55 PM
- Got Karma for Re: Why did my cold buckets roll to frozen?. 01-03-2022 03:51 PM
Topics I've Started
No posts to display.
06-12-2023
11:09 AM
5 Karma
Just to make sure, because this is likely the most regularly confused topic in Splunk when using regexes.
First, create a clean regex on regex101.com - that means no unnecessary escapes. What is an unnecessary escape backslash? Well, if you remove it, your regex still works, and the explanation on the right for that part didn't change - it was unnecessary. Example 1: \" can be used in regex, but the backslash is unneeded. The quote does not have any special meaning in regex, so " has exactly the same effect. Example 2: If you want to match a literal asterisk, it has to be escaped as \* - because the asterisk has a special meaning in regex.
Now, when you have your clean regex - just use it as it is in any .conf file. It will work.
However - the | rex and | regex commands are different (well, anything in SPL with a regex is). Why? The SPL parser also knows characters with special meaning (e.g. quotes), and it uses the same escape character as regex - the backslash. So, to avoid strange behaviour when using regexes in your SPL, you need to escape them again. Example 1: You want to match Domain\user in your event. The regex would be Domain\\user. In SPL this would have to be Domain\\\\user - every backslash in the regex needs its own escape backslash. Example 2: You want to match "Domain\user" - the regex would simply be "Domain\\user", because quotes have no special meaning in regex. In SPL, however, this would have to be \"Domain\\\\user\" - for the reasons above, and because quotes do have a special meaning in SPL.
Addendum: When you use that last regex in the rex command, it gets put into quotes - like | rex "\"Domain\\\\User\"". Crazy, right?
PS: I know that SPL sometimes works even without the proper amount of escape backslashes - but sometimes it doesn't. I still haven't found out why. If you have the Splunk source code, send me a mail 😉
PPS: As with everything in Splunk, there's likely that one setting in that one .conf file where this does not apply, because $consistency. If I were to bet, I'd bet on something related to Windows/Powershell 😈
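To make the three escape levels concrete, here is the same match written out at each level - the stanza and field names are made up for this sketch:
# 1) Clean regex, as built on regex101 - matches "Domain\user" including the quotes:
#    "Domain\\(?<user>\w+)"
# 2) In a .conf file (e.g. transforms.conf), use it exactly as-is:
[extract_domain_user]
REGEX = "Domain\\(?<user>\w+)"
# 3) In SPL, escape the quotes and double every backslash:
| rex field=_raw "\"Domain\\\\(?<user>\\w+)\""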
11-26-2019
03:02 AM
2 Karma
You have to patch every instance that parses data that could contain such timestamps with two-digit years or epoch format.
This definitely includes all indexers and HFs. If you can be 100% sure that you will never ingest such logs on your SH, CM, DS, etc., you may be able to ignore them, but as some Linux logs on those boxes might be ingested now or in the future, I'd advise patching them too. Maybe you can use the opportunity to update Splunk as well.
Be aware that you might even have to patch the UFs if you use features like INDEXED_EXTRACTIONS or force_local_processing. An example of a built-in sourcetype that would be affected is CSV.
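For reference, a UF only parses timestamps itself when a configuration like the following is in play - the sourcetype name here is made up, the settings are real props.conf options:
# props.conf on the UF (illustrative sketch)
[my_csv_feed]
INDEXED_EXTRACTIONS = csv
# force_local_processing = true would likewise move parsing onto the UF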
03-22-2019
05:03 PM
As far as I know, the action.lookup is only available in systemd >=226. CentOS/RHEL 7 ship with 219, so if you're using that, this is the reason. Current Debian stable ships 232, so you should be good there 🙂
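You can check which version a box is running with:
systemctl --version
The first line of the output reads e.g. systemd 219 on CentOS/RHEL 7.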
There's an even easier option available now: a Python project called splunksecrets that can be installed via pip and gives you an easy CLI to encrypt and decrypt new and old secrets:
https://pypi.org/project/splunksecrets/
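Getting started is a one-liner - check the project page for the current subcommands and flags:
pip install splunksecrets
splunksecrets --help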
03-19-2019
05:06 PM
Hey @ChrisG - care to update this post with the release code names since 6.4? 😉
03-19-2019
03:15 AM
1 Karma
As this is still the #1 Google search result for this question, I'll cross-reference an Answers solution to this issue:
https://answers.splunk.com/answers/320626/what-is-the-curl-command-used-on-the-deployer-to-a.html#answer-321559
It seems that this API endpoint is not documented in the REST API docs though.
02-27-2019
07:46 AM
Tried it in direct comparison to the Splunk logo - this is exactly right as far as I can see.
02-19-2019
09:39 AM
So far I cannot exactly confirm this. My "not in RAM" auto lookup works, but it seems to be applied AFTER the "in RAM" lookups. In my case, the latter depends on a field from the former, and therefore it fails.
02-18-2019
06:56 AM
2 Karma
As this is still a top result for this issue, I'd like to add:
In general, it works.
But - if a lookup is larger than the max_memtable_size in limits.conf (default: 10 MB), it will be indexed to disk. This seems to result in it being applied later - so if the lookup a in the above example is too big, this won't work anymore.
Raising the limit will fix the issue.
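For reference, the stanza looks like this - the value is illustrative (50 MB), size it above your largest automatic lookup file:
# limits.conf on the search head
[lookup]
max_memtable_size = 52428800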
Thanks a ton to @starcher for pointing this out to me!
01-10-2019
07:15 AM
This seems to be the only place where this information is to be found, thanks @cbtadmin!
It can be checked like this:
./splunk cmd openssl x509 -text -in /opt/splunk/etc/auth/your_server_cert_and_key.pem
You should see a line like this:
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
I know this is an old question, but it was the first Google result, so I thought I'd give it an update.
Sharing the passwd file works without a shared splunk.secret.
Let's take an example line from a passwd file:
:admin:$6$sG0AOkrCThdXQjTF$5Aiq4/slyL4ve0eKrP/iIUP3kE15S2aJOBWrn1YXZzp3o8eqs1luBK8XBBX93ZHg1y6X.Bs5NqTDB98OqSX6Z1::Administrator:admin:changeme@example.com:::32683
The $6$ at the beginning indicates a SHA512 hash.
A so-called salt is saved between the $6$ and the next $ - in this case sG0AOkrCThdXQjTF. It is selected randomly and is different each time the password is set or changed. Read more about why salts are used here: Salt@Wikipedia
The final string after the $ is the actual password hash. Even if you set the same password every time, the stored hash will be different each time, because a new salt is randomly chosen and added to the password before it's hashed. In this case, the hash is 5Aiq4/slyL4ve0eKrP/iIUP3kE15S2aJOBWrn1YXZzp3o8eqs1luBK8XBBX93ZHg1y6X.Bs5NqTDB98OqSX6Z1.
However, when a user tries to log in, Splunk takes the salt stored in the passwd file instead of selecting a random one, adds it to the password the user entered, and runs that string through the SHA512 function. If the result of that operation matches the string after the $, it's considered the right password.
That is the reason that the same password results in different strings on different instances, but you can just copy them over.
Again - no shared splunk.secret required.
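You can reproduce the mechanism outside Splunk with OpenSSL 1.1.1+ - here s3cr3t is a made-up password, and the salt is the one from the passwd line above:
openssl passwd -6 -salt sG0AOkrCThdXQjTF s3cr3t
The output starts with $6$sG0AOkrCThdXQjTF$ - the same salt plus the same password always yields the same hash, which is exactly the comparison Splunk performs at login.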
Hope that helps @dflodstrom!
11-16-2018
02:58 AM
Thanks a lot. Minor typo - step 7 should be "disable maintenance-mode", not enable.
Is there a reason you disable and enable maintenance-mode between every indexer change? Can't you keep it on, change them all, and then disable it?
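For reference, the commands on the cluster master would be:
splunk enable maintenance-mode
splunk disable maintenance-mode
i.e. run the first once before the round of changes and the second once after the last indexer is done.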
10-31-2018
01:29 AM
@gjanders: Did you ever get an update on that enhancement request?
10-20-2018
10:38 AM
As a former networking guy, I hate that PowerShell by default does not show me the actual failure reason (refused/timeout/etc.), but only says it fails. Adding -Debug, I have to press Enter 4 times before the call finishes.
If anybody knows a workaround for that mess, please let me know!
06-11-2018
12:54 PM
2 Karma
Yeah, the | rex command is a little tricky, as stuff has to be double-escaped. Try replacing \\ with \\\\ - that should work.
Hint: This is usually not necessary in config files, but it is in searches, as those have to be escaped once for the SPL parser and once for the regex parser.
Hope that helps - if it does I'd be happy if you would upvote/accept this answer, so others could profit from it. 🙂
06-11-2018
06:57 AM
4 Karma
The recommended best practice for your case - and pretty much all syslog setups - is to send the syslogs to a central server, where a syslog daemon (preferably syslog-ng or rsyslog) collects all data and writes it to disk, split by hostname, date, etc.
You can then have a UF (or HF, if necessary) monitor those files and forward them to an indexer.
This is best practice for a bunch of reasons:
- Less prone to data loss, because syslog daemons are restarted less frequently, and restart much faster than Splunk
- Less prone to data loss, because all data is immediately written to disk, where it can be picked up by Splunk
- Better performance, because syslog daemons are better at ingesting syslog data than Splunk TCP/UDP inputs
In this case, a network failure between UF/HF and IDX won't cause any problems, because the data is still being written to disk - the UF/HF will just continue to forward it when the connection is available again. I'd prefer this over Splunk persistent queues because it also makes troubleshooting much easier: the data is on disk, so you can look at it. If something is wrong, you can check whether you received bad data or whether something went wrong later in your Splunk parsing.
You can find more details in the .conf 2017 recording/slides for "The Critical Syslog Tricks That No One Seems to Know About", or in the blog posts Using Syslog-ng with Splunk and Splunk Success with Syslog.
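A minimal sketch of what that can look like - ports, paths, index and sourcetype here are all assumptions, not requirements:
# syslog-ng.conf on the central syslog server
source s_net { udp(port(514)); tcp(port(514)); };
destination d_files { file("/var/log/remote/${HOST}/${YEAR}-${MONTH}-${DAY}.log" create_dirs(yes)); };
log { source(s_net); destination(d_files); };
# inputs.conf on the UF monitoring those files
[monitor:///var/log/remote/*/*.log]
host_segment = 4
sourcetype = syslog
index = network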
Hope that helps - if it does I'd be happy if you would upvote/accept this answer, so others could profit from it. 🙂
06-07-2018
10:39 AM
Well, with the data already gone, it might be difficult to determine the cause.
However - if it still grows fast now, you could simply take a look at what kinds of messages appear very frequently, e.g. using the Patterns tab.
This would most likely give you an idea why this has happened.
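If you prefer SPL over the UI, something like this gives a quick overview of where the volume comes from (the time range is just an example):
index=* earliest=-24h | stats count by index, sourcetype, source | sort - count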
Also - is this a personal instance, or a corporate one? Production, dev or test? Available from the internet, or LAN only?
Hope that helps - if it does I'd be happy if you would upvote/accept this answer, so others could profit from it. 🙂
06-03-2018
09:16 AM
I've seen this happen sometimes, but I have never figured out why.
If you have a valid support contract, I'd say this would be the time to make use of it. 😉
06-02-2018
10:23 AM
The IPs used in the example are "real" IPs, meaning they are valid public IPv4 addresses. If you choose the right addresses for your example data, you can map them to a location - it just depends on the IPs you use and whether they're available in the GeoIP database.
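You can test what the bundled GeoIP database knows about any address - 8.8.8.8 is just an example here:
| makeresults | eval ip="8.8.8.8" | iplocation ip | table ip, City, Country, lat, lon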
06-02-2018
08:10 AM
Hey,
Sure you can. transaction combines all events of a transaction into one event, so if you append | search Type=*x* after the transaction, it should do what you want.
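For example, assuming the transaction is grouped on a made-up session_id field:
... | transaction session_id | search Type=*x*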
05-29-2018
04:13 PM
It would actually be slower, because the forwarding causes some overhead.
You can just have the Splunk instance on that server do the input.
Consider the Universal Forwarder to be a subset of a full Splunk instance. A full Splunk instance can do everything a UF can do, at the same speed - but a UF can only do a subset of what full Splunk can do. The UF is simply the lightweight version, and is therefore deployed on servers whose primary task isn't Splunk but something else.
Therefore - just do whatever you want to do using the full Splunk instance.
Hope that helps - if it does I'd be happy if you would upvote/accept this answer, so others could profit from it. 🙂
05-29-2018
12:14 PM
Can you show an example of what you have right now, and how you would like it to be?
05-29-2018
12:05 PM
Is this a single level relation?
Like, do all jobs belong to some parent job, and that's it? Or do some jobs have child jobs, and those have child jobs, and so on?