Activity Feed
- Karma Re: ERROR: Failed to migrate to storage engine wiredTiger, reason=[blank] for dbray_sd. 03-24-2022 09:03 AM
- Karma Why receiving an ERROR when updating mmapv1 storage engine to wiredTiger? for kserverman. 03-24-2022 09:03 AM
- Karma Re: Splunk Health Alerts after upgrade to 8.2 for gjanders. 12-20-2021 05:52 AM
- Karma Re: Combine events based on shared session field value efficiently in verbose log source for DalJeanis. 06-05-2020 12:49 AM
- Karma Why do I get "ERROR: cannot concatenate 'str' and 'NoneType' objects" whenever I try to upload a log file? for hayloiuy. 06-05-2020 12:48 AM
- Karma Re: How to prepend a value to a field at search-time to sanitize malicious URLs? for woodcock. 06-05-2020 12:48 AM
- Karma Why is the splunkd process consuming 100% swap memory, and what makes the splunkd process get killed so frequently? for Hemnaath. 06-05-2020 12:48 AM
- Karma Re: What happens if I forward the exact same data to an index twice? for yannK. 06-05-2020 12:48 AM
- Karma How to monitor Fedora OS logs using journald (binary file) in Splunk? for cab007. 06-05-2020 12:48 AM
- Karma In time range picker, is the time range "Last 1 Day" the same as "Yesterday"? for SplunkLunk. 06-05-2020 12:48 AM
- Karma Re: How to parse more complex xml files? for niketn. 06-05-2020 12:48 AM
- Karma SSL is not working in splunkweb in Chrome after installing SHA-2 certificate. for sathyasubburaj. 06-05-2020 12:48 AM
- Karma Re: Sending logs over scp to heavy forwarder, why does splunk mangle and improperly break some of the events? for MuS. 06-05-2020 12:48 AM
- Karma Re: Sending logs over scp to heavy forwarder, why does splunk mangle and improperly break some of the events? for mattymo. 06-05-2020 12:48 AM
- Got Karma for splunk support for zfs linux 6.4 won't start. 06-05-2020 12:48 AM
- Got Karma for splunk support for zfs linux 6.4 won't start. 06-05-2020 12:48 AM
- Got Karma for splunk support for zfs linux 6.4 won't start. 06-05-2020 12:48 AM
- Got Karma for splunk support for zfs linux 6.4 won't start. 06-05-2020 12:48 AM
- Got Karma for Is it possible to selectively line merge syslog-ng truncated events?. 06-05-2020 12:48 AM
- Karma How to extract fields from JSON data in Splunk? for kotig. 06-05-2020 12:47 AM
Topics I've Started
12-11-2019 10:35 AM
This was an AppD problem. Now that it has been fixed, this is no longer relevant.
04-30-2019 11:45 AM
All alerts come into splunk as critical, regardless of their actual severity. Is there a fix for this?
04-25-2018 06:18 AM
This got me on the right path, thank you!
UPDATE: Removed %N; it is not actually seconds.
UPDATE 2: It turns out some timestamps start 19 characters in; I have updated the props below to reflect this. The log source timestamps are a mess (some local TZ, others UTC) and splunk isn't able to differentiate them, so I removed the TZ from the format. Getting this really clean will require a few separate source types.
Timezone handling is still weird and will need more tuning on my end, but this ended up working for parsing:
SHOULD_LINEMERGE = false
TIME_PREFIX = ^\w+\s{1,5}\w+:(\s+\w+|\w+):
TIME_FORMAT = %Y-%m-%d %Hh%M.%S
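Splunk's TIME_FORMAT uses strptime-style directives that Python shares, so the format can be sanity-checked outside splunk. A quick sketch (the sample value is illustrative, matching the timestamp shape from the log source):

```python
from datetime import datetime

# Check the TIME_FORMAT above against a sample timestamp of the documented
# shape ("2018-04-23 23h04.55"); 'h' and '.' are literal characters.
TIME_FORMAT = "%Y-%m-%d %Hh%M.%S"
parsed = datetime.strptime("2018-04-23 23h04.55", TIME_FORMAT)
print(parsed)  # 2018-04-23 23:04:55
```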
04-24-2018 02:30 PM
I'm inputting openvas logs into splunk. Works great for .messages, not so much for .log files. Below is how the lines look:
Info<18 chars pos>2018-04-23 23h04.55 utc:31730 SOME MESSAGE
Info<18 chars pos>2018-04-24 10h25.34 CDT:539 SOME MESSAGE
So the timestamp starts at the 18th character. Splunk cannot read it no matter what I try.
Timestamp prefix regex:
^.{18}\K
I also tried the more pythonic way of doing this regex:
^.{18}
Still can't get it detected in splunk, so I added strftime:
%Y-%M-%d %Hh%M.%S %Z:%s
Any ideas on how to get the timestamp recognized correctly in splunk?
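For what it's worth, the 18-character skip can be verified outside splunk. A Python sketch (the literal `<18 chars pos>` text is a stand-in that happens to pad `Info` to exactly 18 characters; the regex is the equivalent of `TIME_PREFIX = ^.{18}`):

```python
import re
from datetime import datetime

# Stand-in line: "Info" plus the 14-character "<18 chars pos>" marker makes
# exactly 18 characters before the timestamp, matching the layout above.
line = "Info<18 chars pos>2018-04-24 10h25.34 CDT:539 SOME MESSAGE"

# Skip 18 characters, then capture the timestamp portion without the
# troublesome timezone token.
m = re.match(r"^.{18}(\d{4}-\d{2}-\d{2} \d{2}h\d{2}\.\d{2})", line)
print(m.group(1))  # 2018-04-24 10h25.34
print(datetime.strptime(m.group(1), "%Y-%m-%d %Hh%M.%S"))  # 2018-04-24 10:25:34
```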
04-04-2018 07:47 AM
Take a look at these:
https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs
https://www.splunk.com/blog/2015/04/30/integrating-splunk-with-docker-coreos-and-journald.html
The latter will do what you want, works great (just omit the Docker lines for the systemd unit). You basically write the journal to a json file. Splunk loves json and it will map the fields perfectly. If you don't restart the service much, make sure you rotate or clean the file periodically, or it will continue to grow.
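A rough sketch of the systemd unit that approach ends up with (unit name, file path, and dependencies here are hypothetical; the real details are in the blog post above):

```ini
# /etc/systemd/system/journald-json.service (hypothetical name/path)
[Unit]
Description=Export journald to JSON for splunk to monitor
After=systemd-journald.service

[Service]
# Stream the journal as JSON lines to a file; point a splunk monitor at it.
ExecStart=/bin/sh -c 'journalctl -f -o json >> /var/log/journald.json'
Restart=always

[Install]
WantedBy=multi-user.target
```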
11-27-2017 12:28 PM
This isn't exactly what I was going for, but it did encourage me to make the searches a little more efficient, thank you. The problem with this approach is that it starts by pulling in all of the common events for the initial search, which makes it really slow. I want to take the session values from the rare events and use them to filter the common events, minimizing the amount of index parsing needed (and thus speeding things up).
The format command is helpful, thank you for mentioning that also.
11-22-2017 01:06 PM
I have an index with an excessive amount of logs from an application. The application divides these by event types contained in one index. The event type I'm interested in reporting on does not have a high volume of events, so I've started with that.
Typically I'll have 15 events in an hour for my event type I'm most interested in. I'd like to take the session value (common across every event type) from that search and use it to search another event type. I've tried transaction, which works well but it's incredibly slow because it's not distinguishing that I only care about retrieving those 15 or so event session values from the first search.
An example is this: a user logs in, and this is noted in the initialize_event log type with things like username, src_ip, useragent, and a unique session value. Five minutes later, they buy shoes (shoe_event). The shoe_event logs have data about the type of shoe and share the session value, but don't have the username, src_ip, or useragent. I'd like the shoe type info merged with the initial login information based on the matching session value, so that it appears as one event and I can report across event types. Ideally splunk would take the results of the initial search and only look for those session values (I plan on also excluding the shoe_event type from the secondary search as well).
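One common pattern for this, sketched in SPL (the index, eventtype, and field names here are stand-ins for the real ones): feed the rare events in as a subsearch so the outer search only scans events for those sessions, then merge by the shared field.

```
index=app (eventtype=initialize_event OR eventtype=shoe_event)
    [ search index=app eventtype=initialize_event
      | fields session_id
      | format ]
| stats values(*) AS * by session_id
```

The subsearch returns only the ~15 session values, `format` turns them into an OR clause for the outer search, and `stats values(*) by session_id` collapses the login and shoe events for each session into one row.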
- Tags:
- splunk-enterprise
08-04-2017 11:54 AM
I did get a vague explanation from support about larger events not being processed correctly due to some logic on the backend; a fix for that may resolve my issue.
Also, I updated the answer; it appears this is specific to the forwarder component of splunk. As an FYI, I have not tested the latest 6.6.2 UF (the problem was present on both when this started).
08-04-2017 08:55 AM
Upgraded to 6.6.2 and it now works fine (48 hours of testing so far, anyway).
EDIT: To clarify, I only updated the heavy forwarder to 6.6.2, and this appears to have fixed it.
07-11-2017 07:02 AM
Thanks, that works great for the statistics tab, but the events view is still one big XML blob and fields aren't extracting at all. I'm going to try a few other settings to see if splunk will just recognize the KV pairs. I suppose I could evaluate conditions off of your solution, but that seems overly complex for what I'm trying to do.
Here is what I used after the raw data search (works great for a table):
| spath path=Jabber.userConfig{@name} output=names | spath path=Jabber.userConfig{@value} output=values | table names values
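If you want the pairs reunited per attribute rather than as two parallel multivalue columns, one option (a sketch building on the query above; `name`/`value` are just output field names) is mvzip/mvexpand:

```
| spath path=Jabber.userConfig{@name} output=names
| spath path=Jabber.userConfig{@value} output=values
| eval pairs=mvzip(names, values, "=")
| mvexpand pairs
| rex field=pairs "(?<name>[^=]+)=(?<value>.*)"
| table name value
```

Each `userConfig` element becomes its own row with its name and value side by side, which also makes the events view less of a wall of XML.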
07-10-2017 01:54 PM
I'm working with some configuration files I'd like splunk to monitor for changes, specifically Cisco Jabber on a Windows box. When I import one into my dev box (with KV_MODE=xml, the encoding set, and linemerge=true), splunk doesn't know what to do with the key/value pairs.
Do I need to use regex to grab these fields? I was really hoping to just import them and have splunk turn each name/value pair into fields. My absolute last resort would be using python to convert them to JSON for splunk (not ideal).
Here is a snippet of the config file I'm trying to get splunk to recognize:
<?xml version="1.0" encoding="UTF-8"?>
<Jabber>
<userConfig name="somename" value="true"/>
<userConfig name="stores" value="filename:24:filename2:76"/>
...
</Jabber>
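For reference, a hypothetical props.conf sketch for a file like this (the sourcetype name is made up); it keeps the whole file as one event so that KV_MODE=xml can extract the attributes at search time:

```ini
# props.conf sketch -- assumptions: one config file = one event,
# sourcetype name is illustrative
[cisco:jabber:config]
CHARSET = UTF-8
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <\?xml version
KV_MODE = xml
```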
06-15-2017 10:22 AM
Quick update: I have routed the original logs to a dev box (not a cluster; a combined indexer and search head on 6.6.1). The problem is reproducible. I have turned on debug logging, and support is looking at sending this to an engineering team. I won't say the b word again, but man, this has been a real pain in the rear to get such simple logs into splunk!
@mmodestino, do you have a case number on your end? Support is interested in merging cases if another customer is having issues with the ESA logs via scp or the malformed events.
06-12-2017 12:58 PM
I checked all log files for those, no mention of cache AND full, or TcpOutputProc.
06-10-2017 01:51 PM
Nope, still happening.
06-10-2017 01:36 PM
I just had autoLB configured (an old, deprecated setting). I see I don't need it, so I removed it. It appears forceTimebasedAutoLB defaults to false, but I set it explicitly in the outputs anyway. Neither change made a difference; the problem persisted.
Trying the EVENT_BREAKER_ENABLE option on the HF now.
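For anyone following along, the setting being tested looks roughly like this in props.conf on the forwarder (the sourcetype stanza name is hypothetical):

```ini
# props.conf sketch -- stanza name is illustrative; EVENT_BREAKER settings
# control how the forwarder chunks the stream at event boundaries
[esa:logs]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```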
06-09-2017 06:54 PM
Thanks, I didn't think of HPN patches for splunk, but I've used them before for other large transfers. I'll give that, and the configuration suggestions you mentioned, a shot for sure.
Yeah, it's a cloud appliance, so no dice sending that stuff in the clear. I use TLS for the syslog-ng receiver when I can, but Cisco doesn't support it for sending from their appliances. It's pretty well locked down as an appliance, otherwise I'd say TLS netcat would be worth a go too.
I understand the ssh performance angle, but again, the fact that elasticsearch's beats handle it without issue really points me back to splunk as the problem. And the WSA is on prem (also doing scp, because of syslog message length limitations) and doesn't have these issues. So it's possible splunk just doesn't like how long it takes to fully copy the file via scp.
06-09-2017 06:49 PM
I do have "autolb" set in the outputs conf, but it seems 6.6.1 doesn't really support that anymore... Is that what you're referring to?
We have two indexers; when PS set this up, it was before 6.4 came along with better support.
06-09-2017 11:04 AM
I added a script that concatenates the files into their own respective log files once they haven't been modified for a minute, then deletes the originals (only if step 1 succeeds). This leaves splunk following complete log files, and logrotate handles cleanup. It seems to have worked for several hours.
The fact that elasticsearch can monitor the files just fine, and that splunk works fine with a buffer, tells me there is some kind of bug in the heavy forwarder's file monitoring when receiving these types of logs over scp. I will keep this open and also push to have my ticket resolved appropriately.
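A minimal sketch of that consolidation step (Python stand-in for the actual script; the one-minute idle threshold is from the description above, and directory/log paths plus any cron scheduling are assumptions):

```python
import os
import shutil
import time

def consolidate(src_dir, dest_log, min_idle_secs=60):
    """Append every file in src_dir that has been idle for at least
    min_idle_secs to dest_log, deleting each source file only after its
    append succeeded. splunk then monitors dest_log, a stable file that
    is never mid-scp-transfer."""
    cutoff = time.time() - min_idle_secs
    for name in sorted(os.listdir(src_dir)):
        path = os.path.join(src_dir, name)
        if not os.path.isfile(path) or os.path.getmtime(path) >= cutoff:
            continue  # still being written (or not a regular file); skip
        with open(dest_log, "ab") as dest, open(path, "rb") as src:
            shutil.copyfileobj(src, dest)
        os.remove(path)  # only reached if the append above succeeded
```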
06-07-2017 01:55 PM
Sure thing. It's happening on both indexers, and the index time is always a few seconds after the original, full event. Splunk queues the fragment up as an event without a timestamp (an error shows up in splunkd.log) and, because it couldn't find one, assigns a later timestamp based on the previous event. Hopefully that answers your question.
I agree, I'll work on a script to send these over to another folder after modtime hits a certain point. Sounds like a reasonable temporary workaround until a solution is found.
06-07-2017 12:26 PM
When I uploaded the file to the dev box, it works fine. I think you're on to something with the ESA uploading several files via scp and splunk ingesting them in near real time. I used time_before_close=120 and it didn't do anything.
I just had a short file come in while tailing, about 10 lines, that splunk mangled twice. tail -f showed the file as it is on the filesystem (correct), so my tailing isn't part of the problem.
What's odd is that the WSA (same Cisco appliance OS) sends its logs over in a similar fashion and doesn't have this problem at all (we've had it running for years). Not sure why it's suddenly a problem now.
06-07-2017 08:01 AM
Thanks for the suggestion; this also doesn't work (events still break and end randomly).
I'm thinking this is a bug in the monitoring feature on the heavy forwarder, but I'm open to more suggestions.
As a side note, elasticsearch and filebeat sending the events from the same box don't have this problem at all, which seems indicative of a splunk bug. Either way, once I find something that works (through support, the community, or just banging my head against the splunk wall until it works), I'll post it here.
06-06-2017 01:28 PM
I threw in one file parsed by splunk as an example. The top entry is bad (there's also an unable-to-parse-timestamp error in splunkd.log).
When I say no line breaking, I mean all events are always one line; splunk is doing the line breaking (why, I don't know, which is why I'm here). Support is scratching their heads on this so far, too.
06-06-2017 12:19 PM
Changed in edit above.
06-06-2017 10:02 AM
Nope, still happening.
06-06-2017 09:48 AM
I've tried time_before_close = 120, which didn't do anything.
I'll try the multiline option, though all events are one line and I have line merging disabled (as it should be).
I've turned time_before_close back on and added the multi-line option; I'll see what happens.