Activity Feed
- Got Karma for Re: Adding an Index in Distributed Setup. 08-01-2023 05:01 AM
- Got Karma for Re: How to troubleshoot why 1 of 2 files is no longer getting indexed after updating glibc and restarting our heavy forwarders using Splunk 5.0.11?. 06-05-2020 12:47 AM
- Got Karma for Re: How to troubleshoot why 1 of 2 files is no longer getting indexed after updating glibc and restarting our heavy forwarders using Splunk 5.0.11?. 06-05-2020 12:47 AM
- Karma Re: Troubleshooting high Search Head CPU for bandit. 06-05-2020 12:46 AM
- Karma Re: Include searched date range in search output? for bbingham. 06-05-2020 12:46 AM
- Karma Can PDF reports be created & saved without emailing? for kbb. 06-05-2020 12:46 AM
- Karma Re: Restrict Index access for GKC_DavidAnso. 06-05-2020 12:46 AM
- Karma Re: Restrict Index access for _d_. 06-05-2020 12:46 AM
- Karma Re: indexes.conf location for lguinn2. 06-05-2020 12:46 AM
- Karma Re: Timezones - what am i missing? for kristian_kolb. 06-05-2020 12:46 AM
- Karma Re: max_content_length error for Ellen. 06-05-2020 12:46 AM
- Karma Re: Troubleshooting high Search Head CPU for hexx. 06-05-2020 12:46 AM
- Karma Handling Data with multiple formats for jhallman. 06-05-2020 12:46 AM
- Karma Re: Did _time format (when displaying it) change? for MHibbin. 06-05-2020 12:46 AM
- Karma suppression of events for Michael_Schyma1. 06-05-2020 12:46 AM
- Got Karma for Troubleshooting high Search Head CPU. 06-05-2020 12:46 AM
- Got Karma for enable boot start at install time. 06-05-2020 12:46 AM
- Got Karma for Re: Adding an Index in Distributed Setup. 06-05-2020 12:46 AM
- Got Karma for Re: Adding an Index in Distributed Setup. 06-05-2020 12:46 AM
- Got Karma for Restrict Index access. 06-05-2020 12:46 AM
Topics I've Started
02-13-2015
10:47 AM
2 Karma
I ended up working with Splunk Support on this one. For reasons neither of us can pinpoint, it seems that rebooting the whole server that the Heavy forwarder ran on stopped proper timestamp parsing for the events from that one logfile.
Explicitly specifying the timezone resolved the issue.
02-02-2015
11:05 PM
A little more info: I saw references to my missing sourcetype in the metrics.log on one of the source servers. Also, hitting /services/admin/inputstatus/TailingProcessor%3AFileStatus on a source server showed the files I am looking for as having been read to 100% completion.
And I checked the per_sourcetype_thruput on the indexers, and that seems to show no difference in volume from previous days. However, I absolutely do NOT see any indication in the index itself that the data is present. I even searched index=* just to be sure.
Now I'm really puzzled....
02-02-2015
06:22 PM
Hi. This is regarding Splunk 5.0.11 Universal Forwarder and Heavy Forwarder.
We rebooted 2 Heavy Forwarders today (after updating glibc) and now, we are only seeing 1 of the 2 files that each of our Universal Forwarders reads and forwards.
I know the data is not getting to the Indexer (or at least is not getting indexed).
Is there a best practice for determining whether or not data is at least getting TO the heavy forwarder?
I did try adding crcSalt= in the stanza in inputs.conf on the universal forwarder that specifies the data we're missing, just in case something cropped up.
Thanks for any suggestions to help us get started with this one...
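For reference, the crcSalt tweak mentioned above would look something like this in inputs.conf on the Universal Forwarder (the monitor path and sourcetype here are placeholders, not our real config; `<SOURCE>` is the literal string Splunk expects):

```ini
# inputs.conf on the Universal Forwarder -- hypothetical path/sourcetype.
# crcSalt = <SOURCE> adds the file path to the CRC seed, forcing Splunk to
# re-read a file it may have mistaken for an already-seen one.
[monitor:///var/log/myapp/missing_file.log]
sourcetype = my_missing_sourcetype
crcSalt = <SOURCE>
```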
11-03-2014
12:25 PM
I see this was added as a bug in March of '14. With 6.2 out now, I still see this behavior. Has this not been addressed yet?
04-25-2014
01:20 PM
1 Karma
Hi. We're starting to deploy Splunk Universal Forwarder (currently v 5.0.8) using Puppet rather than a straight RPM-and-post-install-configuration routine.
All works flawlessly except that we cannot set Splunk to enable boot-start without actually executing a command (obviously, splunk enable boot-start) and also providing the admin credentials -- which we wouldn't really want to store inside the Puppet module.
Is there a way we can enable boot-start at installation time? We use the /prefix= option and I wondered if there is a similar option to enable boot-start.
Thanks.
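One possible non-interactive invocation, sketched under the assumption that the standard first-run flags below are accepted by our 5.0.x CLI (the install path and service user are placeholders):

```shell
# Hedged sketch: accept the license and suppress prompts so Puppet can run
# this without storing admin credentials in the module.
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk \
    --accept-license --answer-yes --no-prompt
```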
11-18-2013
12:51 PM
Hi. For some events in a particular index, users (including Admins) are getting an error of "Show Source not available for this event" when we try to display it.
The article here suggests that this was reported and found to be a bug, but that it would be fixed as of 5.0.3
We are on 5.0.3 and are experiencing the problem.
Anyone else ever see this? This is with a VERY basic search consisting of 'index=INDEXNAME' and a bare-word search term. Nothing more.
08-05-2013
02:23 PM
Hi folks.
We have an entry in props to parse our custom datestamps (format is YYYYMMDD HHMMSS.nnn) as follows:
MAX_TIMESTAMP_LOOKAHEAD = 50
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y%m%d %H%M%S.%3N
TIME_PREFIX = ^
pulldown_type = 1
Quite often, our Splunk heavy forwarder reports that it cannot parse timestamps for these sourcetypes. What I see in the source logfiles is that sometimes our logs do contain multiple lines (in the form of Java errors)... In that case, should we change SHOULD_LINEMERGE from false to true? Or do we need a far more complex regex to indicate the start of a new event?
Example (seriously munged and truncated, but just to give you an idea):
20130805 160141.074 some message about an error
com.stuff.api. Transactior timedout
at com.stuff.exception
at com.stuff.exception
at com.stuff.exception
at com.stuff
at com.stuff
at com.stuff
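One alternative to line-merging, sketched here with a hypothetical stanza name: keep SHOULD_LINEMERGE = false and instead break events only where the datestamp pattern begins, so stack-trace lines stay attached to the preceding event.

```ini
# props.conf sketch -- stanza name is a placeholder.
[my_java_sourcetype]
SHOULD_LINEMERGE = false
# Break only before a line that starts with the YYYYMMDD HHMMSS.nnn stamp;
# the capturing group marks the discarded line terminator.
LINE_BREAKER = ([\r\n]+)(?=\d{8} \d{6}\.\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y%m%d %H%M%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 50
```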
06-26-2013
10:56 AM
This turned out to be where the problem was. There were session and session.lock files going back for over a year -- roughly 2 million. Caused by over-monitoring of the systems and an apparent bug (from what I read) in this older version of Splunk in cleaning up the files. Newer versions have this fixed.
05-21-2013
07:12 AM
Thank you. I actually tried to find documentation for the earlier versions but failed. I followed links on the App page, which seemed to only lead to the older version packages. Again, thanks!
05-16-2013
12:47 PM
Hi. I see that the most recent Splunk for Exchange app is only compatible with Splunk 5.0.2.
Since we are on 4.3.3 and won't be upgrading to 5 for another couple of months, I wanted to find out what the latest version is that is compatible with 4.3.3
How can I find that out?
Thanks!
03-20-2013
11:44 AM
2 Karma
Hi. I have been struggling with getting to the root of some performance problems on our pool of search heads...which are two beefy servers. We do NOT see this performance issue on our other, identical site. The only difference is the users of the site and any searches they may run.
When I try a splunk restart, splunkweb always hangs and the python process ultimately has to be killed manually.
I have started using SoS to try to help figure this out.
It shows occasional Splunkweb CPU spikes but nothing that lasts and explains the persistent slowness of our system. However, "top" shows Splunkd as the culprit, so I'm unsure where to go from there.
Can anyone suggest how I might start narrowing this problem down?
I have already disabled any glaringly obvious user searches that would hose the system.
03-19-2013
10:09 AM
1 Karma
Hi. We are trying the Splunk on Splunk app for the first time because one of our two environments is constantly being hammered.
We have search heads in a pool and we have 4 Indexers for distributed search.
Splunk version is 4.3.3. Latest S.o.S. is installed on the search heads and the SoS TA is installed on the indexers. On all servers, I have enabled the two scripted inputs.
When I pull up the 20 most memory intensive searches, I get No Data returned. The Job Inspector shows the following information, but I have no idea why all of these fields are missing. I'm hoping someone has some insight! Thanks.
DEBUG: Specified field(s) missing from results: '_time', 'search', 'search_head', 'user'
DEBUG: [splunk1-brn] search context: user="sqig", app="sos", bs-pathname="/app/splunk/var/run/searchpeers/splunk3-head-1363707911"
DEBUG: [splunk2-brn] search context: user="sqig", app="sos", bs-pathname="/app/splunk/var/run/searchpeers/splunk3-head-1363707911"
DEBUG: [splunk3-brn] search context: user="sqig", app="sos", bs-pathname="/app/splunk/var/run/searchpeers/splunk3-head-1363707911"
DEBUG: [splunk4-brn] search context: user="sqig", app="sos", bs-pathname="/app/splunk/var/run/searchpeers/splunk3-head-1363707911"
DEBUG: [subsearch]: base lispy: [ AND index::_audit search splunk_server::splunk3-head-brn1 ]
DEBUG: base lispy: [ AND index::sos sourcetype::ps ]
DEBUG: search context: user="amurray", app="sos", bs-pathname="/app/splunk_mounted/etc"
02-07-2013
11:24 AM
Hi.
Some of our more ... enthusiastic ... users have been scheduling great big searches far too close together and for wild time ranges.
Is there any way I, as an admin, can let them schedule searches but restrict how often they can do so?
I suspect there is an XML file buried somewhere with the values... If I could lock them out of crontab-formatted schedules and then edit the list of available frequency times, that should do it!
Thanks.
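As far as I know there is no single knob for schedule frequency, but role-level search quotas in authorize.conf can blunt the impact; a sketch with a hypothetical role name (the values are illustrative):

```ini
# authorize.conf -- hypothetical role; these settings cap concurrency and
# search time range rather than directly limiting cron schedules.
[role_enthusiastic_users]
srchJobsQuota = 3      # max concurrent historical searches per user
srchTimeWin = 86400    # max search time range in seconds (here, 1 day)
```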
10-02-2012
10:43 AM
If I could remove a question I have posted, I would in this case. This was user error on my part and not anything to do with Splunk.
09-28-2012
12:52 PM
Hi.
We have some log data where each line starts with a timestamp that looks like this:
2012-09-28 15:44:35,302
Nothing else in the data looks anything like a timestamp.
Splunk is indexing this as UTC, so it displays 4 hours earlier.
The timezone on the source server is in Eastern.
We are running a Splunk Universal Forwarder there, so on the Heavy Forwarder, I have the following:
[my_sourcetype_here]
TZ = US/Eastern
For what it's worth, I also tried with [host::hostnamepattern*]
Neither seems to have taken effect for newly-indexed events, despite actually restarting the Heavy Forwarders!
Am I missing something here?
Thanks.
- Tags:
- timezone
08-31-2012
10:48 AM
Thank you very much. This does work. I'm in contact with Splunk regarding this as well, so they know about this issue.
08-31-2012
09:37 AM
2 Karma
Hi. We recently upgraded from a 4.2 installation to 4.3.3 and a report that includes the _time field (which used to come out in epoch format) now displays the field as a formatted string.
I have changed nothing with the query. It has always been a search | timechart | eval (to rename a field) | fields _time,some,other,fields
That's all! No formatting, etc.
The _time value used to look like this: 1341288000
Now shows up like this, including the quotation marks (this is not from the same number though!): "2012-07-24 00:00:00.000 EDT"
Someone recently asked how to get epoch time, and the answer was to reference _time, but as you can see, that has ceased to work for me!
Anyone know about this? The documentation still says this:
"The _time field is stored internally in UTC Format. It is translated to human-readable Unix time format when Splunk renders the search results (the very last step of search time event processing)."
- Tags:
- _time
08-24-2012
12:59 PM
I assume when you say "send one per event" you mean you only want to INDEX one instance of an event and not its duplicates, is that correct? If so, we are also interested in the answer to this, because we are going to some pretty ridiculous extents to keep from indexing data from (necessarily) redundant systems.
08-21-2012
10:24 AM
Thanks. I thought to copy pooled/etc/apps back pre-upgrade but not pooled/etc/users. Looks like I have to roll back on one server and re-upgrade in order for all the apps to see their rightful owners.
08-21-2012
10:20 AM
I'm not sure I'm following.
With Splunk, you point it at a logfile and it consumes the entire file. It then continues to consume new lines as they get added to the log file. So you are actually indexing the full volume in the file, not just whatever your results of searches are.
08-21-2012
08:03 AM
Hi. We just upgraded from 4.2.2 to 4.3.3.
We are using search head pooling, so we followed the specific instructions for dealing with that situation (ie, unpool, upgrade each head, repool).
Now, it seems that Views and Saved Searches by some of our users are showing up as having no owner.
I checked and it looks to me like the user's directory exists in $SHARED/etc/users and (as you might expect) not in $SPLUNK_HOME/etc/users
Has anyone else run into this? I did make a backup of everything before the upgrade, so if the upgrade clobbered some critical files I'm not aware of, I could replace them, I just don't even know where to start!
08-21-2012
07:22 AM
When you go to Manager -> License, what does it show as your daily volume?
My guess would be that you may be Indexing things you are not aware of.
What does Splunk think you indexed? Try searches like these to check your daily indexing volume totals, or volume by index or sourcetype. This will help you confirm that you really are not indexing more than 500 MB per day.
Total:
index=_internal per_index_thruput earliest=-7d@d latest=now | timechart span=1d eval(sum(kb)/1024) as "Daily Indexing Volume in MB"
By Index:
index=_internal metrics kb series!=_* "group=per_index_thruput" daysago=7 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series
By Sourcetype:
index=_internal metrics kb series!=_* "group=per_sourcetype_thruput" daysago=7 | eval indexed_mb = kb / 1024 | timechart fixedrange=t span=1d sum(indexed_mb) by series
Edit: Original was in GB... I converted to MB for this post.
08-21-2012
07:14 AM
Wow, I feel like a dope! I was thinking about this in way more complex terms than I needed to. Probably because I started with something that just gave me each host and then a list of its SomeText field contents...then wanted to add one more element to that. So I was working with one query and trying to add to it.
This solution, of course, works. Thanks. The only thing I don't like about it is that the host name appears on every line. I'll work with it and see if I can make it appear only once.
08-20-2012
03:24 PM
1 Karma
Hi. Been trying to work this one out for hours... I'm close!!!
We are Splunking data such that each Host has a field "SomeText" which is some arbitrary string, and that string may be repeated on that host any number of times. It may also appear on other hosts... Basically, think of something like a syslog file... your crond message can be any number of different strings.
Let's say that Host1 has the following strings:
"The quick brown fox" shows up 5 times
"jumps over the" shows up 2 times
"lazy dog" shows up 10 times
"My dog has fleas" shows up 2 times
"So does yours" also shows up 2 times
I want a chart that shows me:
Host1 10 "lazy dog"
5 "The quick brown fox"
2 "jumps over the"
2 "My dog has fleas"
2 "So does yours"
But what I GET is this:
Host1 1 "lazy dog"
2 "The quick brown fox"
5 "jumps over the"
"My dog has fleas"
"So does yours"
(I think the string column is actually sorted alphabetically).
This is a mockup of the search I'm running, with field names obviously simplified:
index=myindex earliest=-24h | stats count(SomeText) as textCount by SomeText host | stats values(textCount) as Count,values(SomeText) as "Text" by host
What am I missing? How can I marry up the # of times a message appears with that message?
Thanks for any ideas.
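A sketch of one possible approach, using the simplified field names from the post (untested): do the count in a single stats grouped by both fields, then sort the rows within each host.

```
index=myindex earliest=-24h
| stats count AS Count BY host, SomeText
| sort 0 host, -Count
```

Note this repeats the host on every row; the values() step in the original collapses to one row per host but loses the pairing between each count and its string.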
06-20-2012
11:06 AM
We have always been advised by Splunk to keep the Indexers and Search Heads on the same version. Various different people from Splunk have mentioned this at different times to us.