Activity Feed
- Got Karma for Re: What issue would cause a heavy forwarder to show a status of "SplunkForwarder UNCONFIGURED ENABLED"?. 06-05-2020 12:47 AM
- Got Karma for How to configure props.conf to recognize the exact timestamp format hh:mm.ss,sss in our data?. 06-05-2020 12:47 AM
- Posted Re: Why do I get "Invalid key in stanza [tcp-ssl://:1470] ... connection_host=dns your indexes and inputs are not internally consistent"? on Getting Data In. 08-29-2016 11:43 AM
- Posted Re: how can I format and chart a status message? on Splunk Search. 08-03-2016 11:18 AM
- Posted Re: how can I format and chart a status message? on Splunk Search. 08-03-2016 10:03 AM
- Posted Re: how can I format and chart a status message? on Splunk Search. 08-03-2016 10:02 AM
- Posted how can I format and chart a status message? on Splunk Search. 08-03-2016 08:50 AM
- Tagged how can I format and chart a status message? on Splunk Search. 08-03-2016 08:50 AM
- Tagged how can I format and chart a status message? on Splunk Search. 08-03-2016 08:50 AM
- Posted Re: how can I sift out TRACE and DEBUG entries so that splunk doesn't index them when pulling other data from monitored logs at clients? on Getting Data In. 05-13-2016 01:08 PM
- Posted how can I sift out TRACE and DEBUG entries so that splunk doesn't index them when pulling other data from monitored logs at clients? on Getting Data In. 05-13-2016 11:52 AM
- Tagged how can I sift out TRACE and DEBUG entries so that splunk doesn't index them when pulling other data from monitored logs at clients? on Getting Data In. 05-13-2016 11:52 AM
- Tagged how can I sift out TRACE and DEBUG entries so that splunk doesn't index them when pulling other data from monitored logs at clients? on Getting Data In. 05-13-2016 11:52 AM
- Tagged how can I sift out TRACE and DEBUG entries so that splunk doesn't index them when pulling other data from monitored logs at clients? on Getting Data In. 05-13-2016 11:52 AM
- Posted Re: Is an entry in props.conf required to allow an entry in transforms.conf to be effective? on Getting Data In. 05-09-2016 02:14 PM
- Posted Is an entry in props.conf required to allow an entry in transforms.conf to be effective? on Getting Data In. 05-09-2016 01:41 PM
- Tagged Is an entry in props.conf required to allow an entry in transforms.conf to be effective? on Getting Data In. 05-09-2016 01:41 PM
- Tagged Is an entry in props.conf required to allow an entry in transforms.conf to be effective? on Getting Data In. 05-09-2016 01:41 PM
Topics I've Started
08-29-2016
11:43 AM
Thank you all.
08-03-2016
11:18 AM
Thanks, all. Between your suggestions and finding a Splunk app that will do most of this work for me, I'm seeing some light.
Thanks for the quick feedback; have a great day.
08-03-2016
10:03 AM
Thank you. Here is an excerpt from a specific event:
state SYNCED; deltatime: 57, datamoving: true, outcount: 50
This status event is provided once per second, and I'd like a moving gauge/bar that responds to deltatime, another moving gauge/bar that responds to outcount, and perhaps a "redline" with an alert that triggers when a value exceeds the redline.
Thank you very much.
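For an event like the excerpt above, a search along these lines could extract the two numeric fields and chart them over time. This is a sketch: the index, sourcetype, and the exact field spellings are assumptions based on the excerpt.

```spl
index=main sourcetype=device_status "state SYNCED"
| rex "deltatime:\s*(?<deltatime>\d+)"
| rex "outcount:\s*(?<outcount>\d+)"
| timechart span=1s avg(deltatime) AS deltatime avg(outcount) AS outcount
```

The "redline" could then be a scheduled saved search ending in something like `| where deltatime > 100`, with an alert condition on the result count.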
08-03-2016
08:50 AM
Hello,
A device in our system returns a status message that looks like the following (as seen in Splunk search results):
state (UP | DN| STDBY | UNK); parma: (string) , parmb:(int), etc
Notice that there is no delimiter between "state" and the value of state. Notice also that some parameters have integer values and some are strings; most of the string-valued parameters are limited to true/false, up/down, etc.
I'd like a real-time chart/display that is updated once per interval of my choosing. For the parameters that have string values, it's OK to translate those to a numerical or color representation for display.
I'm trying to read and understand how to use the chart command, but I'm having a hard time translating the "syntax" descriptions to "real life". I'd appreciate some help getting started on my learning curve.
Thanks so much.
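Since some of the parameters are strings (UP/DN/STDBY/UNK, true/false), one way to chart them is to map each string to a number with `eval` before charting. A sketch, where the sourcetype and the numeric mapping are assumptions; only the state values come from the format above:

```spl
sourcetype=device_status
| rex "state\s*(?<state>UP|DN|STDBY|UNK)"
| eval state_num=case(state=="UP", 2, state=="STDBY", 1, state=="DN", 0, state=="UNK", -1)
| timechart span=30s latest(state_num) AS state
```

The same `eval case(...)` pattern works for the true/false parameters, mapping them to 1/0.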
05-13-2016
01:08 PM
Thank you, Rich7177.
The inputs.conf files with the "monitor:///" stanza are in the Splunk forwarder configs on each client. Then in each LAN (of many LANs) we have heavy forwarders, which all ultimately route data to a single indexer.
Given that scenario, am I to edit the props.conf and transforms.conf on my heavy forwarders or on my indexer?
Thanks so much.
05-13-2016
11:52 AM
Hello,
Our Splunk forwarders are configured to pull in certain logs from various clients with a "[monitor://]" entry in the inputs.conf file on each client.
There is still ongoing development work on these clients, and the developers routinely set log levels to TRACE or DEBUG. These entries are required in the log, but we do not need them in Splunk, and they are causing our license volume to be exceeded.
How can I amend the stanzas for these monitored logs to prevent the TRACE and DEBUG entries from being routed to the indexer, while allowing all other entries to continue to be processed?
While I find information at the following: http://docs.splunk.com/Documentation/Splunk/6.1.3/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest
it is not clear to me whether I am to update the props.conf and transforms.conf at our heavy forwarders or on our indexer to accomplish the filtering.
Thanks so much.
Michael.
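The pattern on that doc page routes unwanted events to nullQueue via paired props.conf and transforms.conf stanzas. These are parsing-time settings, so they take effect at the first "heavy" (parsing) tier the data passes through; in the layout described here that would be the heavy forwarders rather than the indexer. A sketch, with a hypothetical sourcetype name:

```ini
# props.conf (on the heavy forwarders) -- [your:sourcetype] is a placeholder
[your:sourcetype]
TRANSFORMS-droplevels = drop_trace_debug

# transforms.conf (same instance) -- stanza name must match the TRANSFORMS- value
[drop_trace_debug]
REGEX = \b(TRACE|DEBUG)\b
DEST_KEY = queue
FORMAT = nullQueue
```

Events dropped this way never reach the index, so they should not count against license volume.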
05-09-2016
02:14 PM
Very nice. Thank you for the quick answer.
05-09-2016
01:41 PM
When the following question was asked in this forum:
What is the role of transforms.conf vs. props.conf for field extraction?
The answer was:
The high-level answer is that props.conf says what rules are applied to any event and when they are applied, and transforms.conf actually defines those rules.
But is the entry in props.conf REQUIRED to map to an entry in transforms.conf so that the rule is applied?
Thank you.
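As documented, a transforms.conf extraction stanza is inert until a props.conf setting references it; for search-time field extraction the link is a REPORT- entry. A sketch with hypothetical names:

```ini
# props.conf -- sourcetype and class names are placeholders
[my_sourcetype]
REPORT-status = extract_status

# transforms.conf -- the stanza name must match the REPORT- value above
[extract_status]
REGEX = state\s+(?<state>\w+)
```

Index-time rules work the same way, but are referenced with TRANSFORMS- instead of REPORT-.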
01-21-2016
03:59 PM
We can close this. Of the many servers (Splunk light forwarders) that were failing to report, I rebooted one of the ones that was reporting all the forwarding-blocked error messages. Within 2 minutes the other servers began reporting in, and within 15 minutes all 34 servers in the domain had successfully reported and forwarded a day's worth of data to the heavy forwarders.
Though the issue is fixed, I'd like to know if there is something we did, or something in our config, that caused this to happen. Is there a tuning parameter set too tight, for example?
Thanks again to Jkat54.
Thanks for any feedback you can give here.
01-21-2016
11:06 AM
Jkat54 - thanks for your response. Here is some more data.
I'm seeing the light forwarders connecting on and off to the heavy forwarders, but the connections keep dropping.
On the light forwarders, I'm getting errors like:
read operation timed out expecting ack from ...
Possible duplication of events with channel=source ... offset = ... on host ...
Raw connection to ... timed out
Forwarding blocked ...
Applying quarantine to ...
Removing quarantine from ...
On the heavy forwarders, I get errors like:
Forwarding to ... blocked
From the point of view of the deployment monitor, all the light forwarders in the system keep toggling between active and missing.
If, on the light forwarders, I run ./splunk list forward-server, I do not get consistent results.
We're using SSL. netstat reports connections on port 8081 (used from light forwarders to heavy forwarders) and 8082 (heavy forwarders to indexer).
Thanks.
Michael.
01-20-2016
06:00 PM
hello
We have a Linux server running Splunk forwarder which forwards to one of two heavy forwarders in an autolb configuration.
The Splunk forwarder reports that it connects to the heavy forwarder, but I get a message in splunkd.log that says
forwarding to indexer group default-autolb group blocked for <nnnnn> seconds.
From the point of view of the deployment monitor running on the indexer, the Splunk forwarder in question is "missing".
Please help us diagnose our problem as we have a demo to a customer tomorrow.
thank you
12-15-2015
08:20 AM
Thanks for the input, Iguinn.
I tried each of your suggestions and I still get the same error on startup.
I changed the name of the stanza to tcp-ssl:1470 - still the same error on startup.
I retyped the key-value pair "connection_host=dns" to ensure no special characters, and I still get the error on startup.
Thanks for your interest in my problem.
msantich
12-14-2015
12:53 PM
Hello,
Our /opt/splunk/etc/apps/search/local/inputs.conf file on our forwarder contains:
[tcp-ssl://:1470]
connection_host=dns
sourcetype=apm_log
index=security_logs
queueSize=5MB
When starting the forwarder, I get:
checking for conf file problems:...
invalid key in stanza [tcp-ssl://:1470] in /opt/splunk/etc/apps/search/local/inputs.conf ...connection_host=dns
your indexes and inputs are not internally consistent.
btool output offers no additional information.
Can anyone offer advice?
Thank you so much.
msantich
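For comparison, the stanza form shown in the inputs.conf documentation omits the `://` part for SSL inputs and pairs the input with a separate [SSL] stanza for certificate settings. A sketch, not a confirmed fix: the certificate path and password are placeholders, and the SSL attribute names vary between Splunk versions.

```ini
# inputs.conf -- sketch; documented stanza form is [tcp-ssl:<port>]
[tcp-ssl:1470]
connection_host = dns
sourcetype = apm_log
index = security_logs
queueSize = 5MB

# SSL certificate settings live in their own stanza
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate_password>
```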
12-14-2015
12:43 PM
Ahhh... thank you, MuS. I appreciate the clarification.
Have a great day.
12-03-2015
01:39 PM
We're losing data to the frozen directory prematurely. We have a requirement to keep data searchable for 5 years, but had left the maximum index size at the default 500,000 MB and have now reached that limit earlier than expected. We have a coldToFrozenDir path specified, so our data is safe there, but it is not searchable.
I have an open ticket to address a complete solution, but in the near term I would like to stop the data from rolling to frozen.
If I set frozenTimePeriodInSecs for the index in question in indexes.conf, what behavior can I expect given that the index is already at max size? Will it have the effect I'm hoping for and simply allow the index to grow without regard to the 500,000 MB limit, until records meet the frozenTimePeriodInSecs value and can thus roll to frozen?
Thanks for any advice.
Michael
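Worth noting, per the indexes.conf documentation: buckets roll to frozen when either the size cap or frozenTimePeriodInSecs is exceeded, whichever comes first, so setting the time limit alone will not stop size-based rolling; the size cap has to be raised as well. A sketch, with a placeholder index name, path, and sizes:

```ini
# indexes.conf -- sketch; index name, path, and sizes are placeholders
[my_index]
# 5 years, in seconds (5 * 365 * 86400)
frozenTimePeriodInSecs = 157680000
# raise the size cap so age, not size, is what triggers rolling to frozen
maxTotalDataSizeMB = 5000000
coldToFrozenDir = /data/splunk/frozen
```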
12-12-2014
08:54 AM
Thanks, all. Almost there, but can you help me with one more iteration?
I've added the following to my props.conf per your recommendations, and I'm now properly indexing the target event with all included lines. BUT when the log contains other event entries with the EXACT same timestamp, Splunk is not indexing them (they do show up if I select "show source", but the events are not indexed):
[YourSourceType]
MAX_TIMESTAMP_LOOKAHEAD=30
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3q
TIME_PREFIX=^
so in the raw log I have:
2014-12-10 03:31:14.843 TRACE [Thread-7] ...EventTxt
2014-12-10 03:31:14.844 TRACE [Thread-7] ...EventText
2014-12-10 03:31:14.844 TRACE [Thread-7] ...EventText- - LOONG XML with embedded time stamps -
2014-12-10 03:31:14.844 DEBUG [Thread-7] ...EventText
2014-12-10 03:31:14.844 TRACE [pool-2-thread-2] ...EventText
2014-12-10 03:31:14.844 TRACE [pool-2-thread-2] ...EventText
2014-12-10 03:31:14.846 TRACE [pool-2-thread-2] ...EventText
but Splunk indexes ONLY:
2014-12-10 03:31:14.843 TRACE [Thread-7] ...EventTxt
2014-12-10 03:31:14.844 TRACE [Thread-7] ...EventText- - LOONG XML with embedded time stamps -
2014-12-10 03:31:14.846 TRACE [pool-2-thread-2] ...EventText
Thanks so much for your continued help.
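One alternative sometimes recommended for this kind of symptom (a sketch, not a confirmed fix for this thread): turn line merging off and use an explicit LINE_BREAKER anchored to the line-leading timestamp format, so every line that begins with a timestamp starts a new event, while ISO-style timestamps embedded mid-event do not.

```ini
# props.conf -- sketch; [YourSourceType] is the placeholder from the thread
[YourSourceType]
SHOULD_LINEMERGE = false
# break only where a newline is followed by "YYYY-MM-DD HH:MM:SS.mmm"
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3q
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = 1
```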
12-11-2014
03:07 PM
Thank you so much, somesoni2 and mmueller.
So, to be clear, the props.conf entry should look like this, with the recommendation from somesoni2 as amended by the recommendation from mmueller?
[YourSourceType]
MAX_TIMESTAMP_LOOKAHEAD=30
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3q
TIME_PREFIX=^
12-11-2014
09:21 AM
Splunk (version 4.2.5) interprets timestamps that are embedded in an event as though each marks a separate event. The target event is then effectively broken up (with the first "piece" terminating at the last line prior to the embedded timestamp), and another event is indexed for each subsequent embedded timestamp.
The event entry in the native log file looks like this:
2014-11-24 21:13:26.991 EventText EventText EventText EventText EventText EventText EventText
-xml-
-xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml
xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml-
-timeCreated-2014-11-24T21:13:26.914Z-/timeCreated-
-xml_tag-non-time VALUE-/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-start-2014-11-24T21:21:00Z-/start-
-end-2014-11-24T21:21:00Z-/end-
-xml_tag-non-time VALUE-/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
etc...
Splunk reacts to this in a few different ways:
A: When Splunk indexes the "single" event above, it actually creates 3 events.
The first event looks like:
2014-11-24 21:13:26.991 EventText EventText EventText EventText EventText EventText EventText
-xml-
-xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml
xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml-
-timeCreated-2014-11-24T21:13:26.914Z-/timeCreated-
-xml_tag-non-time VALUE-/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
Note that Splunk actually does "look past" the second time tag and simply includes it as part of the message (this is the behavior we want for all subsequent embedded timestamps), but this event ends when Splunk hits the next embedded time tag.
The second event looks like:
-start-2014-11-24T21:21:00Z-/start-
The third event looks like:
-end-2014-11-24T21:21:00Z-/end-
-xml_tag-non-time VALUE-/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
etc...
B: Sometimes Splunk creates just 2 events, then indicates "n lines omitted".
The first event looks like:
2014-11-24 21:13:26.991 EventText EventText EventText EventText EventText EventText EventText
-xml-
-xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml
xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml xmlxmlxmlxmlxmlxmxmlxmlxmlxml-
-timeCreated-2014-11-24T21:13:26.914Z-/timeCreated-
-xml_tag-non-time VALUE-/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
-xml_tag- non-time VALUE -/xml_tag-
The second event looks like:
-start-2014-11-24T21:21:00Z-/start-
300 lines omitted….
Most of the events in the native log are typical events with a single event timestamp and some message text. Some events (like this one) have embedded XML but no additional timestamps within the XML; these events are all indexed appropriately.
Events like the one here are few, but we need a way to tell Splunk to "look beyond" other embedded timestamps when indexing events from this log, without interfering with the proper indexing of other "normal" events.
Thank you so much for any assistance offered.
MichaelS
11-11-2014
11:46 AM
1 Karma
ANSWERED, but still curious.
Although the output of list forward-server showed all active forwarders correctly, we re-issued the add forward-server command, and now the events are correctly being forwarded.
There must be something subtle that requires the add forward-server command to be run even though all forward servers are already configured.
If anyone can comment on this, we'd appreciate it.
Anyway, we're up now. Thanks, all.
11-11-2014
10:04 AM
Splunk heavy forwarders had been working. A recent OS (Linux) upgrade and re-creation of the forwarder results in the heavy forwarders NOT relaying events from lower-tier universal forwarders.
We're just missing something in the re-creation effort.
11-11-2014
09:52 AM
The deployment monitor shows the forwarders are "connecting" to the indexer.
Events generated locally on the forwarders ARE getting to the indexer.
Only events from the universal forwarders are not getting through.
From the universal forwarders, list forward-server shows the heavy forwarder and indexer OK.
11-11-2014
09:44 AM
Thanks much. Version 4.2.5.