Activity Feed
- Karma Re: Has anyone come across monitoring an XML logfile that gets completely re-indexed when an event is added? How do you handle this? for lguinn2. 06-05-2020 12:47 AM
- Got Karma for How do I edit my props.conf for proper line breaking when indexing a CSV file with a large amount of quotes and newlines?. 06-05-2020 12:47 AM
- Got Karma for Has anyone come across monitoring an XML logfile that gets completely re-indexed when an event is added? How do you handle this?. 06-05-2020 12:47 AM
- Got Karma for Has anyone come across monitoring an XML logfile that gets completely re-indexed when an event is added? How do you handle this?. 06-05-2020 12:47 AM
- Karma Re: Linking outbound and inbound messages, then finding incomplete ones for kristian_kolb. 06-05-2020 12:46 AM
- Karma Re: Is this a linebreaking issue? for datasearchninja. 06-05-2020 12:46 AM
- Karma Re: Is this a linebreaking issue? for somesoni2. 06-05-2020 12:46 AM
- Karma Re: Is this a linebreaking issue? for somesoni2. 06-05-2020 12:46 AM
- Posted Re: How do I edit my props.conf for proper line breaking when indexing a CSV file with a large amount of quotes and newlines? on Getting Data In. 07-27-2015 03:40 PM
- Posted How do I edit my props.conf for proper line breaking when indexing a CSV file with a large amount of quotes and newlines? on Getting Data In. 07-27-2015 03:18 PM
- Tagged How do I edit my props.conf for proper line breaking when indexing a CSV file with a large amount of quotes and newlines? on Getting Data In. 07-27-2015 03:18 PM
- Posted Can I create a minimum capability user role on a Linux Universal Forwarder so events can be accepted and forwarded to the indexer? on Getting Data In. 06-29-2015 06:23 PM
- Tagged Can I create a minimum capability user role on a Linux Universal Forwarder so events can be accepted and forwarded to the indexer? on Getting Data In. 06-29-2015 06:23 PM
Topics I've Started
07-27-2015
03:40 PM
Sorry, I think I've given you the wrong idea with my fictional data. The actual data's last column may or may not have a value in it. I'll edit my example data when I can.
07-27-2015
03:18 PM
1 Karma
I have a CSV file that's giving me a headache while trying to index it.
It has 100+ columns, several of which make life difficult by containing large amounts of embedded quotes and newlines.
A sanitised example showing the header line and a problem event:
field1,field2,field3,field4,field5,field6
"55634","Barney","","this field behaves well","","1436504081000"
"","Fred","","Here, have some data
that will make your life very difficult
""should"" you try to parse this puppy","F6E25B","1435307738000"
(The quotes around "should" are intentional; there are sections of the data that look exactly like that.)
I've tried the following props, to no avail: Barney's event breaks correctly, but Fred's line breaking goes wrong. Can someone point out where I'm going wrong?
BREAK_ONLY_AFTER=\"$
HEADER_FIELD_LINE_NUMBER=1
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIMESTAMP_FIELDS=field6
This file is created completely new at a regular interval - it's a scheduled database dump. I want to index the entirety each time.
I want to keep the inputs.conf as simple as possible, only defining host, sourcetype and destination index. A parsing app on the indexer will have the props.conf.
Thanks in advance for any help.
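For what it's worth, a sketch of the structured-data approach, which parses RFC 4180-style quoted fields (including embedded quotes and newlines) instead of relying on line merging. The sourcetype name is a placeholder and the epoch-milliseconds TIME_FORMAT is an assumption based on the sample values; note that INDEXED_EXTRACTIONS takes effect where the file is read, so for a universal forwarder it would live on the forwarder rather than in an indexer-side parsing app:
[my_csv_dump]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = field6
TIME_FORMAT = %s%3N
NO_BINARY_CHECK = true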
06-29-2015
06:23 PM
I have a Linux Universal Forwarder that will be receiving events via the REST interface's simple receiver.
https://linuxUF:8089/services/receivers/simple?host=xxx&source=xxx&index=xxx&sourcetype=xxx&check-index=false
Can I set up a minimum-capability (i.e. non-admin) role on the UF so that events can be accepted and forwarded to the indexer? I'd like to create a local user on the UF and give that user this role.
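A rough sketch of how the pieces might fit together; the role and user names are placeholders, and the exact capability the receivers/simple endpoint requires is an assumption (edit_tcp is shown as a guess and would need to be checked against the REST API reference for the version in use):
# authorize.conf on the UF; the capability name is an assumption to verify
[role_event_sender]
importRoles =
edit_tcp = enabled
# then create a local user that holds only this role
./splunk add user eventsender -role event_sender -password <password>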
12-04-2014
01:43 PM
2 Karma
I need to monitor an application logfile, and have a problem with the default way Splunk "tails" a file. This particular log file doesn't append new rows to the file; it inserts them.
<root>
<a>
<b>
<c>
</root>
becomes
<root>
<a>
<b>
<c>
<d>
</root>
This causes Splunk to consider it a completely new file, and it reindexes the whole thing, when I only want <d>. Has anyone come across this before and solved it? If so, how?
10-16-2014
03:01 AM
Splunk v 6.1.4
SA-ldapsearch v2.0.0
ldap.conf
[default]
port = 636
server = adhost.mydomain.local
ssl = 1
[mydomain.local]
alternatedomain =
basedn = dc=mydomain,dc=local
binddn = cn=mycredentials,cn=Service Account,cn=Domain Services,dc=mydomain,dc=local
When I hit the 'test connection' button, I get this in SA-ldapsearch.log every time
2014-10-16 20:46:02,628, Level=ERROR, Pid=15390, File=search_command.py, Line=342, Traceback (most recent call last):
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/splunklib/searchcommands/search_command.py", line 316, in process
self.execute(operation, reader, writer)
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/splunklib/searchcommands/generating_command.py", line 79, in _execute
for record in operation():
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/ldapsearch.py", line 87, in generate
password=configuration.credentials.password) as connection:
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/ldap3/core/connection.py", line 264, in __enter_
self.open()
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/ldap3/strategy/syncWait.py", line 53, in open
self.connection.refresh_dsa_info()
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/ldap3/core/connection.py", line 618, in refresh_dsa_info
self.server.get_info_from_server(self)
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/ldap3/core/server.py", line 273, in get_info_from_server
self.get_schema_info(connection)
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/ldap3/core/server.py", line 229, in _get_schema_info
schema_entry = connection.response[0]['attributes']['subschemaSubentry'][0] if result else None
File "/opt/splunk/shared/etc/apps/SA-ldapsearch/bin/packages/ldap3/utils/caseInsensitiveDictionary.py", line 33, in __getitem_
return self._store[self._getkey(key)]
KeyError: 'subschemaSubentry'
Where am I going wrong?
10-12-2014
04:56 PM
I'm running out of space in my cold bucket volume, and want to reduce the default frozenTimePeriodInSecs to force a bunch of older cold data to roll to frozen. I've got plenty of space in frozen.
Is there a way I can get an idea of how much cold volume space I can reclaim if I know how much I want to reduce frozenTimePeriodInSecs?
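One way to get a rough estimate (a sketch; the index name and the -90d cutoff are placeholders for the real index and the proposed new retention) is to total the disk usage of cold buckets whose newest event is already older than the cutoff, since a bucket rolls to frozen once its latest event passes frozenTimePeriodInSecs:
| dbinspect index=main
| search state=cold
| where endEpoch < relative_time(now(), "-90d")
| stats sum(sizeOnDiskMB) AS reclaimable_mb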
02-18-2014
07:09 PM
I have a transaction defined where a trade goes through some stages in its lifecycle. Unfortunately, the markers for these stages aren't consistent in their form, e.g.
"<$DealId> : The deal has hit stage BOOKED"
"<$DealId> : The deal is EXECUTED"
"<$DealId> : Received CONFIRMATION"
will be found at various times in the transaction.
I have come up with a query, but it keeps throwing errors: "Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr])."
Here's the query
DealId="*" | transaction DealId |
eval confirmed=searchmatch("Received CONFIRMATION") |
eval executed=searchmatch("The deal is EXECUTED" AND "DownstreamSystemId" |
eval booked=searchmatch("Deal has hit stage EXECUTED") |
eval status=case(confirmed==true, "CONFIRMED", executed==true, "EXECUTED", booked==true, "BOOKED")
| table DealId, allocQty, price, value, transactTime, status
A completed transaction will have all of these stages; I'm trying to keep track of which stage a particular deal is up to.
It also feels horribly inefficient; is there a better way of writing this query?
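For reference, a sketch of the same query with the boolean results wrapped in if(), as the error message suggests; the field names and marker strings are taken from the post above, and the extra DownstreamSystemId condition on the EXECUTED stage is omitted for brevity:
DealId="*"
| transaction DealId
| eval confirmed=if(searchmatch("Received CONFIRMATION"), "CONFIRMED", null())
| eval executed=if(searchmatch("The deal is EXECUTED"), "EXECUTED", null())
| eval booked=if(searchmatch("The deal has hit stage BOOKED"), "BOOKED", null())
| eval status=coalesce(confirmed, executed, booked)
| table DealId, allocQty, price, value, transactTime, status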
- Tags:
- eval
02-13-2014
02:09 PM
Is there a way to put more than one of these in a single dashboard panel?
01-29-2014
01:28 PM
Yes.
I'm playing around with changing the BREAK_ONLY_BEFORE to LINE_BREAKER=secs\S\n+ or something similar.
Is there a reason I wouldn't use LINE_BREAKER as opposed to BREAK_ONLY_BEFORE?
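For comparison, a sketch of what the LINE_BREAKER approach could look like for these GC events; the regex is an assumption based on the sample lines beginning with an uptime stamp like 270929.542:, and LINE_BREAKER's first capture group must hold the characters discarded between events:
[garbagecollectionlog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d+\.\d+:\s\[GC)
Skipping the line-merge pass this way is generally cheaper at index time than SHOULD_LINEMERGE with BREAK_ONLY_BEFORE, which is the usual reason to prefer it.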
01-28-2014
06:44 PM
It's not working, but I'm not sure I'm doing this right. These events have a sourcetype of "garbagecollectionlog", and I have the following in etc/system/local/props.conf:
[sourcetype::garbagecollectionlog]
BREAK_ONLY_BEFORE=\d+.\d+:
MAX_TIMESTAMP_LOOKAHEAD=20
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=true
Is there anything else I need to do?
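In case it helps, a sketch of how that stanza is more usually written: props.conf sourcetype stanzas use the bare sourcetype name (no sourcetype:: prefix), and the literal dot in the pattern is escaped:
[garbagecollectionlog]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = \d+\.\d+:
MAX_TIMESTAMP_LOOKAHEAD = 20
NO_BINARY_CHECK = true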
01-23-2014
05:13 PM
This is the complete field.
And you're right - the rest of the event is there, but under a different timestamp.
01-23-2014
05:10 PM
No, there's no timestamp. I'm dependent on Splunk providing the timestamp as when the event was indexed.
This props.conf is the one on the indexer, right?
01-22-2014
08:17 PM
I'm collecting events from a logfile that look like this:
270929.542: [GC 270929.542: [ParNew
Desired survivor size 1288490184 bytes, new threshold 16 (max 31)
- age 1: 34518968 bytes, 34518968 total
- age 2: 257792 bytes, 34776760 total
- age 11: 60416 bytes, 34837176 total
: 3156097K->34336K(4718592K), 0.0357680 secs] 3548065K->426305K(17301504K), 0.0359060 secs]
However, when I see them in Splunk, I only get the first line. All six lines are written to the file at once, but Splunk only seems to store the first one. Does anyone have any ideas as to what could be going on here? The last line contains the info I really want to work with.
- Tags:
- line-break
01-13-2014
02:45 PM
Also, there will be other pairs starting/finishing at the same time. Processing is not linear.
01-13-2014
02:39 PM
I am analysing a logfile where a message describes an outbound message going to an external system, and a short time later a reciprocal "job done" message arrives from that system, containing a unique deal ID common to both.
Finding the out and in messages won't be hard.
The hard part will be writing a query that tells me about all deal IDs that have an outbound but not an inbound component. What would be the best approach to do this?
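One possible shape for that query (a sketch; the sourcetype, the two marker strings, and the DealId field name are placeholders for whatever the real events contain):
sourcetype=my_deal_log ("Sending outbound" OR "job done")
| eval direction=if(searchmatch("job done"), "inbound", "outbound")
| stats count(eval(direction="outbound")) AS out_count, count(eval(direction="inbound")) AS in_count by DealId
| where out_count > 0 AND in_count = 0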
- Tags:
- transactions
01-07-2014
06:31 PM
Fantastic, thank you!
I think I'll be able to extend it to look for more than one process quite easily.
01-07-2014
04:53 PM
I need to create a dashboard to indicate the existence of several processes.
I have the Splunk *nix add-on providing the ps info I want, but am struggling with how to get this meaningfully onto a dashboard. All of the visualisations I can see are fine for numerical values, but I really only need a boolean one. Is there some kind of binary (such as red light/green light) visual I can use to indicate the presence of a process within the last set of ps data received?
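One common pattern (a sketch; the index, sourcetype, process name, and the five-minute window are all placeholders) is to reduce the check to a count and colour it with rangemap:
index=os sourcetype=ps myprocess earliest=-5m
| stats count
| rangemap field=count severe=0-0 default=low
The resulting range field (low when the process was seen, severe when it wasn't) can then drive the colour of a Single Value panel, giving a green/red light per process.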
- Tags:
- uallort