I have the following lines in one of my logs, and I want to perform search-time field extraction on the sourcetype.
1413346111:UPLOAD:lsv-internal-9872:10.11.12.13:/path/to-2/my_file/111AAA--0.xml:/path/to-2/my_file/111AAA--0.xml.1413346111:57494:1406.90Kbyte/sec:Wed Oct 15 00:08:31 2014
1410292368:DELETE:lsv-mgr:184.108.40.206:/path/to-2/my_file/111AAA--0.xml.xml.1410292363:Tue Sep 9 15:52:48 2014
I have the following entry in props.conf
MAX_TIMESTAMP_LOOKAHEAD = 15
KV_MODE = none
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
REPORT-secExt = secUploadExtract,secDeleteExtract
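For reference, here is a sketch of what the complete props.conf stanza might look like. Settings must live under a sourcetype stanza header to take effect; I'm assuming the stanza name fieldsecext here, matching the sourcetype used in the searches later in this thread.

```ini
# props.conf -- sketch; stanza name assumed to be the sourcetype
[fieldsecext]
MAX_TIMESTAMP_LOOKAHEAD = 15
KV_MODE = none
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
REPORT-secExt = secUploadExtract,secDeleteExtract
```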
and following transforms.conf
REGEX = UPLOAD:([^:]*):([^:]*):([^:]*):([^:]*):([^:]*):([^:]*)
FORMAT = ftpAccount::"$1" ftpClientIP::"$2" ftpOrigFileName::"$3" ftpRenamedFile::"$4" ftpFileSize::"$5" ftpTxRate::"$6"
REGEX = DELETE:([^:]*):([^:]*):([^:]*)
FORMAT = ftpAccount::"$1" ftpClientIP::"$2" ftpRenamedFile::"$3"
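For completeness, each transform needs its own stanza in transforms.conf, and the stanza names must match the names listed in the REPORT- setting. A sketch:

```ini
# transforms.conf -- sketch; stanza names must match the REPORT- list
[secUploadExtract]
REGEX = UPLOAD:([^:]*):([^:]*):([^:]*):([^:]*):([^:]*):([^:]*)
FORMAT = ftpAccount::"$1" ftpClientIP::"$2" ftpOrigFileName::"$3" ftpRenamedFile::"$4" ftpFileSize::"$5" ftpTxRate::"$6"

[secDeleteExtract]
REGEX = DELETE:([^:]*):([^:]*):([^:]*)
FORMAT = ftpAccount::"$1" ftpClientIP::"$2" ftpRenamedFile::"$3"
```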
I have tested the regexes in several online regex-testing tools and am pretty confident they work; however, I am unable to see the fields in Splunk at search time. Any suggestions to make this work?
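To double-check that the regexes themselves are not the problem, here is a small standalone Python check that runs both patterns against the two sample events from the question (regex semantics here are close enough to PCRE for these simple patterns):

```python
import re

# The two sample events from the question
upload_event = ("1413346111:UPLOAD:lsv-internal-9872:10.11.12.13:"
                "/path/to-2/my_file/111AAA--0.xml:"
                "/path/to-2/my_file/111AAA--0.xml.1413346111:"
                "57494:1406.90Kbyte/sec:Wed Oct 15 00:08:31 2014")
delete_event = ("1410292368:DELETE:lsv-mgr:184.108.40.206:"
                "/path/to-2/my_file/111AAA--0.xml.xml.1410292363:"
                "Tue Sep 9 15:52:48 2014")

# The regexes from transforms.conf, unchanged
upload_re = re.compile(r"UPLOAD:([^:]*):([^:]*):([^:]*):([^:]*):([^:]*):([^:]*)")
delete_re = re.compile(r"DELETE:([^:]*):([^:]*):([^:]*)")

# Groups map to: ftpAccount, ftpClientIP, ftpOrigFileName,
# ftpRenamedFile, ftpFileSize, ftpTxRate
m_upload = upload_re.search(upload_event)
print(m_upload.groups())

# Groups map to: ftpAccount, ftpClientIP, ftpRenamedFile
m_delete = delete_re.search(delete_event)
print(m_delete.groups())
```

Both patterns match cleanly, which suggests the problem is in how Splunk is loading the configuration, not in the regexes.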
If you search for the data first and then explicitly call the transform (like | extract secDeleteExtract), does it find the fields? I wonder whether the base sourcetype stanza in your props.conf actually matches the data.
Next, since this is colon-separated, you might consider rules like this instead:
[secDeleteExtract]
DELIMS = ":"
FIELDS = stamp, action, ftpAccount, ftpClientIP, ftpRenamedFile, date
This tends to be a bit more performant than regex rules. You'll want to be sure to list every field; in this case, I've called the epoch timestamp 'stamp' in the first position.
If you have access to the filesystem, can you try running 'btool props list field_sec_ext' ? This will apply Splunk's configuration layering rules to tell you what the running configuration would be for that sourcetype.
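btool is invoked through the splunk binary; a sketch of the invocation, assuming a default install where $SPLUNK_HOME points at the Splunk directory:

```
# Run on the Splunk server itself
$SPLUNK_HOME/bin/splunk btool props list fieldsecext --debug
```

The --debug flag additionally prints which .conf file each setting was loaded from, which makes scope and layering problems much easier to spot.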
I am running the search as index=test sourcetype=fieldsecext UPLOAD (or sometimes DELETE).
I get an error message like the one below if I use extract after the search (index=test sourcetype=fieldsecext UPLOAD | extract secUploadExtract).
10-21-2014 12:49:36.813 ERROR SearchOperator:kv - Error in 'extract' command: Failed to parse the key-value pair configuration for transform 'secUploadExtract'.
I removed the KV_MODE = none from props.conf and still see the same error message.
If I use DELIMS, how do I correctly assign fields for UPLOAD and DELETE? You will notice that the field after ftpClientIP differs between the two. Can I use REGEX together with DELIMS?
With DELIMS you'd have to provide a field name to hold either UPLOAD or DELETE, and then list fields for the full possible width of the event (the maximum number of fields that may appear). Then you could do an EVAL-based field derivation keyed on the value of that "action" field (UPLOAD or DELETE, etc.). This is admittedly messy. Let's see if we can't figure out what's wrong with your regexes.
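A rough sketch of that messy approach, for illustration only. The stanza name secFtpDelims and the positional field names f5 through f9 are hypothetical placeholders:

```ini
# transforms.conf -- one delimiter rule wide enough for either event shape
[secFtpDelims]
DELIMS = ":"
FIELDS = stamp, action, ftpAccount, ftpClientIP, f5, f6, f7, f8, f9

# props.conf -- derive the real field names from the action value
[fieldsecext]
REPORT-secExt = secFtpDelims
EVAL-ftpOrigFileName = if(action=="UPLOAD", f5, null())
EVAL-ftpRenamedFile  = if(action=="UPLOAD", f6, f5)
EVAL-ftpFileSize     = if(action=="UPLOAD", f7, null())
EVAL-ftpTxRate       = if(action=="UPLOAD", f8, null())
```

For UPLOAD events f5 is the original file name and f6 the renamed file; for DELETE events f5 is the renamed file, hence the conditional mapping. As noted, the regex-based transforms are cleaner.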
Sounds like Splunk doesn't like the rule definition for secUploadExtract. I'll try to simulate it on my local box.
Interesting. I don't see any issue with your rules when I replicate them on my local instance. I stopped Splunk, created an entry for the sourcetype in my etc/apps/search/local/props.conf (to call the REPORT rules) and in transforms.conf (to define the regexes). Once I started Splunk and ran a oneshot to add the data, I got the field extractions without issue.
Did you use the UI to write those rules (Settings -> Fields)? I wonder whether they're in user scope, rather than app scope. The method I described to create those settings puts them automatically in app scope (and the search app shares globally).
What do you see for the permissions on those extraction rules?
I had created the props and transforms within a TA on my system, but I forgot to update the permissions. As soon as they were updated: Booya!!!!
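For anyone hitting the same wall: sharing an app's props and transforms beyond the app itself can be done through the UI permissions dialog, or directly in the app's metadata. A sketch, assuming a hypothetical TA named my_ta:

```ini
# etc/apps/my_ta/metadata/local.meta -- app name is hypothetical
[props]
export = system

[transforms]
export = system
```

Without the export, the knowledge objects stay private to the app (or to the user who created them via the UI) and won't apply to searches run from other apps.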
Thanks for pointing out the very basic step I missed.