I think you are close to what you want, but there is at least one error: the spaces you had in the regex. Specifying ".*^Type=Success Audit" in the regex is also unnecessary. I also modified the sourcetype name in the props.conf stanza (are you actually collecting the logs via WMI?)
Try this:
props.conf changes
[WinEventLog:Security]
TRANSFORMS-set=setnull
transforms.conf changes
[setnull]
REGEX=(?mi)^EventCode=(4674)
DEST_KEY=queue
FORMAT=nullQueue
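As a sanity check, you can test that regex outside Splunk. A minimal sketch in Python (the sample events below are invented for illustration):

```python
import re

# The transforms.conf REGEX: (?m) = multiline, (?i) = case-insensitive,
# matching a line that starts with EventCode=4674.
null_queue = re.compile(r"(?mi)^EventCode=(4674)")

# Invented sample events (assumptions), one string per multi-line event.
event_drop = "LogName=Security\nEventCode=4674\nType=Success Audit"
event_keep = "LogName=Security\nEventCode=4624\nType=Success Audit"

print(bool(null_queue.search(event_drop)))  # True  -> sent to nullQueue
print(bool(null_queue.search(event_keep)))  # False -> indexed normally
```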
A heavy forwarder can filter logs, but what I think you want is to forward the logs, in their entirety, to the indexer using a universal forwarder. The indexer component of Splunk then stores and indexes the logs so they can be parsed, searched, and used to generate alerts from the search head (note that in a small environment, the search head could be the same physical device as the indexer).
Splunk provides awesome documentation that can get you started (I use it frequently myself 😉):
http://docs.splunk.com/Documentation/Splunk/latest/Tutorial/WelcometotheSplunkTutorial
You could utilize syslog if you do not want to install a forwarder on the Linux/Unix systems in question. Best practice would be to set up a host running a syslog daemon such as rsyslog or syslog-ng (a syslog aggregator). This host would also have a Splunk forwarder on it, to read and forward the logs collected from your "agentless" systems.
You can specify the sourcetype in a few places.
On the forwarder, in inputs.conf:
[monitor://path to log file]
sourcetype=log4j
OR
On the indexer, in props.conf:
[source::full path to log file]
sourcetype=log4j
Your description is a bit confusing. You say that "the inputs on the index was set to log4j"; do you mean that the inputs.conf file on the forwarder, within the stanza collecting this file, has a line that says "sourcetype=log4j"?
It's possible that there is another inputs.conf file that is overriding your inputs.conf file. Run the following command to check the active configuration:
$SPLUNK_HOME/bin/splunk cmd btool inputs list --debug
A further description of the tool is here:
http://docs.splunk.com/Documentation/Splunk/5.0.2/Troubleshooting/Usebtooltotroubleshootconfigurations
A wrapper is the way to go here if you want Splunk to be the scheduler. The alternative is to schedule the scripts to run separately and write the output to a file that Splunk is monitoring.
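A minimal sketch of such a wrapper as a Splunk scripted input, in Python: it runs each sub-script in turn and prints the combined output, which Splunk then indexes as one stream. The echo commands stand in for your real scripts (the names are assumptions):

```python
import subprocess
import sys

# Hypothetical list of commands Splunk should run on one schedule;
# replace these stand-ins with your real script paths.
SCRIPTS = [["echo", "output from script one"],
           ["echo", "output from script two"]]

def run_all(scripts):
    """Run each command and return their combined stdout."""
    chunks = []
    for cmd in scripts:
        result = subprocess.run(cmd, capture_output=True, text=True)
        chunks.append(result.stdout.rstrip())
    return "\n".join(chunks)

if __name__ == "__main__":
    sys.stdout.write(run_all(SCRIPTS) + "\n")
```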
You mentioned that the forwarder is running as a local system account. Is the UF on the same device as the BizTalk server? If not, it seems likely that it's a permissions problem.
Can you (as a test) run the UF under the same account that you used to run the query from the CLI?
The "Start time" and "Finish time" fields in the saved searches page are for relative time modifiers, not exact date/time. If you did want to save the search to run over exact times, you can still leave the "earliest" and "latest" terms in your search. The "Start time" and "Finish time" fields are not required for the saved search.
You actually don't need transforms; you could put the following into props.conf on the search head:
EXTRACT-field = (?&lt;MyField&gt;\d+)\)$
This would go in the stanza for the source/sourcetype in question.
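To sanity-check the extraction outside Splunk, note that Splunk's PCRE-style (?&lt;name&gt;...) becomes (?P&lt;name&gt;...) in Python. A quick sketch (the sample event text is invented):

```python
import re

# Same pattern as the EXTRACT above, in Python's named-group syntax.
extract = re.compile(r"(?P<MyField>\d+)\)$")

# Invented sample event ending in a number and ")" (assumption).
m = extract.search("transaction completed (id=98765)")
print(m.group("MyField"))  # 98765
```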
NetApp's Data ONTAP OS uses a syslog.conf file to configure the syslog daemon. I think you can just use this syntax to forward to a different port on the remote server:
*.* @remotehost:12345
I made a few assumptions about the props.conf. Just to be sure, is the props.conf on the indexer? Does the stanza for the checkpoint data have the correct name to match the sourcetype? Can you post the entire stanza for checkpoint? It's always best to be explicit. Maybe something like this:
[checkpoint]
TIME_PREFIX = ^(?:(?:[^\s]+)\s){5}
TIME_FORMAT = %d%b%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
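You can verify outside Splunk what those timestamp settings would do; a hedged sketch in Python with an invented sample line (real Check Point output may differ):

```python
import re
from datetime import datetime

# Invented sample event: five space-delimited tokens, then the timestamp.
line = "fw1 accept eth0 inbound tcp 12Mar2013 14:05:22 src=10.0.0.1"

# TIME_PREFIX = ^(?:(?:[^\s]+)\s){5}  -> skip the first five tokens
prefix = re.match(r"^(?:(?:[^\s]+)\s){5}", line)
rest = line[prefix.end():]

# MAX_TIMESTAMP_LOOKAHEAD = 25 -> Splunk only scans this far for the time
candidate = rest[:25]

# TIME_FORMAT = %d%b%Y %H:%M:%S
stamp = re.match(r"\d{1,2}[A-Za-z]{3}\d{4} \d{2}:\d{2}:\d{2}", candidate)
print(datetime.strptime(stamp.group(0), "%d%b%Y %H:%M:%S"))
# 2013-03-12 14:05:22
```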
In ES 2.2.0, only TAs with a name that begins with "TA-" or "Splunk_TA_" will be imported into the configuration by default, and then only during setup. To add a custom TA, you must take steps to include it in the configuration and rerun the setup step of ES. This is not as bad as it sounds and is described here:
http://docs.splunk.com/Documentation/ES/latest/Install/InstallTechnologyAdd-ons
The point of this new feature is to eliminate conflicts with TAs that are incompatible with ES.
Using props and transforms in this case is the right thing to do, if you can. An alternative, if you do not have that access, may be to use multiple extractions to get all of the fields, using regexes such as these:
^(?:(?&lt;field1&gt;[^,]*),){1} # catches the first field
^(?:(?&lt;field2&gt;[^,]*),){2} # catches the second field
...
^(?:(?&lt;field20&gt;[^,]*),){20} # catches the twentieth field
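The trick works because a repeated group keeps only its last match, so the repetition count {n} selects the nth field. A quick Python check (Splunk's (?&lt;name&gt;...) is (?P&lt;name&gt;...) in Python; the sample event is invented):

```python
import re

# Invented comma-delimited sample event (assumption).
event = "alpha,beta,gamma,delta,"

# Each repetition overwrites the named group's capture, so after {2}
# repetitions the group holds the second field.
field1 = re.match(r"^(?:(?P<field1>[^,]*),){1}", event).group("field1")
field2 = re.match(r"^(?:(?P<field2>[^,]*),){2}", event).group("field2")
print(field1, field2)  # alpha beta
```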
FYI, you can either escape the slashes in your answer or highlight the "code" portion of your answer and click the "code sample" (1s and 0s) button on the formatting menu.
Hi there-
I'm not sure what you've tried already, but you have a couple of options:
1. Utilize Splunk's lookup functionality:
a. write the data you want to save out to a CSV file, using the outputlookup command, http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Outputlookup
b. read the data back in from the file, using the inputlookup command, http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Inputlookup
2. Schedule the search that you've created to extract your values, and choose to write the results to a summary index. You can then search that summary index for the relevant values.
Although option 2 is valid, it may be overkill, depending on how you want to manage your data. I would start out with the first option and see if that does what you need it to.
runDuration is what you want for the full time that the search took. Toward the top of that report window (Execution costs), there is a further breakdown showing where time was spent during the search, which can help you optimize your searches.
There is of course additional information in the referenced search.log file, in case you need to refer to the raw data that the report is drawing from.