Here is the order of search-time operations:
1. sourcetype rename
2. EXTRACT-xxx
3. REPORT-xxx
4. KV_MODE
5. FIELDALIAS-xxx
6. EVAL-xxx
7. LOOKUP-xxx
8. MILLISECONDS
9. FILTER
10. event typing
11. tagging
As you can see, EVAL occurs before LOOKUP.
What you might consider is not coding the lookup into props.conf, but instead running the lookup as part of your search and then doing the eval after the lookup.
If it's something you need to do often, a macro would simplify it.
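A sketch of that approach (index, lookup, and field names here are placeholders for your own):

```
index=my_index sourcetype=my_sourcetype
| lookup my_lookup host OUTPUT owner
| eval owner=coalesce(owner, "unknown")
```

If you wrap the lookup/eval pair in a search macro, users only need to add the macro call (in backticks) to their searches.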
Text input tokens can be passed via the query string as part of the URL that launches the page.
In the URL, the first parameter starts with ? and subsequent ones use &: https://someurl:8000?form.name1=abc&form.name2=xyz
This will set those values when the page loads for the user.
If you are running on Linux, how many file handles are available to Splunk? Look at the beginning of splunkd.log for an event like this:
04-07-2016 12:48:03.894 -0400 INFO ulimit - Limit: open files: 10240 files [hard maximum: unlimited]
If your number is the default value (1024) you will need to increase that number to something like 16000.
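A quick way to check the limit for the account that runs Splunk (a sketch; 16000 is just a common starting point, and the `splunk` user name is an assumption):

```shell
# Show the per-process open-file limit for the current shell.
# Run this as the user that launches splunkd.
ulimit -n

# To raise it persistently on most Linux distros, add lines like
# these to /etc/security/limits.conf, then log out/in and
# restart Splunk:
#   splunk soft nofile 16000
#   splunk hard nofile 16000
```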
Since your tstats search uses summariesonly=true, it's possible that your data model accelerations are not keeping up and there are no summarized results in the time window.
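One way to confirm is to run the same tstats over the same time range with summariesonly=false (the datamodel name here is a placeholder):

```
| tstats summariesonly=false count from datamodel=My_Datamodel
```

If this returns a count but the summariesonly=true version returns nothing, the acceleration summaries are behind for that window.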
Do you already have an eventtype for one of the events in the transaction? I think that should carry over into the resulting transaction. Maybe something as simple as basing it on the sourcetype of one of the events.
I suggest that you examine the actual event logs on one of the servers with the Event Viewer and see if the problem originates there. If the logs are OK, I would reinstall the UF on those servers.
You didn't show the time period you picked for your search. If it was the last 90 days, then it would only show events in that range.
As for the frozenTimePeriodInSecs setting: a bucket is only frozen when the youngest event in the bucket exceeds that time period. So there can be buckets where most, but not all, of the events are older than 90 days, and those buckets will still be in the cold db.
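For reference, that retention setting lives in indexes.conf on the indexers; a sketch (the index name is a placeholder; 90 days = 7776000 seconds):

```
[my_index]
frozenTimePeriodInSecs = 7776000
```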
When you view the raw events in verbose search mode you should see the field names. What is the field name? If it is just "server" you should consider creating either an EXTRACT or REPORT in the props.conf for that source or sourcetype.
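As a sketch, assuming the raw text looks something like "server myhost01" (the stanza name and regex are placeholders for your actual sourcetype and event format):

```
[my_sourcetype]
EXTRACT-server = server\s+(?<server>\S+)
```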
You might mention what app you updated that produced these errors.
I suspect that there is an issue with one or more of the lookup tables in the updated app: either a lookup table name is wrong in a search, or a table is now missing, or a field in a table is missing or misnamed.
You can look inside the search artifact at info.csv; it should give you some clue about which condition is generating the errors.
Run your search up to, but not including, the eval, and table the fields that feed the distance calculation. I suspect you are getting a value that is not a number.
You might also consider downloading the haversine app to do the calculation for you:
https://splunkbase.splunk.com/app/936/
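A quick way to find the offending events (lat and lon here stand in for whatever fields feed your distance eval):

```
... | where isnull(tonumber(lat)) OR isnull(tonumber(lon))
    | table _raw lat lon
```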
You didn't mention what the data is being sourcetyped as now. Still infoblox:file?
btool is your friend. I suggest that you open a terminal session to one of the indexers and run the command:
splunk btool props list --debug > /tmp/props.txt
and
splunk btool transforms list --debug > /tmp/transforms.txt
First examine the props.txt and look for the [infoblox:file] stanza. Make sure that it has the TRANSFORMS-0 setting.
Next examine the transforms.txt file and make sure that it has the actual transforms listed from the props.conf settings.
Since you are specifying the file name and not the lookup name (defined in transforms.conf), are you sure all the saved searches are running in the same app context? If not, they will write the lookup files to different app lookup directories.
If you define the table name and make it global, it won't matter what app context the searches run under; they will update the correct lookup file when you outputlookup.
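A sketch of defining the table and sharing it globally (all names here are placeholders):

```
# transforms.conf (in the app that owns the lookup)
[my_lookup]
filename = my_lookup.csv

# metadata/local.meta (share the definition and file globally)
[transforms/my_lookup]
export = system
[lookups/my_lookup.csv]
export = system
```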
If you know the source names that you expect to see from the syslog server, you can easily use a metadata search and check the lastTime value for each source name. I like to use a regex filter to match only certain file names, plus a time threshold to wait before alerting.
| metadata type=sources index=* | regex source="" | eval lt=now() - lastTime | where lt>300
Schedule this to run on a 5 minute (or whatever) interval you need to check.
The reason for having csv files in a lookups directory is so that you can use the contents of the csv to provide data enrichment (usually to some other data source). If all you want is to make the csv data searchable, then all you have to do is index the csv files.
If you want to turn them into lookup tables then you will need to do a couple more steps (assuming you can't get them to your search head directly).
I'll give you an outline of the steps you need to go through:
1. Create an app to monitor the .csv files on the NAS.
2. Create a search to return the data in a table format.
3. Use the outputlookup command to create a lookup table inside of some app.
For step 1, an inputs.conf monitor stanza:
[monitor:///mountpoint/my_data.csv]
sourcetype = some_csv
index = test
For steps 2 and 3, a search along these lines (the lookup name is a placeholder):
index=test sourcetype=some_csv | table * | outputlookup my_lookup.csv
You can't use tcpout to send to a 3rd party tool as that tool does not recognize the splunk tcp protocol. If you want to forward data to a 3rd party system you are going to have to use syslog.
Here are some samples:
outputs.conf
[syslog:third_party]
server = 10.37.7.44:4444
type = tcp
maxEventSize = 38000
Note the maxEventSize - you may not need it that large.
You'll also need a props.conf and transforms.conf to select what data to send:
props.conf
[apache]
TRANSFORMS-send_syslog=to_syslog
transforms.conf
[to_syslog]
REGEX = .*
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party
You should crank up the logging level on the deployer's ConfDeployment log channel to DEBUG via the Server Settings | Server Logging UI. Then run the apply command again and review the details in splunkd.log.
You may also need to look at splunkd.log on the target server for error messages.
First you need to make sure you are installing with an administrator or equivalent account.
If you are, then to see what's happening you need to run the msiexec program with its log-to-file option. Something like this:
Open a cmd window running as administrator and enter the command (the installer and log file names are placeholders):
msiexec.exe /i <installer.msi> /L*v <logfile.txt>
Check the log file for errors.
More details about the msiexec program at: https://technet.microsoft.com/en-us/library/cc759262(v=ws.10).aspx
You can uninstall by simply removing the directory SA-ldapsearch from the apps directory and restarting Splunk.
If you re-install you will still need to make the edits described above after you configure your connection. There is a log file that may contain more details at: $SPLUNK_HOME/var/log/splunk/SA-ldapsearch.log. You may also increase the logging level for the app in the file: logging.conf to DEBUG, then restart Splunk. This should show you more details in the log about what is wrong.
Note: you do NOT deploy this app to indexers as mentioned below. This app stays on the search head.
If you are not running in a search head cluster, you will need to edit the default/commands.conf settings per the documentation:
1. With a text editor, open the file $SPLUNK_HOME\etc\apps\SA-ldapsearch\default\commands.conf for editing.
2. In each stanza within this file, change the entry local = false to local = true.
3. Save the file and close it.
4. Restart Splunk Enterprise on the instance.
Try this:
eventtypes.conf
[iis_events]
search = sourcetype=iis
tags.conf
[eventtype=iis_events]
web = enabled
props.conf
[iis]
FIELDALIAS-c_ip = c_ip as src
FIELDALIAS-cs_Cookie = cs_Cookie as cookie
FIELDALIAS-cs_Referer = cs_Referer as http_referrer
FIELDALIAS-cs_User_Agent = cs_User_Agent as http_user_agent
FIELDALIAS-cs_bytes = cs_bytes as bytes_in
FIELDALIAS-s_ip = s_ip as dest
FIELDALIAS-cs_method = cs_method as http_method
FIELDALIAS-cs_uri_stem = cs_uri_stem as uri_path
FIELDALIAS-s_sitename = s_sitename as site
FIELDALIAS-sc_bytes = sc_bytes as bytes_out
FIELDALIAS-sc_status = sc_status as status
FIELDALIAS-cs_username = cs_username as user
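Once these settings are in place, a quick sanity check over a short time range should show the aliased fields populated:

```
sourcetype=iis eventtype=iis_events | table src dest http_method uri_path status bytes_in bytes_out
```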
A simple fix for removing the repeated detailed description from the message field while keeping the details:
in props.conf
# message shortener for windows event security
# removes text from message field starting with: This event is generated
[WinEventLog:Security]
TRANSFORMS-windows_events = win_event_shortener
in transforms.conf
[win_event_shortener]
DEST_KEY = _raw
REGEX = ((.*+[\v])+)(?=This event is generated)
FORMAT = $1