Activity Feed
- Got Karma for Re: Why do variations in sourcetype appear?. 02-19-2024 10:27 PM
- Got Karma for How to manage deployment clients?. 02-08-2024 01:24 AM
- Got Karma for Re: How can I search for a missing field?. 06-10-2022 07:36 AM
- Got Karma for Re: How do I set a timerange to be the last full 7 days?. 12-16-2021 01:42 PM
- Got Karma for Re: Does Splunk index gzip files?. 03-24-2021 12:16 AM
- Got Karma for Does outputlookup append or overwrite?. 10-07-2020 08:30 AM
- Karma [splunkd.log error] DispatchSearch - Unable to saved search history for user=admin for bckq. 06-05-2020 12:46 AM
- Karma Re: Can I get a count of distinct values in multivalue field? for jonuwz. 06-05-2020 12:46 AM
- Karma Re: Is it possible to dynamically reload a new/updated tags.conf file? for Ayn. 06-05-2020 12:46 AM
- Karma Re: Show description in legend instead of numbers for MarioM. 06-05-2020 12:46 AM
- Karma How to install the Universal Forwarder on a Windows Cluster for jasonstone. 06-05-2020 12:46 AM
- Karma Re: How to install the Universal Forwarder on a Windows Cluster for bsherwoodofdapt. 06-05-2020 12:46 AM
- Karma Re: How to convert scientific notation to decimal? for tchen_splunk. 06-05-2020 12:46 AM
- Karma Splunk bootstrap themes for Lazarix. 06-05-2020 12:46 AM
- Got Karma for Can I get a count of distinct values in multivalue field?. 06-05-2020 12:46 AM
- Got Karma for Can I get a count of distinct values in multivalue field?. 06-05-2020 12:46 AM
- Got Karma for Can I get a count of distinct values in multivalue field?. 06-05-2020 12:46 AM
- Got Karma for How to tell the sort command to sort by numerical order instead of lexigraphical?. 06-05-2020 12:46 AM
- Got Karma for Is it possible to dynamically reload a new/updated tags.conf file?. 06-05-2020 12:46 AM
- Got Karma for How to extract a variable number of fields?. 06-05-2020 12:46 AM
03-07-2011
10:14 PM
6 Karma
You cannot have multiple REGEX parameters in the same transforms.conf stanza. You almost have it right by breaking this into 2 transforms, but they need to have unique names. Here's how you would split them in two and call them from props.conf:
-----props.conf-----
[mysourcetype]
TRANSFORMS-foo = WindowsLogonEvent675_Part1, WindowsLogonEvent675_Part2
-----transforms.conf-----
[WindowsLogonEvent675_Part1]
REGEX = (?msi)EventCode=4624.*Account Name:\s*(-)
DEST_KEY = _TCP_ROUTING
FORMAT = forwardauqldrv00mgt1ai
[WindowsLogonEvent675_Part2]
REGEX = (?msi)^EventCode=(632|4719|4728|4729|4670)
DEST_KEY = _TCP_ROUTING
FORMAT = forwardauqldrv00mgt1ai
02-09-2011
06:55 AM
No need to apologize, just let us know how it goes when you have the time to revisit this. 🙂
01-15-2011
12:23 AM
2 Karma
I am looking to take the results of one lookup and use that as input to another lookup for the same data source. Is this possible? In testing with Splunk 4.1 I was not able to get it working, but perhaps I missed something in the config. Here's what I attempted:
[mydatasource]
LOOKUP-ac1 = AreaCodeToCityLookup areacode OUTPUT city
LOOKUP-ac2 = CityToCoordinatesLookup city OUTPUT latitude, longitude
Both lookups are simple CSV lookups.
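For comparison, the equivalent chain at search time with explicit lookup commands would look something like this (a sketch reusing the names above):
sourcetype=mydatasource | lookup AreaCodeToCityLookup areacode OUTPUT city | lookup CityToCoordinatesLookup city OUTPUT latitude, longitude
If the search-time chain works but the automatic LOOKUP- chain does not, that would suggest the ordering of automatic lookups is the issue.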
Tags: lookups
01-15-2011
12:19 AM
3 Karma
I am curious what the performance difference, if any, is between sorted and unsorted lookups (sorting by the primary search key, of course).
01-10-2011
08:48 PM
Is the field extraction an inline extraction (e.g. EXTRACT-foo = ...)?
12-29-2010
02:32 AM
Have you considered dividing and conquering your very large job using Summary Indexing? This may remove the need for a monthly cron job altogether.
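For example, a scheduled search along these lines (the sourcetype and field are placeholders) could populate a summary index with hourly rollups, so the monthly job only has to read the much smaller summary data:
sourcetype=access_combined | sistats count by clientip
Schedule it hourly with summary indexing enabled, then point the monthly report at the summary index.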
12-29-2010
02:29 AM
1 Karma
Yes, this is possible. However, if you have 2 separate servers it may be best to keep both and have one distribute searches to the other. This way you are effectively searching both Splunk servers and get the added bonus of 2 servers sharing the work and executing in parallel. More on distributed search if this interests you: http://www.splunk.com/base/Documentation/latest/Admin/Whatisdistributedsearch.
If, however, you are looking to re-purpose one of the servers and truly need to consolidate your datastore, then the process is similar to backing up your Splunk datastore, covered here: http://www.splunk.com/base/Documentation/latest/Admin/Backupindexeddata.
This is the skeleton process (assuming you have enough storage):
1. Redirect the incoming data stream to Splunk1.
2. Shut down Splunk2.
3. Roll the hot bucket on Splunk2 to grab the latest data.
4. Move all buckets to Splunk1 after ensuring bucket sequence IDs are unique.
5. Repeat steps 3 and 4 for each index.
Steps 1 and 2 are self-explanatory.
For step 3, you can issue this command on the CLI:
./splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth <admin_username>:<admin_password>
For Step 4, on Splunk1 and Splunk2, look in
$SPLUNK_HOME/var/lib/splunk/defaultdb/db
$SPLUNK_HOME/var/lib/splunk/defaultdb/colddb
The directories in these folders all have a unique sequence ID at the end of the directory name:
db_#_#_id
You need to ensure all the directories in Splunk1 and Splunk2 have a unique ID. Write a script or change the sequence ID manually if there are any duplicates between Splunk1 and Splunk2. Then move all the directories from
Splunk2: $SPLUNK_HOME/var/lib/splunk/defaultdb/db
Splunk2: $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb
to
Splunk1: $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb
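As a rough sketch of the duplicate check in step 4 (the file names here are assumptions), capture the bucket directory listing on each server, then combine the two listings and compare the trailing sequence IDs:
ls -d $SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_* $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb/db_* > splunkN_buckets.txt
cat splunk1_buckets.txt splunk2_buckets.txt | awk -F_ '{print $NF}' | sort | uniq -d
Any ID printed by the last command appears on both servers and needs to be renumbered before the move.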
12-10-2010
08:50 PM
I'm not sure how both transforms are partially working. In the first transform, REGEX should use "," not "\s" as the separator. In the 2nd transform, DELIMS should be ",".
12-10-2010
08:38 PM
1 Karma
Is your preference not to use Splunk as your 12-month datastore? Splunk can retain all or any data for as long as you want (provided you have adequate storage capacity). It is simple to set a time-based retention policy instructing Splunk to retain the data for no less than 12 months.
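For example, a minimal indexes.conf entry for 12-month retention might look like this (the index name is a placeholder; 31536000 seconds is 365 days):
[myindex]
frozenTimePeriodInSecs = 31536000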
If you want to retain the data outside of Splunk, then there is no way to configure the batch processor to index and not delete. Your original use of the monitor input is the better option in this case.
Are you by chance using the Light Forwarder? If so, it has a setting to limit the size of output stream. In $SPLUNK_HOME/etc/apps/SplunkLightForwarder/default/limits.conf:
[thruput]
maxKBps = 256
This could be why you are seeing very slow uptake of the data in your monitored directory. You can set this higher to increase the output rate.
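For example, to raise the cap, override it in a local file rather than editing the default. In $SPLUNK_HOME/etc/apps/SplunkLightForwarder/local/limits.conf (1024 is just an illustration):
[thruput]
maxKBps = 1024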
Also, you might want to check the number of files in the monitor directory. Are they compressed? The number of files and whether they are compressed will also have an impact on the processing.
12-10-2010
06:34 PM
1 Karma
Not sure why host_regex is not working. Have you tried configuring it as an index-time field override?
props.conf:
[nessus]
TRANSFORMS-nessus = setHost
transforms.conf:
[setHost]
DEST_KEY = MetaData:Host
REGEX = .*?\|.*?\|(.*?)\|
FORMAT = host::$1
You will need to restart Splunk for this to take effect and it will only apply to new incoming events, not retroactively.
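Once new events start arriving, a quick sanity check might be:
sourcetype=nessus | stats count by host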
12-08-2010
07:03 PM
No updates yet, but I will be onsite with the customer next week and will update then.
12-01-2010
07:54 PM
2 Karma
You are correct--the original data source can be removed after indexing; Splunk no longer requires it, as a separate copy is now stored in the Splunk datastore. That said, you should ensure appropriate measures are in place to protect and preserve the data in Splunk should there be any software or hardware failures.
12-01-2010
07:47 PM
On Linux, are you running Splunk as root or another user? If running as a different user, you might want to check the user has permissions to access all files in the directory you are monitoring.
11-30-2010
11:57 PM
Note: I did not have to use the (?m) regex modifier in the REGEX field for transforms.conf. Somewhere along the way, Splunk automatically knows how to deal with multiline events.
11-30-2010
11:54 PM
1 Karma
These are the two options I would try:
1. configuration files
2. the rex command in the search bar
The easiest, but also most transient, option is to use the rex command inline in your search. For example:
sourcetype="multiline" | rex "CLOSE, loaded in (?<close_pe_rt>\S+)" | rex "FX_CLOSE, loaded in (?<fx_close_pe_rt>\S+)" | rex "XLA_ENV, loaded in (?<xla_env_pe_rt>\S+)" | rex "INTRADAY, loaded in (?<intraday_pe_rt>\S+)" | rex "CPTY_CREDIT, loaded in (?<cpty_credit_pe_rt>\S+)"
Maybe there's a way to do this in one rex invocation, but I tried several things which didn't work.
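One possibility I have not verified is a single rex with max_match, which yields multivalue fields (the field names here are made up) rather than one field per stage:
sourcetype="multiline" | rex max_match=0 "(?<stage>\w+), loaded in (?<load_time>\S+)"
Whether multivalue fields suit your reporting is another question.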
The other option is to add a few stanzas to props.conf and transforms.conf. For example,
in props.conf:
[multiline]
REPORT-foo = mlFields
in transforms.conf:
[mlFields]
REGEX = CLOSE, loaded in (\S+).* FX_CLOSE, loaded in (\S+).* XLA_ENV, loaded in (\S+).* INTRADAY, loaded in (\S+).* CPTY_CREDIT, loaded in (\S+)
FORMAT = close_pe_rt::$1 fx_close_pe_rt::$2 xla_env_pe_rt::$3 intraday_pe_rt::$4 cpty_credit_pe_rt::$5
You could also try using the Interactive Field Extractor (IFX).
11-17-2010
04:57 PM
So your saved/scheduled search was not found in any savedsearches.conf, even the one in $SPLUNK_HOME/etc/users/yourusername?
11-17-2010
07:34 AM
1 Karma
Hi bojanz,
You're right--the numbers are Unix timestamps. They signify the time of the latest event and earliest event, respectively, in the tsidx file. It's not abnormal for multiple tsidx files to share the same value, since multiple events occurring in the same second can be indexed across several tsidx files.
This naming convention allows Splunk to optimize the retrieval of events. Based on the time range specified in your search, Splunk will only search the tsidx files whose events fall within the time range.
How did you identify this bucket as problematic? And how did you determine that Splunk sometimes, rather than always, displays all indexed events?
11-17-2010
07:11 AM
Have you scheduled the search or is it simply saved?
11-17-2010
07:10 AM
When you show Account_Name as an enabled field in the Event Viewer, do you get multiple occurrences of Account_Name, or just one occurrence whose value is the first extraction (SERVERNAME$)?
11-17-2010
07:04 AM
1 Karma
I believe only the warm and cold dbs need to have unique ids.
11-16-2010
12:46 AM
1 Karma
Hi Andrew, I have seen this error when the -owner flag is not specified. What happens when you add the -owner flag?
11-11-2010
04:06 AM
Hi Alice,
Have you tried something like this?
* [search ipAddress=123 AND userId=123 AND productId=123 | fields + ipAddress,userId,productId | format "(" "(" "OR" ")" "OR" ")"]
11-09-2010
09:21 PM
2 Karma
If I have the deployment server enabled as a separate standalone instance of Splunk, can I use the forwarder license or should I have a small license carved from my enterprise Splunk license?
11-05-2010
11:49 PM
You can simply copy (or move) the directories where the summary indexes live to the same location on the new box. In your Splunk home directory, look in var/lib/splunk. There should be a directory for each of the summary indexes:
summary_daydb
summary_hourdb
summary_minutedb
To be safe, shut down Splunk then move the directories. Make sure your new Splunk instance is also stopped. Then you can start it up after the move completes.
Also, be sure to create the indexes using SplunkWeb or transfer the entries from indexes.conf.
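If you use indexes.conf, the entries on the new box might look like this (index name and paths assumed from the directory names above):
[summary_day]
homePath = $SPLUNK_HOME/var/lib/splunk/summary_daydb/db
coldPath = $SPLUNK_HOME/var/lib/splunk/summary_daydb/colddb
thawedPath = $SPLUNK_HOME/var/lib/splunk/summary_daydb/thaweddb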
11-05-2010
11:32 PM
Your input should be [WinEventLog:System] not [WMI:WinEventLog:System] if you are running Splunk as a light or regular forwarder.
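For reference, a minimal native event log stanza in inputs.conf would be something like:
[WinEventLog:System]
disabled = 0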
If you want to use WMI, then the entry for system event logs is:
[WMI:AppAndSys]
server = foo, bar
interval = 10
event_log_file = System
disabled = 0
http://www.splunk.com/base/Documentation/latest/Admin/MonitorWMIdata