Activity Feed
- Karma Re: Reporting on multiple fields for kristian_kolb. 06-05-2020 12:46 AM
- Got Karma for Can you create a dashboard with an adjustable time frame for searches?. 06-05-2020 12:46 AM
- Got Karma for Can you create a dashboard with an adjustable time frame for searches?. 06-05-2020 12:46 AM
- Got Karma for Can you create a dashboard with an adjustable time frame for searches?. 06-05-2020 12:46 AM
- Got Karma for Is there an efficient way to learn Splunk?. 06-05-2020 12:46 AM
- Got Karma for Is there an efficient way to learn Splunk?. 06-05-2020 12:46 AM
- Got Karma for Re: Is there an efficient way to learn Splunk?. 06-05-2020 12:46 AM
- Got Karma for Can you customize the heatmap colors?. 06-05-2020 12:46 AM
- Got Karma for Can you customize the heatmap colors?. 06-05-2020 12:46 AM
- Got Karma for Splunk migration: Tips for moving data, saved searches, and reports?. 06-05-2020 12:45 AM
- Got Karma for Re: Splunk migration: Tips for moving data, saved searches, and reports?. 06-05-2020 12:45 AM
- Got Karma for Re: Splunk migration: Tips for moving data, saved searches, and reports?. 06-05-2020 12:45 AM
- Got Karma for Is there any way to print reports to PDF on Windows?. 06-05-2020 12:45 AM
- Got Karma for Re: Extracting JSON from POST data. 06-05-2020 12:45 AM
- Got Karma for Example of doing an external lookup using HTTP GET or POST?. 06-05-2020 12:45 AM
- Got Karma for Example of doing an external lookup using HTTP GET or POST?. 06-05-2020 12:45 AM
- Posted Re: Forwarder and props.conf troubleshooting on Getting Data In. 07-01-2012 12:11 AM
- Posted Re: Forwarder and props.conf troubleshooting on Getting Data In. 06-27-2012 12:03 AM
- Posted Re: Forwarder and props.conf troubleshooting on Getting Data In. 06-26-2012 08:36 PM
- Posted Forwarder and props.conf troubleshooting on Getting Data In. 06-26-2012 04:57 PM
07-01-2012
12:11 AM
Thanks for your comment about the faulty regex pattern. I tried it with rex against _raw to do an extraction and it seemed to work. Obviously, I wouldn't be posting if everything were working... Pulling the host value from the log is very much my preferred solution.
In this log, the host name is almost at the end of the line. Below are the values from the sample posted earlier:
345101-VM3
345999-VM4
SRST-Remote-2
Is there a smart strategy for testing out regex patterns for use in this situation?
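For reference, the kind of sanity check I've been running in search looks like this (extracted_host is just a throwaway field name of mine):
sourcetype="action" | rex field=_raw "(?i)^(?:[^ ]* ){10}(?P<extracted_host>[^ ]+)" | stats count by extracted_host
That shows me which distinct values the pattern pulls out, but it doesn't tell me whether the same pattern behaves the same way inside transforms.conf.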
06-27-2012
12:03 AM
Thanks for the answer! It sounds like what you describe will work in my case and I like being able to use a Universal Forwarder. This will take a bit of a rework on the logging machine as it will need to detect when a new host has been introduced into the data. I'd still prefer to do it using a transformation but that just doesn't seem to be happening.
Thanks again!
06-26-2012
08:36 PM
Thanks very much for answering. I'm posting an answer as the details take more than a comment will hold.
All of our logs are custom, such as "action log", "error log", "system check log" and "access log". In other words, I'm not dealing with syslog-ng or any log type that Splunk already has a sourcetype definition for. I've seen mention of having a directory per machine. In my case, would that end up something like this?
logs/vm-1/
access_log.txt
action_log.txt
error_log.txt
logs/vm-2/
access_log.txt
action_log.txt
error_log.txt
logs/vm-3/
access_log.txt
action_log.txt
error_log.txt
etc. Hopefully, I'm getting this wrong. A setup like the above would be possible, but it would require reconfiguring inputs.conf and restarting Splunk each time the roster of contributing machines changes, which is far too much hands-on activity for us. What I'm really after is overriding the host by extracting the value from the log, and I'd like (probably need, given organizational constraints) to do this on the forwarder, not the indexer. The position of the host in any particular log is predictable (and in our control). I've been through the docs and threads here over and over with no joy, so I must be missing something:
My regex is wrong? (It seems right using rex)
My props.conf/transforms.conf settings are being ignored because I'm not a heavy forwarder?
Something else?
Here are some file samples, cut down to focus on just one of the logs. In fact, there are a bunch of logs with the same overall approach (central log server, host name inside of the log data, need to do an override.)
# action_log.txt sample
[23/Jun/2012:01:50:06 +0000] add appuser FANGLET-AU000-0000000000 0 1336643d3082237d75191d4362fbd941 - 1.0 - 345101-VM3 SRST
[23/Jun/2012:01:51:38 +0000] add appuser FANGLET-US000-0000000000 0 9fb0638027e115dc36a313700ada3f54 - 1.1.4 - 345101-VM3 SRST
[23/Jun/2012:01:51:53 +0000] add appuser FANGLET-AU-EGGPLNT 0 d1128ee5236b17a41825832b890a8091 cdma_spyder 1.0 10 345101-VM3 SRST
[23/Jun/2012:01:52:47 +0000] add appuser FANGLET-AU000-0000000000 0 5d3ded5a3efbc9102c85e319d08c461d - 1.0 - 345101-VM3 SRST
[23/Jun/2012:06:48:04 +0000] add appuser FRINDO-UK-EGGPLNT 0 c9e9d9c86fe1592e3427592c4c4bc6a buzz 1.0 8 345999-VM4 SRST
[23/Jun/2012:06:48:20 +0000] add appuser FANGLET-AU000-0000000000 0 d0e3cc86221875df6485f28e6246bcf8 - 1.0 - 345999-VM4 SRST
[23/Jun/2012:06:48:56 +0000] add appuser FRINDO-US000-0000000000 0 459d7c547c40efa025feb0ea9fd93998 - 1.1.4 - 345999-VM4 SRST
[23/Jun/2012:06:48:57 +0000] add appuser FRINDO-US000-0000000000 0 8321965193395108fe7d85878f8c9a43 - 1.1.4 - SRST-Remote-2 SRST
# inputs.conf
[monitor://C:\Program Files\xyz\logs\action_log.txt]
disabled = false
index = xyz
followTail = 0
sourcetype = action
# props.conf
[source::.../action_log.txt]
TRANSFORMS-action-host=action_host_override
sourcetype = action
TZ = UTC
# transforms.conf
[action_host_override]
DEST_KEY = MetaData:Host
REGEX = (?i)^(?:[^ ]* ){10}([^ ]+)
FORMAT = host::$1
# outputs.conf
[tcpout]
defaultGroup = lb_9997
disabled = false
maxQueueSize = 1000
indexAndForward = false
forwardedindex.filter.disable = true
[tcpout:lb_9997]
disabled = false
server = x.y.z:9997
autoLB = true
autoLBFrequency = 60
compressed = true
[tcpout-server://x.y.z:9997]
altCommonNametoCheck = idx
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslCommonNametoCheck = x.y.z
sslPassword = $1$Q929lfZOAu5w
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslVerifyServerCert = false
Thanks again for writing and trying to help! There's a bit more on my transforms.conf efforts here.
06-26-2012
04:57 PM
I've been having trouble getting a host override transformation in my props.conf/transforms.conf to work and want to figure out if the problem is in the config files or in the running environment. I've got a centralized log server (forwarder) that writes out logs with data drawn from multiple hosts. We want to override the default host so that the indexer receiving the logs shows the original host name, not the log server's name. As I understand it:
Performing a host override is possible and normal.
The common practice is to run the transform on the indexer; for internal policy reasons, that's either impossible or at least unlikely to happen here.
Only a 'heavy' forwarder will process the transform, not a light or universal forwarder.
What I'm trying to confirm now:
What kind of forwarder is running on my centralized log server?
Is there a way to get some feedback from the forwarder regarding the application of transforms? splunkd.log doesn't seem to have any relevant entries.
In other words, how can I check that the forwarder is a heavy forwarder and how do I check what (if any) transformations are being applied?
For info:
splunk enable app SplunkForwarder
The SplunkForwarder is listed as unconfigured, enabled, and invisible. I've got config files in place and the Web management GUI lists the forwarding rule and says that it's enabled. That doesn't add up to a coherent picture for me. Is this what a heavy forwarder should look like?
splunk list-forward-server
This returns "Active forwards: None" but "Configured but inactive forwards" lists the forward that I'm using. Events are forwarding without any obvious problem.
I think that the SSL part of the forward may not be working correctly, if that's relevant.
Other info:
* Events are flowing through to the Splunk indexer. So, forwarding is happening.
* I'm using custom logs with custom sourcetypes defined on the Splunk forwarder. These sourcetypes are recognized by the indexer, so I take it that my props.conf and inputs.conf are in the right locations.
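In case it's useful for diagnosis, these are the checks I know how to run from the CLI on the log server (commands as I understand them from the docs; happy to run anything else and post the output):
# show the version on the log server
splunk version
# check the state of the forwarder apps
splunk display app SplunkForwarder
splunk display app SplunkLightForwarder
# show the effective props/transforms settings and which files they come from
splunk cmd btool props list --debug
splunk cmd btool transforms list --debug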
06-25-2012
05:37 PM
More odd details:
splunk list-forward-server
returns "Active fowrads: None" but "Configured but inactive forwards" lists the foward that I'm using. Events are forwarding without any obvious problem.
It's a bit frustrating to try and trouble-shoot this as it's not obvious what exactly Splunk is doing on this machine.
Good: It's applying the custom sourcetype and forwarding events.
Bad: It's ignoring the host overrirde and the CLI results for the system's state don't seem to reflect reality (?)
I'd be grateful for any suggestions.
06-25-2012
04:57 PM
After rechecking all of the docs and config files carefully I've found nothing obviously wrong. After seeing other people with similar problems I ran
splunk display app
This lists the SplunkForwarder as unconfigured, disabled, and invisible. I've run
splunk enable app SplunkForwarder
The SplunkForwarder is now listed as unconfigured, enabled, and invisible. I've got config files in place and the Web management GUI lists the forwarding rule and says that it's enabled.
Could my problem be that somehow the heavy forwarder features aren't active? That would explain why my rule doesn't run.
06-25-2012
12:45 AM
I've got a few lines more than a comment will accept so I'm answering my own question.
Thanks for the quick answer but it's still not working.
Yes, this is a 4.2.5 heavy forwarder. I installed Splunk, set the license to Forwarder and then moved over and/or tweaked config files.
Understood that the fix only applies to newly indexed data.
The slashes in the props.conf have always worked in the past. Are you saying they need to be like so?
[source::...\action_log.txt]
sourcetype = action
I changed the regex to this:
REGEX = (?i)^(?:[^ ]* ){10}([^ ]+)
I again tried combining the sourcetype and TRANSFORMS statements into one stanza and splitting them into two without any visible change. The two versions:
[source::.../action_log.txt]
sourcetype = action
[source::.../action_log.txt]
TRANSFORMS-action-host=action_host_override
versus
[source::.../action_log.txt]
sourcetype = action
TRANSFORMS-action-host=action_host_override
I do stop and start Splunk each time I modify the config files.
Can you think what else I might be missing here?
(I didn't manage to get the formatting all right above but the text should be accurate.)
06-24-2012
11:36 PM
I'm having some trouble overriding the default host assignment and am hoping for some help. I've tested a regex with rex that looks to work correctly, but it doesn't seem to work from props.conf + transforms.conf.
For background, here's our setup:
We've got a lot of custom logs.
Each log has its original data stored in a SQL database.
A single program on a single machine reads the rows off the SQL database and writes out the individual logs locally. Call this machine LogMachine.
LogMachine has a copy of Splunk acting as a forwarder that sends all of the data into a remote Splunk indexer.
So, all of the logs are written locally to disk on LogMachine and then forwarded on. For policy reasons outside of my control, this part of the setup is fixed: we've got all of the logs on one machine and forward them from there. The problem is that every event gets a host of LogMachine rather than the original host machine. I've looked at the docs and several threads here, and it looks like I'm meant to define a transform and declare it in props.conf.
I'm only testing out one log right now as I'm trying to get the mechanics sorted out before I work on doing the same thing for our other custom logs. Here's a snippet from a custom 'action' log:
[23/Jun/2012:01:50:06 +0000] add appuser FANGLET-AU000-0000000000 0 1336643d3082237d75191d4362fbd941 - 1.0 - 345101-VM3 SRST
[23/Jun/2012:01:51:38 +0000] add appuser FANGLET-US000-0000000000 0 9fb0638027e115dc36a313700ada3f54 - 1.1.4 - 345101-VM3 SRST
[23/Jun/2012:01:51:53 +0000] add appuser FANGLET-AU-EGGPLNT 0 d1128ee5236b17a41825832b890a8091 cdma_spyder 1.0 10 345101-VM3 SRST
[23/Jun/2012:01:52:47 +0000] add appuser FANGLET-AU000-0000000000 0 5d3ded5a3efbc9102c85e319d08c461d - 1.0 - 345101-VM3 SRST
[23/Jun/2012:06:48:04 +0000] add appuser FRINDO-UK-EGGPLNT 0 c9e9d9c86fe1592e3427592c4c4bc6a buzz 1.0 8 345999-VM4 SRST
[23/Jun/2012:06:48:20 +0000] add appuser FANGLET-AU000-0000000000 0 d0e3cc86221875df6485f28e6246bcf8 - 1.0 - 345999-VM4 SRST
[23/Jun/2012:06:48:56 +0000] add appuser FRINDO-US000-0000000000 0 459d7c547c40efa025feb0ea9fd93998 - 1.1.4 - 345999-VM4 SRST
[23/Jun/2012:06:48:57 +0000] add appuser FRINDO-US000-0000000000 0 8321965193395108fe7d85878f8c9a43 - 1.1.4 - SRST-Remote-2 SRST
Here's the transforms.conf text stanza that applies:
# transforms.conf
[action_host_override]
REGEX = (?i)^(?:[^ ]* ){10}(?P<host>[^ ]+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
Here are props.conf stanzas that apply:
# props.conf
[source::.../action_log.txt]
sourcetype = action
[source::.../action_log.txt]
TRANSFORMS-action-host=action_host_override
I don't know if that needs to be in two stanzas or one - I've tried both with no change either way. Namely, the sourcetype is applied and the transform is ignored.
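Another variant I could try is keying props.conf on the sourcetype instead of the source path (just a sketch; I don't know whether it would behave any differently):
# props.conf - hypothetical variant keyed on the sourcetype
[action]
TRANSFORMS-action-host=action_host_override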
For good measure, here's the inputs.conf stanza for this sourcetype:
[monitor://C:\Program Files\SRTS\logs\action_log.txt]
disabled = false
index = srst
followTail = 0
sourcetype = action
I've tested that regex out with rex and it extracts the values that I expect, namely 345101-VM3, 345999-VM4, SRST-Remote-2 and so on. Here's a rex sample, with the time range in the UI set to the past 15 minutes:
sourcetype="action" | rex field=_raw "(?i)^(?:[^ ]* ){10}(?P<host>[^ ]+)"
I'm guessing that I need to change the regex for transforms.conf (?)
I originally had a SplunkUniversalForwarder running but gather from an earlier thread that I need at least a full forwarder. I installed Splunk 4.2.5 and licensed it as a forwarder. Here's the machine setup:
Win 32
Splunk 4.2.5 acting as a forwarder to an indexer running Splunk 4.2.2.
I do not control the indexing machine and the person managing it is always short on time. So, I'd prefer to run the transformation before forwarding.
I'd be extremely grateful for help resolving this problem!
Thanks,
03-01-2012
06:33 PM
Thanks very much for the answer. It sounds like we can give it a go and see what happens.
03-01-2012
03:30 PM
I've got a 4.2.5 server and may be migrating to a 4.2.2 server. I've looked at the docs and they say that there's no way to downgrade but that you can install an earlier version. Does anyone know if it is safe to install 4.2.2 on top of our 4.2.5 data? I'd need to change the server and about eight forwarders.
Thanks in advance for any help
- Tags:
- downgrading
03-01-2012
02:32 PM
We've currently got a stand-alone 64-bit Linux box handling inputs from a collection of Splunk forwarders, all on 4.2.5. $SPLUNK_HOME is still set to its default and all of the data is flowing into the original, default index. We've got about 13 million events, plus a lot of saved searches and dashboards.
We're now looking at how to migrate our system over to an existing 64-bit Linux Splunk setup that's running on 4.2.2. I've been reading the docs and Splunkbase about migration but am still not clear on how this would actually work. I can zip/tar the original data, but how will that work on the new server? I guess what I'm after is a home on the new server for all of our old data, field extractions, etc. that doesn't interfere with the rest of the server. Is that possible, given that the data currently lives in the standard default index?
The server admin is happy to set up a specific index for us on the server farm, but I'm hoping to migrate our past events without export-to-CSV-and-reimport and to keep our existing searches, dashboards, etc.
Thanks for any guidance or links.
- Tags:
- migration
02-17-2012
07:47 PM
The numbers are success/failure (match/no-match) but could just as well be weight and height, temperature and humidity or any other pair of values from a sampling point. I'd like to know the frequency of each type of count overall and over time. Does that make more sense?
02-16-2012
10:11 PM
Thanks for suggesting I show some sample output. I've reworked my fictitious input (I've got a million rows of real input ready to go) to make it easier to see what I'm after:
"2011-09-20 20:32:15",0,0
"2011-08-20 09:23:19",0,3
"2011-09-15 10:56:09",3,3
"2011-08-15 10:20:33",4,7
"2011-05-29 22:54:06",5,8
The two series are
success 0,0,3,4,5
failure 0,3,3,7,8
I'm trying to get counts, avgs and so on of the values in each series. From the samples above, the results ought to be like this:
series value frequency (count)
success 0 2
success 3 1
success 4 1
success 5 1
failure 0 1
failure 3 2
failure 7 1
failure 8 1
I've tried timechart and stats - here's a timechart example:
sourcetype="sample_counts"
| timechart count(success_count), count(failure_count)
The result is a table of data where the counts are identical for both series. I assume that the chart is counting against time. If I add one of the fields like "by success_count", I do no better.
I'm after frequency counts from the two fields/series but don't see how to go about it.
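To make the goal concrete, here's a rough sketch of the kind of search I imagine, using the field names from my extractions (I have no idea whether this is the right approach):
sourcetype="sample_counts"
| eval series="success"
| eval value=success_count
| append [ search sourcetype="sample_counts" | eval series="failure" | eval value=failure_count ]
| stats count AS frequency BY series, value
The intent is one row per series/value pair with a count of how often that value occurs.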
Thanks for your help (and patience.)
02-16-2012
09:57 PM
Thanks for the help, there's not enough room in the comments for an answer so I'll post a new answer.
02-16-2012
03:45 PM
Thanks for the answer. I don't think that my problem is reading the data as I've got field extractions set up for the two different numeric fields now. The problem is that I can't figure out how to get stats out of two different fields on the same series of events. I've tried counting (etc.) by the first field and I get the same results for both series. I've tried getting stats on one field and then piping to a second stats call on the second field and get no results.
Is there a way to get two series on a chart/table out of two fields in the same group of events?
Thanks
02-15-2012
08:50 PM
I've got a series of events with a timestamp and two numbers, like so:
"2011-05-29 22:54:06",68,31
"2011-08-15 10:20:33",143,76
"2011-09-15 10:56:09",63,27
"2011-09-20 20:32:15",0,0
"2011-08-20 09:23:19",0,3
The two numbers represent "success" and "failure" counts for a specific event. What I'd like to be able to sort out are stats for each of the numeric series as well as the ratio between success/failure over time.
Average, min, max, stddev and counts over time for "success".
Average, min, max, stddev and counts over time for "failure"
Success/failure ratio over time.
I've been banging away on this for some time, but I don't seem to be able to extract two numeric series from the same sequence of events. Am I running into a known limit?
http://docs.splunk.com/Documentation/Splunk/4.3/User/ReportOfMultipleDataSeries
If there's no way to do what I'm after with the data in the current format, would I be better off restructuring the data to make it easier to work with using eval()?
"2011-05-29 22:54:06",success,68
"2011-08-15 10:20:33",success,143
"2011-09-15 10:56:09",success,63
"2011-09-20 20:32:15",success,0
"2011-08-20 09:23:19",success,0
"2011-05-29 22:54:06",failure,31
"2011-08-15 10:20:33",failure,76
"2011-09-15 10:56:09",failure,27
"2011-09-20 20:32:15",failure,0
"2011-08-20 09:23:19",failure,3
The example above is a simple case - two fields with numbers I'd like to trend and compare - and is just a starting point. I have more complex requirements but have to get the basics down before tackling anything harder.
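To make that concrete, something along these lines is what I have in mind, assuming the two values extract as fields called success_count and failure_count (the names are mine), though I haven't been able to get anything like it working for both series at once:
sourcetype="sample_counts"
| timechart span=1d avg(success_count) AS avg_success, avg(failure_count) AS avg_failure, sum(success_count) AS total_success, sum(failure_count) AS total_failure
| eval success_failure_ratio=if(total_failure > 0, total_success / total_failure, null())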
Thanks for any help or suggestions.
12-31-2011
10:17 PM
2 Karma
I've got a search where a heatmap really helps highlight values of importance. Unfortunately, the default colors tend to make the 'hottest' values very hard to read.
Is there a way to customize the heatmap colors? I've checked the docs and SplunkBase for information on tweaking the colors without luck. I've searched under 'heatmap' and 'SimpleResultsTable'.
Thanks for any pointers.
12-28-2011
05:56 PM
I've got a search like this against a collection of Web logs:
sourcetype="access_common" | ctable uri_path host
The result is a ctable with URLs down the left, a count column for each of our servers and a grand total column on the right.
How can you format the numbers that are in the cdata cells? (I've been having similar problems with summarizing over table results and xyseries results.) This fieldFormat statement does correctly format the grand total column:
sourcetype="access_common" | ctable uri_path host | fieldFormat TOTAL=tostring(TOTAL,"commas")
So far, I haven't figured out how to address the columns for each specific host. If I use the column name that Splunk produces, no error is generated but there's also no data. I'm guessing the problem is that I don't know the correct identifier for the column(s) I'd like to format. Within the result data, I can see that there are definitions with "tag" elements for the various columns - but these ID values (c='6474' or c='327', etc.) are not stable. Each time you run the search, the IDs are generated from scratch.
Is there a way to address the generated columns in a ctable? For that matter, is there something that works with other, related sorts of extractions like xyseries?
Thanks a lot.
12-28-2011
05:08 PM
Iguinn, thanks very much for the answer - I've tried it out and am still experimenting. I've ended up going with gkanapathy's simple XML solution in this case, but am glad to be poking my toe into the advanced XML features as well.
12-28-2011
05:07 PM
Thanks! I tried this out and it's an easy way to get what I'm after in this case. Much appreciated.
12-27-2011
05:19 PM
1 Karma
I don't mind the sales pitch at all. While my main customer is a huge company in the US, I live in rural Australia. Sydney is about 6 hours away and Melbourne around 11 hours. A big town around here is anything around 9,000 people and up. So, I'm keen on online resources 😉 I would love to attend a Splunk conference if I can find the time and money.
Is anyone planning a Splunk book?
12-27-2011
02:42 PM
2 Karma
I've used Splunk a couple of times now and end up evangelizing for it whenever I can. At the same time, I end up feeling pretty ignorant about Splunk most of the time. I'm often stumbling across features or hearing about them as part of an answer to a question. Case in point: I was just told about xyseries and stumbled across cdata.
Searching through the docs and Splunkbase, the materials and commentary on these features (and others) are often pretty thin. The docs I do find are usually well written and accurate - but thin. Am I missing something obvious? There doesn't seem to be a book about Splunk anywhere and yet there are clearly people that know every nook and cranny of the product.
Is there some maximally efficient way to learn Splunk? I've never found digging through other people's examples to work very well for me. Hopefully, there's a huge manual somewhere that I've managed not to see.
Thanks for any advice or suggestions.
- Tags:
- learning
12-27-2011
02:37 PM
I've just stumbled across ctable, which may even be better than xyseries:
sourcetype="access_common"| ctable uri_path host | sort TOTAL DESC
I end up with a table like I was getting from my original 'chart' search with the addition of a grand total column.
Here again, I can't figure out what to pipe the results to or through to format the numbers in the series columns. This pipe does format the grand total column:
| fieldFormat TOTAL=tostring(TOTAL,"commas")
If I address a data column by the name Splunk gives it, the data is all set to empty.
12-27-2011
02:15 PM
Thanks for the answers and comments. I reversed the host and uri_path arguments to keep the URLs on the left. The output looks better, but I'm not quite there yet:
There are still no commas.
As Iguinn noted, there's now a spare column with no data labeled count.
The zero values are now blank which is better than a blizzard of zeros. I'd like to try out putting in a dash as a placeholder. If that's not possible, I'll stick with the blanks.
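The only idea I've had so far for the dash is fillnull, though I don't know whether it even applies to the empty cells that a ctable produces:
sourcetype="access_common" | ctable uri_path host | fillnull value="-"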
Thanks for any additional help and suggestions - it's very much appreciated.
12-26-2011
07:02 PM
3 Karma
I've built a dashboard that includes five panels that each display values over the previous 24 hours, such as top URLs requested, average processing time, and URLs tabulated by machine. My boss said "Great! Can you make it work for a 5 day period?" I can do so by creating five new saved searches with a different time frame and a new five-day view.
Is there a better way? I've looked around a bit but haven't seen an obvious way to parameterize the time period. An ideal user interface seems like it would include a popup or datetime picker that enables the user to dynamically adjust what period they're interested in reviewing - just like Splunk's standard searches.
Is there a way to do this for a custom dashboard? I'd be grateful for any suggestions or links to examples or documentation.
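For reference, what I'm picturing is something like a simple XML form with a time input shared by the panels. The sketch below is only my guess from the form search docs, with a placeholder panel, and I haven't verified any of it:
<form>
  <label>Traffic overview</label>
  <fieldset>
    <!-- time range picker shared by the panels below -->
    <input type="time" />
  </fieldset>
  <row>
    <chart>
      <title>Top URLs requested</title>
      <searchString>sourcetype="access_common" | top uri_path</searchString>
    </chart>
  </row>
</form>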