Activity Feed
- Karma Re: Distribute authorization.conf in an App? for MHibbin. 06-05-2020 12:46 AM
- Got Karma for Distribute authorization.conf in an App?. 06-05-2020 12:46 AM
- Karma Re: Field extraction into multivalue field for Ron_Naken. 06-05-2020 12:45 AM
- Karma Re: splunk tcp output format... for ziegfried. 06-05-2020 12:45 AM
- Karma Re: Eval and Workflows... for southeringtonp. 06-05-2020 12:45 AM
- Got Karma for Lookups - using them to replace the host field.... 06-05-2020 12:45 AM
- Got Karma for Multiline XML extraction.... 06-05-2020 12:45 AM
- Got Karma for Multiline XML extraction.... 06-05-2020 12:45 AM
- Got Karma for Multiline XML extraction.... 06-05-2020 12:45 AM
- Got Karma for Converting a field value from Hexadecimal to Decimal.... 06-05-2020 12:45 AM
- Got Karma for Forwarding select data in my environment.. 06-05-2020 12:45 AM
- Got Karma for Forwarding select data in my environment.. 06-05-2020 12:45 AM
- Got Karma for Forwarding select data in my environment.. 06-05-2020 12:45 AM
- Got Karma for Can I "/dev/null" a sourcetype?. 06-05-2020 12:45 AM
- Got Karma for Can I "/dev/null" a sourcetype?. 06-05-2020 12:45 AM
- Got Karma for Field extraction into multivalue field. 06-05-2020 12:45 AM
- Got Karma for Field extraction into multivalue field. 06-05-2020 12:45 AM
- Got Karma for LDAP and Search Head Pooling - Role Mapping. 06-05-2020 12:45 AM
- Got Karma for splunk tcp output format.... 06-05-2020 12:45 AM
- Got Karma for Eval and Workflows.... 06-05-2020 12:45 AM
Topics I've Started
01-09-2014
03:07 PM
I'm trying to do some work with qualys data. There are events that describe "asset groups", with a bunch of fields, one of which is "scanips", a comma-separated list of IP addresses, something like:
asset_group_id=1376498 asset_group_title="San Francisco Assets" scanips=10.10.1.2,10.10.1.3,10.10.5.2
I'd like to process that data and use outputlookup to create a lookup table that would be something like
ip,asset_group
10.10.1.2,San Francisco Assets
10.10.1.3,San Francisco Assets
10.10.5.2,San Francisco Assets
I'd like to do this all within splunk, but can't figure out how. Any thoughts?
Thanks
Steve
- Tags:
- lookup
- outputlookup
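A search along these lines might get there (just a sketch - the sourcetype name and lookup file name are guesses, not from my config):
sourcetype=qualys:asset_group
| makemv delim="," scanips
| mvexpand scanips
| rename scanips AS ip, asset_group_title AS asset_group
| table ip asset_group
| outputlookup asset_group_ips.csv
Each event gets its scanips split into a multivalue field, expanded to one row per IP, and then written out with outputlookup.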
03-13-2013
01:42 PM
I'm having a heck of a time getting Shuttl to run with my S3 buckets. It appears that the problem is that my AWS secret key has "/" in it, and that Shuttl is using the basic-auth accesskey:secretkey@s3bucket.s3.amazonaws.com/ URL scheme. Anybody run into this issue? Anybody got a fix?
- Tags:
- Shuttl
10-31-2012
08:24 AM
This one eventually cleared itself - I'm chalking it up to browser cache and perhaps testing for the condition in the middle of an upgrade.
03-14-2012
10:53 AM
So last week, due to some runaway sources, I had a set of license violations, and my search got shut down (I HATE THAT, BTW - give me a different mechanism than telling the people I'm trying to drive to adopt the service that it's down). I got it reset, but a few of my users are still reporting the "litsearch exceeded your license" message. Any ideas why some users would get that but not others?
- Tags:
- license-violation
12-14-2011
08:40 AM
Not the answer I wanted :), but definitely answered the q. Thanks
12-13-2011
03:39 PM
1 Karma
I'm trying to set up some roles for a number of distributed search "users" on my indexing farm, using local authentication. I am trying to set up the role via deployment server as part of an app I call "IDX" that is the core app for all of my index servers. I've done that, but now when I try to change a local user's role (via the CLI), it appears that the role is not being recognized.
So, my question is this - can authorization.conf be deployed in an App, or do I need to have it be in $SPLUNK_HOME/etc/system/local on each of the indexers?
Thanks
Steve
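For context, a role definition deployed in an app would look roughly like this (the role name and settings here are illustrative, and note the file roles actually live in is authorize.conf):
deployment-apps/IDX/local/authorize.conf:
[role_dist_search]
importRoles = user
srchIndexesAllowed = *
srchIndexesDefault = main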
08-31-2011
12:27 PM
Hi -
I'm embarking on a re-organization of my splunk environment. I've come into possession of a couple of big x86 boxes (4 socket, 8 core, 256GB RAM), and given that I've heard multiple times that indexing is better distributed horizontally across smaller boxes, it leads me to wonder this: If I replace two of my 3 indexers with these boxes, can splunk take advantage of the hardware? Or am I better off partitioning these boxes via virtualization into a number of smaller boxes (but still using local disk resources, etc.)?
Thanks
Steve
- Tags:
- indexer
07-20-2011
03:11 PM
1 Karma
I'm trying to use an approach like the one described in "Search Head Pooling to share auth info" to set up search head pooling. I'm thinking that I can keep my LDAP config in system/local (with the respective hashes), and then just set up a separate app for the role mappings. I just tested this by dumping the role mapping into the search app on the shared storage, and it seems to work fine.
However, my question is, if I basically decide to manage all role mapping in a separate app, what's the easiest way to tell both search heads to re-read that (other than splunk restart)? I don't see anything about authentication in the app.conf triggers setup...
Thanks
Steve
- Tags:
- ldap
- search-head
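For reference, a rough sketch of what that separate role-mapping app could contain (the app name, the LDAP strategy name "corp_ldap", and the group names are all made up for illustration):
apps/ldap_rolemap/local/authentication.conf:
[roleMap_corp_ldap]
admin = splunk-admins
power = splunk-power
user = splunk-users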
02-17-2011
12:52 AM
2 Karma
So I want to do a general field extraction of IP addresses for a sourcetype that may have them in multiple places in a given event, and may have multiples of them. Something as simple as an inline field extraction like this:
EXTRACT-mv_ip = (?<mv_ip>\d+\.\d+\.\d+\.\d+)
However, I'd like all occurrences to be stored in the mv_ip field as a multi value field, and I'd like to be able to use that multi value field in lookups. For some reason, I'm not understanding the documentation on field extractions enough to figure out how to do this.
Example data:
src=10.38.10.89 dst=10.188.10.50 src_port=45045 dst_port=53 src-xlated ip=10.38.12.89 port=45045 dst-xlated ip=10.188.12.50
How I'd like the mv_ip value to end up:
mv_ip = 10.38.10.89,10.188.10.50,10.38.12.89,10.188.12.50
Is this doable? If so, what am I missing?
thanks,
Steve
- Tags:
- field-extraction
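One approach worth sketching (untested against this data; the sourcetype name is a placeholder) is a transforms.conf report extraction with MV_ADD, so every match of the regex gets appended to mv_ip rather than just the first:
props.conf:
[my_firewall_sourcetype]
REPORT-mv_ip = extract_all_ips
transforms.conf:
[extract_all_ips]
REGEX = (\d+\.\d+\.\d+\.\d+)
FORMAT = mv_ip::$1
MV_ADD = true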
02-10-2011
10:33 PM
I just had splunk add the timestamp rather than relying on the time/timezone of the source.
01-27-2011
07:34 PM
2 Karma
So I need to temporarily free up some indexing license. Rather than tweaking my deployment, I was hoping I could just route a few sourcetypes to /dev/null for a little while. Is there a way I can tell splunk to not index a sourcetype or a list of sourcetypes?
Thanks
Steve
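For the record, the usual props/transforms null-routing recipe looks roughly like this (the sourcetype name is a placeholder):
props.conf:
[noisy_sourcetype]
TRANSFORMS-null = setnull
transforms.conf:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue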
12-22-2010
12:44 AM
1 Karma
So I've created a couple of workflow actions for interfacing with service-now, one of which looks up the host in our CMDB. Unfortunately, we have a number of devices out there that don't reverse resolve (yeah, I know, we're working on it :)), so sometimes the host field is an IP address. Since the CMDB query in service-now doesn't allow you to do "or" in the URL-based queries (that I've been able to find), I've got two different workflows, one which executes the query with the argument:
Name=$host$
and the other with:
ip_address=$host$
I'd like to have a field created at search time, ala the following eval:
eval cmdbent=if(match(host,"\d+.\d+.\d+.\d+"), "ip_address", "name") . "=" . host
So really, I'd like to create a field containing "ip_address" if the host value matches the \d+.\d+.\d+.\d+ regex, but "name" otherwise. Can this be done with a regular transform instead of eval?
Thanks
Steve
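As one possible alternative (a calculated field in props.conf rather than a transform - sketched here with a placeholder sourcetype, and only applicable on versions that support EVAL- in props.conf):
props.conf:
[cmdb_sourcetype]
EVAL-cmdbent = if(match(host, "^\d+\.\d+\.\d+\.\d+$"), "ip_address", "name") . "=" . host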
10-22-2010
06:37 PM
The problem with the syslog output is that it just dumps the raw event, no metadata. I need some of the metadata from the cooked event stream.
10-22-2010
05:41 PM
1 Karma
If I wanted to write a receiver for splunk data (i.e. have my index server(s) forward data via tcpout in outputs.conf), is the format for splunk2splunk traffic published anywhere?
I know it seems like an obscure need, but nonetheless, I've got it. 🙂
Thanks
Steve
- Tags:
- tcp
10-01-2010
05:59 PM
I'm not setting syslogSourceType anywhere in my config. I did some tcpdumps on the server, and it looks like all the syslog output is getting truncated, but not at a consistent length (well under the 1.5K MTU). I haven't changed the default truncation value.
09-30-2010
10:23 PM
So I've got forwarding set up so that splunk data from certain systems in my environment goes to a 3rd party, in addition to going through splunk. However, it looks like the host field doesn't get included in that forward, so the 3rd party just sees all this data coming from one system, with no differentiation of where the events originally came from (they're coming from a light forwarder, through an intermediate forwarding layer, into the indexing layer, and being forwarded at the index layer via syslog).
Is the host field being dropped? Is there a way to add it back in?
- Tags:
- forwarding
- syslog
09-20-2010
10:41 PM
Ok - so I finally was able to get this in place (change windows and firewalls, oh my), and it works, but the data being sent to the 3rd party no longer has the original host header in it. Is that due to truncation, or is there something I can do to specifically make sure the host goes along with the data?
08-13-2010
03:46 PM
If I set up the alternate port, as you suggested, would the data still be indexed? Do I, in that case, need to specify:
indexAndForward = true
for the "special" splunktcp port?
08-12-2010
09:56 PM
3 Karma
I'm trying to take data from specific systems and, after indexing it, forward it to a third party for other analysis. I have data coming from light forwarders on the systems in question, and going through a forwarding layer before it gets to the indexing hosts. I applied, basically, the recipe as defined in the documentation on the indexers:
props.conf:
[host::*WCA*]
TRANSFORMS-routing = send_to_syslog
transforms.conf (note, splunk complains about not having a REGEX entry - I've tried it with .* as the REGEX to no avail):
[send_to_syslog]
DEST_KEY = _SYSLOG_ROUTING
FORMAT = SW_syslog_group
outputs.conf:
[syslog:SW_syslog_group]
disabled=false
server = 10.20.30.40:514
I'm wondering if I need to apply the props.conf and the transforms.conf at the forwarder layer, and if I do that, will it still properly index, or will it forward out prior to indexing?
thanks
Steve
- Tags:
- syslog
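For completeness, here's the same transform spelled out with a catch-all single-character REGEX (i.e. REGEX = . rather than .*), which is what the routing examples I've seen use:
transforms.conf:
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = SW_syslog_group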
07-17-2010
01:41 PM
I tried both GMT and PDT in the props.conf (which I place on the forwarders). I should have said that I was making a supposition that it was happening in both places, not that I had confirmed it...
07-17-2010
01:40 PM
Rather than messing more with timezones in props, etc, and realizing that I might have other problems with syslog based timestamps, I decided to just have splunk create the timestamp at arrival time. This solved the problem (and probably just made the rest of my syslog data more consistent. :)).
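Concretely, that amounted to something like this in props.conf (the stanza name is illustrative for a udp/514 input):
[source::udp:514]
DATETIME_CONFIG = CURRENT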
07-16-2010
10:12 PM
My deployment has a bunch of geo-based forwarders, which accept splunk connections, as well as udp/514 syslog for devices that I can't put a light forwarder on. I've run into a problem with an IBM DataPower XI50 device, which sends strings with timestamp info like this:
Jul 16 15:04:42
No time zone info, etc. The forwarder seems to assume that it's in PDT (which it is) and adds 7 hours, but again stamps it without the timezone (based on what I see from "Show source"). So when it gets to the indexer, it converts it yet again, and suddenly my log entries are 7 hours in the future.
Any ideas how I can fix this (and yes, I have tried to define a "GMT for everything" standard, to no avail)...
Thanks
Steve
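For reference, the sort of props.conf timezone override I've been attempting looks like this (the host pattern and zone name are illustrative):
[host::datapower*]
TZ = US/Pacific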