pkeller's Topics

I often find it difficult to go back and find one of my previously asked questions if I can't remember the exact subject line. Any suggestions? Or, if an interface for that functionality already exists, could you make it more readily visible? Thank you.

We're having issues on some members of our 6.2 platform where issuing a splunk restart fails to complete because the mongod process does not exit properly. Is mongod a necessary component of 6.2 if we're not running Hunk, and if not, should it be disabled? Any workarounds?
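
If mongod turns out to be unnecessary for our setup, I'm guessing the relevant knob is the KV store stanza in server.conf, something like the sketch below (untested on our side; I'd welcome confirmation that this is safe when no apps depend on the KV store):

# server.conf on each affected instance
# assumption: the mongod process here is Splunk's app KV store
[kvstore]
disabled = true

followed by a restart of that instance.
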
I'm using Splunk 6.1.3 with a deployment server, and I distribute indexes.conf to my indexers via an indexer serverclass. When a deployment client receives the updated indexes.conf, it creates the new directories, e.g. /indexes/test/colddb and /indexes/test/db. The problem is that while the owner of those directories is the right user (splunk), the group is always root, and the index is not writable until I manually change the group to the default group that splunk is associated with. This is a NIS+ environment and I'm wondering if that's causing the problem. I have added the splunk user to /etc/passwd and the associated group to /etc/group, but the problem persists.

total 16
drwx------ 4 splunk root    4096 Aug 26 19:23 .
drwxr-xr-x 7 splunk service 4096 Aug 26 19:23 ..
drwx------ 2 splunk root    4096 Aug 26 19:23 colddb
drwx------ 3 splunk root    4096 Aug 26 19:23 db

Is there an issue with Splunk being able to operate in a NIS+ environment? Thank you.
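
As a stopgap I've been running something like this after each deployment (the group name service is just what the parent directory's listing shows; substitute whatever the splunk user's primary group should be):

# temporary workaround, not a fix for the root cause:
# restore the expected group on the freshly created index directories
chgrp -R service /indexes/test
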
I've noticed that another Splunk environment at my site has set up what appear to be undocumented stanzas in props.conf:

[foo:BAR]
REPORT-fooregex0 = fooregex0

Assume that foo is the sourcetype and BAR is a pattern that appears in events logged with the foo sourcetype. In props.conf.spec and props.conf.example, and in numerous web searches, I have failed to find any reference to this stanza type. Can someone give me some insight or point me to documentation on it? Thanks very much.
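
For comparison, the only stanza forms I can find in props.conf.spec are the plain sourcetype, source::, and host:: varieties, e.g.:

# documented stanza types, per props.conf.spec
[foo]                    # applies to events of sourcetype foo
REPORT-fooregex0 = fooregex0

[source::/var/log/foo*]  # applies to events from matching sources
[host::webfarm*]         # applies to events from matching hosts

so the [foo:BAR] form doesn't match any of these.
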
This may be a long-winded question. After upgrading one of our search head pools from 4.3.6 to Splunk 6.0, I'm finding that I have to make XML changes to many of the forms that worked fine in 4.3.6. This could be due to my having written poorly formatted code in the earlier version, or it could be due to a change in functionality. Example: the form's search block contains this:

|inputlookup oradb.csv | table host,SID,RAC_INSTANCE,LIFECYCLE | where (host LIKE "$s_hostname$") AND (SID LIKE "$s_instance$")

The XML that applied to this template looked like this:

</fieldset>
<input type="text" token="s_hostname">
  <label>Enter a hostname or pattern</label>
  <default></default>
  <suffix>%</suffix>
</input>
</fieldset>
<fieldset>
<input type="text" token="s_instance">
  <label>DB Instance Name</label>
  <default></default>
  <suffix>%</suffix>
</input>
</fieldset>

To get this to work in 6.0, I had to make the following changes:

</fieldset>
<input type="text" token="s_hostname">
  <label>Enter a hostname or pattern</label>
  <!-- I had to add the '%' in the default block, which worked without it in 4.3.6 -->
  <default>%</default>
  <suffix>%</suffix>
</input>
<!-- I removed the intervening end and beginning fieldset tags -->
<input type="text" token="s_instance">
  <label>DB Instance Name</label>
  <!-- I had to add the '%' in the default block, which worked without it in 4.3.6 -->
  <default>%</default>
  <suffix>%</suffix>
</input>
</fieldset>

Also, in Splunk 6 the default values are now visible in the input boxes of the form; I assume that is expected behavior. If anyone has a better suggestion for formatting this XML, I'd welcome it.
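
For completeness, here is the whole fieldset as it now stands in 6.0 (assuming the stray leading </fieldset> above just closes an earlier section of the form):

<fieldset>
  <input type="text" token="s_hostname">
    <label>Enter a hostname or pattern</label>
    <default>%</default>
    <suffix>%</suffix>
  </input>
  <input type="text" token="s_instance">
    <label>DB Instance Name</label>
    <default>%</default>
    <suffix>%</suffix>
  </input>
</fieldset>
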
I'm trying to add some additional information to the output of an event correlation:

index=compute source="*messages" "DOWN" AND [search index=storage source="*messages" ERROR_STRING | rename _time as Storage_Event_Time | rename Client AS host | fields host, Storage_Event_Time ] | table Storage_Event_TIme,_time,host

This correlation works fine WITHOUT adding the Storage_Event_Time field to the fields portion of the subsearch (otherwise I get the blue bar saying "no matching fields exist"). I can even run the entire search without error if I just remove Storage_Event_Time from the table command. I'm inclined to believe that I can only pass a single field (and a common one at that) out of the subsearch. Apologies if this is not clear.
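
One thing I notice while writing this up: the table clause above reads Storage_Event_TIme (capital I) while the subsearch renames to Storage_Event_Time, and field names are case-sensitive, so that alone could produce the "no matching fields exist" bar. Separately, an alternative I've been considering that sidesteps passing extra fields out of the subsearch is a join on host (a sketch, untested):

index=compute source="*messages" "DOWN"
| join host [ search index=storage source="*messages" ERROR_STRING
    | rename _time AS Storage_Event_Time, Client AS host
    | fields host, Storage_Event_Time ]
| table Storage_Event_Time, _time, host
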
We have a search head pool that shares etc/apps from a NAS export, /pool/etc/apps. The documentation indicates that if you want to run a scheduled search through a script, you should put the script in $SPLUNK_HOME/bin/scripts, but $SPLUNK_HOME/bin is not included in the pooling. So, is there an appropriate location under /pool/etc/apps where the scheduled job should look for the script?
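
What I'm hoping is acceptable is to keep the script inside an app's bin directory, since etc/apps is what the pool shares; the app, stanza, and script names below are just placeholders:

/pool/etc/apps/my_alert_app/bin/scripts/my_alert_script.sh

# savedsearches.conf in the same pooled app
[my scheduled search]
action.script = 1
action.script.filename = my_alert_script.sh

Can anyone confirm whether the alert framework will actually look there when bin isn't pooled?
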
The basic search is:

host="*" | stats count(linecount) as count by host,sysadmin | where count > 1000000 | sort -count | sendemail to=recipient@domain.com format=html sendresults=true subject=search_results

OK, this works: any host that has sent more than a million messages to Splunk is captured in the search, and the sysadmin owner of each host is a field obtained from a working lookup. So, in the case of this search reporting three hosts:

host1 sysadmin1 2000000
host2 sysadmin2 1700000
host3 sysadmin1 1500000

I'd like to somehow grab those results and send only the output relevant to sysadmin1 to sysadmin1, and likewise for sysadmin2, with something like:

subject="noisy syslog volume for hosts owned by $sysadmin$"

(just a guess as to altering the subject line on a per-recipient basis). So: how do I iterate over the unique sysadmins and send each one only the results relevant to them? Thanks very much.
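
My current thinking is to drive one sendemail per sysadmin with map, roughly as below (a sketch, untested; the address scheme $sysadmin$@domain.com is just a placeholder for however addresses are resolved):

host="*" | stats count(linecount) as count by host,sysadmin
| where count > 1000000
| dedup sysadmin | fields sysadmin
| map maxsearches=20 search="search host=\"*\"
    | stats count(linecount) as count by host,sysadmin
    | where count > 1000000
    | search sysadmin=\"$sysadmin$\"
    | sort -count
    | sendemail to=\"$sysadmin$@domain.com\" format=html sendresults=true
        subject=\"noisy syslog volume for hosts owned by $sysadmin$\""

Is there a less brute-force way than re-running the base search inside map?
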
We have our search heads pooled in batches of 4. I installed Google Maps into the shared application directory and have restarted all search heads, but I'm encountering an error when trying to use the geoip command:

Streamed search execute failed because: Error in 'script': Getinfo probe failed for external search command 'geoip'

Should I be installing this app on all of our indexers? Or should I install it on our deployment server and have one of our serverClasses handle getting it to our search heads and/or indexers? I see this error referenced in quite a few places on SplunkBase, but I'm not really finding a situation similar to ours to compare against. Thank you.
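
If the deployment-server route is the right one, I assume serverclass.conf would look roughly like this (the class name, app directory name, and whitelist pattern are just examples):

# serverclass.conf on the deployment server
[serverClass:search_heads]
whitelist.0 = sh*.example.com

[serverClass:search_heads:app:GoogleMaps]
restartSplunkd = true

though with pooled search heads I'm not sure deployment clients and a shared etc/apps mix well.
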
I have a lookup table that includes fields for hostname and subnet. I can easily view all hosts in a subnet by searching:

|inputlookup subnet_map.csv | where subnet LIKE "111.111.111.111%" | table hostname,subnet

I'd prefer to grab the subnet field from a search like:

|inputlookup subnet_map.csv | where hostname LIKE "my_host%" | table subnet

and push it into the where subnet LIKE "<subnet>" clause, so that I get a list of all hosts in the matching subnet from just a single hostname. In essence, what I need to do is take the output of one inputlookup request and pipe it into a second one. I apologize if I'm wording this poorly. Thank you.
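
The closest I've come is feeding the first lookup's subnet through a subsearch, along these lines (untested):

| inputlookup subnet_map.csv
| search [ | inputlookup subnet_map.csv
    | search hostname="my_host*"
    | fields subnet ]
| table hostname,subnet

i.e. the subsearch returns subnet="..." pairs, which then filter the outer inputlookup. Does that seem like a sane approach?
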
If I have a lookup table formatted like this:

lookup_host,os
host1,linux
host2,linux
host3,sunos

and say I'm sending data to source=/data/unix/syslog.log, then in my search I can do something like:

source=/data/unix/syslog.log os=linux

and that correctly shows me everything received from host1 and host2. But I'd like to be able to use the lookup table to tell me who is NOT sending me data, and I'm not quite sure how to format a search to do that. Thanks very much; hopefully I made this fairly clear. Paul Keller
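
What I've sketched so far is subtracting the hosts that did report from the lookup, along these lines (untested; the lookup file name and the 24-hour window are placeholders):

| inputlookup my_hosts.csv
| rename lookup_host AS host
| search NOT [ search source=/data/unix/syslog.log earliest=-24h
    | stats count by host
    | fields host ]
| table host,os
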
http://docs.splunk.com/Documentation/Splunk/latest/Data/overridedefaulthostassignments

I've been trying to set up what should be a very simple regex to extract the hostname from logs that are formatted like this:

Apr 20 10:10:10 host=hostname
Apr 20 10:10:11 host=hostname-b

After spending most of the day on this, I decided to try exactly what is listed in the Example section of the link above. Using the exact same event data, the exact same props.conf, and the exact same transforms.conf, my instance of Splunk running on my MacOS laptop is still setting the host field to my local hostname. Thanks very much.
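
For reference, here is the props/transforms pair I'm testing, modeled directly on the docs example (the stanza names and source path are mine):

# transforms.conf
[set_host_from_event]
REGEX = host=(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

# props.conf
[source::/var/log/mytest.log]
TRANSFORMS-sethost = set_host_from_event

As far as I can tell this mirrors the documented example, yet the host field is still my laptop's hostname.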