pkeller's Topics


Given a list of CIDR ranges such as 10.198.68.132/30, 10.244.18.150/31, 10.48.37.96/24, is there a search that could extract the IPs in each range?

| table cidr_range
| makemv delim="/" cidr_range
| eval IP = mvindex(cidr_range,0)
| eval MASK = mvindex(cidr_range, 1)
| eval IP_SCOPE = case(MASK = 32, IP, MASK = 31, IP . ":" . IP, MASK = 30, IP . ":" . IP . ":" . IP . ":" . IP)
| makemv delim=":" IP_SCOPE

That's kind of a start, but I'm at a loss what to do next. And given a /24, that MASK assignment would look absolutely terrible: I'd need to take each value of IP_SCOPE, increment the last octet by one for every value after the first, and then glue them back together. There must be an easier way.

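One direction I've sketched but not validated: convert each base address to a 32-bit integer, generate the offsets for the block with mvrange, and convert back to dotted quad. Everything below (the makeresults test value and the field names ip_num, offset, addr, expanded_ip) is scaffolding for illustration only, and it assumes the address before the slash is already the start of the block.

| makeresults
| eval cidr_range="10.198.68.132/30"
| eval IP = mvindex(split(cidr_range,"/"),0), MASK = tonumber(mvindex(split(cidr_range,"/"),1))
| eval o = split(IP,".")
| eval ip_num = tonumber(mvindex(o,0))*16777216 + tonumber(mvindex(o,1))*65536 + tonumber(mvindex(o,2))*256 + tonumber(mvindex(o,3))
| eval offset = mvrange(0, pow(2, 32 - MASK))
| mvexpand offset
| eval addr = ip_num + offset
| eval expanded_ip = tostring(floor(addr/16777216)%256) . "." . tostring(floor(addr/65536)%256) . "." . tostring(floor(addr/256)%256) . "." . tostring(addr%256)
| stats list(expanded_ip) AS IP_SCOPE by cidr_range

mvexpand on a /24 only produces 256 rows per range, so this stays manageable for small masks; I have not checked how it behaves for very large blocks.
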
A user is reporting that their indexed JSON data has a 'source' key that is being extracted:

"source": "[{label:'Tree'},{label:'two'},{label:'Three'}]",

When they search their data, they see: [ snipped ]

They'd like the original source kept intact. Is there a transform I can add to my indexers to rename ONLY the JSON source key? The raw JSON looks like:

[ snipped ]
"toggleMode": "click",
"_toggleMode": {
    "desc": "Gets or sets user interaction used for expanding or collapsing any item.",
    "type": "enum",
    "keys": ["click","dblclick"],
    "values": ["click","dblclick"]
},
"source": "[{label:'Tree'},{label:'two'},{label:'Three'}]",
"_source": {
    "desc": "Sets the initial contents. Easier to do at runtime.",
    "type": "css",
    "language": "JavaScript"
},

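For reference, one index-time idea I've been weighing (a sketch only, not a confirmed fix; the sourcetype name below is a placeholder): rewrite the key in _raw with a SEDCMD in props.conf on the indexers or heavy forwarder, so the JSON key no longer collides with Splunk's built-in source field. This does change the indexed raw event, which may or may not be acceptable.

[your_json_sourcetype]
SEDCMD-rename_json_source = s/"source":/"json_source":/g
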
Is there a capability that can be removed from a role to take away the ability to run a remote search via the splunkd port and force the user to use the UI?

When I use the Job Inspector to view the search log of a completed search, I find hundreds of entries tagged SearchOperator:kv that seem to have absolutely nothing to do with the sourcetype or data source of the search; i.e., I'll see hundreds of regexes from EXTRACT-blah-blah stanzas that are explicitly for other sourcetypes. I'm concerned that I have a configuration issue. Lately we've had some complaints about search performance, and I'm wondering why all this unrelated noise is in the search log. Is Splunk actually trying to perform search-time extractions on data that doesn't match the sourcetype the extractions were intended for?

[monitor:///home/paul/training_status/]
whitelist = (\.csv$|\.CSV$)
blacklist = \.filepart$
index = training_index
sourcetype = training_status
crcSalt = <SOURCE>

The file gets updated once per week. In many cases, the file is not being fully consumed. The most recent update missed 19 records (which were consumed the last time the file was updated). splunkd.log shows:

04-06-2017 07:39:19.584 -0700 INFO WatchedFile - Will begin reading at offset=4234 for file='/home/paul/training_status/filename.csv'

So my uneducated guess is that splunkd thinks it has already consumed this data and skips those 19 records before it starts ingesting. How do I prevent this? I thought setting crcSalt = <SOURCE> was supposed to handle this. Thank you.

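One knob I'm looking at, as a sketch rather than a confirmed fix: crcSalt = <SOURCE> only salts the CRC with the file path, and the CRC itself is computed over the first 256 bytes by default, so if the header and first rows of the rewritten file are unchanged, splunkd treats it as the same file and resumes from its saved offset. Raising initCrcLength makes the CRC cover more of the file, which can make a rewritten file look new, provided the early bytes actually differ (with the side effect that unchanged leading records may then be re-indexed).

[monitor:///home/paul/training_status/]
initCrcLength = 1024
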
I'm finding that across all of my 6.5.2 infrastructure, syntax highlighting is working fine, with the exception of my search head cluster members. This is pervasive across both my production and my test clusters. Is this perhaps a known issue? Under user-prefs.conf, search_syntax_highlighting shows a value of 1. Thank you.

I have some folks who want me to ingest Adaxes events that appear in the Windows Event Viewer under Applications and Services Logs -> Adaxes. I'm not quite sure how to construct the inputs. The example below is just a shot in the dark:

[WinEventLog://Applications and Services/Adaxes]

Thank you.

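One thing I intend to try, as a sketch (the channel name and index below are assumptions): the stanza name should be the event log channel name as Windows registers it, not the Event Viewer folder path, and the exact string can be listed on the host with wevtutil.

wevtutil el | findstr /i Adaxes

[WinEventLog://Adaxes]
disabled = 0
index = your_windows_index
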
I want to grant a non-admin role access to | rest /services/authentication/users, but I am not sure what capability that requires. I've granted the role rest_apps_management, but that didn't seem to do the trick.

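Not an answer to which capability is needed, but a comparison that might help narrow it down (role names below are placeholders): dump the capability lists for a role that can run the search and one that can't, and diff them.

| rest /services/authorization/roles splunk_server=local
| search title="admin" OR title="the_nonadmin_role"
| table title capabilities imported_capabilities
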
Looking at the Daily License Usage panel on the "Previous 30 Days" tab under Licensing, I see that the base search is:

index=_internal [ set_local_host ] source=license_usage.log type="RolloverSummary" earliest=-30d@d
| eval _time=_time - 43200
| bin _time span=1d
| stats latest(b) AS b latest(stacksz) AS stacksz by slave, pool, _time
| stats sum(b) AS volumeB max(stacksz) AS stacksz by _time
| eval pctused=round(volumeB/stacksz*100,2)
| timechart span=1d max(pctused) AS "% used" fixedrange=false

What is the idea behind setting _time half a day behind? Thank you.

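For context on the arithmetic: 43200 seconds is exactly 12 hours, so the eval pulls an event stamped at midnight back to noon of the previous day before the 1-day bin is applied. A throwaway check (the date is arbitrary):

| makeresults
| eval _time=strptime("2017-04-07 00:00:00", "%Y-%m-%d %H:%M:%S")
| eval shifted=_time - 43200
| eval rollover_stamp=strftime(_time, "%F %T"), attributed_to=strftime(shifted, "%F %T")
| table rollover_stamp attributed_to
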
My search produces output containing 5 license pool names, their configured pool size, and the volume that was indexed at the time of the rollover. My search:

eventtype=evt-rollover-summary earliest=-1d latest=@d
| stats latest(poolsz) as "PoolSize" sum(b) as "PoolUsage" by pool
| rename pool AS "Pool Name"
| eval "Pool Size (Gb)" = round(PoolSize/1024/1024,0)
| eval "Pool Usage (Gb)" = round(PoolUsage/1024/1024,0)
| fields - PoolSize, PoolUsage

I'd like the column chart to produce just a 'dot' above each "Pool Usage (Gb)" column marking the "Pool Size (Gb)" value, but all I can seem to produce is a solid line for the overlay, with nothing marking where the pool size value is met. I'm not sure what I need to choose in the Overlay section to highlight the associated pool size above the pool usage for a given pool.

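For reference, this is the simple XML form of the overlay setting that the Overlay section writes (the field name with spaces has to be quoted); I have not found an option that renders the overlay as markers only, so this still draws a line.

<option name="charting.chart">column</option>
<option name="charting.chart.overlayFields">"Pool Size (Gb)"</option>
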
The event I want to break on looks like this:

25/Jan/17:10:23:00:069+0000 DEBUG Evaluation of condition [188:FTP Mastering Users] took 0 ms

props.conf looks like this:

LINE_BREAKER = ([\r\n]+)(\d+/\w{3}/\d+:\d{2}:\d{2}:\d{2}:\d{3})

I've also tried this:

LINE_BREAKER = ([\r\n]+)(\d+\/\w{3}\/\d+:\d{2}:\d{2}:\d{2}:\d{3})
TIME_FORMAT = %-m/%b/%y:%H:%M:%S:%3N%z
TRUNCATE = 0

I'm still finding that my indexers are combining every event matching the regex into a single event (until the max-events boundary is reached). I figure I'm getting hung up on the forward slash in the date versus what I have in the regex, but I have not been able to ingest this particular data source accurately. So do I need to go a step further with regards to

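What I'm planning to test next, as a sketch under two assumptions: that the leading field is day/month/year (25/Jan/17, so %d rather than %-m), and that the merging comes from the default SHOULD_LINEMERGE = true recombining lines when the timestamp isn't recognized. The sourcetype name is a placeholder.

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{2}/\w{3}/\d{2}:\d{2}:\d{2}:\d{2}:\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %d/%b/%y:%H:%M:%S:%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
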
Using the Splunk Add-on for Microsoft Cloud Services to pull Azure data, I'm having some difficulty indexing Azure Storage blobs. Does anyone have any suggestions for props.conf on the sourcetype? I'm currently doing this, but events are still being broken in random places:

[mscs:storage:blob]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{
TRUNCATE = 0
KV_MODE = json

I'd welcome any better suggestions. props.conf has been deployed to the heavy forwarder, all indexers, and the search head cluster (although I'm not sure it's necessary there). Splunk versions: HF 6.4.3, indexers 6.4.3, search heads 6.4.3.

When using the app on a heavy forwarder to configure a storage account with a SAS token, we're getting the following error: [ snipped ]

We've tried generating two different SAS keys, and yet we still get the same Splunk rejection. The token is valid, yet Splunk appears to be incapable of handling it. We're trying to first set this up under Configuration -> Add Azure Storage Account. The generated token is formatted like the string below:

?sv=YYYY-MM-DD&sig=xx9xx22%2xX1x0xXxXxx%2xxXXXx0XXXxXxxxXXx00xxXX%3D&se=YYYY-YY-YYTNN%3x00%3c00x&srt=sco&ss=bfqt&sp=rl

I have the Qualys Technology Add-on (TA) for Splunk configured, and it is collecting data from Qualys, but that data, in XML format, is just sitting in TA-QualysCloudPlatform/tmp and I'm not seeing it forwarded to Splunk. So, assuming the add-on has some built-in mechanism for parsing the collected XML and then forwarding it to Splunk:

Does that parsing/forwarding process run at scheduled intervals?
Does it remove the collected files once they've been forwarded?
What process is actually reading and forwarding the collected data?
Is there a way to configure a higher level of logging for this application?

There's nothing very useful being written to the logs under var/log/splunk with regards to this TA (so we didn't find out that our authentication was failing until long after it started). Thank you.

With regards to Azure storage accounts using a SAS key, will the Splunk Add-on for Microsoft Cloud Services support situations where the SAS key is being rotated? I see only a field for a single SAS token.

We'd like to grant a role access to an additional index, but we only want the members to be able to view 2 sourcetypes in that index.

I added a role called foo_filtered.
I added a search filter of "service=Amazon OR service=Ebay" to that role.
I gave that role access to an index named "vendor".

Their existing role foo_mail has access to search the "mail" and "spam" indexes. When I add the foo_filtered role to their group, ALL searches (regardless of index) apply the filter. I only want the filter applied when they're searching the "vendor" index. Is this even possible? Thank you.

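One idea I want to test, strictly a sketch (the names below are from my example, and I haven't confirmed how it behaves when filters from multiple roles are combined): make the search filter itself index-aware in authorize.conf, so it only constrains events coming from the vendor index.

[role_foo_filtered]
srchIndexesAllowed = vendor
srchFilter = (index=vendor AND (service=Amazon OR service=Ebay)) OR (index!=vendor)
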
Running Splunk 6.4.3 in a search head cluster using AD/LDAP authentication. A user contacted me about adding some capabilities, but when I looked under Access Controls -> Users, I could not find the person's ID. However, when I search audit.log, I see that he has logged in. Additionally, when I search:

| rest /services/admin/users
| search roles=his_role
| stats values(realname)

I see a list of about 365 names (alphabetical by first name), which scrolls to the letter R or sometimes S but seems to leave off the names towards the end of the alphabet. So I'm wondering if there's some sort of limit on the number of names that the rest call or the UI Access Controls module will return. As I mentioned at the top, the person, whose first name starts with "V", can log in; he's just not referenceable, so I can't look at his inherited roles and/or capabilities. Thank you.

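One check I still want to run (the login name below is a placeholder): query the endpoint for the user directly instead of relying on the truncated values() display, and restrict the rest call to the local instance so results from search peers don't get mixed in.

| rest /services/admin/users splunk_server=local
| search title="his_login_id"
| table title realname roles
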
I have 3 environments:

Laptop - Splunk 6.5.0
Test - Splunk 6.4.3
Prod - Splunk 6.3.2

In the first two environments, I am able to pull in a CSV nightly and grab the timestamp from the first comma-separated field (in epoch form). My props.conf:

[status_csv]
HEADER_FIELD_LINE_NUMBER = 1
INDEXED_EXTRACTIONS = csv
TIME_FORMAT = %s
TIMESTAMP_FIELDS = collection_time
MAX_TIMESTAMP_LOOKAHEAD = 11
KV_MODE = none
SHOULD_LINEMERGE = false

Sample data:

collection_time,src_host,APstat,def_date,def_version,foo,bar,foobar
1476691203,xxx-osx1010-3,On,2016-10-16 00:00:00.000,2016-10-16 rev. 022,No,local,Not installed

And yet when I push these configs to our prod indexer cluster, the extractions are created, but Splunk always stamps _time with the time the event was indexed. (In both the free Splunk environment on my laptop and our UAT environment, which is similar to prod, just smaller, and now running 6.4.3, the timestamp is correctly extracted from the 'collection_time' field in the CSV.) Either something is overriding the props I've pushed, or something in the configuration is wrong.

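Two things I plan to check on the prod side. First, whether another stanza is winning over the one I pushed (run on an indexer; the path assumes a default install location):

$SPLUNK_HOME/bin/splunk btool props list status_csv --debug

Second, since INDEXED_EXTRACTIONS for structured files is applied where the file is read, if prod monitors the CSV on a universal forwarder then the [status_csv] stanza has to be present on that forwarder too, not only on the indexer cluster.
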
Is there a logging configuration option that will have splunkd log when poorly written field extractions are impacting search performance? (Or is there some other way to use a Splunk search to identify extraction-related performance issues?)

Looking for a way to report on whether a lookup table is exported to all apps by using a rest search. Assuming the lookup file lives under etc/apps/FOO/lookups, is there a curl endpoint I can use to display the lookup's metadata? Thx.

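What I've sketched so far (the app name FOO and the filename my_lookup.csv are placeholders): the lookup-table-files endpoint exposes the ACL, and eai:acl.sharing = global should mean the file is exported to all apps.

| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| search title="my_lookup.csv"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

The same object should be reachable with curl, e.g.:

curl -k -u admin https://localhost:8089/servicesNS/nobody/FOO/data/lookup-table-files/my_lookup.csv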