pkeller's Topics


We've recently seen a significant spike in memory utilization on our search heads ... Looking at the files opened by mongod I'm seeing info like this:

-rw------- 1 root root 16777216 Dec 7 16:37 s_SA-AccyUyi@longstring@.0
-rw------- 1 root root 33554432 Sep 15 20:58 s_SA-AccyUyi@longstring@.1
-rw------- 1 root root 536608768 Nov 16 02:36 s_SA-AccyUyi@longstring@.10
-rw------- 1 root root 536608768 Sep 19 23:28 s_SA-AccyUyi@longstring@.11
-rw------- 1 root root 536608768 Dec 7 16:38 s_SA-AccyUyi@longstring@.12
-rw------- 1 root root 67108864 Sep 15 20:58 s_SA-AccyUyi@longstring@.2
-rw------- 1 root root 134217728 Dec 7 16:29 s_SA-AccyUyi@longstring@.3
-rw------- 1 root root 268435456 Dec 7 16:38 s_SA-AccyUyi@longstring@.4
-rw------- 1 root root 536608768 Sep 19 23:28 s_SA-AccyUyi@longstring@.5
-rw------- 1 root root 536608768 Dec 7 16:38 s_SA-AccyUyi@longstring@.6
-rw------- 1 root root 536608768 Dec 7 16:38 s_SA-AccyUyi@longstring@.7
-rw------- 1 root root 536608768 Sep 19 23:32 s_SA-AccyUyi@longstring@.8
-rw------- 1 root root 536608768 Sep 19 23:31 s_SA-AccyUyi@longstring@.9

Any idea why there are so many versions of some of these? ... Shouldn't there typically only be a ".0" and a ".ns" file for each collection?

Thank you
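For reference, a quick way to gauge how much disk each collection's data files are actually consuming, and to get the KV store's own status report, is shown below. This is only a sketch and assumes the default KV store location under $SPLUNK_HOME.

# Size of each collection's data files on disk (default KV store path assumed)
du -sh $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/*

# Overall KV store / mongod status from the Splunk CLI
$SPLUNK_HOME/bin/splunk show kvstore-status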
Having trouble finding documentation on these two parameters. But, say column 3 is an epoch timestamp, would input_timestamp_format be a single 's'? i.e.:

input_timestamp_format = s

Thank you.
22:13:06.901Z INFO my-portal: blah : blah - success tracker: { "trackId": "foo", "hashedAccountId": "bar", "ip": "127.0.0.1", "queryUrl": "http://my.domain.com/aluminum/batPreferences/txm", "queryMethod": "GET", "elapsed": 91.561 }

The nodejs output looks kinda like what's shown above. Any suggestions for parsing this so that I can view the syntax-highlighted JSON would be appreciated. I've tried a transform to reassign the INFO line to a separate sourcetype, but that doesn't change the fact that I only see the raw text in my search.

props.conf:

[nodejs:all]
KV_MODE = json
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TRUNCATE = 0
LINE_BREAKER = ([\r\n]+)(\d{2}:\d{2}:\d{2})
TRANSFORMS-strip_INFO = strip_INFO

transforms.conf:

[strip_INFO]
REGEX = ^\d{2}:\d{2}:\d{2}.\d{3}Z
FORMAT = sourcetype::nodejs_out
DEST_KEY = MetaData:Sourcetype

Clearly this doesn't work, but I'm a bit stumped. Thank you.
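One approach worth testing (a sketch, not an official add-on config) is to strip the non-JSON preamble at index time with SEDCMD, so the remaining _raw is pure JSON and KV_MODE = json can both extract and syntax-highlight it. Note that this permanently rewrites _raw for the sourcetype on the parsing tier.

# props.conf on the indexers or heavy forwarder
[nodejs:all]
KV_MODE = json
# remove everything before the first "{" so the indexed event is valid JSON
SEDCMD-strip_preamble = s/^[^{]+//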
I've been working on automating our UF upgrade process and have found what appears to be an issue with a deprecated key, sslKeysfilePassword ... When I upgrade an old 6.1 or 6.2 host beyond Splunk 6.5, I've found that while the UF can still maintain forwarding over SSL to our indexers, it can no longer handshake with our deployment server. After spending most of my week on this, I've come across a workaround: prior to performing the upgrade ... ( stopping splunk; tar -zxf blah ), if I remove the deprecated key "sslKeysfilePassword" from etc/system/local/server.conf, the handshake problem is no longer an issue. The odd thing here is that this is the only thing that had to be changed to rectify the issue, but my understanding of a deprecated object is that it would just be ignored. That doesn't appear to be the case in this instance. So, this isn't really a question per se, but has anyone ever run up against this before?
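For what it's worth, the workaround described above boils down to a few shell steps. This is only a sketch: GNU sed, a default $SPLUNK_HOME, and the install tarball name/target directory are all assumptions to adapt to your environment.

$SPLUNK_HOME/bin/splunk stop
# drop the deprecated key before laying down the new version
sed -i '/^sslKeysfilePassword/d' $SPLUNK_HOME/etc/system/local/server.conf
tar -zxf splunkforwarder-<new-version>-Linux-x86_64.tgz -C /opt
$SPLUNK_HOME/bin/splunk start --accept-license --answer-yes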
In the Incident Review panel, we select a Notable Event, click on Edit Selected, and a form pops up. I chose the first dropdown, selected "ACKIN", clicked on Save, and was returned:

Unable to change 1 events: transition from New to ACKIN is not allowed (1 event)

The user has both the "edit_reviewstatuses" and "edit_notable_events" capabilities, yet the error is still returned.
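As far as I know, Enterprise Security gates each status-to-status move with its own per-transition capability (named along the lines of transition_reviewstatus-X_to_Y) on top of edit_notable_events, so it's worth dumping what the role actually holds. A sketch using the REST endpoint (replace <role_name> with the user's role):

| rest /services/authorization/roles splunk_server=local
| search title="<role_name>"
| fields title capabilities imported_capabilities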
The raw data looks like:

... blah, blah, blah ... "detail-type": "GuardDuty Finding", "time": "2019-03-14T14:40:39Z"}

On our Heavy Forwarder I've set this up in Splunk_TA_aws/local:

[aws:kinesis]
TIME_PREFIX = \s+\"time\":\s+\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
MAX_TIMESTAMP_LOOKAHEAD = 40960

The regex works on the standalone Splunk instance I have on my laptop, and it works on regex101.com. But when the data is indexed, the time Splunk indexes on my cluster is: 3/14/19 2:46:29.090 PM

Any clues as to what might be happening here?
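One diagnostic worth running on the heavy forwarder that actually parses this input is btool, to confirm which props stanza wins for aws:kinesis; timestamp settings only take effect on the first full Splunk instance that parses the data, so a competing stanza elsewhere would explain the behavior. A sketch:

$SPLUNK_HOME/bin/splunk btool props list aws:kinesis --debug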
We've recently added 50% more indexers. After rebalancing the cluster, we're finding that we still have a gap on our hot/warm storage where the new indexers have maybe 2 or 3 warm buckets and the older ones have maybe 20 or 30. The splunk rebalance cluster-data command has definitely evened out the total bucket counts between the indexers, but is there a way to balance only the warm buckets? (I believe the answer to that is 'no'.)
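To quantify the skew by bucket state rather than total counts, something like the dbinspect search below can help. This is a sketch; dbinspect across all indexes can be expensive on a large cluster, so narrowing the index list may be wise.

| dbinspect index=*
| search state=warm
| stats count AS warm_buckets BY splunk_server
| sort - warm_buckets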
Using:

index=default sourcetype=my:sourcetype | extract pairdelim="][", kvdelim="=", auto=f

Feb 19 09:44:02 foobar Feb 19 2019 09:44:02.322 UTC : [My Port=2000][Device name=MyDevice][Device IP address=10.3.36.10][Device type=11]

Splunk extracts fields named: My_Port, Device_name, Device_IP_Address, Device_type

Is there a props extraction that will do the same as the automatic extraction, when there will be many unique kv pairs in events with this sourcetype?
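One way to get the equivalent of that search-time extract from props/transforms is a REPORT-based extraction with a generic key/value regex. A sketch follows; the stanza name bracket_kv is arbitrary, and CLEAN_KEYS (true by default) is what turns the spaces in the key names into underscores.

transforms.conf:

[bracket_kv]
REGEX = \[([^=\]]+)=([^\]]*)\]
FORMAT = $1::$2
MV_ADD = true

props.conf:

[my:sourcetype]
REPORT-bracket_kv = bracket_kv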
We're performing a migration of our syslog infrastructure and I need to get some metrics that show progress. Since the legacy environment would have a source name of "/data/device/path/to/file" and the new environment has a source name of "/syslog/device/path/to/file", I'm trying to manipulate the results so that:

1) if the source name begins with /data ... set syslog_source = "OldSyslog"
2) if the source name begins with /syslog ... set syslog_source = "NewSyslog"

But my SPL is clearly flawed here, as the 'count' from a 'source' doesn't get passed to syslog_source.

| tstats count WHERE index=* (source="/data/*" OR source="/syslog/*") earliest=-6d@d latest=@d by _time span=1d source
| eval syslog_source=case(match(source,"/syslog/*"),"NewSyslog",match(source,"/data/*"),"OldSyslog")
| xyseries _time, syslog_source, count

The goal here is just to consolidate the count of all sources matching "/data" or "/syslog" into counts by 'syslog_source', but I'm not sure how to pass those counts along.
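Two things seem to trip this up: match() takes a regex, so "/syslog/*" does not anchor the way a shell glob would, and the per-source counts need to be re-aggregated into the new category before charting. A sketch of one way to do it:

| tstats count WHERE index=* (source="/data/*" OR source="/syslog/*") earliest=-6d@d latest=@d by _time span=1d source
| eval syslog_source=case(match(source,"^/syslog/"),"NewSyslog", match(source,"^/data/"),"OldSyslog")
| stats sum(count) AS count BY _time syslog_source
| xyseries _time syslog_source count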
Data model acceleration enforcement causing issues with Enterprise Security upgrade

I upgraded ES from 5.0.0 to 5.1.1 today and am concerned about the whole process. Upgrading ES is simple enough, but when forced to go through the setup, the process of updating helper apps, including Splunk_SA_CIM, enables data model accelerations on data models that we don't use. (It overwrites Splunk_SA_CIM/local/datamodels.conf with all data models set to acceleration = true.) It seems to break the whole model of "put things in your local directory so they won't be touched during an upgrade."

In addition, we have to go into Settings -> Data Inputs -> Data Model Acceleration Enforcement Settings and manually disable all 19 items; otherwise it appears that the datamodels.conf file gets rewritten immediately after you make the change.

Is there a better process for ensuring that you don't lose your intended configs after an ES upgrade? Is there a config file that is associated with "Data Model Acceleration Enforcement Settings"? (I have not been able to find one.)

Thank you
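For the unused models, the local override itself is just a datamodels.conf with acceleration turned back off. A sketch is below; the stanza names are only examples of CIM data model names, and this only sticks once the enforcement inputs are disabled so the file isn't rewritten.

# Splunk_SA_CIM/local/datamodels.conf
[Alerts]
acceleration = false

[Certificates]
acceleration = false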
I'm attempting to update our certs between our universal forwarders (UF) and indexers in our test environment. I believe I have the certs properly generated and in place, but when the UF attempts to forward, we see these errors:

10-19-2018 08:13:14.661 -0600 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server hello A', alert_description='handshake failure'.
10-19-2018 14:17:44.863 +0000 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='handshake failure'.
10-19-2018 14:17:44.863 +0000 ERROR TcpInputProc - Error encountered for connection from src=nn.nn.nn.nn:38438. error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher

This leads me to believe that the cipherSuite needs to be updated ...

Indexer server.conf ( Splunk 7.1.3 ):

[sslConfig]
sslVersions = tls1.2
sslVersionsForClient = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256

( etc/system/local/inputs.conf under [SSL] ):

cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

UF ( Splunk 6.6.4 ), etc/system/default/server.conf:

[sslConfig]
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256

etc/system/default/outputs.conf:

[tcpout]
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256

I've been using this link to generate and set up the new forwarding certs: https://wiki.splunk.com/images/f/fb/SplunkTrustApril-SSLipperySlopeRevisited.pdf
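A quick way to check whether the indexer will actually negotiate one of the configured suites, independent of Splunk, is openssl s_client from the forwarder host. A sketch; the host, port, and cipher string are placeholders to replace, and a clean handshake here but not in Splunk would point back at the cert/key type (for example, the ECDHE/ECDH-ECDSA families require an ECDSA certificate).

openssl s_client -connect indexer.example.com:9997 -tls1_2 -cipher 'ECDHE-RSA-AES256-GCM-SHA384'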
As I go through the manual process of trying to migrate queries from dbConnect v1.x to dbConnect 3.1.3, I'm having issues with the Edit Input panel. I follow the steps:

Choose a valid connection - check
Browse structure and type SQL ... - check
Pick a rising column and set the checkpoint value - check
Update SQL to accept the checkpoint value and make sure it works - this is where I run into a problem.

The second I start typing "WHERE TIME_STAMP > ?" ... the Rising Column dropdown completely empties out and the query returns:

com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1. ( No results found )

This makes me unable to save the query and actually set a Rising Column and a Checkpoint Value.

If I execute the same query using EVENT_TIME, things work ... but both EVENT_TIME and TIME_STAMP are 'bigint' objects, and they both show up in the batch query results. So the question would be: why is TIME_STAMP an invalid field to use as a rising column in dbConnect 3 when it works perfectly in dbConnect 1? The target database is MS SQL Server.
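For reference, the general shape DB Connect 3 expects for a rising-column input is a checkpoint placeholder in the WHERE clause plus an ORDER BY on the rising column. A sketch only; apart from TIME_STAMP and EVENT_TIME, the table and column names are placeholders.

SELECT EVENT_TIME, TIME_STAMP, OTHER_COLUMNS
FROM MY_TABLE
WHERE TIME_STAMP > ?
ORDER BY TIME_STAMP ASC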
We're trying to grab cloudtrail datasources from AWS using the Splunk_TA_aws, and even though the documentation says that initial_scan_datetime should be configured as a relative time (per: https://docs.splunk.com/Documentation/AddOns/released/AWS/S3 ), the UI configuration rejects that format. And when we try to enter a specific date/time, i.e.:

initial_scan_datetime = 2018-04-01T00:00:00Z

... Splunk still starts trying to collect data as far back as it exists (in our case: 2016).

We've also tried (per the S3 documentation page):

initial_scan_datetime = -7d@d

And that also fails. Are we configuring the inputs incorrectly, or is this a bug?
It seems that scheduler.log events come already prepared for parsing:

04-09-2018 23:35:04.548 +0000 ERROR SavedSplunker - savedsearch_id="nobody;my_lookups;Unix DHCP Refresh", message="Error in 'lookup' command: Lookups: The lookup table 'dhcp_lookup' does not exist or is not available.". No actions executed

Yet etc/apps/search/default/props.conf is insistent on overwriting that, so that it extracts EVERYTHING after "SavedSplunker - " into the 'message' field:

[scheduler]
EXTRACT-fields = (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<message>.+)

So now, instead of message being:

Error in 'lookup' command: Lookups: The lookup table 'dhcp_lookup' does not exist or is not available

it's expanded to:

savedsearch_id="nobody;my_lookups;Unix DHCP Refresh", message="Error in 'lookup' command: Lookups: The lookup table 'dhcp_lookup' does not exist or is not available.". No actions executed

So the question is, why? And why choose the same field name that's already been used in the event itself? It seems it would have been much more logical to choose a different field name than 'message'. Thank you
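One way around it, if the default extraction is in the way, is to shadow the same attribute from a local props.conf so the trailing free text lands in a differently named field; the local stanza with the same attribute name takes precedence over the default. A sketch (scheduler_message is an arbitrary name I made up):

# etc/apps/search/local/props.conf
[scheduler]
EXTRACT-fields = (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<scheduler_message>.+)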
I'm in the process of manually migrating over 100 database inputs from dbx 1.1.5 to 3.1.2 and have run into an issue with time formatting. The dbx 1.1.5 query formats the date/time fields as epoch, yet the same query in 3.1.2 formats the fields as: YYYY-MM-DD HH:MM:SS.NNN

The query is very small (Oracle database):

SELECT (select sysdate from dual) as "collection_time", lastconnectiontime, agentname as "src_host", version FROM table_name

The output I get is:

[ dbConnect 3.1.2 ]
2018-04-06 09:12:00.958, collection_time="2018-04-06 02:12:00.0", LASTCONNECTIONTIME="2016-11-21 17:33:49.523", src_host="xxxxxxxxxxxx", VERSION="1.1.1.1"

[ dbConnect 1.1.5 ]
collection_time=1522980720.000 LASTCONNECTIONTIME=1479749629.523 src_host=xxxxxxxxxxxx VERSION=1.1.1.1

So, really two questions, the most important being: how do I adjust the 3.1.2 query so that I get the identical date/time format as the 1.1.5 query? The other question: is it possible to modify the 3.1.2 query to remove the leading date/time the query was executed (2018-04-06 09:12:00.958), either by a conf setting in the app or some other dbxquery option?
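Since dbConnect 3 appears to pass timestamps through as formatted strings rather than epoch, one option is to do the conversion in the SQL itself. A sketch for Oracle, assuming LASTCONNECTIONTIME is a DATE/TIMESTAMP column stored in UTC; the CAST to DATE drops sub-second precision, so the .523 fraction would be lost.

-- epoch = (date value minus 1970-01-01) expressed in days, times 86400 seconds
SELECT
  (CAST(SYS_EXTRACT_UTC(SYSTIMESTAMP) AS DATE) - DATE '1970-01-01') * 86400 AS "collection_time",
  (CAST(lastconnectiontime AS DATE) - DATE '1970-01-01') * 86400 AS "lastconnectiontime",
  agentname AS "src_host",
  version
FROM table_name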
etc/system/local/authentication.conf and etc/system/metadata/local.meta both contain many old entries for users that may no longer be using the platform. Both files get updated automatically when a new user logs in. On a search cluster, is there a recommended solution for removing these entries? My plan was just to shut down the cluster members, remove all the cached data, and restart, but is there a less disruptive way? Thank you.
Running Splunk Add-on for Microsoft Cloud Services v2.0.1.1. The directories underneath var/lib/splunk/modinputs/ that are being written to by this Technology Add-on are not cleaning themselves up. Does this get fixed in 2.1? The performance of my heavy forwarder running this TA is very, very poor.
I have a user whose search export results are capping at 10 MB ... But with the admin role I can export well beyond that. Anyone know the specific capability that controls this? Thank you ... Splunk 6.6.3
On one of my non-production search deployers, when I issue an apply shcluster-bundle, I am not being challenged for username/password after the first successful authentication following a splunk restart. I've compared the server.conf -> shclustering settings between two deployers, one that seems to challenge me pretty much every time (preferred) and this other one that never challenges me ... the shclustering configs are identical. Can anyone point me to where this can be configured? I do wish to be challenged routinely because we typically perform these operations across a team of Splunk admins using their own credentials. The current situation implies that Splunk is seemingly caching the credentials indefinitely. Note: I realize I can use -auth username:password ... but obviously, the cleartext password then gets written into the shell history, which I do not want.
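As far as I know, the splunk CLI caches a session token per OS user under ~/.splunk after the first successful login, which would explain the behavior; clearing that cache forces a fresh username/password prompt. A sketch (the exact filename embeds the management host and port):

ls ~/.splunk/
rm ~/.splunk/authToken_*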
I have a volume defined:

[volume:hot]
path = /indexes/warm
maxVolumeDataSizeMB = 2097152

[test]
homePath = volume:hot/test/db
coldPath = volume:cold/test/colddb
thawedPath = /indexes/cold/test/thaweddb
repFactor = auto
maxDataSize = auto_high_volume

So ... I'd like this 'test' index to not be controlled by the 2 TB volume size limit, but I don't want to move it to another filesystem. The index is small, and I'd like to let it grow to > 10 GB before it starts rolling over to cold ... can I create another volume (on the same path)?

[volume:test]
path = /indexes/warm
maxVolumeDataSizeMB = 10000

[test]
homePath = volume:test/test/db
coldPath = volume:test/test/colddb
thawedPath = /indexes/cold/test/thaweddb
repFactor = auto
maxDataSize = auto_high_volume

Would that configuration make it so that, when /indexes/warm reaches 2 TB, it will start moving older buckets to cold without touching the 'test' index? Or will this entirely mess things up? Thank you
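For the "let it reach roughly 10 GB before rolling to cold" half of this, there is also a per-index knob that works alongside the volume limit rather than a second volume on the same path; whether volume-level eviction can be told to skip a specific index is a separate question. A sketch, reusing the existing paths:

[test]
homePath = volume:hot/test/db
coldPath = volume:cold/test/colddb
thawedPath = /indexes/cold/test/thaweddb
repFactor = auto
maxDataSize = auto_high_volume
# roll warm buckets to cold once this index's hot/warm usage exceeds ~10 GB
homePath.maxDataSizeMB = 10000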