Activity Feed
- Got Karma for Re: data rebalance progresses is very poor or getting stuck. Wednesday
- Got Karma for Re: Is there a REST API call for getting the status of a Search Head Cluster (SHC)?. 11-22-2024 04:08 AM
- Got Karma for Re: [SmartStore]Trigger that would cause a cluster to resync from remote storage. 08-15-2024 12:25 PM
- Got Karma for Re: [SmartStore]Trigger that would cause a cluster to resync from remote storage. 05-08-2024 01:36 AM
- Got Karma for Re: [SmartStore] What is Cluster bootstrap process?. 05-04-2024 06:10 AM
- Got Karma for Re: [smartstore] splunk smartstore and Data integrity. 01-04-2024 05:15 PM
- Got Karma for Large lookup caused the bundle replication to fail. What are my options?. 11-20-2023 12:13 PM
- Got Karma for Re: Large lookup caused the bundle replication to fail. What are my options?. 11-20-2023 12:13 PM
- Got Karma for Re: Smartstore:SmartStore cache is not respecting cache limits. 11-10-2023 09:22 PM
- Got Karma for Re: [smartstore] How to map S2 smartstore buckets to local splunk bucket?. 08-18-2023 09:03 AM
- Got Karma for Re: access.log indexed multiple time. 07-10-2023 02:20 AM
- Got Karma for Re: Too many Events generated for Windows Security EventCode 4662 causing high resource issues like CPU. 04-24-2023 09:28 AM
- Got Karma for Re: [SmartDtore] How to Analyse the CacheSize?. 04-04-2023 02:02 AM
- Got Karma for Re: [SmartStore] How is the Replication of Summary bucket managed in Splunk Smartstore?. 01-18-2023 05:39 PM
- Got Karma for Large lookup caused the bundle replication to fail. What are my options?. 01-06-2023 01:45 PM
- Got Karma for Re: Large lookup caused the bundle replication to fail. What are my options?. 01-06-2023 01:45 PM
- Posted Re: Post upgrade of 3 Node Search Head Cluster from vesrion 8.2.7 ,one SHC Kvstore status as DOWN on Knowledge Management. 11-29-2022 01:00 PM
- Posted Post upgrade of 3 Node Search Head Cluster from vesrion 8.2.7 ,one SHC Kvstore status as DOWN on Knowledge Management. 11-29-2022 12:54 PM
- Got Karma for Re: data rebalance progresses is very poor or getting stuck. 08-14-2022 12:37 AM
- Posted Re: Is there any controls to limit the size of a user search on Splunk Search. 07-25-2022 09:03 AM
11-08-2019
09:15 PM
2 Karma
Generally, I'd say ignore HTTP status codes on their own. Some points around this:
i) There could be intermittent problems behind these codes that get resolved by a retry.
ii) The codes become valuable when investigating other issues, and that is why we report them.
iii) They could raise red flags if certain HTTP status codes start appearing, such as 401s. It wouldn't be practical to enumerate which codes, at which frequency, indicate a problem.
So the errors matter if their frequency for a given bucket is high, for example if the error repeats for a bucket's receipt.json:
01-04-2019 15:47:13.049 -0800 ERROR S3Client - command=get transactionId=0x7f19c9452000 rTxnId=0x7f19c9431000 status=completed success=N uri=http://URI/splunk-data/_internal/db/3f/9c/5~2BD2B9DE-8F5E-48AD-B566-3BC1ADC9502F/receipt.json statusCode=404 statusDescription="Not Found" payload=""
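If you want to keep an eye on how often a given status code recurs for the same object, a quick search over the indexer's own logs can help. This is just a sketch, and it assumes the statusCode and uri fields are picked up by the automatic key=value extraction on splunkd.log; the threshold of 5 is arbitrary:
index=_internal sourcetype=splunkd component=S3Client success=N
| stats count min(_time) AS first_seen max(_time) AS last_seen BY statusCode, uri
| where count > 5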
... View more
11-08-2019
09:15 PM
As part of internal testing, we are migrating data from a classic index to SmartStore. indexes.conf was configured with S3 settings pointing to the on-prem S3 remote object store. We do see the data getting migrated from the classic index to SmartStore, and we can see objects created on the S3 store.
Meanwhile, we see various 404 errors related to the receipt.json file during this process. Interestingly, those files do exist on S3, but SmartStore seems to check for the file before uploading it for the first time, and at that point the S3 object store returns an error because the object does not yet exist on the remote object store.
Here is an excerpt from the splunkd.log
... View more
- Labels:
- smartstore
10-21-2019
11:36 AM
1 Karma
Editing lookups now requires the upload_lookup_files capability as of Splunk Enterprise 7.3.x. This requirement has not been documented yet. When a user without the upload_lookup_files capability attempts to add or edit a lookup, they are directed to the Splunk pony error page.
A documentation JIRA is in the works:
BUG SPL-176155: Editing lookup roles have changed in Splunk 7.3.x and now require the new capability upload_lookup_files (which is undocumented)
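As a minimal sketch of a workaround, the capability can be granted in authorize.conf; the role name below is hypothetical, and the usual deployment/restart steps apply:
# authorize.conf (role name is an example, not from the original post)
[role_lookup_editor]
importRoles = user
upload_lookup_files = enabled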
... View more
10-21-2019
11:34 AM
We recently upgraded our fairly large Splunk deployment from version 7.2.6 to 7.3.2, and our users are unable to use the "upload lookup" button or to delete lookups.
Was there a new capability for lookups that was added?
... View more
10-03-2019
03:32 PM
The max_cache_size setting puts a cap on the amount of storage in a single volume consumed by the SmartStore cache.
This allows customers to split warm/cold storage consumption between SmartStore and non-SmartStore indexes on the same volume.
Since there are no caps placed on non-SmartStore indexes (modulo minFreeSpace), setting max_cache_size to a non-zero value can result in non-SmartStore indexes squeezing out space available for the SmartStore cache.
One use case for max_cache_size, in addition to the migration use case, is for customers to reduce their local storage consumption in a staged manner. By limiting the storage available to the cache relative to the available space, customers can identify an optimal cache size before reducing the indexer count.
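As a sketch, capping the cache at roughly 500 GB per partition might look like this in server.conf on each indexer; the value is an arbitrary example, in MB:
# server.conf (example value only)
[cachemanager]
max_cache_size = 500000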
... View more
10-03-2019
03:29 PM
The question is about the configuration of max_cache_size for SmartStore.
From server.conf:
max_cache_size =
* Specifies the maximum space, in megabytes, per partition, that the cache can occupy on disk. If this value is exceeded, the cache manager starts evicting buckets.
* A value of 0 means this feature is not used, and has no maximum size.
* Default: 0
What is the point of setting this to anything but 0?
... View more
- Tags:
- smartstore
10-01-2019
10:21 AM
1 Karma
You can pick a single region for the S3 bucket and have both sites point to it; pick whichever region has more data.
Although S3 supports cross-region replication, which would mean you could try a different, nearest S3 endpoint per cluster site, the indexes.conf in the bundle pushed by the master to the peers would still need to use the same endpoint. Editing a peer's indexes.conf manually to use a different site is not recommended.
Make sure the endpoint includes the region specifier; otherwise EC2 instances will use their local regions.
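A sketch of what that looks like in indexes.conf, with a hypothetical bucket name; note the region-specific endpoint rather than a generic one:
# indexes.conf (bucket name and region are examples)
[volume:smartstore]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com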
... View more
10-01-2019
10:17 AM
We are planning our migration to SmartStore within AWS.
Currently, we are running a multi-site cluster on EC2 instances.
Site 1 is in us-east-1 (VA) and site 2 is in us-east-2 (OH).
The recommendation is to put the S3 storage in the same AWS region as the indexers. With this being a multi-site cluster that spans two regions, what is the best way to configure it?
... View more
- Tags:
- smartstore
09-05-2019
12:03 PM
Here is something that will work. To delete a bucket both from remote storage and locally, use:
curl -k -u admin:xxxxxxx https://:24711/services/data/indexes/_audit/freeze-buckets -d bucket_ids=_audit~100~33B81190-7EE1-4FCD-AC6D-DC4E3BEF7E1C -X POST
Note:
- Run this on the indexer.
- It also works on a standalone instance.
- In a cluster environment, the other indexers will also delete the bucket locally.
- It is suitable for an S2 environment and takes care of deleting the remote bucket.
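To verify the remote copy is actually gone afterwards, the rfs helper used elsewhere in this thread can list what remains for that bucket id (this assumes ls accepts the same bucket: prefix that rmV does; empty output means the bucket is no longer on the remote store):
# run on the indexer; same example bucket id as above
$SPLUNK_HOME/bin/splunk cmd splunkd rfs -- ls --starts-with bucket:_audit~100~33B81190-7EE1-4FCD-AC6D-DC4E3BEF7E1C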
... View more
09-04-2019
05:40 PM
Splunk version 7.3.1 and above lets you configure different indexes on an indexer to use different SmartStore object stores. For example, below I have configured the _internal index to use one SmartStore volume and the _audit index to use the other.
===========First Smartstore Configuration=======
[volume:my_s3_vol]
storageType = remote
path = s3://newrbal1
remote.s3.access_key = AXXKIAIQWJDOATYCYFTTTTTKWZ5A
remote.s3.secret_key = dCCCCCCCCCCN7rMvSN96RSDDDDYqcKeSSSSi3TcD6YQS8J+EzQI5Qm+Ar9
remote.s3.endpoint = https://s3-us-east-2.amazonaws.com
remote.s3.signature_version = v4
===========Second Smartstore Configuration=======
using AWS S3 storage
[volume:aws_s3_vol]
storageType = remote
path = s3://luantest
remote.s3.access_key = AKIASVRRRRDSSVCAAAANBVKZXK4T
remote.s3.secret_key = JYD7umcpFFFFHKM4/uq7Wi/rfyUUHdcSFFFz3j2N85bg8wK
remote.s3.endpoint = https://s3-us-east-2.amazonaws.com
remote.s3.signature_version = v4
=============Here index _internal is configured with smartstore [volume:aws_s3_vol]=====
[_internal]
thawedPath = $SPLUNK_DB/_internal/thaweddb
remotePath = volume:aws_s3_vol/$_index_name
repFactor = auto
=============Here index _audit is configured with smartstore [volume:my_s3_vol]=====
[_audit]
thawedPath = $SPLUNK_DB/_audit/thaweddb
remotePath = volume:my_s3_vol/$_index_name
repFactor = auto
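To sanity-check that each volume resolves with its own credentials and endpoint, listing each one with the rfs helper (the same command used later in this thread) is a reasonable smoke test; volume names match the stanzas above:
# run on an indexer that has this indexes.conf
$SPLUNK_HOME/bin/splunk cmd splunkd rfs -- ls --starts-with volume:my_s3_vol
$SPLUNK_HOME/bin/splunk cmd splunkd rfs -- ls --starts-with volume:aws_s3_vol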
... View more
09-04-2019
05:36 PM
I'm doing a proof of concept of SmartStore with multiple object stores. There appears to be a defect where the remote.s3.access_key (and maybe remote.s3.secret_key) is not being properly associated with the volume stanza.
Specifically, in my indexes.conf, I have the following:
[volume:remote_store_0]
storageType = remote
path = s3://splunk-ss-01-0
remote.s3.access_key = [REDACTED_0]
remote.s3.secret_key = [REDACTED_0]
remote.s3.endpoint = http://xx.xx.xx.xxx
[volume:remote_store_1]
storageType = remote
path = s3://splunk-ss-01-1
remote.s3.access_key = [REDACTED_1]
remote.s3.secret_key = [REDACTED_1]
remote.s3.endpoint = http://xx.xx.xx.xxx
What is happening is that when I try to use remote_store_1 the access key for remote_store_0 is being used. Note that the endpoint and path are properly associated with the volume specification. It is at least the access_key (and maybe the secret key) that is not being properly associated with the volume stanza.
The bug is particularly annoying since doing splunk cmd splunkd rfs -- ls --starts-with volume:remote_store_1 will use the correct access_key that is associated with the volume.
... View more
- Tags:
- smartstore
08-30-2019
04:08 PM
1)ERROR message
06-17-2019 22:48:08.445 -0700 ERROR CacheManagerHandler - ReverseIndex cannot add cacheId="bid|ceg_notification_dispatcher_nsprod~560~74F09ED5-8892-471C-9E6C-CD3BC6EDAC9D|" to search sid="remote_searchhead01-3.prd.cmn.mntspl.us-west-2_subsearch_tmp_1560836883.1" as this search has already completed and closed all buckets
S2 has some protection against lagging REST requests. There are times when a search process requests to open a bucket, but the search is stopped and cleaned up before the open request is handled. This used to cause problems, leaving buckets stuck with an invalid search reference that would prevent eviction (see "stale buckets" in JIRA). We have since noticed that some apps, and sometimes search/subsearch, will re-use the same SID, causing these errors. Since we recently changed the behavior for stale buckets, we are in the process of relaxing this check so that these requests are let through and the error is avoided (for Quake, and then to be backported).
2)Error message
06-17-2019 22:13:08.380 -0700 WARN CacheManager - Attempting to decrement refcount of cacheId="bid|creditscore_prod_pod8~66~8D530702-839D-4679-BBAA-0E408F8EDAFD|" but sid=remote_searchhead01-1.prd.cmn.mntspl.us-west-2_tpargain_tpargainsearch_search34_1560834740.1903850_B8E7D0B9-BB6E-41DB-928B-411ABBA5AD90 is not an opener
The above error means a search tried to close a bucket that it did not have open. Is there any particular context you're looking for?
... View more
08-30-2019
04:06 PM
1)ERROR message
06-17-2019 22:48:08.445 -0700 ERROR CacheManagerHandler - ReverseIndex cannot add cacheId="bid|ceg_notification_dispatcher_nsprod~560~74F09ED5-8892-471C-9E6C-CD3BC6EDAC9D|" to search sid="remote_searchhead01-3.prd.cmn.mntspl.us-west-2_subsearch_tmp_1560836883.1" as this search has already completed and closed all buckets
2)Error message
06-17-2019 22:13:08.380 -0700 WARN CacheManager - Attempting to decrement refcount of cacheId="bid|creditscore_prod_pod8~66~8D530702-839D-4679-BBAA-0E408F8EDAFD|" but sid=remote_searchhead01-1.prd.cmn.mntspl.us-west-2_tpargain_tpargainsearch_search34_1560834740.1903850_B8E7D0B9-BB6E-41DB-928B-411ABBA5AD90 is not an opener
... View more
- Tags:
- smartstore
08-30-2019
03:33 PM
With Splunk SmartStore, generally ignore HTTP status codes on their own:
- There could be intermittent problems behind these codes that get resolved by a retry.
- The codes become valuable when investigating other issues.
- They could raise red flags if certain HTTP status codes start appearing, such as 401s. It wouldn't be practical to enumerate which codes, at which frequency, indicate a problem.
... View more
08-12-2019
01:52 PM
Splunk has a cacheman endpoint that you call for a bucket to request that the bucket be added to cachemanager_upload.json.
Note:
- Repeat the above for every bucket.
- Restart the indexer.
- Repeat the above for every indexer.
Note: when you upload the bucket, if the upload succeeds you will see the following in splunkd.log:
08-07-2018 21:51:40.124 +0000 INFO CacheManager - cacheId="bid|_introspection~6~8A26F1F7-F89B-48FB-9398-1E9AC7A2235B|" is NOT on on stable storage, will transfer ...
08-07-2018 21:51:40.125 +0000 INFO CacheManager - action=upload, cacheId="bid|_introspection~6~8A26F1F7-F89B-48FB-9398-1E9AC7A2235B|", status=attempting
08-07-2018 21:51:40.473 +0000 INFO CacheManager - action=upload, cacheId="bid|_introspection~6~8A26F1F7-F89B-48FB-9398-1E9AC7A2235B|", status=succeeded, elapsed_ms=349
audit.log:
08-07-2018 21:51:40.125 +0000 INFO AuditLogger - Audit:[timestamp=08-07-2018 21:51:40.125, user=n/a, action=local_bucket_upload, info=started, cache_id="bid|_introspection~6~8A26F1F7-F89B-48FB-9398-1E9AC7A2235B|", prefix=sample/_introspection/db/49/47/6~8A26F1F7-F89B-48FB-9398-1E9AC7A2235B/guidSplunk-8A26F1F7-F89B-48FB-9398-1E9AC7A2235B][n/a]
08-07-2018 21:51:40.473 +0000 INFO AuditLogger - Audit:[timestamp=08-07-2018 21:51:40.473, user=n/a, action=local_bucket_upload, info=completed, cache_id="bid|_introspection~6~8A26F1F7-F89B-48FB-9398-1E9AC7A2235B|", local_dir="/opt/splunk/var/lib/splunk/_introspection/db/db_1532988790_1532988799_6_8A26F1F7-F89B-48FB-9398-1E9AC7A2235B", kb=260, elapsed_ms=349
Here is an example of uploading multiple buckets:
curl --netrc-file nnfo -k -X POST https://localhost:8089/services/admin/cacheman/_bulk_register -d cache_id="bid|testindex~17~024011E7-E61E-45CE-82DE-732038D5C276|" -d cache_id="bid|testindex~22~024011E7-E61E-45CE-82DE-732038D5C276|" -d cache_id="bid|testindex~28~024011E7-E61E-45CE-82DE-732038D5C276|" -d cache_id="bid|testindex~34~024011E7-E61E-45CE-82DE-732038D5C276|" -d cache_id="bid|testindex~12~E646664A-D351-41E4-BBE7-5B02A08C44C9|" -d cache_id="bid|testindex~17~F876C294-3E3E-488A-8344-16727AC34C52|" -d cache_id="bid|testindex~17~E646664A-D351-41E4-BBE7-5B02A08C44C9|"
... View more
08-12-2019
01:48 PM
We noticed that, for some reason, some of the buckets never got uploaded to the remote store. Is it feasible to upload the buckets manually?
... View more
- Tags:
- splunk-enterprise
08-06-2019
09:46 AM
You can only use "| inputintelligence" on non-threat intelligence. Given it's a local lookup, can you just use "| inputlookup"?
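For example, if the list were also saved as an ordinary CSV lookup (the file name here is hypothetical, not from the original post), a plain lookup search would be:
| inputlookup local_threatlist.csv
| head 10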
... View more
08-06-2019
09:46 AM
We added a custom intelligence lookup (as per https://docs.splunk.com/Documentation/ES/5.3.0/Admin/Addthreatintelcustomlookup) and are unable to use this intelligence list with the "inputintelligence" command. Also, we see errors like "Failed to read threatlist /opt/splunk/var/lib/splunk/modinputs/threatlist/oculus".
... View more
07-30-2019
09:55 AM
Splunk has a bug that affects the searchability guarantee during a rolling restart: BUG SPL-168132, "Clustered indexes aren't fully searchable during an indexer cluster rolling upgrade".
This bug is fixed in 7.2.8 and 7.3.x.
... View more
07-30-2019
09:53 AM
I am trying to figure out whether, during a rolling upgrade, searchability is guaranteed. If not, what is the expectation?
The customer is following the steps per https://docs.splunk.com/Documentation/Splunk/7.2.4/Indexer/Searchablerollingupgrade
For now we have 12 indexers in the cluster, but we are planning to add many more as part of an expansion plan.
The customer expects that during a "rolling upgrade" all data will stay searchable; please confirm the expected behavior.
... View more
07-29-2019
10:09 AM
Although this question is answered in the documentation, it comes up quite a bit.
It's not easy to roll back the changes; for this reason, Splunk has not fully implemented this workflow and has never tested it with any significance. Hence, officially and otherwise, moving off S2 once it is enabled is not recommended.
... View more
07-29-2019
10:07 AM
We would like to find out if it is possible to disable S2 when needed. Per our doc,
A SmartStore-enabled index cannot be converted to non-SmartStore
https://docs.splunk.com/Documentation/Splunk/7.3.0/Indexer/AboutSmartStore
... View more
- Tags:
- smartstore
07-16-2019
12:22 PM
In this case, you will need the cluster to re-discover the buckets that are only present on the remote store. The cluster can be bootstrapped; it will discover the buckets on the remote store and download them locally, which will let you use | dbinspect and later remove the buckets both locally and from the remote store.
Bootstrap command:
$SPLUNK_HOME/bin/splunk _internal call /services/cluster/master/control/control/init_recreate_index -method POST
Bootstrapping ensures that buckets which are already present in the cluster are not created again on the cluster.
Bootstrapping just lists all the buckets on S3 and then creates the buckets which are not present on the cluster.
It is usually quick as well.
Hence, if the cluster is only missing a few buckets, we can initiate bootstrapping and it will create them.
It is also fairly safe and quick to run this for large deployments.
To discover these buckets, bootstrapping is currently the only option; it is not supported per index.
The entire operation is detached from the usual operations of the cluster master, and it is safe and quick as well.
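Once bootstrapping finishes, a quick way to confirm the rediscovered buckets are visible again (the index name is just an example) is a dbinspect search such as:
| dbinspect index=_audit
| table bucketId, state, splunk_server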
... View more
07-16-2019
12:17 PM
I need to figure out the valid command that can be used to delete a bucket both locally and from the remote store. In the past, we used this command:
curl -k -u admin: -X POST https://localhost:8089/services/cluster/master/buckets//remove_all
This command only deletes the bucket locally; the bucket continues to exist on the remote store.
To remove the bucket from the remote store, the CLI command is:
$SPLUNK_HOME/bin/splunk cmd splunkd rfs -- rmV --starts-with bucket:_audit~2~761A77A2-6676-4BF9-83CD-1CB243ED61BF
Because we only used "remove_all" to remove buckets, we are now in a situation where some buckets are present only on the remote store and not locally.
Also, these buckets are not visible to | dbinspect.
... View more
- Tags:
- smartstore
07-11-2019
12:39 PM
1 Karma
The directory scheme is as follows when we upload a bucket to SmartStore:
{2 letter hash} / {2 letter hash} / {bucket_id_number~origin_guid} / {"guidSplunk"-uploader_guid} / (bucket contents)
The two two-letter hashes are determined by the first 4 characters of the SHA-1 of the bucket's "bucket_id_number~origin_guid" string (it does not take et/lt/index into account).
For example:
my bucket on local storage is:
$SPLUNK_HOME/_internal/db/db_1533256878_1533256720_10_33A1AEFB-8C83-4005-80F0-6BEBC769EBE0
gets uploaded into remote storage as:
_internal/db/56/ba/10~33A1AEFB-8C83-4005-80F0-6BEBC769EBE0/guidSplunk-33A1AEFB-8C83-4005-80F0-6BEBC769EBE0
(note, the _internal/db comes from my s2 remote storage settings in indexes.conf)
because:
$ echo -n "10~33A1AEFB-8C83-4005-80F0-6BEBC769EBE0" | sha1sum
56bae43a9604d078d1d617ff9d63faa0a21302e0 -
Note that 56ba → 56/ba is used as the leading two directories of our bucket.
Also note that we identify the uploader of the bucket: it is very possible the same bucket is uploaded twice by different indexers, resulting in multiple copies in the bucket folder (there might be both a guidSplunk-GUID1 and a guidSplunk-GUID2). The receipt.json specifies which copy all users of the bucket (readers/downloaders) should use.
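Putting that together, a small shell sketch that derives the remote prefix for any bucket id, using the same logic as the sha1sum example above:
# derive the two-level hash prefix for a bucket id (bucket_id_number~origin_guid)
bid="10~33A1AEFB-8C83-4005-80F0-6BEBC769EBE0"
hash=$(echo -n "$bid" | sha1sum | cut -c1-4)
echo "${hash:0:2}/${hash:2:2}/${bid}"
# prints: 56/ba/10~33A1AEFB-8C83-4005-80F0-6BEBC769EBE0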
... View more