Activity Feed
- Posted Re: Routing events is creating bottleneck for ingestions- How do I resolve this issue? on Getting Data In. 08-15-2022 01:33 PM
- Karma Re: Routing events is creating bottleneck for ingestions- How do I resolve this issue? for richgalloway. 08-15-2022 01:14 PM
- Posted Re: Routing events is creating bottleneck for ingestions- How do I resolve this issue? on Getting Data In. 08-07-2022 01:15 PM
- Posted Re: Routing events is creating bottleneck for ingestions- How do I resolve this issue? on Getting Data In. 08-04-2022 07:37 PM
- Posted Routing events is creating bottleneck for ingestions- How do I resolve this issue? on Getting Data In. 08-03-2022 09:33 AM
- Tagged Routing events is creating bottleneck for ingestions- How do I resolve this issue? on Getting Data In. 08-03-2022 09:33 AM
- Posted How to create a search that shows how much old data is used past 30 days? on Getting Data In. 03-06-2022 07:22 PM
- Tagged How to create a search that shows how much old data is used past 30 days? on Getting Data In. 03-06-2022 07:22 PM
- Karma Re: How to copy indexed data from one index to another index in smartstore enabled cluster for richgalloway. 03-02-2022 06:07 PM
- Posted Re: On-prem to Smartstore migration issues- Why are the event counts not matching? on Knowledge Management. 02-25-2022 01:52 PM
- Tagged Re: On-prem to Smartstore migration issues- Why are the event counts not matching? on Knowledge Management. 02-25-2022 01:52 PM
- Posted On-prem to Smartstore migration issues- Why are the event counts not matching? on Knowledge Management. 02-24-2022 12:21 PM
- Posted How to copy indexed data from one index to another index in smartstore enabled cluster on Getting Data In. 02-09-2022 02:07 PM
- Tagged How to copy indexed data from one index to another index in smartstore enabled cluster on Getting Data In. 02-09-2022 02:07 PM
- Posted set up an alert for SHC members on Deployment Architecture. 06-08-2021 02:13 PM
- Karma Re: How to use rex to extract JSON text in "msg" keyValue pair? for gokadroid. 03-10-2021 07:36 PM
- Posted Re: Passing dynamic parameters in search running from cli on Splunk Search. 02-18-2021 06:19 AM
- Posted Passing dynamic parameters in search running from cli on Splunk Search. 02-14-2021 05:06 PM
08-15-2022
01:33 PM
Yes, all data is in one place and the HF does the routing. We are trying to see if moving the routing to the indexers helps, and we are also trying to find a scalable solution. The only way as of now is to view the contents. I will add a couple of HFs and see if that helps. Thank you!
08-07-2022
01:15 PM
Below is inputs.conf on the UF:

[batch:///flume/rollingfiles/process]
_TCP_ROUTING = prod-a_hf
crcSalt = <SOURCE>
disabled = false
index = unknownsentinel
move_policy = sinkhole
recursive = false
sourcetype = cs:sentinel:unknown:audit:json
whitelist = \.rd$

The UF will not do any transformation of events, so the data has to go through the HF / indexer, which would be the same issue we are seeing now, right? Please let me know if I am missing anything. The way it is set up now:

UF ---> HF ---> Indexer cluster
UF --> batch monitor to read the files, each containing multiple events
HF --> routes each event to a specific index and sourcetype based on tenant ID
08-04-2022
07:37 PM
This is the way it was written before, to handle a few case statements, but the list keeps growing. We have set up a batch input to read the files on the UF, and each event is evaluated with case() to be ingested into an index with the right sourcetype. Do you mean moving this into inputs.conf on the UF? Wouldn't that make a huge inputs.conf? We have 5 UFs and 2 HFs. After I saw the issue I added 2 more HFs, so now it is 5 UFs and 4 HFs, but even then it is not ingesting fast enough.
08-03-2022
09:33 AM
On the HF we have routing rules in transforms.conf which are taking more time and creating a bottleneck for us. We have the following numbers of routing entries:

~2000 entries for index routing
~200 entries for sourcetype routing

Can you please provide suggestions to route the events faster and more efficiently?
Sample from transforms.conf:

[route_sentinel_to_index]
INGEST_EVAL = index:=case(\
match(_raw, "\"TENANT\":\"xxxxxx-b589-c11a968d4876\""), "nacoak_mil", \
. . . <1997 entries> . . .
match(_raw, "\"EVENT_TIME\":\"\d{13}\""), "unknown_events", \
true(), "unknownsentinel")

[apply_sourcetype_to_sentinel]
INGEST_EVAL = sourcetype:=case(\
match(_raw, "\"SYSTEM\":\"xxxx-b3a7-xxxxxx\""), "cs:fhir:prod:audit:json", \
match(_raw, "\"SYSTEM\":\"xxxxxxx-d424c20xxxx\""), "cs:railsapp_server:ambulatory:audit:json", \
. . . <198 entries> . . .
true(), "cs:sentinel:unknown:audit:json")
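A ~2000-branch case() is evaluated linearly for every event, which is where the per-event cost comes from. One common alternative is to extract the tenant ID once and resolve the index with a constant-time map (in Splunk terms, an ingest-time lookup instead of a chain of match() calls). The sketch below is plain Python, not Splunk's ingest engine, and the tenant ID and index names are invented for illustration:

```python
import re

# Hypothetical tenant-to-index map; in Splunk this could live in a lookup
# table instead of ~2000 match() branches evaluated one by one.
TENANT_TO_INDEX = {
    "xxxxxx-b589-c11a968d4876": "nacoak_mil",
    # ... one entry per tenant ...
}

TENANT_RE = re.compile(r'"TENANT":"([^"]+)"')

def route_event(raw: str) -> str:
    """Return the destination index for a raw JSON event string."""
    m = TENANT_RE.search(raw)
    if m:
        # O(1) dict lookup instead of a linear chain of match() calls.
        return TENANT_TO_INDEX.get(m.group(1), "unknownsentinel")
    return "unknownsentinel"

print(route_event('{"TENANT":"xxxxxx-b589-c11a968d4876","msg":"hi"}'))  # nacoak_mil
print(route_event('{"msg":"no tenant"}'))                               # unknownsentinel
```

The point of the sketch is the data structure, not the language: a single regex extraction plus a hash lookup stays cheap as the tenant list grows, while a case() chain gets slower with every added entry.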
03-06-2022
07:22 PM
Hi,
I have a SmartStore cluster in AWS with frozenTimePeriodInSecs set to 7 years, and in the DMC I see a lot of bucket downloads from S3. I would like to know how much old data is being retrieved so that I can allocate space to the cache efficiently. Does anyone have an SPL query to get details on how much old data is retrieved per index?
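The question above boils down to simple arithmetic: of the buckets being pulled back from S3, how much data per index is older than some threshold. A plain-Python sketch of that calculation, with made-up bucket records (in practice the inputs would come from something like | dbinspect or the cacheman REST endpoints):

```python
from datetime import datetime, timedelta

# Hypothetical downloaded-bucket records: (index, latest event time, size in MB).
buckets = [
    ("web",  datetime(2022, 1, 1),  512),
    ("web",  datetime(2022, 3, 1),  256),
    ("main", datetime(2021, 11, 5), 1024),
]

def old_data_mb(buckets, now, max_age_days=30):
    """Sum downloaded-bucket size per index for buckets older than max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    totals = {}
    for index, latest, size_mb in buckets:
        if latest < cutoff:
            totals[index] = totals.get(index, 0) + size_mb
    return totals

print(old_data_mb(buckets, now=datetime(2022, 3, 6)))
# {'web': 512, 'main': 1024}
```

Whatever the data source, the output (MB of "old" data fetched per index) is the number that feeds directly into cache sizing.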
- Tags:
- splunk-enterprise
02-25-2022
01:52 PM
I have added the below stanzas on my on-prem indexes.conf and server.conf.

indexes.conf:

[volume:splunk]
storageType = remote
path = s3://<bucketName>
remote.s3.endpoint = https://s3.us-east-2.amazonaws.com
remote.s3.encryption = sse-kms
remote.s3.kms.key_id =
remote.s3.access_key =
remote.s3.secret_key =

server.conf:

[cachemanager]
max_concurrent_uploads = 2
eviction_policy = noevict
hotlist_recency_secs = 3888000

I am adding remotePath = volume:splunk/$_index_name to each specific index I plan to migrate and doing a rolling restart of the indexer cluster. I am using the below commands to compare the event counts:

| tstats count where index=<IndexName> by index

| dbinspect index=<IndexName> | dedup bucketId | stats sum(eventCount) by index

I am monitoring the migration status using the below command on the CM:

/opt/splunk/bin/splunk search "|rest /services/admin/cacheman/_metrics | fields splunk_server migration*" -auth admin:$(cat /etc/splunk/password)
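The searches above each yield one count per index on each side of the migration. The comparison step itself is trivial but worth getting right (check both directions, and indexes present on only one side). A plain-Python sketch with invented index names and counts:

```python
def diff_counts(onprem: dict, aws: dict) -> dict:
    """Return {index: (onprem_count, aws_count)} for indexes whose counts differ."""
    mismatches = {}
    for index in sorted(set(onprem) | set(aws)):
        a, b = onprem.get(index, 0), aws.get(index, 0)
        if a != b:
            mismatches[index] = (a, b)
    return mismatches

onprem = {"web": 1000, "main": 500, "sec": 42}
aws    = {"web": 1000, "main": 480, "sec": 42}
print(diff_counts(onprem, aws))  # {'main': (500, 480)}
```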
- Tags:
- indexing
02-24-2022
12:21 PM
Hi,
We are migrating our cluster from on-prem to a SmartStore-enabled cluster in AWS, a few indexes at a time, and during this process the event counts do not match in some cases.

Case 1: The event count in the AWS cluster is less than the event count on-prem.

Case 2: The event count in the AWS cluster is more than the event count on-prem.

Any idea what might cause the event counts not to match?
- Labels:
- smartstore
02-09-2022
02:07 PM
I have a requirement to move indexed data from index-A to index-B in a SmartStore-enabled cluster. Both indexes (A & B) have data in the AWS S3 bucket. I would like to know if the steps below will work.
Steps :
Stop the incoming data to index-A
Roll the hot bucket on index-A
Move the data from s3 for index-A to index-B
Using: aws s3 sync s3://bucket-name/index-A s3://bucket-name/index-B
Run bootstrap command on CM.
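In SmartStore the remote objects live under a per-index prefix, so step 3's aws s3 sync amounts to copying every object key from the index-A prefix to the index-B prefix. The sketch below (plain Python, with a simplified, made-up key layout; real SmartStore paths also embed bucket and peer GUIDs) shows only that key-prefix rewrite; whether the copied buckets are then picked up cleanly by the bootstrap step is exactly the open question being asked here:

```python
def rewrite_keys(keys, src_index, dst_index):
    """Map object keys from the source index prefix to the destination prefix."""
    src, dst = f"{src_index}/", f"{dst_index}/"
    return [dst + k[len(src):] for k in keys if k.startswith(src)]

keys = [
    "index-A/db/00/11/42~GUID/guidSplunk-GUID/journal.gz",
    "index-A/db/00/11/42~GUID/guidSplunk-GUID/Hosts.data",
    "index-B/db/something/else",
]
print(rewrite_keys(keys, "index-A", "index-B"))
```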
- Tags:
- smartstore
- Labels:
- indexer
06-08-2021
02:13 PM
I have my search head cluster in AWS, and I am looking to set up an alert each time a new SHC member is added to the cluster or an old member is removed. I came across enabling "DMC Alert - Search Peer Not Responding", but it checks all members (CM, indexers, SHC members) added to the MC. Can you please suggest another way to set this up for SHC members only?
- Labels:
- search head clustering
02-18-2021
06:19 AM
I have implemented it this way:

query="index=* source=${c2_source}/*.gz earliest=-1d@d | stats count"
event_count=$(/opt/splunk/bin/splunk search "$query" -uri 'https://<SH-IP>:8089/' -auth admin:password 2>/dev/null)
echo $event_count
02-14-2021
05:06 PM
Hi, is there a way to dynamically pass a value like below in Splunk when running a search from the CLI? I am trying to write a script to find the event count from source files on the HF and compare it to the event count indexed, by running the search below:

/opt/splunk/bin/splunk search 'index=* source=${c2_source}/*.gz | stats count' -uri 'https://<SH IP>:8089/' -auth admin:xxxxxxxxxx 2>/dev/null

Or is there a way to achieve this using REST API commands?
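For the comparison half of such a script, the file-side event count (events in the source .gz files on the HF) can be computed directly, assuming one newline-delimited event per line. A stdlib-only Python sketch; the path pattern is illustrative:

```python
import glob
import gzip

def count_gz_events(pattern: str) -> int:
    """Count newline-delimited events across all .gz files matching pattern."""
    total = 0
    for path in glob.glob(pattern):
        with gzip.open(path, "rt", errors="replace") as f:
            total += sum(1 for _ in f)
    return total

# Example: count_gz_events("/data/c2_source/*.gz"), then compare the result
# with the count returned by the `splunk search '... | stats count'` CLI call.
```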
- Tags:
- search
- Labels:
- search job inspector
02-01-2021
12:41 PM
Hi, in Splunk's internal log file I can see that the log file was processed by Splunk for indexing, but when I search the same index from the SH, I cannot find the events. This happens intermittently on a few of the log files.

Log from splunkd on the HF:

INFO TailReader - Batch input finished reading file='/xx/log/xxxxxxx/processed/archive/processed.log_2021-01-20T12:45:01.log'

No results from the below search over All Time:

index=* source="/xxx/log/xxxxxxx/processed/archive/processed.log_2021-01-20T12:45:01.log"
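One way to audit an intermittent gap like this is to extract every file the HF claims to have finished reading and then check each one against the index. A plain-Python sketch of the log-parsing side (the regex matches the TailReader line format quoted above; the sample paths are shortened):

```python
import re

FINISHED_RE = re.compile(r"TailReader - Batch input finished reading file='([^']+)'")

def finished_files(splunkd_lines):
    """Return the list of file paths TailReader reports as fully read."""
    return [m.group(1) for line in splunkd_lines
            if (m := FINISHED_RE.search(line))]

lines = [
    "01-20-2021 12:46:00.000 INFO TailReader - Batch input finished reading "
    "file='/xx/log/processed.log_2021-01-20T12:45:01.log'",
    "01-20-2021 12:46:01.000 INFO Metrics - group=queue, name=indexqueue",
]
print(finished_files(lines))
# Each returned path can then be checked with a search like:
#   index=* source="<path>"   (over All Time)
```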
- Labels:
- distributed search
- indexer clustering
01-14-2021
04:53 PM
After adding the below to outputs.conf on the forwarder, the slow indexing issue was fixed.

outputs.conf:

[tcpout:p2s]
maxQueueSize = 7MB
01-13-2021
11:43 AM
@scelikok Thanks for replying. I have checked for bandwidth/latency issues and there are none; in a test I was able to send 5 GB of data in 60 seconds:

[SUM] 0.00-60.00 sec 5.35 GBytes 766 Mbits/sec receiver

Per the monitoring console, I don't see any indexing issues.
01-05-2021
03:58 PM
Hi, I have started historical indexing by copying the .gz files onto the HF. After that, I am seeing the below in splunkd.log:

01-05-2021 18:43:00.728 -0500 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
01-05-2021 18:43:01.039 -0500 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-05-2021 18:43:06.013 -0500 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-05-2021 18:43:11.049 -0500 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 20 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-05-2021 18:43:20.032 -0500 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

In metrics.log on the HF:

01-05-2021 18:47:08.734 -0500 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7457, largest_size=7703, smallest_size=6737
01-05-2021 18:47:08.735 -0500 INFO Metrics - group=queue, ingest_pipe=2, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7443, largest_size=7482, smallest_size=6719
01-05-2021 18:47:08.735 -0500 INFO Metrics - group=queue, ingest_pipe=2, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7476, largest_size=7489, smallest_size=6735
01-05-2021 18:47:08.736 -0500 INFO Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=367, largest_size=415, smallest_size=0
01-05-2021 18:48:59.729 -0500 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7676, largest_size=7703, smallest_size=6666
01-05-2021 18:48:59.730 -0500 INFO Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=357, largest_size=368, smallest_size=0
01-05-2021 18:52:03.732 -0500 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7241, largest_size=7491, smallest_size=6542
01-05-2021 18:52:03.736 -0500 INFO Metrics - group=queue, ingest_pipe=2, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7468, largest_size=7478, smallest_size=6443
01-05-2021 18:52:03.737 -0500 INFO Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=360, largest_size=370, smallest_size=0
01-05-2021 18:55:01.732 -0500 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7243, largest_size=7316, smallest_size=6545
01-05-2021 18:55:01.732 -0500 INFO Metrics - group=queue, ingest_pipe=0, name=parsingqueue, blocked=true, max_size_kb=10240, current_size_kb=10239, current_size=1266, largest_size=1272, smallest_size=1030
01-05-2021 18:55:01.733 -0500 INFO Metrics - group=queue, ingest_pipe=0, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7238, largest_size=7323, smallest_size=6578

I have the below settings on the HF.

limits.conf:

[thruput]
maxKBps = 0

server.conf:

[general]
parallelIngestionPipelines = 4
[queue]
maxSize = 20MB
[queue=parsingQueue]
maxSize = 10MB

My HF is an on-prem server and the Splunk indexer cluster is in AWS. Can you please suggest ways to speed up my indexing?
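The metrics lines above can be summarized mechanically: a queue is saturated when current_size_kb sits at max_size_kb with blocked=true, and whichever queue is full furthest downstream points at the bottleneck. A plain-Python sketch of that check (the regex fields match the Metrics line format quoted above; the sample lines are shortened):

```python
import re

QUEUE_RE = re.compile(
    r"name=(?P<name>\w+).*?blocked=(?P<blocked>\w+)"
    r".*?max_size_kb=(?P<max>\d+).*?current_size_kb=(?P<cur>\d+)"
)

def blocked_queues(metric_lines):
    """Return {queue_name: fill percent} for queues reporting blocked=true."""
    result = {}
    for line in metric_lines:
        m = QUEUE_RE.search(line)
        if m and m.group("blocked") == "true":
            pct = 100 * int(m.group("cur")) // int(m.group("max"))
            result[m.group("name")] = pct
    return result

lines = [
    "INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, "
    "max_size_kb=20480, current_size_kb=20479, current_size=7457",
    "INFO Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, "
    "max_size_kb=1024, current_size_kb=1023, current_size=367",
]
print(blocked_queues(lines))  # {'indexqueue': 99, 'aggqueue': 99}
```

Here indexqueue being full on the HF is consistent with the TcpOutputProc warnings: the forwarder's output, not its parsing, is the choke point.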
- Labels:
- using Splunk Enterprise
12-16-2020
12:38 PM
@scelikok I was looking to know the progress; I mean how many buckets, or how much data in GB or MB, are still pending replication from the indexer where I ran the splunk offline command.
12-15-2020
07:03 PM
Hi, I ran the "splunk offline --enforce-counts" command on one of the indexer servers in a multisite cluster. It has been over a day. I would like to know the status or progress of my offline command. Is there a way to check?
- Tags:
- indexer
- Labels:
- administration
11-04-2020
02:20 PM
Hi, in Splunk clusters with SmartStore enabled on all indexes via remotePath in the [default] stanza, is there a way to disable an index or make an index read-only?

# splunk smart-store settings
[default]
remotePath = volume:remote_store/$_index_name

From the indexes.conf spec:

disabled = <boolean>
* Toggles your index entry off and on.
* Set to "true" to disable an index.
* CAUTION: Do not set this setting to "true" on remote storage enabled indexes.
* Default: false

isReadOnly = <boolean>
* Whether or not the index is read-only.
* If you set to "true", no new events can be added to the index, but the
index is still searchable.
* You must restart splunkd after changing this setting. Reloading the
index configuration does not suffice.
* Do not configure this setting on remote storage enabled indexes.
* If set to 'true', replication must be turned off (repFactor=0) for the index.
* Default: false
10-23-2020
04:43 PM
To change the default data model location and cache manager location (SmartStore enabled) on an indexer, I see we have two options.

1) Update splunk-launch.conf with SPLUNK_DB = <custom file system where hot buckets are stored> (/splunkdata/data/internal)

2) Update the path of the volume:_splunk_summaries stanza:

SPLUNK_HOME/etc/slave-apps/_cluster/local/indexes.conf:
[volume:_splunk_summaries]
path = /splunkdata/data/internal

SPLUNK_HOME/etc/system/default/indexes.conf:
[volume:_splunk_summaries]
path = $SPLUNK_DB

Do we have to change both splunk-launch.conf and indexes.conf? If we update only one of the above, which is the recommended way, and what are the pros and cons of each?
- Labels:
- indexer
12-02-2019
02:59 PM
1 Karma
Hey Guys,
We are planning to migrate a single-site cluster to a multisite cluster.
For converting single-site buckets to multi-site buckets we are planning to use "constrain_singlesite_buckets = false" after the migration, as our data in the single-site cluster is around 12 TB.
1) Can you please let me know how the cluster master prioritizes replication of the existing single-site data (12 TB) versus newly indexed data (350 GB per day) and other activities?
2) I would like to know if users see any negative impact until the site search factor and site replication factor are met.
3) We are using indexer discovery; would there be any challenges while the cluster master is busy handling replication?
We have 6 indexers in site 1 (the existing cluster) and are planning to add 6 indexers in site 2 as part of the multisite cluster.
Below is RF and SF :
"site_replication_factor" : "origin:2,total:3"
"site_search_factor" : "origin:1,total:2”
"replication_factor": 2
"search_factor": 2
10-09-2019
12:23 PM
Recommendation from Splunk PS:
SSE-C (Customer Managed Keys)
This is the recommended option in the documentation but this will change very soon.
Advantages
The key is self-managed, so there is an element of trust as the creator/generator of the key.
No KMS API operation limits to contend with.
Disadvantages
Keys are stored in plain text in the indexes.conf file. This would not pass a security risk assessment.
Advanced key management requires an advanced key management solution e.g. Hardware Security Modules for storage and management.
To rotate keys, you need to move the data from SmartStore back to an EBS volume and re-upload the data with the new key. Splunk cannot be running during this operation. This is by far the biggest drawback.
SSE-C while technically valid requires more due care of the customer to ensure adequate protection of the data and is subject to human error and misconfiguration. It is only appropriate for customers with specific requirements to use customer-managed keys.
SSE-KMS has Amazon manage your keys for you and is more flexible than SSE-C. There are limits that are described here: https://docs.aws.amazon.com/kms/latest/developerguide/limits.html
04-24-2019
07:44 AM
I am enabling SmartStore on Splunk 7.2.6 with SSE-C. My SmartStore setup works successfully without the SSL parameters.
https://docs.splunk.com/Documentation/Splunk/7.2.5/Indexer/SmartStoresecuritystrategies
After adding the below configuration to /opt/splunk/etc/_master-apps/_cluster/local/indexes.conf and trying to apply the bundle from the cluster master UI, I am facing an issue: "Bad SSL settings for KMS leading to bad ssl context for volume=remote_store".
I am using an AWS role for S3 bucket access.
[volume:remote_store]
storageType = remote
path = s3://bucket-name
remote.s3.access_key =
remote.s3.secret_key =
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com
remote.s3.encryption = sse-c
remote.s3.encryption.sse-c.key_type = kms
remote.s3.encryption.sse-c.key_refresh_interval = 86400
remote.s3.kms.auth_region = us-west-2
remote.s3.kms.key_id = xxxxxxxxxxxxxxxxxxxxxxxx
remote.s3.kms.sslAltNameToCheck = s3.us-west-2.amazonaws.com
remote.s3.kms.sslVerifyServerCert = true
remote.s3.kms.sslVersions = tls1.2
remote.s3.kms.sslRootCAPath = /tmp/s3.us-west-2.amazonaws.com.pem
remote.s3.kms.cipherSuite = ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256
remote.s3.kms.ecdhCurves = prime256v1,secp384r1,secp521r1
My Splunk env is running on AWS EC2 instances.
s3.us-west-2.amazonaws.com.pem is a PEM cert with the root chain included.
03-05-2019
01:08 PM
I am moving an existing Splunk 7.0 environment to Splunk 7.2.4 with SmartStore. Can you please list all the settings that need to be set to default, disabled, ignored ...
Thanks in advance.