All Posts


Hi @Karthikeya

For reference, the following docs page is useful for SmartStore retention settings: https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/SmartStoredataretention

maxDataSize is the maximum size of an individual bucket in MB, not the total size of the index. Data is "frozen" when either maxGlobalDataSizeMB or frozenTimePeriodInSecs is reached (whichever comes first!), so it is not safe to assume the data will be retained for 6 years if maxGlobalDataSizeMB is not large enough to hold 6 years of data.

To clarify my previous post, as @PickleRick mentioned, cold buckets in SmartStore indexes are functionally equivalent to warm buckets. They are essentially the same, cold buckets only exist in limited circumstances, and in any case the storage on S3 is the same.

Let me know if you have any further questions or need clarity on any of these points.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
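To make the interplay concrete, here is a minimal indexes.conf sketch for a SmartStore index with roughly 6 years of retention. The stanza name and the sizes are placeholders for illustration only, so size maxGlobalDataSizeMB against your actual daily ingest:

[new_index]
remotePath = volume:aws_s3_vol/$_index_name
# maxDataSize controls the size of an individual bucket (MB), not the whole index
maxDataSize = 750
# A bucket is frozen when EITHER of the two limits below is reached, whichever comes first
# 6 years = 6 * 365 * 86400 seconds
frozenTimePeriodInSecs = 189216000
# Must be large enough to actually hold ~6 years of this index's data
maxGlobalDataSizeMB = 6000000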
@gcusello @isoutamo @PickleRick @richgalloway

An update on what I recently saw in my architecture.

indexes.conf in Cluster Manager:

[new_index]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

volumes indexes.conf:

[volume:primary]
path = $SPLUNK_DB
#maxVolumeDataSizeMB = 6000000

There is one more app which is pushing to the indexers with this indexes.conf (I was not at all aware of this):

[default]
remotePath = volume:aws_s3_vol/$_index_name
maxDataSize = 750

[volume:aws_s3_vol]
storageType = remote
path = s3://conn-splunk-prod-smartstore/
remote.s3.auth_region = eu-west-1
remote.s3.bucket_name = conn-splunk-prod-smartstore
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = XXXX
remote.s3.supports_versioning = false

So I believe we are using Splunk SmartStore to store our data. In this case, can we accommodate this project, which occasionally receives 20TB of data per day? Please guide me.
@PickleRick And will data be deleted from S3 if it reaches any limit? I mean, we didn't set frozenTimePeriodInSecs, so by default it is 6 years - does that mean the older data stays in S3 for 6 years?
One small correction. With SmartStore there is no separate warm/cold storage. A bucket gets uploaded to remote storage and is cached locally when needed, but it doesn't go through the warm->cold lifecycle. It's also worth noting that with some use cases (especially when you often run searches covering a significant portion of your remote storage, which is far larger than your local storage) you might take a significant performance hit because you're effectively not caching anything locally.
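If you do end up in that situation, the cache manager behaviour can be tuned in server.conf on the indexers. A rough sketch is below; the numbers are illustrative placeholders, not recommendations, and should be sized against your local storage and search patterns:

[cachemanager]
# Evict least-recently-used buckets first (this is the default policy)
eviction_policy = lru
# Keep this much free space (MB) on the cache partition before evicting
eviction_padding = 5120
# Prefer to keep buckets containing data newer than this many seconds in the cache
hotlist_recency_secs = 86400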
Thanks for this... So my understanding is that my index, which has a size of 500GB by default, will never fill up, because once a bucket reaches 750 MB (maxDataSize) it will roll over to a warm bucket, which lives in the S3 bucket? Am I correct?
Hi

You are using Splunk SmartStore, which offloads warm and cold buckets to remote object storage (S3). Hot buckets remain on local indexer storage until they roll to warm, then get uploaded to S3.

Your remotePath and [volume:aws_s3_vol] config confirms SmartStore is enabled, meaning: hot data and cached warm/cold data resides on the indexers, while warm and cold buckets are stored in S3.

There is no need for coldToFrozenDir or coldToFrozenScript unless you want to archive frozen data elsewhere; these settings allow data which is past the frozenTimePeriodInSecs to be moved elsewhere instead of deleted.

Retention is controlled by frozenTimePeriodInSecs (age-based) or maxTotalDataSizeMB (size-based). If you don't override these in local/, the defaults apply (usually 6 years retention). You can run the following command on one of your indexers to confirm the settings which have been applied:

/opt/splunk/bin/splunk btool indexes list --debug | grep -A 10 new_index

Splunk automatically retrieves data from S3 to the local cache when searches require it. This is transparent to users but may add latency for data which is not already in the cache. When the cache reaches capacity it will "evict" buckets based on the eviction policy, which by default evicts the least-recently-used buckets first.

Some useful docs relating to SmartStore and index configuration: SmartStore Overview, Indexes.conf retention settings, Data lifecycle and bucket types.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
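As an additional sanity check alongside btool, a quick dbinspect search can show how the buckets for an index break down by state and size. This is a sketch only - new_index is a placeholder for your real index name:

| dbinspect index=new_index
| stats count as buckets, sum(sizeOnDiskMB) as size_mb by state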
Hi @python

The following SPL search can be used to determine the dashboards which have NOT been used in the last 60 days. There are multiple ways to achieve this, however I've gone with the approach of using the _audit index, which records the dashboard ID as "provenance", and matching it against the results of the data/ui/views REST endpoint to ensure we have a list of all dashboards.

index=_audit info=completed action=search provenance!="N/A" provenance!="UI:Search" provenance!="scheduler" earliest=-60d latest=now
| stats count by provenance app
| eval provenance=replace(replace(provenance,"UI:dashboard:",""),"UI:Dashboard:","")
| append [| rest splunk_server=local /servicesNS/-/-/data/ui/views | rename title AS provenance, eai:acl.app AS app]
| stats sum(count) as search_count by provenance app
| fillnull value=0 search_count
| where search_count=0

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @python

Did you try the SPL I gave on your previous post to look at dashboard usage in the last 60 days?

index=_audit provenance=* app=* info=completed earliest=-60d provenance!="N/A" app!="N/A" provenance!="UI:Search" provenance!="Scheduler"
| eval provenance=replace(replace(provenance,"UI:Dashboard:",""),"UI:dashboard:","")
| stats latest(user) as last_user, latest(_time) as latest_access, dc(search_id) as searches by provenance, app
| append [| rest /servicesNS/-/-/data/ui/views splunk_server=local count=0 | fields eai:acl.app title name eai:acl.owner isVisible | rename eai:acl.app as app, title as provenance, eai:acl.owner as owner ]
| stats values(*) as * by provenance, app
| where searches>1
| eval latest_access_readable=strftime(latest_access,"%Y-%m-%d %H:%M:%S")

I will work on the SPL you have provided to show completely unused dashboards in the selected time period.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi

Add the trendline command after your timechart:

index=my_index
| timechart dc(USER) as DISTINCT_USERS
| trendline sma5(DISTINCT_USERS) as trend_DISTINCT_USERS

This adds a 5-point simple moving average trendline. Adjust the window size (e.g., sma7, sma10) as needed.

Trendline documentation

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
I want to add a trendline to this chart: index=my_index | timechart dc(USER) as DISTINCT_USERS How do I accomplish this? Thanks! Jonathan
I was newly assigned to a project and didn't get a proper KT from the people who left. I have queries regarding my current architecture and configurations, and I am not well versed with advanced admin concepts. Please help me with these queries:

We have 6 indexers (hosted on AWS as EC2, not Splunk Cloud) with 6.9TB disk storage and a 1.5GB/day license. Is this ok?

I am checking the retention period, but frozenTimePeriodInSecs and maxTotalDataSizeMB are not set anywhere in local; they are only present in default. I am also looking at whether an archival location is set or not.

indexes.conf in Cluster Manager:

[new_index]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

volumes indexes.conf:

[volume:primary]
path = $SPLUNK_DB
#maxVolumeDataSizeMB = 6000000

There is one more app which is pushing to the indexers with this indexes.conf (I was not at all aware of this):

[default]
remotePath = volume:aws_s3_vol/$_index_name
maxDataSize = 750

[volume:aws_s3_vol]
storageType = remote
path = s3://conn-splunk-prod-smartstore/
remote.s3.auth_region = eu-west-1
remote.s3.bucket_name = conn-splunk-prod-smartstore
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = XXXX
remote.s3.supports_versioning = false

And I don't see coldToFrozenDir anywhere; coldToFrozenScript is also not mentioned anywhere.

Now, are we storing archival data in the S3 bucket? But maxDataSize is mentioned there, which relates to hot-to-warm rolling. So apart from hot bucket data, is all the rest of the data stored in the S3 bucket now?

So how will Splunk take the data from the S3 bucket to search and run queries?
You can use this query as a reference:

| rest splunk_server=local /servicesNS/-/-/data/ui/views
| rename title AS "Dashboard Name", eai:acl.app AS Application, eai:acl.owner AS Owner, id as dashboard_id
| eval dashboard_name=replace(dashboard_id, ".*/data/ui/views/([^/]+)$", "\1")
| eval dashboard_name=urldecode(dashboard_name)
| fields "Dashboard Name", Application, Owner, dashboard_name
| join type=left dashboard_name
    [ search index=_internal sourcetype IN ("splunk_web_service", "splunkd_access")
    | rex "Rendering dashboard \\\"(?<rendered_dashboard>[^\"]+)"
    | eval uri_decode = urldecode(uri)
    | rex field=uri_decode "data/ui/views/(?<rendered_dashboard>[^$]+)$"
    | search rendered_dashboard=* NOT rendered_dashboard="_new"
    | transaction rendered_dashboard maxspan=2s
    | stats last(_time) AS last_viewed_time BY rendered_dashboard
    | eval dashboard_name=rendered_dashboard
    | fields dashboard_name, last_viewed_time ]
| eval "Last Viewed Time"=if(isnull(last_viewed_time), "Never Viewed", strftime(last_viewed_time, "%m/%d/%Y %H:%M:%S"))
| table "Dashboard Name", Application, Owner, "Last Viewed Time"
Hi @python

Check out this app https://splunkbase.splunk.com/app/7300 (Splunk app for Redundant or Inefficient Search Spotting). This has dashboards for identifying things like unused dashboards and other knowledge objects.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Is it possible to identify obsolete dashboards, or the last time a dashboard was executed?
To find alerts that have not triggered, try this query:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title disabled triggered_alert_count alert.severity alert.track eai:acl.app
| rename alert.track as isAlert, eai:acl.app as App
| eval TriggerCount=coalesce(triggered_alert_count, 0)
| where disabled=0 AND TriggerCount=0 AND isAlert=1
| table title alert.severity App
Hi @doli , This is Kartik from Cisco. Please send an email to cisco-cybervision-splunk@cisco.com and the team will share the document. Thanks!
Hi @b17gunnr

The error "UDP port 514 is not available" typically means that Splunk is not able to listen on the port, which is usually for one of two reasons:

1. Another process is already listening on the port. Confirm that nothing else is already using this port; how to check will vary between operating systems.
2. Splunk does not have permission to listen on port 514. To listen on ports below 1024 the Splunk process may require additional permissions (CAP_NET_BIND_SERVICE) and/or could be affected by AppArmor / SELinux. This will also vary depending on the OS.

For more information check out https://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports

Also, there's a previous Splunk answer which might help at https://community.splunk.com/t5/Getting-Data-In/how-to-listen-to-port-UDP-514-when-splunk-is-not-root/m-p/108169

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
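As a rough illustration (the command, port number and sourcetype below are generic placeholders, not values from your environment), you could first check what already owns the port and, if needed, listen on an unprivileged port and point the devices there instead:

# Check whether another process (often a local syslog daemon) is already bound to UDP 514
sudo ss -lunp | grep ':514'

# inputs.conf sketch: listen on an unprivileged port such as 1514 instead
[udp://1514]
# Replace with the sourcetype your Cisco add-on expects
sourcetype = syslog
connection_host = ip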
Hello folks,

My organization is struggling with ingesting the Cisco Firepower audit (sys)logs into Splunk; we've been able to successfully ingest all the other sources. With the Firepowers only offering UDP 514, which is unavailable according to Splunk, or a HEC configuration without tokens (so Splunk would drop the events), our options appear limited. Has anyone else come across this issue and solved it?
Hello, we found a solution: there was a metadata index source key that it was possible to use. Thanks for your help guys.
I have similar issues popping up as of late. But how does one isolate the affected forwarder? The error message reads:

Forwarder Ingestion Latency
Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 89. Message from <UUID>:<ip-addrs>:54246
Unhealthy Instances: indexer1 indexer2

The "message from" section just lists the UUID, an IP address and a port. Which part here would help me find the actual forwarder? The UUID does not match any "Client name" under forwarder management on the deployment server. The IP address does not match a server on which I have a forwarder installed. One or a few of the indexers are listed as "unhealthy instances" each time, but the actual error sounds like it lives on the forwarder end and not on the indexer.

With the available information in this warning/error, how can I figure out which forwarder is either experiencing latency issues OR needs to have the log file mentioned flushed?