All Posts

@PickleRick You also need to take into account the fact that when you ingest that amount of data you have to upload it to your S3 tenant. Depending on your whole infrastructure that might prove to be difficult when you hit those peaks. ---> I am thinking the S3 bucket storage is effectively unlimited, because when I check the MC it shows the Home path and Cold path index storage as unlimited... Is that a wrong assumption?
@PickleRick yes I do understand your point. I won't make decisions here, but I want to gain knowledge from the experts here because, as I said, I am still learning things.... My understanding is that we store old data in S3 buckets once it rolls from hot to warm... So I didn't understand why the indexers (6 of them) are considered undersized, because the indexers aren't the ones storing the data in the end, right? The S3 bucket stores 90% of the data (even if 20TB/day comes occasionally). Are we looking at this in terms of CPU, i.e. whether the indexers can handle an unusual 20TB in a single day? What would be the consequences of that? And I believe the default index size of 500 GB will never fill at all, because maxDataSize is set to 750 MB, which means new data crossing 750 MB will roll over to warm buckets (which are in the S3 bucket)? Sorry if I am wrong, but that's my understanding.
@ITWhisperer This is actually what Splunk internally translates the earliest and latest parameters to. @Punnu This is a very interesting issue, because when I use an identical search on a 9.1.2 instance I just pulled and ran in my docker container on my laptop, it runs without any issues. Try running your subsearch with the | format command added and see what it returns (it should return the set of conditions for the outer search rendered as a string):
| makeresults
| eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
| eval latest=relative_time(earliest,"+1d")
| table earliest latest
| format
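A quick aside on why this matters: when a subsearch returns earliest and latest fields, the outer search picks them up as its time range. A minimal sketch of that pattern (index=_internal is purely illustrative; the explicit | format above is mainly useful for previewing what the subsearch expands to):
index=_internal sourcetype=splunkd
    [ | makeresults
      | eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
      | eval latest=relative_time(earliest,"+1d")
      | table earliest latest ]
| stats count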
As much as we're trying to be helpful here, this is something you should work on with your local friendly Splunk Partner. As I said before, your environment already seems undersized in terms of the number of indexers, but you might have an unusual use case in which it would be enough. It doesn't seem to be enough for the 20TB/day peaks. You also need to take into account the fact that when you ingest that amount of data you have to upload it to your S3 tenant. Depending on your whole infrastructure that might prove to be difficult when you hit those peaks. But it's something to discuss in detail with someone at hand with whom you can share your requirements and all limitations in detail. We might have Splunk knowledge and expertise, but from your company's point of view we're just a bunch of random people from the internet. And random people's "advice" is not something I'd base my business decisions on. Yes, I know that consulting services tend to cost money, but then again, failing to properly architect your environment might prove to be even more costly.
This is the SPL I'm using:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title
| search title=Reports*
| eval dayEarliest="-1d@d", dayLatest="@d"
| map maxsearches=100000 search="savedsearch \"$title$\" etime=\"$dayEarliest$\" ltime=\"$dayLatest$\" | addinfo | collect index=INDEXNAME testmode=false | search"
The error I get: [map]: No results to summary index. Why?
Hi @Karthikeya
For reference, the following docs page is useful for SmartStore retention settings: https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/SmartStoredataretention
maxDataSize is the bucket size in MB, not the total size of the index.
Data will be "frozen" when either maxGlobalDataSizeMB or frozenTimePeriodInSecs is met (whichever comes first!) - so it is not safe to assume the data will be retained for 6 years if the maxGlobalDataSizeMB setting is not large enough to hold 6 years of data.
To clarify my previous post, as @PickleRick mentioned - cold buckets in SmartStore indexes are functionally equivalent to warm buckets. They are essentially the same, cold buckets only exist in limited circumstances, and in any case the storage on S3 is the same.
Let me know if you have any further questions or need clarity on any of these points.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
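If it helps to verify what an index has actually ended up with, here is a rough sanity check from the search bar. It is only a sketch: it assumes the data/indexes REST endpoint exposes these attributes on your version and that new_index is the stanza in question; btool on an indexer remains the authoritative check.
| rest /services/data/indexes splunk_server=local
| search title=new_index
| table title maxDataSize frozenTimePeriodInSecs maxGlobalDataSizeMB maxTotalDataSizeMB
| eval frozenTimePeriodDays=round(frozenTimePeriodInSecs/86400,0)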
@gcusello @isoutamo @PickleRick @richgalloway
Update on what I recently saw in my architecture.
indexes.conf in Cluster Manager:
[new_index]
homePath = volume:primary/$_index_name/db
coldPath = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
volumes indexes.conf:
[volume:primary]
path = $SPLUNK_DB
#maxVolumeDataSizeMB = 6000000
There is one more app which is pushing to the indexers with this indexes.conf (I was not at all aware of this):
[default]
remotePath = volume:aws_s3_vol/$_index_name
maxDataSize = 750
[volume:aws_s3_vol]
storageType = remote
path = s3://conn-splunk-prod-smartstore/
remote.s3.auth_region = eu-west-1
remote.s3.bucket_name = conn-splunk-prod-smartstore
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = XXXX
remote.s3.supports_versioning = false
So I believe that we are using Splunk SmartStore to store our data... In this case can we accommodate this project, which receives 20TB of data per day occasionally? Please guide me.
@PickleRick And will data be deleted in S3 if it reaches any limit? I mean we didn't set frozenTimePeriodInSecs, so by default it is 6 years, so the older data stays for 6 years in S3?
One small correction. With SmartStore there is no separate warm/cold storage. A bucket gets uploaded to remote storage and is cached locally if needed, but it doesn't go through the warm->cold lifecycle. It's also worth noting that with some use cases (especially when you often work with searches covering a significant portion of your remote storage, which turns out to be way over your local storage) you might get a significant performance hit because you're effectively not caching anything locally.
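For what it's worth, one rough way to gauge how hard the cache manager is working (and therefore whether searches keep pulling buckets back from S3) is to look at its own logging in _internal. A sketch only - the CacheManager component does log there, but field names such as action can vary by version, so treat the breakdown below as an assumption to adapt:
index=_internal sourcetype=splunkd component=CacheManager earliest=-24h
| stats count by action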
Thanks for this... So my understanding is that my index, which is 500GB by default, will never fill at all because once it reaches 750 MB (maxDataSize) it will roll over to a warm bucket, which is in the S3 bucket? Am I correct?
Hi
You are using Splunk SmartStore, which offloads warm and cold buckets to remote object storage (S3). Hot buckets remain on local indexer storage until they roll to warm, then get uploaded to S3.
Your remotePath and [volume:aws_s3_vol] config confirms SmartStore is enabled, meaning:
Hot data and cached warm/cold data reside on the indexers
Warm and cold buckets are stored in S3
There is no need for coldToFrozenDir or coldToFrozenScript unless you want to archive frozen data elsewhere - these settings allow data which is past the frozenTimePeriodInSecs to be moved elsewhere instead of deleted.
Retention is controlled by frozenTimePeriodInSecs (age-based) or maxTotalDataSizeMB (size-based). If you don't override these in local/, defaults apply (usually 6 years retention).
You can run the following command on one of your indexers to confirm the settings which have been applied:
/opt/splunk/bin/splunk btool indexes list --debug | grep -A 10 new_index
Splunk automatically retrieves data from S3 to the local cache when searches require it. This is transparent to users but may add latency for cold data which is not already in the cache. When the cache reaches capacity it will "evict" buckets based on the eviction policy, which by default is least-recently-used.
Some useful docs relating to SmartStore and index configuration:
SmartStore Overview
Indexes.conf retention settings
Data lifecycle and bucket types
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
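To complement the btool check above, dbinspect gives a quick view of where an index's buckets currently sit. A sketch, assuming new_index from this thread; on SmartStore indexes anything that is no longer hot lives in S3, with a local cached copy pulled down when needed:
| dbinspect index=new_index
| stats count sum(sizeOnDiskMB) as total_mb by state, splunk_server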
Hi @python
The following SPL search can be used to determine the dashboards which have NOT been used in the last 60 days. There are multiple ways to achieve this, however I've gone with the approach of using the _audit index, which records the dashboard ID as "provenance", and matching it against the results of the ui/views REST endpoint to ensure we have a list of all dashboards.
index=_audit info=completed action=search provenance!="N/A" provenance!="UI:Search" provenance!="scheduler" earliest=-60d latest=now
| stats count by provenance app
| eval provenance=replace(replace(provenance,"UI:dashboard:",""),"UI:Dashboard:","")
| append [| rest splunk_server=local /servicesNS/-/-/data/ui/views | rename title AS provenance, eai:acl.app AS app]
| stats sum(count) as search_count by provenance app
| fillnull value=0 search_count
| where search_count=0
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @python
Did you try the SPL I gave on your previous post to look at dashboard usage in the last 60 days?
index=_audit provenance=* app=* info=completed earliest=-60d provenance!="N/A" app!="N/A" provenance!="UI:Search" provenance!="Scheduler"
| eval provenance=replace(replace(provenance,"UI:Dashboard:",""),"UI:dashboard:","")
| stats latest(user) as last_user, latest(_time) as latest_access, dc(search_id) as searches by provenance, app
| append [| rest /servicesNS/-/-/data/ui/views splunk_server=local count=0 | fields eai:acl.app title name eai:acl.owner isVisible | rename eai:acl.app as app, title as provenance, eai:acl.owner as owner ]
| stats values(*) as * by provenance, app
| where searches>1
| eval latest_access_readable=strftime(latest_access,"%Y-%m-%d %H:%M:%S")
I will work on the SPL you have provided to show completely unused dashboards in the selected time period.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi
Add the trendline command after your timechart:
index=my_index
| timechart dc(USER) as DISTINCT_USERS
| trendline sma5(DISTINCT_USERS) as trend_DISTINCT_USERS
This adds a 5-point simple moving average trendline. Adjust the window size (e.g., sma7, sma10) as needed.
Trendline documentation
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I want to add a trendline to this chart:
index=my_index | timechart dc(USER) as DISTINCT_USERS
How do I accomplish this? Thanks! Jonathan
I was newly aligned to a project and didn't have a proper KT from the people who left. I have queries regarding my current architecture and configurations, and I am not well versed with advanced admin concepts. Please help me with these queries:
We have 6 indexers (hosted on AWS cloud as EC2, not Splunk Cloud) with 6.9TB disk storage and a 1.5GB/day license. Is this ok?
I am checking the retention period, but frozenTimePeriodInSecs or maxTotalDataSizeMB is set nowhere in local. But in default it is there... I am also looking at whether an archival location is set or not.
indexes.conf in Cluster Manager:
[new_index]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
volumes indexes.conf:
[volume:primary]
path = $SPLUNK_DB
#maxVolumeDataSizeMB = 6000000
There is one more app which is pushing to the indexers with this indexes.conf (not at all aware of this):
[default]
remotePath = volume:aws_s3_vol/$_index_name
maxDataSize = 750
[volume:aws_s3_vol]
storageType = remote
path = s3://conn-splunk-prod-smartstore/
remote.s3.auth_region = eu-west-1
remote.s3.bucket_name = conn-splunk-prod-smartstore
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = XXXX
remote.s3.supports_versioning = false
And I don't see coldToFrozenDir anywhere, and coldToFrozenScript is also not mentioned anywhere.
So are we storing archival data in the S3 bucket now? But maxDataSize is mentioned there, which relates to hot-to-warm rolling. So apart from hot bucket data, is all the rest of the data stored in the S3 bucket now?
And how will Splunk take the data from the S3 bucket to search and run queries?
You can use this query as a reference:
| rest splunk_server=local /servicesNS/-/-/data/ui/views
| rename title AS "Dashboard Name", eai:acl.app AS Application, eai:acl.owner AS Owner, id as dashboard_id
| eval dashboard_name=replace(dashboard_id, ".*/data/ui/views/([^/]+)$", "\1")
| eval dashboard_name=urldecode(dashboard_name)
| fields "Dashboard Name", Application, Owner, dashboard_name
| join type=left dashboard_name
    [ search index=_internal sourcetype IN ("splunk_web_service", "splunkd_access")
    | rex "Rendering dashboard \\\"(?<rendered_dashboard>[^\"]+)"
    | eval uri_decode = urldecode(uri)
    | rex field=uri_decode "data/ui/views/(?<rendered_dashboard>[^$]+)$"
    | search rendered_dashboard=* NOT rendered_dashboard="_new"
    | transaction rendered_dashboard maxspan=2s
    | stats last(_time) AS last_viewed_time BY rendered_dashboard
    | eval dashboard_name=rendered_dashboard
    | fields dashboard_name, last_viewed_time ]
| eval "Last Viewed Time"=if(isnull(last_viewed_time), "Never Viewed", strftime(last_viewed_time, "%m/%d/%Y %H:%M:%S"))
| table "Dashboard Name", Application, Owner, "Last Viewed Time"
Hi @python
Check out this app: https://splunkbase.splunk.com/app/7300 (Splunk app for Redundant or Inefficient Search Spotting). This has dashboards for identifying things like unused dashboards and other knowledge objects.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Is it possible to identify obsolete dashboards, or the last time a dashboard was executed?
To find alerts that have not triggered, try this query | rest /servicesNS/-/-/saved/searches splunk_server=local | fields title disabled triggered_alert_count alert.severity alert.track eai:acl.app ... See more...
To find alerts that have not triggered, try this query | rest /servicesNS/-/-/saved/searches splunk_server=local | fields title disabled triggered_alert_count alert.severity alert.track eai:acl.app | rename alert.track as isAlert, eai:acl.app as App | eval TriggerCount=coalesce(triggered_alert_count, 0) | where disabled=0 AND TriggerCount=0 AND isAlert=1 | table title alert.severity App