thanks @richgalloway & @isoutamo - I did expand those searches and used the same approach to clone the required panels under my App - the only lookup I needed to clone was reserved_cidrs.csv. Thank you.
Hello, I am facing an issue when a saved report is used in a simple XML dashboard via | loadjob savedsearch="madhav.d@xyz.com:App_Support:All Calls". My time zone preference is (GMT+01:00) Greenwich Mean Time : London, and the report I am referring to (All Calls) was also created by me and runs every 15 mins. Now, when I use this report in a simple XML dashboard, it only shows data from an hour ago. Example: when the report runs at 08:00 AM and I check the dashboard at 08:05 AM, it shows the results of the 07:00 AM run and not the latest. I suspect this is due to the recent daylight saving time change in the UK. Can someone please help with how I should handle this? Thank you. Regards, Madhav
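For reference, a rough sketch (untested; the label value is just the report name from above) that should show which scheduled job artifacts exist and what time window the latest run actually covered:

| rest /services/search/jobs splunk_server=local
| search label="All Calls" isScheduled=1
| sort - published
| table label published updated dispatchState earliestTime latestTime

If earliestTime/latestTime of the newest job are an hour off from what you expect, that points at time zone / DST handling rather than at loadjob itself.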
Well... this is a valid method, but:
1) The map command can cause a big performance hit in your environment - it's going to spawn search after search...
2) In this particular case you indiscriminately run all searches matching a given pattern. You have no control over who defined those searches and what they do. That might not be the best idea. This could actually be a way for another user to abuse your privileges, or at least unwittingly damage your environment. It's as if you were trying to find all executable files on your server and running them with sudo. Let's imagine someone creates a search containing "index=* | delete" and your user has the can_delete capability. Or just creates "index=* | collect sourcetype=some_non_stash_sourcetype". That's gonna hurt. Badly.
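If you do go down the map route, at minimum constrain the rest output to searches you actually own or trust before mapping over them. A rough sketch - the app and owner values are placeholders:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| where 'eai:acl.app'="your_app" AND 'eai:acl.owner'="your_user"
| search title=Reports*
| fields title

That at least stops you from blindly executing searches someone else dropped into a shared app.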
Okay, thx for your reply! Qst: How would you construct the search when you make a rest call, let all the searches run, and then save the results in an index? Which approach is better than map?
Hi

Your map search is running saved searches with the savedsearch command, but that command does not accept etime and ltime parameters. Instead, use the earliest and latest parameters:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title
| search title=Reports*
| eval dayEarliest="-1d@d", dayLatest="@d"
| map maxsearches=100 search="| savedsearch \"$title$\" earliest=\"$dayEarliest$\" latest=\"$dayLatest$\" | addinfo | collect index=INDEXNAME testmode=false"

Please note:
map can be slow or resource-intensive with many saved searches.
Ensure the saved searches are compatible with overridden time ranges (e.g. they might specify their own earliest/latest).
The collect command requires capabilities to allow writing to the target index.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
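PS. A quick sanity check after the map run - INDEXNAME is the same placeholder as above, and _index_earliest filters on index time so it catches the collected events regardless of their original timestamps:

index=INDEXNAME _index_earliest=-1h
| stats count by source sourcetype

By default collect writes events with sourcetype=stash, which does not count against your license.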
Yes, the latest images seem to be broken. Apparently someone missed the fact that the UF doesn't listen on the web port anymore, so the ansible task checking for an open port fails. AFAIK it has been flagged as a bug and will hopefully be resolved soon.
Hi, I've been working as a frontend developer for six months now... I'm using the splunk/react-ui toolkit. https://splunkui.splunk.com/Toolkits/SUIT/Overview https://www.npmjs.com/package/@splunk/create I created dashboard pages using the @splunk/create command, but the dashboards don't seem to be able to share state with other dashboards. Is there any way to make this possible? (My project directory is /packages/<splunk app>; these are the dashboard pages.) Even with react-router, state sharing between dashboard pages was not possible. Splunk React UI can't share data props between dashboard pages, and there's no documentation on how to do it. Somebody help me.
Also remember that while Splunk can listen for syslog directly, this is not a recommended setup. It's relatively OK for a small lab deployment, but in production you'd rather go for a separate syslog daemon, either writing to local files for pickup by a UF or sending to a HEC input.
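As a rough illustration of the "syslog daemon writes files, UF picks them up" variant - the paths, index and host_segment value are made up for a layout where the daemon writes one directory per sending host:

# inputs.conf on the forwarder (illustrative only)
[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
index = network
host_segment = 4
disabled = false

With /var/log/remote-syslog/<host>/<file>.log, host_segment = 4 makes Splunk take the host name from the fourth path segment.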
It's not only about the number of uploads. It's also about the bandwidth and data usage. You can compare SmartStore to swap space on your server (with a small difference that once a bucket is "swapped out", since it doesn't change, it doesn't get re-uploaded again).

If your applications request more memory than your server has, some pages get swapped out by the kernel to free some physical memory. But the CPU cannot interact with pages on disk, so if you need to access data from those pages they have to be read back into physical memory. Since disk is usually way slower than RAM (OK, nowadays with NVMe storage those differences aren't as huge as they were even a few years back, but still...) the kernel gets more and more occupied with juggling pages in and out, your system's load soars sky high and the whole system becomes unresponsive.

The same can happen with SmartStore-enabled indexes. If the buckets are not yet uploaded, they occupy your "RAM". When you need to search a bucket which is not present in the local cache (your "RAM"), the cache manager has to fetch that bucket from SmartStore, which is relatively slow compared to reading it directly from disk. If another search requires another bucket, the cache manager queues fetching another bucket. And so on.

If you need to access sufficiently many different buckets from SmartStore-enabled indexes, you may end up in a situation where a bucket gets fetched into the local cache only to be read once, then immediately evicted and re-read from remote storage the next time it's needed. The cache manager might use more sophisticated caching policies than simple FIFO (to be honest, I didn't dig that deep into this topic so I'm not sure if it's a simple LRU or something more sophisticated) but you can't beat physics and math. If you have only enough local storage for X buckets you can't use it to store 2X or 3X buckets. They simply won't fit.
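If you want a rough feel for how much of that bucket shuffling is going on, the cache manager logs its activity to _internal - treat the component and field names here as assumptions to verify against your own data, as they can differ between versions:

index=_internal sourcetype=splunkd component=CacheManager (action=download OR action=evict)
| timechart span=1h count by action

A steadily climbing download count during search-heavy hours is the SmartStore equivalent of watching your server swap.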
As it's already been said - with SmartStore it works like this (maybe oversimplifying a bit):
1) Splunk ingests data into a hot bucket (and replicates this hot bucket live to replication peers).
2) The bucket is finalized and rolled to the cache storage.
3) The cache manager marks the bucket as queued for upload.
4) When the bucket gets uploaded to the remote storage it _can_ be evicted (removed from local cache storage).
5) But Splunk needs local cache storage for buckets downloaded from remote storage, from which it searches data.

So you have multiple mechanisms here:
1) Splunk does the indexing locally (the hot buckets are local).
2) Splunk must upload the bucket to the SmartStore.
3) Splunk _only_ searches against locally cached data. There is no way to search from a bucket stored remotely. If Splunk needs to search from a bucket, the cache manager must first fetch that bucket from remote storage to the local cache.
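In config terms, a minimal indexes.conf sketch of such a setup - the bucket URL, endpoint and index name are placeholders, and your volume definition will likely differ:

# indexes.conf (illustrative values only)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.eu-west-1.amazonaws.com

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name

The remotePath setting is what makes the index SmartStore-enabled; the local home path then effectively acts as the cache.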
"but still, remember that you have to cache the data somewhere." ---> What does "cache the data" mean? Can you please brief me on this...
"With 20TB/day searches over two-three days might prove to be difficult to handle locally." ---> Can't SmartStore help us here? Locally only the hot bucket will be there, right? The remaining days will be automatically rolled to S3 buckets and we can search from there, no?
I'm trespassing into a territory a bit unknown to me (others have more experience with SmartStore so they might correct me if I go wrong somewhere) but even if from Splunk's point of view the storage is "unlimited":
a) You might have limits on your S3 service.
b) You will pay more if you use more data (that might not be a deal breaker for you but it's worth being aware of it).
c) You still need bandwidth to push it out of your local environment. If you don't have enough bandwidth you might clog your indexers because they will not be able to evict buckets from cache.
It's not only about data size (but still, remember that you have to cache the data somewhere - with 20TB/day, searches over two-three days might prove difficult to handle locally). It's also about processing power. You have a total of 6 indexers which might or might not have an equally distributed ingestion load. Depending on your environment and load characteristics this is - in a normal case - somewhere between "slightly undersized" and "barely breathing". But again - your environment might be unusual, your equipment might be hugely oversized vs. the standardized specs and tuned to utilize the hardware power (although usually with indexers you'd rather go for horizontal scaling instead of pumping up the specs of individual indexers). So there are many factors at play here. And I suppose you're not the one who'll be making the business decisions. All the more reason to get something to cover your bottom.
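To put some very rough numbers on it: 20 TB/day spread over 6 indexers is about 3.3 TB/day per indexer, while the commonly quoted reference-hardware ballpark is on the order of a few hundred GB/day per indexer (noticeably less if you run premium apps like ES on top). Those are rules of thumb, not hard limits, but the gap is big enough to explain why the sizing looks questionable.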
@PickleRick "You also need to take into account the fact that when you ingest that amount of data you have to upload it to your S3 tenant. Depending on your whole infrastructure that might prove to be difficult when you hit those peaks." ---> I am thinking S3 bucket storage is something unlimited, because when I check the MC it shows the Home Path and Cold Path index storage as unlimited... Is that a wrong assumption?
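A way to see the raw settings behind what the MC shows (the field names here are my assumption - maxGlobalDataSizeMB should be the SmartStore-wide cap, with 0 meaning unlimited):

| rest /services/data/indexes splunk_server=local
| table title maxTotalDataSizeMB maxGlobalDataSizeMB frozenTimePeriodInSecs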
@PickleRick Yes, I do understand your point. I won't make decisions here, but I want to gain knowledge from the experts here because, as I said, I am still learning things.... So my understanding is that we store old data in S3 buckets once they roll from hot to warm... So I didn't understand why the indexers (6) are considered undersized, because in the end the indexers aren't storing the data - the S3 bucket stores 90% of the data (even if 20TB/day comes occasionally)? Are we looking at it in terms of CPU, i.e. whether the indexers can handle an unusual 20TB day at a time? What would be the consequences of that? And I believe the default index size of 500 GB will not fill at all, because maxDataSize is set to 750 MB, which means new data crossing 750 MB will roll over to warm buckets (which are in the S3 bucket)? Sorry if I am speaking wrong but that's my understanding.
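For what it's worth, a rough way to see where the data actually sits per bucket state - the index name is a placeholder, and on SmartStore the warm copies you see locally are just the cached ones:

| dbinspect index=your_index
| stats count AS buckets sum(sizeOnDiskMB) AS size_mb by state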
@ITWhisperer This is actually what Splunk internally translates earliest and latest parameters to. @Punnu This is a very interesting issue, because when I use an identical search on a 9.1.2 instance I just pulled and ran in a docker container on my laptop, it runs without any issues. Try running your subsearch with an added | format command and see what it returns (it should return the set of conditions for the outer search, rendered as a string):

| makeresults
| eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
| eval latest=relative_time(earliest,"+1d")
| table earliest latest
| format
As much as we're trying to be helpful here, this is something you should work on with your local friendly Splunk Partner. As I said before, your environment already seems undersized in terms of the number of indexers, but you might have an unusual use case in which it would be enough. It doesn't seem to be enough for the 20TB/day peaks. You also need to take into account the fact that when you ingest that amount of data you have to upload it to your S3 tenant. Depending on your whole infrastructure that might prove to be difficult when you hit those peaks. But it's something to discuss in detail with someone at hand with whom you can share your requirements and all limitations in detail. We might have Splunk knowledge and expertise, but from your company's point of view we're just a bunch of random people from the internet. And random people's "advice" is not something I'd base my business decisions on. Yes, I know that consulting services tend to cost money, but then again, failing to properly architect your environment might prove to be even more costly.
This is the SPL I'm using:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title
| search title=Reports*
| eval dayEarliest="-1d@d", dayLatest="@d"
| map maxsearches=100000 search="savedsearch \"$title$\" etime=\"$dayEarliest$\" ltime=\"$dayLatest$\" | addinfo | collect index=INDEXNAME testmode=false | search"

The error I get:

[map]: No results to summary index.

Why?