All Posts


Hi @L_Petch , you can use the Monitoring Console app to get this information. Ciao. Giuseppe
Hi @sumarri , no, as @PickleRick said, it isn't possible. My solution lets you add a note to a record in a lookup, but that isn't what you're asking for. Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi! Maybe this question is so simple to answer that I could not find any example, so please be kind to me.

We use append in our correlation search to check whether a server is in blackout. Unfortunately, we have seen the append return only partial results, which lets an incoming event create an Episode and an Incident. It happens very seldom, but imagine you set a server into blackout for a week and run the correlation search every minute: just one issue with the indexer layer, e.g. a timeout, creates a risk of the event passing through.

Our idea now is to have a saved search feed a lookup instead. That search could then run at a lower frequency, maybe every 5 minutes. But what if that search also sees partial results and updates the lookup with partial data?

So, long story short: how can one detect, within a running search, that it is dealing with partial results further down the pipe? Could something like this work, for example for a peer timeout?

index=...
| eval sid="$name$"
| search NOT [ search index=_internal earliest=-5m latest=now sourcetype=splunk_search_messages message_key="DISPATCHCOMM:PEER_ERROR_TIMEOUT" log_level=ERROR | fields sid ]
| outputlookup ...

Any help is appreciated.
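To make the "saved search feeds a lookup" idea more concrete, here is a minimal sketch with hypothetical index, field, and lookup names (blackout_config, blackout_end, blackout_hosts.csv are all just placeholders). The scheduled search writes the currently blacked-out hosts to a lookup:

index=blackout_config earliest=-7d
| stats latest(blackout_end) AS blackout_end BY host
| where blackout_end > now()
| table host blackout_end
| outputlookup blackout_hosts.csv

and the correlation search then excludes those hosts via a subsearch:

index=alerts ...
| search NOT [ | inputlookup blackout_hosts.csv | fields host ]

This only shows the lookup plumbing; it does not by itself answer the partial-results detection question.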
Sorry, I wasn't clear. I only need to add the URI_Stem (which is the URL of the API from IIS) to the timechart shown in the first query. I want to track time_taken (performance of the calls to different APIs our app is making) over time so I can see outlier periods. Hopefully this is clearer.
OK. One search creates a timechart, another calculates overall averages by some parameter. How would you want to add "fixed" statistics to a timechart?
1. @Mitesh_Gajjar 's response looks like it was generated with some lousy AI tool. 2. Unfortunately, the app is a third-party app, so indeed your options are rather limited - either look into the app's contents and try to make sense of what's going on there, or write to the email address provided in the app's description to try to get more info.
I have a timechart that shows traffic volume over time and the top 15% of API performance times. I would like to add URI_Stem to the timechart so that I can track performance over time for each of my API calls. Not sure how that can be done.

| timechart span=1h count(_raw) as "Traffic Volume", perc85(time_taken) as "85% Longest Time Taken"

Example of a table by URI_Stem:

| stats count avg(time_taken) as Average BY cs_uri_stem
| eval Average = round(Average, 2)
| table cs_uri_stem count Average
| sort -Average
| rename Average as "Average Response Time in ms"
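A minimal sketch of one direction that might work, assuming the same cs_uri_stem and time_taken fields as in the table query (timechart only allows a single aggregation when a BY clause is used, so the traffic-volume count would need its own panel or a separate search):

| timechart span=1h limit=15 useother=false perc85(time_taken) BY cs_uri_stem

limit=15 and useother=false just cap the number of cs_uri_stem series shown and drop the OTHER bucket; both values are only examples.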
Saved searches are scheduled across the whole search head cluster (with some additional conditions, like a cluster member being in detention). That's what a search head cluster is for. Also, limiting searches to just one SH would inevitably lead to delayed/skipped searches; it won't solve the performance issue.

Even if you have multiple indexers holding the same bucket, the indexers holding primary copies respond with results from those primaries - it's by design and is what lets you distribute the search. Even if you could get results from just one indexer, there would be no guarantee that you'd get all events from a given time range, because with sf=rf=3 and 4 indexers you'd still probably hit (actually would _not_ hit) some buckets which are not present on that chosen indexer. So your idea is not a very good one.

You can use site affinity to force search heads to use only one site. But again - especially if you already have performance problems - that's counterproductive. And from experience - it's often not the _number_ of searches but _how_ they're written.
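If you do decide to experiment with site (search) affinity anyway, a rough sketch of the relevant server.conf settings on a search head follows; the manager URI and key are placeholders, and the attribute is called master_uri on older versions, so verify against the docs for your release:

[general]
site = site1

[clustering]
mode = searchhead
multisite = true
manager_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <key>

Setting site = site0 instead disables search affinity, so the search head searches across all sites again.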
Thank you Mitesh_Gajjar. Unfortunately, https://splunkbase.splunk.com/app/7404 gives this very useful information: "No information provided. Reach out to the developer to learn more." The link to the Cisco website is for a different app altogether, so not much further along. https://splunkbase.splunk.com/app/5558 is also a different app. Thanks for your efforts however!
Hello,

A client went mad on how many saved searches they require and won't get rid of them. Because of this, read/write load is hammering the indexers to the point where they can't cope: they remove themselves from the cluster and then re-add, which causes even more resource strain. Adding more indexers isn't an option.

The current setup is a 3-VM multisite search head cluster and a 4-VM multisite indexer cluster.

As they only require rf=3 and sf=3, I am wondering if there is a way to have only 1 SH and 1 indexer run all the saved searches, so that the load doesn't affect the other 3 indexers?
Yes. Dashboard Studio doesn't allow as many customizations as classic dashboards - no custom visualizations, no custom code. But even in classic you'd have to implement it all by yourself (but hey, that's how half of Enterprise Security is written ;-))
So there is no way I could do it in Dashboard Studio... but I can do it in Classic because I can edit the JavaScript? Thank you so much for the idea!
There is no out-of-the-box component that gives you this functionality. With classic dashboards you can add your own JavaScript code that takes care of storing the "notes" in the KV store so that they can be shared and updated, but it is something you'd need to write yourself. And there are several possible issues with that (including, of course, permissions management, concurrent access and so on). So it's not that straightforward.
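For the search-language side of it, a rough sketch assuming a hypothetical KV store collection exposed as a lookup definition called dashboard_notes (the collection and lookup would first have to be defined in collections.conf and transforms.conf, and the JavaScript would normally talk to the KV store REST API directly - this only shows reading and writing it with SPL):

| makeresults
| eval panel="host_overview", note="Investigating the spike from last night", author="jdoe"
| fields - _time
| outputlookup dashboard_notes append=true

| inputlookup dashboard_notes where panel="host_overview"

All field names and values above are made up; the point is only that a KV-store-backed lookup can be appended to and filtered like any other lookup.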
So, they want to add notes and have them live for everyone to view. So on the dashboard, I will have tables and a notes section, and they want to add a note to the table/section in the dashboard view. Is that possible??? I can maybe make a lookup or something, but I think I did not communicate what I wanted.
This worked beautifully, thank you!
Understood! I appreciate your answers.  I will keep this post unresolved for now and test it.
I'll give it a shot.  Thank you for your help!
Hi @alferone , I don't think so, if you have a fixed frequency of data updates. I prefer a summary index, for the reasons I listed in my previous answer. Ciao. Giuseppe
No. As far as I know it's neither officially supported nor (well) documented. And at least until not so long ago you couldn't install both components from an RPM or DEB package because they were installed in the same place (/opt/splunk). More recent versions install into separate directories (/opt/splunk vs. /opt/splunkforwarder), so it might be possible to install both from packages (I haven't tried this myself though, so I'd strongly advise testing in a lab first).
Hi @jpillai , OK, my hint is to evaluate the effort of changing the searches that use these indexes. Anyway, let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors