All Posts

Do you have Cold Volume storage restrictions? The offline site may have cold buckets that want to replicate back to the always-on site, buckets which that site may already have removed due to volume utilization restrictions. Do you have any details in your internal logs that indicate which buckets are not replicating, or anything special about those particular buckets?
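If it helps, a rough first pass over splunkd's own error logging (a sketch; adjust the keywords to whatever your environment actually logs):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) (bucket OR replication)
| stats count by host, component
| sort - count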
Hi All, We previously used the Splunk Add-on for Microsoft Office 365 Reporting Web Service (https://splunkbase.splunk.com/app/3720) to collect message trace logs in Splunk. However, since this add-on has been archived, what is the recommended alternative for collecting message trace logs now?
We have an app created with Splunk Add-on Builder. We got an alert about the new Python SDK:

check_python_sdk_version: If your app relies on the Splunk SDK for Python, we require you to use an acceptably-recent version in order to avoid compatibility issues between your app and the Splunk Platform or the Python language runtime used to execute your app's code. Please update your Splunk SDK for Python version to at least 2.0.2. More information is available on this project's GitHub page: https://github.com/splunk/splunk-sdk-python

How do we upgrade the SDK in Add-on Builder to use the latest version?
Hi @gcusello , thank you for your answer. What about the default value?
Hi @juvenile , create a more complicated value for All, e.g. if you want to exclude events with some_field="SomeArbitraryStringValue":

<input id="select_abc" type="multiselect" token="token_abc" searchWhenChanged="true">
  <label>ABC</label>
  <default>*</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <choice value="* AND NOT some_field=&quot;SomeArbitraryStringValue&quot;">All</choice>
  <search base="base_search">
    <query>| stats count as count by some_field | sort 0 - count</query>
  </search>
  <fieldForLabel>some_field</fieldForLabel>
  <fieldForValue>some_field</fieldForValue>
</input>

Please adapt this solution to your requirements. Ciao. Giuseppe
I think I got it.

| timechart span=1h perc85(time_taken) by cs_uri_stem
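And in case both metrics are needed per URI on one chart: timechart also accepts multiple aggregations together with a split-by field (a sketch; limit=0 lifts the default ten-series cap):

| timechart span=1h limit=0 count, perc85(time_taken) by cs_uri_stem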
I have 1 TB of data that I want to analyze. Will TA_eventgenb be accepted?
<input id="select_abc" type="multiselect" token="token_abc" searchWhenChanged="true"> <label>ABC&#8205;</label> <default>*</default> <prefix>(</prefix> <suffix>)</suffix> <value... See more...
<input id="select_abc" type="multiselect" token="token_abc" searchWhenChanged="true"> <label>ABC&#8205;</label> <default>*</default> <prefix>(</prefix> <suffix>)</suffix> <valuePrefix>"</valuePrefix> <valueSuffix>"</valueSuffix> <choice value="*">All</choice> <search base="base_search"> <query> | stats count as count by some_field | sort 0 - count </query> </search> <fieldForLabel>some_field</fieldForLabel> <fieldForValue>some_field</fieldForValue> <delimiter>,</delimiter> <change> <condition label="All"> <set token="token_abc">("*") AND some_field != "SomeArbitraryStringValue"</set> </condition> </change> </input> I was wondering how I can exclude a specific option from the asterisk (*) value of the "All" option? Also, how does it work with parantheses and also exlcuding it from the default value? Thank you
Hey! I am currently standing up an enterprise Splunk system with a multi-site (2) indexer cluster of 8 peers and 2 cluster managers in HA configuration (load-balanced by F5). I've noticed that if we have outages specific to a site, data rightfully continues to get ingested at the site that is still up. But upon the return of service to the secondary site, we have a thousand or more fixup tasks (normal, I suppose), but at times they hang and eventually I get replication failures in my health check. Usually an unstable pending-down-up status is associated with the peers from the site that went down as they attempt to clean up. This is still developmental, so I have the luxury of deleting things with no consequence. The only fix I have seen work is deleting all the data from the peers that went down and allowing them to resync and copy from a clean slate. I'm sure there is a better way to remedy this issue.

Can anyone explain, or point me in the direction of, the appropriate solution and the exact cause of this problem? I've read Anomalous bucket issues - Splunk Documentation, but roll, resync, delete doesn't quite do enough, and there is no mention of why the failures start to occur. From my understanding, fragmented buckets play a factor when reboots or unexpected outages happen, but how exactly do I regain some stability in my data replication?
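For reference, one way to see what the fixup backlog actually contains is the cluster manager's fixup REST endpoint (a sketch, assuming the documented cluster/master/fixup endpoint; run it on the manager, and vary level to match whatever the health check complains about):

| rest /services/cluster/master/fixup level=replication_factor splunk_server=local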
Hi @L_Petch , you can use the Monitoring Console App to get this information. Ciao. Giuseppe
Hi @sumarri , no, as @PickleRick said, it isn't possible. My solution lets you add a note to a record in a lookup, but that isn't what you require. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi! Maybe this question is so simple to answer that I did not find any example, so please be kind to me. We use append in our correlation search to see if we have a server in blackout. Unfortunately we have seen the append returning just partial results, which lets an incoming event create an Episode and an Incident. It happens very seldom, but imagine you set a server into blackout for a week and you run the correlation search every minute: just one issue with the indexer layer, e.g. a timeout, creates the risk of an event passing through. Our idea now is to have a saved search feed a lookup instead. That search could even run at a lower frequency, maybe every 5 minutes. But what if that search sees partial results and updates the lookup with partial data? So, long story short: how can one detect in a running search that it is dealing with partial results down the pipe? Could this work, as an example for peer timeout?

index=...
| eval sid="$name$"
| search NOT [ search index=_internal earliest=-5m latest=now() sourcetype=splunk_search_messages message_key="DISPATCHCOMM:PEER_ERROR_TIMEOUT" log_level=ERROR | fields sid ]
| outputlookup ...

Any help is appreciated.
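For reference, a standalone version of that check, counting recent peer errors by search id with the same fields (a sketch; it assumes splunk_search_messages is populated on your version):

index=_internal earliest=-15m sourcetype=splunk_search_messages log_level=ERROR message_key="DISPATCHCOMM:*"
| stats count latest(_time) as last_seen by sid, message_key
| convert ctime(last_seen)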
Sorry, I wasn't clear. I only need to add the URI_Stem (which is the URL of the API from IIS) to the timechart shown in the first query. I want to track time_taken (performance of the calls to different APIs our app is making) over time so I can see outlier periods. Hopefully this is clearer.
OK. One search creates a timechart, another calculates overall averages by some parameter. How would you want to add "fixed" statistics to a timechart?
1. @Mitesh_Gajjar 's response looks like it was generated with some lousy AI tool.
2. Unfortunately, the app is a third-party app, so your options are indeed rather limited: either look into the app's contents and try to make sense of what's going on there, or write to the email address provided in the app's description to try to get more info.
I have a timechart that shows traffic volume over time and the top 15% of API performance times. I would like to add URI_Stem to the timechart so that I can track performance over time for each of my API calls. Not sure how that can be done.

| timechart span=1h count(_raw) as "Traffic Volume", perc85(time_taken) as "85% Longest Time Taken"

Example of a table by URI_Stem:

| stats count avg(time_taken) as Average BY cs_uri_stem
| eval Average = round(Average, 2)
| table cs_uri_stem count Average
| sort -Average
| rename Average as "Average Response Time in ms"
Saved searches are scheduled across the whole search head cluster (with some additional conditions, like a cluster member being in detention). That's what a search head cluster is for. Also, limiting searches to just one SH would inevitably lead to delayed/skipped searches; it won't solve the performance issue. Even if you have multiple indexers holding the same bucket, the indexers holding primary copies respond with results from those primaries - it's by design and lets you distribute the search. Even if you had a way to get results from just one indexer, there would be no guarantee that you'd get all events from a given time range, because with sf=rf=3 and 4 indexers you'd still probably hit (actually would _not_ hit) some buckets which are not present on that chosen indexer. So your idea is not a very good one. You can use site affinity to force search heads to use only one site. But again, especially if you already have performance problems, that's counterproductive. And from experience - it's often not the _number_ of searches but _how_ they're written.
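On that note, a quick way to see which scheduled searches actually cost the most (a sketch assuming the default scheduler logging in _internal):

index=_internal sourcetype=scheduler (status=success OR status=skipped)
| stats count sum(run_time) as total_runtime by savedsearch_name, app
| sort - total_runtime

A high ratio of skipped to successful runs is also worth watching; it usually means the schedule is denser than the hardware can serve.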
Thank you Mitesh_Gajjar. Unfortunately, https://splunkbase.splunk.com/app/7404 gives this very useful information: "No information provided. Reach out to the developer to learn more." The link to the Cisco website is for a different app altogether, so I'm not much further along. https://splunkbase.splunk.com/app/5558 is also a different app. Thanks for your efforts, however!
Hello,

A client went mad on how many saved searches they require and won't get rid of them. Because of this, it is hammering read/write on the indexers to the point that the indexers can't cope: they remove themselves from the cluster and then re-add, which causes more resource strain. Adding more indexers isn't an option. The current setup is a 3-VM multisite search head cluster and a 4-VM multisite indexer cluster.

As they only require 3 RF and 3 SF, I am wondering if there is a way to have all saved searches run on only 1 SH and 1 indexer, so that the load doesn't affect the other 3 indexers?
Yes. Dashboard Studio doesn't allow as much customization as classic dashboards - no custom visualizations, no custom code. But then, in classic you'd have to implement it all yourself (but hey, that's how half of Enterprise Security is written ;-))