Did you ever figure out what the issue was? I am having the same issue with VSCode and the API; it did work at one point earlier this year but has since stopped. I don't know exactly when it broke, because using VSCode to search our Splunk Cloud has not been a regular thing for us.
Hi, I'm also facing a similar issue. We have installed Splunk UF version 9.0.2 on Windows servers, and connectivity towards the DS all looks good. The firewall team and NSG also confirmed that rules and routing are in place. Still, we are not able to see logs in the Splunk console. We are getting errors stating "existing connection forcibly closed by remote host" and "TCPout processor stopped processing the flow and blocked for seconds". Can you help us here with your inputs?
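For what it's worth, a quick check run from the indexer/search tier along these lines shows whether the forwarder is reaching the receiving port at all (a sketch only; replace YOUR_UF_HOSTNAME with the actual server name, and note the field names assume the standard tcpin_connections metrics and may differ slightly by version):

index=_internal source=*metrics.log* group=tcpin_connections hostname="YOUR_UF_HOSTNAME"
| stats latest(_time) as last_connected count by hostname sourceIp destPort
| eval last_connected=strftime(last_connected, "%F %T")

If the forwarder never shows up here, the problem is on the network or outputs side; if it appears but data is still missing, look at the index and sourcetype configuration instead.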
Hello, I've seen many others in this forum trying to achieve something similar to what I'm trying to do, but I didn't find an answer that completely satisfied me. This is the use case: I want to compare the number of requests received by our web proxy with the same period in the previous week, and then filter out any increase lower than X percent. This is how I've tried to implement it using timewrap, and it's pretty close to what I want to achieve. The only problem is that the timewrap command only seems to work fine if I group by _time alone.

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

This gives me a result like this:

_time    event_count_1week_before    event_count_latest_week
XXXX     YYYY                        ZZZZ

If I try to do something similar but also group by the name of the web site being accessed in the tstats command, then the timewrap command doesn't work for me anymore. It outputs just the latest values of one of the web sites.

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

That doesn't work. Do you know why that happens and how I can achieve what I want? Many thanks. Kind regards.
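One workaround that is sometimes suggested (a sketch only, untested here; it assumes the _latest_week / _1week_before suffixes from the single-series output above carry over when there are multiple series) is to pivot the sites into columns with xyseries before timewrap, so each site becomes its own time series, and then apply the percentage filter per site with foreach:

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| xyseries _time Web.site event_count
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| foreach *_latest_week
    [ eval pct_increase_<<MATCHSTR>> = if('<<MATCHSTR>>_latest_week' > '<<MATCHSTR>>_1week_before',
        (('<<MATCHSTR>>_latest_week' - '<<MATCHSTR>>_1week_before') / '<<MATCHSTR>>_latest_week') * 100, null()) ]

The pct_increase_* columns can then be tabled or filtered per site. timewrap itself still only wraps on _time, which is why the extra split-by field has to be folded into the column names first.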
Leverage your monitoring console as the easy way to check volume sizes on each of your indexers. Ideally the total space should be absolutely identical across all indexers, i.e. 100MB x 8 idx/site x 2 sites (completely made-up numbers).

| rest /services/data/index-volumes splunk_server=*

Run that SPL on your search head and it will return results for all servers in your search cluster and indexing cluster. You can add more search terms to get down to the indexer level and then transform the results for "/dataHot", "/dataCold", and "_splunk_summaries". Look at the per-server results for used vs. available/total and create a calculated field for %used. Anything above 85% for /dataCold is typically a strong indication you need to expand your storage capacity. Note that "/dataHot" by design runs full before it will roll a bucket over to the /dataCold volume.
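As an illustration of that calculated %used field (a sketch only; the volume_size_mb and volume_size_used_mb field names are assumptions about what the index-volumes endpoint returns in your version, so check the raw output first and adjust):

| rest /services/data/index-volumes splunk_server=*
| search title="/dataCold"
| eval pct_used = round((volume_size_used_mb / volume_size_mb) * 100, 1)
| table splunk_server title volume_size_mb volume_size_used_mb pct_used
| where pct_used >= 85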
I may have restrictions, but I'm not sure what they are, as I inherited some configurations from our Splunk PS, who started building our architecture, which I subsequently ended up finishing. Where should I look? As for the internal logs, they are vague. They do tell me which buckets, but there are a lot of them. Nothing stands out to me, but that could be my untrained eye on whether they are all hot/warm/cold.
Hi, does anyone have an idea how we can disable or hide the OpenAPI.json that is visible in the UI of a Splunk add-on and exposes the schema of the inputs? I also deleted the OpenAPI.json file from the appserver/static directory, which stopped the file from being downloadable, but the button itself is still present and I want to hide it. It is referenced in some .js files that are difficult to read. Any idea how to hide the button below from the Configuration tab of the UI?
Do you have cold volume storage restrictions? The offline site may have cold buckets that want to replicate back to the always-on site, which that site may have removed due to volume utilization restrictions. Do you have any details in your internal logs that indicate which buckets are not replicating, or anything special about those specific buckets?
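If it helps, a starting point for digging those details out of the internal logs could look like this (a sketch only; the BucketReplicator component and the bid= extraction are assumptions about what your splunkd.log actually contains, so widen or adjust the search as needed):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) component=BucketReplicator
| rex "bid=(?<bucket_id>\S+)"
| stats count latest(_time) as last_seen by host component bucket_id
| convert ctime(last_seen)
| sort - count

Buckets that keep showing up with a high count are usually the ones worth inspecting individually on the peers.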
Hi All, We previously used the Splunk Add-on for Microsoft Office 365 Reporting Web Service (https://splunkbase.splunk.com/app/3720) to collect message trace logs in Splunk. However, since this add-on has been archived, what is the recommended alternative for collecting message trace logs now?
We have an app created with Splunk Add-on Builder. We got an alert about the new Python SDK:

check_python_sdk_version
If your app relies on the Splunk SDK for Python, we require you to use an acceptably-recent version in order to avoid compatibility issues between your app and the Splunk Platform or the Python language runtime used to execute your app's code. Please update your Splunk SDK for Python version to at least 2.0.2. More information is available on this project's GitHub page: https://github.com/splunk/splunk-sdk-python

How do we upgrade the SDK in the Add-on Builder to use the latest version?
Hi @juvenile, create a more complicated value for All, e.g. if you want to exclude events with some_field="xxx":

<input id="select_abc" type="multiselect" token="token_abc" searchWhenChanged="true">
<label>ABC</label>
<default>*</default>
<prefix>(</prefix>
<suffix>)</suffix>
<choice value="* AND NOT some_field=&quot;SomeArbitraryStringValue&quot;">All</choice>
<search base="base_search">
<query>
| stats count as count by some_field
| sort 0 - count
</query>
</search>
<fieldForLabel>some_field</fieldForLabel>
<fieldForValue>some_field</fieldForValue>
</input>

Please adapt this solution to your requirements. Ciao. Giuseppe
<input id="select_abc" type="multiselect" token="token_abc" searchWhenChanged="true">
<label>ABC</label>
<default>*</default>
<prefix>(</prefix>
<suffix>)</suffix>
<valuePrefix>"</valuePrefix>
<valueSuffix>"</valueSuffix>
<choice value="*">All</choice>
<search base="base_search">
<query>
| stats count as count by some_field
| sort 0 - count
</query>
</search>
<fieldForLabel>some_field</fieldForLabel>
<fieldForValue>some_field</fieldForValue>
<delimiter>,</delimiter>
<change>
<condition label="All">
<set token="token_abc">("*") AND some_field != "SomeArbitraryStringValue"</set>
</condition>
</change>
</input>

I was wondering how I can exclude a specific option from the asterisk (*) value of the "All" option? Also, how does this work with parentheses, and how do I exclude it from the default value as well? Thank you
Hey! I am currently standing up an enterprise Splunk system that has a multi-site (2) indexer cluster of 8 peers and 2 cluster managers in an HA configuration (load balanced by F5). I've noticed that if we have outages specific to a site, data rightfully continues to get ingested at the site that is still up. But upon the return of service to the secondary site, we have a thousand or more fixup tasks (normal, I suppose), but at times they hang and eventually I get replication failures in my health check. Usually an unstable pending-down-up status is associated with the peers from the site that went down as they attempt to clean up.

This is still developmental, so I have the luxury of deleting things with no consequence. The only fix I have seen work is deleting all the data from the peers that went down and allowing them to resync and copy from a clean slate. I'm sure there is a better way to remedy this issue. Can anyone explain, or point me in the direction of, the appropriate solution and what the exact cause of this problem is? I've read Anomalous bucket issues - Splunk Documentation, but roll, resync, delete doesn't quite do enough, and there is no mention as to why the failures start to occur. From my understanding, fragmented buckets play a factor when reboots or unexpected outages happen, but how exactly do I regain some stability in my data replication?
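A hedged starting point for quantifying the flapping, rather than a fix: the cluster manager typically logs peer state transitions under the CMPeer component, so a search like this (a sketch only; the rex patterns for peer_name and the target state are assumptions about the exact message format, so verify against your own _internal data) makes it easier to see which peers bounce between Up, Pending and Down after a site comes back:

index=_internal sourcetype=splunkd component=CMPeer* transitioning
| rex "peer_name=(?<peer>\S+)"
| rex "to=(?<new_state>\w+)"
| timechart span=10m count by peer

Correlating the spikes here with the fixup backlog on the cluster manager (Settings > Indexer Clustering, or the monitoring console's indexer clustering views) usually narrows down whether the hangs are network-side, disk-side, or genuinely anomalous buckets.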