As already said, don’t use any beta versions unless you are testing that beta and are eager to give feedback to Splunk! Currently there are several beta 10 versions out. Each has its own testing period and different features to test. You can see those versions on voc.splunk.com, along with instructions on how to install the current license, or whether it is already included in the installation package. Also, if there are several versions of an independent beta, you normally must uninstall the old one and start from scratch with the newer one.
The Splunk user groups Slack has a dedicated channel for UCC. Maybe they could help you with this case? You can find it here: https://splunkcommunity.slack.com/archives/C03SG3ZL4S1
There seem to be some nasty restrictions on this add-on depending on which inputs you are using. Sometimes this means that filtering some events out of the streams is not as simple as the docs say. Those docs are also not clear enough for this use case (at least for a user like me, who isn’t a native English speaker). So could you tell us more about your case, so we can better understand your issue? At a minimum we need to know:
- your environment: single node or distributed, and if distributed, which kind
- your versions
- whether your Splunk is in Azure, AWS, or even some other cloud
- one or more tenants
- which inputs you have configured and how
Probably something else will be needed later.
I agree with @PickleRick, don’t move and upgrade at the same time! Also, you shouldn’t upgrade directly from 8.2.x to 9.2.x. The only supported way is to migrate one version at a time, e.g. 8.2 -> 9.0 -> 9.2, and you must start your node(s) after upgrading to each separate version. Splunk doesn’t support rollback of a version upgrade, so uninstalling the old version is not needed/suggested. Also, on Amazon Linux 2023 you should at least check the systemd startup settings, as those are somewhat different than on RHEL. The cgroups default is v2, which needs some parameter changes, etc. And if your environment needs IMDS, its default version has changed to v2. That probably doesn’t affect you unless you are using some old AWS TA?
I may have misspoken; I want to reduce the storage usage on my indexer. I have a SharePoint server with a Splunk UF on it, and it's ingesting unnecessary data that is eating a lot of storage on my indexer. The screenshots come from my indexer. I'm doing a bit of research now, and it looks as if I can use ingest actions to possibly filter out some of that unnecessary data coming from that SharePoint UF?
Hi @tbarn005 Can I just check: you want to reduce the storage usage on your Universal Forwarder, but the UF isn't storing the data it ingests, it only sends it on. UFs are typically not used for parsing the data. Did you apply the screenshotted configuration to your UF or to a different (HF/IDX) instance?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
There appear to be a few problems here.
1) The SharePoint app should have a single folder called 'default'. The default folder should contain the files shown in the first screenshot.
2) Universal Forwarders do not consume disk space, so filtering will not save any there. Caveat: if you use persistent queuing, the UF will use disk space, but the space is returned once the queue is drained.
3) Universal Forwarders do not process transforms, so they cannot filter events this way. Put the props and transforms on the first full instance that touches the data (indexer or heavy forwarder) - see the sketch after this list.
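To make point 3 concrete, here is a minimal sketch of the usual "keep only matching events" props/transforms pairing, deployed to the indexer or heavy forwarder (not the UF). The sourcetype name sharepoint:uls and the severity keywords are assumptions for illustration - substitute whatever your inputs and data actually use.

# props.conf (on the indexer or heavy forwarder, not the UF)
[sharepoint:uls]
TRANSFORMS-filter_sharepoint = sharepoint_setnull, sharepoint_setparsing

# transforms.conf
[sharepoint_setnull]
# send every event to the null queue first
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[sharepoint_setparsing]
# then route the events you want to keep back to the index queue
REGEX = (?i)(High|Error|Warning)
DEST_KEY = queue
FORMAT = indexQueue

The order on the TRANSFORMS- line matters: the first transform discards everything, and the second overrides it for events matching the keywords, so only those events are indexed.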
Hi Splunk Community, I’m trying to reduce disk space usage on my Splunk Universal Forwarder by filtering out unnecessary SharePoint logs and only forwarding those with a severity of High, Error, or Warning in the message. I created a deployment app named SharePoint; here is what's in that folder: I attempted to create props.conf and transforms.conf files to filter out the unnecessary data. I only need to see the log files in the directory that have certain keywords, not all of those logs. Here is what I wrote in the files. I didn't write the regex myself; I found something similar online and tried to make it work for my environment. After deploying this, I now do not see any of my SharePoint logs indexed at all for this specific server, even the ones with High. As you can see from the logs, I even pointed them at a test index that I made, so I should be seeing them. I'm not sure what's going on.
1. This is a very old thread. The people involved probably don't use this forum anymore. Instead of hijacking ancient threads, it's better to create a new one describing the problem and the debugging steps already taken, possibly linking to the old thread for reference.
2. The issue is so vaguely specified that it's difficult to advise anything. We don't know what you're sending and how, and we don't know how your receiving side is configured. We don't know the extent of your problem - are all your events duplicated or just some of them? We don't know if you did any debugging at all.
We recently switched over from an ingest-based license to a resource-based (vCPU) license model in our deployment. The license was successfully installed on the (dedicated) license manager; however, after the old license expired I noticed a bunch of warnings that the allowed volume had been exceeded. Our manually specified pools have not exceeded their allocation. However, when checking the "Usage report", the available total pool license is now the "free" 500 MB/day. This is not very surprising, as we no longer have a "max per day", but shouldn't the available amount now be "infinite" rather than drop down to the default? I deleted all expired licenses and restarted the license manager, and the warnings seem to have disappeared, at least for now. But the "total available license" still pushes a warning at 500 MB and up, with the gauge screaming red in the "Usage report". My first question: how can I change the "total available license" from the ingest-based GB per day to "infinite", or any other number higher than 500 MB per day? Did I miss some step in the license manager configuration when switching from the ingest-based license to the resource-based license? My second, related question: how can I now monitor the available license? There is no resource-based license usage report available on the license manager. All the best
Hi @sankardevarajan You can start with the following free courses provided by Splunk in STEP:
1. Introduction to Splunk IT Service Intelligence
2. Installing and Administering ITSI (eLearning)
There is also a free YouTube course by Splunk: https://www.youtube.com/watch?v=XAmfolPbO1E
The following two courses are paid:
- Using Splunk IT Service Intelligence
- Implementing Splunk IT Service Intelligence
For certification, there is no prerequisite certification; you can attempt the certification directly. Kindly refer to the exam blueprint:
https://www.splunk.com/en_us/pdfs/training/splunk-test-blueprint-itsi-admin.pdf
https://www.splunk.com/en_us/pdfs/training/splunk-itsi-certified-admin-track.pdf
Sorry, I forgot to post the solution and I can't remember what it was. It was most likely one of these:
https://splunk.my.site.com/customer/s/article/Links-leading-to-alert-configuration-throw-404-error
https://community.splunk.com/t5/Security/Why-am-I-getting-a-quot-404-Not-Found-Page-not-found-quot-error/m-p/209742
@JohnGregg From what I've read, Java agents should be dividing by processor count: CPU millis / time / processor_count. But if you're not seeing that in your K8s results, maybe the cluster agent works differently than regular Java agents? For the cluster agent metrics, I know it treats 1 CPU = 100%, so multi-core usage gives you >100%. It might be worth checking whether you're looking at app agent metrics vs cluster agent metrics - they could be calculated differently. If this helps, please upvote.
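To illustrate the difference between the two conventions (the numbers here are made up for the example): 6,000 CPU-milliseconds measured over a 3,000 ms interval on a 4-processor node gives 6000 / 3000 / 4 = 50% under the per-processor convention, while the cluster agent's 1 CPU = 100% convention would report 6000 / 3000 = 200% for the same usage.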
Hi @Pranita_P For each table, click on the Visualisation dropdown on the right and select the "When data is unavailable, hide element" checkbox. This means that the table will not show when there are no results.
Now you need to make it return no results when value5 is/isn't selected, depending on your dropdown. You can do this with a where statement on each table (it might not be the most efficient to use a where statement here, but I'm not sure how else you could achieve it). You would add something like:
| where "$yourDropdownToken$"="value 5"
Obviously you can change = to != as required - there is a fuller sketch at the end of this reply.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
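For reference, a minimal sketch of how one of the table searches might look with the gate appended - the index, sourcetype, and stats split here are made up, and the token name depends on your dashboard:

index=my_index sourcetype=my_sourcetype
| stats count BY host
| where "$yourDropdownToken$"="value 5"

When the dropdown token is anything other than value 5, the where clause removes every row, the search returns no results, and the "hide element" setting then hides that table.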