All Posts


Good news, thanks for sharing  
Thank you, I will have to check the versions of each
Certainly, I have already reviewed the provided documentation on this matter. I received this: "1. check_for_addon_builder_version - Only the add_on_builder version in addon_builder.conf is updated to the 4.1.3 version of AOB and not the whole app. The AOB library files must also be updated to make the app Splunk Cloud-compatible. This app contains an older AOB library. File: default/addon_builder.conf Line Number: 4". Since the existing app was developed on some other instance, and we are trying to import the .tgz file downloaded from Splunkbase on a different instance, we need guidance to make it work.
You have probably already read https://dev.splunk.com/enterprise/docs/releaseapps/cloudvetting and the other instructions on dev.splunk.com?
Hi @shashankk, try to simplify your search, because the eval isn't mandatory:

index=test_index | rex "\.(?<TestMQ>.*)\@" | stats count AS TotalCount count(eval(Priority="Low")) AS Low count(eval(Priority="Medium")) AS Medium count(eval(Priority="High")) AS High BY TestMQ

Ciao. Giuseppe
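The conditional-count logic that `count(eval(...))` performs can be sketched in plain Python, which may help in seeing why the extra `eval ... case(...)` step adds nothing. The sample events below are hypothetical; the regex is the same one used in the search above.

```python
import re
from collections import defaultdict

# Hypothetical sample events: the queue name sits between the first "."
# and the "@", matching the search-time rex "\.(?<TestMQ>.*)\@".
events = [
    {"_raw": "msg.MQName1@host", "Priority": "Low"},
    {"_raw": "msg.MQName1@host", "Priority": "High"},
    {"_raw": "msg.MQName2@host", "Priority": "Medium"},
]

# Emulate: stats count AS TotalCount count(eval(Priority="Low")) AS Low ... BY TestMQ
stats = defaultdict(lambda: {"TotalCount": 0, "Low": 0, "Medium": 0, "High": 0})
for ev in events:
    m = re.search(r"\.(?P<TestMQ>.*)@", ev["_raw"])
    if not m:
        continue  # events that do not match the rex drop out of the stats
    row = stats[m.group("TestMQ")]
    row["TotalCount"] += 1  # unconditional count
    if ev["Priority"] in ("Low", "Medium", "High"):
        row[ev["Priority"]] += 1  # conditional count, like count(eval(...))

for mq, row in sorted(stats.items()):
    print(mq, row)
```

Note that the conditional counters test the raw `Priority` value directly: no intermediate `Priority_Level` field is needed.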
Hi Splunk Team, I am having issues fetching data from two stats count fields together. Below is the query:

index=test_index | rex "\.(?<TestMQ>.*)\@" | eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High") | stats count as TotalCount, count(eval(Priority_Level="Low")) as Low, count(eval(Priority_Level="Medium")) as Medium, count(eval(Priority_Level="High")) as High by TestMQ

This gives me a result like the example below:

TestMQ | TotalCount | Low | Medium | High
MQNam1 | 120        | 0   | 0      | 0
MQNam2 | 152        | 0   | 0      | 0
...

The problem is that I am getting a "0" value in the Low, Medium & High columns, which is not correct. I want to combine both stats and show the group-by results for both fields. If I run the same query with separate stats, each gives its individual data correctly.

Case 1: stats count as TotalCount by TestMQ

index=test_index | rex "\.(?<TestMQ>.*)\@" | eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High") | stats count as TotalCount by TestMQ

Example output:

TestMQ | TotalCount
MQName | 201

Case 2: stats count as PriorityCount by Priority_Level

index=test_index | rex "\.(?<TestMQ>.*)\@" | eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High") | stats count as PriorityCount by Priority_Level

Example output:

Priority_Level | PriorityCount
High           | 20
Medium         | 53
Low            | 78

Please help and suggest. @ITWhisperer - kindly assist.
We have received an email requesting the upgrade of our existing add-on app to the latest version of the add-on builder. Despite our attempts to validate the app using the add-on builder app, we encountered difficulties importing the .tgz file. It's important to note that we are using a separate instance for validation and packaging. We are seeking guidance on how to successfully validate and package the app using the add-on builder app. Our ultimate goal is to submit the updated app to Splunkbase, ensuring compatibility with the Splunk Cloud platform. Any assistance in this matter would be greatly appreciated.
You probably need to write your own TA/scripted input to look at the used disk space in the $SPLUNK_HOME/var/run/splunk/dispatch directory?
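A scripted input along those lines could be sketched like this: a minimal Python script that sums file sizes under the dispatch directory and prints one key=value event for Splunk to index. The output field names (`dispatch_dir`, `used_bytes`) are made up for this example; $SPLUNK_HOME is assumed to be set in the environment when Splunk runs the script.

```python
import os

def dir_size_bytes(path):
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk (common in dispatch dirs)
    return total

if __name__ == "__main__":
    # Splunk exports SPLUNK_HOME to scripted inputs; fall back for testing.
    splunk_home = os.environ.get("SPLUNK_HOME", "/opt/splunk")
    dispatch = os.path.join(splunk_home, "var", "run", "splunk", "dispatch")
    # One key=value event per run; Splunk's automatic extraction picks it up.
    print('dispatch_dir="%s" used_bytes=%d' % (dispatch, dir_size_bytes(dispatch)))
```

You would drop this into an app's bin directory and schedule it via inputs.conf; an alert can then fire on the indexed used_bytes values.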
If your Mac uses an Mx processor (Apple silicon), then there could be some issues with buckets and other additional binaries/TAs/apps. On macOS you can run those under Rosetta 2, which should basically fix this kind of issue. If it's an older Mac with an Intel chip, then there shouldn't be issues of this kind. Anyhow, follow the instructions in the previous post and do a first test migration. When it works without issues, then do the final migration if needed.
Hi. Usually those apps/TAs contain readme/install instructions; just follow those to upgrade your current versions. If there are no separate instructions, then you should use a test environment to try the update. Usually you can update via the GUI or the CLI, or using the DS if the add-on is distributed to UFs. For Splunk's own add-ons, just follow these general instructions: https://docs.splunk.com/Documentation/AddOns/released/Overview/Installingadd-ons r. Ismo
Read this documentation thoroughly - it has most of the answers. https://docs.splunk.com/Documentation/AddOns/released/Overview/Installingadd-ons If you have some specific problems, feel free to ask.
Basically, you must check that it's valid for Splunk Cloud. You can do this by following these instructions: https://dev.splunk.com/enterprise/docs/developapps/testvalidate/appinspect/ On Splunkbase there is no mention that this app is valid for Splunk Cloud. Usually this means that it cannot be installed there before it has been fixed/validated for Splunk Cloud. With Splunkbase apps, you should usually contact the developer and ask them to port the app to Splunk Cloud.
How to upgrade existing add-on apps to a newer Add-on Builder version on different computers.
Your problem is not well-defined. Splunk can only search (and alert on) events that are already in Splunk. It's not clear whether you are trying to find added/changed/whatever _Splunk users_ (which should be at least partially achievable, though the approach differs depending on whether you have a Splunk 9.x version, which has the _configtracker index, or an earlier one), or whether you want to find information in your Splunk data about user accounts from other systems. In the latter case you need to have the information from those systems ingested into Splunk first in order to be able to find anything.
https://docs.splunk.com/Documentation/Splunk/latest/Installation/MigrateaSplunkinstance
Apart from the problem of finding the information (the _internal index by default rolls after 30 days), the trouble with "index sizes" is that there are so many different parameters that can be meant by "index size". Even a simple dbinspect has two different parameters (rawSize and sizeOnDiskMB). Add to this the summary and datamodel_summary directories...
Adding to what has already been said: I would advise _against_ using those fields. Their contents may be misleading, especially if you ingest data from different timezones, and searching by them can be additionally skewed versus what you expect if you are in yet another timezone. Quoting the docs: [...] If an event has a date_* field, it represents the value of time/date directly from the event itself. If you have specified any timezone conversions or changed the value of the time/date at indexing or input time (for example, by setting the timestamp to be the time at index or input time), these fields will not represent that. [...]
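The timezone skew described above can be illustrated with a short Python sketch (the timestamps are invented for the example): the date_* fields mirror the literal text of the event, while _time represents the normalized instant, so the two can disagree by the timezone offset.

```python
from datetime import datetime, timezone, timedelta

# An event whose raw timestamp was written in a UTC+05:00 timezone.
event_time = datetime(2024, 3, 1, 23, 30, tzinfo=timezone(timedelta(hours=5)))

# date_hour-style fields reflect the literal text of the event: hour 23.
date_hour = event_time.hour

# _time is the normalized instant; viewed in UTC the same moment is 18:30.
hour_in_utc = event_time.astimezone(timezone.utc).hour

# Searching date_hour=18 would miss this event even though, for a UTC
# viewer, it happened during hour 18.
print(date_hour, hour_in_utc)
```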
@isoutamo Hi, it's this Splunkbase app: https://splunkbase.splunk.com/app/6128 Thanks
You could try to work around that with custom CSS. Insert an <html><style>[...]</style></html> block into your panel and set display: none for the selected elements.
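A minimal Simple XML sketch of that approach follows. The panel id, the CSS selector, and the search are all hypothetical placeholders; use your browser's dev tools to find the real class or id of the element you want to hide in your own dashboard.

```xml
<row>
  <panel>
    <html>
      <style>
        /* Hypothetical selector: replace with the element's real class/id. */
        #my_table .some-element-to-hide {
          display: none;
        }
      </style>
    </html>
    <table id="my_table">
      <search>
        <query>index=_internal | stats count by sourcetype</query>
      </search>
    </table>
  </panel>
</row>
```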
There is no direct REST endpoint to query the current state of quota consumption. You might be able to dig something out of the _introspection or _metrics indexes, but I wouldn't count on much granularity.