All Posts



We have received an email requesting the upgrade of our existing add-on app to the latest version of the add-on builder. Despite our attempts to validate the app using the add-on builder app, we encountered difficulties importing the .tgz file. It's important to note that we are using a separate instance for validation and packaging. We are seeking guidance on how to successfully validate and package the app using the add-on builder app. Our ultimate goal is to submit the updated app to Splunkbase, ensuring compatibility with the Splunk Cloud platform. Any assistance in this matter would be greatly appreciated.
You probably need to write your own TA/scripted input to monitor the disk space used by the $SPLUNK_HOME/var/run/splunk/dispatch directory.
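As a rough sketch, you may not even need a scripted input: the search-jobs REST endpoint exposes a diskUsage value (in bytes) per job, so something like the following could approximate dispatch consumption per user. The field names here are from memory - verify them against your own | rest output before relying on this.

```spl
| rest /services/search/jobs
| stats sum(diskUsage) as dispatch_bytes by eai:acl.owner
| eval dispatch_mb = round(dispatch_bytes / 1024 / 1024, 2)
| sort - dispatch_mb
```

A scripted input running du against the dispatch directory would give you the on-disk truth, but the REST approach needs no extra deployment.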
If your Mac has an M-series processor (Apple silicon), there could be issues with buckets and other additional binaries/TAs/apps. On macOS you would run those under Rosetta 2, which should basically work around that kind of issue. If it's an older Mac with an Intel chip, there shouldn't be issues of this kind. In any case, follow the instructions in the previous post and do a first test migration. When it works without issues, do the final migration if needed.
Hi. Usually those apps/TAs contain a readme with install instructions. Just follow those to upgrade your current version. If there are no separate instructions, you should use a test environment to try the update. Usually you can update via the GUI, the CLI, or a deployment server if the app is distributed to UFs. For Splunk's own add-ons, just follow these general instructions: https://docs.splunk.com/Documentation/AddOns/released/Overview/Installingadd-ons r. Ismo
Read this documentation thoroughly - it has most of the answers: https://docs.splunk.com/Documentation/AddOns/released/Overview/Installingadd-ons If you have specific problems, feel free to ask.
Basically you must check that it's valid for Splunk Cloud. You can do that by following these instructions: https://dev.splunk.com/enterprise/docs/developapps/testvalidate/appinspect/ On Splunkbase there is no mention that this app is valid for Splunk Cloud. Usually this means it cannot be installed there until it has been fixed and validated for Splunk Cloud. With Splunkbase apps, you should usually contact the developer and ask them to port the app to Splunk Cloud.
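For reference, once you have installed the AppInspect CLI from the page above, a local cloud-readiness check looks roughly like this (a sketch - the exact tag names and modes available depend on your splunk-appinspect version, so check --help first):

```shell
# install the CLI, then run the cloud-vetting checks against your packaged app
pip install splunk-appinspect
splunk-appinspect inspect my_addon.tgz --mode precert --included-tags cloud
```

Any failures reported under the cloud tags are the things that would block a Splunk Cloud install.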
How to upgrade existing add-on apps to a newer add-on version on different computers?
Your problem is not well-defined. Splunk can only search (and alert based on) events that are in Splunk. It's not clear whether you are trying to find added/changed/whatever _Splunk users_ (which should be at least partially achievable, though the approach differs based on whether you have a 9.x Splunk version, which has the _configtracker index, or an earlier one), or whether you want to find information about user accounts from other systems within your Splunk data. In the latter case you need to ingest the information from those systems into Splunk first in order to be able to find anything.
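For the first case on 9.x, a starting-point sketch against _configtracker might look like this - the data.* field names below are from memory and the path filter assumes local users are stored in etc/passwd, so verify both against your actual events:

```spl
index=_configtracker sourcetype=splunk_configuration_change data.path="*etc/passwd"
| table _time data.action data.path data.changes{}.properties{}.name
```

Config tracking only covers changes on that Splunk instance; user changes made elsewhere (LDAP/SAML backends, other hosts) won't appear here.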
https://docs.splunk.com/Documentation/Splunk/latest/Installation/MigrateaSplunkinstance
Apart from finding the information (the _internal index by default rolls after 30 days), the trouble with "index sizes" is that there are so many different parameters which can be meant as "index size". Even simple dbinspect has two different parameters (rawSize and sizeOnDiskMB). Add to this summary and datamodel_summary directories...
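For example, dbinspect can show both measures side by side (a quick sketch; note that rawSize is reported in bytes while sizeOnDiskMB is already in megabytes):

```spl
| dbinspect index=*
| stats sum(rawSize) as raw_bytes, sum(sizeOnDiskMB) as disk_mb by index
| eval raw_mb = round(raw_bytes / 1024 / 1024, 2)
| table index raw_mb disk_mb
```

Comparing raw_mb against disk_mb per index also gives you a feel for the compression ratio - yet another number that people sometimes mean by "index size".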
Adding to what has already been said - I would advise _against_ using those fields. Their contents may be misleading, especially if you ingest data from different timezones, and searching by them can be additionally skewed versus what you expect if you are in yet another timezone. Quoting the docs: [...] If an event has a date_* field, it represents the value of time/date directly from the event itself. If you have specified any timezone conversions or changed the value of the time/date at indexing or input time (for example, by setting the timestamp to be the time at index or input time), these fields will not represent that. [...]
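Instead of date_hour and friends, you can derive the same breakdowns from the timezone-adjusted _time with strftime - a minimal sketch (index=your_index is a placeholder for your real search):

```spl
index=your_index
| eval hour = strftime(_time, "%H"), wday = strftime(_time, "%a")
| stats count by hour
```

Because these values come from _time, they respect whatever timestamp and timezone handling was applied at index time, which the date_* fields do not.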
Hi @isoutamo, it's this Splunkbase app: https://splunkbase.splunk.com/app/6128 Thanks
You could try to work around that with custom CSS. Insert a <html><style>[...]</style></html> block into your panel and set display: none for the selected elements.
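A minimal Simple XML sketch of that idea - the depends attribute with an undefined token is a common community trick to keep the CSS-carrying row itself invisible, and the selector below is only an example (inspect your dashboard's DOM to find the element you actually want to hide):

```xml
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* hypothetical selector - adjust after inspecting your dashboard */
        #my_panel .panel-title { display: none; }
      </style>
    </html>
  </panel>
</row>
```

Give your target panel an id attribute so the selector can address it reliably.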
There is no direct REST endpoint to query for the current state of quota consumption. You might be able to dig out something from the _introspection or _metrics indexes but I wouldn't count on too much granularity.
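If you do want to try digging, a sketch against _introspection that approximates per-user search concurrency (a large part of what quota limits) could look like this - the data.search_props.* field names vary across versions, so verify them in your environment before trusting the numbers:

```spl
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| bin _time span=1m
| stats dc(data.search_props.sid) as concurrent_searches by _time, data.search_props.user
```

This tells you what was running, not what the configured quota is - you'd still have to compare it against your limits.conf / role settings yourself.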
1. Whenever possible (I know that sometimes you don't have the technical means), try to copy-paste the actual text input in a code box (the </> symbol in the editor when you're typing your post) or in the preformatted style instead of posting a screenshot - it's much easier to work with. 2. As @isoutamo already pointed out - those messages don't seem to have anything to do with time issues (nobody says you don't have time issues; it's just that this particular case is about network connectivity, not time). We don't know your network setup, but it seems your hosts don't see each other (or the traffic is filtered somewhere).
Hi @jaro, if the field is in the Notable index, it can be displayed. Did you check whether it's among the visualized fields? Ciao. Giuseppe
Luckily, the requirements are not that strict. https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers I successfully ran 9.0 forwarders with 8.2 indexers for some time, since the client's policy was to install the latest available UF. While the S2S protocol is not upgraded that often (and even so, the components can negotiate a lower version if one side of the connection doesn't support the most recent one), there are some issues that can happen if your UF is newer than your indexers - with 9.0 forwarders it was that they generated events for the _configtracker index, which did not exist on the indexers. But that was a minor annoyance, not a real problem.
Thanks @gcusello. To check whether this field is displayed in the Notable event (running index=notable search=your_correlation_search): yes, the field "signature" does appear in the results of the search I ran. However, the description below cannot show the value of the "signature" field that I reference in the correlation search as $signature$. I have also tried eval-ing another name equal to the signature field; still nothing.
After you have defined those lookups as described in those links, you could use them like this:

| makeresults
| eval foo="a"
``` The previous lines generate example data and should be replaced by your real search ```
| lookup regiondetails Alias as foo

The above example gives you a result like:

Name     _time                foo
america  2024-01-05 11:44:56  a

If you want to do it without the lookup command, you must define an automatic lookup. You can find that in the previous links.
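For reference, an automatic lookup is defined in transforms.conf plus props.conf - a sketch assuming a lookup file named regiondetails.csv and a hypothetical sourcetype my_sourcetype (swap in your own names):

```ini
# transforms.conf - register the lookup table file
[regiondetails]
filename = regiondetails.csv

# props.conf - apply it automatically to a sourcetype
[my_sourcetype]
LOOKUP-region = regiondetails Alias AS foo OUTPUT Name
```

With this in place, the Name field appears on matching events at search time without any explicit | lookup in the SPL.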
There are probably several different possible approaches.

index="my_index"
    [| inputlookup InfoSec-avLookup.csv
     | rename emailaddress AS msg.parsedAddresses.to{}]
    final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type

This part is OK (unless you have too many results from the subsearch; are you aware of the subsearch limitations?) - it will give you a list of matching events. Now you're doing:

| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")

While in your particular case it might not be that bad, I always advise (unless you have a very specific use case, like filtering by month, where you render your timestamp to just the month to have something to filter by) leaving _time as it is, since it's easier to manipulate that way. Just use eval (or, even better, fieldformat) at the end of your pipeline for presentation.

| stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To

Now that's the tricky part - you're doing stats list() over several separate fields. Are you aware that you are creating completely disconnected multivalued fields? If - for any reason - you had an empty Subject in one of your emails, you wouldn't know which email it came from, because the values in each multivalued field are "squished" together. I know it's tempting to use multivalued fields to simulate the "cell merging" functionality you know from spreadsheets, but it's good to know that this mechanism has its limitations.

| search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To]

This part is pointless. You already searched for those addresses (and you're creating a subsearch again). I'd do it differently.
After your initial search I'd do:

| eventstats count by To
| sort - count + To - _time
| streamstats count as eventorder by To
| where eventorder<=5
| table _time To From Subject Virus_Type

The eventstats part is needed only if you want the users with the most matches first. Otherwise just drop the eventstats and remove the first field from the sort command - you'll then have your results sorted alphabetically.

Now if you want your time field called Time instead of _time, add:

| rename _time as Time
| fieldformat Time=strftime(Time,"%H:%M:%S %m/%d/%y")

And if you don't want to repeat the To values (which I don't recommend, because it breaks the logical structure of your data), you can use autoregress or streamstats to copy the To value over from the previous event and, when it's the same as the current one, blank the existing field. But again - I don't recommend it: it makes the output look "prettier" but leaves it "logically incomplete".