All Posts


You've never scripted on unices, have you? But seriously - that's kinda obvious. I'd say Write-Output is like writing to stdout whereas Write-Host is more like writing to stderr (yes, I know that this analogy is not 100% correct).
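The stdout/stderr analogy can be sketched outside PowerShell too. A hypothetical Python illustration (not from the original thread): a parent process that captures a child's output, the way Splunk captures a scripted input, only sees the stdout stream.

```python
import subprocess
import sys

# Child script: one line to stdout (analogous to Write-Output) and
# one line to stderr (analogous to Write-Host).
child = 'import sys; print("captured"); print("console only", file=sys.stderr)'

# A consumer that captures the script's output only sees stdout.
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)

print(result.stdout.strip())  # the "Write-Output" line: captured
print(result.stderr.strip())  # the "Write-Host" line: console only
```

As the analogy goes, the stderr line still reaches the console, but it never lands in the captured output stream.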
Thanks to reddit user u/chadbaldwin, who pointed out that the fault was in using `Write-Host` rather than `Write-Output`; `Write-Host` output isn't something Splunk is able to capture. Changed the script to use `Write-Output` and it's now working.
Hello, I have a saved search that pushes data to a summary index. The summary index has data for the last 2 years and the data volume is really huge. Suppose I want to add a new field to this data in the summary index; I then need to re-run the search over the last two years. Since the volume is huge, if I try to run the search over all 2 years of data at once, the search fails or data gets missed. To avoid this, I'll be pushing data in 10-day or 30-day batches. For example, if I have to repopulate my summary index after adding a new field, for the first batch I'll run over data from 1st Aug 2023 to 10th Aug 2023, the next batch from 11th Aug to 20th Aug, and so on for the past two years of data. This task is very cumbersome. Is there a way to automate it in Splunk? Can I schedule my search in such a way that, while re-pushing data, it gets pushed into the summary index in 10-day batches without manual intervention?
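The thread contains no code for this, but the batching can be scripted outside Splunk, for example by driving the saved search over explicit time windows via the REST API (Splunk also ships a `fill_summary_index.py` backfill script that works along these lines). A minimal, hypothetical sketch of generating the 10-day windows:

```python
from datetime import date, timedelta

def batch_windows(start, end, days=10):
    """Yield (earliest, latest) date pairs covering [start, end] in fixed-size batches."""
    cur = start
    while cur <= end:
        batch_end = min(cur + timedelta(days=days - 1), end)
        yield cur, batch_end
        cur = batch_end + timedelta(days=1)

# Example: the poster's batches for August 2023
windows = list(batch_windows(date(2023, 8, 1), date(2023, 8, 31)))
print(windows[0])  # first batch: 1 Aug - 10 Aug
print(windows[1])  # second batch: 11 Aug - 20 Aug
```

Each pair would then become the `earliest_time`/`latest_time` of one dispatched search, so the backfill runs unattended batch by batch.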
Hi community, I wanted to know if it's possible, when you have a local Splunk deployment, to add new machines ("instances") and monitor them alongside the local one. That's all. If it is possible, I'd appreciate a link to the procedure to follow to complete this task. Regards and thanks.
Hi @ITWhisperer! Thanks for your response! It is working fine now. Previously it always selected two values in dashboard 2 even if we selected only one value in dashboard 1; for example, if we selected "Front Office" in Dashboard 1, it showed both values "Front Office" and "Back Office" in Dashboard 2. Thanks!
Hi @bt149, for the lookup population search you could try something like this:

<your_search>
| stats count earliest(_time) AS first_event latest(_time) AS last_event BY host
| outputlookup your_lookup.csv

For the alert that fires on missing hosts, you could try:

<your_search>
| stats count BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS count BY host
| where count=0

Ciao. Giuseppe
I have a lookup file. The lookup has "host", "count", "first_event" and "last_event" fields. I want to run a search hourly that will update all the fields with fresh values and, in the event that a "host" is not found in the search, send an alert. Any guidance would be appreciated.
Hi @john_snow00, sorry, where is the timestamp? If it isn't contained in the event, it's added by Splunk. Anyway, you could run something like this:

<your_search>
| rex "Rate\s+(?<Bytes>\d+)\/sec"
| eval MB=Bytes/1024/1024
| timechart sum(MB) AS MB

I also added the regex to extract the field; if you already have it, don't use my regex. Ciao. Giuseppe
@PickleRick, how would you use REST on Splunk Cloud indexers? Isn't it restricted to the SH only?
Lessons learned: 1) Use btool (or REST in case of Cloud) to see effective config. 2) Use unique naming schema in order not to accidentally clash with settings from other chunks of config.
Hey, thank you for the help. Now we have a solution. Best regards
I have regular traffic passing through my server. The server has the IP 10.41.6.222. My goal is to extract the Rate /sec passing through the server and to be able to see the Rate /sec in a graph, with the x axis showing time and the y axis showing the Rate /sec (extracted values).

Rate 0/sec : Bytes 9815772 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 402/sec : Bytes 9816135 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 139587/sec : Bytes 10004146 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 147636/sec : Bytes 10009645 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10358668 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10361672 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10364579 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 69967/sec : Bytes 10364667 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 49661/sec : Bytes 10371887 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 217793/sec : Bytes 10700517 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 353829/sec : Bytes 10944230 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 93689/sec : Bytes 10946290 : from owa client to vs_owa with address 10.41.6.166:443:10.41.6.222
Rate 82030/sec : Bytes 10950753 : from owa client to vs_owa with address
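Before wiring an extraction like this into a Splunk rex, the pattern can be sanity-checked offline. A hypothetical Python check against a slice of the raw data (note the events run together with no separator, which a global match handles):

```python
import re

# Two of the run-together events from the sample above
sample = ("Rate 0/sec : Bytes 9815772 : from owa client to vs_owa "
          "with address 10.41.6.166:443:10.41.6.222"
          "Rate 402/sec : Bytes 9816135 : from owa client to vs_owa "
          "with address 10.41.6.166:443:10.41.6.222")

# Capture the numeric rate preceding "/sec"; the equivalent Splunk
# extraction would be something like: | rex max_match=0 "Rate\s+(?<rate>\d+)/sec"
rates = [int(m) for m in re.findall(r"Rate\s+(\d+)/sec", sample)]
print(rates)  # -> [0, 402]
```

Once the field is reliably extracted, a timechart over it gives the time-vs-rate graph the post asks for.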
Hi @gcusello , it was setnull stanza which was being used by another app and was taking precedence over this one that is why it was not taken into consideration . i changed the setnull stanza in tranforms to a more meaningful & unique name and that worked .. Thanks a lot for your help.
Hi @gcusello, thanks for your reply. I agree with what you suggested. However, I found it challenging to recognize the stanzas that can be used for completing each individual field. I am using Vladiator and filling in the gaps that way. From what you said: "Following your example: WinRegMon belongs to the Splunk_TA_Windows Add-On that's CIM 4.x Compliant, so you don't need to perform any action." WinRegMon was there, but I had to discover it myself, as the options under the default folder/inputs.conf are numerous and disabled by default, leaving the user to decide which ones to enable and for what purpose. Would you be able to help me identify the best and most appropriate way to decide how to enable the Ports Data Set's fields (currently not getting any data in... is it coming from Sysmon or some other sourcetype?) from the Endpoint Data Model? Hope you can understand my challenge.
It depends on how you populate the choices for dropdown 2. For example, if you are using a search, you can filter the results of that search based on the selection from dropdown 1. You may also be able to use the change handler for dropdown 1 to set the form.dropdown2 token to auto-select a value for dropdown 2.
Hi All, I need help building an SPL search that would return all available fields mapped to their sourcetypes/sources, looking across all indexers and crawling through all indexes (index=*). I currently use the following to pull out all the fields and their extracted values, but I have no idea where they are coming from, or what their sourcetype and source are:

index=*
| fieldsummary
| search values!="[]"
| rex field=values max_match=0 "\{\"value\":\"(?<extracted_values>[^\"]+)\""
| fields field extracted_values

Thank you!
Multiselect settings are passed in URLs by repeating the token once for each value that has been selected: <link target="_blank">/app/SAsh/operational_beautiful?form.choose_office=Front%20Office&amp;form.choose_office=Back%20Office&amp;...
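The same repeated-parameter query string can be generated programmatically. A hypothetical Python sketch using urllib.parse.urlencode with doseq=True (by default urlencode encodes spaces as +; passing quote_via=quote yields %20 as in the URL above):

```python
from urllib.parse import urlencode, quote

params = {"form.choose_office": ["Front Office", "Back Office"]}

# doseq=True repeats the key once per value, matching how Simple XML
# passes multiselect tokens in a drilldown URL.
query = urlencode(params, doseq=True, quote_via=quote)
print(query)  # -> form.choose_office=Front%20Office&form.choose_office=Back%20Office
```

This is handy when building drilldown URLs outside the dashboard, e.g. in scheduled report links.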
Hi @DanAlexander, the correct approach, in my opinion, is: identify your Data Sources, then identify in Splunkbase the best Add-Ons for your Data Sources. The CIM 4.x compliant Add-Ons are ready to be used without any action. If instead you have some data source without a CIM 4.x compliant Add-On, you have to create it using the Add-On Builder (https://splunkbase.splunk.com/app/2962) and the SA-CIM-Vladiator (https://splunkbase.splunk.com/app/2968) apps, which guide you through these actions. Following your example: WinRegMon belongs to the Splunk_TA_Windows Add-On, which is CIM 4.x compliant, so you don't need to perform any action. Ciao. Giuseppe
Hi All, I am trying to identify what data source/sourcetype is needed for each individual field while performing Data Model CIM normalization. For example, for the Endpoint->Ports Data Set (https://docs.splunk.com/Documentation/CIM/5.2.0/User/Endpoint) there is a table with 5 columns (Dataset name / Field name / Data type / Description / Abbreviated list of example values), but there is no guidance on what data source is needed to start populating each individual field. As an example, I recently found that the Registry Data Set needs the WinRegMon stanza (configuring this is another challenge) to be able to recognise and start parsing data. Any help much appreciated!
Hi There! I need to pass a token from one dashboard to another dashboard when clicking its pie chart.

Input in dashboard 1:

<input type="multiselect" token="choose_office" searchWhenChanged="true">
  <label>Front/Back office</label>
  <choice value="Front Office">Front Office</choice>
  <choice value="Back Office">Back Office</choice>
  <initialValue>Front Office,Back Office</initialValue>
  <default>Front Office,Back Office</default>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
</input>

One of the searches in dashboard 1:

`compliance_op`
| search office IN ($choose_office$)
| chart count by $scope$global
| sort $scope$global

My link to the next dashboard is:

<drilldown>
  <link target="_blank">/app/SAsh/operational_beautiful?form.choose_office=$choose_office$&amp;form.machine=$machine$&amp;form.origin=$origin$&amp;form.country=$country$&amp;form.cacp=$cacp$&amp;form.scope=$scope$</link>
</drilldown>

Multiselect in dashboard 2:

<input type="multiselect" token="office_filter" searchWhenChanged="true">
  <label>Front/Back Office</label>
  <choice value="Front Office">Front Office</choice>
  <choice value="Back Office">Back Office</choice>
  <choice value="Unknown">Unknown</choice>
  <prefix>office IN (</prefix>
  <suffix>)</suffix>
  <initialValue>Front Office,Back Office,Unknown</initialValue>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
  <change>
    <eval token="office_filter_drilldown">mvjoin('form.office_filter',"&amp;form.office_filter=")</eval>
  </change>
</input>

Search in dashboard 2:

`compliance_ap`
| search office IN ($choose_office$)
| chart count by $scope$global
| sort $scope$global

I'm facing an error in the search of dashboard 2. Thanks!