All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


My SignalFlow builder query is:

A = data('http.server.request.duration_min', filter=filter('http.route', '/Gain/vl/*')).publish(label='A')

I need histogram function metrics instead of data(). Are any configuration changes required as part of the OTel instrumentation to make something like this work?

A = histogram('http.server.request.duration', filter=filter('http.route', '/batch-process/iomatch')).min().publish(label='A')
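For reference, a sketch of other aggregations the SignalFlow histogram() function exposes, assuming the OTel instrumentation is already exporting http.server.request.duration as an explicit-bucket histogram (the percentile(pct=...) signature is my reading of the SignalFlow docs; please verify it against your environment, and the route filter is just the one from the question):

```
# Illustrative only - same metric and filter as above, different aggregation
A = histogram('http.server.request.duration',
              filter=filter('http.route', '/batch-process/iomatch')).percentile(pct=90).publish(label='A')
```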
@saiiman - This SOAR document clearly describes the current limitations of the Community license: https://docs.splunk.com/Documentation/SOARonprem/latest/Admin/License

- 100 licensed actions per day
- 1 tenant
- 5 cases in the New or Open states

I hope this helps! Kindly upvote and accept the answer if it does!
@silverKi - As @richgalloway mentioned, it is not possible to do this for a particular role or user. However, you can disable the risky-command warning for a particular command or for all commands. Reference document: https://docs.splunk.com/Documentation/Splunk/latest/Security/SPLsafeguards

I hope this helps!
What is your search/SPL for this?
http.server.request.duration is a histogram ("Duration of HTTP server requests"), but the metrics are coming in grouped like this:

http.server.request.duration_sum
http.server.request.duration_count
http.server.request.duration_max
http.server.request.duration_bucket
http.server.request.duration_min
http.client.request.duration_count

and similarly for the others. Also, http.route is coming through as Gain/Vl/* instead of the full endpoint. Any solution for this?
Make sure that you reopen the modified dashboard in a new tab/window, otherwise existing token values may get carried forward.
If you have a single indexer, you can migrate it to a cluster and then to a multisite cluster quite easily. You can find the steps here:
https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment
https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Migratetomultisite
You can create a one-node cluster if needed, or use several nodes per site, with of course the same number and size of nodes in the DR site. Building DR with any other tool will be more complicated, especially getting a working DR site. So I strongly recommend using Splunk's own way to do DR!
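As a minimal sketch of where those migration docs lead, the multisite setup comes down to server.conf stanzas roughly like the following on each peer (every hostname, port, site name, and the pass4SymmKey value here is a hypothetical placeholder; the docs linked above are the authoritative reference):

```
# server.conf on an indexer (cluster peer) - illustrative values only
[general]
site = site1

[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://cluster-manager.example.com:8089
pass4SymmKey = changeme
```

On the cluster manager, the corresponding [clustering] stanza would set mode = manager, multisite = true, available_sites = site1,site2, and a site_replication_factor such as origin:1,total:2.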
@ITWhisperer: I tried this as well, directly passing the value. Still the same result.
You seem to have removed the parsing of the slot. Also, try using epoch times rather than converting them to strings (the conversion is unnecessary):

index="index1"
| search "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| rex field=msg "(?<action>added|removed)"
| eval added_epoch=if(action="added",_time,null())
| eval removed_epoch=if(action="removed",_time,null())
| sort 0 _time
| streamstats max(added_epoch) as added_epoch latest(removed_epoch) as removed_epoch by host, slot_number
| eval downtime=if(isnotnull(added_epoch) AND isnotnull(removed_epoch), removed_epoch - added_epoch, 0)
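If it helps to see the streamstats pairing in isolation, here is a self-contained sketch that can be pasted into a search bar; the two events, their timestamps, and the host name are all fabricated for illustration:

```
| makeresults count=2
| streamstats count as n
| eval _time=relative_time(now(), if(n=1, "-5m", "-2m"))
| eval host="hostA", msg=if(n=1, "VF slot 3 added", "VF slot 3 removed")
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| rex field=msg "(?<action>added|removed)"
| eval added_epoch=if(action="added",_time,null())
| eval removed_epoch=if(action="removed",_time,null())
| sort 0 _time
| streamstats max(added_epoch) as added_epoch latest(removed_epoch) as removed_epoch by host, slot_number
| eval downtime=if(isnotnull(added_epoch) AND isnotnull(removed_epoch), removed_epoch - added_epoch, 0)
```

If I have traced the streamstats correctly, the second event should show downtime=180, since the carried-forward added_epoch is 3 minutes earlier than removed_epoch.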
Change your default value to be the value, not the label:

"defaultValue": "*"
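In the multiselect input from the posted dashboard definition, that change would look something like this (only defaultValue differs; the ids, token, and data source mirror the posted configuration):

```json
"input_jHd4pV3L": {
  "type": "input.multiselect",
  "title": "Namespace",
  "options": {
    "items": [ { "label": "All", "value": "*" } ],
    "defaultValue": [ "*" ],
    "token": "account_id"
  },
  "dataSources": { "primary": "ds_fURg97Gu" },
  "context": {}
}
```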
Ideally, though, files downloaded with wget should work; I am not sure why Splunk throws an error. That would save the time and effort of downloading and copying. Can anyone suggest anything? Forwarder Awareness
Asterisks are wildcards - are you really using wildcards, or are you just obfuscating your search for the purposes of posting here? It would also be very helpful if you could share some sample raw events, anonymised appropriately; please share them in a code block using the </> button to create an area to place them in, so that formatting is preserved.
Hi All,

I have a multi-select dropdown created using Dashboard Studio with the default value set to "All". This "All" is just a static value set under the menu configuration:

Label - "All"
Value - *

Query used:

index=test sourcetype="billing_test" productcode="testcode"
| fields account_id account_name cluster namespace pod cost
| search account_id IN ($account_id$) AND clustername IN ($cluster$) AND account_name IN ($account_name$)
| stats count by namespace

But when I click on this multi-select dropdown, it loads another "All" value together with the default value I have set. (Example screenshot omitted.)

Full dashboard definition:

{
  "visualizations": {},
  "dataSources": {
    "ds_1sGu0DN2": {
      "type": "ds.search",
      "options": {
        "query": "index=test sourcetype=\"billing_test\" productcode=\"testcode\"| fields account_id account_name cluster namespace pod cost"
      },
      "name": "Base search"
    },
    "ds_fURg97Gu": {
      "type": "ds.chain",
      "options": {
        "extend": "ds_1sGu0DN2",
        "query": "| search account_id IN ($account_id$) AND eks_clustername IN ($cluster$) AND account_name IN ($account_name$)| stats count by namespace"
      },
      "name": "Namespacefilter"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-7d@h,now"
      },
      "title": "Global Time Range"
    },
    "input_jHd4pV3L": {
      "options": {
        "items": [
          { "label": "All", "value": "*" }
        ],
        "defaultValue": [ "All" ],
        "token": "account_id"
      },
      "title": "Namespace",
      "type": "input.multiselect",
      "dataSources": {
        "primary": "ds_fURg97Gu"
      },
      "context": {}
    }
  },
  "layout": {
    "options": {},
    "globalInputs": [ "input_global_trp", "input_jHd4pV3L" ],
    "tabs": {
      "items": [
        { "layoutId": "layout_1", "label": "New tab" }
      ]
    },
    "layoutDefinitions": {
      "layout_1": {
        "type": "grid",
        "structure": [],
        "options": { "width": 1440, "height": 960 }
      }
    }
  },
  "description": "",
  "title": "Test Dashboard"
}

Please can anyone help me understand what is going wrong?

Thanks,
NVP
Hi @Rim-unix, good for you, see you next time! Let us know if we can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Apologies, I am pretty new to Splunk and still learning through the tutorials. I got as far as the below, but no results yet:

index="Nex" Application="Pe***g.Ne**s.Platform.Host" | search
What have you tried so far?
Below was the question given to me: "I need a running report to be exported, with the number of errors on each of the services in the last 7 days; then it has to show a graph for each week." I would need a query to search for the service "Per****ng.N**s.Platform.Host" in index "Nex", where I would need data for Information, Error, Debug, and Warning levels. Please help me with this.
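A minimal sketch of such a search, assuming the log level is extracted into a field called Level and the service name is in a field called Application (both field names are assumptions; adjust them to match your actual data):

```
index="Nex" Application="Per****ng.N**s.Platform.Host" earliest=-7d@d
    Level IN ("Information", "Error", "Debug", "Warning")
| timechart span=1d count by Level
```

For the last 7 days a daily span is usually more readable; for the per-week graph over a longer range, change earliest and use span=1w. Saving this as a report with a scheduled export would cover the "running report" part.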
Thanks Giuseppe for your suggestions. We are planning to build the setup a different way; if we have any queries, we will get back to you. Once again, thanks Giuseppe.
Hi @Rim-unix, if you have an indexer cluster, you can create a multisite cluster and DR is automatic. If you don't have an indexer cluster, you have to find a different way to do DR, using external tools such as Veeam or other products. Ciao. Giuseppe
Hi @gcusello  Nice idea my friend, thanks for your answer Danke  Zake