All Posts


Make sure that you reopen the modified dashboard in a new tab or window; otherwise, existing token values may be carried forward.
If you have a single indexer, you can migrate it to a cluster, and then to a multisite cluster, quite easily. You can find those steps at https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment and https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Migratetomultisite You can create a one-node cluster if needed, or use several nodes per site, with the same number and size of nodes in the DR site. Building a DR site with any other tool will be more complicated, especially a working DR site. So I strongly recommend using Splunk's own way to do DR!
@ITWhisperer : I tried this as well, directly passing the value. Still the same result.
You seem to have removed the parsing of the slot. Also, try using epoch times rather than converting them to strings (the conversion is unnecessary):

index="index1"
| search "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| rex field=msg "(?<action>added|removed)"
| eval added_epoch=if(action="added",_time,null())
| eval removed_epoch=if(action="removed",_time,null())
| sort 0 _time
| streamstats max(added_epoch) as added_epoch latest(removed_epoch) as removed_epoch by host, slot_number
| eval downtime=if(isnotnull(added_epoch) AND isnotnull(removed_epoch), removed_epoch - added_epoch, 0)
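If you then want a single figure per slot, the result of the search above can be rolled up with a further stats pass (a sketch; the field names match the search above, and the "duration" format of tostring is just one presentation choice):

```spl
| stats sum(downtime) as total_downtime_secs by host, slot_number
| eval total_downtime=tostring(total_downtime_secs, "duration")
```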
Change your default value to be the value, not the label: "defaultValue": ["*"]
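Applied to the multiselect from the question, the input definition would look like this (a sketch based on the JSON posted in the question; only defaultValue changes):

```json
"input_jHd4pV3L": {
    "type": "input.multiselect",
    "title": "Namespace",
    "options": {
        "items": [ { "label": "All", "value": "*" } ],
        "defaultValue": [ "*" ],
        "token": "account_id"
    },
    "dataSources": { "primary": "ds_fURg97Gu" }
}
```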
But ideally WGET downloaded files should work, not sure why Splunk throws error! This saves time and effort of downloading and copying. Can anyone suggest? Forwarder Awareness 
Asterisks are wildcards - are you really using wildcards, or are you just obfuscating your search for the purposes of posting here? It would also be very helpful if you could share some sample raw events, anonymised appropriately; please share them in a code block using the </> button so that formatting is preserved.
Hi All, I have a multi-select dropdown created using Dashboard Studio, with the default value set to "All". This "All" is nothing but a static value set under the menu configuration:

Label - "All"
Value - *

Query used:

index=test sourcetype="billing_test" productcode="testcode"
| fields account_id account_name cluster namespace pod cost
| search account_id IN ($account_id$) AND clustername IN ($cluster$) AND account_name IN ($account_name$)
| stats count by namespace

But when I click on this multi-select dropdown, it loads another "All" as a value alongside the default value I have set.

Example screenshot: [screenshot omitted]

Full source code (JSON):

{
  "visualizations": {},
  "dataSources": {
    "ds_1sGu0DN2": {
      "type": "ds.search",
      "options": {
        "query": "index=test sourcetype=\"billing_test\" productcode=\"testcode\"| fields account_id account_name cluster namespace pod cost"
      },
      "name": "Base search"
    },
    "ds_fURg97Gu": {
      "type": "ds.chain",
      "options": {
        "extend": "ds_1sGu0DN2",
        "query": "| search account_id IN ($account_id$) AND eks_clustername IN ($cluster$) AND account_name IN ($account_name$)| stats count by namespace"
      },
      "name": "Namespacefilter"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-7d@h,now"
      },
      "title": "Global Time Range"
    },
    "input_jHd4pV3L": {
      "options": {
        "items": [ { "label": "All", "value": "*" } ],
        "defaultValue": [ "All" ],
        "token": "account_id"
      },
      "title": "Namespace",
      "type": "input.multiselect",
      "dataSources": { "primary": "ds_fURg97Gu" },
      "context": {}
    }
  },
  "layout": {
    "options": {},
    "globalInputs": [ "input_global_trp", "input_jHd4pV3L" ],
    "tabs": {
      "items": [ { "layoutId": "layout_1", "label": "New tab" } ]
    },
    "layoutDefinitions": {
      "layout_1": {
        "type": "grid",
        "structure": [],
        "options": { "width": 1440, "height": 960 }
      }
    }
  },
  "description": "",
  "title": "Test Dashboard"
}

Can anyone help me figure out what is going wrong? Thanks, NVP
Hi @Rim-unix , good for you; see you next time! Let us know if we can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Apologies, I am pretty new to Splunk and still learning my way through tutorials. I got as far as the search below, but no results yet: Index="Nex" Application="Pe***g.Ne**s.Platform.Host" | search
What have you tried so far?
This was the question put to me: "I need a running report to be exported, with the number of errors on each of the services in the last 7 days; then it has to show a graph for each week." I would need a query to search for the service "Per****ng.N**s.Platform.Host" in Index="Nex", where I would need data for Information, Error, Debug, and Warning levels. Please help me with this.
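A possible starting point, assuming the log level is already extracted into a field such as Level (the field name and the weekly span are assumptions; adjust them to your data):

```spl
index="Nex" Application="Per****ng.N**s.Platform.Host" Level IN ("Information","Error","Debug","Warning")
| timechart span=1w count by Level
```

Saved as a report with a column or line chart, this gives one count per level per week, which can then be scheduled and exported.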
Thanks Giuseppe. Based on your suggestions, we are planning to build the setup a different way; if we have any queries, we will get back to you. Once again, thanks Giuseppe.
Hi @Rim-unix , if you have an Indexer Cluster, you can create a multisite cluster and DR is automatic. If you don't have an Indexer Cluster, you have to find a different way to do DR, using external tools such as Veeam or other products. Ciao. Giuseppe
Hi @gcusello , nice idea my friend, thanks for your answer. Danke, Zake
"I suppose that you have an Indexer Cluster, is it correct?" - No. "You should design a multisite Indexer Cluster where the secondary site is on AWS." - Yes, we are planning a multisite Indexer Cluster. The DR site is US-WEST-2 (Oregon).
Hi @zksvc , you could extract the list of hostnames from Splunk with a simple search: index=* | stats count BY host. Then you could process these results, e.g. using nslookup to get the hostnames where you have IPs and vice versa; likewise, where you have an FQDN you could extract the hostname using a regex, but it depends on your data. In this way, you would have a list of hosts whose logs are monitored by Splunk, and you can match them against the Sophos list using e.g. Excel. Otherwise, if you plan to ingest the Sophos logs into Splunk, you can do this match in Splunk itself. Ciao. Giuseppe
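If the Sophos inventory is uploaded as a lookup file, the match can be done directly in Splunk. A sketch, where the file name sophos_hosts.csv and its hostname column are assumptions, and lower()/split() normalise FQDNs to lowercase short names on the Splunk side (the lookup file is assumed to hold lowercase short names too, since lookup matching is case-sensitive by default):

```spl
index=*
| stats count by host
| eval host_norm=lower(mvindex(split(host,"."),0))
| lookup sophos_hosts.csv hostname AS host_norm OUTPUT hostname AS in_sophos
| eval status=if(isnull(in_sophos),"missing from Sophos list","covered")
```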
I have not set up the ingest from Sophos to Splunk yet. I am currently looking to create a custom correlation search. However, if you know how to verify the data, please let me know. The query I've crafted clearly identifies all the necessary details such as hostname, IP, and username. The issue of uppercase/lowercase is not a problem, as it only requires output without the need to compare data. I've been quite troubled trying to sort this out, which has led me to this point.
Hi @Rim-unix , what do you mean by DR Indexers? First of all, I suppose that you have an Indexer Cluster, is that correct? In any case, you should design a multisite Indexer Cluster where the secondary site is on AWS. To do this, I suggest engaging Splunk PS or a certified Splunk Architect. Ciao. Giuseppe
For DR purposes you should use the multisite cluster option. See https://docs.splunk.com/Documentation/SVA/current/Architectures/M2M12 for more.
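As a rough illustration of what the multisite settings involve, here is a sketch of server.conf stanzas. The site names, factors, and secret are placeholders, and mode = manager assumes Splunk 8.1+ terminology; follow the M2M12 reference above for the actual design:

```ini
# On the cluster manager (placeholder values)
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
pass4SymmKey = <your_cluster_secret>

# On each peer node in the DR site
[general]
site = site2
```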