All Posts


Hi @melanie_granite, I'm not sure if this is still of use to you, but to answer your question: you have to configure inputs in Settings > Data Inputs > Google Spreadsheet. Check this link: https://community.splunk.com/t5/All-Apps-and-Add-ons/Google-Import-Export-app-configuration/m-p/521681
Hello @esnaidergarzon, I'm not sure if this still helps since the question was asked quite a while ago, but if you follow the documentation here: https://lukemurphey.net/projects/splunk-google-docs/wiki/How_to_setup_app you should be able to set it up correctly.
Hi, you may already have your result, but I'm replying so it can help others. Since you are using the sendemail command and need the Contact field, do the following:

...| rename Contact as _Contact | sendemail to=$result._Contact$ subject=subject sendresults=true format=table

When sending the mail, Splunk will ignore the _Contact field in the results because it starts with an underscore (_). Hope this helps. Happy Splunking!
We have Splunk message validation scenarios in our test suite and need to know whether any open APIs are available for test automation. Automation framework: TOSCA.
Hi Team, we are using Alert Manager Enterprise to receive alert notifications. As we are new to Alert Manager Enterprise, we would like to understand a few of its features. Firstly, we have use cases with threshold criteria, and we would like to understand the behavior when events fall inside the threshold window rather than matching it exactly. For example, we set an alert to trigger on 5 failure attempts for any user within 5 minutes. In practice, we observed 5 failure attempts for a user within 2 minutes; since the threshold window is set to 5 minutes, will those 5 attempts within 2 minutes still trigger the alert?
And another not-so-happy user here. The documentation clearly states: "When a search head cluster member is in manual detention, it stops accepting all new searches from the search scheduler or from users. Existing ad-hoc and scheduled search jobs run to completion. New scheduled searches are distributed by the captain to search head cluster members that are up and not in detention." As expected, an interactive search is refused. Yet when I monitor the active_historical_search_count of a member in detention, I observe the count going up and down, and the Job Manager screen shows lots of newly created jobs. Either I misunderstood the detention feature, the documentation is off the mark, or there is a bug. Which is it?
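(For anyone wanting to confirm the detention state while reproducing this, a quick REST check is possible. This is only a sketch: it assumes the query runs on the member itself and that your Splunk version exposes a manual_detention field on this endpoint.)

```
| rest splunk_server=local /services/shcluster/member/info
| fields manual_detention, active_historical_search_count
```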
Hello Splunkers! Our Splunk instance is currently configured for single-pipeline processing instead of parallel processing, so the load is not distributed but instead spikes on one core. We want to distribute the load across the other CPU cores in parallel. Please suggest how I can check which CPU cores Splunk uses and which config file I need to change.
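For reference, ingestion parallelization is controlled by the parallelIngestionPipelines setting in server.conf. A minimal sketch, assuming your instance has spare cores and disk I/O headroom (the right pipeline count depends on your hardware, and extra pipelines multiply memory and I/O use):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2
```

Per-core CPU usage of the splunkd process can be checked with OS tools such as top or htop, and pipeline activity is reported in metrics.log (index=_internal source=*metrics.log group=pipeline).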
Fillnull works properly in my case. Thank you!  
Got the solution. Thank you so much.
Thank you, now I am getting the correct output, but the Phase data is missing.

| tstats count as Total where index="abc" by _time, Type, Phase span=1d | timechart span=1d max(Total) as Total by Type | untable _time Type Total

The Phase field is missing from the final table. I tried to add the Phase field to the untable command, but it shows an error. Please suggest.
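One common workaround, sketched here and untested against your data, is to combine Type and Phase into a single split-by field before the timechart (timechart accepts only one by-clause field), then split it back out after untable:

```
| tstats count as Total where index="abc" by _time, Type, Phase span=1d
| eval TypePhase = Type . ":" . Phase
| timechart span=1d max(Total) as Total by TypePhase
| untable _time TypePhase Total
| eval Type = mvindex(split(TypePhase, ":"), 0), Phase = mvindex(split(TypePhase, ":"), 1)
```

This assumes ":" does not occur inside your Type or Phase values; pick any separator that doesn't.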
Hi, we implemented persistent queues to handle high latency between locations, e.g. when the connection is down for some minutes or hours and the firewall does not buffer logs itself, so logs get lost once the queues fill up. Is there a way to monitor the persistent queues? Fill ratio or other metrics? I don't want to build custom scripts that collect this information via bash.
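One script-free option is to use the queue metrics the forwarder already writes to metrics.log. A sketch, assuming your forwarder's internal logs are indexed and with <your_forwarder> as a placeholder for its host name; the exact queue names depend on your inputs:

```
index=_internal host=<your_forwarder> source=*metrics.log* group=queue
| eval fill_pct = round(100 * current_size_kb / max_size_kb, 1)
| timechart span=5m max(fill_pct) by name
```

From there it's straightforward to build an alert on the fill percentage instead of polling the disk with bash.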
I am trying to forward logs on a shared disk to Splunk Cloud using a universal forwarder on a server in a cluster configuration with a shared disk, and to use alerts to monitor messages matching specific conditions. There is no problem during normal operation, but when a failover occurs in the cluster, the universal forwarder on the standby server reads the logs on the shared disk from the beginning, resulting in duplicate logs in Splunk Cloud. In a shared-disk cluster configuration like this, has anyone found a way to prevent logs from being duplicated in Splunk Cloud, for example by not re-forwarding the monitored logs from the beginning after a failover? *Translated by the Splunk Community Team*
It is not clear what you are trying to achieve with your sample code. The first line reduces your columns to just two (SERVERS and count) - what about the other 5+ columns? Do you still want them? Are they to be added by lookups afterwards? The second line doesn't work because Domain is no longer a column (it was removed by the first line) - are you trying to count the number of servers in each domain? Does this do what you want?

| eventstats count by SERVERS
| eventstats dc(SERVERS) as Domain_Count by Domain
| eventstats dc(SERVERS) as Total_Servers
I want to add a text box to a dashboard, e.g.:

search | table field1 field2 field3

Here I want to put a text box for field3 on the panel itself, so that the panel can be filtered by the field3 value.
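A minimal Simple XML sketch of a panel-level text input: field3_tok is an arbitrary token name chosen here, and index=your_index stands in for your actual base search.

```
<form>
  <row>
    <panel>
      <input type="text" token="field3_tok">
        <label>Filter field3</label>
        <default>*</default>
      </input>
      <table>
        <search>
          <query>index=your_index field3="$field3_tok$" | table field1 field2 field3</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Placing the input inside the panel (rather than in a top-level fieldset) keeps the filter visually attached to that one table.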
@gcusello What are the validation criteria for the Splunk Universal Forwarder?
Hi Hendrik, you can use the normal toInt() or toString() for text values. Example: getBody().toInt() or getBody().toInteger(). I can't remember 100% whether it's toInt() or toInteger().
Hi, my question is: can we combine these regexes under one or more blacklist entries, so that we can add a few more regexes despite the limitation that 10 is the maximum?
Hello, I have a table with 7 columns, some of them calculated from a lookup. I want to count the total of one of the columns and then calculate the percentage of another column based on that total. I tried this, but I'm getting 0 results:

| stats count by SERVERS
| stats count(SERVERS) by Domain as "Domain_Count"
| eventstats sum(count) as Total_Servers

What can I do? Thanks
Hi @CyberGuy1033, you can create a fresh Splunk installation and configure it as a Search Head, connecting it to your Indexers. Ciao. Giuseppe
Hi @damode1, when you speak of troubleshooting data ingestion, you probably mean searching in Splunk. In that case you can use any Search Head that can access the data; there isn't a preferable one. If instead you are using Search Heads for inputs, that isn't correct: it's always better to have dedicated roles (usually Universal or Heavy Forwarders) ingest the data. The only exception is Splunk Cloud, where you can do almost everything on the Search Heads. Ciao, Giuseppe