All Posts



Hi Team, we are using Alert Manager Enterprise to receive alert notifications. Since we are new to it, we would like to understand a few of its features. First, we have use cases with threshold criteria, and we want to confirm how the time window is evaluated when events fall inside the window but do not span exactly the configured duration. For example, we set an alert to trigger on 5 failed login attempts for any user within 5 minutes. In practice, we observed a user generate 5 failed attempts within 2 minutes. Since we configured the threshold window as 5 minutes, will 5 failures within 2 minutes still trigger the alert?
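For reference, this kind of time-window threshold is usually expressed in plain SPL as a sliding window over the failure events. This is only a sketch with hypothetical index and field names, not necessarily the logic Alert Manager Enterprise uses internally; if the threshold is evaluated this way, 5 failures within 2 minutes also fall inside every surrounding 5-minute window, so it would trigger:

```
index=auth action=failure
| streamstats time_window=5m count AS failures BY user
| where failures >= 5
```

With a scheduled alert, the outcome also depends on how often the search runs and what time range each run covers, so it is worth checking those settings as well.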
And another not so happy user here. The documentation clearly states "When a search head cluster member is in manual detention, it stops accepting all new searches from the search scheduler or from users. Existing ad-hoc and scheduled search jobs run to completion. New scheduled searches are distributed by the captain to search head cluster members that are up and not in detention." As expected, an interactive search is refused. Yet when I monitor the active_historical_search_count of a member in detention, I observe the count going up and down. When I look at the Job Manager screen, I see lots of newly created jobs. Either I misunderstood the detention feature, or the documentation is off the mark, or there is a bug. What is it?  
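One way to narrow this down is to check whether the jobs appearing on the detained member are genuinely new user or scheduler searches, as opposed to the member's own internal housekeeping searches (which can also move active_historical_search_count). A sketch via the jobs REST endpoint; the placeholder member name is hypothetical and the exact field names can vary by version:

```
| rest splunk_server=<detained_member> /services/search/jobs
| table title author provenance isSavedSearch
```

If everything listed is system-owned or internal provenance, the detention feature is working as documented and only the counter is misleading.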
Hello Splunkers!! Our Splunk setup is currently configured for single-pipeline processing instead of parallel processing, so the load is not being distributed and instead spikes on one CPU core. We want to distribute the load across all the other cores in parallel. Please suggest how I can check which cores Splunk is using and which config file I need to change.
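In case it helps frame the question: ingestion parallelism is controlled in server.conf via parallelIngestionPipelines (note this affects the ingestion pipeline only, not searches), and per-core usage can be checked with OS tools such as `top -H -p <splunkd_pid>` or the _introspection index. A minimal sketch, assuming the spikes come from ingestion:

```
# server.conf (restart required); raises ingestion pipeline sets from 1 to 2
[general]
parallelIngestionPipelines = 2
```

Each additional pipeline set consumes extra CPU and memory even when idle, so it is usually worth confirming the bottleneck first before raising this.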
fillnull works properly in my case. Thank you!
Got the solution. Thank you so much.
Thank you, now I am getting the correct output, but the Phase data is missing.

| tstats count as Total where index="abc" by _time, Type, Phase span=1d
| timechart span=1d max(Total) as Total by Type
| untable _time Type Total

The Phase field is missing from the final table. I tried adding the Phase field to the untable command, but it throws an error. Please suggest.
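One common workaround, sketched here, is to pack both fields into a single series before timechart (which accepts only one split-by field) and split them apart again after untable; the separator string is arbitrary:

```
| tstats count AS Total where index="abc" by _time span=1d Type Phase
| eval series=Type."|".Phase
| timechart span=1d max(Total) AS Total by series
| untable _time series Total
| eval Type=mvindex(split(series,"|"),0), Phase=mvindex(split(series,"|"),1)
| fields _time Type Phase Total
```

Pick a separator that cannot occur inside the Type or Phase values themselves.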
Hi, we implemented persistent queues to absorb high latency and outages between locations. E.g. when the connection is down for some minutes or hours, the firewall does not buffer logs itself, so data is lost once the queues fill up. Is there a way to monitor the persistent queues - fill ratio or other metrics? I don't want to build custom scripts that collect this information via bash.
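For what it's worth, queue metrics are emitted to metrics.log in _internal, and a fill-ratio search along these lines may cover it. Whether the persistent (disk-backed) portion reports under the same group and queue name varies by version and input type, so verify the field and queue names on your own instance before relying on this:

```
index=_internal source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name
```

An alert on fill_pct crossing, say, 80 for a sustained period gives early warning before data starts being dropped.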
I am trying to forward logs on a shared disk to Splunk Cloud using a universal forwarder on servers in a cluster configuration built on that shared disk, and to use alerts to monitor for messages matching specific conditions. There is no problem during normal operation, but when a failover occurs in the cluster, the universal forwarder on the standby server reads the logs on the shared disk from the beginning, resulting in duplicate logs in Splunk Cloud. In a shared-disk cluster configuration like this, has anyone found a way to prevent logs from being duplicated in Splunk Cloud, such as not re-forwarding the monitored logs from the beginning after a failover? *Translated by the Splunk Community Team*
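One approach seen in practice (a sketch, not an official pattern): the forwarder tracks file read positions in the fishbucket under $SPLUNK_DB, which by default sits on each node's local disk, so the standby node starts with no checkpoints and re-reads everything. Pointing SPLUNK_DB (or installing the entire forwarder) onto the shared disk lets the standby resume from the active node's checkpoints, provided only one forwarder instance runs at a time:

```
# splunk-launch.conf on both cluster nodes (hypothetical shared path)
SPLUNK_DB=/shared_disk/splunkforwarder/var/lib/splunk
```

This requires the cluster manager to guarantee the forwarder is stopped on the old active node before it starts on the new one, since two splunkd processes must never share the same $SPLUNK_DB concurrently.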
It is not clear what you are trying to achieve with your sample code. The first line reduces your columns to just 2 (SERVERS and count) - what about the other 5+ columns? Do you still want these? Are they to be added by lookups afterwards? The second line doesn't work because Domain is no longer a column (it was removed by the first line) - are you trying to count the number of servers in each domain? Does this do what you want?

| eventstats count by SERVERS
| eventstats dc(SERVERS) as Domain_Count by Domain
| eventstats dc(SERVERS) as Total_Servers
I want to add a text box to a dashboard, e.g.:

search | table field1 field2 field3

Here I want to put a text box for field3 on the panel itself, so that the table can be filtered by the field3 value.
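A minimal Simple XML sketch of this pattern (the token name and base search are placeholders for your own): a text input sets a token, and the panel's search filters field3 with it:

```
<form>
  <fieldset submitButton="false">
    <input type="text" token="field3_tok">
      <label>Filter field3</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>your_base_search field3=$field3_tok$ | table field1 field2 field3</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Defaulting the token to * keeps the table populated before the user types anything.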
@gcusello What are the validation criteria for the Splunk Universal Forwarder?
Hi Hendrik, you can use the normal toInt() or toString() for text values. Example: getBody().toInt() or getBody().toInteger() - I can't remember 100% whether it's toInt() or toInteger().
Hi, my question is: can we consolidate these regexes under one or more blacklist entries, so that we can fit in a few more regexes despite the limit of 10 entries?
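If this concerns Windows event log inputs: inputs.conf accepts numbered blacklist entries (blacklist, blacklist1 ... blacklist9), and each key=regex pair takes a regular expression, so several patterns can usually be merged into one entry with alternation. A hedged sketch with made-up EventCodes:

```
[WinEventLog://Security]
# merge three separate EventCode rules into one entry via alternation
blacklist1 = EventCode="^(4662|5145|4688)$"
# a second entry can still combine an EventCode with a Message pattern
blacklist2 = EventCode="4663" Message="(?i)ReadAttributes"
```

Alternation only works when the merged rules match against the same field; rules keyed on different fields still need separate entries.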
Hello, I have a table with 7 columns, some of them calculated from a lookup. I want to count the total of one of the columns and then calculate the percentage of another column based on that total. I tried this but I'm getting 0 results:

| stats count by SERVERS
| stats count(SERVERS) by Domain as "Domain_Count"
| eventstats sum(count) as Total_Servers

What can I do? Thanks
Hi @CyberGuy1033, you can create a fresh Splunk installation and configure it as a Search Head, connecting it to your Indexers. Ciao. Giuseppe
Hi @damode1, when you speak of troubleshooting data ingestion, you probably mean searching on Splunk. In this case you can use any Search Head that can access the data; there isn't a preferable one. If instead you are using SHs for inputs, that isn't correct: it's always better to have dedicated roles (usually Universal or Heavy Forwarders) to ingest data. The only exception is Splunk Cloud, where you can do almost everything on the Search Heads. Ciao, Giuseppe
Hi @revanthammineni, if you look on Splunkbase (https://splunkbase.splunk.com) there are many apps for Jira integration. Probably the one for you is the Jira App ( https://splunkbase.splunk.com/app/5806 ), but there are also others. Ciao. Giuseppe
Hi @Dustem, good for you, see you next time! Let me know if I can help you further, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Hi @mohammadsharukh, if I remember correctly, there's a sample short-lived-account detection in the Splunk Security Essentials app, which I recommend. Anyway, don't use the transaction command because it's very slow; please try this search (note that earliest/latest need an eval that returns _time for the matching EventCode, not a boolean):

sourcetype=wineventlog (EventCode=4726 OR EventCode=4720)
| stats min(eval(if(EventCode==4720,_time,null()))) AS earliest max(eval(if(EventCode==4726,_time,null()))) AS latest values(dest) AS dest values(src_user) AS src_user values(Account_Domain) AS Account_Domain BY user
| eval diff=latest-earliest, creation_time=strftime(earliest,"%Y-%m-%d %H:%M:%S"), deletion_time=strftime(latest,"%Y-%m-%d %H:%M:%S")
| where diff<240*60
| table creation_time deletion_time dest user src_user Account_Domain

Ciao. Giuseppe
Hi, I need to see what inputs people are entering into dashboard panels (i.e. what filters they are applying or what they are searching for). So far I have used the _audit and _internal indexes to see which users have accessed which saved searches on each dashboard, but I have not been able to identify the input values they entered. I am hoping to build a dashboard table for this. Thanks.
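One angle that sometimes works (a sketch with a hypothetical token/field name): for dashboards whose panel searches inline the token values, the literal search string recorded in _audit already contains the substituted input, so it can be extracted with rex:

```
index=_audit action=search info=granted search=* NOT user="splunk-system-user"
| rex field=search "field3=\"?(?<entered_value>[^\"\s|]+)"
| where isnotnull(entered_value)
| table _time user entered_value search
```

This does not help for inputs passed only to saved searches by reference; in that case the substituted values may not appear in the audit trail at all.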