All Topics



Hi, I want to get a count of all the panels present in a dashboard, and the total time each panel takes to execute its query (each panel has a different query) after clicking the Submit button.
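One possible approach, sketched below: the panel count can be read from the dashboard's Simple XML via the REST API by counting `<panel>` tags. The dashboard name my_dashboard is a placeholder.

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="my_dashboard"
| rex field=eai:data max_match=0 "(?<panel_tag><panel>)"
| eval panel_count=mvcount(panel_tag)
| table title panel_count
```

For per-panel runtimes, each panel search that runs on Submit leaves a record in the _audit index (action=search info=completed) with a total_run_time field, which you can correlate back to the panel queries.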
I created a dashboard for 24-hour monitoring and also a 10-minute dashboard, which I merged into the existing one. I need the dashboard colors to be based on a threshold (last seen within 24 hours: red; last seen within 10 minutes: green). I know I can add the color entries by editing the XML. I tried that, but I get an "invalid" error message. How can I do this in the XML editor, or perhaps via Format visualization?

My Splunk queries are:

For 24-hour monitoring:
| tstats latest(_time) as latest where index=* earliest=-48h by host
| eval minutesago=round((now()-latest)/60,0)

For 10-minute monitoring:
| tstats latest(_time) as latest where index=* earliest=-10m by host
| eval minutesago=round((now()-latest)/60,0)
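One way to express threshold coloring in Simple XML is a `<format>` element on a table panel. A minimal sketch (the hex colors and the 10-minute threshold are illustrative, not required values):

```xml
<table>
  <search>
    <query>| tstats latest(_time) as latest where index=* earliest=-48h by host
| eval minutesago=round((now()-latest)/60,0)</query>
    <earliest>-48h</earliest>
    <latest>now</latest>
  </search>
  <format type="color" field="minutesago">
    <colorPalette type="list">[#53A051,#DC4E41]</colorPalette>
    <scale type="threshold">10</scale>
  </format>
</table>
```

With a two-color list palette and one threshold, values of minutesago below 10 get the first color (green) and values at or above it get the second (red). A common cause of "invalid" errors here is placing `<format>` outside the panel element it belongs to.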
Hi, is there a way to remove or quarantine multiple search peers (indexers) at the same time? Doing it one by one on every search head, as the documentation describes, is not practical for me.
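I'm not aware of a single documented bulk command, but since removing a distributed search peer maps to a REST DELETE on the search head's management port, you can loop over peers from a shell. The hostnames, port, and credentials below are placeholders, and you would still run this once per search head (or loop over the search heads too):

```
# Remove several search peers from one search head via its management port.
for peer in idx01:8089 idx02:8089 idx03:8089; do
  curl -k -u admin:changeme -X DELETE \
    "https://searchhead:8089/services/search/distributed/peers/${peer}"
done
```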
Hi all, I have a question about installing the Splunk App for AWS / Splunk Add-on for AWS. My on-prem deployment looks like this: datasource <-> HF <-> IDX cluster <-> SH cluster. Where do I have to install the app/add-on? https://docs.splunk.com/Documentation/AWS/6.0.1/Installation/Installon-prem says both should be installed on the SH, with the add-on also on the HF. https://docs.splunk.com/Documentation/AddOns/released/AWS/Distributeddeployment has a table saying the add-on should go only on the HF. Which document is correct? Thanks in advance.
Good day, I have noticed that Incident Review has shown no events for about a day. I checked the indexers with a search, and records are indeed present there. Do you know what it could be? I appreciate your help. Regards.
I have a litigation hold report and I need to display whether the account is disabled. I created a lookup table so I can display the user's full name and whether the account is disabled, but when I pull data from the lookup table I can't display the status. Here is my search:

eventtype=msexchange-mailbox-usage Database="*" LitigationHoldEnabled=True
| dedup User
| table User, TotalDeletedItemSize, TotalItemSize, Database, Total, LitigationHoldEnabled
| addtotals fieldname=Total
| lookup ActiveDirectoryUsers.csv User OUTPUT name
| stats max(Total) as Total by name, Database
| eval Total=round((Total/1000/1000/1000),2)
| rename name as "Mailbox User Name", Total as "Mailbox Size (GB)"

In the lookup table I have: name, User, status. For example: name: Rumer, Shelly; status: disabled. In my final report all I see is the name, database, and total; I'm not able to display the status. Thank you.
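The status never reaches the final table because the lookup only OUTPUTs name, and the later stats drops every field not listed in its by clause. A sketch of the corrected tail of the search (assuming the lookup's status column is literally named status):

```
| lookup ActiveDirectoryUsers.csv User OUTPUT name, status
| stats max(Total) as Total by name, status, Database
| eval Total=round((Total/1000/1000/1000),2)
| rename name as "Mailbox User Name", Total as "Mailbox Size (GB)", status as "Account Status"
```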
Need some help with a search.

{
  "id": "123",
  "start_time": "2020-08-01 15:00:00",
  "end_time": "2020-08-01 16:00:00",
  "status": "FAIL",
  "details": [
    { "sub_id": 1, "status": "PASS" },
    { "sub_id": 2, "status": "FAIL" }
  ]
}
{
  "id": "124",
  "start_time": "2020-08-01 16:05:00",
  "end_time": "2020-08-01 16:30:00",
  "status": "PASS",
  "original_id": "123",
  "details": [
    { "sub_id": 1, "status": "PASS" },
    { "sub_id": 3, "status": "PASS" }
  ]
}

These two events can be joined on the id and original_id fields. The output should show data from id 123 but override some fields (end_time, status, sub_id, and sub_status) with values from the second event. The tabular output I expect is:

id   start_time           end_time             status  sub_id  sub_status
123  2020-08-01 15:00:00  2020-08-01 16:30:00  PASS    1       PASS
123  2020-08-01 15:00:00  2020-08-01 16:30:00  PASS    3       PASS

Any help is appreciated, thanks.
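One way to sketch this without join: normalize both events to a common key with coalesce, keep the earliest start_time and the latest end_time/status/details per key, then expand the detail pairs. The index and sourcetype names are placeholders:

```
index=my_index sourcetype=my_json
| spath
| eval join_id=coalesce(original_id, id)
| eval pair=mvzip('details{}.sub_id', 'details{}.status')
| stats earliest(start_time) as start_time, latest(end_time) as end_time,
        latest(status) as status, latest(pair) as pair by join_id
| mvexpand pair
| eval sub_id=mvindex(split(pair, ","), 0), sub_status=mvindex(split(pair, ","), 1)
| rename join_id as id
| table id start_time end_time status sub_id sub_status
```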
I am currently trying to filter EventCode 4703. I wanted to do this via a blacklist, but without fully blocking the EventCode: I'd like to apply a regex that matches Account Names ending in $ and drop only those logs from being sent to Splunk. I am also trying to filter 4688 and 4689. I was following this guide, https://gist.github.com/automine/a3915d5238e2967c8d44b0ebcfb66147, but it doesn't seem to work for me.
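Windows Event Log filtering is done with blacklist entries in inputs.conf on the forwarder, using the key="regex" format that gist describes. A sketch (the Message regex may need tuning, since 4703 events contain more than one Account Name line):

```
[WinEventLog://Security]
# Drop 4703 only when an Account Name ends in $ (machine accounts);
# 4703 events for regular accounts are still forwarded.
blacklist1 = EventCode="4703" Message="Account Name:\s+[^\r\n]+\$"
# Drop all 4688 and 4689 events.
blacklist2 = EventCode="(4688|4689)"
```

The forwarder's Splunk service has to be restarted for the change to take effect, which is a common reason such filters "don't seem to work".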
I have a bunch of incoming events that either link to a single outcome event or don't link at all. I'm interested in finding all the events that don't link to an outcome event. For example:

Tx 1, event="message_received", id="A"
Tx 2, event="message_received", id="A"
Tx 3, event="message_received", id="B"
Tx 4, event="message_received", id="A"
...
Tx 20, event="batch_send_success", id="B"

I would like to run a search that determines which events have not been sent. In the case above, all the events with id="A" have no corresponding event="batch_send_success", id="A" event, so the search should return Tx 1, Tx 2, and Tx 4. I tried using transaction with keepevicted=true, but that doesn't seem to work for many-to-one linkages like this: it works in reverse, considering every "message_received" except the most recent one as evicted, and gives me false negatives. Thanks in advance for any help!
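Instead of transaction, one sketch is to flag, per id, whether any success event exists, then keep only the received events with no flag. The index name and time range are placeholders:

```
index=my_index (event="message_received" OR event="batch_send_success")
| eventstats count(eval(event="batch_send_success")) as sent_count by id
| where event="message_received" AND sent_count=0
| table _time event id
```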
We have a heartbeat service that runs every minute, recording a timestamp like this: Heartbeat: 2020-09-21T13:50:00.3031757-06:00. I'm hoping to detect cases where 15 minutes or more elapse between the current timestamp and the previous one, which indicates that a server restart has happened. Basically: Heartbeat: 2020-09-21T13:50:00.3031757-06:00 minus Heartbeat: 2020-09-21T13:35:00.3031757-06:00. Does anyone know how I could detect this? Thanks.
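A sketch using streamstats to compare each heartbeat against the previous one per host (the sourcetype name is a placeholder, and the 15-minute threshold can be adjusted):

```
sourcetype=my_heartbeat "Heartbeat:"
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time by host
| eval gap_minutes=round((_time - prev_time)/60, 1)
| where gap_minutes >= 15
| eval previous_heartbeat=strftime(prev_time, "%Y-%m-%dT%H:%M:%S%z")
| table _time host previous_heartbeat gap_minutes
```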
When I create an action or try to change the variables in any of the alert actions for an alert, I get the message "In handler 'savedsearch': Expecting different token" and cannot save any of the changes to the search. This seems to be a permissions issue, but I'm not sure how to resolve it. Any help is appreciated.
Do these warning messages indicate that the search is still running despite the errors, or that it is definitely not working?

Asynchronous bundle replication might cause (pre 4.2) search peers to run searches with different bundle/config versions. Results might not be correct.
[subsearch]: Subsearches of a real-time search run over all-time unless explicit time bounds are specified within the subsearch.
[subsearch]: Successfully read lookup file '/splunk/etc/apps/SA-Utils/lookups/qualitative_thresholds.csv'.
remote search process failed on peer
Overview: Flexible I/O (FIO) is a storage I/O testing tool. It offers options to perform a variety of storage tests, has detailed reporting, is CLI-based, and can run simultaneous tests across many machines using one control node. A pre-compiled version is available for multiple *nix distributions and Windows. See Flexible I/O Binary packages for the latest builds.
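As a concrete illustration, a random-read job resembling the kind of test often run against a Splunk hot/warm volume might look like the following; the path, block size, job count, and runtime here are illustrative choices, not official Splunk reference parameters:

```
fio --name=splunk-randread --directory=/opt/splunk/var/lib/splunk \
    --rw=randread --bs=64k --size=4g --numjobs=4 --direct=1 \
    --ioengine=libaio --runtime=120 --time_based --group_reporting
```

The --direct=1 flag bypasses the page cache so the result reflects the storage itself rather than memory.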
Is there any chunk size applied while reading data on the connections, e.g. 2 KB, 4 KB, or 8 KB? Is there a way I can check this setting? Also, is there any logic in Splunk to make sure that a record has been read completely from the connection?
What search command can I use to get the list of servers that are not running the Zabbix Agent service?
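Splunk only sees hosts that report, so this needs an inventory of all servers to compare against. A sketch assuming a hypothetical all_servers.csv lookup and a zabbix index (both names are placeholders):

```
| inputlookup all_servers.csv
| fields host
| search NOT
    [ search index=zabbix earliest=-24h
      | stats count by host
      | fields host ]
```

Any host in the inventory that has sent no Zabbix agent data in the last 24 hours remains in the result.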
I'm opening a new thread because in the previous one I was reviewing several errors at the same time. For this specific error message I have already read all the forum posts and everything I could find on the internet, but I am still having problems:
• I have no license problems.
• I don't have another outputs.conf file on the heavy forwarder.
• I don't see high IOPS load after checking with iostat and iotop.
• The devices send their logs via syslog to the heavy forwarder, which receives them; when reviewing them, the logs arrive with up-to-date timestamps.

For some reason the heavy forwarder doesn't forward the logs to the indexers consistently: when querying with index=* host=xxxx | stats count by host, _time I see records, but with delays of 8-10 hours. When checking var/log/splunk/splunkd.log, I use grep xxxx splunkd.log to see only the errors from the host that interests me, and that is where I see this message:

09-21-2020 07:20:48.483 -0500 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to host_dest=indexer inside output group default-autolb-group from host_src=xxxxx has been blocked for blocked_seconds=100. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Since the heavy forwarder does not have high IOPS problems, the problem seems to be in the indexers, which are not able to receive the information. Given all that, what do you recommend reviewing?
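To confirm whether the indexers' queues are the bottleneck, one sketch is to chart the blocked flags from metrics.log on the indexers (the host filter idx* is a placeholder for your indexer naming pattern):

```
index=_internal source=*metrics.log* group=queue host=idx*
| eval is_blocked=if(blocked=="true", 1, 0)
| timechart span=5m sum(is_blocked) as blocked_samples by name
```

If the indexqueue or typingqueue shows sustained blocked samples, the back-pressure originates on the indexers (often disk I/O or replication) rather than on the heavy forwarder.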
I have two indexes, Index A and Index B, with a common key "ID". I want to compare the two indexes and report which status codes don't match, plus any missing records.

Index A:
ID   status_code
101  A01
102  A11
103  B10
104  M01
105  D01
101  A02

Index B:
ID   status_code
101  A01
102  B10
103  B10
104  M01
101  Z01

Expected output (mismatched records):
ID   Index A code   Index B code
102  A11            B10
105  D01
101  A02            Z01
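A sketch of one way to line the two indexes up by ID (the index names index_a and index_b are placeholders). Note that IDs such as 101, which carry multiple codes per index, make this a set comparison per ID rather than a strict row-by-row match:

```
(index=index_a OR index=index_b)
| stats values(eval(if(index=="index_a", status_code, null()))) as code_a,
        values(eval(if(index=="index_b", status_code, null()))) as code_b
        by ID
| eval code_a=mvjoin(code_a, ","), code_b=mvjoin(code_b, ",")
| where code_a != code_b OR isnull(code_a) OR isnull(code_b)
| rename code_a as "Index A code", code_b as "Index B code"
```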
Hi, Rony here. We've got a Search Head Cluster with 6 search heads. When a user accesses our Splunk URL, it hits a load balancer, which randomly routes them to one of the 6 search heads. Authentication is via LDAP integration. Here's the problem: when a user logs in to Splunk, that user's groups (and as a result, their roles) only get updated on the specific search head they happen to hit. If the user continues to use Splunk, the load balancer might connect them to another search head, where their groups are not updated. The user then tries to perform an action that requires the permissions provided by the group, and fails. Is there a way that, when the user logs in, their assigned LDAP groups get replicated to all the search heads in the cluster?
Is anyone using Splunk Phantom to query ServiceNow catalog request items? Someone at our company is trying to use the out-of-the-box Splunk Phantom application to query request item variables in ServiceNow. They can see the variable name, but not the value the user enters for the variable. There are no errors and they are unable to see what the Splunk Phantom application is actually calling.
Has anybody installed Sophos Anti-Virus for Linux on the same machines as their Splunk Head and Splunk Indexer?  If so, what are the gotchas?