All Topics


I have multiple concurrent saved searches (around six). Each search ends with an outputlookup command that writes to a separate KV store collection. The searches take too long to execute the outputlookup command; they run fine if outputlookup is removed. Any suggestions? I know there is a limit on the number of rows outputlookup can write, but all of the searches are within that limit, so I am wondering whether there is a limit on the number of concurrent outputlookup commands. Is there any such thing? Does one search's outputlookup wait for another's to complete? If so, is there a solution?

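KV store write behavior is governed by several settings in the [kvstore] stanza of limits.conf, including one specifically about outputlookup threading. A minimal sketch of the stanza as a starting point for tuning (the values shown are approximate defaults, not recommendations):

# limits.conf (values are illustrative)
[kvstore]
# maximum rows a single outputlookup can write to a collection
max_rows_per_query = 50000
# documents written per batch during an outputlookup save
max_documents_per_batch_save = 1000
# upper bound on the size of a single batch save, in MB
max_size_per_batch_save_mb = 50
# threads used by each outputlookup; 0 lets Splunk pick based on CPU count
max_threads_per_outputlookup = 1
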
I need to auto-refresh my dashboard every 30 seconds. I am trying to set up a refresh for a form that was created. Is the below correct? It hasn't been working for me. Am I missing something, or is this not the correct placement? If it matters, I am using Splunk Cloud 8.2.

<form version="1.1" theme="dark" refresh="30">
  <label>Health Dashboard - 24 Hours</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timetoken" searchWhenChanged="true">
      <label>Select Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Servers_Memory_Usage</title>
      <chart>
        <search>

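For context: in Simple XML, per-search refresh is usually expressed with a <refresh> element inside each <search> rather than as an attribute on <form>. A minimal sketch under that assumption (the query is a placeholder; the token names follow the form above):

<search>
  <query>index=_internal | timechart count</query>
  <earliest>$timetoken.earliest$</earliest>
  <latest>$timetoken.latest$</latest>
  <refresh>30s</refresh>
  <refreshType>delay</refreshType>
</search>

With refreshType set to delay, the 30s countdown starts after the previous search completes rather than on a fixed interval.
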
What happens when an ad hoc search is issued on a search head in a distributed environment? Does the search head communicate with the cluster master or directly with the indexers? I'm looking for clarification.

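Not an answer to the architecture question itself, but one way to see which indexers a search head dispatches to directly is to list its configured search peers over REST. A minimal sketch (column names may vary by version):

| rest /services/search/distributed/peers
| table title status version
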
Hello champions, I ran the three queries below on the given datasets to find out which users ran the enable command on which host at what time:

1. index=networking user* enable* host*

Oct 15 08:17:45 brg-c-1.com.au 8279: Oct 15 2021 08:17:44.820 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:John logged command:!exec: enable
Oct 15 08:17:35 brg-c-1.com.au 8278: Oct 15 2021 08:17:34.082 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:lili logged command:!exec: enable failed
Sep 15 23:29:55 gsw-r-4.com.au 466: Sep 15 23:29:54.009: %PARSER-5-CFGLOG_LOGGEDCMD: User:Khan logged command:!exec: enable
Aug 12 15:18:37 edc-r-4.com.au 02: Aug 12 15:18:36.472: %PARSER-5-CFGLOG_LOGGEDCMD: User:Khan logged command:!exec: enable
Aug 11 03:31:05 ctc-s.com.au 134: Aug 10 17:31:04.859: %PARSER-5-CFGLOG_LOGGEDCMD: User:cijs logged command:!exec: enable
Jan 29 11:30:58 brg-c-1.com.au 2082: Jan 29 2021 11:30:57.141 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:chick logged command:!exec: enable failed

2. index=linux_logs host=edc-03-tacacs enable*

Oct 26 12:56:13 egc-03-ts tc_plus[149]: enable query for 'kim' tty86 from 202.168.5.22 accepted
Oct 26 11:33:44 egc-03-ts tc_plus[259]: enable query for 'kim' tty86 from 202.168.5.22 accepted
Oct 21 11:35:59 egc-03-ts tc_plus[285]: enable query for 'John' tty86 from 202.168.5.23 accepted
Oct 21 11:35:53 egc-03-ts tc_plus[282]: enable query for 'Han' tty86 from 202.168.5.23 rejected

3. index=linux_logs host=gsw-03-tacacs enable*

Sep 30 13:35:53 gdw-02-ts tc_plus[143]: 192.168.2.21 James tty1 192.168.6.56 stop task_id=55161 timezone=AEST service=shell start_time=1632972953 priv-lvl=0 cmd=enable
Sep 29 12:38:17 gdw-02-ts tc_plus[319]: 192.168.2.24 linda tty1 192.168.5.3 stop task_id=15729 timezone=AEST service=shell start_time=1632883097 priv-lvl=0 cmd=enable
Sep 15 22:23:23 gdw-02-ts tc_plus[1649]: 192.168.4.2 Brown tty322 192.168.46.1 stop task_id=2574 timezone=AEST service=shell start_time=1631708603 priv-lvl=0 cmd=enable
Sep 9 14:58:32 gdw-02-ts tc_plus[2030]: 192.168.2.29 Gordan tty1 192.168.26.3 stop task_id=14329 timezone=AEST service=shell start_time=1631163512 priv-lvl=0 cmd=enable

I tried hard but could not find a query that merges all of this data (indexes and hosts) to find out who ran the enable command successfully, at what time, and on which host, and to get those results into a table like:

| table date host user command(enable) status(success)

Could anyone please help me? Thank you in advance.

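Not a definitive answer, but one possible approach as a sketch: search both indexes at once, extract the user and outcome per log format with rex, then normalize into a common status field. The regexes below are derived from the samples above and would need adjusting to the real data:

(index=networking "enable") OR (index=linux_logs "enable")
| rex "User:(?<user>\w+) logged command:!exec: enable(?<cfg_failed> failed)?"
| rex "enable query for '(?<tac_user>[^']+)' \S+ from \S+ (?<tac_status>accepted|rejected)"
| rex "\d{1,3}(\.\d{1,3}){3} (?<acct_user>\S+) tty\S+ .* cmd=enable"
| eval user=coalesce(user, tac_user, acct_user)
| where isnotnull(user)
| eval status=case(isnotnull(cfg_failed), "failed",
    tac_status=="rejected", "failed",
    true(), "success")
| where status=="success"
| eval command="enable"
| table _time host user command status
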
Hello, this is my first time trying to consolidate logs and use field extraction, and I am a little lost. I have the payload below, and I would like to extract the following fields from the "line" field in the JSON payload. An example payload:

{"line":"2021/10/25 18:49:52.982|DEBUG|GoogleHomeController|Recieved a request for broadcast: {\"Message\":\"Ring Ring Ring Niko and Xander, someone is at your front door and rung the doorbell!\",\"ExecuteTime\":\"0001-01-01T00:00:00\"}","source":"stdout","tag":"b5fcd8b8b5a4"}

Time - "2021/10/25 18:49:52.982"
Level - "DEBUG"
Controller - "GoogleHomeController"
Message - "Recieved a request for broadcast..."

It all follows the format "{TIME}|{LEVEL}|{CONTROLLER}|{MESSAGE}", i.e. the fields are separated by pipe characters. I have all the information formatted using NLog in my code, but how do I extract the fields that are within a field, so I can search on the Time (from the log message), Log Level, Controller, and Message? How would I go about pulling this information out? I tried going through field extraction, but it only seems to let me do it at the highest level, i.e. the line, source, and tag fields, not the fields within.

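A search-time sketch of the usual pattern for a pipe-delimited inner field: run rex against the already-extracted line field. The field names match the list above; the regex assumes the first three segments never contain a pipe:

... | rex field=line "^(?<Time>[^|]+)\|(?<Level>[^|]+)\|(?<Controller>[^|]+)\|(?<Message>.*)$"
    | table Time Level Controller Message

The same pattern could be made permanent as a search-time extraction in props.conf (sourcetype name hypothetical):

[my:docker:json]
EXTRACT-nlog = ^(?<Time>[^|]+)\|(?<Level>[^|]+)\|(?<Controller>[^|]+)\|(?<Message>.*)$ in line
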
We have an add-on that was installed on our Splunk instance last year, and when I checked Splunkbase today, the add-on is no longer available; it took me to an archived page. Does this have any impact on the existing instance? There is a lot of dependency on this add-on, and we are worried that Splunk will come back to us saying they are going to uninstall it sometime in the near future. Has anyone ever faced this situation?

I want to extract the data for every node. As you can see, pg-2 and ss7-2 are the nodes, and below each one is the information for that node. How do I extract the percentage value? I want to find the maximum percentage for every node.

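The screenshot did not survive here, so the following is only a hedged sketch of the usual pattern: extract the node name and the percentage values with rex, then take the maximum per node. The field names and both regexes are assumptions about the data shape:

... | rex "(?<node>(pg|ss7)-\d+)"
    | rex max_match=0 "(?<pct>\d+(\.\d+)?)%"
    | mvexpand pct
    | stats max(pct) AS max_pct BY node
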
This question is based on a comment from @woodcock on this post: https://community.splunk.com/t5/Splunk-Search/Why-are-real-time-searches-not-running-and-getting-error-quot/m-p/281407 in which the alert equation provided is as follows: "Schedule it to cover a span of X and run it every X/2. This covers the case where events at the end of span t and the beginning of t+1 would just miss triggering in those windows but will hit in the next alert run. Then make X as large as you can stomach."

I do not fully understand this, so I am hoping someone can help me out. Let's say I have an alert running every 5 minutes. By that equation, I should search -10m to now. But isn't that also going to significantly overlap with the prior run? Why not search -6m to now, for example? How do span sizes affect things?

Here is an alert I have running every 5 minutes. I did notice the search itself picks up the current span and the prior span, so I have been wondering how to optimize this properly.

| mstats avg(cpu_metric.pctIdle) as Idle WHERE index="itsi_im_metrics" AND host="*" span=5m by host
| eval cpu_utilization=round(100 - Idle,2)
| where cpu_utilization > 90
| stats list(host) as host_list list(cpu_utilization) as avg_cpu_utilization

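To make the arithmetic concrete: with X = 10 minutes run every X/2 = 5 minutes, each event falls inside at least two consecutive windows, so an event landing right at a window boundary still triggers on the next run; the overlap is the point, at the cost of possible duplicate alerts. A minimal savedsearches.conf sketch of that schedule (stanza name hypothetical):

[cpu_over_90_alert]
cron_schedule = */5 * * * *
dispatch.earliest_time = -10m@m
dispatch.latest_time = @m
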
I have a JSON-based log file for which every line is a valid JSON document. When searching it like this:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| table message.params.page_hide_metrics

I get entries with the JSON I expect, like this:

{"connections":[{"connection_num":1,"initialized":"2021-10-25T20:46:45.318Z","ready_state":1,"connected_duration_seconds":32.296,"ready_state_times":[null,0.512,null,null]}],"tab_session_id":"604931x|concept|1635194804","first_connection_index":0,"percent_uptime":0.9843940502316508,"duration_seconds":32.296,"page_duration_seconds":32.808}

However, when I try to use an example like example #1 given for json_extract in the Splunk docs:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| eval ph_metrics = json_extract(message.params.page_hide_metrics)
| table ph_metrics

I don't get any results. Why?

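One thing worth checking (a guess, but a common trip-up): in eval expressions, field names that contain dots must be wrapped in single quotes; otherwise the dot is parsed as the string-concatenation operator over several nonexistent fields. A sketch of the same eval with quoting:

| eval ph_metrics = json_extract('message.params.page_hide_metrics')
| table ph_metrics
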
I recently created a Splunk trial to test the Splunk + Okta integration. I have installed the Okta Identity Cloud Add-on for Splunk app, but I'm unable to configure it. When I select the Configuration tab within the app, the Okta Accounts tab gets stuck on "Loading". Below are the steps I have taken thus far. Did I miss something?

1. Clicked + Find More Apps
2. Searched for Okta
3. Located Okta Identity Cloud Add-on for Splunk
4. Clicked Install
5. Provided Splunk credentials, checked "I have read...", and selected Login and Install
6. Clicked Open the App
7. Selected the Configuration tab
8. Stuck on "Loading"

These steps were attempted in both the Chrome and Safari browsers.

Reference documentation: https://raw.githubusercontent.com/mbegan/Okta-Identity-Cloud-for-Splunk/master/README/Okta%20Identity%20Cloud%20Add-on%20for%20Splunk.pdf

While running the arules command across multiple fields, the 'Given fields' are generated with various 'Implied fields'. But how can a single 'Given fields' value have different 'Given fields support' values?

Sample results:

Given fields    Implied fields    Given fields support    Implied fields support    Strength
a1, b1          c1                0.6                     0.3                       1.0
a1, b1          c2                0.4                     0.6                       0.8

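For reference, a minimal invocation sketch of arules over three fields (the field names are placeholders and the sup/conf thresholds are illustrative, not recommendations):

... | arules sup=3 conf=0.5 field_a field_b field_c
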
Hello, there are several dashboards in the app created by others, and there is a Clone button. I want to clone/mirror some dashboards so that they are totally private, i.e. only visible/editable by me. Will the Clone button offer that? I am reluctant to click it, because I do not want to create a Dashboard2 and confuse all the other users! Thanks!

Hi, how can I find events that have been sent but have not received a response? Here is the log.

This is a send:
2021-07-15 00:00:01,892 INFO CUST.InAB-ServerApp-1234567 [MyService] Packet Processed: A[50] B[0000211]

This is a receive:
2021-07-15 00:00:11,719 INFO CUST.InEP-Server2-9876543_CUST.InAB-ServerApp-1234567 [MyService] Normal Packet Received: A[55] B[0000211]

Step 1: find the send id, 1234567.
Step 2: find the response id, 9876543, matching send id 1234567, where A=A+5 AND B=B.
Finally, show the ids that have no receive, e.g.:

2021-07-15 00:00:01,988 INFO CUST.InAB-ServerApp-0000001 [ApiManager] Send Packet [0000000000000*] to [APP.MODULE]

Table:

id          status
0000001     no receive

Any ideas? Thanks.

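Not the only way, but one sketch: extract the id and the A/B values from both event types, shift the receive-side A back by 5 so send and receive share a key, then keep keys that never got a receive. The index name is a placeholder and the regexes assume the formats above:

index=your_index "[MyService]" ("Packet Processed" OR "Normal Packet Received")
| rex "InAB-ServerApp-(?<send_id>\d+)"
| rex "A\[(?<A>\d+)\] B\[(?<B>\d+)\]"
| eval type=if(searchmatch("Packet Received"), "recv", "send")
| eval A_key=if(type=="recv", A - 5, A)
| stats values(send_id) AS id, sum(eval(if(type=="send",1,0))) AS sent, sum(eval(if(type=="recv",1,0))) AS received BY A_key, B
| where received=0
| eval status="no receive"
| table id status
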
Hello Splunk world, I'm working on importing raw logs from McAfee ELM into Splunk. The only option I've come across in the McAfee documentation is SFTP. Reaching out to see if anyone has experience routing data from ELM into Splunk and a good method for doing so. Thank you.

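If the logs do arrive by SFTP, one common pattern is to land them in a directory on a forwarder and monitor that directory. A minimal inputs.conf sketch (the path, sourcetype, and index are placeholders):

[monitor:///opt/mcafee_elm/exports]
sourcetype = mcafee:elm
index = mcafee
disabled = false
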
Can someone point me to the DLTK User Guide, please?

Has anyone found a query or a way to track which files have been moved onto or off of a USB drive? I can see that a USB device was plugged in, but I need to go one level deeper and see what has been moved onto or off of the USB drive, and to and from the system.

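One avenue, assuming Windows endpoints with the Removable Storage object-access audit subcategory enabled and Security logs forwarded to Splunk; the event code is the standard object-access one, but the index and field names are assumptions that depend on your Windows TA and audit policy:

index=wineventlog EventCode=4663
| table _time host user Object_Name Accesses
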
Hello, I'm having an issue getting a report of user Actions, with fullname and username = email. Both sourcetypes have a username field, but one has username and fullname and not the Action. Also, one sourcetype spells the field with an uppercase U (Username) and the other with a lowercase u (username). I am trying to get fullname, username, and action. This is what I have tried:

index=hv_lastpass [search source=lastpass_users fullname="*,*" [search sourcetype="lastpass:activity" Action="Failed Login Attempt" | return fullname] | return Action, Time, Username]
| table fullname, Username, Action, Time

index=hv_lastpass
| join type=left username [search source=lastpass_users]
| join type=left Username [search sourcetype="lastpass:activity"]
| table fullname, Username, Action, Time

Thank you.

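One hedged alternative, assuming both sourcetypes live in hv_lastpass: normalize the case of the join key on both sides first, then join once against the user list:

index=hv_lastpass sourcetype="lastpass:activity" Action="Failed Login Attempt"
| eval join_user=lower(Username)
| join type=left join_user [
    search index=hv_lastpass source=lastpass_users
    | eval join_user=lower(username)
    | fields join_user fullname ]
| table fullname Username Action Time
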
OK, this is odd. The search:

index=myindex

works and returns a field "Name", happily listing all values of Name as expected. However, any search on the Name field, e.g.:

index=myindex Name=Fred

returns the error: "Cannot expand lookup field 'Name' due to a reference cycle in the lookup configuration. Check search.log for details and update the lookup configuration to remove the reference cycle." Unfortunately, I have no idea what to search for in the search log. Splunk Support has only pointed me to this discussion and told me to re-save a specific Cisco lookup: https://community.splunk.com/t5/Splunk-Cloud-Platform/Cannot-expand-lookup-field-due-to-a-reference-cycle-in-the/m-p/543455 and it isn't that, as we don't have that Cisco lookup table.

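As a starting point for hunting the cycle, one sketch: list every automatic lookup definition on the instance and look for a pair whose output fields feed each other's inputs (the grep pattern is just an example):

$ splunk btool props list --debug | grep -i "LOOKUP-"

In the search.log of the failing job, searching for the literal string "reference cycle" should show which lookup definitions are involved; that string is an assumption based on the wording of the error message.
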
I have a props.conf file that is not parsing data as I expected. I can see in the raw log that the IIS log has the header information in it.

[sourcetype]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = w3c
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 32
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Web
description = W3C Extended log format produced by the Microsoft Internet Information Services (IIS) web server
detect_trailing_nulls = auto
disabled = false
pulldown_type = true

When I manually upload the log file and assign it to this sourcetype, it parses the log correctly with the same settings:

[sourcetypeA]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = w3c
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 32
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Web
description = W3C Extended log format produced by the Microsoft Internet Information Services (IIS) web server
detect_trailing_nulls = auto
disabled = false
pulldown_type = true

This props.conf is on the search head where I am searching the data. The IIS logs are being captured via a UF. Has anyone run into this before?

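Worth noting as a possible cause: INDEXED_EXTRACTIONS is applied where the file is read, so for UF-monitored files the structured-parsing stanza needs to exist on the universal forwarder, not only on the search head. A minimal sketch of where it would live (app name hypothetical):

# on the UF: $SPLUNK_HOME/etc/apps/my_iis_inputs/local/props.conf
[sourcetype]
INDEXED_EXTRACTIONS = w3c
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
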
Hello folks, how can I perform a CIDR/subnet match with the "ip_intel" lookup file that comes by default? This lookup's KV store dataset has CIDR ranges and single IPs listed under the "IP" column. Basically, if the Dest_IP from my search results falls within a subnet range in the "IP" column of the lookup file, then it should display the result in a table format. I am able to match against a single IP address, but not against a CIDR range. How do you go about this? Thanks in advance.

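The usual mechanism is a lookup definition whose transforms.conf stanza sets match_type = CIDR(<field>). A minimal sketch; the stanza name, collection name, and field list here are assumptions, so align them with your actual ip_intel definition:

# transforms.conf
[ip_intel_cidr]
external_type = kvstore
collection = ip_intel
fields_list = ip, description
match_type = CIDR(ip)

Then in the search:

... | lookup ip_intel_cidr ip AS Dest_IP OUTPUT description
    | where isnotnull(description)
    | table Dest_IP description
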