All Topics

Does Splunk SOAR operate in the cloud, or just on-premises?
This question is related to my previous post: https://community.splunk.com/t5/Splunk-Search/XML-field-Extraction/m-p/571944#M199301

My source has a date which I'll be extracting using the rex command. I want my table data to be shown under those respective dates. I have used xyseries, but I cannot add other fields to the table.

source="weekly_report_20211025_160957*.xml"
| rex field=source "weekly_report_(?<Date>\w.*)\.xml"
| ....
| table suitename name "Time taken(s)" status
| xyseries name Date status

My final table should contain suitename, name, "Time taken(s)" and status (under the Date field). Is there any method to keep all of these table fields after applying xyseries?
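One common workaround (a sketch only, not from the original thread; the delimiter and the row_key field name are my own) is to fold the extra columns into the xyseries row key and split them back out afterwards:

source="weekly_report_20211025_160957*.xml"
| rex field=source "weekly_report_(?<Date>\w.*)\.xml"
``` fold the extra columns into one key so xyseries carries them through ```
| eval row_key=suitename . "|" . name . "|" . 'Time taken(s)'
| xyseries row_key Date status
``` split the composite key back into separate columns ```
| eval suitename=mvindex(split(row_key,"|"),0), name=mvindex(split(row_key,"|"),1), "Time taken(s)"=mvindex(split(row_key,"|"),2)
| fields - row_key

The assumption here is that none of the folded fields contain the "|" delimiter; pick a different separator if they do.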
Hello, I use a dropdown list in my dashboard like this:

<input type="dropdown" token="web_domain" searchWhenChanged="true"><choice value="*www.colis.fr*">Colis</choice>

And I retrieve the token in my panel title like this:

<panel>
<title>Application $web_domain$ - Evolution moyenne des appels</title>

Instead of $web_domain$, I would like to display the generic name behind $web_domain$, meaning that instead of showing "www.colis.fr" I would like to show only "Colis". How can I do this, please?
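One way to do this (a sketch; the token name web_domain_label is an assumption, the rest comes from the snippet above) is to capture the choice label in a second token from the input's change handler and use that token in the title:

<input type="dropdown" token="web_domain" searchWhenChanged="true">
  <choice value="*www.colis.fr*">Colis</choice>
  <change>
    <!-- $label$ holds the display text of the selected choice -->
    <set token="web_domain_label">$label$</set>
  </change>
</input>

<panel>
  <title>Application $web_domain_label$ - Evolution moyenne des appels</title>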
Hi all, my customer has a requirement to have the "index" field in each data model used in ES. Obviously this additional field doesn't break CIM compliance, but it is needed to apply an additional filter to the data. The question is: at the next upgrade of ES, will this customization be maintained or not? Bye. Giuseppe
Hi, I have configured a Splunk heavy forwarder on 2 machines. I want to send logs from one machine to the other and have the receiver store all the received logs in an index called "receivedlogs".

This is the video I followed to configure Splunk: https://www.youtube.com/watch?v=S4ekkH5mv3E&t=454s&ab_channel=Splunk%26MachineLearning

Thank you.
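A minimal sketch of the pieces involved, assuming the receiver listens on the default forwarding port 9997 (the host name and monitored path are placeholders, not from the post). The simplest way to control the destination index is to set it on the input stanza of the sending machine:

# outputs.conf on the sending heavy forwarder
[tcpout]
defaultGroup = my_receiver

[tcpout:my_receiver]
server = receiver.example.com:9997

# inputs.conf on the sending heavy forwarder - choose the target index here
[monitor:///var/log/myapp]
index = receivedlogs

# inputs.conf on the receiving instance - open the listening port
[splunktcp://9997]

The "receivedlogs" index also has to exist on the receiver (Settings > Indexes or indexes.conf), otherwise the events will not be searchable there.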
Good day team, I have an application which spans 5 servers. Each server has a different path, but in the end I need to read error.log and wrapper.log:

/log/apple/production/A1/error.log
/log/ball/production/A2/error.log
..

Here I can use a wildcard like this in the monitor stanza: /log/*/production/*/error.log. But the problem is that each server has many folders matching those stars, and I don't want all of them, only a few. Take the first star: I only want apple, ball or cat; any other name on any server can be ignored. Similarly, take the second star: I only want A1, A2 or A3 and can ignore B1, C1 and so on. So is it possible to write this restriction using a regex, either in inputs.conf itself or using props?
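One approach (a sketch; the folder names come from the examples above, everything else is an assumption) is to keep the wildcard monitor stanza and constrain it with a whitelist, which inputs.conf treats as a regular expression matched against the full path of each candidate file:

# inputs.conf
[monitor:///log/*/production/*/]
# only consume error.log and wrapper.log under the allowed folder combinations
whitelist = /log/(apple|ball|cat)/production/(A1|A2|A3)/(error|wrapper)\.log$

Files whose full path does not match the whitelist are ignored, so the wildcards can stay broad while the regex does the filtering.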
Hi, does anyone have experience with parsing the version 6 schema of Umbrella logs? The release notes of the add-on https://splunkbase.splunk.com/app/3926/ only talk about version 5: "1.0.5: Adds support for logging format version 5 + Firewall Logs". In my environment the change seems to go straight from version 4 to version 6, and "Schema upgrades are one way; you will not be able to revert this upgrade." It's scary that you can't revert. Has anyone moved to version 6, and did you have to make changes in local/{props,transforms}?
| datamodel "Change_Analysis" "Account_Management" search
| where 'All_Changes.tag'="delete" AND 'All_Changes.user'!="*$*"
| stats values(All_Changes.result) as "signature", values(All_Changes.src) as "src", values(All_Changes.dest) as "dest", values(All_Changes.user) as "users", DC(All_Changes.user) as user_count by "All_Changes.Account_Management.src_user"
| rename "All_Changes.Account_Management.src_user" as "src_user", "All_Changes.user" as "user"

I am using this query to monitor for account deletions, but the alert keeps triggering for computer accounts ending with the $ symbol, e.g. XYZLAPTOP$, ABCLAPTOP$, etc. I have already added the filter where 'All_Changes.tag'="delete" AND 'All_Changes.user'!="*$*". How can I exclude these $ accounts from the report? Can anyone please help?
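A possible fix (a sketch, not from the thread): inside a where clause the != comparison is an eval expression, so "*$*" is treated as a literal string rather than a wildcard pattern. Using like() on the field, and filtering the src_user field as well in case the machine account shows up there, is one way around it (field names are taken from the query above; coalesce() keeps events where a field is missing from being silently dropped):

| where 'All_Changes.tag'="delete"
    AND NOT like(coalesce('All_Changes.user',""), "%$")
    AND NOT like(coalesce('All_Changes.Account_Management.src_user',""), "%$")

In like(), % is the wildcard and $ is literal, so "%$" means "anything ending in a dollar sign"; match('All_Changes.user', "\$$") is an equivalent regex form.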
Hello all, I am trying to extract a field from the events below. The extraction works fine on events that contain a value for the field, but on events where that field is empty it picks up the next matching value instead. How can I fix it so it only picks the required value and ignores the empty field?

Expression used: (?:[^,]+,){23}\"(?<occurance>\w+)\",.*

Below is an event where the extraction works correctly:

50271232,00004102,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","System",46769357,"System","Server-I \x83W\x83\x87\x83u\x83l\x83b\x83g(AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX:@5V689)\x82\xF0\x8AJ\x8En\x82\xB5\x82܂\xB7","Information","admin","/App/Sys/AJS2","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","START","20210901225500","","",11,"A0","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds","A1","04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","A3"

The event below does not have a value in the field, so the next matching value is picked instead:

50266209,00000501,00000000,3476,"20210901220311","20210901220311",4,-1,-1,"SYSTEM","","psd005",142331,"MS932","OR01201S [psd005:HONDB1] YSN1 free 4.52% \x82\xAA\x82\xB5\x82\xAB\x82\xA2\x92l5%\x82\xF0\x89\xBA\x89\xF1\x82\xE8\x82܂\xB5\x82\xBD (Free size = 1466560KB) [Jp1 Notified]","Alert","","/insight/PI","","","","","","","","","",9,"ACTION_VERSION","510","OPT_CATEGORY","OS","OPT_PARM1","","OPT_PARM2","","OPT_PARM3","","OPT_PARM4","","OPT_SID","HONDB1","OPT_URL1","","OPT_URL2","",

Please help with this.
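A sketch of a likely fix (assuming the events are comma-separated, the target is the 24th field, and quoted values do not contain embedded commas, which is the same assumption the original expression makes): \w+ requires a non-empty value, and because the expression is not anchored to the start of the event, the regex engine simply slides forward until it finds a position where the pattern does match, which is why a later field gets captured. Anchoring the expression and allowing empty values keeps the position fixed:

| rex "^(?:[^,]*,){23}\"?(?<occurance>[^,\"]*)\"?,"

Here [^,]* tolerates empty fields while still counting them, and the capture group also accepts an empty value. If the data really is plain CSV, a delimiter-based approach such as | eval occurance=trim(mvindex(split(_raw, ","), 23), "\"") avoids the counting regex altogether.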
Hi, we are using Splunk Cloud 8.2, mainly as our Splunk SIEM solution. We currently have many scheduled alerts, searches and reports. In recent days we have seen 21% of searches being skipped, and job execution time has also increased. Since yesterday we are unable to see output results for any of the scheduled jobs, although we do get results when we run the same search ad hoc. We are also seeing the errors and warnings below in our console:

The percentage of non high priority searches skipped (74%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=7056. Total skipped Searches=5271
The instance is approaching the maximum number of historical searches that can be run concurrently.
The number of extremely lagged searches (1) over the last hour exceeded the red threshold (1) on this Splunk instance

Could you please share a solution we can implement in this case?
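A search that is commonly used to see which scheduled searches are being skipped and why (a sketch; these are the standard scheduler log fields in the _internal index, so adjust the time range as needed):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, app, reason
| sort - count

The reason field usually points at the cause (for example hitting the concurrent search limit). Typical follow-ups are spreading cron schedules out so searches do not all fire at the same minute, shortening search time ranges, and reducing very frequent or real-time scheduled searches once the biggest offenders are identified.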
I have around 6 concurrent saved searches. All of them end with an outputlookup command that writes to a separate KV store collection. The searches take too long to execute the outputlookup command; they work fine if outputlookup is removed. Any suggestions? I know there is a limit on the number of rows written by outputlookup, but all the searches are within that limit, so I am wondering whether there is a limit on the number of concurrent outputlookup commands. Is there such a thing? Does one search's outputlookup wait for another's to complete? If so, is there a solution?
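For reference, KV store write behaviour is governed by the [kvstore] stanza in limits.conf; a sketch of the settings that are usually relevant (values shown are illustrative defaults, so verify the names and defaults against limits.conf.spec for your version, and note that on Splunk Cloud these are changed via support rather than directly):

# limits.conf
[kvstore]
# threads each outputlookup can use when writing to the KV store
max_threads_per_outputlookup = 1
# documents written per batch during an outputlookup save
max_documents_per_batch_save = 1000
# maximum size in MB of a single batch save
max_size_per_batch_save_mb = 50

The search job inspector (time spent in the outputlookup command) and mongod.log can help confirm whether the KV store itself is the bottleneck before changing any of these.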
I need to auto-refresh my dashboard every 30 seconds. I am trying to set up a refresh for a form I created; is the below correct? It hasn't been working for me. Am I missing something, or is this not the correct placement? If it matters, I am using Splunk Cloud 8.2.

<form version="1.1" theme="dark" refresh="30">
  <label>Health Dashboard - 24 Hours</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timetoken" searchWhenChanged="true">
      <label>Select Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Servers_Memory_Usage</title>
      <chart>
        <search>
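In Simple XML the refresh interval is normally set on each <search> element rather than on the <form> tag; a sketch (the query is a placeholder for whatever the panel already runs):

<chart>
  <search refresh="30s" refreshType="delay">
    <query>...</query>
    <earliest>$timetoken.earliest$</earliest>
    <latest>$timetoken.latest$</latest>
  </search>
</chart>

refreshType can be "delay" (the 30-second countdown starts after the search finishes) or "interval" (refreshes are scheduled at fixed intervals regardless of how long the search takes).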
Can someone describe what happens when an ad hoc search is issued on a search head in a distributed environment? Does the search head communicate with the cluster master or directly with the indexers? I'm looking for clarification.
Hello champions, I run the three queries below (1, 2, 3) on the given datasets to find out which users ran the enable command, on which host, and at what time:

1. index=networking user* enable* host*

Oct 15 08:17:45 brg-c-1.com.au 8279: Oct 15 2021 08:17:44.820 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:John logged command:!exec: enable
Oct 15 08:17:35 brg-c-1.com.au 8278: Oct 15 2021 08:17:34.082 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:lili logged command:!exec: enable failed
Sep 15 23:29:55 gsw-r-4.com.au 466: Sep 15 23:29:54.009: %PARSER-5-CFGLOG_LOGGEDCMD: User:Khan logged command:!exec: enable
Aug 12 15:18:37 edc-r-4.com.au 02: Aug 12 15:18:36.472: %PARSER-5-CFGLOG_LOGGEDCMD: User:Khan logged command:!exec: enable
Aug 11 03:31:05 ctc-s.com.au 134: Aug 10 17:31:04.859: %PARSER-5-CFGLOG_LOGGEDCMD: User:cijs logged command:!exec: enable
Jan 29 11:30:58 brg-c-1.com.au 2082: Jan 29 2021 11:30:57.141 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:chick logged command:!exec: enable failed

2. index=linux_logs host=edc-03-tacacs enable*

Oct 26 12:56:13 egc-03-ts tc_plus[149]: enable query for 'kim' tty86 from 202.168.5.22 accepted
Oct 26 11:33:44 egc-03-ts tc_plus[259]: enable query for 'kim' tty86 from 202.168.5.22 accepted
Oct 21 11:35:59 egc-03-ts tc_plus[285]: enable query for 'John' tty86 from 202.168.5.23 accepted
Oct 21 11:35:53 egc-03-ts tc_plus[282]: enable query for 'Han' tty86 from 202.168.5.23 rejected

3. index=linux_logs host=gsw-03-tacacs enable*

Sep 30 13:35:53 gdw-02-ts tc_plus[143]: 192.168.2.21 James tty1 192.168.6.56 stop task_id=55161 timezone=AEST service=shell start_time=1632972953 priv-lvl=0 cmd=enable
Sep 29 12:38:17 gdw-02-ts tc_plus[319]: 192.168.2.24 linda tty1 192.168.5.3 stop task_id=15729 timezone=AEST service=shell start_time=1632883097 priv-lvl=0 cmd=enable
Sep 15 22:23:23 gdw-02-ts tc_plus[1649]: 192.168.4.2 Brown tty322 192.168.46.1 stop task_id=2574 timezone=AEST service=shell start_time=1631708603 priv-lvl=0 cmd=enable
Sep 9 14:58:32 gdw-02-ts tc_plus[2030]: 192.168.2.29 Gordan tty1 192.168.26.3 stop task_id=14329 timezone=AEST service=shell start_time=1631163512 priv-lvl=0 cmd=enable

I have tried hard but could not work out a query that merges all of this data (across indexes and hosts) to find out who ran the enable command successfully, at what time, and on which host, and puts the results into a table like | table date host user command(enable) status(success). Could anyone please help me? Thank you in advance.
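A sketch of one way to combine them into a single table (the extraction regexes are guesses based only on the sample events above and will likely need tuning; status values are normalised to success/failed):

(index=networking "%PARSER-5-CFGLOG_LOGGEDCMD" enable) OR (index=linux_logs (host=edc-03-tacacs OR host=gsw-03-tacacs) enable)
``` Cisco-style events: User:<name> ... enable [failed] ```
| rex "User:(?<user_a>\S+)\s+logged command:!exec:\s+enable(\s+(?<failed_flag>failed))?"
``` TACACS "enable query" events: enable query for '<name>' ... accepted|rejected ```
| rex "enable query for '(?<user_b>[^']+)'.*\s(?<query_status>accepted|rejected)"
``` TACACS accounting events: <ip> <name> tty... cmd=enable ```
| rex "tc_plus\[\d+\]:\s+\S+\s+(?<user_c>\S+)\s+tty\S+.*cmd=enable"
| eval user=coalesce(user_a, user_b, user_c)
| where isnotnull(user)
| eval status=case(isnotnull(failed_flag) OR query_status="rejected", "failed", true(), "success")
| eval date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table date host user status

Keeping the status column means a final | where status="success" can be appended if only successful executions are wanted.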
Hello, this is my first time trying to consolidate logs and use field extraction, and I am a little lost. I have the payload below and I would like to extract the following fields from the "line" field in the JSON payload. An example payload would be:

{"line":"2021/10/25 18:49:52.982|DEBUG|GoogleHomeController|Recieved a request for broadcast: {\"Message\":\"Ring Ring Ring Niko and Xander, someone is at your front door and rung the doorbell!\",\"ExecuteTime\":\"0001-01-01T00:00:00\"}","source":"stdout","tag":"b5fcd8b8b5a4"}

Time - "2021/10/25 18:49:52.982"
Level - "DEBUG"
Controller - "GoogleHomeController"
Message - "Recieved a request for broadcast..."

It all follows the format "{TIME}|{LEVEL}|{CONTROLLER}|{MESSAGE}", i.e. the fields are separated by pipe characters. I have all the information formatted using NLog in my code, but how do I extract the fields that are within a field, so that I can search on the Time (from the log message), Log Level, Controller and Message? I tried going through field extraction, but it only seems to let me work at the highest level, i.e. the line, source and tag fields, not the fields within.
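A sketch of a search-time approach (assuming the JSON fields are already extracted so a field called line exists; if not, spath can pull it out first, as shown):

| spath path=line output=line
| rex field=line "^(?<log_time>[^|]+)\|(?<level>[^|]+)\|(?<controller>[^|]+)\|(?<message>.*)$"
| table log_time level controller message

To make this permanent, the same regex can go into props.conf as a search-time extraction against the sourcetype (the stanza name below is an assumption):

# props.conf
[my_docker_json]
EXTRACT-line_fields = ^(?<log_time>[^|]+)\|(?<level>[^|]+)\|(?<controller>[^|]+)\|(?<message>.*)$ in line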
We installed an add-on on our Splunk instance last year, and when I checked Splunkbase today the add-on is no longer available; it took me to an archived page. Does this have any impact on the existing instance? There is a lot of dependency on this add-on, and we are worried that Splunk might come back to us saying they are going to uninstall it sometime in the near future. Has anyone ever faced this situation?
I want to extract the data for every node. As you can see, pg-2 and ss7-2 are the nodes, and below each node is its information. How do I extract the percentage value? I want to find the maximum percentage for every node.
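Without the raw event layout it is hard to be specific, but as a sketch, assuming the node name can be extracted (or already exists) as a field called node and the percentages appear in the raw text as a number followed by %:

| rex max_match=0 "(?<pct>\d+(?:\.\d+)?)%"
| mvexpand pct
| eval pct=tonumber(pct)
| stats max(pct) as max_percentage by node

The rex pattern and the node field are assumptions and would need to be adjusted to the actual event format.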
This question is based on a comment from @woodcock on this post: https://community.splunk.com/t5/Splunk-Search/Why-are-real-time-searches-not-running-and-getting-error-quot/m-p/281407 in which the alert equation provided is as follows: "Schedule it to cover a span of X and run it every X/2. This covers the case where events at the end of span t and the beginning of t+1 would just miss triggering in those windows but will hit in the next alert run. Then make X as large as you can stomach."

I do not fully understand this, so I am hoping someone can help me out. Let's say I have an alert running every 5 minutes. By that equation I should search -10m to now. But isn't that going to significantly overlap with the prior run? Why not search -6m to now, for example? How do span sizes affect things? Here is an alert I have running every 5 minutes. I did notice the search itself picks up the current span and the prior span, so I have been wondering how to optimize this properly.

| mstats avg(cpu_metric.pctIdle) as Idle WHERE index="itsi_im_metrics" AND host="*" span=5m by host
| eval cpu_utilization=round(100 - Idle,2)
| where cpu_utilization > 90
| stats list(host) as host_list list(cpu_utilization) as avg_cpu_utilization
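For a bucketed metrics search like this one, an alternative to the deliberately overlapping window (a sketch, not from the original thread) is to snap the time range to whole 5-minute buckets so that every scheduled run evaluates exactly one completed span, with no gap and no overlap. With the alert cron set to every 5 minutes and the time range set to earliest=-5m@m and latest=@m (in the time range picker or, as here, inside the mstats WHERE clause):

| mstats avg(cpu_metric.pctIdle) as Idle WHERE index="itsi_im_metrics" AND host="*" earliest=-5m@m latest=@m span=5m by host
| eval cpu_utilization=round(100 - Idle, 2)
| where cpu_utilization > 90
| stats list(host) as host_list list(cpu_utilization) as avg_cpu_utilization

If late-arriving metrics are a concern, shifting both bounds back (for example earliest=-10m@m latest=-5m@m) keeps the no-overlap property and gives the data a few extra minutes to arrive, at the cost of alerting slightly later.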
I have a JSON-based log file for which every line is a valid JSON document. When searching it like this:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| table message.params.page_hide_metrics

I get entries with the JSON I expect, like this:

{"connections":[{"connection_num":1,"initialized":"2021-10-25T20:46:45.318Z","ready_state":1,"connected_duration_seconds":32.296,"ready_state_times":[null,0.512,null,null]}],"tab_session_id":"604931x|concept|1635194804","first_connection_index":0,"percent_uptime":0.9843940502316508,"duration_seconds":32.296,"page_duration_seconds":32.808}

However, when I try to use an example like example #1 given for json_extract in the Splunk docs:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| eval ph_metrics = json_extract(message.params.page_hide_metrics)
| table ph_metrics

I don't get any results. Why?
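A likely cause (a sketch of a fix, not a confirmed answer): inside eval, a field name that contains dots has to be wrapped in single quotes, otherwise eval parses the dots as its string-concatenation operator, ends up operating on nonexistent fields, and returns nothing:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| eval ph_metrics = json_extract('message.params.page_hide_metrics')
| table ph_metrics

If only part of the JSON is needed, json_extract also accepts a path as a second argument, e.g. json_extract('message.params.page_hide_metrics', "percent_uptime"), assuming a Splunk version where json_extract is available (8.1+).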
I recently created a Splunk trial to test the Splunk + Okta integration. I have installed the Okta Identity Cloud Add-on for Splunk app but I'm unable to configure it. When I select the Configuration tab within the app, the Okta Accounts tab gets stuck on "Loading". Below are the steps I have taken so far. Did I miss something?

+ Find More Apps
Searched for Okta
Located Okta Identity Cloud Add-on for Splunk
Clicked Install
Provided Splunk credentials, checked the "I have read..." box and selected Login and Install
Clicked Open the App
Selected the Configuration tab
Stuck on "Loading"

These steps were attempted in the Chrome and Safari browsers.

Reference documentation: https://raw.githubusercontent.com/mbegan/Okta-Identity-Cloud-for-Splunk/master/README/Okta%20Identity%20Cloud%20Add-on%20for%20Splunk.pdf