All Topics

Currently my Splunk search output is shown below:

Serial  Description   DateTime             StartTime            EndTime
MY111   Registration  2021-05-01 00:30:00  2021-05-01 00:30:00
MY122   Registration  2021-05-02 09:00:00  2021-05-02 09:00:00
MY134   Registration  2021-05-02 09:30:00  2021-05-02 09:30:00
MY122   Picking       2021-05-02 10:00:00                       2021-05-02 10:00:00
MY134   Picking       2021-05-02 12:00:00                       2021-05-02 12:00

However, some Serials have not reached EndTime yet (they only have a Registration description). How can I get the duration (in seconds) for those Serials that completed (have both Registration and Picking descriptions)?

Expected outcome:

Serial  Description   DateTime             StartTime            EndTime              Duration
MY111   Registration  2021-05-01 00:30:00  2021-05-01 00:30:00
MY122   Registration  2021-05-02 09:00:00  2021-05-02 09:00:00
MY134   Registration  2021-05-02 09:30:00  2021-05-02 09:30:00
MY122   Picking       2021-05-02 10:00:00                       2021-05-02 10:00:00  3600
MY134   Picking       2021-05-02 09:40:00                       2021-05-02 09:40:00  600
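A possible approach (a sketch only; the field names Serial, Description, and DateTime are taken from the table above): parse DateTime into epoch seconds, carry each Serial's earliest (Registration) time onto its later events with eventstats, and compute the difference only on Picking rows.

```spl
... your base search ...
| eval ts = strptime(DateTime, "%Y-%m-%d %H:%M:%S")
| eventstats min(ts) as reg_ts by Serial
| eval Duration = if(Description == "Picking", ts - reg_ts, null())
```

Serials that only have a Registration event keep an empty Duration, matching the expected outcome.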
Tenable is missing dest values when there is no value available in the dnsName field.
Hi, the basic function of deleting my account is missing, which ultimately leads me to abandoning my account.
Hello, I was using a Transform-type field extraction, and I am having an issue selecting my delimiter; I am getting errors (fields are not extracted as expected). Please see below the raw event and the parameters used for it. Thank you so much; your support is greatly appreciated.

Raw Event
"time_stamp":"2021-08-21 19:14:32 EST","user_type":"TESTUSER","file_source_cd":"1","ip_addr":"103.91.224.65","session_id":"ABSkbE7IWb3ZU52VZk=","tsn":"490937st,"request_id":"3ee0a-0c1712196e7-317f2700-d751c8e","user_id":"EASA68A7-780DEA22","return_cd":"10","app_name":"ALAO","event_type":"TEST_AUTH","event_id":"VIEW_LIST_RESPONSE","vardata":"[]","uri":https://wap-prod- /api/web-apps /authorizations,"error_msg":""

Parameters used:
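One hedged suggestion: delimiter-based extraction (DELIMS) tends to break on this event because several values (the URI, the session id) contain characters that look like delimiters, and one value ("490937st) is missing its closing quote. A regex-based transform that pulls quoted key/value pairs may be more robust; the stanza names below are examples, not the real ones from this deployment:

```ini
# transforms.conf -- extract every "key":"value" pair (stanza name is an example)
[extract_quoted_kv]
REGEX = "([^"]+)":"([^"]*)"
FORMAT = $1::$2

# props.conf -- attach the transform at search time (sourcetype name is an example)
[your_sourcetype]
REPORT-kv = extract_quoted_kv
```

Note this will skip malformed pairs such as the unterminated tsn value, which would need fixing at the source or a looser regex.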
I am trying to add a dashboard to the action dropdown in Incident Review under specific notables. How do I do this? I cannot seem to find ANY documentation on how to do it and would appreciate a link to it or an explanation of how.
Hello, I noticed that ... WHERE somefield = string1 OR string2 works the same way as ... WHERE somefield = string1 OR somefield=string2. Why is that? How does OR work with strings?
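One hedged observation (the exact behavior depends on whether this runs through the search command or the where command): in the search command, a bare term like string2 is matched against the whole raw event, which can coincidentally return the same results as somefield=string2 when that string only ever appears in that field. If the intent is to match either value against the same field, the unambiguous form is IN:

```spl
... | search somefield IN ("string1", "string2")
```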
Hi, I have this set up:
Splunk Enterprise with Stream enabled, set up on a VM
Splunk forwarder on my Windows machine, which works for now without SSL

I want to add the ability to read HTTPS traffic as well, but I'm not sure what to do after reading https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/EnableSSLforStreamForwarder

Has anyone here set up SSL for Stream on Windows machines (not servers), and how did you do it?
Dear all, I am new to Splunk. I want to extract data from one of our log files and create a dashboard visualization. I've tried following the material, but Splunk doesn't recognize the data. Your kickstart will give me a boost and confidence. I have copied a small part of the log from which I am trying to extract data. I would like a visualization of TYPE: LOC, Channel, and offset level. I need all data of TXT.

Printed on Aug 18, 2021 5:37:46
035: Aug 17, 2021 6:45:33 TYPE: LOC [+46.2 degC] -0.3200 ddm [ 90 Hz pred] 90Hz: 35.15 %mod 150Hz: 3.15 %mod Channel: 110.50 MHz -4.84 KHz offset level: -61.0 dBm
030: Aug 17, 2021 6:44:48 TYPE: LOC [+46.2 degC] -0.2915 ddm [ 90 Hz pred] 90Hz: 33.82 %mod 150Hz: 4.67 %mod Channel: 110.50 MHz -4.83 KHz offset level: -56.2 dBm
022: Aug 17, 2021 6:42:52 TYPE: LOC [+46.2 degC] -0.3360 ddm [ 90 Hz pred] 90Hz: 36.02 %mod 150Hz: 2.42 %mod Channel: 110.50 MHz -4.83 KHz offset level: -68.2 dBm
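A starting-point sketch (assuming each numbered record in the sample above is indexed as one event; the extracted field names are my own inventions):

```spl
... your base search ...
| rex "TYPE:\s+(?<type>\w+)"
| rex "Channel:\s+(?<channel_mhz>[\d.]+)\s+MHz\s+(?<offset_khz>-?[\d.]+)\s+KHz"
| rex "offset level:\s+(?<offset_dbm>-?[\d.]+)\s+dBm"
| table _time type channel_mhz offset_khz offset_dbm
```

Once those fields extract, a visualization is a small step away, e.g. | timechart avg(offset_dbm).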
When editing searches in ITSI, control-e expands macros and control-z undoes the last change.  I know this only by being told.  Where is documentation on these, and whatever other hotkeys are defined for this editor?
Rex field to pick the value as a column and the duration value as the row entry against it. Refer to the example below.

Desired table:
Date And Time   Siline  PrimaryAddress  SearchInServicing  NavExistingAddresses
2/11/1982 1:25  1.132   1.375           1.149              1.885

XML format:
<Transaction Name="Naviline" Time="02/11/1982 01:25:07:223" Duration="9.034" />
<Transaction Name="SePipeline" Time="02/11/1982 01:25:07:899" Duration="0.662" />
<Transaction Name="NdwIncuse" Time="02/11/1982 01:25:09:553" Duration="1.614" />
<Transaction Name="EnterDetails" Time="02/11/1982 01:25:11:532" Duration="1.916" />
<Transaction Name="SIline" Time="02/11/1982 01:25:12:703" Duration="1.132" />
<Transaction Name="GetWindowIn" Time="02/11/1982 01:25:20:748" Duration="7.957" />
<Transaction Name="PrimaryAddress" Time="02/11/1982 01:25:22:154" Duration="1.375" />
<Transaction Name="WindowingTouch" Time="02/11/1982 01:25:51:674" Duration="1.365" />
<Transaction Name="dailysearch" Time="02/11/1982 01:26:01:908" Duration="10.141" />
<Transaction Name="SearchInServicing" Time="02/11/1982 01:26:03:115" Duration="1.149" />
<Transaction Name="NavExistingAddresses" Time="02/11/1982 01:26:05:060" Duration="1.885" />
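One possible sketch (hedged; it assumes the <Transaction> elements arrive together in one event's _raw): extract the Name/Duration pairs as multivalue fields, zip and expand them, then pivot with xyseries.

```spl
... your base search ...
| rex max_match=0 "<Transaction Name=\"(?<name>[^\"]+)\"\s+Time=\"[^\"]+\"\s+Duration=\"(?<duration>[^\"]+)\""
| eval pair = mvzip(name, duration)
| mvexpand pair
| eval name = mvindex(split(pair, ","), 0), duration = mvindex(split(pair, ","), 1)
| xyseries _time name duration
```

This yields one column per transaction name (Siline, PrimaryAddress, ...) with the duration as the cell value.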
I want my event time to be the "Date" property in the following JSON:
{ "Level": "ERROR", "Date": "2021-08-20 17:21:53.6355", "Logger":.... }
I created a props.conf here: ...\Splunk\etc\system\local with:
TIME_PREFIX = "Date":\s"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
I then restarted Splunk, but it's not working. Any idea what I'm missing?
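A hedged sketch of what usually needs checking in this situation (the sourcetype name below is a placeholder): the settings must live under a stanza matching the data's sourcetype (or source/host), and timestamp settings only take effect on the parsing tier (indexer or heavy forwarder), and only for data indexed after the restart.

```ini
# props.conf -- [my_json] is a placeholder for the actual sourcetype
[my_json]
TIME_PREFIX = "Date":\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
MAX_TIMESTAMP_LOOKAHEAD = 40
```

If the settings were placed with no stanza header, or on a search head while a separate indexer does the parsing, they will have no effect.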
I am a newbie to Splunk. I have found that I have been able to re-create most of my reports and build them out into a usable dashboard or report. I have one that I just cannot seem to get correct, or get all the information into it the correct way. So here is what I have:

(Source) email=*, recipient_group="*", reported_phish="*" | timechart count(reported_phish) by recipient_group

This gets me really close: it splits the report into the three departments and gives a grand total of all the email phishing scenarios in the reported_phish field. If I change to reported_phish="Yes" I get everyone who has reported the phishing test, and if I use reported_phish="No" I get the same for the people who have not reported the phish email, so I believe the data I need is there for my graph. My desired final outcome is a chart where every department has the count of Yes and No answers as totals. The above shows the grand totals; I would like to split each department to reflect Yes and No along with the grand total. Again, I apologize for not being able to find the answer. I have tried split, append, and different charts from the community and Google, and I am just drawing a total blank.

Thank you in advance,
Jeff
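A hedged sketch of one way to get per-department Yes/No counts plus a grand total, keeping the original source filter from the question:

```spl
(Source) email=* recipient_group="*" reported_phish="*"
| chart count over recipient_group by reported_phish
| addtotals
```

chart count over recipient_group by reported_phish produces one row per department with one column per reported_phish value (Yes/No), and addtotals appends the grand total for each row.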
I am using Dashboard Studio and had a base search that is just a macro, then chained that to a search creating a table. When I try to use this in the table, nothing shows up. If I move the chained search fully into its own base search, I get results. Is there something I am missing; can a chained search not have certain commands? I have stats and evals in there as well; do they need to be part of the base search and not the chain?
I have an app that needs to be installed on a particular server in our network. We have Splunk Enterprise & ES. I need to learn how to install an app on this target server using my Deployment Server. I'd appreciate any instructions. Thank you.
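A rough sketch of the usual deployment-server flow (all names below are placeholders, not your actual host or app names): place the app under $SPLUNK_HOME/etc/deployment-apps/ on the deployment server, define a server class in serverclass.conf that whitelists the target host and maps the app to it, then reload the deployment server.

```ini
# serverclass.conf on the deployment server (names are examples)
[serverClass:target_app_class]
whitelist.0 = myserver.example.com

[serverClass:target_app_class:app:my_app]
restartSplunkd = true
```

The target server also needs a deploymentclient.conf pointing at the deployment server, and running `splunk reload deploy-server` pushes the change out.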
Hello all,

Our Splunk Enterprise Security uses the following correlation search for the "Detect New Local Admin Account" notables:

`wineventlog_security` EventCode=4720 OR (EventCode=4732 Group_Name=Administrators)
| transaction member_id connected=false maxspan=180m
| rename member_id as user
| stats count min(_time) as firstTime max(_time) as lastTime by user dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `detect_new_local_admin_account_filter`

This matches the correlation search at https://docs.splunksecurityessentials.com/content-detail/detect_new_local_admin_account/

The way it's written, the search returns any transaction with event code 4720, or event code 4732 with the phrase Administrators. It doesn't run a follow-up search on the transactions to make sure each transaction contains both a 4720 and a 4732 with the phrase Administrators. So we're getting one of these notables for every account created.

The page https://docs.splunksecurityessentials.com/content-detail/showcase_new_local_admin_account/ has this correlation search:

index=* source="*WinEventLog:Security" EventCode=4720 OR (EventCode=4732 Administrators)
| transaction Security_ID maxspan=180m connected=false
| search EventCode=4720 (EventCode=4732 Administrators)
| table _time EventCode Account_Name Target_Account_Name Message

If I swap out index=* source="*WinEventLog:Security" for `wineventlog_security`, that correlation search only returns true positives. The key difference between the two searches is the follow-up search that filters the transactions to those containing both 4720 and 4732 with the phrase Administrators.

Does anyone know why Splunk Enterprise Security and Splunk Security Essentials list that first correlation search? It seems not to do what it's supposed to do. Am I missing something?
I am using the Splunk field _time and subtracting my own time field open_date from it. The goal is to get the difference between these two timestamps. For example, one entry in the _time field is "2021-03-11 11:17:13" and one entry in the open_date field is "2021-06-07T14:50:42". I am running the following query operations:

| eval difference = abs(_time - strptime(open_date, "%Y-%m-%dT%H:%M:%S"))
| eval difference = strftime(difference,"%Y-%m-%dT%H:%M:%S")

When running this for the above example entries I get a difference output of "1970-03-29T20:10:19", which is obviously incorrect. What am I doing wrong, and how can I get the correct difference between these two fields?
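For what it's worth, the second eval is the likely culprit: the difference is a number of seconds, but strftime treats that number as an epoch timestamp (seconds since 1970-01-01), which is why the output looks like a date in 1970. A sketch of a fix, using the field names from the question:

```spl
| eval difference = abs(_time - strptime(open_date, "%Y-%m-%dT%H:%M:%S"))
| eval difference_readable = tostring(difference, "duration")
```

tostring(x, "duration") renders a number of seconds as D+HH:MM:SS, while the raw numeric difference field stays available for sorting or further math.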
Under "Settings > Access Controls > Password Policy Management", in the "Login Settings" section, there is a field named "Constant login time" with a caption that reads: "Sets a login time that stays consistent regardless of user settings. Set a time between .001 and 5 seconds. Set to 0 to disable the feature." I can't find this referenced in any Splunk docs or other posts. Can someone explain just what this is for? Thanks.
Hello. I am making dashboards using Meraki syslog. Does anyone have a good definition or description of the Meraki syslog fields? Thank you.
My goal is to calculate a confidence score based on how anomalous the number of failed logins is compared to activity over a 30-day period. Then I want to sort those scores in a table showing the users, maybe the relative time of the spike, and the average number of failed logins at that time. That way I can tune thresholds and whatnot.

This is what I've tried up until now. Even some "pseudocode" would help here. I understand that the way these commands pipe their output might be the problem too.

|from datamodel:Authentication.Authentication
| search action="failure"
| timechart span=1h count as num_failures by user
| stats avg(num_failures) as avg_fail_num
| trendline sma5(avg_fail_num) as moving_avg_failures
| eval score = (avg_fail_num/(moving_avg))
| table user, avg_fail_num, moving_avg, score
| sort - score

The score variable is supposed to increase the larger fail_num is compared to moving_avg, which should give me a confidence score for spikes. This should also help me quantify it for more analysis opportunities.

Also, I should clarify that I want this to detect users whose activity is specifically unlike their own normal activity, and also when failed logins go over a certain number. In other words, not the outliers in the big picture of failed logins, but rather when a specific user is acting weird and there is a huge increase in failed logins for that user. I want to be able to apply this query's structure to other situations.
Here are some of my other iterations/attempts at trying to do this (all with separate issues):

Using bin to get an average per hour:

|from datamodel:Authentication.Authentication
| search action="failure"
| bin _time span=1h
| stats count as fail_num by user, _time
| stats avg(fail_num) as avg_fail_num by user
| trendline sma24(avg_fail_num) as moving_avg
| eval moving_avg=moving_avg*3
| eval score = (avg_fail_num/(moving_avg))
| table user, _time, fail_num, avg_fail_num, moving_avg, score
| sort - score

Making a time variable to separate hours:

|from datamodel:Authentication.Authentication
| search action="failure"
| regex user="^([^\s]*)$"
| eval date_hour=strftime(_time,"%H")
| stats count as fail_num by user, date_hour
| stats avg(fail_num) as avg_fail_num by user, date_hour
| trendline sma24(avg_fail_num) as moving_avg
| eval moving_avg=moving_avg*1
| eval score = (avg_fail_num/(moving_avg))
| table user, date_hour, fail_num, avg_fail_num, moving_avg, score
| sort - score
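One more hedged sketch combining the two attempts above: compute hourly failure counts per user, compare each hour against that user's own trailing average with streamstats, and keep only hours that are both large in absolute terms and far above the user's baseline. The window size and thresholds below are placeholders to tune.

```spl
| from datamodel:Authentication.Authentication
| search action="failure"
| bin _time span=1h
| stats count as fail_num by user, _time
| streamstats window=720 current=false avg(fail_num) as moving_avg by user
| eval score = fail_num / moving_avg
| where fail_num > 10 AND score > 3
| table user, _time, fail_num, moving_avg, score
| sort - score
```

Caveat: stats only emits rows for hours in which failures occurred, so the moving average here is over each user's active hours, not over all 720 hours of a 30-day window; filling the gaps (e.g. with timechart plus untable) would make the baseline stricter.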