All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Rex field to pick the value as a column and the duration value as a row against it. Please refer to the example below.

Desired output:

Date And Time    SIline   PrimaryAddress   SearchInServicing   NavExistingAddresses
2/11/1982 1:25   1.132    1.375            1.149               1.885

XML format:

<Transaction Name="Naviline" Time="02/11/1982 01:25:07:223" Duration="9.034" />
<Transaction Name="SePipeline" Time="02/11/1982 01:25:07:899" Duration="0.662" />
<Transaction Name="NdwIncuse" Time="02/11/1982 01:25:09:553" Duration="1.614" />
<Transaction Name="EnterDetails" Time="02/11/1982 01:25:11:532" Duration="1.916" />
<Transaction Name="SIline" Time="02/11/1982 01:25:12:703" Duration="1.132" />
<Transaction Name="GetWindowIn" Time="02/11/1982 01:25:20:748" Duration="7.957" />
<Transaction Name="PrimaryAddress" Time="02/11/1982 01:25:22:154" Duration="1.375" />
<Transaction Name="WindowingTouch" Time="02/11/1982 01:25:51:674" Duration="1.365" />
<Transaction Name="dailysearch" Time="02/11/1982 01:26:01:908" Duration="10.141" />
<Transaction Name="SearchInServicing" Time="02/11/1982 01:26:03:115" Duration="1.149" />
<Transaction Name="NavExistingAddresses" Time="02/11/1982 01:26:05:060" Duration="1.885" />
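A sketch of one way to get there, assuming each <Transaction> element is its own event and the raw text matches the sample above (the fields Name, TxnTime, and Duration are created here by the rex, not pre-existing):

| rex "Name=\"(?<Name>[^\"]+)\"\s+Time=\"(?<TxnTime>[^\"]+)\"\s+Duration=\"(?<Duration>[^\"]+)\""
| eval _time = strptime(TxnTime, "%m/%d/%Y %H:%M:%S:%3N")
| bin _time span=1m
| chart latest(Duration) over _time by Name

chart ... over _time by Name turns each Name value into a column with its Duration as the row value; a trailing fields or table command can then keep only the columns of interest.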
I want my time to be the "Date" property in the following JSON:

{ "Level": "ERROR", "Date": "2021-08-20 17:21:53.6355", "Logger":.... }

I created a props.conf here: ...\Splunk\etc\system\local with:

TIME_PREFIX = "Date":\s"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N

I then restarted Splunk, but it's not working. Any idea what I'm missing?
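One common gap: these settings only take effect inside a stanza matching the data, and only on the instance that parses it (indexer or heavy forwarder), for newly indexed events. A minimal sketch, where your_json_sourcetype is a placeholder for the actual sourcetype:

[your_json_sourcetype]
# TIME_PREFIX is a regex; quotes are literal characters here
TIME_PREFIX = "Date":\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
# limit how far past TIME_PREFIX Splunk scans for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 40

Already-indexed events keep their original timestamps, so the change only shows on data indexed after the restart.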
I am a newbie to Splunk. I have found that I have been able to re-create most of my reports and build them out into a usable dashboard or report. I have one that I just cannot seem to get correct, or get all the information into it the correct way. So here is what I have:

(Source) email=* recipient_group="*" reported_phish="*"
| timechart count(reported_phish) by recipient_group

This gets me really close: it will split the report out into the three departments and give a total of all the email phishing scenarios available in the reported_phish field as a grand total. If I change it to reported_phish="Yes" I get everyone that has reported the phishing test, and if I use reported_phish="No" I get the same for the people who have not reported the phish email, so I believe the data I need is there for my graph.

What my final outcome would be is a chart where every department has the count of yes and no answers as a total. Below shows the grand totals, and I would like to split each department to reflect yes and no along with the grand total. Again, I apologize for not being able to find the answer. I have tried split, append, and different charts from the community and Google, and I am just drawing a total blank.

Thank you in advance, Jeff
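A sketch of one way to get yes/no per department plus a total, assuming the same source search and field names as above:

(Source) email=* recipient_group="*" reported_phish="*"
| chart count over recipient_group by reported_phish
| addtotals fieldname=Total

chart ... over recipient_group by reported_phish produces one row per department with a column per reported_phish value, and addtotals appends the per-department grand total.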
I am using Dashboard Studio and had a base search that is just a macro, then chained that to a search creating a table. When I try to use this in the table, nothing shows up. If I take the chain search and move it fully to its own base search, I get results. Is there something I am missing, such as a chain search not being able to have certain commands? I have stats and evals in there also; do they need to be part of the base search and not the chain?
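Chain searches behave like post-process searches in Classic dashboards, so the usual guidance applies: the base search should generally end in a transforming command (stats, chart, table), and every field the chain needs has to survive the base. A sketch of the pattern, where my_macro is a hypothetical macro:

Base search:
`my_macro` | stats count by host, status

Chain search:
| where status="error" | table host, count

A base search that returns raw events only passes a limited field set to the chain, which often looks like "nothing shows up".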
I have an app that needs to be installed on a particular server in our network. We have Splunk Enterprise & ES. I need to learn how to install an app on this target server using my deployment server. I appreciate any instructions. Thank you.
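A sketch of the usual deployment server workflow, with my_app and targethost as placeholder names: copy the app directory to $SPLUNK_HOME/etc/deployment-apps/ on the deployment server, map it to the target in serverclass.conf, then reload.

# serverclass.conf on the deployment server
[serverClass:target_class]
whitelist.0 = targethost*

[serverClass:target_class:app:my_app]
restartSplunkd = true

# then, on the deployment server:
splunk reload deploy-server

The target must already be a deployment client, i.e. have a deploymentclient.conf pointing at the deployment server.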
Hello all,

Our Splunk Enterprise Security uses the following correlation search for the "Detect New Local Admin Account" notables:

`wineventlog_security` EventCode=4720 OR (EventCode=4732 Group_Name=Administrators)
| transaction member_id connected=false maxspan=180m
| rename member_id as user
| stats count min(_time) as firstTime max(_time) as lastTime by user dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `detect_new_local_admin_account_filter`

This matches the correlation search at https://docs.splunksecurityessentials.com/content-detail/detect_new_local_admin_account/

The way it's written, the search returns any transaction with an event code equal to 4720, or an event code equal to 4732 with the phrase Administrators. It doesn't apply a follow-on search to the transactions to make sure each one contains both a 4720 and a 4732 with the phrase Administrators. So we're getting one of these notables for every account created.

The page https://docs.splunksecurityessentials.com/content-detail/showcase_new_local_admin_account/ has this correlation search:

index=* source="*WinEventLog:Security" EventCode=4720 OR (EventCode=4732 Administrators)
| transaction Security_ID maxspan=180m connected=false
| search EventCode=4720 (EventCode=4732 Administrators)
| table _time EventCode Account_Name Target_Account_Name Message

If I swap out index=* source="*WinEventLog:Security" for `wineventlog_security`, that correlation search only returns true positives. The key difference between the searches is the follow-on search that filters the transactions to those containing both a 4720 and a 4732 with the phrase Administrators.

Does anyone know why Splunk Enterprise Security and Splunk Security Essentials have that first correlation search listed? It seems not to do what it's supposed to do. Am I missing something?
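For reference, a sketch of the ES search with the showcase version's post-transaction filter folded in (same macros and transaction key, just adding the filter described above):

`wineventlog_security` EventCode=4720 OR (EventCode=4732 Group_Name=Administrators)
| transaction member_id connected=false maxspan=180m
| search EventCode=4720 (EventCode=4732 Group_Name=Administrators)
| rename member_id as user
| stats count min(_time) as firstTime max(_time) as lastTime by user dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `detect_new_local_admin_account_filter`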
I am using the Splunk field _time and subtracting my own time field, open_date, from it. The goal is to get the difference between these two timestamps. For example, one entry in the _time field is "2021-03-11 11:17:13" and one entry in the open_date field is "2021-06-07T14:50:42".

I am running the current query operations:

| eval difference = abs(_time - strptime(open_date, "%Y-%m-%dT%H:%M:%S"))
| eval difference = strftime(difference,"%Y-%m-%dT%H:%M:%S")

When running this for the above example entries I get a difference output of "1970-03-29T20:10:19", which is obviously incorrect. What am I doing wrong, and how can I get the correct difference between these two fields?
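The first eval is fine; the second strftime treats the difference (a duration in seconds) as an epoch timestamp, which is why the output lands near 1970. A sketch of one fix, rendering the duration instead:

| eval difference = abs(_time - strptime(open_date, "%Y-%m-%dT%H:%M:%S"))
| eval difference_readable = tostring(difference, "duration")

tostring(X, "duration") formats a number of seconds as days+HH:MM:SS; leaving the numeric difference field in place keeps it sortable.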
Under "Settings > Access Controls > Password Policy Management" in the "Login Settings " section, there is a field named "Constant login time" with a caption that reads: "Sets a login time that sta... See more...
Under "Settings > Access Controls > Password Policy Management" in the "Login Settings " section, there is a field named "Constant login time" with a caption that reads: "Sets a login time that stays consistent regardless of user settings. Set a time between .001 and 5 seconds. Set to 0 to disable the feature." I can't find this referenced in any Splunk docs or other posts.  Can someone explain just what this is for? Thanks.
Hello. Making dashboards using Meraki syslog. Does anyone have a good definition or description of the Meraki syslog fields? Thank you.
My goal is to calculate a confidence score based on how anomalous the number of failed logins is compared to activity over a 30-day period. Then, I want to sort those scores in a table showing the users, maybe the relative time of the spike, and the average number of failed logins at that time. That way I can tune thresholds and whatnot.

This is what I've tried up until now. Even some "pseudocode" would help here. I understand that the way these commands output with the pipes might be the problem too.

| from datamodel:Authentication.Authentication
| search action="failure"
| timechart span=1h count as num_failures by user
| stats avg(num_failures) as avg_fail_num
| trendline sma5(avg_fail_num) as moving_avg_failures
| eval score = (avg_fail_num/(moving_avg))
| table user, avg_fail_num, moving_avg, score
| sort - score

The score variable is supposed to increase the larger fail_num is compared to the moving_avg, which should show me a confidence score on spikes. This should also help me quantify it for more analysis opportunities.

Also, I should clarify that I want this to detect users who specifically have activity unlike their normal activity, and also when failed logins go over a certain number. In other words, not the outliers in the big picture of failed logins, but rather when a user is acting weird and there is a huge increase in failed logins for that specific user. I want to be able to apply this query's structure to other situations.

Here are some of my other iterations/attempts at trying to do this (all with separate issues):

Using bin to get the average per hour:

| from datamodel:Authentication.Authentication
| search action="failure"
| bin _time span=1h
| stats count as fail_num by user, _time
| stats avg(fail_num) as avg_fail_num by user
| trendline sma24(avg_fail_num) as moving_avg
| eval moving_avg=moving_avg*3
| eval score = (avg_fail_num/(moving_avg))
| table user, _time, fail_num, avg_fail_num, moving_avg, score
| sort - score

Making a time variable to separate hours:

| from datamodel:Authentication.Authentication
| search action="failure"
| regex user="^([^\s]*)$"
| eval date_hour=strftime(_time,"%H")
| stats count as fail_num by user, date_hour
| stats avg(fail_num) as avg_fail_num by user, date_hour
| trendline sma24(avg_fail_num) as moving_avg
| eval moving_avg=moving_avg*1
| eval score = (avg_fail_num/(moving_avg))
| table user, date_hour, fail_num, avg_fail_num, moving_avg, score
| sort - score
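A sketch of one per-user approach, under stated assumptions: the drafts above lose user and _time at the stats step before the trendline runs, so the moving average is no longer per user. streamstats keeps one row per user per hour and computes a rolling baseline and spread, and the z-score-style eval then flags hours far from that user's own norm. The 24-hour window and the thresholds (score > 3, more than 10 failures) are illustrative numbers to tune, not recommendations.

| from datamodel:Authentication.Authentication
| search action="failure"
| bin _time span=1h
| stats count as fail_num by user, _time
| streamstats window=24 current=false avg(fail_num) as moving_avg stdev(fail_num) as fail_stdev by user
| eval score = if(fail_stdev > 0, (fail_num - moving_avg) / fail_stdev, 0)
| where score > 3 AND fail_num > 10
| table user, _time, fail_num, moving_avg, score
| sort - score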
I'm using Splunk ITSI, viewing its Episode Review. When an episode is opened, the episode list is compressed on the left side, and the opened episode is displayed on the right side. When this occurs, in the episode list, a count is added on the left side of that pane. It displays the number of notable events within each episode, unless that amount exceeds 99, in which case it shows "100+". When an episode is not opened, the count is only displayed if that field is included in those selected for display, but again, if the value exceeds 99, it displays "100+".

If the count is selected for display in the episode list, AND an episode is opened, THEN in the compressed episode list in the left pane, the count is added on the left side of the pane as before, but the selected count field still displays as well, AND the selected count field NOW displays the actual count values even if they exceed 99. HOW can I get the actual values exceeding 99 to display when no episode is open?
Is there a way to get the actual link for the alert when using the ServiceNow Incident Integration add-on, as you would get with the normal "Send email" option? I'm thinking it's a Custom fields setting, but I'm not sure. https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Usecustomsearchcommands See screenshots.
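One avenue to try, strictly an assumption about the add-on's behavior: Splunk's alert-action framework exposes a $results_link$ token (the same URL the email action uses), so if the add-on's Custom fields box accepts name=value pairs, something like the following might carry the link across (u_splunk_link is a hypothetical ServiceNow field):

custom_fields = u_splunk_link=$results_link$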
I need to add a file to a lookup list/table. Can someone please share how this is done?
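A sketch of one approach in SPL, assuming both files are already uploaded as lookup table files (my_lookup.csv and new_rows.csv are placeholder names): append the new file's rows to the existing lookup and write it back.

| inputlookup my_lookup.csv
| append [| inputlookup new_rows.csv]
| outputlookup my_lookup.csv

Alternatively, a CSV can be uploaded directly under Settings > Lookups > Lookup table files.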
We would like to be alerted when an alert has been changed. We use:

| rest /servicesNS/-/-/saved/searches

This call brings back the owner but not the most recent modifier's id. Is there any way to get the modifier id?
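One avenue to explore, sketched only and dependent on version and audit configuration: the saved/searches endpoint doesn't return a last-modifier field, but the _audit index records who issued configuration changes, so a keyword search there may surface the edit:

index=_audit sourcetype=audittrail savedsearches
| table _time user action info

Field names beyond user and action vary, so treat this as a starting point rather than a known-good query.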
We're trying to gather a list of servers, both Linux and Windows, that are missing specific software packages. It's easy enough to get the list of servers that has the software installed:

search software IN ("CrowdStrike")

I was hoping I could search against the software package, like:

search NOT software IN ("CrowdStrike")

but that still displays hosts with CrowdStrike installed, just not the particular event showing that CrowdStrike is indeed installed. I thought of making an eval:

| eval cs_win_installed=if(match(software, "CrowdStrike"),1,0)

and then searching for 0 or 1 depending on what I care about, but can I do that with all the software that I'm searching on? Running that eval for multiple pieces of software:

| eval cs_lin_is_installed=if(match(software, "falcon-sensor"),1,0)
| eval cs_win_is_installed=if(match(software, "CrowdStrike Windows Sensor"),1,0)
| eval q_is_installed=if(match(software, "Qualys*"),1,0)
| eval f_is_installed=if(match(software, "SecureConnector*"),1,0)

only returns the event showing that one piece of software on the machine. Am I overthinking this? How should I go about displaying hosts with missing software? Thanks much.
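The per-event NOT and evals can only see one inventory event at a time, which is why they miss. A sketch of an alternative, assuming the base search yields host and software fields: roll all software up per host first, then test the aggregate.

(your inventory search)
| stats values(software) as software by host
| eval cs_win=if(isnotnull(mvfind(software, "CrowdStrike Windows Sensor")), 1, 0)
| eval cs_lin=if(isnotnull(mvfind(software, "falcon-sensor")), 1, 0)
| eval qualys=if(isnotnull(mvfind(software, "Qualys")), 1, 0)
| where qualys=0 OR (cs_win=0 AND cs_lin=0)

mvfind returns null when no value in the multivalue field matches the regex, so each flag reflects the host's full inventory; adjust the where clause to whatever combination marks a host as missing coverage.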
I want to get a predicted value from the data statistics. Is it possible to output a predicted value for each pattern, from No.1, No.2 to No.3, like in the following data?

no,time,pattern1,pattern2,pattern3,pattern4,pattern5,pattern6
1,2021/8/1,3,17,20,25,26,29
2,2021/8/2,11,12,21,30,28,11
...

Are there any good methods for that?
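A sketch using the predict command, assuming the CSV is loaded (as events or via inputlookup) with the field names from the sample and a parseable time field:

| eval _time = strptime(time, "%Y/%m/%d")
| timechart span=1d first(pattern1) as pattern1 first(pattern2) as pattern2 first(pattern3) as pattern3
| predict pattern1 pattern2 pattern3 future_timespan=7

predict accepts multiple fields, and future_timespan controls how many periods ahead it forecasts; the data needs to be a regular time series first, which is what the timechart provides.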
New to Splunk and experimenting with a couple of functionalities, especially data aggregation. With the experimental file app_usage.csv, I was trying to see the percentile of Webmail using:

| inputlookup app_usage.csv | stats perc(Webmail, 10.0)

but it returns the error:

Percentile must be a floating point number that is >= 0 and < 100.

Not sure what to do. I also tried to cast Webmail to float, which failed:

| inputlookup app_usage.csv | eval Webmail=cast(Webmail, 'float')

with the error:

Error in 'eval' command: The 'cast' function is unsupported or undefined.

cast should be in the eval command, right? Based on the documentation.
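Two details worth checking, sketched against the same lookup: the percentile goes in the function name rather than as a second argument (percN(field)), and eval has no cast function; tonumber() does the conversion.

| inputlookup app_usage.csv
| eval Webmail = tonumber(Webmail)
| stats perc10(Webmail) as webmail_p10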
Hi Splunkers,

I have a query where I want to filter out all the legitimate processes by process path, having already identified which paths are legit. Basically this query is customized from ESCU, where all the elements are already set up to match the existing ESCU query exactly. What I expect is that the results displayed will not include entries from the lookup (whitelisted processes) that I call from the query. Fields: process, process_path

| tstats `security_content_summariesonly` count values(Processes.dest) as dest values(Processes.user) as user min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes by Processes.process_name, Processes.parent_process_path
| rename Processes.process_name as process, Processes.parent_process_path as process_path
| rex field=user "(?<user_domain>.*)\\\\(?<user_name>.*)"
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| search [| tstats count from datamodel=Endpoint.Processes by Processes.process_name, Processes.parent_process_path
  | rare Processes.process_name limit=30
  | rename Processes.process_name as process, Processes.parent_process_path as process_path
  | lookup update=true lookup_rare_process_allow_list_default2 process, process_path OUTPUTNEW allow_list
  | where allow_list="false"
  | lookup update=true lookup_rare_process_allow_list_local2 process, process_path OUTPUT allow_list
  | where allow_list="false"
  | table process process_path ]
| `detect_rare_executables_filter`

As you can see in the query above, the second tstats contains two lookups. The first lookup definition (lookup_rare_process_allow_list_default2) whitelists known existing processes (e.g. Splunk processes), and the second lookup definition (lookup_rare_process_allow_list_local2) is the full list of whitelisted processes.

The query above runs fine if I change both lookup lines to the following:

| lookup update=true lookup_rare_process_allow_list_default2 process OUTPUTNEW allow_list
| where allow_list="false"
| lookup update=true lookup_rare_process_allow_list_local2 process OUTPUT allow_list
| where allow_list="false"

But what I want is to match not only on field=process but on field=process_path as well. I've read the docs for lookup and other community posts, and it seems there should be no issue. No error is displayed when the first query runs; the result is just empty, and I think some string is not being passed through to display the result. I'd really be glad if someone can help me with this. Thanks!
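One common gotcha with multi-field lookups, sketched with the same lookup names: if either join field fails to match exactly (backslash direction, trailing characters, case), allow_list comes back null rather than "false", and the where clause then drops every row, which looks like an empty result. Filling the null first lets non-matches survive:

| lookup update=true lookup_rare_process_allow_list_local2 process, process_path OUTPUT allow_list
| fillnull value="false" allow_list
| where allow_list="false"

Comparing a few raw process_path values side by side with the lookup rows should confirm whether the join keys really line up.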
How can I split a field into many other fields, without using a delimiter, using position ranges instead? For example:

bignumber = 16563764

I need to split it into:

account id = positions [0 to 3] of field "bignumber"
company code = positions [4 to 6] of field "bignumber"
operation code = position [7] of field "bignumber"

Thanks!!
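A sketch with eval's substr, which is 1-based, so the zero-based positions 0 to 3 map to substr(bignumber, 1, 4):

| eval account_id = substr(bignumber, 1, 4)
| eval company_code = substr(bignumber, 5, 3)
| eval operation_code = substr(bignumber, 8, 1)

For bignumber = 16563764 this yields account_id = 1656, company_code = 376, and operation_code = 4.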