All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hey y'all — wanted to see if anyone has a simple solution for locating potential password compromises in a Windows AD environment. For example, in an Active Directory domain, when a user accidentally types their password into the username field and presses Enter, that information is sent to the Security log. The user then sees that their logon failed and attempts to log on again. The following SPL pulls the events, but it is not the best method. I have a manual method where I can pass a token from one panel to another, but I would like an automated method.

index=wineventlog source="wineventlog:security" EventCode=4625 OR (EventCode=4624 Logon_Type=2)
| eval Account = mvindex(Account_Name,1)   <- index 0 holds the computer name; 1 holds the user name
| transaction maxspan=1m startswith="EventCode=4625" endswith="EventCode=4624"
| table _time host EventCode Account *

From the SPL above, I would like the multivalue Account field in the table to contain no null values (one value will be the compromised password and the other the user name; sometimes no name shows with the 4625 event), and I would also like to require that one of the multivalue Account values be longer than 13 characters.
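A sketch of one way to add those two constraints after the transaction, using mvfilter and mvcount. The "-" placeholder is an assumption about how empty account names appear in these events; untested:

```
... base search, eval, and transaction as in the question ...
| eval Account=mvfilter(Account!="" AND Account!="-")
| where mvcount(Account)=2 AND mvcount(mvfilter(len(Account)>13))>=1
```

The first mvfilter drops empty/placeholder values, the where clause then keeps only rows with exactly two remaining values of which at least one is longer than 13 characters.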
Hi, I am trying to create two fields based on a condition. If the logs print "200 Emv error" or "NoAcquirerFoundConfigured", capture the event under Failed; otherwise count every other error under Declined.

eval Failed=if((Failure_Message=="200 Emv error " OR Failure_Message=="NoAcquirerFoundConfigured "),1,0)
eval Declined=if((Failure_Message!="200 Emv error " OR Failure_Message!="NoAcquirerFoundConfigured "),1,0)

But Declined seems to pick up the 200 Emv error as well when I filter with search Declined=1.

Sample logs:

13/07/2023 12:26:51 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessResponse::Response -> Result : Failure
13/07/2023 12:26:51 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessPaymentFailure::Additional response -> Message : 200 Emv error ; Refusal Reason : 200 Emv error

Error_Message = 200 Emv error
Failure_Message = 200 Emv error
Register = 04
Status = Failure
Store = tkg0554
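The OR in the Declined test is always true: no single value can equal both strings, so at least one of the != comparisons holds for every event, including "200 Emv error". By De Morgan's law, the negation of (A OR B) is (NOT A AND NOT B). A sketch keeping the question's field names and trailing spaces:

```
| eval Failed=if(Failure_Message=="200 Emv error " OR Failure_Message=="NoAcquirerFoundConfigured ",1,0)
| eval Declined=if(Failure_Message!="200 Emv error " AND Failure_Message!="NoAcquirerFoundConfigured ",1,0)
```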
Suppose there are 5 events in raw text in Splunk as below:

"host":"111.123.23.34","level":1,"msg":"cricket score : 10","time":"2023-07-11T17:28:33.265Z"
"host":"111.123.23.34","level":2,"msg":"cricket score : 20","time":"2023-07-11T17:28:33.265Z"
"host":"111.123.23.34","level":3,"msg":"cricket score : 30","time":"2023-07-11T17:28:33.265Z"
"host":"111.123.23.34","level":4,"msg":"cricket score : 40","time":"2023-07-11T17:28:33.265Z"
"host":"111.123.23.34","level":5,"msg":"cricket score : 50","time":"2023-07-11T17:28:33.265Z"

I need to create a Splunk query that produces the output below:

Total Number of events (count of all events): 5
Total Score (sum of all cricket scores): 150

Request your help with the same.
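A sketch of one approach: extract the score with rex, then aggregate with stats (the index name is a placeholder; untested):

```
index=your_index "cricket score"
| rex "cricket score : (?<score>\d+)"
| stats count AS "Total Number of events" sum(score) AS "Total Score"
```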
Hi Splunk Experts, I have a scheduled saved search that runs every 5 minutes, with a schedule window of 2 minutes. Instead of searching the last 5 minutes relative to the run time, I want windows aligned to 00-05 minutes, 05-10 minutes, 10-15 minutes, and so on. Is it possible to achieve this in the search? Could someone please shed some light? Thanks in advance!

| eval STime=now()-300, ETime=now()
| bin STime span=5m
| bin ETime span=5m
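One common approach, assuming the search is scheduled with a cron of */5 * * * *: use relative time modifiers with minute snapping so each run covers the previous whole minutes rather than "now minus 300 seconds". A sketch:

```
your search terms earliest=-5m@m latest=@m
```

Caveat: a schedule window lets Splunk delay the run by up to that window, which shifts the minute the snap lands on; if strict 00/05/10 boundaries matter, consider removing the schedule window or computing the boundaries explicitly with relative_time().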
Hi team, this issue is about resource consumption while running pods. In the YAML below, you can see the CPU limit is set to 950m:

resources:
  requests:
    memory: "300Mi"
    cpu: "500m"
  limits:
    memory: "1000Mi"
    cpu: "950m"

However, when the pods run, consumption goes up to 930m even though the application was not carrying an amount of load that would explain this, and I am unsure of the reason why. Please let us know if you need more information.
I have a search that returns a table like this:

column1  column2  column3
a        x        value1
                  value2
                  value3
b        y        value1
                  value2
                  value3

I need the column3 values to appear separated by "|", like this:

column1  column2  column3
a        x        value1|value2|value3
b        y        value1|value2|value3

Is it possible to format the data like this?
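Assuming column3 is a multivalue field (as the stacked display suggests), mvjoin collapses it into a single pipe-delimited string; a sketch:

```
... your search ...
| eval column3=mvjoin(column3, "|")
```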
Hi, we deployed a UF on a Windows Server 2022 and enabled [WinEventLog://Security] log collection. The log collection sometimes stops for hours, and we see this error:

ERROR ExecProcessor [6468 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - WinEventLogChannelBase::transADObject: Failed to convert guid string to guid structure: Invalid class string

After a few hours or minutes (randomly), Splunk resumes log collection and then stops again, all without any service restart. It only happens with Security event logs; there is no issue with Application or System.

Has anyone seen this error before?
Splunk UF version: 9.0.5 (64-bit)
Splunk_TA_windows: 8.7.0
Hi Splunkers (I know, you're starting to see my posts on this forum too much... sorry!), I'm a bit confused about managing the blacklist and whitelist mechanism for universal forwarders. As I wrote in other posts, we are managing Splunk Cloud for a customer where we are completing, for Windows logs, the migration from WMI to UF. With installation completed, we want to manage those UFs with a deployment server (DS). Reading the docs, I understood that the first step to tell a Splunk host "Hey, you are a DS!" is to create the first app to be deployed to clients. The example there uses outputs.conf but, since we already linked the UFs to our HF, we don't need that; we prefer to use inputs.conf, because we want to manage the blacklist and whitelist mechanism through the DS. The confusing thing for me is: if I want to tell a UF "Hey, collect only a subset of Windows event codes", I saw some posts here on the community where people got stuck with whitelist, and it was suggested that they use both parameters: whitelist and blacklist. What I don't understand is why, and what the final configuration should look like. For example, if I want to say in inputs.conf for Security logs "Hey, collect only 4624 and 4625", I will have something like this:

[WinEventLog://Security]
... <other parameters> ...
whitelist = ?
blacklist = ?
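For the "collect only 4624 and 4625" case, a whitelist alone is normally sufficient, because whitelisting event IDs implicitly excludes every ID not listed; blacklist is only needed for more complex include/exclude combinations. A minimal sketch of the stanza (untested):

```
[WinEventLog://Security]
disabled = 0
whitelist = 4624,4625
```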
Dears, I cannot open a ticket case:
Hi team, is there any way we can set up a single Splunk alert covering 4 host servers with different error thresholds? For example, I have 4 hosts: server1, server2, server3, server4.

If 10 errors occur for server1, it raises an alert stating server1 has 10 errors.
If 20 errors occur for server2, it raises an alert stating server2 has 20 errors.
If 5 errors occur for server3, it raises an alert stating server3 has 5 errors.
If 10 errors occur for server4, it raises an alert stating server4 has 10 errors.

I know this is possible by setting up 4 separate alerts, one per server; I just wanted to know if we can set up a single alert combining all the conditions. Please help with a sample search query. Thank you!
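One possible shape for a single alert: count errors per host, attach each host's threshold with case(), and keep only hosts over threshold, then trigger the alert when the number of results is greater than 0. The index name and "error" match are placeholders; untested:

```
index=your_index host IN (server1, server2, server3, server4) "error"
| stats count AS errors BY host
| eval threshold=case(host=="server1",10, host=="server2",20, host=="server3",5, host=="server4",10)
| where errors>=threshold
```

Each surviving row names the host and its error count, which can be referenced in the alert message.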
Hi, I am trying to add an option name to a pie chart:

<option name="charting.chart.showPercent">true</option>

and an option name to a bar chart:

<option name="charting.fieldColors">{"Nombre d'incidents": 0xF91805, "Moyenne": 0x639BF1}</option>

but Splunk 9.0.4 tells me "Unknown option name for node="chart"", while it was working before. The two options are between <chart></chart> tags. What happened, please?
Hello, I have installed the Jira Issue Input add-on (https://splunkbase.splunk.com/app/6168) for collecting Jira data in our Splunk Enterprise. We have configured the account and the required input; however, I can't see any data getting ingested into Splunk. The internal Jira log (ta_jira_issue_input_jira_issue.log) says:

2023-07-13 07:00:31,772 INFO pid=23702 tid=MainThread file=base_modinput.py:log_info:295 | The input xyz_jira_input ran successfully! There were no (new) Jira issues indexed during this interval.

Any idea why the data is not getting ingested? Happy to share more details as needed. Thanks
I have a search query and I want to add another condition to check the URL if test!=staging. The first test value comes in as a parameter and could be test, staging, or prod. I've written the following query, but the test!=staging part doesn't work and does not evaluate as true:

index="my_index" AND (test!=staging OR "Properties.URL"="*stg*") source=Payments
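One thing to be aware of: in SPL, field!=value only matches events where the field exists and differs, so events that lack a test field are silently excluded by test!=staging. NOT test="staging" also matches events without the field; a sketch under that assumption:

```
index="my_index" (NOT test="staging" OR "Properties.URL"="*stg*") source=Payments
```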
Hi team, I'm trying to find outliers in a network KPI for a project, but every time I run this query I get 0 outliers, so I'm stuck on what's wrong with it. This is it:

| mstats avg("Network_Interface.Bytes_Received/sec") AS Packets_Received stdev("Network_Interface.Bytes_Received/sec") AS stdev WHERE index=dcn_nc01_os AND host=HIC026117
| eval lowerBound=(Packets_Received-stdev*exact(2)), upperBound=(Packets_Received+stdev*exact(2))
| eval isOutlier=if('Network_Interface.Bytes_Received/sec' < lowerBound OR 'Network_Interface.Bytes_Received/sec' > upperBound, 1, 0)

Any ideas or fixes are much appreciated!
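Two likely causes: after the mstats aggregation, the field 'Network_Interface.Bytes_Received/sec' no longer exists (it was renamed to Packets_Received), so the if() compares against null; and without a span there is only a single overall average, which can never fall outside bounds built from itself. A sketch that bins the metric into intervals and compares each interval against bounds computed across all intervals (the 10m span is an arbitrary choice; untested):

```
| mstats avg("Network_Interface.Bytes_Received/sec") AS Packets_Received WHERE index=dcn_nc01_os AND host=HIC026117 span=10m
| eventstats avg(Packets_Received) AS mean stdev(Packets_Received) AS stdev
| eval lowerBound=mean-2*stdev, upperBound=mean+2*stdev
| eval isOutlier=if(Packets_Received<lowerBound OR Packets_Received>upperBound, 1, 0)
```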
Hello, I need help creating a choropleth map with a continent view by status, scenario, and use case. The fields I have are:

Country        Region    Status  Scenario  usecase
Australia      Asia      pass    abc       cookie
united states  Americas  fail    xyz       -
France         EMEA      delay   ghi       -

PS: I don't have iplocation in my data. The output I'm expecting is like this link: https://www.infoplease.com/geography/world-geography/continents
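A choropleth does not need iplocation; it needs each row mapped to a geographic feature, which the geom command does using the built-in geo_countries lookup. A sketch (untested; the Country values must match the feature names in geo_countries, e.g. "United States" rather than "united states"):

```
... your search ...
| stats count BY Country, Status
| geom geo_countries featureIdField=Country
```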
How can I modify or update the owner and app context for a dashboard view/panel via a REST API call?
I have a dashboard built with 3 panels. Panel 1 populates its data on first load. Panel 2 takes input from Panel 1 via drilldown and populates its data. Similarly, Panel 3 takes input from Panel 2 via drilldown and populates its data. Now, if I click a different value in Panel 1, Panel 2 recognizes it and refreshes its results, but Panel 3 keeps displaying results based on the previous input from Panel 2. Is there a way to overcome this, so that as soon as the input for Panel 2 changes, Panel 3 disappears or changes to "Search is waiting for input..."?
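In Simple XML, a drilldown can both set its own token and unset the downstream one, which puts the dependent panel back into the "waiting for input" state. A sketch (the token names are hypothetical placeholders for whatever Panel 2 and Panel 3 consume):

```xml
<drilldown>
  <set token="tok_panel2">$click.value$</set>
  <unset token="tok_panel3"/>
</drilldown>
```

Putting this in Panel 1's drilldown clears Panel 3 whenever Panel 1 is clicked; Panel 2's drilldown would then set tok_panel3 again.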
Hey guys! I need statistics for a bunch of data by month, and this part is done already:

search
| eval Month=strftime(_time,"%Y %m")
| stats count(mydata) AS nobs, mean(mydata) AS mean, min(mydata) AS min BY Month
| reverse

The output is what I want:

Month    nobs  mean        min
2023 06  1900  -5.0239778  -68.73417
2023 05  3562  -4.2430259  -67.134697
2023 04  3181  -4.1811658  -64.995394
2023 03  4274  -4.3373071  -134.20177
2023 02  3939  -4.7725011  -73.538274
2023 01  2868  -5.5231115  -41.056093
2022 12  395   -4.617424   -35.51642

Now I need to add another row at the very top with statistics for the most recent WEEK. Ideally, I can reuse the search result without searching again and degrading performance. Thanks!
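One way to get both aggregations in a single pass: make Month multivalued for recent events. stats by a multivalue field counts the event once under each value, so events from the last seven days contribute both to their month row and to a "Most recent week" row. A sketch (untested; the extra row will not sort naturally among the months, so it may need reordering):

```
search
| eval Month=strftime(_time,"%Y %m")
| eval Month=if(_time>=relative_time(now(),"-7d@d"), mvappend("Most recent week", Month), Month)
| stats count(mydata) AS nobs, mean(mydata) AS mean, min(mydata) AS min BY Month
| reverse
```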
I have JSON input. Please find the query below:

...
| stats values(*) AS * BY Id
| eval Status=if(match(Error,"^[a-zA-Z0-9_]"),"Failure","Success")
| stats count BY Dept, Status

I can print this in a dashboard:

Dept        Status   Count
Accounts    Success  4
Accounts    Failure  7
Mechanical  Success  4
Mechanical  Failure  4

I want to print it like this:

Dept        Success  Failure  total
Accounts    5        1        6
Mechanical  6        2        8

Please help here.
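A sketch of the usual pivot for this shape: replace the final stats with chart so Status values become columns, then add a row total (untested):

```
... earlier pipeline as in the question ...
| chart count over Dept by Status
| addtotals fieldname=total
```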
I have JSON event data like this (it is shown as a collapsible tree structure in the event view):

{
  "data": {
    "192.168.1.1": {
      "ip": "192.168.1.1",
      "number": 0,
      "list": [
        { "msg": "msg1", "time": "2023-07-01T01:00:00" },
        { "msg": "msg2", "time": "2023-07-01T02:00:00" },
        { "msg": "msg3", "time": "2023-07-01T03:00:00" }
      ]
    },
    "192.168.1.2": {
      "ip": "192.168.1.2",
      "number": 2,
      "list": [
        { "msg": "msg1", "time": "2023-07-02T01:00:00" },
        { "msg": "msg2", "time": "2023-07-02T02:00:00" }
      ]
    }
  }
}

Please note:
- The key names under "data" are not known beforehand, but they are guaranteed to be IP addresses. That means they contain dots, which makes direct addressing such as "data.192.168.1.1.ip" difficult.
- The number of entries in "data.X.list" is variable.
- "data.X.number" is just any number; it does not contain the length of the list.

I want to flatten the structure into tabular form, like this:

ip             number  msg     time
"192.168.1.1"  0       "msg1"  "2023-07-01T01:00:00"
"192.168.1.1"  0       "msg2"  "2023-07-01T02:00:00"
"192.168.1.1"  0       "msg3"  "2023-07-01T03:00:00"
"192.168.1.2"  2       "msg1"  "2023-07-02T01:00:00"
"192.168.1.2"  2       "msg2"  "2023-07-02T02:00:00"

My strategy so far was:
1. In the raw data (_raw), replace all dots in IP addresses with underscores, to avoid the dot-notation hassle.
2. Then use foreach to generate "iterator variables" for each ip entry.
3. Then iterate over all "iterator variables", using <<MATCHSTR>> as a placeholder for all spath operations within each IP address's sub-tree.
Something along the lines of:

| rex field=_raw mode=sed "s/192.168.([0-9]{1,3}).([0-9]{1,3})/192_168_\1_\2/g"
| foreach data.*.ip
    [ eval iterator_<<MATCHSTR>>='<<FIELD>>' ]
| foreach iterator_*
    [ spath path=data.<<MATCHSTR>>.list{} output=<<MATCHSTR>>_json
    | eval <<MATCHSTR>>_json=mvmap(<<MATCHSTR>>_json, <<MATCHSTR>>_json."##<<MATCHSTR>>")
    | eval messages=mvappend(messages, <<MATCHSTR>>_json)
    | fields - <<MATCHSTR>>_json ]

But the problems start with the fact that rex applied to _raw does not seem to have the desired effect. The closest I get are iterator variables still containing dotted IP addresses, such as "iterator_192.168.1.1". (This behaviour might be difficult to reproduce with makeresults sample data!) What am I missing here?
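One likely reason the dots survive: rex mode=sed rewrites _raw, but the data.* fields were already populated by automatic search-time JSON extraction from the original raw text, so their names keep the dotted IPs. An alternative that sidesteps spath and the unknown key names entirely is to pull the per-IP objects out of _raw with rex max_match=0, zip the aligned multivalue fields, and expand. A sketch against the sample shape above (untested; it assumes list entries contain no "]" and that msg/time values contain no "|"):

```
| rex max_match=0 field=_raw "\"ip\":\s*\"(?<ip>[^\"]+)\",\s*\"number\":\s*(?<number>\d+),\s*\"list\":\s*\[(?<entries>[^\]]+)\]"
| eval pair=mvzip(mvzip(ip, number, "|"), entries, "|")
| mvexpand pair
| eval ip=mvindex(split(pair,"|"),0), number=mvindex(split(pair,"|"),1), entries=mvindex(split(pair,"|"),2)
| rex max_match=0 field=entries "\"msg\":\s*\"(?<msg>[^\"]+)\",\s*\"time\":\s*\"(?<time>[^\"]+)\""
| eval z=mvzip(msg, time, "|")
| mvexpand z
| eval msg=mvindex(split(z,"|"),0), time=mvindex(split(z,"|"),1)
| table ip number msg time
```

The two mvzip/mvexpand rounds first give one row per IP, then one row per list entry, matching the desired table.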