All Topics

I have created a Windows-level brute-force attack alert that fires when X authentication failures occur in a 15-minute interval. Sometimes I see an alert where the user is the same as the hostname but ends in a $ sign. I have searched the Windows documentation, but it is not completely clear to me. Could someone give me their opinion on what this is, and whether it is relevant or a false positive? Thanks.
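Account names ending in $ are typically Windows machine (computer) accounts rather than human users. If that turns out to be the case here, a hedged sketch of excluding them from the alert — the index, event code, and threshold below are assumptions, not taken from the original alert:

```
index=wineventlog EventCode=4625
| where NOT match(user, "\$$")
| bin _time span=15m
| stats count by user, _time
| where count > 10
```

Whether machine-account failures are benign depends on the environment; some indicate misconfigured services rather than attacks, so suppressing them outright is a judgment call.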
Hi Team, I need to print two values from an index with different earliest values. Please see the example below.

index=abcd cust_name="*" earliest=-30d@d
| fields cust_name, origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID
| eval id=coalesce(origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID)
| rename cust_name as Customer
| dedup id
| stats count(id) as MAU by Customer
| appendcols
    [ search index=abcd cust_name="*" earliest=-14d
    | fields cust_name, origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID
    | eval id=coalesce(origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID)
    | rename cust_name as Customer
    | dedup id
    | stats count(id) as DAU by Customer
    | eval DAU=round(DAU/14,2) ]

My problem is that I am using two earliest values: with the first earliest value I have a customer count of 57, and with the second the customer count is 43, so when both values are printed, one column shows the wrong value.
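For what it's worth, appendcols pastes columns together by row position, not by the Customer value, so when the two searches return different sets of customers the rows get misaligned. A hedged sketch that avoids the second search entirely by flagging the last 14 days inside the 30-day search — untested, and it assumes the event dedup keeps per id (the most recent one) is a fair basis for the 14-day flag:

```
index=abcd cust_name="*" earliest=-30d@d
| eval id=coalesce(origUserid, destUserid, finalCalledPartyUnicodeLoginUserID, callingPartyUnicodeLoginUserID)
| rename cust_name as Customer
| dedup id
| eval recent=if(_time >= relative_time(now(), "-14d"), 1, 0)
| stats count(id) as MAU, sum(recent) as recent_users by Customer
| eval DAU=round(recent_users/14, 2)
| fields Customer, MAU, DAU
```

Because everything is computed in one pass per Customer, every row's MAU and DAU refer to the same customer by construction.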
I have parts of a Windows .NET application that are installed as services and run as services under an account on Windows Server. I would like to monitor the number of threads that these services are consuming. Is there a way to do that with Splunk? I have begun looking at the documentation for APM, but I am not convinced this is possible under Windows + .NET. Thoughts? Suggestions? Regards, Joel
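If a universal forwarder is (or can be) installed on the host, one option that doesn't need APM is a Windows performance-monitor input collecting the Process object's Thread Count counter. A hedged inputs.conf sketch — the stanza name, interval, and index are placeholders:

```
[perfmon://ProcessThreadCount]
object = Process
counters = Thread Count
instances = *
interval = 60
index = perfmon
disabled = 0
```

The service processes can then be filtered by the instance field at search time.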
I would like to determine how many times an app on a deployment server has been deployed. I'm not concerned with the host information. I'm trying to determine which apps are no longer being used and can be archived. I suspect the answer would come from the | rest command:

| rest splunk_server=127.0.0.1 /services/deployment/server/clients

I'm just not sure which fields to use or how to calculate this accurately. Looking for suggestions.
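A hedged sketch run on the deployment server itself — the nested field names under applications.* can vary by Splunk version, so verify them against the raw REST output first:

```
| rest splunk_server=local /services/deployment/server/clients
| table hostname applications.*
| untable hostname application value
| rex field=application "^applications\.(?<app>[^.]+)\."
| stats dc(hostname) as client_count by app
| sort client_count
```

Deployment apps that never appear in this output would be the candidates for archiving.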
Hi there, good day. Is there an SPL-based way to look for UF connectivity on port 9997 to non-Splunk destinations? Also, is there any Splunk documentation listing what IP addresses Splunk Cloud uses, or whether Splunk has an allocated IP address range? Thanks in advance for the help and support.
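For the first part, the forwarders' own metrics may help: metrics.log records where each tcpout connection goes. A hedged sketch — the group and field names assume the standard metrics.log format, so verify against your own _internal data:

```
index=_internal source=*metrics.log* group=tcpout_connections
| search destPort=9997
| stats count by host, destIp, destPort
```

Comparing destIp against your known indexer addresses would surface any unexpected destinations.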
Hello, I went through some of the online resources to get a clear idea of what protocols the Splunk UF/HF uses to send data/events to the Splunk indexer, but I couldn't find a clear answer. Any help/info on what protocol the UF/HF uses would be highly appreciated. Thank you so much.
Hi, I am getting a Splunk "Waiting for queued job to start" error for a particular user; however, no jobs are queued for that user in the Job Monitor. The role can run 50 concurrent searches, and only this user is affected. Also, no jobs are in the Queued, Parsing, Finalizing, or Finalized states. Any advice? Thanks.
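One hedged way to cross-check what the search head itself thinks is running per user — the owner lives under eai:acl, and field names may differ slightly by version:

```
| rest /services/search/jobs
| search dispatchState!=DONE
| rename eai:acl.owner as owner
| stats count by owner, dispatchState
```

If the affected user shows jobs here that the Job Monitor hides, orphaned or background jobs would be the next thing to rule out.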
Hi folks, hoping you might be able to help. I have some raw logs coming in, and one of the extracted fields contains all the system information I need, e.g.:

Remote Desktop Protocol:
OS: Windows 10/Windows Server
OS Build: 10.0.18362
Target Name: NACORP
NetBIOS Domain Name: NACORP
NetBIOS Computer Name: 1234ABC
DNS Domain Name: na.corp.xxxxx.com
System Time: 2021-12-12 22:55:40.534959
Authorized Use Only - v 041001 This system is for the use of authorized users only.

How do I extract specific pieces from it? I need NetBIOS Domain Name, NetBIOS Computer Name, and System Time. I thought of using a regex, but I'm not sure how to build it, since the NetBIOS Domain Name values vary: one will be NACORP, another USON, and another NACORP.LOCAL. It's similar with the computer name; there's no unified naming convention, as it depends on which country we get the logs from. Some can have a "-" in the middle, some a ".", etc. Any hints and tips on how to tackle this will be more than appreciated!
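Since each value sits after its own label, anchoring the regex on the label means the value's format doesn't matter much. A hedged sketch — system_info is a placeholder for the actual extracted field name:

```
| rex field=system_info "NetBIOS Domain Name:\s+(?<netbios_domain>\S+)"
| rex field=system_info "NetBIOS Computer Name:\s+(?<netbios_computer>\S+)"
| rex field=system_info "System Time:\s+(?<system_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)"
```

\S+ captures everything up to the next whitespace, which should handle NACORP, USON, NACORP.LOCAL, and hyphenated or dotted computer names alike.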
I upgraded Splunk Enterprise to 8.1.8 from 8.0.6. I am now getting messages saying I have gone over the indexing limit on 45 of the allowed 60 days. Looking at the indexing, the largest amount is from internal Splunk. I have a single instance. The first three indexes are internal Splunk indexes, the largest source is the Splunk metrics log, and the sourcetypes splunk_metrics_log and splunkd make up a major portion of the indexed data. My question is: why are the internal Splunk processes counting toward my indexing? Regards, Scott Runyon
Hi everyone. I have three charts in a panel in a Simple XML dashboard, and I'm trying to programmatically (i.e., with tokens) sync the maximum value of the Y axis. The idea is that the value, determined as the maximum across all three charts, is used in all three charts. I tried setting a token in each of the three searches and then another token as the max of those tokens. The resulting token was used to set the maximum Y value, but it doesn't work. This is the token initialization:

<init>
  <set token="max_y_version">0</set>
  <set token="max_y_version1">0</set>
  <set token="max_y_version2">0</set>
  <set token="max_y_version3">0</set>
</init>

And this is an example chart (I have three of these):

<chart>
  <search>
    <done>
      <eval token="max_y_version1">$result.max_count$</eval>
      <eval token="max_y_version">max($max_y_version1$, $max_y_version2$, $max_y_version3$)</eval>
    </done>
    <query>(some query which creates results containing max_count in the first row...)</query>
  </search>
  <option name="charting.axisY.maximumNumber">$max_y_version$</option>
  <option name="charting.axisTitleX.text">Date</option>
  <option name="charting.axisTitleX.visibility">collapsed</option>
  <option name="charting.chart">column</option>
  <option name="charting.chart.stackMode">stacked</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.legend.placement">top</option>
  <option name="refresh.display">progressbar</option>
</chart>

Any clues? Thanks.
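One thing worth checking, offered only as a hedged guess: token values substitute into the eval as text, so the comparison may not be numeric. Forcing each operand through tonumber() inside the <done> handler might help, roughly like this (untested):

```xml
<eval token="max_y_version">max(tonumber($max_y_version1$), tonumber($max_y_version2$), tonumber($max_y_version3$))</eval>
```

If the tokens do update but the charts don't redraw, that would point to the option itself rather than the eval.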
Hello everyone, I need some help. On my search head splunkSRCH637T7.local, if I search for any logs by sourcetype, no logs are returned. The health status of splunkd shows TailReader with: "Root cause: the monitor input cannot produce data because splunkd's processing queues are full. This will be caused by an inadequate indexing or forwarding rate, or a sudden burst of incoming data." Could anyone please help? Please see my screenshot.
This ^ is a sample XML log file that I want to onboard. Please guide me on the settings I should use in order to properly input this data. Also, please tell me on which instances the settings (props.conf and transforms.conf) are required. I am running a distributed system with indexer clustering.
Hi all, does this add-on support Azure certificate-based authentication? The documentation seems to have steps for using Client ID + Client Secret and doesn't mention certificate-based authentication, but I wanted to double-check. Thanks, Chris
Hi, I have installed and configured the Palo Alto add-on, which is creating multiple eventtypes, one of which is pan_traffic_start; I believe these are the session start logs. I want to remove these particular events from the logs so that license can be saved, since there is not much value in them. Can someone please help me filter out this eventtype so the events never reach the indexers or search heads?
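Eventtypes are applied at search time, so the filtering has to key off the raw events instead. A hedged props/transforms sketch for dropping PAN session-start traffic before indexing — place it on the heavy forwarder or indexers, whichever first parses the data; the sourcetype and the ,start, regex are assumptions, so verify both against your raw events before deploying:

```
# props.conf
[pan:traffic]
TRANSFORMS-drop_start = drop_pan_traffic_start

# transforms.conf
[drop_pan_traffic_start]
REGEX = ,start,
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are routed to nullQueue and discarded, so they never count against the license.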
Hello, I have a query that returns results if I run it over 1 hour, but if I try to run it for more than 1 hour it returns no results.

index=clientlogs sourcetype=clientlogs Mode=Real ApplicationIdentifier="*" "orders-for-open" (Action="OpenPositionRequest" AND Level=Info)
| eval StartTime=strptime(ClientDateTime,"%Y-%m-%dT%H:%M:%S.%3N")
| rename Request_Id AS RequestId
| stats min(StartTime) as StartTime min(_time) AS _time BY RequestId
| join RequestId
    [ search index=clientlogs sourcetype=clientlogs Mode=Real ApplicationIdentifier="*" Message="Create OrderForOpen"
    | rename OrderID AS PushEventData_Position_OrderID ]
| join PushEventData_Position_OrderID
    [ search index=clientlogs sourcetype=clientlogs Mode=Real ApplicationIdentifier="*" (Message="Position.Open" AND (PushEventData_Position_OrderType=17 OR PushEventData_Position_OrderType=18))
    | eval finishTime=strptime(ClientDateTime,"%Y-%m-%dT%H:%M:%S.%3N")
    | stats min(finishTime) as finishTime min(_time) AS _time BY PushEventData_Position_OrderID ]
| eval Latency=finishTime-StartTime
| where Latency>0
| timechart avg(Latency) span=1m
Hi, I'm trying to figure out how to get data for the past few weeks, with the data filtered so that each week runs from Saturday (of the previous week) to Friday. I will send a report every Friday. The report should look like this:

DATE           COUNT    NAME
21-01-22      58             one
14-01-22      58             one
07-01-22      45             two

Thus, on the next Friday, one more row is added to the report:

DATE           COUNT    NAME
28-01-22      61             one
21-01-22      58             one
14-01-22      58             one
07-01-22      45             two

@ITWhisperer @gcusello
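A hedged sketch of Saturday-to-Friday bucketing: in Splunk time modifiers, @w6 snaps to the most recent Saturday, so each event can be stamped with the Saturday that starts its week. The index, the NAME field, and the 4-week window below are placeholders:

```
index=your_index earliest=-4w@w6 latest=@w6
| eval week_start=relative_time(_time, "@w6")
| stats count as COUNT by week_start, NAME
| eval DATE=strftime(week_start, "%d-%m-%y")
| sort - week_start
| table DATE, COUNT, NAME
```

Sorting on the epoch week_start (rather than the dd-mm-yy string) keeps the rows in true chronological order.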
Hi, all! I wish to display events without fields like "host", "source", and "sourcetype", as in the photo below, on my dashboard. But when I save it as a dashboard, it still shows these fields! How can I solve this problem?
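One hedged possibility for Simple XML: an events panel accepts a <fields> child element that controls which fields are listed under each event, so restricting it may hide host/source/sourcetype. A sketch, untested, with a placeholder query:

```xml
<event>
  <search>
    <query>index=main</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <fields>[]</fields>
  <option name="list.drilldown">none</option>
</event>
```

Whether an empty list suppresses the default fields entirely may depend on the Splunk version, so it's worth testing with a single named field first.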
Hi, we are having issues integrating full compatibility of Splunk Enterprise alerts with Opsgenie. The current Splunk app for Opsgenie is not editable like Slack or e-mail, where you can choose what to capture directly. This somewhat limits our delivery of alerts and makes them less dynamic. The fields captured by Opsgenie lack the critical component we would like to have, i.e., MESSAGE. To give you a bit of insight, our team is a 24x7 NOC that should receive Splunk alerts forwarded into Opsgenie, and the alert must contain free-text input related to triage steps and Confluence links. I would like to know if there are alternatives in Splunk, for example concatenating free text into a Splunk search query so that it can be captured by the current Opsgenie setup. For example:

Base query:
index=*titanic*

Free-text query:
index=*titanic* | It doesn't end well

In the latter example, I want Splunk to concatenate the text to the search so I can append it to an alert, with the free-text part including the triage steps and links my team needs to go directly to Confluence. I don't know if this is possible, but maybe someone knows.
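Pipes can only carry valid SPL, but free text can be attached to every result as a field with eval, which the alert payload may then carry through. A hedged sketch — whether Opsgenie actually picks up the field depends on how the integration maps result fields, which I can't confirm, and the runbook text and link below are placeholders:

```
index=*titanic*
| eval triage_notes="It doesn't end well. Triage steps: <your runbook text>. Confluence: <link>"
```

If the integration exposes result field tokens (as the e-mail action does with $result.fieldname$), triage_notes could then be referenced there.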
On a Splunk search head, with the following query, the total per service is displayed for each 2-day span:

timechart limit=0 useother=false span=2d count by service

Every two days, I would like to output the total for just the aggregation day itself. What would that query look like?