All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I want to add a cloud database for monitoring in AppDynamics, but the connection is not being created; its status shows as "?". The server we use for monitoring is a local server, but the cloud database is external. How can I make a connection between the two so I can monitor the cloud DB? Is a private key used to enable the monitoring? Regards, Sania.
Hi, I'm trying to detect brute-force activity by detecting multiple auth failures followed by a success. I started with the following search, which works and shows when there have been over 20 failures and at least 1 success, but the success can happen anywhere during the search period: it could be 1 success followed by 20 failures, or the success can happen in the middle.

    index=main sourcetype="wineventlog" (EventCode=4624 OR EventCode=4625) Logon_Type IN (2,3,8,10,11) user!=*$
    | bin _time span=5m as Time
    | stats count(eval(match(Keywords,"Audit Failure"))) as Failed,
            count(eval(match(Keywords,"Audit Success"))) as Success,
            count(eval(match(lower(Status),"0xc0000224"))) as "PwChangeReq",
            count(eval(match(lower(Sub_Status),"0xc0000071"))) as "Expired",
            count(eval(match(lower(Status),"0xc0000234"))) as "Locked"
        by Time user src_ip
    | where Success>0 AND Failed>=20 AND PwChangeReq=0 AND Locked=0 AND Expired=0

I need the query to trigger only if the success happens after 20 failures. I found some examples using streamstats, so I created the following search, but it's not working properly because the reset_after clears the failure_count for all src_ip. Therefore, as long as there is 1 success from any IP address, the failure_count gets reset and I never see the failure count reach 20.

    index=main sourcetype="wineventlog" EventCode IN (4624,4625) Logon_Type IN (2,3,8,10,11)
    | eval action=if(match(Keywords,"Audit Failure"),"failed","success")
    | reverse
    | streamstats window=0 current=true reset_after="("action==\"success\"")" count as failure_count by src_ip
    | where action="success" and failure_count > 20
    | table _time, user, src_ip, action, failure_count

Is streamstats the way to go? Or how can I set up a query to detect the success after more than 20 failures?
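Outside SPL, the behaviour the poster wants can be sketched in plain Python: keep an independent consecutive-failure counter per src_ip, and only the success from that same source checks and resets it. The field names and threshold follow the search above; the sample events are invented for illustration, not real log data.

```python
# Per-source brute-force detection: flag a success that follows
# 20+ consecutive failures from the *same* src_ip.
from collections import defaultdict

def detect_bruteforce(events, threshold=20):
    """events: list of (src_ip, action) tuples in time order."""
    failures = defaultdict(int)  # consecutive failure count per source
    alerts = []
    for src_ip, action in events:
        if action == "failed":
            failures[src_ip] += 1
        else:
            # A success resets the counter for this source only,
            # alerting first if the threshold was already reached.
            if failures[src_ip] >= threshold:
                alerts.append(src_ip)
            failures[src_ip] = 0
    return alerts

# A success from 10.0.0.2 must not reset 10.0.0.1's counter:
events = [("10.0.0.1", "failed")] * 20 + [("10.0.0.2", "success"), ("10.0.0.1", "success")]
print(detect_bruteforce(events))  # ['10.0.0.1']
```

This mirrors what a per-source reset should do; in SPL the analogous idea is making the reset condition apply within each `by src_ip` group rather than globally.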
I have a multiselect input that gets populated by results from a search. When I set searchWhenChanged="false", it doesn't add the selectedValue to the multiselect. This is some broken code that I spaghettied together from searching; clearly I'm not great with Splunk:

    require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
        console.log("multiselect_functions.js loaded.");

        function setupMultiInput(instance_id) {
            // Get the multiselect
            var multi = mvc.Components.get(instance_id);
            // On change, check the selection
            multi.on("change", function () {
                console.log("change " + multi.val());
                multi.settings.set.choices("choices", ['53', '57']); // blah
            });
        }

        var all_multi_selects = document.getElementsByClassName("input-multiselect");
        for (var j = 0; j < all_multi_selects.length; j++) {
            setupMultiInput(all_multi_selects[j].id);
        }
    });

Above I'm just trying to insert some random values. I get a "multi.settings.set.choices is not a function" error, so I am probably calling this on the wrong object. Any help would be greatly appreciated, thank you.
I have an event that logs the following:

    ...
    startTime: 2020-07-17T17:48:46Z
    endTime: 2020-07-17T17:52:27Z
    ...

I can pull out the startTime and endTime values with regex. However, I also have a different event that comes in randomly; call this the triggerEvent. I basically want to alert whenever the triggerEvent comes in, provided it is not within the time period (between startTime and endTime) of any of the previous types of events I described. There may be multiple events over multiple days, so I need to check that it doesn't occur during any of those time periods. Any feedback is appreciated!
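The core check being asked for is interval containment: alert only if the trigger timestamp falls outside every known [startTime, endTime] window. A minimal sketch in Python, using the ISO 8601 form shown in the event above (the trigger timestamps are invented examples):

```python
# Alert only if the trigger time is outside every [startTime, endTime] window.
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"  # matches the timestamps in the event sample

def should_alert(trigger, windows):
    """windows: list of (startTime, endTime) string pairs."""
    t = datetime.strptime(trigger, FMT)
    return not any(
        datetime.strptime(start, FMT) <= t <= datetime.strptime(end, FMT)
        for start, end in windows
    )

windows = [("2020-07-17T17:48:46Z", "2020-07-17T17:52:27Z")]
print(should_alert("2020-07-17T17:50:00Z", windows))  # False: inside a window, suppress
print(should_alert("2020-07-17T18:00:00Z", windows))  # True: outside all windows, alert
```

In Splunk the window list would come from the start/end extractions over the lookback period (e.g. collected into a lookup or subsearch) rather than a hard-coded list.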
I have an alert for excessive login failures configured to fire when a PC reports a greater-than-normal number of login attempts over a 5-minute period. But the alert doesn't identify the PC that generated it. Can this be configured?
The volume where the hot/warm data resides must run on a disk with at least 1200 IOPS, since Splunk Enterprise Security is part of our deployment. Is there a "safe" IOPS estimate for the cold bucket volume? Cold data will not be accessed regularly, since we're sizing our indexes to maintain around 30 days of data in the hot/warm buckets (based on the amount of incoming volume) and the remaining 60 days in the cold buckets. We have a requirement to keep 90 days of data online. Would searches that look for data located in our cold buckets still be able to execute if the cold buckets were running on less than 1200 IOPS? I was thinking 300-400 IOPS, since I'm trying to conserve costs associated with disk performance and I/O.
Hi, I'm using a Report (because I need to set permissions on the data) in a dashboard, passing tokens. Looking at the docs, I can use the "savedsearch" command: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Savedsearch

    | savedsearch "MyReport" emailsubject_tok="Long Subject Name with + | and spaces"

When I look at the job log, only the first word is being replaced, so for my example the job log shows emailsubject_tok as "Long". How can I pass this in as a literal string? I'm trying not to modify the string itself, as this will be a user cutting and pasting email subject text. Thank you! Chris
Hi, I have a data set that is getting ingested from the source into Splunk. Using auto extraction, fields are extracted as they should be. In this data I have a field named pluginText. This field contains a lot of information, e.g. software installed on endpoints, updates installed, etc. I need to extract this information from the field. A sample is below. What is the best approach? I'm open to either configuring a field extraction in the configs or doing it in the actual Splunk search using rex or eval.

    pluginText: <plugin_output>
    The following software are installed on the remote host :

    KB3171021 [version 12.2.5000.0] [installed on 2018/06/11]
    Service Pack 3 for SQL Server 2014 (KB4022619) (64-bit) [version 12.3.6024.0] [installed on 2020/06/23]
    KB4052725 [version 12.2.5571.0] [installed on 2018/06/11]
    Veritas NetBackup Client [version 8.1.2] [installed on 2020/05/18]
    Windows Policy Checker 8.0.1
    SQL Server 2014 Reporting Services [version 12.3.6024.0] [installed on 2020/06/23]
    Microsoft Visual Studio 2015 Shell (Minimum) [version 14.0.23107] [installed on 2019/09/11]
    Microsoft Visual Studio Tools for Applications 2015 Language Support - ENU Language Pack [version 14.0.23107.20] [installed on 2019/09/11]

    The following updates are installed :

    Microsoft .NET Framework 4 Multi-Targeting Pack :
    KB2504637 [version 1] [installed on 9/11/2019]
    Microsoft Visual C++ 2010 x64 Redistributable - 10.0.40219 :
    KB2151757 [version 1] [installed on 6/8/2018]
    KB2467173 [version 1] [installed on 6/8/2018]
    KB2565063 [version 1] [installed on 6/8/2018]
    KB982573 [version 1] [installed on 6/8/2018]
    Microsoft Visual C++ 2010 x86 Redistributable - 10.0.40219 :
    KB2151757 [version 1] [installed on 6/11/2018]
    KB2467173 [version 1] [installed on 6/11/2018]
    KB2565063 [version 1] [installed on 6/11/2018]
    </plugin_output>

Thanks in advance!!
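One way to prototype this kind of extraction before committing it to a Splunk rex or transforms.conf is to run a candidate regex over the sample text. The pattern below is a sketch (an assumption, not a tested Splunk extraction) that captures a name, version, and optional install date from each `[version ...] [installed on ...]` line:

```python
# Prototype extraction of software name / version / install date from
# the pluginText sample. The regex would need tuning before use in Splunk.
import re

SAMPLE = """KB3171021 [version 12.2.5000.0] [installed on 2018/06/11]
Veritas NetBackup Client [version 8.1.2] [installed on 2020/05/18]
Windows Policy Checker 8.0.1"""

PATTERN = re.compile(
    r"^(?P<name>.+?)\s+\[version (?P<version>[^\]]+)\]"
    r"(?:\s+\[installed on (?P<installed>[^\]]+)\])?",
    re.MULTILINE,
)

for m in PATTERN.finditer(SAMPLE):
    print(m.group("name"), "|", m.group("version"), "|", m.group("installed"))
```

Note that lines without a `[version ...]` tag (like "Windows Policy Checker 8.0.1") are skipped by this pattern, so a production extraction would need a fallback rule for them.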
We are using LDAP to log in to Splunk, but now we want to enable SSO. We have Splunk Enterprise 7.2.6. How can I do this? Thank you.
I am looking for the proper sourcetype to ingest the JBoss console.log. Below is the path to the log file: /opt/app/jboss/standalone/log/console.log. The TA does not define a sourcetype for this log file. Any help would be appreciated.
We have a prospective client interested in knowing what our reporting capabilities are, and I would like to pull a list of reports that Splunk ES already has pre-configured out of the box. We don't currently have Splunk installed, so I'm wondering if there is a public repository or page that has this information.
If so, what query would capture all of these notable events? The goal is to be able to create this report and schedule it as an email report so that our management knows which notable events were generated in the last 48 hours.
Can anyone help me understand why I am getting a time difference between _time and _indextime? The logs are sent via syslog from the source and are in CEF format:

    <Apr 9 02:00:01> <syslog-server name> <02:00:01,371> ERROR [EventLogManager] Updated logs Successfully CEF:|<cefVersion>|<vendor>|<product>|<version>|<id>|<id desc>|<severity id>|start=Apr 09 2020 01:00:01 end=Apr 09 2020 01:00:01 <............log msg>

As my logs are getting written to a file path, I have written an inputs.conf stored on the forwarder, which is pushed via the deployment server:

    [monitor:///<path>]
    disabled = <>
    sourcetype = <>
    index = <>

In props.conf:

    [<sourcetype>]
    TIME_PREFIX = \send\=
    TIME_FORMAT = %b %d %Y %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 20
    TZ = GMT

props.conf is placed on my SH and indexer. I am getting a 1-hour time difference. The logs are generated in the GMT timezone. Let me know if any further details are required. Thank you!
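A constant 1-hour gap between _time and _indextime usually points to a timezone mismatch between what the source actually writes and what the parser is told to assume. As a sketch of one common cause (whether this is the poster's actual issue depends on the source's real timezone), the snippet below parses the timestamp from the sample event both as GMT, per `TZ = GMT`, and as a hypothetical GMT+1 source:

```python
# How a timezone mismatch produces a fixed offset between the
# parsed event time (_time) and the arrival time (_indextime).
from datetime import datetime, timezone, timedelta

raw = "Apr 09 2020 01:00:01"   # timestamp as written in the log
fmt = "%b %d %Y %H:%M:%S"      # matches the TIME_FORMAT above

# Parsed assuming GMT (what TZ = GMT tells the parser to do):
as_gmt = datetime.strptime(raw, fmt).replace(tzinfo=timezone.utc)

# If the source actually wrote the timestamp in GMT+1, the true
# moment in UTC is one hour earlier than the GMT interpretation:
as_gmt_plus_1 = datetime.strptime(raw, fmt).replace(
    tzinfo=timezone(timedelta(hours=1))
)

print(as_gmt - as_gmt_plus_1)  # 1:00:00 -- the constant gap observed
```

If the gap is exactly one hour in every event, checking whether the source host (or an intermediate syslog relay) is really on GMT rather than GMT+1/DST is a reasonable first step.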
When saving a new connection in DB Connect running on Windows and trying to connect to a MSSQL Server database, I'm receiving the following error: "There was an error processing your request. It has been logged (ID 86d1511df962721b8)." Of course, the ID changes every time I receive this message. Has anyone had any success in resolving this issue?
Hi All, I need help getting the width adjusted for a panel. All 3 of my panels are equal width. Can I make one smaller and give the space to another one? Thanks, Jerin
Hi, I have 2 sources that log a lot, about 2 GB per day between just the two of them. What could I do to limit this? Is it possible to set some filters somewhere (and where?), or is there another way to make them send fewer logs?
Questions: 1) How much does a 5 GB/day license cost? 2) Where can it be bought? 3) What happens in Splunk when ingestion exceeds the 5 GB/day license limit? Does it stop logging? Does it keep logging but with restrictions for a certain number of days? Or nothing, and logging can even reach 10 GB?
Hi Splunk Community, I was using MySQL databases and DB Connect to ingest data into Splunk. Working great! If I use MongoDB, which app can I use to ingest data into Splunk? I am hoping the app will work similarly to DB Connect. This app has to be reliable and well supported too. Please advise. Thanks.
Hello Splunk community, I'm a newbie on Splunk, so this may be a basic question. Basically, I'm trying to make a pie chart containing all the processes currently running. I managed (via a PowerShell script) to generate a CSV file containing this:

    "Values","Count","Group","Name"
    "System.Collections.ArrayList","1","System.Collections.ObjectModel.Collection`1[System.Management.Automation.PSObject]","ApplicationFrameHost"
    "System.Collections.ArrayList","1","System.Collections.ObjectModel.Collection`1[System.Management.Automation.PSObject]","conhost"
    "System.Collections.ArrayList","3","System.Collections.ObjectModel.Collection`1[System.Management.Automation.PSObject]","csrss"
    "System.Collections.ArrayList","1","System.Collections.ObjectModel.Collection`1[System.Management.Automation.PSObject]","dllhost"
    ...

When forwarded, Splunk couldn't find fields associated with the file, and even when I tried to extract fields manually, Splunk confused the field names with the data. (Objective: a pie chart containing the name of each process and the number of its instances.)
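As a sanity check that the CSV itself is well-formed, it can be parsed outside Splunk and tallied into the Name/Count pairs a pie chart needs. This is a quick sketch (the long Group values are trimmed to "..." here for brevity; they are irrelevant to the tally):

```python
# Parse the PowerShell-generated CSV and tally Name -> Count,
# the pair of fields a pie chart would be built from.
import csv
import io

SAMPLE = '''"Values","Count","Group","Name"
"System.Collections.ArrayList","1","...","ApplicationFrameHost"
"System.Collections.ArrayList","3","...","csrss"
'''

counts = {}
for row in csv.DictReader(io.StringIO(SAMPLE)):
    counts[row["Name"]] = counts.get(row["Name"], 0) + int(row["Count"])

print(counts)  # {'ApplicationFrameHost': 1, 'csrss': 3}
```

If this parses cleanly, the CSV structure is fine and the problem is more likely on the ingestion side (e.g. the sourcetype not being treated as header-bearing CSV) than in the file itself.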
I read the following document but couldn't find any description: https://splunk.paloaltonetworks.com/compatibility.html Are the latest versions of the App and Add-on compatible (version 6.2)? And are they available on Splunk 8.x?