All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a dynamically populated dropdown. Depending on other input values, sometimes no values are returned. When that happens, the error message "Search produced no results" is displayed below the dropdown. How can I stop this error message from displaying?
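One common workaround (a sketch only; the index, sourcetype, field, and token names here are assumptions, not from the original post) is to make the populating search always return at least one row by appending a static placeholder entry, so the input never ends up with zero results:

```spl
index=example sourcetype=example $other_token$
| stats count by category
| fields category
| append [| makeresults | eval category="(none)" | fields category]
```

With a fallback row like "(none)" always present, the dropdown populates even when the real search matches nothing, and the "Search produced no results" message should not appear.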
I need to create an audit for AD changes and have followed all steps in https://support.logbinder.com/SuperchargerKB/50135/8-Install-Supercharger-with-Splunk-Light-and-the-Splunk-App-for-LOGbinder . Logs all show when I search index=main, but no results are found in the dashboard or in LOGbinder for Splunk AD changes. I tried some searches from past threads. Please see the attached results. Kindly assist with this. Thank you.
Hey there! I'm quite new to Splunk and am struggling again. What I'm trying to do is hide a column if every field in that column has a certain value. I've already searched a lot online and found several solutions that should work for me but don't. Can anybody help me out here?
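One common approach (a sketch with assumed index and field names; not tested against your data) is to count the distinct values in the column and expose the result as a flag, which a dashboard condition can then use to set or unset a token that shows or hides the column:

```spl
index=example sourcetype=example
| eventstats dc(status) as status_dc values(status) as status_val
| eval hide_status=if(status_dc=1 AND mvindex(status_val,0)="N/A", "true", "false")
```

In a Simple XML dashboard, a `<done>`/`<condition>` block on the search can read `hide_status` from the first result and toggle a token that a second, display-only search uses to include or exclude the column.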
I have to create something that looks like the view below in Splunk: I have something like the following, but how do I get a view like the one above? Could someone help me here?
Hello, I use the search below:

[| inputlookup host.csv | table host] `diskspace`
| fields FreeSpaceKB host
| eval host=upper(host)
| eval FreeSpace = FreeSpaceKB/1024
| eval FreeSpace = round(FreeSpace/1024,1)
| search host=$tok_filterhost$
| stats latest(FreeSpace) as FreeSpace by host
| table FreeSpace
| appendpipe [| stats count | eval FreeSpace="No event for this host" | where count = 0 | table FreeSpace ]

I use a color visualization in my panel. The problem is that when the appendpipe condition is true, the message displayed is "No event for this host GB" instead of "No event for this host". I tried to delete the number-format option and add the unit in the code instead, but I hit the same problem:

| eval FreeSpace=FreeSpace." GB"

How can I keep the format option and the color option of my single-value panel with the appendpipe subsearch? Thanks for your help.
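One possible workaround (a sketch, untested against your dashboard) is to remove the unit from the panel's number-format settings and append it in SPL before the appendpipe, so the fallback row is added afterwards and never receives the suffix:

```spl
| stats latest(FreeSpace) as FreeSpace by host
| eval FreeSpace=FreeSpace." GB"
| table FreeSpace
| appendpipe [| stats count | eval FreeSpace="No event for this host" | where count = 0 | table FreeSpace ]
```

The tradeoff is that the single value becomes a string once the unit is baked in, so range-based coloring may need to be adjusted to match on the resulting values.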
I have a provided SQL query which runs fine in the data lab and SQL Explorer and produces the output I want to use. However, when I create the input, the steps work up until the last point: when I select "Finish", the input gives an error.

There was an error processing your request. It has been logged (ID edb0cdb547c4f5d8).

Looking at the _internal index, the error is:

020-08-04 13:44:44.975 +1000 [dw-59 - POST /api/inputs] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: edb0cdb547c4f5d8
java.lang.RuntimeException: java.lang.NullPointerException
at com.splunk.dbx.server.util.ResultSetMetaDataUtil.isTableHavingSameNameColumns(ResultSetMetaDataUtil.java:117)
at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.create(InputServiceImpl.java:136)
at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.create(InputServiceImpl.java:38)
at com.splunk.dbx.server.api.resource.InputResource.createInput(InputResource.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650)
at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:34)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:45)
at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:39)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:241)
at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:455)
at io.dropwizard.jetty.BiDiGzipHandler.handle(BiDiGzipHandler.java:68)
at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:56)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:561)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:334)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:104)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:243)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:679)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:597)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
at com.splunk.dbx.server.util.ResultSetMetaDataUtil.isTableHavingSameNameColumns(ResultSetMetaDataUtil.java:106)
... 67 common frames omitted
Hi, I'm running on Red Hat 7.3 and my Splunk version is 7.3. The following edits were made to the /etc/security/limits.conf file inside the Splunk container:

root hard nofile 202400
root soft nofile 102400
splunk hard nofile 202400
splunk soft nofile 102400

The /etc/pam.d/su file was also edited to add:

session required pam_limits.so

But when I check the splunkd logs, I still see the default value of 65536. I have read a few other similar questions too, but still no luck. What did I miss here? Thanks in advance.
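As a quick check (a sketch of a standard verification, not a fix), splunkd logs the resource limits it actually picked up at startup, so the effective value can be confirmed from the internal index:

```spl
index=_internal sourcetype=splunkd ulimit "open files"
| table _time host _raw
```

One thing worth noting, hedged as a general observation about containers rather than your specific setup: limits.conf and pam_limits.so apply to login sessions, so a process started by a container entrypoint may instead inherit its ulimits from the container runtime (e.g. Docker's --ulimit option or daemon defaults).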
Hi, I am trying to pull logs from Nessus Professional, installed on an EC2 instance, into Splunk. I installed the Tenable Add-on for Splunk and the Tenable App for Splunk, and I am trying to configure an Account within the Tenable Add-on for Splunk. I am using Splunk 7.3.5. In the Add Account form, I see Tenable.io, Tenable.sc credentials, and Tenable.sc certificate in the Tenable Account Type drop-down list. I chose Tenable.sc credentials based on some documentation found online. Is that the correct selection? Also, for Address, I entered the IP address of Nessus Professional in the format 10.20.30.40. I did not specify any port such as 8834. I unselected the 'Verify SSL Support' checkbox. I provided the username and password of the service account created in Nessus Professional. There is no proxy, so I unchecked the 'Proxy Enable' checkbox. When I saved, I got an exception telling me to check the IP address, username, and password. Out of curiosity I also tried the Tenable.io account type, even though it is incorrect, providing the access key ID and secret access key for the user created in Nessus Professional in the 'Add Account' form. I am still getting the same exception. Can you please let me know what I am doing wrong? Which ports do I need to open for communication between the machine running my Splunk browser and the Nessus Professional machine? Also, what privileges does the Nessus Professional user need to have? And is there a better way to feed Nessus Professional logs into Splunk? Thanks a lot for your help.
I've created a text form input called 'username' to search for usernames in my dashboard panels, and I've set the token value to 'user_name'. Now I'm trying to add that token to a search string which filters out all the users with failed logins, but I'm not sure how to add the token to the search query. Does anyone know how to do this?
Hi fellow Splunkers, I want to create alerts with these conditions: an alert triggered when any of the VPNs goes down, and an alert triggered when someone brings down the tunnel. Thanks in advance.
Splunkers, I want to calculate uptime for my network. By this I mean I need uptime in hours, i.e. the time difference between two consecutive uptimes and downtimes. As in the image above, I want to calculate uptime and downtime, but not the total. For example, my network was up for two hours, then there was downtime for 15 minutes, then the network was up for 4 hours, and then it again experienced downtime for 10 minutes. This is what I want to achieve. How can I do this in SPL?
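One way to sketch this (assumed index, sourcetype, and a status field with values like "up"/"down"; untested) is to keep only the events where the status changes, then measure the time between consecutive changes with streamstats. At each change event, the state that just ended is the previous event's status, and its duration is the gap since the previous change:

```spl
index=network sourcetype=heartbeat
| sort 0 _time
| streamstats current=f window=1 last(status) as prev_status
| where isnull(prev_status) OR status!=prev_status
| streamstats current=f window=1 last(_time) as prev_change
| eval ended_state=prev_status
| eval duration_hours=round((_time-prev_change)/3600, 2)
| table _time ended_state duration_hours
```

This yields one row per up-period and per down-period with its individual duration, rather than a single total.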
What's the alternative when the link between an on-prem HF and Splunk Cloud goes down? How can we prevent loss of data during the interim? For syslog we already use a syslog server, so no issue on that part. However, what can we do for data from non-syslog sources such as UFs? There is an option for persistent queues, but it's not available for these input types: Monitor, Batch, File system change monitor, splunktcp (input from Splunk forwarders).
I have 2 tables and I'd like to join them. For example:

A table
str1
str2
str3

B table
str4 val1 oval1
str5 val2 oval2
str6 val3 oval3

Result: A + B table
str1 str4 val1 oval1
str1 str5 val2 oval2
str1 str6 val3 oval3
str2 str4 val1 oval1
str2 str5 val2 oval2
str2 str6 val3 oval3
str3 str4 val1 oval1
str3 str5 val2 oval2
str3 str6 val3 oval3

Thank you.
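Since the desired result pairs every row of A with every row of B (a full cross product), one sketch (the lookup file names are assumptions) adds a dummy key to both sides and joins with max=0 so each row matches all subsearch rows:

```spl
| inputlookup tableA.csv
| eval joiner=1
| join joiner max=0 [| inputlookup tableB.csv | eval joiner=1]
| fields - joiner
```

Note that cross products grow multiplicatively (3 × 3 = 9 rows here), so this approach is only practical for small tables.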
Hi, I have a query like the one below. Can I use something other than join?

index=hello sourcetype="logs A" source="C:\\football\ab*" OR source="C:\\Tennis\cd*" OR source="C:\\Cricket\eb*"
| rex (something)
| eval (something)
| join type=left [search sourcetype = "logs B" source="C:\\football\ab*" OR source="C:\\Tennis\cd*" OR source="C:\\Cricket\eb*" | rex (something) | eval (something)]
table
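A common alternative (a sketch; the shared key field common_id is an assumption, since the real correlation field isn't shown) is to search both sourcetypes at once and recombine the rows with stats, which avoids the subsearch row and time limits of join:

```spl
index=hello (sourcetype="logs A" OR sourcetype="logs B")
    (source="C:\\football\ab*" OR source="C:\\Tennis\cd*" OR source="C:\\Cricket\eb*")
| stats values(*) as * by common_id
```

The per-sourcetype rex/eval steps would need to be rewritten so they only act on the events they apply to (e.g. guarded with if(sourcetype="logs A", ...)).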
Hello, this is my first post, so I apologize if I'm lacking in some sort of post etiquette or other guidelines. I'm trying to execute a query over a database of logs, where different types of logs have different fields. I'd like my query to accomplish two things:

1) Search for logs of type A, and group results based on field 1 (integer field), field 2 (integer field), and field 3 (string field); the aggregation operator will be a count. I know how to accomplish step 1:

logType=A (fieldA=5* OR fieldA=4*) | stats count BY fieldA, fieldB, fieldC | sort -count +desc

The tricky part is completing step 2.

2) For each result in query 1 (our subsearch), search for all logs of type B such that field 4 (a string field in log type B that logs of type A do NOT contain) contains field 2 (cast to a string, as field 2 holds integers for logs of type A, and we are seeing if the text value of this integer is in field 4) and contains field 3. By "contains", I mean the literal String.contains() meaning. Field 4 will be a very long message stored in a string, and will contain the values stored in fields 2 and 3 of log type A. I'm searching for logs of type B that correspond to the specific logs of type A that were returned in my subsearch. Logs of type B do not contain fields 1, 2, or 3, so I need to extract these fields from logs of type A, then see if field 4 in logs of type B contains these values.

Is there any way to do this in one query? The first problem I've come across is that subsearches are (I believe) meant to return one result, whereas mine must return multiple results. Furthermore, subsearches are meant to add an extra parameter or narrow down your outer search, but the log type I'm searching over in my outer search doesn't contain the fields that my subsearch produces. Any help on this would be greatly appreciated!
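For what it's worth, an SPL subsearch can return multiple rows: each row becomes an OR-ed clause in the outer search, and a field literally named search is inserted verbatim, which allows wildcard "contains" matching. A sketch under that approach (field names follow the description above; untested):

```spl
logType=B
    [ search logType=A (fieldA=5* OR fieldA=4*)
      | stats count by fieldB fieldC
      | eval search="field4=*" . fieldB . "* field4=*" . fieldC . "*"
      | fields search ]
```

Each subsearch row expands to something like field4=*123* field4=*abc* (both conditions ANDed within a row, rows ORed together), so the outer search only keeps type-B events whose field 4 contains both values.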
Hello, Splunk is timing out after I try and use the export feature in UI.  There is quite a bit of data that needs to be exported.  Is there a way to keep it from timing out?
I am trying to get the Date (altering _time into the specific format shown below), the number of events (I am using stats count to count occurrences of "EXAMPLE" and renaming it Transactions), and the sum of a value from different events (I have to trim USD and quotes to make it register as a number). I can get the results separately, but when I try to get all three columns to show in one table, it will only give me the number of events. All fields come from the same log. I want it to look like this:

Date                Transactions                entryAmountDay
08-02-2020          7                           5000.00

What works separately:

source=example  "EXAMPLE" | stats count | rename count AS Transactions

source=example "EXAMPLE" | eval Date = strftime(_time, "%m-%d-%y") | fields - _time | eval entryAmount = trim(replace(entryAmount, "'USD", "")), eval entryAmount = trim(replace(entryAmount, "'", "")), eval entryAmount=trim(entryAmount) | stats sum(entryAmount) as entryAmountDay by Date

I have tried many different combinations and commands but can't get anything to work. Please help! Thank you.
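Since all fields come from the same events, a single stats call can produce both aggregations at once; a sketch (note that the cleanup is collapsed into nested replace calls, since repeating the eval keyword after a comma is not valid SPL):

```spl
source=example "EXAMPLE"
| eval Date=strftime(_time, "%m-%d-%y")
| eval entryAmount=trim(replace(replace(entryAmount, "'USD", ""), "'", ""))
| stats count as Transactions sum(entryAmount) as entryAmountDay by Date
```

Grouping by Date yields one row per day with both the event count and the daily sum, which matches the three-column table described above.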
Hi, I'm trying to perform a query in Splunk and I'm not sure if it's even possible. I have my query over data with a format like:

01 Jan aaa ...
02 Jan bbb ...
01 Jan ccc ...
02 Jan aaa ...

The query extracts the values "aaa", "bbb", "ccc" into a field and takes the date of the last appearance. The result is then displayed using a table showing the letters and the date. The problem is that the possible values of the three letters also include "ddd" and "eee" (and there could be more or fewer), which are not found by the query. So I would like to add "ddd" and "eee" to the table with the date "never" (or some similar value). Would this be possible?
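One sketch of this (the index, extraction regex, and value list are assumptions) appends the full list of expected values with makeresults, then takes the max time per value so letters with no events fall through to "never":

```spl
index=example
| rex field=_raw "^\d+ \w+ (?<letter>\w{3})"
| stats latest(_time) as last_seen by letter
| append [| makeresults
    | eval letter=split("aaa,bbb,ccc,ddd,eee", ",")
    | mvexpand letter
    | fields letter]
| stats max(last_seen) as last_seen by letter
| eval last_seen=if(isnull(last_seen), "never", strftime(last_seen, "%d %b"))
```

Keeping the expected values in a lookup file instead of a hard-coded split() would make the list easier to maintain as it grows.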
If the trend is zero, how do I avoid having a black background? I just want a grey background.
I have a DB output job sending around 6 to 8 million records to an MSSQL database daily. This job fails frequently; however, I don't see any valuable messages in the internal logs telling me why the job failed. Any clues as to why the job might be failing? Thanks, Lucas
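As a starting point for troubleshooting (a sketch of a generic DB Connect log check; the source pattern is an assumption and may differ by app version), the app's server logs in _internal often carry the underlying JDBC error even when the job UI does not:

```spl
index=_internal source=*splunk_app_db_connect* log_level=ERROR
| table _time source _raw
```

For jobs of this size, timeouts and batch-size limits on the MSSQL side are also worth ruling out once the actual error text is found.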