Hello, I'm running index=... | join commonfield [search index=...] | sistats count as nb, scheduled every minute on a low number of results, but nothing is written to the summary index, unlike other queries without a join. Is the syntax correct, or is summary indexing 'missing' data because the search is scheduled too frequently? There are no errors in scheduler.log. Thanks.
I have a query that compares two different time periods, which I do with an inner search ( | append [search <identical query>] ). But I've run into a curious problem: if I search over the same period of time in both the inner and outer searches, I get different results. So if I set the time picker (i.e., driving the outer query) to "Last week" and my inner query to earliest=-w@w latest=@w, they should produce identical results, since they are identical queries running over identical time periods in the past. Yet looking at the results:

Outer query (time picker = Last week): 152,286,377 records
Inner query (earliest=-w@w latest=@w): 144,081,130 records
If master nodes don't do any indexing, is there a different configuration that is recommended for master nodes? What would that configuration be?
Hello, I have the following regex from the Cisco ASA add-on's default transforms.conf:

[cisco_source_ipv4]
REGEX = \s+(?:from|for|src(?! user)) (?:(\S+):)[\w-]*?(\d{1,3}\.\d{1,3}.\d{1,3}.\d{1,3})(?:\/(\w+))?(?:\((?:([\S^\\]+)\\)?([\w\-_]+)\))?\s*\(?(\d{1,3}\.\d{1,3}.\d{1,3}.\d{1,3})?\/?(\d+)?\)?\s*(?:\((?:([\S^\\]+)\\)?([\w\-_]+)\))?
FORMAT = src_zone::$1 src_ip::$2 src_port::$3 src_nt_domain::$4 src_user::$5 src_translated_ip::$6 src_translated_port::$7 src_nt_domain::$8 src_user::$9

The issue is that if I try to run the regex from the UI, I get this error:

Error in 'SearchOperator:regex': The regex '\s+(?:from|for|src(?! user)) (?:(\S+):)[\w-]*?(\d{1,3}\.\d{1,3}.\d{1,3}.\d{1,3})(?:\/(\w+))?(?:\((?:([\S^\]+)\)?([\w\-_]+)\))?\s*\(?(\d{1,3}\.\d{1,3}.\d{1,3}.\d{1,3})?\/?(\d+)?\)?\s*(?:\((?:([\S^\]+)\)?([\w\-_]+)\))?' is invalid. Regex: missing closing parenthesis.

The add-on works fine, as does search-time field extraction, so the regex is obviously valid in transforms.conf but not in the UI with the regex command. Can someone help?
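The "missing closing parenthesis" error suggests one layer of backslash escaping is being consumed before the pattern reaches the regex engine: the error message shows [\S^\] where transforms.conf has [\S^\\], so the \] now escapes the closing bracket and the character class never terminates. A minimal Python sketch of that failure mode, using a small fragment of the pattern (Python's re is close enough to PCRE for this particular point):

```python
import re

# Fragment of the transforms.conf pattern: "\\" inside the character
# class matches one literal backslash, and the class is well-formed.
conf_fragment = r"(?:\((?:([\S^\\]+)\\)?([\w\-_]+)\))?"
re.compile(conf_fragment)  # compiles without complaint

# The same fragment as it appears in the UI error message: one layer of
# backslashes has been consumed, so "[\S^\]" no longer closes its class
# and compilation fails with an unbalanced-parenthesis style error.
ui_fragment = r"(?:\((?:([\S^\]+)\)?([\w\-_]+)\))?"
try:
    re.compile(ui_fragment)
    compiled = True
except re.error as exc:
    compiled = False
    print("rejected:", exc)

assert not compiled
```

The usual fix when pasting such a pattern into the regex or rex search commands is to double the backslashes again, so that the quoted search string delivers the intended \\ to the regex engine; worth verifying on your Splunk version.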
With the below query I am able to get data as in the first box below, and I need to convert it to the second box. For the Time field I am getting common values, and I need to merge and combine them as shown. Is there any way to achieve this? I've tried with values() but it is not working.

sourcetype=access_combined | eval action = if(isnull(action) OR action="", "unknown", action) | bin _time span=102h | eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S") | stats count as totals by action,Time | sort -Time,action
Hello, I have an alert that emails suspicious login attempts in the form of a table with client_ip, the number of different logins used, the list of logins used, continent, and country. Basically, the table is created by this search (time window 60 minutes):

index=webauth sourcetype=cas login!="audit:unknown" | eval login=lower(login) | stats dc(login) AS number, values(login) AS "logins list" by client_ip | iplocation allfields=true client_ip | fields - City,MetroCode,Region,lat,lon,Timezone | search number>1

Sample result:

client_ip number logins list Continent Country
192.168.0.6 3 foo bar baz Somewhere Here

We get many false positive alerts because some users make typos when they try to log in. I would like to clean up this table by removing any login that is not very different from the first one. If a result line lists these logins: myself, myselv, then the Levenshtein distance is 1, so I would like to ditch the line (i.e., number would fall from 2 to 1, and the result would be excluded). If a result line lists: myself, myselv, yourself, then the second login would be excluded, but the result should be kept in the final table because yourself is very different from myself. I hope that makes sense.
I've studied the solution https://community.splunk.com/t5/Splunk-Search/Is-there-any-way-to-compare-multivalue-fields-to-single-value/td-p/317189 for hours, but my result is so ugly that I can't believe it's the only solution:     index=webauth sourcetype=cas login!="audit:unknown" | eval login=lower(login) | fields client_ip,login | dedup client_ip,login | mvcombine login | eval n=mvcount(login), llogs=mvdedup(login) | search n>1 | iplocation allfields=true client_ip | fields - City,MetroCode,Region,lat,lon,Timezone,_raw,_time | mvexpand login | table * | map maxsearches=100 search=" | makeresults | eval login=\"$login$\", llogs=\"$llogs$\", number=\"$n$\" , Continent=\"$Continent$\" , Country=\"$Country$\" , client_ip=\"$client_ip$\" |makemv delim=\" \" llogs | mvexpand llogs |table *" | `ut_levenshtein(login,llogs)` | search ut_levenshtein>3 | fields - _time, llogs | mvcombine login | eval logins=mvdedup(login) | eval number=mvcount(logins) | fields - login | dedup client_ip,logins | table client_ip,number,logins,Continent,Country,ut_levenshtein       Any idea to design something better? Thanks
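For comparison, the filtering rule being asked for is straightforward outside SPL; here is a minimal Python sketch of the Levenshtein distance and the "drop near-duplicate logins" step (the function names are illustrative, and the threshold of 3 mirrors the ut_levenshtein>3 test in the search above):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def drop_probable_typos(logins, threshold=3):
    """Keep a login only if it is more than `threshold` edits away
    from every login already kept (illustrative helper)."""
    kept = []
    for login in logins:
        if all(levenshtein(login, k) > threshold for k in kept):
            kept.append(login)
    return kept

# "myselv" is one edit from "myself" and is dropped; "yourself" survives.
print(drop_probable_typos(["myself", "myselv", "yourself"]))
```

This keeps ["myself", "yourself"] for the example in the question, which is exactly the behavior the alert needs: the typo collapses, the genuinely different login survives, and the dc(login)-style count can then be recomputed over the kept values.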
(Item Id: 45) Container Name: Abc Admin Accounts (Container Id: 19) suid=1

I need to extract the container name and container ID from the above partial log using regex. Kindly help. Thanks in advance.
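A pattern along the following lines should work; here it is checked in Python against the sample line (the capture-group names are illustrative):

```python
import re

log = "(Item Id: 45) Container Name: Abc Admin Accounts (Container Id: 19) suid=1"

# Lazily capture the name up to the "(Container Id: ...)" parenthetical,
# then capture the numeric ID inside it.
pattern = (r"Container Name:\s+(?P<container_name>.+?)"
           r"\s+\(Container Id:\s+(?P<container_id>\d+)\)")
m = re.search(pattern, log)
print(m.group("container_name"))  # -> Abc Admin Accounts
print(m.group("container_id"))    # -> 19
```

The same PCRE-style pattern should carry over to Splunk's rex command, e.g. | rex "Container Name:\s+(?<container_name>.+?)\s+\(Container Id:\s+(?<container_id>\d+)\)", though that exact invocation is untested here.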
Hi, I've been trying to get the email trace for Office 365 Exchange using the add-on in the subject. No data is coming in under this sourcetype:

sourcetype: ms:o365:reporting:messagetrace

I checked ta_ms_o365_reporting_ms_o365_message_trace.log: no errors are seen, and no attempts to use the API are seen. I have the below configuration set for the input. I'm not sure about the "delay throttle" part; if there are default values I don't mind using them. What am I missing here?
I need an action for an incident responder to send a selected event's data via email. I can define notable actions, but they will be triggered automatically when a notable is created. How can I do this manually?
Hi, what is the best way to make sure your nodes are getting real-time updates if your app is updating all the time?

Step 1) Use the deployment server to copy files from the master node to the master-apps directory $SPLUNK_HOME/etc/master-apps
Step 2) The master-apps directory will use the "configuration bundle" to send out the updates?

If these steps are correct, how do you make sure steps 1 and 2 happen automatically? Thanks, Robert
Hello all, I have two search strings that pull information: one pulls all the blocked emails, and the second pulls the emails blocked due to the rule they were blocked on and the file name. I would like to combine these searches so that the table has the additional "File Name" field whenever the rule the email is being blocked on is a certain value.

Search 1: index=index_name sourcetype=sourcetype_name | stats count by _time, email_domain, rule_name, email_ID
Search 2: index=index_name sourcetype=sourcetype_name rule=hasFile file_name=* | table _time, email_ID, file

Note: both searches have a matching email_ID, and I've been trying to use that value to no avail. Thanks in advance for the assistance!
I have a line chart where I have set the following values for the y-axis:

interval: 25
min value: 0
max value: 100

So I should be getting a chart where the gridlines and scale are 0, 25, 50, 75, 100. But instead I am getting values on the y-axis of 40 and 80. I want the y-axis lines at 25, 50, 75, 100.
Hi, I am unable to 'edit role' or assign a new role to a user, although I do have an admin role. What could be the solution?
I want to extract two values from the below log message: TestUser as one field (featurename) and accounts_fetch as the scenario name, and then visualize the average requests by featurename + scenarioname.

"Successfully retrieved the account details for user: KL**19**19**19**19**11**11**11** with feature: TestUser, scenario: accounts_fetch"
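One candidate pattern, checked in Python against the sample message (the \w+ parts assume feature and scenario names contain no spaces, which holds for the example but is an assumption):

```python
import re

msg = ("Successfully retrieved the account details for user: "
       "KL**19**19**19**19**11**11**11** with feature: TestUser, "
       "scenario: accounts_fetch")

# Capture the word after "feature:" and the word after "scenario:".
pattern = r"feature:\s*(?P<featurename>\w+),\s*scenario:\s*(?P<scenarioname>\w+)"
m = re.search(pattern, msg)
print(m.group("featurename"))   # -> TestUser
print(m.group("scenarioname"))  # -> accounts_fetch
```

In SPL this would become something like | rex "feature:\s*(?<featurename>\w+),\s*scenario:\s*(?<scenarioname>\w+)" followed by stats avg(...) by featurename, scenarioname; the request-count field to average is not shown in the question, so that part is left open.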
@splunk @Anonymous In order to include percentage values in both dashboards and PDF exports, I am using the XML setting: <option name="charting.chart.showPercent">1</option>. I set up a weekly scheduled PDF export a couple of months ago. It used to work just fine, exporting all my pie charts to PDF including percentage values. I have not changed anything in the setup, but the PDF export no longer includes the pie charts at all. After playing around a bit with the XML settings, I found that the PDF export will only include the pie charts if I remove the setting mentioned above or replace it with: <option name="charting.chart.showPercent">0</option>. So I guess there must be a bug in the PDF creation logic.
Hello. Can you send me DB Connect v3.1.2, please? That version is not available on Splunkbase.
Hello, I am trying to create a connection to an Oracle DB, but on saving the connection, splunk_app_db_connect_server.log shows the error below:

2020-08-31 09:20:34.764 +0100 [dw-56 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: bd54ff303cdcc36b
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116)
    at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)

Splunk 8.0.5, DB Connect 3.3.1.

Can someone please suggest how to resolve this? Thank you.
Can one use Splunk Phantom for auto-remediation? What real-life use cases are applicable to Phantom?
I have the below command in Linux:

grep "login?" access.log access.log.1 | grep https | cut -d, -f3 | sed 's/"wafip"://g' | sort -n | uniq -c | sort -nr | head -100

I need to find the equivalent Splunk command. I know the index and host. Can somebody please help me with this?
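To translate, it helps to restate what the pipeline computes: keep lines containing both login? and https, take the third comma-separated field, strip the "wafip": prefix, and count the top 100 values. That logic, sketched in Python on made-up sample lines (the log layout shown is an assumption, since the real access.log format isn't given):

```python
from collections import Counter

# Hypothetical access-log lines mimicking the layout the pipeline implies:
# the third comma-separated field looks like "wafip":<value>.
lines = [
    '"GET /login?a=1","https","wafip":10.0.0.1,"x"',
    '"GET /login?b=2","https","wafip":10.0.0.1,"x"',
    '"GET /login?c=3","https","wafip":10.0.0.2,"x"',
    '"GET /health","https","wafip":10.0.0.3,"x"',   # no "login?" -> filtered out
]

counts = Counter()
for line in lines:
    if "login?" in line and "https" in line:         # the two greps
        field = line.split(",")[2]                   # cut -d, -f3
        counts[field.replace('"wafip":', "")] += 1   # sed 's/"wafip"://g'

top_100 = counts.most_common(100)                    # sort | uniq -c | sort -nr | head -100
print(top_100)
```

In SPL the same shape would be roughly index=... host=... "login?" https | rex "\"wafip\":(?<wafip>[^,]+)" | top limit=100 wafip, with the rex pattern adjusted to the real event format.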
Hi, I want to find the universal forwarders and which indexes they are sending data to. We have a command to list all the UFs; I need help on how to check/correlate the internal logs to determine which index each forwarder is sending data to.