All Topics

Hello, I am currently running Splunk Enterprise 8.0.4 on an AWS EC2 instance, which already has Python 3.7 configured and pip-20.2.2 installed. Since I am planning to package and bundle our existing search app and other conf files into a private app for deployment to Splunk Cloud, I am trying to install and run AppInspect beforehand. I am following the instructions here, under the "Install AppInspect" section, with the assumption that a virtualenv is created: https://dev.splunk.com/enterprise/docs/developapps/testvalidate/appinspect/splunkappinspectclitool/installappinspect/installappinspectonlinux/

So far, I have the AppInspect tarball downloaded and was able to successfully run the first of the following commands, which upgrades pip:

[sudo] pip3 install --upgrade pip
[sudo] pip3 install splunk-appinspect-latest.tar.gz

However, when I ran the second command to install splunk-appinspect, I got the following error:

/bin/bash: pip3: command not found

Does anyone have any suggestions on how I can proceed from here?
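A common cause: sudo resets PATH, so a pip3 that your user can see (for example one installed under ~/.local/bin) is not visible to root. The sketch below shows how to check, and how invoking pip as a module of an explicit interpreter avoids the PATH issue entirely; the virtualenv paths are illustrative assumptions:

```shell
# 'sudo pip3' commonly fails with "command not found" because sudo resets PATH
# (secure_path in /etc/sudoers), so a pip3 installed under your user's home
# (e.g. ~/.local/bin) is invisible to root. Check what the shell resolves:
command -v pip3 || echo "pip3 not on PATH for this shell"

# Invoking pip as a module of an explicit interpreter sidesteps PATH entirely:
python3 -m pip --version || echo "pip is not installed for python3"

# For AppInspect, a virtualenv avoids sudo altogether (paths are illustrative):
#   python3 -m venv ~/appinspect-venv
#   . ~/appinspect-venv/bin/activate
#   python3 -m pip install --upgrade pip
#   python3 -m pip install splunk-appinspect-latest.tar.gz
```

Inside the activated virtualenv, no sudo is needed and plain `pip install` works against the venv's own interpreter.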
Hi everyone, after a search with some eval and rex commands, I end up with a table like this:

ID         FIELD(1)  FIELD(2)  FIELD(3)  ...  FIELD(9)
id(1)      OK        Alert     Alert     ...  OK
id(2)      OK        OK        OK        ...  OK
id(3)      Alert     Alert     OK        ...  OK
...
id(10000)  OK        Alert     OK        ...  Alert

So I have a 10000x10 table where the first field is an id for the row, while the other fields can take only 2 values (OK or Alert).

I'd like to get a summary with respect to the ID field, like this:

          OK    ALERT
FIELD1
FIELD2
FIELD3
...
FIELD9

where each cell [FIELD(i), OK] holds the count of OKs in column FIELD(i) and each cell [FIELD(i), Alert] holds the count of Alerts in column FIELD(i); obviously the sum of each row has to equal 10000. If you have some ideas you are welcome, thx
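One way to build that summary is to unpivot the table and count values per field; a sketch, assuming the field names are literally FIELD1..FIELD9 (adapt to the real names):

```
<your search producing the 10000x10 table>
| untable ID FIELD VALUE
| chart count over FIELD by VALUE
```

untable turns each (ID, FIELDx, value) cell into its own row, and chart then counts OK and Alert per field name, so each output row sums to the number of IDs.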
I tried looking for 8 hours and no result found. I have an SQL database that serves no purpose other than to add data into Splunk for testing (it has an online form to get the data in). When I try to install DB Connect, I get this error:

"There was an error processing your request. It has been logged (ID 7122576f9fb1563d)."

When I telnet from Splunk to the SQL server I get the event below:

splunk@Splunk:~$ telnet 10.0.4.4 3306
Trying 10.0.4.4...
Connected to 10.0.4.4.
Escape character is '^]'.
[ 5.7.31-0ubuntu0.18.04.19>c^hD=-PGSj2;C@mysql_native_passwordxterm-256colorbob !#08S01Got packets out of order
Connection closed by foreign host.
splunk@Splunk:~$

So that seems to be working-ish. When I search on the error ID I get the following (see below). I'd love a hint, if not a solution, of where to look. Feel free to ask for more details if needed.

2020-08-25 16:37:05.217 +0200 [dw-57 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 7122576f9fb1563d
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116)
    at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)
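Not a definitive fix, but the MySQL-side "Got packets out of order" message often points at a protocol/SSL mismatch on the connection rather than a blocked port. If the DB Connect connection was created with SSL enabled against a server expecting plain connections (or vice versa), toggling that setting, or testing with an explicit JDBC URL, can help isolate it. The URL below is only an illustrative sketch; the database name and flags are assumptions to adapt:

```
jdbc:mysql://10.0.4.4:3306/<database>?useSSL=false&allowPublicKeyRetrieval=true
```

`useSSL` and `allowPublicKeyRetrieval` are standard MySQL Connector/J connection properties; whether they are appropriate here depends on the server's security requirements.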
So I'm getting the notice regarding small buckets on an index: 100% small buckets on one particular index. This index is a summary index that only gets a small volume of new records every day, so it makes sense that the buckets never get large before they're rolled to warm. For various reasons we want to keep this data separate from other indexes; mainly, this summary data will live forever, whereas other indexes are set for a limited retention period. The index is tiny: its current size is 8MB and it's holding summary info for the past 8 months, 7 small buckets so far this year. I have two questions:

1) Since this is a small index, do I have to worry about it only having small buckets?
2) Assuming having just small buckets in this particular index doesn't cause any major performance problem for the system overall, how do I turn off the alert for this one index?
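On question 1, the search-time overhead of small buckets is unlikely to matter for an 8MB index. To see the actual bucket layout before deciding, |dbinspect will list the buckets; a sketch assuming the index is named summary_idx:

```
| dbinspect index=summary_idx
| table bucketId, state, startEpoch, endEpoch, eventCount, sizeOnDiskMB
| sort - sizeOnDiskMB
```

This shows how many buckets exist, their states, and their on-disk sizes, which is useful evidence when deciding whether to tune the health alert or ignore it for this index.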
Hello, I have a log like the one below:

FEATURES_USING=[tokenValidatorInfo=false, requestValidationRequired=false, requestPayloadValidationRequired=false, responsePayloadValidationRequired=false, aopUsed=false, tibcoCommunicatorUsed=false, secretsSecured=false]

I want the result to look like the table below (split into 2 columns):

Column1                            Column2
tokenValidatorInfo                 false
requestValidationRequired          false
requestPayloadValidationRequired   false
...                                ...
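One way to split the pairs into two columns is to isolate the bracketed list, expand it to one pair per row, and extract the key and value; a sketch assuming the events contain FEATURES_USING exactly as shown:

```
<your search>
| rex field=_raw "FEATURES_USING=\[(?<features>[^\]]+)\]"
| eval features=split(features, ", ")
| mvexpand features
| rex field=features "^(?<Column1>[^=]+)=(?<Column2>.+)$"
| table Column1 Column2
```

mvexpand turns the multivalue list into one row per pair, and the second rex splits each pair at the first equals sign.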
Hi, I'm new to AppD. I'm getting lots of 404 errors in the App Server, and surprisingly they don't correlate with business transactions. How do I know which URL/transaction failed because of this? Thanks
Hi, are there any tools to visualize data lineage in Splunk? https://en.wikipedia.org/wiki/Data_lineage We would like to see what the original data was, how many stops it had, and the resulting data. duoms
I'm working on a dashboard in which I would like to compare data across two different time periods. (I posted a previous question here: https://community.splunk.com/t5/Splunk-Search/Compare-percentages-with-a-week-ago/m-p/513799#M144200) I would like to have two time pickers on my dashboard: the first for time period 1 and the second for time period 2. I have much of this worked out conceptually, but I don't see how to make the second time picker drive the inner query.

In its simplest form it would look something like this:

<base query> | append [search <base query> $timePicker2$] | <collate data>

The question is how to make that timePicker2 actually work. I have this working with just a dropdown that includes a handful of preset values like earliest=-169h@h latest=-168h@h to be "same hour last week", but if I wanted to make it more flexible with a time picker, I don't understand how to make that work.
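A time input exposes its range through $token.earliest$ and $token.latest$ rather than a single $token$, so the subsearch can consume those directly. A Simple XML sketch; the token name, label, and the <base query>/<collate data> placeholders are assumptions:

```
<input type="time" token="timePicker2">
  <label>Time period 2</label>
  <default>
    <earliest>-169h@h</earliest>
    <latest>-168h@h</latest>
  </default>
</input>

<!-- in the panel's search -->
<base query>
| append [search <base query> earliest=$timePicker2.earliest$ latest=$timePicker2.latest$]
| <collate data>
```

The outer search still uses the first (default) time picker, while the appended subsearch overrides its own earliest/latest with the second picker's tokens.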
Hi, I am trying to onboard a new syslog feed coming from a syslog-ng server, but the data is not indexing. I'm getting the below error in the internal logs on the SH:

ERROR TcpInputProc - Message rejected. Received unexpected message of size=369296128 bytes from src=xx.xx.xx.xx:xxxxx in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

Below is the path I have set for the incoming logs:

syslog-ng server > Universal Forwarder (TCP port) > Indexer

Below are the configurations set at the forwarder end:

inputs.conf
[tcp://xxxxx]
sourcetype = syslog
index = Index_name
disabled = false

outputs.conf
[tcpout]
defaultGroup = ABC
maxQueueSize = 7MB
useACK = true

[tcpout:ABC]
server = index_server1:42000, index_server2:42000, index_server3:42000
# SSL SETTINGS
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
sslPassword = xxxx
sslVerifyServerCert = true

After the issue, I tried to resolve it by setting the value of bucketRebuildMemoryHint, both to auto and manually, in indexes.conf, but it didn't work:

indexes.conf
[default]
bucketRebuildMemoryHint = 569366123

Can anyone please advise me on this? Please let me know if there is any information I missed sharing that might help in reaching a solution. Thanks in advance
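That rejection usually means raw, non-Splunk traffic arrived on a port the indexer treats as splunktcp. It is worth verifying that syslog-ng targets the UF's plain [tcp://] input port (not 42000), and that port 42000 on the indexers is a splunktcp/splunktcp-ssl input that only forwarders talk to. A sketch of the two sides, with the port numbers as assumptions:

```
# inputs.conf on the Universal Forwarder: raw TCP input for syslog-ng traffic
[tcp://514]
sourcetype = syslog
index = index_name
disabled = false

# inputs.conf on the indexers: Splunk-to-Splunk (forwarder) traffic only
[splunktcp-ssl:42000]
disabled = false
```

bucketRebuildMemoryHint controls bucket rebuild memory and is unrelated to TcpInputProc message-size rejections, so reverting that change is probably safe.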
Hello Everyone! I have a field (FieldA) which contains multiple URLs together. I would like to have a new field (FieldB) with the list of the last part of all the URLs.

FieldA: https://...../...../..../123994https://.../....../....../....../123441 https://.../....../....../....../133456

FieldB: 123994 123441 133456

Currently I am using the below query for extraction, but I am only getting '133456', not the list of all the values.

Query: | rex field=FieldA "\/(?<FieldB>\w+)$"

Can you please help me with an expression for the desired output, or is there a better way of doing the same?
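By default rex keeps only one match; max_match=0 captures all of them into FieldB as a multivalue field. A sketch assuming the IDs are the digit runs terminating each URL:

```
<your search>
| rex field=FieldA max_match=0 "/(?<FieldB>\d+)(?=https?:|\s|$)"
```

The lookahead ends each match where the next URL begins (the sample shows URLs concatenated without separators) or at whitespace or end of field, so FieldB ends up with all three values.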
How do I search for an exception in Splunk which didn't occur in the past?
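One common pattern is to collect the exception types seen recently and subtract those already present in a historical baseline. Everything here (index, sourcetype, and the exception_type field extraction) is an assumption to adapt to the actual data:

```
index=app sourcetype=app_logs "Exception" earliest=-24h
| stats count by exception_type
| search NOT
    [ search index=app sourcetype=app_logs "Exception" earliest=-30d latest=-24h
      | stats count by exception_type
      | fields exception_type ]
```

The subsearch returns the baseline exception types, and the outer NOT keeps only types appearing in the last 24 hours that were never seen in the prior 30 days.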
I have a Splunk query on Palo Alto data (index=idx_paloalto), something like this:

index=idx_paloalto sourcetype=pan:traffic app:subcategory=encrypted-tunnel OR app:subcategory=gaming OR app:subcategory=proxy OR app:subcategory=remote-access NOT(application=ssl OR app:subcategory=storage-backup OR app:subcategory=email)
| search action=allowed bytes>=10000000
| eval user=mvindex(split(user,"\\"),-1)
| table app:subcategory generated_time user src_ip application src_zone dest_zone action bytes_in bytes_out bytes
| sort 0 -bytes

Result:

app:subcategory   generated_time   username  src_ip        application  src_zone  dest_zone  action   bytes_in    bytes_out  bytes
encrypted-tunnel  8/25/2020 11:19  user123   10.24.144.81  ssh          GDC-ENET  ENET       allowed  3649914812  167157295  3817072107
encrypted-tunnel  8/25/2020 6:16   user546   10.21.132.48  ssh          SVS-In    SVS-In     allowed  259262655   871766     260134421

Then another query on Active Directory data (index=idx_msad), something like this:

index=idx_msad sourcetype=ActiveDirectory
| eval username = sAMAccountName
| dedup username
| table username displayName mail
| sort -username

Result:

username  displayName    mail
user123   Tommy Lee      tommy.lee@domain.com
user546   Richard White  richard.white@domain.com

What I need is to look up the username from index idx_msad and get additional details like the displayName and mail into my Palo Alto query, getting a result something like this:

app:subcategory   generated_time   username  displayName    mail                      src_ip        application  src_zone  dest_zone  action   bytes_in    bytes_out  bytes
encrypted-tunnel  8/25/2020 11:19  user123   Tommy Lee      tommy.lee@domain.com      10.24.144.81  ssh          GDC-ENET  ENET       allowed  3649914812  167157295  3817072107
encrypted-tunnel  8/25/2020 6:16   user546   Richard White  richard.white@domain.com  10.21.132.48  ssh          SVS-In    SVS-In     allowed  259262655   871766     260134421
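Since the AD attributes change rarely, a saved lookup is usually the cleaner long-term option, but the quickest sketch is an inline join on a shared username field (one possible approach, with field names following the two searches above):

```
index=idx_paloalto sourcetype=pan:traffic action=allowed bytes>=10000000
| eval username=mvindex(split(user,"\\"),-1)
| join type=left username
    [ search index=idx_msad sourcetype=ActiveDirectory
      | eval username=sAMAccountName
      | dedup username
      | fields username displayName mail ]
| table app:subcategory generated_time username displayName mail src_ip application src_zone dest_zone action bytes_in bytes_out bytes
| sort 0 -bytes
```

type=left keeps traffic rows even when no AD match exists; the app:subcategory filters from the original search can be kept in the first line unchanged.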
Hello, we need to find the process consuming the highest CPU on a Windows machine, not the total highest CPU. Please help with how to implement this. Is PercentProcessorTime the field to consider for the Splunk query, or how can we calculate it?

[WMI:ProcessesCPU]
interval = 60
wql = SELECT Name, PercentProcessorTime, PercentPrivilegedTime, PercentUserTime, ThreadCount FROM Win32_PerfFormattedData_PerfProc_Process WHERE PercentProcessorTime>0
disabled = 0

The query below is not giving the exact output; it's giving the sum of all processes, above 100. We need to find the process which uses the highest CPU:

index="index1" host=windows2 source="WMI:ProcessesCPU"
| WHERE NOT Name="_Total"
| WHERE NOT Name="System"
| WHERE NOT Name="Idle"
| streamstats dc(_time) as distinct_times
| head (distinct_times == 1)
| stats latest(PercentProcessorTime) as CPU% by Name
| sort -ProcessorTime
| eval AlertStatus=if('CPU%'> 90, "Alert", "Ignore")
| search AlertStatus="Alert"
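Yes, PercentProcessorTime is the right field from that WMI input. To get the single hottest process rather than a sum, take the latest sample per process name and keep only the top row; a sketch against the input above (the 90% threshold is carried over from the original query):

```
index="index1" host=windows2 source="WMI:ProcessesCPU"
| search NOT Name IN ("_Total", "System", "Idle")
| stats latest(PercentProcessorTime) as pct_cpu by Name
| sort - pct_cpu
| head 1
| eval AlertStatus=if(pct_cpu > 90, "Alert", "Ignore")
```

Note that the original `| sort -ProcessorTime` references a field that no longer exists after the stats rename, which is one reason the ordering looked wrong.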
I have a clustered environment, but I am still getting duplicate events in Splunk ITSI indexes. Please give some recommendations.
Hi, how can I pull data from SharePoint into Splunk? What are the various options available to pull data? Any suggestions? Thanks
Versions used: Splunk 6.2.2, Splunk Forwarder 6.2.2. Problem: under Indexes, "current size / event count / latest event" all show that data is present, but the "Data Summary" in the Search & Reporting app cannot display Hosts / Sources / Sourcetypes. Could you tell me where the problem lies? Thanks. As shown in the screenshot below:
Hello, I have a log like the one below:

FEATURES_USING=[tokenValidatorInfo=false, requestValidationRequired=false, requestPayloadValidationRequired=false, responsePayloadValidationRequired=false, aopUsed=false, tibcoCommunicatorUsed=false, secretsSecured=false]

I want the result to look like below:

tokenValidatorInfo=false
requestValidationRequired=false
requestPayloadValidationRequired=false
responsePayloadValidationRequired=false
aopUsed=false
tibcoCommunicatorUsed=false
secretsSecured=false
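To list each key=value pair on its own row, isolate the bracketed list, split it on the comma separator, and expand; a sketch assuming the events match the sample:

```
<your search>
| rex field=_raw "FEATURES_USING=\[(?<features>[^\]]+)\]"
| eval features=split(features, ", ")
| mvexpand features
| table features
```

After mvexpand, each pair such as tokenValidatorInfo=false becomes its own result row, matching the desired output.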
Hi Team, how do I identify the management node or master node in an existing distributed UBA deployment (7-node, 10-node, or 20-node)? https://docs.splunk.com/Documentation/UBA/5.0.3/Install/TSUBAServicesNodes doesn't have details about this.
Hi Everyone! I have a use case where I need to compare daily reports and set up an alert on the deltas. The use case is: hosts that were reporting yesterday, but not today. I have the search as this:

index=os source="daily reporting hosts" reporting=yes | table host, ip | dedup host

I'm using the set command, NOT, and passing time parameters, but I'm not getting the right result.

1st logic:
| set diff [search index=os source="daily reporting hosts" reporting=yes earliest=-3d latest=-2d | table host, ip | dedup host] [search index=os source="daily reporting hosts" reporting=yes earliest=-2d latest=-1d | table host, ip | dedup host] | table host

2nd logic:
[search index=os source="daily reporting hosts" reporting=yes earliest=-3d latest=-2d | table host, ip | dedup host] NOT [search index=os source="daily reporting hosts" reporting=yes earliest=-2d latest=-1d | table host, ip | dedup host] | table host

Help me if you have any suggestions on how to better deal with this use case, or any change to the query. TIA.
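An alternative to set/NOT that is often easier to debug: search both days at once, compute each host's most recent report, and keep hosts whose last report predates today. A sketch assuming source="daily reporting hosts" matches the data:

```
index=os source="daily reporting hosts" reporting=yes earliest=-2d@d latest=now
| stats max(_time) as last_seen by host, ip
| where last_seen < relative_time(now(), "@d")
| convert ctime(last_seen)
| table host, ip, last_seen
```

Because stats runs over the whole two-day window, this avoids subsearch result limits that can silently truncate set/append approaches on large host counts.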
Currently I'm trying to utilise the Windows Infrastructure app to monitor group changes. However, when I go to the dropdown menus, the values are severely limited, yet when I run the search by itself I get many results. Is there a reason for this? Is there a way I can get the app to more accurately reflect the search results?