All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, is there any guide I can follow for a Splunk app and Splunk Connect with a Hyperledger network? Thank you. Regards,
Hello all, I have an add-on with a custom search command, and I want to know how I can push the results of the custom search command into 'my_index', and how I can associate an index with a sourcetype. Please help me. Thank you.
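One common approach (a sketch, not something stated in the post) is to pipe the custom command's output into the collect command, which writes the results into an existing index and lets you set the sourcetype at collection time. The names my_index, my_sourcetype, and mycustomcommand below are the poster's or hypothetical names:

```
| mycustomcommand
| collect index=my_index sourcetype=my_sourcetype
```

Note that my_index must already exist (defined in indexes.conf); collect does not create indexes. The index-to-sourcetype association normally happens per input stanza or per collect call rather than globally.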
Hello, I'm new on Reddit and I'd like a little help; I will try to be as clear as possible. I have two pfSense 2.4.5 firewalls (one PFWAN and one PFLAN), and I want to receive all the syslog messages from these two firewalls on my Splunk server on my LAN. My architecture is the following:

- PFWAN: WAN interface: Internet address (let's say 99.99.99.99 to simplify); LAN interface: 10.10.1.2
- PFLAN: WAN interface: 10.10.1.1; LAN interface: 10.10.10.1
- My Splunk server: 10.10.10.30

On both pfSense boxes I enabled remote syslog to my server 10.10.10.30 (and enabled listening on Splunk). I currently receive the logs from PFLAN perfectly, but I have a problem receiving the logs from PFWAN. My firewall logs from external sources (like src=66.66.66.66 dst=99.99.99.99 port=445) arrive fine at the WAN interface of PFLAN, but even though the rule allows it, Splunk doesn't receive these logs. Instead, I just see logs with src=10.10.1.2 dst=10.10.1.1 port=514. When I capture packets on 10.10.1.2 I can see the logs from external sources; when I capture on 10.10.10.1 I only see src=10.10.1.2 dst=10.10.1.1 port=514 and the external logs are gone. I tried changing the port to 5514 and it behaved the same way. Could anyone help me with this topic, please? Thanking you in advance,
I have this query: search index="paloaltologs" user="*UserName" | table _time, user, url, action. However, it doesn't return the desired data, which is a table of the following: EventDate / username / URL Visited / Allowed. The query seems to return everything except the traffic that I want to view, which is simply internet history and not internal network traffic. Please help!
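One way to narrow the results to web-browsing events (a sketch; it assumes the Palo Alto Networks Add-on sourcetypes are in use, where URL-filtering events usually arrive as sourcetype pan:threat and carry a url field — verify the actual sourcetype and field names in your environment):

```
index="paloaltologs" sourcetype="pan:threat" user="*UserName" url=*
| table _time, user, url, action
```

Filtering on url=* drops events that have no URL at all, which removes most internal network traffic from the table.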
Currently I have a search that converts the time value to a formatted string: | rename _time as Time_CST | fieldformat Time_CST=strftime(Time_CST,"%x %X"). Raw timestamp format: 2020-04-10 15:28:30.333 (UTC). After adding the fieldformat: 04/10/20 10:28:30 (CST). All the timings are in the UTC and CST zones. Please help me convert this to the IST time zone. How can I achieve it?
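Since _time is stored as a UTC epoch, one workaround (a sketch; it hard-codes the IST offset of UTC+5:30, i.e. 19800 seconds, which is safe only because IST has no daylight-saving changes) is to add the offset before formatting:

```
| eval Time_IST=strftime(_time + 19800, "%x %X")
```

Alternatively, setting the user's time zone preference to Asia/Kolkata under Account Settings renders _time in IST everywhere without any SPL changes.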
In a bar chart, when the number of results increases, the view gets clumsy. Instead of increasing the chart height, how can I provide a panel slider for the bar chart? For example, consider the below scenario:

| makeresults | eval team_name="team1,team2,team3,team4,team5,team6,team7,team8,team9" | makemv team_name delim="," | mvexpand team_name | eval alpha=2, beta=4 | streamstats count as flag | eval alpha=alpha*flag,beta=beta*flag | table team_name alpha beta

When the results increase, instead of showing the team names on the y-axis, the chart collapses them into a common label "team_names" (shown in pic2). Since my search returns a different number of results each time, increasing the height isn't the right solution, so I want to introduce a vertical page slider so that, regardless of the count, the details are shown and the visualization isn't impacted. Please suggest your thoughts.

| makeresults | eval team_name="team1,team2,team3,team4,team5,team6,team7,team8,team9,team10,team11,team12,team13,team14,team15" | makemv team_name delim="," | mvexpand team_name | eval alpha=2, beta=4 | streamstats count as flag | eval alpha=alpha*flag,beta=beta*flag | table team_name alpha beta
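A commonly used workaround (a sketch, not an official chart option) is to give the chart a large fixed height and cap the panel body with CSS so the panel itself scrolls. This is done by embedding a style override in the Simple XML dashboard; the panel id scrolling_panel is a hypothetical name, and the CSS class names may vary across Splunk versions:

```xml
<row>
  <panel id="scrolling_panel">
    <html depends="$hidden$">
      <style>
        /* cap the panel body and let it scroll vertically */
        #scrolling_panel .panel-body {
          max-height: 400px;
          overflow-y: auto;
        }
      </style>
    </html>
    <chart>
      <search>
        <query>| makeresults | eval team_name="team1,team2,team3" | makemv team_name delim="," | mvexpand team_name | eval alpha=2 | table team_name alpha</query>
      </search>
      <option name="charting.chart">bar</option>
      <option name="height">900</option>
    </chart>
  </panel>
</row>
```

The tall height option keeps every team label on the y-axis, while the scrollbar keeps the visible panel compact.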
Hello, can we get support for the Endpoint data model? This is kind of expected for an EDR tool. Thanks
Why am I getting this error: "Add-on Installation Failed. Application does not exist: TA-cmdReporter." Screenshot attached.
We recently added Exchange 2016 to our Exchange environment and moved all mailboxes/public folders to it. We have a universal forwarder installed on it, and we are getting far more security events forwarded to Splunk than from the old Exchange 2010 server (from 4k per hour to 191k). 98% of the events from the security log of the Exchange 2016 server are Event Codes 4624, 4634, and 4672. Should I be blacklisting at least a portion of these codes so I can drop the volume down to a reasonable number? This traffic is pushing me over my license limit and I need to address it. The audit policies and blacklists on the Exchange 2010 and 2016 servers seem to be the same, so I am not sure why the new server is so much noisier.
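If you decide to drop the noisy codes, the universal forwarder can filter them at the input with an event-code blacklist in inputs.conf (a sketch; verify which codes you can afford to lose — 4624/4634 are logon/logoff and 4672 is a special-privileges logon, which security teams sometimes want to keep):

```
[WinEventLog://Security]
disabled = 0
# drop logoff (4634) and special-privilege logon (4672) events before forwarding
blacklist1 = EventCode="4634|4672"
```

Blacklisting at the forwarder means the events never leave the Exchange server, so they do not count against the license.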
Hey there, folks! I can't believe I'm stuck on something that could be pretty simple. I have a timechart with span=1d working well. However, I can't seem to get the x-axis labels (tick marks) to show Sundays instead of Mondays. The weekly tick-mark interval is ideal; I just need the ticks to be Sundays instead. Is there a way?

|inputlookup test.csv | replace N/A WITH N IN 2B_FLAG | replace N WITH 2C, Y WITH 2B IN 2B_FLAG | eval NewTime=strptime(SHP_CREATED_DTM,"%d-%b-%y") | where NewTime>=1577836800 | eval _time=NewTime | timechart span=1d sum(SHP_VOL) by 2B_FLAG | sort _time
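I'm not aware of a chart option that moves the weekly tick from Monday to Sunday, but if weekly granularity is acceptable, a common workaround is to snap each event to the previous Sunday yourself using the @w0 time modifier (w0 = Sunday) so every bucket boundary, and therefore every tick, lands on a Sunday. A sketch built on the post's search:

```
| inputlookup test.csv
| eval NewTime=strptime(SHP_CREATED_DTM,"%d-%b-%y")
| where NewTime>=1577836800
| eval _time=relative_time(NewTime, "@w0")
| timechart span=1w sum(SHP_VOL) by 2B_FLAG
```

This trades the daily span for Sunday-aligned weekly buckets, so it only helps if the ticks matter more than day-level detail.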
Hello, I deployed a free trial Splunk Cloud instance to learn how to onboard logs into Splunk. I tried for hours but I am still unable to onboard logs. Here is what I did:

Spun up a Splunk Cloud instance (pretty straightforward). Downloaded the Splunk Universal Forwarder (pretty straightforward). Installed the universal forwarder on my local Windows machine:
1. Unchecked Splunk on-prem, as this is a cloud instance.
2. It asked me to create a username and password; I created some throwaway login details and I don't know what they are for.
3. I chose a local installation, not network or domain.
4. It asked which logs I need; I chose all except AD logs because my machine is local.
5. It asked for the location of the deployment server and port. I entered my deployment server and left the port blank, as it defaults to 8089.
6. I wasn't asked for any receiver details here, although in YouTube videos I saw it asking others for receiver details.
7. I clicked the install button and the installation was successful.

Back in the Splunk Cloud instance, I went to Data Inputs, chose Windows Events, and added my workstation hostname there (it is displayed there). I chose to add to the index main. It said "all done, start searching." I tried searching and nothing comes up for index=main or host=myhostname.

I went to the forwarding and receiving section, and there is only an option for configuring forwarding; there are no receiving options. Also, on my Windows machine I went to C:\Program Files\SplunkUniversalForwarder\etc\system\local and there is no outputs.conf file there. There are deploymentclient.conf, authentication.conf, server.conf, and inputs.conf files, but no outputs.conf. Can anyone tell me what I have done wrong? Why am I not able to onboard my logs? I also temporarily disabled my firewall to check whether it was blocking traffic, but that's not the case, and I am able to telnet to the Splunk Cloud instance.
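For Splunk Cloud, the usual fix is not to write outputs.conf by hand: the cloud instance provides a downloadable "universal forwarder credentials" app that contains the outputs.conf plus the certificates the cloud indexers require, and installing that app into the forwarder's etc/apps directory is what makes data flow. For orientation only, a hand-written outputs.conf would look roughly like this (a sketch; the hostname pattern and port 9997 are assumptions, and plain forwarding like this will generally not work for Splunk Cloud stacks that require the credentials package):

```
[tcpout]
defaultGroup = cloud_indexers

[tcpout:cloud_indexers]
server = inputs.<your_stack>.splunkcloud.com:9997
```

This also explains why no outputs.conf exists under etc/system/local: the installer never created one because no receiver was configured.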
I've created an alert with a throttle, but it appears that the actions are not honoring the throttle. Does the action run regardless of the throttle?
Has anyone had issues with the whois command from the CentralOps app (https://splunkbase.splunk.com/app/3506/)? The dashboard form works fine, but when I enter the SPL command | centralopswhois output=fields limit=2 google.com it returns nothing. I'm running this from a Splunk Cloud instance.
I'd like to parse and index JSON data coming from MQTT. Let's say that (for now) it is a simple time-value JSON: {"time": "2020-04-07 16:30:00", "value": 40}

I've installed the MQTT Modular Input, cloned the default "_json" source type, and named the clone "simple_json". The only thing I changed was setting "Timestamp fields" to "time". I've added a new MQTT data input:

- Stanza Name: simple_json_mqtt
- Activation Key: valid key for trial version
- Data Output: STDOUT
- Topic Name: simplejson/1
- Broker Host: mqtt-broker (name of the Docker image running the Mosquitto broker)
- Broker Port: 1883 (no security)
- Client ID: simplejsonmqtt
- QOS: 1
- Set sourcetype: From list
- Select source type from list: simple_json
(other fields left blank/default)

Now I'm sending a single message (using MQTTBox): {"time": "2020-04-07 16:30:00", "value": 40}

In splunk/data/var/log/splunk/splunkd.log I can see: 04-07-2020 15:17:07.800 +0000 ERROR JsonLineBreaker - JSON StreamId:14709566222301315061 had parsing error:Unexpected character while looking for value: 'T' - data_source="mqtt://simple_json_mqtt", data_host="splunk", data_sourcetype="simple_json"

A search for sourcetype="simple_json" returns no results.

Let's try with empty lines before and after the JSON: {"time": "2020-04-07 16:31:00", "value": 41}

In the log: 04-07-2020 15:19:34.655 +0000 ERROR JsonLineBreaker - JSON StreamId:14709566222301315061 had parsing error:Unexpected character while looking for value: 'T' - data_source="mqtt://simple_json_mqtt", data_host="splunk", data_sourcetype="simple_json"

This time a search for sourcetype="simple_json" returns: {"time": "2020-04-07 16:31:00", "value": 41}

OK, now let's try to send two "events" in one MQTT message (with an empty line at the end):
{"time": "2020-04-07 16:32:00", "value": 42}
{"time": "2020-04-07 16:33:00", "value": 43}

In the log: 04-07-2020 15:23:21.951 +0000 ERROR JsonLineBreaker - JSON StreamId:14709566222301315061 had parsing error:Unexpected character while looking for value: 'T' - data_source="mqtt://simple_json_mqtt", data_host="splunk", data_sourcetype="simple_json"

A search for sourcetype="simple_json" returns: {"time": "2020-04-07 16:33:00", "value": 43} and {"time": "2020-04-07 16:31:00", "value": 41}

So I guess there is some kind of problem with the LINE_BREAKER setting in the source type (by default set to ([\r\n]+)). In the real-world scenario, I won't be able to control the format of the JSON messages put on the MQTT topic:
- order of fields
- existence of fields (let's say that "time" and "value" will always be there, but other objects/arrays/simple fields may also appear)
- line breaks (LINE_BREAKER)

Is it even possible to configure an input type / source type to be able to parse "anything"?
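A props.conf sketch that often handles one-JSON-object-per-line payloads regardless of field order (assumptions: each event is a single line, and the stanza name matches the cloned sourcetype; INDEXED_EXTRACTIONS = json parses arbitrary JSON fields at index time, so extra objects/arrays in future messages are picked up automatically):

```
[simple_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIMESTAMP_FIELDS = time
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

KV_MODE = none avoids double-extracting the same fields at search time when indexed extractions are enabled.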
I use TIME_PREFIX and TIME_FORMAT to recognize the timestamp of my logs. There is a field named timezone; it is the timezone of the logs. This value depends on the system that generated the logs. It may be timezone=-0400, timezone=+0000, etc., depending on the incoming data. How can I set the timezone so that _time adjusts correctly? For example: TIMESTAMP=2020/04/10 08:20:50.370, timezone=-0400. The local timezone of my Splunk server is +0800. How do I set the timezone so that 2020/04/10 08:20:50.370 is converted to my local time 2020/04/10 20:20:50.370? When I search my data, I want the time (_time) to be shown in my local time.
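If the offset literally follows the timestamp in the raw event, TIME_FORMAT can consume it with the %z conversion, and _time is then adjusted automatically; rendering in the viewer's local time is handled by the user's time zone preference. A props.conf sketch (assumptions: the sourcetype name is a placeholder, and the raw text looks like "2020/04/10 08:20:50.370 timezone=-0400" with the offset adjacent to the timestamp):

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%3N timezone=%z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

If the timezone field is not adjacent to the timestamp in the event, %z cannot reach it, and a per-host or per-sourcetype TZ setting in props.conf is the usual fallback.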
When I create a correlation search, it triggers adaptive response actions. But the search result is very large, so the action runs for a long time, and after the action has run for 5 minutes it stops. I don't know why the action stops before the search results are processed completely. How can I remove the 5-minute limit?
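One likely cause (an assumption worth verifying against your logs): each alert/adaptive response action has an execution time limit controlled by the maxtime setting in alert_actions.conf, which for many actions defaults to 5 minutes — matching the behavior described. A sketch of raising it, where myaction is a placeholder for the actual action's stanza name:

```
[myaction]
# allow the action to run up to 1 hour instead of the default limit
maxtime = 1h
```

Place the override in the relevant app's local/alert_actions.conf and restart or reload for it to take effect.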
Hi, please help! I've been struggling to configure a Splunk DB Connect connection to an MS SQL 2008 DB. I suspect it's a JVM / JDBC driver issue, but I just cannot win this one. When trying to save the connection using the UI, I get the following message: "There was an error processing your request. It has been logged (ID 239a7f9b0a5bb917)." The ID corresponds to the stack trace in the log, shown below.

Installation:
- CentOS Linux 7 (Core)
- Single-instance Splunk Enterprise, version 8.0.2
- Splunk DB Connect 3.3.0
- OpenJDK 1.8.0 64-Bit Server VM (build 25.242-b08, mixed mode) (also tried openjdk-11.0.6.10-1.el7_7.x86_64)

Tried these JDBC drivers downloaded from Microsoft:
- sqljdbc_6.2/enu/mssql-jdbc-6.2.2.jre8.jar
- sqljdbc_4.2/enu/jre8/sqljdbc42.jar

I've tried setting the log level to DEBUG using the UI, but it gives an error. I've set the log-level stanzas but am unable to see any DEBUG-level logs:

local/dbx_settings.conf
[loglevel]
connector = DEBUG
processor = DEBUG

and also local/dbx_logging.conf
[logger_dbx]
level=DEBUG
[logger_splunk]
level=DEBUG

This error is logged (splunk/var/log/splunk/splunk_app_db_connect_server.log): [dw-56 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 239a7f9b0a5bb917 java.lang.NullPointerException: null at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50) at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44) at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159) at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116) at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604) at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:500) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) at java.lang.Thread.run(Thread.java:748)
Hello! I was able to configure the Splunk Add-on for ServiceNow to extract data from the standard ServiceNow tables using just the base URL in the configuration: https://<instance_name>.service-now.com/

I have an additional need, which is to sync results from a custom table rather than a standard one. The endpoint is set up and has the URL: https://<instance_name>.service-now.com/api/<name_space>/<api_name>/<resource_name>

If I access that endpoint via a browser, it returns the results, but if I try to configure it in the add-on, it doesn't work. I checked the logs and I see that the add-on is appending /api/now/table/ to the URL, effectively converting the above URL into the following: https://<instance_name>.service-now.com/api/<name_space>/<api_name>/api/now/table/<resource_name>

What I would like to know is whether the add-on is compatible with scripted REST API URIs (info on the ServiceNow website). Thanks! Andrew
Running the search query returns 18 results in the Statistics tab. Running the search query with the outputlookup command also returns 18 results in the Statistics tab. But querying with inputlookup returns only 15 results in the Statistics tab. What could be the reason?
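One frequent cause of this mismatch is that outputlookup and inputlookup resolve to different copies of the lookup file in different app or user contexts, so the copy being read is stale. A quick way to check for multiple copies (a sketch; test.csv stands in for the actual lookup file name):

```
| rest /services/data/lookup-table-files splunk_server=local
| search title="test.csv"
| table title eai:acl.app eai:acl.sharing updated
```

If more than one row comes back, or the updated time predates the outputlookup run, permissions/sharing of the lookup file are the first thing to fix.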
Splunk's python3 can't import pandas. What should I do? Sample:

% $SPLUNK_HOME/bin/splunk cmd python3
>>> import pandas
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pandas'

Also, how do custom apps use python3? Where should I put python.version=python3?
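pandas is not bundled with Splunk's interpreter, and installing third-party packages directly into Splunk's Python is generally discouraged (upgrades can wipe them); the usual pattern is to vendor the package inside the app's own bin or lib directory and add it to sys.path. As for python.version, it is set per stanza in the app's .conf files, e.g. commands.conf for a custom search command (a sketch; mycommand and mycommand.py are placeholder names):

```
[mycommand]
filename = mycommand.py
chunked = true
python.version = python3
```

Scripted inputs accept the same python.version setting in their inputs.conf stanzas.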