All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Folks, I want to create a correlation search for inactive account activity, including each account's last login timestamp and the app used to log in. Any suggestions would be helpful.
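A minimal sketch of one way to approach this, assuming the authentication events are mapped to the CIM Authentication data model (the 90-day inactivity threshold and the user/app field names are assumptions to adapt):

    | tstats latest(_time) as last_login from datamodel=Authentication where Authentication.action="success" by Authentication.user, Authentication.app
    | rename Authentication.* as *
    | where last_login < relative_time(now(), "-90d@d")
    | eval last_login_time = strftime(last_login, "%Y-%m-%d %H:%M:%S")
    | table user, app, last_login_time

Accounts returned here have not logged in for 90+ days, and the same tstats output carries the app used for the most recent login.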
All, Looking to bring in general security data from about 200 macOS laptops. Ideally CIM-friendly, with as much of the junk filtered out as possible.

1) Any good docs/read-throughs you'd recommend I start with?
2) I checked out cmdReporter, which seems pretty good. Anything else similar I should look at?
3) Is Splunk_TA_nix worth deploying on macOS?

thanks -Daniel
I am using SSO and I want to be able to edit the error message you get when SSO authenticates but the user account is not authorized to use Splunk. I get the following error message:

    IDP failed to authenticate request. Status Code="Responder,RequestDenied" Check splunkd.log for more information about the failure.

I would like to change this message to something that tells end users to contact me. Does anyone know where this error message lives?
Hello, I would like to set up Splunk Cloud (version 7) to accept emails as events.
Hello, I'm using the AWS GuardDuty Add-on with an SQS input, as opposed to the Lambda HEC method. The findings are indexed correctly in Search. However, the add-on dashboards are not mapping/pulling Finding data correctly.

Is there a known issue with using the generic SQS input (pull method) as opposed to the Lambda HEC (push method)? If the above is true, is there a published troubleshooting guide to help users modify the dashboard/drilldown XML to fix the dashboard mapping/search issues? The add-on is currently -not supported- - does that mean Splunk Support will no longer be able to help update the add-on or debug issues moving forward? Thanks!
I have several types of metric data going into a metric index. One has 'username' and 'DimA' as dimensions, and 'ValueA' as the metric_name. The second type of metric data has 'DimA' as its dimension and 'ValueB' as the metric_name. I would like to associate the 'username' with 'ValueB'. How can I accomplish this?
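A minimal sketch of one way to stitch these together, assuming each DimA value maps to a single username (the index name my_metrics and the one-minute span are placeholders):

    | mstats avg(_value) as ValueB where index=my_metrics metric_name="ValueB" by DimA span=1m
    | join type=left DimA
        [| mstats latest(_value) as ValueA where index=my_metrics metric_name="ValueA" by DimA, username ]
    | table _time, DimA, username, ValueA, ValueB

The subsearch recovers the DimA-to-username pairing from the first metric series, and the join then carries username onto the ValueB rows.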
Hi! I'm trying to get DB Connect 3.1.4 running, and what's strange is I had one working, but it stopped. Now, even with a fresh installation, I can't get it to start. I also tried from another server, and it's failing there too. It looks from the logs like the Task Server is failing with a 404. I do have Java in the path, so I can't see why this wouldn't be working. Any help would be appreciated.

2020-02-03 16:16:55.187 -0500 45786@ [main] INFO org.eclipse.jetty.server.AbstractConnector - Started application @62b26bef{HTTP/1.1,[http/1.1]}{127.0.0.1:9998}
2020-02-03 16:16:55.187 -0500 45786@[main] ERROR io.dropwizard.cli.ServerCommand - Unable to start server, shutting down
javax.servlet.ServletException: io.dropwizard.jersey.setup.JerseyServletContainer-71daf5d6@74ee0fd6==io.dropwizard.jersey.setup.JerseyServletContainer,jsp=null,order=1,inst=false
    at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:691)
    at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:427)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:760)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:374)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:848)
    at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
    at com.codahale.metrics.jetty9.InstrumentedHandler.doStart(InstrumentedHandler.java:103)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:403)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
    at org.eclipse.jetty.server.handler.StatisticsHandler.doStart(StatisticsHandler.java:252)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
    at org.eclipse.jetty.server.Server.start(Server.java:419)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
    at org.eclipse.jetty.server.Server.doStart(Server.java:386)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at io.dropwizard.cli.ServerCommand.run(ServerCommand.java:53)
    at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:44)
    at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
    at io.dropwizard.cli.Cli.run(Cli.java:75)
    at io.dropwizard.Application.run(Application.java:93)
    at com.splunk.dbx.server.bootstrap.TaskServerStart.startTaskServer(TaskServerStart.java:121)
    at com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:72)
    at com.splunk.modularinput.Script.run(Script.java:66)
    at com.splunk.modularinput.Script.run(Script.java:44)
    at com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:150)
Caused by: java.lang.RuntimeException: com.splunk.HttpException: HTTP 404 -- Could not find object id=http
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorFactory.create(HttpEventCollectorFactory.java:71)
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorFactory.create(HttpEventCollectorFactory.java:52)
    at com.splunk.dbx.server.TaskServer.createHecService(TaskServer.java:487)
    at com.splunk.dbx.server.TaskServer.access$300(TaskServer.java:158)
    at com.splunk.dbx.server.TaskServer$2.configure(TaskServer.java:238)
    at org.glassfish.hk2.utilities.binding.AbstractBinder.bind(AbstractBinder.java:187)
    at org.glassfish.jersey.model.internal.CommonConfig.configureBinders(CommonConfig.java:676)
    at org.glassfish.jersey.model.internal.CommonConfig.configureMetaProviders(CommonConfig.java:641)
    at org.glassfish.jersey.server.ResourceConfig.configureMetaProviders(ResourceConfig.java:829)
    at org.glassfish.jersey.server.ApplicationHandler.initialize(ApplicationHandler.java:453)
    at org.glassfish.jersey.server.ApplicationHandler.access$500(ApplicationHandler.java:184)
    at org.glassfish.jersey.server.ApplicationHandler$3.call(ApplicationHandler.java:350)
    at org.glassfish.jersey.server.ApplicationHandler$3.call(ApplicationHandler.java:347)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.processWithException(Errors.java:255)
    at org.glassfish.jersey.server.ApplicationHandler.<init>(ApplicationHandler.java:347)
    at org.glassfish.jersey.servlet.WebComponent.<init>(WebComponent.java:392)
    at org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
    at org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
    at javax.servlet.GenericServlet.init(GenericServlet.java:244)
    at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:670)
    ... 45 common frames omitted
Caused by: com.splunk.HttpException: HTTP 404 -- Could not find object id=http
    at com.splunk.HttpException.create(HttpException.java:84)
    at com.splunk.DBXService.sendImpl(DBXService.java:131)
    at com.splunk.DBXService.send(DBXService.java:43)
    at com.splunk.HttpService.get(HttpService.java:154)
    at com.splunk.Entity.refresh(Entity.java:381)
    at com.splunk.Entity.refresh(Entity.java:24)
    at com.splunk.Resource.validate(Resource.java:186)
    at com.splunk.Entity.validate(Entity.java:462)
    at com.splunk.Entity.getContent(Entity.java:157)
    at com.splunk.Entity.size(Entity.java:416)
    at java.util.HashMap.putMapEntries(HashMap.java:501)
    at java.util.HashMap.<init>(HashMap.java:490)
    at com.splunk.dbx.model.repository.JsonMapperEntityResolver.apply(JsonMapperEntityResolver.java:34)
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollectorFactory.enableGlobalHec(HttpEventCollectorFactory.jav
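For what it's worth, the nested "HTTP 404 -- Could not find object id=http" shows the Task Server failing to read the HEC (data/inputs/http) configuration from splunkd while enabling the global HEC. A minimal check against the management port, assuming default ports and placeholder credentials:

    # Does the HEC input endpoint exist on this instance?
    curl -k -u admin:changeme https://localhost:8089/services/data/inputs/http

If this returns a 404, enabling HTTP Event Collector (Settings > Data Inputs > HTTP Event Collector > Global Settings) before starting DB Connect may be worth trying.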
Hello, I have a problem with my replication status. I get the result below whenever I check kvstore status: host2's replication status has been showing Recovering for the past week. I tried to resync both the shcluster config using "splunk resync shcluster-replicated-config" and the kvstore using "splunk resync kvstore -source MyGUID", but neither worked. Any help would be appreciated.

This member:
    backupRestoreStatus : Ready
    date : Mon Feb 3 15:50:19 2020
    dateSec : 1580763019.583
    disabled : 0
    guid : ****-46ED-450B-BE21-******
    oplogEndTimestamp : Mon Feb 3 15:50:18 2020
    oplogEndTimestampSec : 1580763018
    oplogStartTimestamp : Mon Feb 3 06:06:00 2020
    oplogStartTimestampSec : 1580727960
    port : 8191
    replicaSet : splunkrs
    replicationStatus : KV store captain
    standalone : 0
    status : ready

Enabled KV store members:
  host1:8191
    guid : ****-46ED-450B-BE21-******
    hostAndPort : host1:8191
  host2:8191
    guid : ***-9A10-4962-9A3F-******
    hostAndPort : host2:8191
  host3:8191
    guid : ****-F4B3-4E58-B765-*******
    hostAndPort : host3:8191

KV store members:
  host1:8191
    configVersion : 205602
    hostAndPort : host1:8191
    lastHeartbeat : Mon Feb 3 15:50:17 2020
    lastHeartbeatRecv : Mon Feb 3 15:50:17 2020
    lastHeartbeatRecvSec : 1580763017.874
    lastHeartbeatSec : 1580763017.873
    optimeDate : Mon Feb 3 15:50:14 2020
    optimeDateSec : 1580763014
    pingMs : 0
    replicationStatus : Non-captain KV store member
    uptime : 253492
  host2:8191
    configVersion : 205602
    hostAndPort : host:8191
    lastHeartbeat : Mon Feb 3 15:50:17 2020
    lastHeartbeatRecv : Mon Feb 3 15:50:19 2020
    lastHeartbeatRecvSec : 1580763019.366
    lastHeartbeatSec : 1580763017.873
    optimeDate : Wed Oct 2 11:09:46 2019
    optimeDateSec : 1570028986
    pingMs : 0
    replicationStatus : Recovering
    uptime : 253417
  host3:8191
    configVersion : 205602
    electionDate : Fri Jan 24 21:18:18 2020
    electionDateSec : 1579918698
    hostAndPort : host3:8191
    optimeDate : Mon Feb 3 15:50:18 2020
    optimeDateSec : 1580763018
    replicationStatus : KV store captain
    uptime : 844587

Thanks,
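One commonly used recovery path for a member stuck in Recovering, assuming host2 can be taken offline briefly (run on host2 only; $SPLUNK_HOME is a placeholder):

    # Stop the stuck member, wipe its local KV store, and restart
    $SPLUNK_HOME/bin/splunk stop
    $SPLUNK_HOME/bin/splunk clean kvstore --local
    $SPLUNK_HOME/bin/splunk start

Cleaning the local KV store forces the member to perform a fresh initial sync from the captain (host3 here) instead of replaying an oplog that no longer reaches back to its stale optimeDate.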
I'm a beginner with Splunk, hoping someone can help me with my syntax around the following query. I have queries with defined categories in a given field (error_msg_service):

    index=wsi_tax_summary sourcetype=stash partnerId=* error_msg_service=* ein=* ein!="" tax_year=2019 capability=W2
    | eval error_msg_service = case(match(error_msg_service, "OK"), "Success", match(error_msg_service, "W2 forms"), "Forms Unavailable", match(error_msg_service, "Invalid Credentials"), "Invalid Credentials", 1==1, "Other")
    | stats dc(intuit_tid) as Total by partnerId ein error_msg_service

This gives me 4 columns: partnerId, ein, error_msg_service, and total count. My goal combines the granularity of stats with the multiple columns you get from chart for the unique values I've defined in my case arguments, so that I get the following columns:

    partnerId | ein | error_msg_service when equal to "Success" | error_msg_service when equal to "Forms Unavailable" | error_msg_service when equal to "Invalid Credentials" | error_msg_service when equal to "Other" | Total count of events

Has someone done something like this that could help?
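A minimal sketch of the usual idiom for this: keep stats (so partnerId and ein stay as row fields) and split the distinct count per category with eval inside the dc() calls (field names are taken from the question):

    index=wsi_tax_summary sourcetype=stash partnerId=* error_msg_service=* ein=* ein!="" tax_year=2019 capability=W2
    | eval error_msg_service = case(match(error_msg_service, "OK"), "Success", match(error_msg_service, "W2 forms"), "Forms Unavailable", match(error_msg_service, "Invalid Credentials"), "Invalid Credentials", 1==1, "Other")
    | stats dc(eval(if(error_msg_service=="Success", intuit_tid, null()))) as Success,
            dc(eval(if(error_msg_service=="Forms Unavailable", intuit_tid, null()))) as "Forms Unavailable",
            dc(eval(if(error_msg_service=="Invalid Credentials", intuit_tid, null()))) as "Invalid Credentials",
            dc(eval(if(error_msg_service=="Other", intuit_tid, null()))) as Other,
            dc(intuit_tid) as Total
      by partnerId, ein

Each dc(eval(...)) counts distinct intuit_tid values only where the category matches, giving one column per category on a single row per partnerId/ein.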
I need to create a KPI in a service. This KPI is the percentage of errors to sessions. Error is a count from index a, session is from index b; I need to join them and create the field as a percent:

    index=a
    | eval time_hour = strftime(_time, "%D-%H")
    | stats count as error
    | join time_hour
        [ search index=b eventType="use"
          | eval time_hour = strftime(_time, "%D-%H")
          | stats count AS sessions ]
    | eval percent=error/session *100

It does not show the data in the ITSI service. What should I change?
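A minimal corrected sketch, assuming the intent is an hourly error rate (the original drops time_hour in both stats by-clauses and mixes session/sessions; index names a and b are from the question):

    index=a
    | eval time_hour = strftime(_time, "%D-%H")
    | stats count as error by time_hour
    | join type=left time_hour
        [ search index=b eventType="use"
          | eval time_hour = strftime(_time, "%D-%H")
          | stats count as sessions by time_hour ]
    | eval percent = round(error / sessions * 100, 2)

For an ITSI KPI base search, exposing percent as the threshold field and letting ITSI handle the time bucketing, rather than a manual time_hour, may be the simpler route.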
Hi, Some of our power users need to be able to get to Advanced Edit on a report - for example, to change the dispatch.ttl value. If they navigate to Settings -> Searches, Reports, and Alerts -> "Their report" and click Edit, they don't see "Advanced Edit". What capability needs to be added to a non-admin user in order to have access to "Advanced Edit"? I appreciate your advice.
I am trying to pass a number from a subsearch to the main search and find the 10 values before or after that number. So if the number is 50, the parent search should pull events from 59 to 50 or 50 to 41.

    index=* Number="*"
        [search index=* "idnumber" | tail 1 | fields Number | format]
    | head/tail 10
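A minimal sketch of one way to turn the subsearch into a numeric range filter, using the special query field so the outer search receives terms like Number>=41 Number<=59 (index names and the "idnumber" literal are from the question; the +/-9 window is the assumption matching "50 to 41"):

    index=* Number="*"
        [ search index=* "idnumber"
          | tail 1
          | eval query = "Number>=" . (Number - 9) . " Number<=" . (Number + 9)
          | fields query ]

Because the subsearch emits a field named query, its value is substituted into the outer search verbatim as a range condition, with no head/tail needed.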
I got a message that the TCP output processor has paused the data flow: "Forwarding to output group heavy forwarder has blocked for 3570 seconds. This will probably stall the data flow towards indexing and other network outputs."

1) Does this cause data drop?
2) Does it catch up on the data once it is free?
3) How and where do I set up a persistent queue to address these issues?
4) How do I create an alert for this type of issue, or for when the queue is full?

Thanks in advance. (A sketch of the queue config and an alert search follows.)
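A minimal sketch of both pieces, assuming a heavy forwarder receiving on splunktcp port 9997 (the port and sizes are placeholder assumptions). Persistent queues are configured per input in inputs.conf, not in outputs.conf:

    # inputs.conf on the heavy forwarder
    [splunktcp://9997]
    queueSize = 10MB
    persistentQueueSize = 5GB

And a search that can back an alert on blocked queues:

    index=_internal source=*metrics.log group=queue blocked=true
    | stats count by host, name

On the first two questions: while blocked, the forwarder stops reading rather than dropping, and it catches up once the output unblocks; however, inputs with no persistent queue that cannot be paused (e.g. UDP) can lose data in the meantime.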
I will be installing version 7.3.4 in a testing environment, and I am wondering if this version already has the patched datetime.xml that was released before the beginning of this year. The release notes under "Timestamp recognition of dates with two-digit years fails beginning January 1, 2020" list all the major/minor versions that have the patched datetime.xml, but does that mean that any version outside of those needs to have the file updated manually (or follow one of the other options included in the documentation)?
I have a dashboard which displays some simple "top 15" visualizations based on outbound network traffic. The base search just pulls some basic stats from All_Traffic, filtering in the tstats ... where clause to include only outbound traffic. I define "outbound" to be any traffic for which the source is an internal IP and the destination is NOT an internal IP.

This worked until we upgraded from Splunk 7.3.1 to 8.0.1, but now the clauses filtering out All_Traffic.dest_ip!=10.0.0.0/8, etc. are completely ignored (running the same search with and without the condition returns the same results, without the desired filtering).

Here's the original base search:

    | tstats count(All_Traffic.dest_ip) AS ip_count count(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic
        where (All_Traffic.src_ip=10.0.0.0/8 OR All_Traffic.src_ip=192.168.0.0/16 OR All_Traffic.src_ip=172.16.0.0/12)
          AND NOT (All_Traffic.dest_ip=10.0.0.0/8 OR All_Traffic.dest_ip=192.168.0.0/16 OR All_Traffic.dest_ip=172.16.0.0/12)
        by All_Traffic.dest_ip, All_Traffic.dest_port
    | rename All_Traffic.* AS *

A simpler version with only one exclusion in the tstats ... where clause, which also does not work:

    | tstats count(All_Traffic.dest_ip) AS ip_count count(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic
        where All_Traffic.dest_ip!=10.0.0.0/8
        by All_Traffic.dest_ip, All_Traffic.dest_port
    | rename All_Traffic.* AS *

This seems very similar (but not identical) to the problem described in the release notes for 8.0.1 as fixed:

    SPL-179594, SPL-177665 - tstats where clause does not filter as expected when structured like "WHERE * NOT (field1=foo AND field2=bar)"

Also seems related to the question here: hxxps://answers.splunk.com/answers/760542/why-only-one-condition-works-for-where-clause-in-a.html

Similar to the asker above, I am hoping to do the filtering in the WHERE clause of the tstats for performance. I run this search over the past 24h and it takes a while to run. I'd rather not split the tstats by src_ip and have to reaggregate with another stats, and would prefer to do the filtering BEFORE passing the stats to | search. I can work around it if I have to (the search below DOES work), but I'd rather go with something a bit more performant.

    | tstats count(All_Traffic.dest_ip) AS ip_count count(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic
        by All_Traffic.dest_ip, All_Traffic.dest_port, All_Traffic.src_ip
    | rename All_Traffic.* AS *
    | where (cidrmatch("10.0.0.0/8",src_ip) OR cidrmatch("172.16.0.0/12",src_ip) OR cidrmatch("192.168.0.0/16",src_ip) OR cidrmatch("169.254.0.0/16",src_ip))
      AND NOT (cidrmatch("10.0.0.0/8",dest_ip) OR cidrmatch("172.16.0.0/12",dest_ip) OR cidrmatch("192.168.0.0/16",dest_ip) OR cidrmatch("169.254.0.0/16",dest_ip))
I recently set up a Docker implementation on a test server (CentOS 8). I pulled down the base Splunk Enterprise container and got it working with no issues. Obviously there were no events going to it, since I hadn't pulled a universal forwarder yet. So I pulled down the UF and got it working too, and events started flowing into my base Splunk container - all was good. Then I shut down the server (and have several times since), but when I start the base Splunk container and leave the forwarder off, I am still getting events flowing in. It doesn't seem to matter if I start the forwarder or leave it off - I still get events. Is this normal behavior? I cannot find any documentation on why I would still be getting events with the forwarder off.
I need to ingest Proofpoint Campaign data, and it seems that there is no canned TA/App for this. What have others done for this data source?
Hi Folks, We receive several hundred files per day from 20 different sources. The filenames contain the source that we received the file from, and have a three-digit sequence number as a suffix. Occasionally a file gets lost in transit, so I have designed a dashboard with 20 panels (one for each source) to highlight missing files: a makeresults plus a streamstats generates a list of sequence numbers, then a join to a search which extracts the sequence numbers from the filenames received flags any sequence number that is not 'joined' to a filename as a missing file.

To make the dashboard more efficient, I'm trying to implement a base search to list the files from all sources, which I then want to pass to my subsearches - I have to use subsearches because of the makeresults which generates the full list of sequence numbers (please see a cut-down version of the code below). However, it seems that the subsearches are unable to read my base search. I see that this question has been asked a few times in this forum, but none of the questions I viewed have accepted answers, and none of them were trying to use the same technique. So I just wanted to check... is there a way to pass base search results to subsearches? If not, is there another strategy that I could use to detect missing files? Thanks, Doug.

<dashboard>
  <label>Base Post Question</label>
  <search id="filelist">
    <query>
      my base search which extracts filenames and the times that they arrived
      | eval source=substr(filename,1,3)
      | eval seq=ltrim(substr(filename,14,3),"0")
      | table _time filename source seq
    </query>
    <earliest>-24h</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
    <refresh>1h</refresh>
    <refreshType>interval</refreshType>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>
            | makeresults count=99
            | streamstats count as seq
            | join type=left seq
                [ | search base="filelist" source="ABC"
                  | table _time filename source seq ]
            | eval filename=if(isnull(filename),"Missing File!",filename)
            | table _time filename
          </query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
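One possible workaround, since a base search cannot be referenced from inside a subsearch (| search base="filelist" is not valid SPL): make each panel a post-process of the base and append the generated sequence instead of joining to it. A sketch under that assumption (the tostring aligns the numeric generated seq with the string seq extracted in the base):

    <search base="filelist">
      <query>
        | search source="ABC"
        | append [| makeresults count=99 | streamstats count as seq | eval seq=tostring(seq)]
        | stats values(filename) as filename max(_time) as _time by seq
        | eval filename=coalesce(filename, "Missing File!")
        | table _time filename
      </query>
    </search>

Any seq that only came from the makeresults side has no filename after the stats, so it surfaces as "Missing File!" without needing a join at all.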
I'm using the Node.js AppDynamics agent; when deployed in an AWS EC2 container, the application does not come up because of this error. Even locally, when I try to run it, I'm stuck with the same error. Please help!

    docker container run <<DOCKERIMAGE>>

    TypeError: Cannot read property 'logError' of undefined
        at LibagentConnector.logError (/usr/src/app/node_modules/appdynamics/lib/libagent/libagent-connector.js:367:17)
        at Logger.error (/usr/src/app/node_modules/appdynamics/lib/core/logger.js:167:28)
        at Agent.init (/usr/src/app/node_modules/appdynamics/lib/core/agent.js:319:17)
        at AppDynamics.self.(anonymous function) [as profile] (/usr/src/app/node_modules/appdynamics/lib/core/agent.js:610:26)
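For reference, the Node.js agent is normally required and initialized via profile() before any other require in the entry point; a minimal sketch with placeholder settings (every value below is hypothetical):

    // Must run before any other require in the app's entry file
    require("appdynamics").profile({
      controllerHostName: "controller.example.com",  // placeholder
      controllerPort: 443,
      controllerSslEnabled: true,
      accountName: "my-account",                     // placeholder
      accountAccessKey: "my-access-key",             // placeholder
      applicationName: "my-app",
      tierName: "my-tier",
      nodeName: "node-1"
    });

If the agent's native libagent cannot initialize in the container, or profile() runs after other modules load, internal agent objects can end up undefined, which could explain a "Cannot read property ... of undefined" inside Agent.init.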
Hi All, I have 1 cluster master and 2 indexers. I've used the configuration bundle push action several times. The last time I used this process, I was trying to push an add-on; the result on both of the indexers was "Added certificates". After that, I edited server.conf on both indexers with the correct URL, and it still didn't work. Any suggestions? Thanks! Hen