All Posts

It depends on the actual use case - the data you have and the desired output. You already have one example in this thread from @ITWhisperer.
Hi, based on your error message this is related to a network connection issue. Check both host-based and network-based firewalls to see that everything is ok. If I understand correctly, you already fixed this on your firewall side? Whether you should use a HF as a hub/concentrator depends entirely on your security policy. If you have a strictly security-zone-based architecture (direct connections to the outside are not allowed), then you definitely need intermediate forwarders. If not, they just add complexity to your environment and won't give you the best performance. If you have a lot of UFs and no other configuration management software/service/system, then you should use the DS; if you already have something in place, use that instead of bringing in a totally new way to do it. r. Ismo
Hello, Firstly, the requirement is that we want to monitor the Docker containers present on the server. We first tried the approach of instrumenting our Machine Agent inside each Docker container, but with this approach our Docker image becomes heavy and our application performance may decrease. So instead we instrumented the Machine Agent in a Docker container running on that local server; that Machine Agent works correctly and provides metrics for some containers, but not for all of them. We took reference from the GitHub repository (https://github.com/Appdynamics/docker-machine-agent.git), but in our environment there are 40 containers and with this method it is monitoring only 9 of them. Can anyone help me solve this issue? Here you can see only 9 containers. Regards, Dishant
Hi Splunk Experts, We are trying to integrate CA UIM with Splunk to send Splunk alerts to CA UIM. We installed the Nimbus (CA UIM) add-on, configured the alert to trigger events, and also installed the Nimbus agent on the Splunk Enterprise server (deployed on Linux x64) as per the instructions, but no alerts are triggered for the search even when the condition matches. However, when we check manually we can see many triggered alerts under the trigger section. Can anyone suggest what the issue could be and how to resolve it? Below is the search and alert configuration. Thank you in advance. Regards, Eshwar
@bowesmana Sure thing, I have 2 problems. First, I would like to add more than one page to my dashboard, i.e. a single dashboard page with multiple pages organized similarly to tabs in a browser. Second, I would like one of the tabs on that same page to redirect to another Splunk dashboard in a different app. This is why I am asking whether it is possible, or if I will have to clone the dashboard into one of the tabs. Thanks!
    at io.cucumber.core.runtime.RethrowingThrowableCollector.executeAndThrow(RethrowingThrowableCollector.java:23)
    at io.cucumber.core.runtime.CucumberExecutionContext.runTestCase(CucumberExecutionContext.java:129)
    at io.cucumber.core.runtime.Runtime.lambda$executePickle$7(Runtime.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at io.cucumber.core.runtime.Runtime$SameThreadExecutorService.execute(Runtime.java:249)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
    at io.cucumber.core.runtime.Runtime.lambda$runFeatures$3(Runtime.java:110)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.stream.SliceOps$1$1.accept(SliceOps.java:204)
    at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1359)
    at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
    at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:499)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:486)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
    at io.cucumber.core.runtime.Runtime.runFeatures(Runtime.java:111)
    at io.cucumber.core.runtime.Runtime.lambda$run$0(Runtime.java:82)
    at io.cucumber.core.runtime.Runtime.execute(Runtime.java:94)
    at io.cucumber.core.runtime.Runtime.run(Runtime.java:80)
    at com.siemens.mindsphere.pss.testing.cli.Runner.run(Runner.java:122)
    at com.siemens.mindsphere.pss.testing.cli.Main.main(Main.java:43)
Caused by: java.io.IOException: HTTPS hostname wrong: should be <splunk.sws.siemens.com>
    at sun.net.www.protocol.https.HttpsClient.checkURLSpoofing(HttpsClient.java:649)
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:573)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:167)
    at com.splunk.HttpService.send(HttpService.java:510)
    ... 57 common frames omitted
Here is the stack trace.

java.lang.RuntimeException: HTTPS hostname wrong: should be <splunk.org.company.com>
    at com.splunk.HttpService.send(HttpService.java:512)
    at com.splunk.Service.send(Service.java:1351)
    at com.splunk.HttpService.get(HttpService.java:169)
    at com.splunk.Entity.refresh(Entity.java:383)
    at com.splunk.Entity.refresh(Entity.java:24)
    at com.splunk.Resource.validate(Resource.java:186)
    at com.splunk.Entity.validate(Entity.java:484)
    at com.splunk.Entity.getContent(Entity.java:159)
    at com.splunk.Entity.getString(Entity.java:295)
    at com.splunk.ServiceInfo.getInstanceType(ServiceInfo.java:158)
    at com.splunk.Service.enableV2SearchApi(Service.java:1389)
    at com.splunk.JobCollection.<init>(JobCollection.java:49)
    at com.splunk.Service.getJobs(Service.java:676)
    at com.splunk.Service.getJobs(Service.java:665)
    at com.splunk.Service.getJobs(Service.java:652)
    at com.siemens.mindsphere.pss.testing.splunk.EC2Scan.vulnerableEc2InstancesData(EC2Scan.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at io.cucumber.java.Invoker.doInvoke(Invoker.java:66)
    at io.cucumber.java.Invoker.invoke(Invoker.java:24)
    at io.cucumber.java.AbstractGlueDefinition.invokeMethod(AbstractGlueDefinition.java:47)
    at io.cucumber.java.JavaStepDefinition.execute(JavaStepDefinition.java:29)
    at io.cucumber.core.runner.CoreStepDefinition.execute(CoreStepDefinition.java:66)
    at io.cucumber.core.runner.PickleStepDefinitionMatch.runStep(PickleStepDefinitionMatch.java:63)
    at io.cucumber.core.runner.ExecutionMode$1.execute(ExecutionMode.java:10)
    at io.cucumber.core.runner.TestStep.executeStep(TestStep.java:85)
    at io.cucumber.core.runner.TestStep.run(TestStep.java:57)
    at io.cucumber.core.runner.PickleStepTestStep.run(PickleStepTestStep.java:51)
    at io.cucumber.core.runner.TestCase.run(TestCase.java:84)
    at io.cucumber.core.runner.Runner.runPickle(Runner.java:75)
    at io.cucumber.core.runtime.Runtime.lambda$executePickle$6(Runtime.java:128)
    at io.cucumber.core.runtime.CucumberExecutionContext.lambda$runTestCase$5(CucumberExecutionContext.java:129)
@tscroggins Yes, the certificate is imported on the client. The SDK is called from a Docker image, and in the Docker startup we have added instructions to import the root CA and the Splunk certificate.
Thank you. So what is the best practice to combine two queries and produce the output?
append and appendcols simply append one query's results to the other - like glue. Please correct me if I am wrong; what I really want is this.

This is query 1 and its output:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]")
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO by F62_2
| where F19!=036 AND FCO=01

F62_2            F19  FCO
384011068172061  840  1
584011056069894  826  1

Query 2:

eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2

What I really want is to take the output of query 1 and pass it as an input to query 2; the common field between the two queries is F62_2. Run separately, the two queries produce different outputs, so they should be combined in a way that takes F62_2 from query 1 and produces values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp.
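For what it's worth, a minimal sketch of one way to combine them without append: search both eventtypes in a single query and let stats merge events on the shared F62_2 field. This reuses the extractions from the two queries above; it assumes txn_uid and txn_timestamp are extracted on the response events and that F62_2 is present in both event sets.

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]") OR eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO, values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2
| where F19!=036 AND FCO=01

The final where clause keeps only the F62_2 rows that satisfy the query-1 conditions, and the txn_uid/txn_timestamp values on those rows come from the query-2 events that share the same F62_2.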
Hello, One of our MF Local Administrative Group Member rules is generating a significant number of alerts because the sccmadmin group was removed from an MF member server; assistance is needed in refining this search to minimize unnecessary alerts.

index=foo sourcetype=XmlWinEventLog (EventCode=4732) dest="mf" user!="nt service"
    NOT (EventCode="4732" src_user="root" MemberSid="Domain Admins" Group_Name="Administrators")
    NOT (EventCode="4732" MemberSid="NT SERVICE\\*" (Group_Name="Administrators" OR Group_Name="Remote Desktop Users"))
| eval user=lower(MemberSid)
| eval src_user=lower(src_user)
| stats values(user) as user, values(Group_Domain) as Group_Domain, values(dest) as dest by src_user, Group_Name, EventCode, signature, _time

Thanks...
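Purely as a hedged sketch of one possible refinement: if the noisy events are the ones where sccmadmin itself is the member being added back, an extra exclusion on the member field might be enough. The NOT (MemberSid="*sccmadmin*") clause is an assumption - check a sample event for the field that actually carries that name before using it.

index=foo sourcetype=XmlWinEventLog (EventCode=4732) dest="mf" user!="nt service"
    NOT (EventCode="4732" src_user="root" MemberSid="Domain Admins" Group_Name="Administrators")
    NOT (EventCode="4732" MemberSid="NT SERVICE\\*" (Group_Name="Administrators" OR Group_Name="Remote Desktop Users"))
    NOT (MemberSid="*sccmadmin*")
| eval user=lower(MemberSid)
| eval src_user=lower(src_user)
| stats values(user) as user, values(Group_Domain) as Group_Domain, values(dest) as dest by src_user, Group_Name, EventCode, signature, _time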
Can you explain in more detail - cloning dashboards and drilling down to new dashboards are different things.
You can do

index=*
| eval group=index.":".host
| timechart span=1h sum(eval(len(_raw))) as len by group

Use subsearches with lookups to determine which index/host set you want to restrict to. Note that timechart will limit the number of groups to 10 by default, so use limit=X where X is the number of index/host pairs to watch.
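Putting the lookup and limit suggestions together, a minimal sketch - the lookup name monitored_hosts.csv and the limit value are placeholders:

index=* [ | inputlookup monitored_hosts.csv | fields host ]
| eval group=index.":".host
| timechart span=1h limit=20 useother=f sum(eval(len(_raw))) as len by group

The subsearch expands into an OR of host= terms taken from the lookup, restricting the base search to those hosts, and limit=20 lifts the default 10-series cap on the split by group.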
OK, so it looks like the main issue is probably values(*) as *, which is taking all values of all fields for each guid; since you appear to have JSON data containing arrays, there is more data than just guid/resourceId/sourcenumber. As for cardinality, the difference between your stats values(*) as * and @dtburrows3's version, which returns ONLY 2 collected fields, is that every field in your data is being collected. If there is a 1:1 ratio of events to guid, then cardinality is high and you will effectively be returning every single piece of the 10M events to the search head before it can do the stats count by sourcenumber. If there are 20 events per guid, then you will get a reduced event count sent to the SH, i.e. lower cardinality, but with potentially 20 values per multivalue field.

So, with this statement you are returning 3 discrete bits of info:

| stats max(eval(if(disposition=="TERMINATED", 1, 0))) as guid_terminated, values(sourcenumber) as sourcenumber by guid

- guid
- guid_terminated = 0 or 1, depending on whether that guid was terminated
- sourcenumber - the values of sourcenumber

Indexed extractions: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Data/Aboutindexedfieldextraction
Tstats: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/tstats
A good document on how tstats/TERM/PREFIX can massively improve searches, although for JSON data it will not generally help unless indexed extractions are being made: https://conf.splunk.com/files/2020/slides/PLA1089C.pdf
Hi Guys (and Gals), Hopefully a quick question - it's late, so my brain isn't firing quickly/properly. I need to run a query to get the ingestion over time across two variables: host and index. In this specific case, I need to determine the data ingestion from a specific set of hosts and whether the inbound data has been increasing more than normally expected. So the query would look something like:

index=linuxos host IN (server1, server2, server3...) [or possibly from a lookup of the set of hosts]
| eval sum(the data per host per hour {or whatever regular chunk of time you want} for a 7 day period)
| timechart xyz ==> chart over a line graph

Also, if there is a relevant dashboard/console in the Monitoring Console that I am not thinking of, please direct me to the relevant menu or docs. Appreciate any assistance.
What are "indexed extractions"?  Doc link? Never heard of tstats. We have 5 streams of data coming in, others are sip* so need that enum*, yes. Of the 2M there will be 2M guids.  2M dataset ... See more...
What are "indexed extractions"?  Doc link? Never heard of tstats. We have 5 streams of data coming in, others are sip* so need that enum*, yes. Of the 2M there will be 2M guids.  2M dataset 1, 2M data set 2, and then similar numbers on the sip side (so another 2M + 2M + 2M).  So 10M total for all streams (more or less) sourcenumbers per guid definitely isn't 1:1, so I'd have to count up, but guessing pareto principle applies, so like 20% makes 80% of the calls. just the one indexer. What does 'cardinality is high for the guid' mean? How does "values(*) as * " help? Thank you!  learning so much!  
You could do a simple

| eval resource=if($t_resource|s$="IN(*)", "All", resource)

which would make all resources be "All" if the only selected dropdown value is "All", so the split by resource only creates a single split. Or you could add a change element to the input that sets the split-by clause, e.g. something like

<change>
  <eval token="split_by_resource">if($t_resource|s$="IN(*)", "", "resource")</eval>
</change>

and then change your stats (and eventstats) command to

| stats count(status_code) as StatusCodeCount by _time, status_code, $split_by_resource$

You would also need a default for that token, so you need an init block, i.e.

<init>
  <set token="split_by_resource"></set>
</init>

and then you would also need to create a 'resource' field set to "All" for the final table display if you want "All" to appear. The first option requires the least fiddling around.
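If it helps to see the pieces in one place, here is a minimal Simple XML sketch of the second option. The form label, input choices, index and base search are all placeholder assumptions; only the token handling mirrors the snippets above, and the by-fields are space-separated so the query stays valid when the token is empty.

<form>
  <label>Status codes by resource (sketch)</label>
  <init>
    <set token="split_by_resource"></set>
  </init>
  <fieldset submitButton="false">
    <input type="multiselect" token="t_resource" searchWhenChanged="true">
      <label>Resource</label>
      <choice value="*">All</choice>
      <choice value="login">login</choice>
      <choice value="checkout">checkout</choice>
      <default>*</default>
      <prefix>IN(</prefix>
      <suffix>)</suffix>
      <delimiter>,</delimiter>
      <change>
        <eval token="split_by_resource">if($t_resource|s$="IN(*)", "", "resource")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=web resource $t_resource$
| stats count(status_code) as StatusCodeCount by _time status_code $split_by_resource$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>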
Join the event Get Resiliency in the Cloud on January 18th, 2024 (8:30AM PST). You will hear from industry experts from Pacific Dental Services, IDC, The Futurum Group CEO Daniel Newman, and Splunk leaders on how to build resilience for your expansion to the cloud. You will learn about the drivers that lead enterprises to build data-centric security and observability use cases on Splunk Cloud Platform, delivered as a service, and its benefits. Additionally, you will learn about:
How digital transformation is influencing businesses to expand to the cloud
The cloud transformation journey of Pacific Dental Services with Splunk
New advancements in Splunk Cloud Platform that accelerate the journey to the cloud
Achieving faster value realization with Splunk services
Register today for the event Get Resiliency in the Cloud happening on January 18th, 2024 (8:30AM PST).
Are you doing indexed extractions on the JSON? If so, you may be able to use tstats to avoid looking at the raw data. It could be possible to use tstats (with prestats) to get the TERMINATED data set and then the full guid set using tstats append=t, but cardinality may still be an issue - see below.
Do all events have resourceId="enum" in them? If so, adding resourceId="enum*" is unnecessary.
Of the 2M events, how many guids would you typically expect, and how many sourcenumbers would you expect to see per guid?
How many indexers do you have in your deployment?
If the cardinality is high for guid, then you are effectively returning much of the data to the search head for your 2M events. You can be more specific than values(*) as * because you don't need resourceId.
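To make the tstats idea concrete, a rough, assumption-heavy sketch: it requires guid and disposition to exist as index-time fields (e.g. via INDEXED_EXTRACTIONS = json on the sourcetype), the index and sourcetype names are placeholders, and it uses a single tstats split by disposition rather than the append=t variant mentioned above.

| tstats count where index=calls sourcetype=call_json by guid, disposition
| eval terminated=if(disposition=="TERMINATED", 1, 0)
| stats max(terminated) as guid_terminated, sum(count) as events by guid

Because tstats reads only the index-time summaries, this never touches the raw JSON, which is where most of the cost of the original search comes from.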
Thank you! That was it.