All Posts

Here is the stack trace.

java.lang.RuntimeException: HTTPS hostname wrong: should be <splunk.org.company.com>
    at com.splunk.HttpService.send(HttpService.java:512)
    at com.splunk.Service.send(Service.java:1351)
    at com.splunk.HttpService.get(HttpService.java:169)
    at com.splunk.Entity.refresh(Entity.java:383)
    at com.splunk.Entity.refresh(Entity.java:24)
    at com.splunk.Resource.validate(Resource.java:186)
    at com.splunk.Entity.validate(Entity.java:484)
    at com.splunk.Entity.getContent(Entity.java:159)
    at com.splunk.Entity.getString(Entity.java:295)
    at com.splunk.ServiceInfo.getInstanceType(ServiceInfo.java:158)
    at com.splunk.Service.enableV2SearchApi(Service.java:1389)
    at com.splunk.JobCollection.<init>(JobCollection.java:49)
    at com.splunk.Service.getJobs(Service.java:676)
    at com.splunk.Service.getJobs(Service.java:665)
    at com.splunk.Service.getJobs(Service.java:652)
    at com.siemens.mindsphere.pss.testing.splunk.EC2Scan.vulnerableEc2InstancesData(EC2Scan.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at io.cucumber.java.Invoker.doInvoke(Invoker.java:66)
    at io.cucumber.java.Invoker.invoke(Invoker.java:24)
    at io.cucumber.java.AbstractGlueDefinition.invokeMethod(AbstractGlueDefinition.java:47)
    at io.cucumber.java.JavaStepDefinition.execute(JavaStepDefinition.java:29)
    at io.cucumber.core.runner.CoreStepDefinition.execute(CoreStepDefinition.java:66)
    at io.cucumber.core.runner.PickleStepDefinitionMatch.runStep(PickleStepDefinitionMatch.java:63)
    at io.cucumber.core.runner.ExecutionMode$1.execute(ExecutionMode.java:10)
    at io.cucumber.core.runner.TestStep.executeStep(TestStep.java:85)
    at io.cucumber.core.runner.TestStep.run(TestStep.java:57)
    at io.cucumber.core.runner.PickleStepTestStep.run(PickleStepTestStep.java:51)
    at io.cucumber.core.runner.TestCase.run(TestCase.java:84)
    at io.cucumber.core.runner.Runner.runPickle(Runner.java:75)
    at io.cucumber.core.runtime.Runtime.lambda$executePickle$6(Runtime.java:128)
    at io.cucumber.core.runtime.CucumberExecutionContext.lambda$runTestCase$5(CucumberExecutionContext.java:129)
@tscroggins Yes, the certificate is imported on the client. The SDK is called from a Docker image, and in the Docker startup we have added instructions to import the root CA and the Splunk certificate.
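For context, a minimal sketch of what such a startup import might look like, assuming the JVM's default truststore with the default changeit password; every path and alias below is a placeholder, not taken from the poster's setup:

# Hypothetical Docker startup snippet: trust the root CA and the Splunk
# server certificate in the JVM's default truststore.
keytool -importcert -noprompt -trustcacerts \
    -alias company-root-ca -file /certs/root-ca.pem \
    -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit
keytool -importcert -noprompt -trustcacerts \
    -alias splunk-server -file /certs/splunk.pem \
    -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit

Note that the exception in the trace above ("HTTPS hostname wrong") is a hostname verification failure rather than a trust failure, so whether the certificate's CN/SAN matches splunk.org.company.com matters independently of the import.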
Thank you. So what is the best practice to combine two queries and produce the output?
append and appendcols simply append one query to the other; it's like a glue. Please correct me if I am wrong. What I really want is this.

This is query 1, with its output:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]")
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO by F62_2
| where F19!=036 AND FCO=01

F62_2            F19  FCO
384011068172061  840  1
584011056069894  826  1

Query 2:

eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2

What I really want is to take the output of query 1 and pass it as an input to query 2; the common field between the two queries is F62_2. If I run each query on its own, I get different outputs. Basically, the two queries should be combined so that, when run, it takes F62_2 from query 1 and produces values(txn_uid) as txn_uid and values(txn_timestamp) as txn_timestamp.
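A minimal sketch of one common way to do this, assembled from the two queries in the post, using query 1 as a subsearch to filter query 2 on F62_2 (this assumes query 1 returns a manageable number of values, since subsearch results are truncated beyond the default result and runtime limits):

eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
``` keep only events whose F62_2 appears in the filtered output of query 1 ```
| search
    [ search (eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]")
    | rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
    | rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
    | rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
    | stats values(F19) as F19, values(FCO) as FCO by F62_2
    | where F19!=036 AND FCO=01
    | fields F62_2 ]
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2

The subsearch returns only the F62_2 join keys, which Splunk expands into an (F62_2=... OR F62_2=...) filter applied after the outer rex extraction.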
Hello, one of our MF Local Administrative Group Member rules is generating a significant number of alerts because the sccmadmin group was removed from an MF member server. Assistance is needed in refining this search to minimize unnecessary alerts.

index=foo sourcetype=XmlWinEventLog (EventCode=4732) dest="mf" user!="nt service"
    NOT (EventCode="4732" src_user="root" MemberSid="Domain Admins" Group_Name="Administrators")
    NOT (EventCode="4732" MemberSid="NT SERVICE\\*" (Group_Name="Administrators" OR Group_Name="Remote Desktop Users"))
| eval user=lower(MemberSid)
| eval src_user=lower(src_user)
| stats values(user) as user, values(Group_Domain) as Group_Domain, values(dest) as dest by src_user, Group_Name, EventCode, signature, _time

Thanks...
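If the goal is just to stop alerting on that one known-noisy group, a hedged sketch would be to append one more exclusion to the base search; the field carrying the group member's name is an assumption here (check whether it is MemberSid, member, or another field in your events):

index=foo sourcetype=XmlWinEventLog (EventCode=4732) dest="mf" user!="nt service"
    NOT (EventCode="4732" src_user="root" MemberSid="Domain Admins" Group_Name="Administrators")
    NOT (EventCode="4732" MemberSid="NT SERVICE\\*" (Group_Name="Administrators" OR Group_Name="Remote Desktop Users"))
    ``` hypothetical extra exclusion for the noisy sccmadmin changes ```
    NOT (MemberSid="*sccmadmin*")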
Can you explain in more detail? Cloning dashboards and drilling down to new dashboards are different things.
You can do

index=*
| eval group=index.":".host
| timechart span=1h sum(eval(len(_raw))) as len by group

Use subsearches with lookups to determine which index / host set you want to restrict to.

Note that timechart will limit the number of groups to 10, so use limit=X where X is the number of index/host pairs to watch.
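For the subsearch-with-lookup part, a minimal sketch, assuming a hypothetical lookup file host_watchlist.csv with a host column:

index=linuxos [ | inputlookup host_watchlist.csv | fields host ]
| eval group=index.":".host
``` limit=0 removes timechart's default cap of 10 series ```
| timechart span=1h limit=0 sum(eval(len(_raw))) as len by group

The subsearch expands to (host=server1 OR host=server2 OR ...), so the base search only touches the watched hosts.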
OK, so it looks like the main issue is probably values(*) as *, which is taking all values for all fields for each guid. Since it seems you have JSON data containing arrays, there would appear to be more data than just guid/resourceId/sourcenumber.

As for cardinality: with stats values(*) as *, versus @dtburrows3's version, which is ONLY returning 2 collected fields, every field in your data is being collected. If there is a 1:1 ratio of events to guid, then cardinality is high and you will effectively be returning EVERY single piece of the 10M events to the search head before it can then do the stats count by sourcenumber. If there are 20 events per guid, then you will get a reduced event count sent to the SH, i.e. a lower cardinality, but with potentially 20 values per multivalue field.

So, with this statement, you are returning 3 discrete bits of info:

| stats max(eval(if(disposition=="TERMINATED", 1, 0))) as guid_terminated, values(sourcenumber) as sourcenumber by guid

- guid
- guid_terminated = 0 or 1, depending on whether that guid was terminated
- sourcenumber - the values of sourcenumber

Indexed extractions:
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Data/Aboutindexedfieldextraction

Tstats:
https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/tstats

A good document on how tstats/TERM/PREFIX can massively improve searches, though for JSON data it will not generally help unless indexed extractions are being made:
https://conf.splunk.com/files/2020/slides/PLA1089C.pdf
Hi Guys (and Gals),

Hopefully a quick question; it's late, so my brain isn't firing quickly/properly. I need to run a query to get the ingestion over time over two variables: host and index. In this specific case, I need to determine the data ingestion from a specific set of hosts, and whether the inbound data has been increasing more than normally expected. So the query would look something like:

index=linuxos host IN (server1, server2, server3...) [or possibly you may have a lookup of the set of hosts]
| eval sum(the data per host over hour {or whatever regular chunk of time you want} for a 7 day period)
| timechart xyz ==> chart over a line graph

Also, if there is a relevant dashboard/console in the monitoring console I am not thinking of, please direct me to the relevant menu or docs. Appreciate any assistance.
What are "indexed extractions"?  Doc link? Never heard of tstats. We have 5 streams of data coming in, others are sip* so need that enum*, yes. Of the 2M there will be 2M guids.  2M dataset ... See more...
What are "indexed extractions"?  Doc link? Never heard of tstats. We have 5 streams of data coming in, others are sip* so need that enum*, yes. Of the 2M there will be 2M guids.  2M dataset 1, 2M data set 2, and then similar numbers on the sip side (so another 2M + 2M + 2M).  So 10M total for all streams (more or less) sourcenumbers per guid definitely isn't 1:1, so I'd have to count up, but guessing pareto principle applies, so like 20% makes 80% of the calls. just the one indexer. What does 'cardinality is high for the guid' mean? How does "values(*) as * " help? Thank you!  learning so much!  
You could do a simple

| eval resource=if($t_resource|s$="IN(*)", "All", resource)

which would make all resources be "All" if the only selected dropdown value is "All", so the split by resource only creates a single split. Or you could add a change element to the input that sets the split-by clause, e.g. something like

<change>
  <eval token="split_by_resource">if($t_resource|s$="IN(*)", "", "resource")</eval>
</change>

and then change your stats (and eventstats) command to

| stats count(status_code) as StatusCodeCount by _time, status_code, $split_by_resource$

You would also need a default for that token, so you need an init block, i.e.

<init>
  <set token="split_by_resource"></set>
</init>

and then you would also need to set a 'resource' field to "All" for the final table display if you want "All" to appear. The first option requires the least fiddling around.
Join the event Get Resiliency in the Cloud on January 18th, 2024 (8:30AM PST). You will hear from industry experts from Pacific Dental Services, IDC, The Futurum Group CEO Daniel Newman, and Splunk leaders on how to build resilience for your expansion to the cloud. You will learn about the drivers that lead enterprises to build data-centric security and observability use cases on Splunk Cloud Platform, delivered as a service, and its benefits.

Additionally, you will learn about:

- How digital transformation is influencing businesses to expand to the cloud
- The cloud transformation journey of Pacific Dental Services with Splunk
- New advancements in Splunk Cloud Platform that accelerate the journey to the cloud
- Achieving faster value realization with Splunk services

Register today for the event Get Resiliency in the Cloud happening on January 18th, 2024 (8:30AM PST).
Are you doing indexed extractions on the JSON? If so, you may be able to use tstats to avoid looking at the raw data. It could be possible to use tstats (with prestats) to get the TERMINATED data set and then the full guid set using tstats append=t (see the sketch after this list), but cardinality may still be an issue - see below.

- Do all events have resourceId="enum" in them? If so, adding resourceId="enum*" is unnecessary.
- Of the 2M events, how many guids would you typically expect, and how many sourcenumbers would you expect to see per guid?
- How many indexers do you have in your deployment?

If the cardinality is high for guid, then you are effectively returning much of the data to the search head for your 2M events. You can be more specific than values(*) as * because you don't need resourceId.
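To make the tstats idea concrete, a rough sketch only, and only valid if guid and disposition exist as indexed fields; the index name my_index is a placeholder:

``` first: guids from TERMINATED events; then append the full guid set ```
| tstats prestats=t count where index=my_index disposition=TERMINATED by guid
| tstats prestats=t append=t count where index=my_index by guid
| stats count by guid

Because both calls use prestats=t, the indexers return reduced pre-aggregated data and the final stats merges it on the search head.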
Thank you! That was it.
To give further examples, a distributable streaming command that can run on an indexer can also run on the search head, so take this example:

index=_audit
``` This eval runs on the indexer ```
| eval isAdmin=if(user="admin", 1, 0)
``` This lookup runs on the indexer ```
| lookup actions.csv action OUTPUT action_name
``` This stats runs on both indexer and search head, i.e. the indexer will generate stats and then pass its set of stats to the search head, along with all other stats from other indexers, and then the final counters are merged on the search head ```
| stats count by user action_name isAdmin
``` This lookup runs on the search head, as the data now exists on the SH. Once the data is on the SH, it will not go back to the indexer. ```
| lookup users.csv user OUTPUT user_name
``` So now this eval runs on the search head ```
| eval do_alert=if(isAdmin, 1, 0)

As you can see, it contains some eval, lookup and stats commands. This search will be sent from the SH to the "search peers", which are the indexers it can use to search against. Each indexer will run this same search on the set of data it owns. The key point here is that once it hits the stats command, that is the trigger for the indexers to return their dataset to the search head.

If you look at the job properties of any search that does a stats command, you will see in the phase0 detail something like the following for a simple "index=_audit | stats count by user":

litsearch index=_audit | addinfo type=count label=prereport_events track_fieldmeta_events=true | fields keepcolorder=t "prestats_reserved_*" "psrsvd_*" "user" | prestats count by user

This is showing that the indexer will return some "prestats", which is its own reduced data set that it will send to the search head.

In the above example, the first lookup will run first on the indexer, then the second on the SH. So when it talks about 'invoking' the command, it's really about where the data happens to be in the execution of the entire SPL.

As you can see, as soon as you use a dataset processing command or a transforming command, the data is shifted from the indexers to the search head, so you immediately lose parallelism; it is therefore best to put those types of commands as far down the SPL pipeline as possible.

If you look at the command types table, you can see some commands can work differently depending on how they're called, e.g. fillnull is a dataset processing command with no parameters, but distributable streaming when used with a field name, so be aware of these subtle distinctions when considering search performance.
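To illustrate that fillnull distinction with a minimal sketch (the field name action is just a placeholder):

``` no field list: dataset processing - all events must reach the search head first ```
index=_audit | fillnull value=0 | stats count by action

``` explicit field list: distributable streaming - can run on the indexers ```
index=_audit | fillnull value=0 action | stats count by action

Without a field list, fillnull has to see every event to know the full set of fields to fill, which is why it cannot stay distributed.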
Do you have the same issue when referencing the lookup definition itself instead of the CSV file? Example:

<base_search>
| lookup <lookup definition pointing to networks.csv> ip as src_ip OUTPUT category

I think that the advanced settings may only be applied when referencing the definition.
Hello,

I have a search that's coming back with 'src', which is the source IP of a client, and I have a lookup file called "networks.csv" that has a column with the header 'ip', which is a list of CIDR networks. I have gone into the lookup definitions and set "CIDR(ip)" under the advanced options for that lookup file. I can see the headers being automatically extracted in that UI. However, when I run the search and try to pull the category for the 'src' respective network, it does not work.

basesearch | lookup networks.csv ip as src_ip OUTPUT category

I have validated that it's a CIDR issue by doing a "...| rex mode=sed field=src_ip " and placing a literal CIDR entry in there and having the category come out.

Thank you for your help!
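For reference, the UI setting corresponds to something like this in transforms.conf (the stanza name networks_lookup is an assumption), and, as the reply above suggests, the match_type may only take effect when the search references the definition rather than the bare CSV file name:

# transforms.conf -- hypothetical lookup definition for networks.csv
[networks_lookup]
filename = networks.csv
match_type = CIDR(ip)

The search would then reference the definition: basesearch | lookup networks_lookup ip as src_ip OUTPUT category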
Oh dang, good catch with the trailing comma!

So I just tried to limit the initially called-out events as much as possible in the base search with the additional filter of

(disposition.disposition="TERMINATED" OR "connections{}.left.facets{}.number"=*)

and then limited the stats aggregations to just the fields that are required for your downstream analysis and display. Glad it's running faster!
Hi,

Is it possible to create a tab on a dashboard while also creating a redirection to a new dashboard when the tab is clicked, without having to clone the dashboard?

Thanks in advance!
Ahh I see.

Note: this response assumes usage of classic Splunk dashboards (XML).

So for panel_1 (used to gather the top source IP), you can add a <done> tag and set a token based on the value of Source_Network_Address. Example of Search_1:

index="windows_logs" LogName="Security" Account_Domain=EXCH OR Account_Domain="-" EventCode="4625" OR EventCode="4740" user="john@doe.com" OR user="johndoe"
| where NOT cidrmatch("192.168.0.0/16", Source_Network_Address)
| stats count as count, values(Account_Domain) as Account_Domain, values(EventCode) as EventCode, values(user) as user by Source_Network_Address
| sort 1 -count

This token can then be referenced in panel_2:

index="iis_logs" sourcetype="iis" s_port="443" sc_status=401 cs_method!="HEAD" c_ip=$ip$

In the XML this would look something like this:

...
<search>
  <query>
    index="windows_logs" LogName="Security" Account_Domain=EXCH OR Account_Domain="-" EventCode="4625" OR EventCode="4740" user="john@doe.com" OR user="johndoe"
    | where NOT cidrmatch("192.168.0.0/16", Source_Network_Address)
    | stats count as count, values(Account_Domain) as Account_Domain, values(EventCode) as EventCode, values(user) as user by Source_Network_Address
    | sort 1 -count
  </query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <set token="ip">$result.Source_Network_Address$</set>
  </done>
</search>
...

Notice the <done><set token="ip">$result.Source_Network_Address$</set></done> nested in the <search> tags. This takes the final result's value from the field Source_Network_Address and assigns it to a token named $ip$. That token can then be referenced by panel_2.