All Posts


Ah, ok - hence my confusion. I had to test whether that uses the definition or the CSV, and it appears to use the definition. I've always used the abstraction to hide the underlying name of the CSV, as that can sometimes change or be substituted.
Hi, you mean it excluded that traffic? Because I expect it to exclude that traffic from my results. Version 9.0.
Given the sample event below representing a user sign-in, I am trying to create a table that shows each combination of a 'policy' and 'result' and the number of occurrences for that combination. There are only three possible result values for any given policy (success, failure, or notApplied). In essence, I need this table to find out which policies are not being used, by looking at the number of times each was not applied, i.e.:

Input:

Desired Output:

displayName  result      count
Policy1      success     1
Policy2      failure     1
Policy3      notApplied  1

However, the query I currently have is returning a sum that isn't possible, because the sum exceeds the number of sign-in events. What is wrong with my query?

<my_search>
| stats count by Policies{}.displayName, ConditionalAccessPolicies{}.result
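One likely cause of the inflated count: when both fields come out of JSON arrays as multivalue fields, `stats count by` pairs every value of one with every value of the other. A sketch of one way to keep each policy paired with its own result, assuming the two values are parallel entries of the same array (the post mixes `Policies{}` and `ConditionalAccessPolicies{}` prefixes, so adjust the paths to the actual event structure):

```spl
<my_search>
| eval pairs=mvzip('Policies{}.displayName', 'Policies{}.result')
| mvexpand pairs
| eval displayName=mvindex(split(pairs, ","), 0), result=mvindex(split(pairs, ","), 1)
| stats count by displayName, result
```

`mvzip` glues each displayName to its positionally matching result before `mvexpand`, so the cross-product never happens.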
Looking to create a dashboard in Dashboard Studio that drills down on an Event Messages column in a table. According to this blog post, a "Link to search" option was added a few months ago, but I don't see the option in my editor in Splunk 9.1.2. I've also tried adding the JSON directly:

"eventHandlers": [
  {
    "type": "drilldown.linkToSearch",
    "options": {
      "type": "auto",
      "newTab": true
    }
  }
]

and that didn't work either. Any help is appreciated.
Hello all, is there a way to automate a playbook so that it runs only on events with a specific tag? I saw an option in the playbook settings to choose a tag, but it still runs on every event. Thank you in advance. @phanTom @SOARt_of_Lost
Hi @jholman2000, I don't think there's a way to set two token values from one dropdown like you can with Simple XML dashboards, but here's a workaround. You can create a simple search that uses the environment token and produces the appropriate index name, which can then be used in your main search:

{
  "type": "ds.search",
  "options": {
    "query": "| makeresults\n| eval index=if(\"$api_env$\"=\"prod\",\"wf_wb_cbs\",\"wf_wb_cbs_np\")\n| table index",
    "enableSmartSources": true
  },
  "name": "IndexName"
}

The search will be pretty quick and will only run on the search head. It just looks at the environment token and sets the index to prod or nonprod as appropriate. The key part is "enableSmartSources", which you get when checking the "Access search results or metadata" checkbox. Now you can refer to the index name:

$IndexName:result.index$

So your final search will be:

index=$IndexName:result.index$ CA03430 sourcetype="cf:logmessage" cf_app_name="CA03430-cmsviewapi-$api_env$"
| spath "msg.customerIdType"
| eval eventHour = strftime(_time,"%H")
| where eventHour >= "07" and eventHour < "20"
| stats count by "msg.customerIdType"

Hope that helps you out. Cheers, Daniel
While it's possible to change the color of a single value icon based on a result, is it possible to display an entirely different icon for different results or ranges? I'm not readily seeing an option in Dashboard Studio. https://docs.splunk.com/Documentation/Splunk/9.0.2/DashStudio/chartsSV#Single_value_icon
You probably already figured it out by now, but you can use the ACS CLI or the Terraform splunk/scp provider to manage indexes in Splunk Cloud.

Splunk ACS REST API reference: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Config/ACSREF#Manage_indexes

Example of creating a new index in Splunk Cloud:

curl -X POST 'https://admin.splunk.com/{stack}/adminconfig/v2/indexes' \
  --header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "name": "testindex" }'

ACS CLI: https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSCLI
Splunk Cloud Platform Terraform provider: https://github.com/splunk/terraform-provider-scp
Have you checked the _audit logs to confirm user and roles values?
@richgalloway I think the HF/UF doesn't have any role here. The main use case: we have a server and need to write data from that server to an AWS S3 bucket. Do we have any TA for that?
Unfortunately it doesn't work. I configured the same rules in a working instance, and there it works.
Please tell us more about the environment.  Can the server relay data to Splunk via an intermediate forwarder?  Why is an HF installed instead of a Universal Forwarder (UF)?  UFs have a much smaller footprint and attack surface.
Hi guys, here's what I'm trying to do. I have a lookup CSV with 3 columns. I have data with string values that might contain a value in my lookup. I have the basic setup working, but I want to populate additional fields in my data set. Here is a very stripped-down version of what I am doing.

First, I have a basic lookup CSV with 3 columns:

active  flagtype  colorkey
yes     sticker   blue
yes     tape      red
no      tape      pink

Then my search, which creates a few test records, looks like this:

| makeresults count=4
| streamstats count
| eval number = case(count=1, 25, count=2, 39, count=3, 31, count=4, null())
| eval string1 = case(count=1, "I like blue berries", count=3, "The sea is blue", count=2, "black is all colors", count=4, "Theredsunisredhot")
| table flagtype, flag, string1, ck
| search [ inputlookup templookup.csv | eval string1 = "string1=" + "\"" + "*" + colorkey + "*" + "\"" | return 500 $string1 ]
| eval flag = "KEYWORD FLAG"
| table flagtype, flag, string1, colorkey

My 4-column output results are:

flagtype  flag          string1              colorkey
empty     KEYWORD FLAG  I like blue berries  empty
empty     KEYWORD FLAG  The sea is blue      empty
empty     KEYWORD FLAG  Theredsunisredhot    empty

How do I populate the two empty columns using the other columns in the lookup table? Thanks in advance for any help I can get.
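One option for this kind of "contains" enrichment is a wildcard lookup. This is a sketch, not a drop-in answer: it assumes the CSV's colorkey values are rewritten with surrounding wildcards (e.g. *blue* instead of blue) and that a lookup definition named templookup is created over the CSV with a WILDCARD match type in transforms.conf:

```
# transforms.conf (lookup definition name "templookup" is assumed)
[templookup]
filename = templookup.csv
match_type = WILDCARD(colorkey)
```

Then the search can enrich each event directly, with no subsearch needed for the flagging:

```spl
| lookup templookup colorkey as string1 OUTPUT active, flagtype, colorkey
| eval flag=if(isnotnull(colorkey), "KEYWORD FLAG", null())
```

With WILDCARD matching, the pattern *blue* in the lookup matches any string1 containing "blue", and the matching row's active, flagtype, and colorkey values come back as output fields on the event.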
Hello, I'm writing some field extractions for a Tomcat access log. The logging format is:

"%{E M/d/y @ hh:mm:ss.S a z}t %h (%{X-Forwarded-For}i) > %A:%p "%r" %{requestBodyLength}r %D %s %B %I "%{Referer}i" "%{User-Agent}i" %u %S %{username}s %{sessionTracker}s"

The X-Forwarded-For header can appear multiple times, so multiple X-Forwarded-For IPs are being logged for a small, but important, percentage of these events. An example log is:

Thu 1/18/2024 @ 06:52:30.918 PM UTC 00.000.00.000 (00.000.000.000, 00.000.00.00, 00.000.00.00) > 00.000.00.0:0000 "PUT /uri/query/here HTTP/1.1" - 1270 200 3466 https-openssl-nio-00.000.00.0-000-exec-15 "hxxps://url.splunk.com/" "user_agent" - - - -

How can I perform a multivalue field extraction to grab 0, 1, 2, or 3 X-Forwarded-For IPs?
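One possible approach, assuming the X-Forwarded-For list is always the first parenthesized group in the event and its entries are comma-separated (the field name xff is my own):

```spl
| rex field=_raw "\((?<xff>[^)]*)\)"
| makemv delim=", " xff
```

The rex captures everything between the parentheses into xff, and makemv splits that on ", " into a multivalue field with one value per IP. If the event has no parenthesized group at all, rex simply doesn't match and xff stays unset, which covers the zero-IP case.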
Thank you very much, your solution worked perfectly.
@bowesmana Your suggested solution solved the memory issue. Thank you!
Going off what you pasted, it is coming back as invalid JSON; I would check that first. But assuming that it is just a copy/paste error and you do have a valid JSON object as _raw, then I would probably do an spath like this to retain the associations between url and duration:

index=hello
| spath input=_raw path=details.sub-trans{} output=sub_trans
| fields - _raw
| table sub_trans
| mvexpand sub_trans
| spath input=sub_trans
| fields - sub_trans

You can see here that all the fields are extracted and they maintain their relationships to their individual url/duration according to the structure of the details.sub-trans{} array. It does require an mvexpand, though, so just keep an eye out for memory limits.

Retaining the specific association of each url to its respective duration by extracting both as individual multivalue fields is possible, but can be problematic: if any of them has a null entry for whatever reason, then all associations are thrown off from that point on. That's why, in these sorts of situations, I would much rather extract the entire nested JSON object out of the array, mvexpand that, then spath the internal JSON.

Also want to note that doing an mvexpand against two multivalue fields, like in your original search, will completely lose all association between which url should have which duration. You will actually end up with N^2 results when, by the structure of the JSON, I believe there should only be N results.
Yes it does. This actually worked, appreciate it a ton.
@richgalloway Thank you so much for your quick response. It's not about exporting Splunk search results; it's about writing logs into an S3 bucket using a Splunk TA. For example, we have some application logs on a server, and we would prefer to use a Splunk TA to write those logs into S3 buckets, then ingest the data from S3/SQS. This server has the HF installed on it. We cannot perform direct ingestion from that server due to security reasons. Any thoughts or recommendations?
The duration field populates in my sandbox, but values are duplicated. That's likely because the two mvexpand calls break the association between url and duration. Try this query, instead:

index=hello
| spath output=url details.sub-trans{}.req.url
| spath output=duration details.sub-trans{}.duration
``` Combine url and duration ```
| eval pairs=mvzip(url,duration)
``` Put each pair into a separate event ```
| mvexpand pairs
``` Extract the url and duration fields ```
| eval pairs=split(pairs,","), url=mvindex(pairs,0), duration=mvindex(pairs,1)
| table url,duration