All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

With DS you can make two boxes - but you can use a single search and reuse it as a chained search for each box. My other solution was posted in this thread last Friday - see post containing... As to whether there is an alternate solution, the following is probably a better option, as it does not have the limitations of list() and does not require mvfind. It may be more efficient. Those links to the Aggregate functions are SPL2, but you can't use percentiles because rank works somewhat differently from percentiles.
Filled out the form for the Splunk Slack channel signup yesterday. Awaiting an update from them. Thanks!
| streamstats count as Rank | streamstats window=2 range(Score) as range | eval Rank=if(Rank=1 OR range != 0, Rank, null()) | filldown Rank
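For context, a minimal end-to-end sketch putting that snippet together with the sample data from the question further down (it assumes events arrive sorted by Score descending, hence the explicit sort):

| makeresults format=csv data="Student,Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| sort 0 - Score
| streamstats count as Rank
| streamstats window=2 range(Score) as range
| eval Rank=if(Rank=1 OR range!=0, Rank, null())
| filldown Rank
| fields Student Score Rank

This yields ranks 1, 2, 3, 4, 4, 6, ... because Rank is only kept where the score changes from the previous row; tied rows are nulled and then inherit the prior rank via filldown.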
Hi, are you trying to count those events, sum them, or something else? If count, then change c -> count (or dc for distinct count) in your stats. r. Ismo
Off the top of my head. Untested, might need some tweaking. | stats values(App) as App count by Score | streamstats sum(count) as rank | mvexpand App
I am trying to create a pie chart of success vs. failure with the stats command using the following: search | stats c(assigned_user) AS Success c(authorization_failure_user) AS Failed
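Combining that with the c -> count suggestion above, a hedged sketch of one way to shape this for a pie chart (the base search is elided here, and transpose is just one assumed way to turn the two columns into slices):

<your base search>
| stats count(assigned_user) AS Success count(authorization_failure_user) AS Failed
| transpose
| rename column AS Result, "row 1" AS count

Note that count(assigned_user) counts events where that field is present, so this only distinguishes success from failure if the two fields are mutually exclusive per event.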
If you mean modular inputs using Java, it's up to you to configure a proper JRE for a given TA/input. Specific TAs can have their own requirements, but it's not up to the HF as such.
Generally speaking, a properly designed cluster should continue to function properly. The primaries were reassigned when you lost your indexer, some replication might have been triggered to make your cluster complete, and life goes on. When you get your power back, the indexer should rejoin your cluster and you should have surplus buckets (which you might be able to get rid of). The question is whether your cluster was properly configured. Since you're talking about losing just one indexer in "another datacenter" and it being "on cluster with another indexer", that might as well mean that you had just two indexers and rf=sf=2 (if you had rf=sf=1, you're in trouble already). So your cluster can be searchable but not complete, and when the currently offline indexer rejoins the cluster, the CM will trigger replication of all buckets ingested during the downtime.
Does this post help? Solved: How to format X axis label in the timechart? - Splunk Community
What do you mean by support? For example, do you want to write Scripted Inputs in Corretto?
Thank you so much. It worked! The only problem I am facing now is that for some reason when I use that query (earliest=-d@d latest=@d), the "user" field shows up as a dollar sign ($) instead of the name of the user. Do you know why? *I was asked to group it by time, but I would like to know how to show the first or last time of the failed login for my own knowledge. Thanks again!
Assuming things are configured "correct" for what you expect a Splunk Cluster to be, I would assume you're fine for right now. For example, the Cluster Master probably has eyes on that missing indexer and is waiting for it to come up, and has been communicating the necessary info to the rest of your Splunk deployment, like "Hey UF, don't send your data to this indexer" and "Hey SH, this indexer is no longer a search peer."

Can you describe how you have your cluster configured - replication/search factors? How many indexers do you have (e.g. is this one of two)? That could give us a hint on what you should expect when things do come back up. For example, if that indexer comes up and says hello to your Cluster Master, then that CM is going to start doing any replication/balancing of buckets. That's going to suck a chunk of your network pipe that you might not want when this data center comes up, so if that indexer is lower priority you might leave Splunk shut down until your higher-priority stuff in that data center is recovered.
When you say, "upload the pdf of search results based on the alert to the ticket in an automated way", are you wanting to take the PDF that the code creates (the file /etc/apps/Splunk_Ivanti/local/ticket.pdf) and post it to some other endpoint? If the answer is yes, then you'd need to write the python to do that send. If you want an example of how this is done, take a look at %SPLUNK_HOME%/etc/apps/alert_webhook/bin/webhook.py. That's the code behind the Webhook Alert Action and it does a fairly simple send of data to a URL.
I've seen a few of the spath topics around, but wasn't able to understand enough to make it work for my data. I have the following json: { "Record": { "contentId": "429636", "levelId": "57", "levelGuid": "3c5b481a-6698-49f5-8111-e43bb7604486", "moduleId": "83", "parentId": "0", "Field": [ { "id": "22811", "guid": "6c6bbe96-deab-46ab-b83b-461364a204e0", "type": "1", "_value": "Need This with 22811 as the field name" }, { "id": "22810", "guid": "08f66941-8f2f-42ce-87ae-7bec95bb5d3b", "type": "1", "p": "need this with 22810 as the field name" }, { "id": "478", "guid": "4e17baea-f624-4d1a-9c8c-83dd18448689", "type": "1", "p": [ "Needs to have 478 as field name", "Needs to have 478 as field name" ] }, { "id": "22859", "guid": "f45d3578-100e-44aa-b3d3-1526aa080742", "type": "3", "xmlConvertedValue": "2023-06-16T00:00:00Z", "_value": "needs 22859 as field name" }, { "id": "482", "guid": "a7ae0730-508b-4545-8cdc-fb68fc2e985a", "type": "3", "xmlConvertedValue": "2023-08-22T00:00:00Z", "_value": "needs 482 as field name" }, { "id": "22791", "guid": "89fb3582-c325-4bc9-812e-0d25e319bc52", "type": "4", "ListValues": { "ListValue": { "id": "74192", "displayName": "Exception Closed", "_value": "needs 22791 as field name" } } }, { "id": "22818", "guid": "e2388e72-cace-42e6-9364-4f936df1b7f4", "type": "4", "ListValues": { "ListValue": { "id": "74414", "displayName": "Yes", "_value": "needs 22818 as field name" } } }, { "id": "22981", "guid": "8f8df6e3-8fb8-478b-8aa0-0be02bec24e3", "type": "4", "ListValues": { "ListValue": { "id": "74550", "displayName": "Critical", "_value": "needs 22981 as field name" } } }, { "id": "22876", "guid": "4cc725ad-d78d-4fc0-a3b2-c2805da8f29a", "type": "9", "Reference": { "id": "256681", "_value": "needs 22876 as field name" } }, { "id": "23445", "guid": "f4f262f7-290a-4ffc-af2b-dcccde673dba", "type": "9", "Reference": { "id": "255761", "_value": "needs 23445 as field name" } }, { "id": "1675", "guid": "ea8f9a24-3d35-49f9-b74e-e3b9e48f8b3b", "type": "2" }, { "id": "22812", "guid": "e563eb9e-6390-406a-ac79-386e1c3006a3", "type": "2", "_value": "needs 22812 as field name" }, { "id": "22863", "guid": "a9fe7505-5877-4bdf-aa28-9f6c86af90ae", "type": "8", "Users": { "User": { "id": "5117", "firstName": "data", "middleName": "data", "lastName": "data", "_value": "needs 22863 as field name" } } }, { "id": "22784", "guid": "4466fd31-3ab3-4117-8aa0-40f765d20c10", "type": "3", "xmlConvertedValue": "2023-07-18T00:00:00Z", "_value": "7/18/2023" }, { "id": "22786", "guid": "d1c7af3e-a350-4e59-9353-132a04a73641", "type": "1" }, { "id": "2808", "guid": "4392ae76-9ee1-45bf-ac31-9e323a518622", "type": "1", "p": "needs 2808 as field name" }, { "id": "22802", "guid": "ad7d4268-e386-441d-90b1-2da2fba0d002", "type": "1", "table": { "style": "width: 954px", "border": "1", "cellspacing": "0", "cellpadding": "0", "tbody": { "tr": { "style": "height: 73.05pt", "td": { "style": "width: 715.5pt", "valign": "top", "p": "needs 22802 as field name" } } } } }, { "id": "8031", "guid": "fbcfdf2c-2990-41d1-9139-8a1d255688b0", "type": "1", "table": { "style": "width: 954px", "border": "1", "cellspacing": "0", "cellpadding": "0", "tbody": { "tr": { "style": "height: 71.1pt", "td": { "style": "width: 715.5pt", "valign": "top", "p": [ "needs 8031 as field name", "needs 8031 as field name" ] } } } } }, { "id": "22820", "guid": "0f98830d-48b3-497c-b965-55be276037f2", "type": "1", "p": "needs 22820 as field name" }, { "id": "22807", "guid": "8aa0d0fa-632d-4dfa-9867-b0cc407fa96b", "type": "3" }, { "id": "22855", 
"guid": "e55cbc59-ad8d-4831-8e6f-d350046026e9", "type": "1" }, { "id": "8032", "guid": "f916365b-e6eb-4ab9-a4ff-c7812a404854", "type": "1", "p": "needs 8032 as field name" }, { "id": "22792", "guid": "8e70c28a-2eec-4e38-b78b-5495c2854b3e", "type": "1", "_value": "needs 22792 as field name " }, { "id": 22793, "guid": "ffeaa385-643a-4f04-8a00-c28ddd026b7f", "type": "4", "ListValues": "" }, { "id": "22795", "guid": "c46eac60-d86e-4af4-9292-d194a601f8b6", "type": "1" }, { "id": "22797", "guid": "8cd6e398-e565-4034-8db8-2e2ecb2f0b31", "type": "4", "ListValues": { "ListValue": { "id": "73060", "displayName": "data", "_value": "needs 22797 as field name" } } }, { "id": "22799", "guid": "20823b18-cb9b-47a3-854d-58f874164b27", "type": "4", "ListValues": { "ListValue": { "id": "74410", "displayName": "Other", "_value": "needs 22799 as field name" } } }, { "id": "22798", "guid": "5b32be4c-bc40-45b3-add4-1b22162fd882", "type": "4", "ListValues": { "ListValue": { "id": "74405", "displayName": "N/A", "_value": "needs 22798 as field name" } } }, { "id": "22800", "guid": "6b020db0-780f-4eaf-8381-c122425b71ed", "type": "1", "p": "needs 22800 as field name" }, { "id": "22801", "guid": "06334da8-5392-4a9d-a3eb-d4075ee30787", "type": "1", "p": "needs 22801 as field name" }, { "id": "22794", "guid": "25da1de8-8e81-4281-8ef3-d82d1dc005ad", "type": "4", "ListValues": { "ListValue": { "id": "74398", "displayName": "Yes", "_value": "needs 22794 as field name" } } }, { "id": "22813", "guid": "89760b4f-49be-40ad-8429-89c247e3e95a", "type": "1", "p": "needs 22813 as field name" }, { "id": "22803", "guid": "03b6c826-e15c-4356-89e8-b0bd509aaeb5", "type": "3", "xmlConvertedValue": "2023-06-15T00:00:00Z", "_value": "needs 22803 as field name" }, { "id": "22804", "guid": "d7683f9c-97bb-461a-97df-36ec6596b4fc", "type": "1", "p": "needs 22804 as field name" }, { "id": "22805", "guid": "33386a3a-c331-4d8c-9825-166c0a5235c2", "type": "3", "xmlConvertedValue": "2023-06-15T00:00:00Z", "_value": "needs 22805 as field name" }, { "id": "22806", "guid": "cd486293-9857-475c-9da3-a06f836edb59", "type": "1", "p": "needs 22806 as field name" } ] } } and have been able to extract id, (some) p data and _value data from Record.Field{} using: | spath path=Record.Field{} output=Field | mvexpand Field | spath input=Field | rename id AS Field_id, value AS Field_value, p AS Field_p , but have been unable get any other data out. The p values that I can get out are single value only. In particular, I need to get the multi-value fields for ListValues{}.ListValue out. In addition, I need to map the values in _value and p to the top ID field in that array. I think the code sample provided above explains what's needed. I know I can do a |eval {id}=value but it's complicated when there are so many more fields other than value, or complicated when the fields are nested. Can someone help with this?
Hello, how do I give the same rank for the same score? Students d and e have the same score of 73, thus they both get Rank 4, but student f gets Rank 6. Rank 5 is skipped because students d and e have the same score. Thank you for your help.

Expected result:
Student Score Rank
a 100 1
b 95 2
c 84 3
d 73 4
e 73 4
f 54 6
g 43 7
h 37 8
i 22 9
j 12 10

This is what I have figured out so far, but it doesn't take the same Score into consideration:

| makeresults format=csv data="Student, Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| streamstats count
Hello! This is probably a simple question but I've been kind of struggling with it. I'm building out my first playbook, which triggers off of new artifacts. The artifacts include fields for: type, value, tag. What I'm trying to do is have those fields from the artifact passed directly into a custom code block in my playbook. How do I go about accessing those fields? I've tried using phantom.collect2(container=container, datapath=["artifact:FIELD_NAME*"]) in the code block but it doesn't return anything. I thought maybe I needed to set up custom fields to define type, value and tag in the custom fields settings, but that didn't change anything either. Any help would be appreciated, thank you!
| makeresults | eval field_id="/key1/value1/key2/value2/key3/value3/key4/value4" | rex field=field_id max_match=0 "/(?<key>[^/]*)/(?<value>[^/]*)" | foreach 0 1 2 3 4 5 6 7 8 9 10 [ eval _key=mvindex(key, <<FIELD>>), {_key}=mvindex(value, <<FIELD>>) ] Hi @bowesmana, instead of k and v I used key and value, and it works fine as well. Could you please explain how the last eval works (why do you use "eval _k")?
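To illustrate what that last eval does (an explanation sketch, not from the original thread): _key is just a scratch field, and the leading underscore keeps it out of the default field list; {_key} tells eval to use the current value of _key as the name of the field being created. A minimal standalone example:

| makeresults
| eval _key="key1", _val="value1"
| eval {_key}=_val

After the last eval, the result contains a field named key1 holding value1. Inside the foreach loop this happens once per index, so each key captured by the rex becomes its own field holding the matching value.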
I am looking to extract some information from a Values field that has two values within it. How can I specify which one of the values I need in a search, as the two values are meant to be "read" and "written"? This is my current search right now, and I think it is including both values together. index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$ | stats min(value) as min max(value) as max avg(value) as avg | eval min=round(min, 2) | eval max=round(max, 2) | eval avg=round(avg, 2)
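It depends on how the collectd events are structured in the index, but assuming a hypothetical field such as type_instance (or dsname) carries the "read"/"written" label, one hedged way to pick just one of the two would be:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$ type_instance="read"
| stats min(value) as min max(value) as max avg(value) as avg
| foreach min max avg [ eval <<FIELD>>=round(<<FIELD>>, 2) ]

Alternatively, dropping the filter and adding by type_instance to the stats keeps read and written as separate rows. If the distinguishing field has a different name in these events, check the raw events for whatever carries that label.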
All, leveraging the following article (https://community.splunk.com/t5/Other-Usage/How-to-export-reports-using-the-REST-API/m-p/640406/highlight/false#M475) I was able to successfully manipulate the script to:
1. Run using an API token (as opposed to credentials).
2. Get it to run a search I am interested in returning data from.
I am, however, running into an error with my search (shown below).

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="ERROR">Unparsable URI-encoded request data</msg>
</messages>
</response>

The script itself now looks like this (I have removed the token and obscured the Splunk endpoint for obvious reasons):

#!/bin/bash
# A simple bash script example of how to get notable events details from REST API

# EXECUTE search and retrieve SID
SID=$(curl -H "Authorization: Bearer <token ID here>" -k https://host.domain.com:8089/services/search/jobs -d search=" search index=index sourcetype="sourcetype" source="source" [ search index="index" sourcetype="sourcetype" source="source" deleted_at="null" | rename uuid AS host_uuid | stats count by host_uuid | fields host_uuid ] | rename data.id AS Data_ID host_uuid AS Host_ID port AS Network_Port | mvexpand data.xrefs{}.type | strcat Host_ID : Data_ID : Network_Port Custom_ID_1 | strcat Host_ID : Data_ID Custom_ID_2 | stats latest(*) as * by Custom_ID_1 | search state!="fixed" | search category!="informational" | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")" <removed some of the search for brevity> \
| grep "sid" | awk -F\> '{print $2}' | awk -F\< '{print $1}')
echo "SID=${SID}"

Omitted the remaining portion of the script for brevity....

It is at the point shown in parentheses (| eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")) that I am getting the error in question. The search returns fine up to the point where I am converting time. I tried escaping using "\", but that did not seem to help. I am sure I am missing something simple and am looking for some help.
That can be accomplished with a single list by good list management.  Anything on the baddomains list should not be in the gooddomains list and vice versa. That said, what you have should work, but ... See more...
That can be accomplished with a single list by good list management.  Anything on the baddomains list should not be in the gooddomains list and vice versa. That said, what you have should work, but not as efficiently as a single list.  A domain on both lists will not appear in the results, which is the intent. For debugging purposes, run each subsearch on its own with | format appended to verify it returns the expected results.  You may need to leave the format command in the final query, combined.