All Posts

Hi @yuvaraj_m91

The Splunk command "spath" enables you to extract information from structured data formats such as XML and JSON. The command reference is here: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath

Please let us know if you are able to use the spath command. Alternatively, you could use the "rex" command to extract the field values and then run your stats, or a "where like(...)" condition should also work. But spath is the simplest option, I think. Please let us know whether spath works for you. Thanks.
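For example, here is a minimal sketch of that approach, assuming the raw JSON sits in a field named "response" and that the masking patterns below match your IDs (both are assumptions to verify against your data):

<your_search>
| spath input=response path=errors{} output=error_msg
``` mask the variable parts so identical error types group together ```
| eval error_msg=replace(error_msg, "sub '[^']+'", "sub '***'")
| eval error_msg=replace(error_msg, "product id \S+", "product id ***")
| eval error_msg=replace(error_msg, "test location id \S+", "test location id ***")
| stats count by error_msg

The replace() calls collapse the variable IDs to *** so that the stats count tallies each distinct message type once.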
I have all the below messages in the "response" field:

{"errors": ["Message: Payment failed. Reason: Hi, we attempted to process the transaction but it seems there was an error. Please check your information and try again. If the problem persists please contact your bank."]}
{"errors": ["Unable to retrieve User Profile with sub '2415d' as it does not exist"]}
{"errors": ["Unable to retrieve User Profile with sub 'dfadf' as it does not exist"]}
{"errors": ["Unable to retrieve User Profile with sub 'fdsgad' as it does not exist"]}
{"errors": ["Unallocated LRW seat not found with product id fdafdsaddsfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dfafdfa"]}
{"errors": ["Unallocated LRW seat not found with product id sfgdfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dsfadfsa"]}

I want to display the result with a count for each distinct message, like this:

Message: Payment failed. Reason: Hi, we attempted to process the transaction but it seems there was an error. Please check your information and try again. If the problem persists please contact your bank.
Unable to retrieve User Profile with sub '***' as it does not exist
Unallocated LRW seat not found with product id *** and start datetime utc 2024-01-06T05:30:00+00:00 and test location id ***
I don't know the complete path to the nested tags array, but you can do something like this to target the value contained within the Contact key in the multivalue JSON fields:

<base_search>
| eval tags_json=spath(_raw, "tags{}"),
    contact=case(
        mvcount(tags_json)==1, if(spath(tags_json, "Key")=="Contact", spath(tags_json, "Value"), null()),
        mvcount(tags_json)>1, mvmap(tags_json, if(spath(tags_json, "Key")=="Contact", spath(tags_json, "Value"), null()))
    )
| fields + _time, _raw, tags_json, contact

Below is a screenshot of an example on my local instance. First we extract all JSON objects from the tags array as a multivalue field named "tags_json". From there you can use the mvmap() function to loop through the multivalue field and check each entry to see whether the "Key" value of the JSON object equals "Contact". If it does, then we know this is the JSON object we want to extract the "Value" key from, so we run spath specifically on that object and store the returned value in a field named "contact".

Option 2: Another route to take (depending on the structure of your event and whether it makes sense to do it this way) is to loop through each JSON object in the tags array and stuff the key/value pairs into a temporary JSON object that we can then run a full spath against. This is a more exhaustive approach, as opposed to the targeted one in the previous example. The SPL would look something like this:

<base_search>
| eval
    ``` extract the array of JSON objects as a multivalue field ```
    tags_json=spath(_raw, "tags{}"),
    ``` initialize the temporary JSON object that will hold all the key/value pairs contained within the tags array ```
    final_tag_json=json_object()
``` use the mode=multivalue foreach loop to iterate over each entry in the multivalue field ```
| foreach mode=multivalue tags_json
    [ | eval
        ``` json_set() adds each Key/Value as a new key/value pair in the temporary JSON object "final_tag_json" ```
        final_tag_json=json_set(final_tag_json, spath('<<ITEM>>', "Key"), spath('<<ITEM>>', "Value")) ]
| fields - tags_json
``` full spath against the final_tag_json field ```
| spath input=final_tag_json
| fields - final_tag_json
| fields + _time, _raw, Contact, Name

You can see in this screenshot that not only is the "Contact" field extracted but the "Name" value is extracted as well; this method loops through each entry in the JSON array and extracts a new key/value pair for each one. Below is a screenshot showing what the temporary final_tag_json object looks like that we ran the full spath against, for context.
Hi All,

I have a multivalue field that contains nested key/value pairs, with the key named "Key" and the value named "Value". Example snippet:

tags: [
  {
    Key: Contact
    Value: abc@gmail.com
  }
  {
    Key: Name
    Value: abc
  }
]

I want to extract only the Contact value from here, i.e. abc@gmail.com. I have been trying with multivalue functions and spath but am still stuck. Please help me.

Regards,
PNV
Hi,

Just thought to update you (and all others): as per the doc https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/Usebtooltotroubleshootconfigurations the right command is:

splunk btool inputs list

Thanks.
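On a related note, the --debug flag (e.g. splunk btool inputs list --debug) also prints which configuration file each setting comes from, which is handy when troubleshooting precedence. That is a general btool option, so please verify it against the doc above for your version.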
>>> i would like to know how to install btool on windows

When we install Splunk, btool is installed automatically as part of the installation. From your question, I understand that you are really asking "how to run btool on Windows".

>>> i was trying to open in windows as an administrator and I could get the results.

Just to make sure you are running the command prompt with admin rights, please check whether the top left of the window shows "Administrator: Command Prompt".

>>> C:\Program Files\Splunk\bin>splunk btool inputs list 'splunk' is not recognized as an internal or external command, operable program or batch file.

Please let us know whether you installed Splunk on the default path or on a custom path. Thanks.
Hi @krutika_ag

As per the Splunk docs: "If you add new data to an existing archive file, the forwarder reprocesses the entire file rather than just the new data. This can result in event duplication." Thus, to avoid duplication, Splunk monitors whole archive files and does not support monitoring a single file inside an archive.

So you cannot monitor a single file inside an archive. What I would suggest is that you ask the developers/app team who create that archive to write a new, separate archive file each time there is an update, instead of appending to the existing one.

I am still not completely sure of this suggestion, but it should be possible as per my understanding. Thanks.
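For reference, monitoring a whole archive is just a regular monitor stanza in inputs.conf. A minimal sketch; the path, index, and sourcetype below are placeholders I made up, not values from your environment:

[monitor:///opt/app/export/data.tar.gz]
# Splunk decompresses and indexes the whole archive; appending to it causes the entire file to be reprocessed
index = main
sourcetype = app_archive_logs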
Hi,

I checked the known issues and fixed issues for both SOAR Cloud and on-prem, but no luck. (One URL for your reference: https://docs.splunk.com/Documentation/SOARonprem/6.2.0/ReleaseNotes/KnownIssues)

The error string says:

Error string: ''LDAPInvalidDnError' object has no attribute 'description''

May we know how you are attempting the password reset and which attributes you are passing? Please advise. Thanks.
Got it. Thanks. My application has an API. If the add-on can send HTTP requests to my API to fetch events and then index them in Splunk, then it sounds like exactly what I need. I'll start writing the add-on, then. Thank you for being so responsive and informative.
You can ask customers to use HEC. Many big players do. I just find it a bit cringe to ask customers to poke holes in their infrastructure. Opinions vary.
So first I think it makes sense to do a stats aggregation for all the values of the prod field for each cust value:

<base_search>
| stats values(prod) as all_prod by cust

This will leave us with a multivalue field looking something like this. From here you can run evaluations against the multivalue field to check for specific conditions.

Example: The unique combinations of PROD values you mentioned in the original post can be checked in an eval like this:

<base_search>
| stats values(prod) as all_prod by cust
``` subset inclusion ```
| eval scenario_1=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200", "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300", "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300", "PROD=200 & PROD=300")
)
``` direct match ```
| eval scenario_2=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200" AND mvcount(all_prod)==2, "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=200 & PROD=300")
)

Notice the two different scenarios here. I wasn't exactly sure whether, when you mentioned that a cust has 100 and 200, that means 100 and 200 only and no other values, or whether 100 and 200 are allowed to be a subset of all that cust's values. So I included both scenarios to show the output.

Now, to get a distinct count of custs that fall into each category, you would just do a simple stats to tally them up by your specific scenario. Something like this:

<base_search>
| stats values(prod) as all_prod by cust
``` subset inclusion ```
| eval scenario_1=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200", "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300", "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300", "PROD=200 & PROD=300")
)
``` direct match ```
| eval scenario_2=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200" AND mvcount(all_prod)==2, "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=200 & PROD=300")
)
| stats values(cust) as custs dc(cust) as dc_cust by scenario_1

and the output should have your distinct counts of custs for each of the defined PROD multivalue combos.
If your application already supports a standard API--TAXII for threat intelligence, for example--there may be an existing app or add-on you can leverage.
It depends on your SaaS application. The add-on is the integration layer or "glue" between your application and Splunk. The add-on is a combination of the code you write, the configuration needed to parse events before they're indexed, and the configuration needed to search events after they're indexed. There's a good overview of the Splunk pipeline at https://docs.splunk.com/Documentation/Splunk/9.1.2/Deploy/Datapipeline; however, you don't necessarily need to know everything about Splunk to write an add-on.

The code you write depends on your application. Do you have an API? If yes, your code would read or receive data from your API, break it into logical events, and write the events to Splunk using Splunk's API. When I write "read or receive," I mean that the add-on establishes a connection to your application and polls an API for new data, waits for new data through a publisher/subscriber interface, or otherwise gets data from your application. It's up to you to determine how data is retrieved, how checkpoints are tracked, etc.
The commands you want are:

splunk cmd btool inputs list
splunk cmd btool transforms list

However, splunk.exe is either missing or inaccessible in the context of your cmd.exe process. Are you running as Administrator, and does Administrator or the Administrators group have basic Read & Execute permissions on C:\Program Files\Splunk\bin and C:\Program Files\Splunk\bin\splunk.exe?
Thanks. Regarding the 1st answer - understood.

Regarding the 2nd answer - I'm not sure I understand (because I didn't explain the setup well). My system is a SaaS application running somewhere in a public cloud; it doesn't know about Splunk, so I don't see how writing to stdout or rotating log files is relevant here. The link you sent for the developer tools API doesn't seem relevant either.

The add-on, if I understand correctly, is a piece of code that runs in the customer's Splunk deployment (either Enterprise or their Splunk Cloud) and should somehow interact with my SaaS security product in order to get events from it, right? My question is: how does the add-on interact with my SaaS system, which isn't where the Splunk add-on is running?

I could open an additional question, but it seems related to my first question about the difference between an add-on and HEC, and I still haven't figured out how the add-on actually works (getting events from an external system that runs elsewhere).
The first question is answered directly by the developer guide:

https://dev.splunk.com/enterprise/docs/developapps
https://dev.splunk.com/enterprise/docs/releaseapps

The second question depends on your product, but modular (or otherwise custom) inputs typically do one of three things:

1. Write events to stdout, which is automatically indexed by Splunk.
2. Write events to rotating log files, which are separately indexed by Splunk using monitor stanzas.
3. Write events through a Splunk-provided API. See https://dev.splunk.com/enterprise/docs/devtools.

If you have more specific questions about a particular topic, post a new question, and the community will gladly assist. Welcome to Splunk!
Thank you for replying. OK, so it sounds like asking customers to enable HEC, create a token, and give it to me so that my product can send events to their Splunk isn't good practice. Writing an add-on does make sense to me; considering what you wrote, I believe that this is the way to go.

Follow-up questions:

1. Can you describe the process of creating an add-on and publishing it to Splunkbase (?) so that my customers can install (?) it?
2. What's the actual logic that this add-on should have? i.e., how would the add-on installed at the customers' Splunk (either cloud or enterprise deployment) be used to deliver events?
As a Splunk practitioner, I prefer third-party products that allow me to manage the flow of data, typically through a published API. Allowing a third party to connect to a Splunk Cloud HEC endpoint requires modifying a network ACL, which in turn exposes the Splunk Cloud stack to additional risk. Connections to a self-hosted Splunk Enterprise HEC endpoint require implementing and maintaining edge infrastructure--firewalls, proxies, etc.--which exposes not only Splunk Enterprise but typically an entire enterprise network to additional risk.

The best ISVs write and support Splunk add-ons to integrate with their product's API. Whether that makes sense for you depends entirely on your business. Allowing external integration exposes you to risk, but that may be justified by the reward.

The process you followed to enable HEC and create a token in your test environment is the same process other Splunk Cloud customers would follow, although some may choose to manage HEC tokens via configuration files uploaded as custom apps. Splunk Enterprise customers would do the latter. Either is a relatively low-effort task for experienced Splunk administrators, but I wouldn't depend on every customer having experienced staff.

I do not recommend creating an add-on for your product that simply enables HEC. That's a misuse of the feature, and you'd want your customers to control authentication and authorization through their own HEC token definitions.
(index="B" logType="REQUEST") OR( index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*") | stats values(message) as messages, latest(*) as * by... See more...
(index="B" logType="REQUEST") OR( index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*") | stats values(message) as messages, latest(*) as * by Id | where like(messages, "searchString1%") and like(messages, "searchString2%")
I'm not sure what you're looking for, but it sounds like you want the stats command to return the _raw field, perhaps like this:

(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats values(_raw) as raw by Id