All Posts


>>> i would like to know how to install btool on windows

When we install Splunk, btool is automatically installed with it. From your question, I understand that you are looking for "how to run btool on windows".

>>> i was trying to open in windows as an administrator and I could get the results.

Just to make sure you are running the cmd prompt with admin rights, please check whether the top left of the cmd prompt shows "Administrator: Command Prompt".

>>> C:\Program Files\Splunk\bin>splunk btool inputs list
>>> 'splunk' is not recognized as an internal or external command, operable program or batch file.

Please let us know whether you installed Splunk on the default path or on a custom path. Thanks.
Hi @krutika_ag

As per Splunk docs: "If you add new data to an existing archive file, the forwarder reprocesses the entire file rather than just the new data. This can result in event duplication." Thus, to avoid duplication, Splunk monitors whole archive files and does not support single-file monitoring.

So you cannot monitor a single file inside an archive. What I would suggest is asking the developers/app team who create that archive to put the update into a separate archive file every time there is a change. I am still not completely sure of this suggestion, but it should be possible as per my understanding. Thanks.
Hi,

I checked the known issues and fixed issues for both SOAR Cloud and on-prem, but no luck. (One URL for your reference: https://docs.splunk.com/Documentation/SOARonprem/6.2.0/ReleaseNotes/KnownIssues)

The error string says:

Error string: ''LDAPInvalidDnError' object has no attribute 'description''

May we know how you are attempting the password reset and what attributes you are passing? Please advise, thanks.
Got it. Thanks. My application has an API. If the add-on can send HTTP requests to my API in order to fetch events and then index them in Splunk, then it sounds like exactly what I need. I'll start writing the add-on, then. Thank you for being so responsive and informative.
You can ask customers to use HEC. Many big players do. I just find it a bit cringe to ask customers to poke holes in their infrastructure. Opinions vary.
So first I think it makes sense to do a stats aggregation for all the values of the prod field for each cust value.

<base_search>
| stats values(prod) as all_prod by cust

This will leave us with a multivalue field of all the prod values seen per cust. From here you can do evaluations against the multivalue field to check for specific conditions. For example, the unique combinations of PROD values you mentioned in the original post can be checked in an eval like this.

<base_search>
| stats values(prod) as all_prod by cust
``` subset inclusion ```
| eval scenario_1=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200", "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300", "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300", "PROD=200 & PROD=300")
)
``` direct match ```
| eval scenario_2=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200" AND mvcount(all_prod)==2, "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=200 & PROD=300")
)

Notice the two different scenarios here. I wasn't exactly sure whether, when you mentioned that a cust has 100 and 200, that means 100 and 200 only with no other values, or whether 100 and 200 are allowed to be a subset of all that cust's values, so I included both scenarios to show the output. Now, to get a distinct count of custs that fall into each category, you would just do a simple stats to tally them up by your specific scenario. Something like this.
<base_search>
| stats values(prod) as all_prod by cust
``` subset inclusion ```
| eval scenario_1=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200", "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300", "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300", "PROD=200 & PROD=300")
)
``` direct match ```
| eval scenario_2=mvappend(
    case('all_prod'=="100" AND 'all_prod'=="200" AND mvcount(all_prod)==2, "PROD=100 & PROD=200"),
    case('all_prod'=="100" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=100 & PROD=300"),
    case('all_prod'=="200" AND 'all_prod'=="300" AND mvcount(all_prod)==2, "PROD=200 & PROD=300")
)
| stats values(cust) as custs dc(cust) as dc_cust by scenario_1

and the output should have your distinct counts of custs for each of the PROD MV combos defined.
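The subset-inclusion versus direct-match distinction above can be sanity-checked outside Splunk with a small Python sketch. The PROD values and labels come from the search above; the events are made up, and Python sets stand in for the all_prod multivalue field (with the set-equality check playing the role of mvcount(all_prod)==2):

```python
# Python mimic of scenario_1 (subset inclusion) and scenario_2 (direct match).
from collections import defaultdict

COMBOS = [({"100", "200"}, "PROD=100 & PROD=200"),
          ({"100", "300"}, "PROD=100 & PROD=300"),
          ({"200", "300"}, "PROD=200 & PROD=300")]

def classify(all_prod):
    prods = set(all_prod)
    subset = [label for combo, label in COMBOS if combo <= prods]   # scenario_1
    exact = [label for combo, label in COMBOS if combo == prods]    # scenario_2
    return subset, exact

# Toy (cust, prod) pairs standing in for <base_search> results.
events = [("a", "100"), ("a", "200"), ("a", "300"), ("b", "100"), ("b", "200")]
by_cust = defaultdict(set)
for cust, prod in events:
    by_cust[cust].add(prod)

for cust, prods in sorted(by_cust.items()):
    print(cust, classify(prods))
```

Here cust "a" holds all three PROD values, so it matches every pair under subset inclusion but nothing under direct match; cust "b" matches 100 & 200 under both.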
If your application already supports a standard API--TAXII for threat intelligence, for example--there may be an existing app or add-on you can leverage.
It depends on your SaaS application. The add-on is the integration layer or "glue" between your application and Splunk. The add-on is a combination of the code you write, the configuration needed to parse events before they're indexed, and the configuration needed to search events after they're indexed. There's a good overview of the Splunk pipeline at https://docs.splunk.com/Documentation/Splunk/9.1.2/Deploy/Datapipeline; however, you don't necessarily need to know everything about Splunk to write an add-on. The code you write depends on your application. Do you have an API? If yes, your code would read or receive data from your API, break it into logical events, and write the events to Splunk using Splunk's API. When I write "read or receive," I'm implying that the add-on establishes a connection to your application and polls an API for new data, waits for new data through a publisher/subscriber interface, or otherwise gets data from your application. It's up to you to determine how data is retrieved, how checkpoints are tracked, etc.
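To make the "break it into logical events" step concrete, here is a minimal, hypothetical Python sketch of that half of the work: one API response in, one serialized event per record out. The payload shape (a list of records under an "items" key) is an assumption for illustration, not a real API contract, and a real add-on would also handle polling and checkpointing:

```python
# Hypothetical event-breaking step: turn one API response into
# self-contained JSON events that Splunk can index one per line.
import json

def break_into_events(payload):
    events = []
    for item in payload.get("items", []):
        # Each record becomes its own serialized event.
        events.append(json.dumps(item, sort_keys=True))
    return events

payload = {"items": [{"ts": 1700000000, "msg": "login"},
                     {"ts": 1700000060, "msg": "logout"}]}
for event in break_into_events(payload):
    print(event)
```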
The commands you want are:

splunk cmd btool inputs list
splunk cmd btool transforms list

However, splunk.exe is either missing or inaccessible in the context of your cmd.exe process. Are you running as Administrator, and does Administrator or the Administrators group have basic Read & Execute permissions on C:\Program Files\Splunk\bin and C:\Program Files\Splunk\bin\splunk.exe?
Thanks.

Regarding the 1st answer - understood.

Regarding the 2nd answer - I'm not sure I explained the setup well. My system is a SaaS application running somewhere in a public cloud; it doesn't know about Splunk, so I don't see how writing to stdout or rotating log files is relevant here. The link you sent for the developer tools API doesn't seem relevant either.

The add-on, if I understand correctly, is a piece of code that runs in the customer's Splunk deployment (either Enterprise or their Splunk Cloud) and should somehow interact with my SaaS security product in order to get events from it, right? My question is - how does the add-on interact with my SaaS system, which isn't where the Splunk add-on is running?

I could open an additional question, but it seems related to my first question about the difference between an add-on and HEC, and I still haven't figured out how the add-on actually works (getting events from an external system that runs elsewhere).
The first question is answered directly by the developer guide:

https://dev.splunk.com/enterprise/docs/developapps
https://dev.splunk.com/enterprise/docs/releaseapps

The second question depends on your product, but modular (or otherwise custom) inputs typically do one of three things:

1. Write events to stdout, which is automatically indexed by Splunk.
2. Write events to rotating log files, which are separately indexed by Splunk using monitor stanzas.
3. Write events through a Splunk-provided API. See https://dev.splunk.com/enterprise/docs/devtools.

If you have more specific questions about a particular topic, post a new question, and the community will gladly assist. Welcome to Splunk!
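As a rough illustration of the first option (events printed to stdout, with the input tracking its own checkpoint so re-runs don't duplicate data), here is a hedged Python sketch. The checkpoint file format, the "ts" field, and the assumption that records arrive sorted by timestamp are all made up for the example:

```python
# Hedged sketch of a stdout-writing input with a file-based checkpoint.
import json
import os
import tempfile

def emit_new_events(records, checkpoint_path):
    last = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            last = int(f.read().strip() or 0)
    emitted = []
    for rec in records:                    # assumed sorted by "ts" ascending
        if rec["ts"] > last:
            print(json.dumps(rec))         # stdout is what Splunk would index
            emitted.append(rec)
            last = rec["ts"]
    with open(checkpoint_path, "w") as f:
        f.write(str(last))                 # persist the high-water mark
    return emitted

# Demo: the second call emits nothing because the checkpoint has advanced.
ckpt = os.path.join(tempfile.mkdtemp(), "checkpoint")
records = [{"ts": 1, "msg": "login"}, {"ts": 2, "msg": "logout"}]
emit_new_events(records, ckpt)
emit_new_events(records, ckpt)   # no output
```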
Thank you for replying.

OK, so it sounds like asking customers to enable HEC, create a token, and give it to me so that my product can send events to their Splunk isn't good practice. Writing an add-on does make sense to me; considering what you wrote, I believe that this is the way to go.

Follow-up questions:

1. Can you describe the process of creating an add-on and publishing it to Splunkbase (?) so that my customers can install (?) it?
2. What's the actual logic that this add-on should have? i.e., how would the add-on installed at the customers' Splunk (either Cloud or Enterprise deployment) be used to deliver events?
As a Splunk practitioner, I prefer third-party products that allow me to manage the flow of data, typically through a published API. Allowing a third party to connect to a Splunk Cloud HEC endpoint requires modifying a network ACL, which in turn exposes the Splunk Cloud stack to additional risk. Connections to a self-hosted Splunk Enterprise HEC endpoint require implementing and maintaining edge infrastructure--firewalls, proxies, etc.--which exposes not only Splunk Enterprise but typically an entire enterprise network to additional risk. The best ISVs write and support Splunk add-ons to integrate with their product's API. Whether that makes sense for you depends entirely on your business. Allowing external integration exposes you to risk, but that may be justified by the reward.

The process you followed to enable HEC and create a token in your test environment is the same process other Splunk Cloud customers would follow, although some may choose to manage HEC tokens via configuration files uploaded as custom apps. Splunk Enterprise customers would do the latter. Either is a relatively low-effort task for experienced Splunk administrators, but I wouldn't depend on every customer having experienced staff.

I do not recommend creating an add-on for your product that simply enables HEC. That's a misuse of the feature, and you'd want your customers to control authentication and authorization through their own HEC token definitions.
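For readers wondering what a HEC client actually sends, here is a minimal Python sketch that only builds the request for Splunk's /services/collector/event endpoint without sending anything. The hostname and token are placeholders, and port 8088 is the self-hosted default; Splunk Cloud stacks use a different HEC hostname and port:

```python
# Hypothetical helper that assembles one HEC event POST: URL, headers, body.
import json

def build_hec_request(host, token, event, index=None, sourcetype=None):
    url = f"https://{host}:8088/services/collector/event"  # 8088: self-hosted default
    headers = {"Authorization": f"Splunk {token}"}          # HEC token auth header
    body = {"event": event}
    if index:
        body["index"] = index
    if sourcetype:
        body["sourcetype"] = sourcetype
    return url, headers, json.dumps(body)

url, headers, body = build_hec_request("splunk.example.com", "REPLACE-ME",
                                       {"msg": "login failed"}, index="main")
print(url)
print(body)
```

The token in the Authorization header is exactly the credential the posts above discuss handing to a vendor, which is why keeping it under the customer's control matters.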
(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats values(message) as messages, latest(*) as * by Id
| where like(messages, "searchString1%") and like(messages, "searchString2%")
I'm not sure what you're looking for, but it sounds like you want the stats command to return the _raw field, perhaps like this.

(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats values(_raw) as raw by Id
If I understand your question here, I believe adding something like this to your stats aggregation can give you additional fields you can use to filter on and only include the Ids that have events occurring from each of the 3 scenarios you have separated with ORs in the original search.

index="B" AND (logType="REQUEST" OR (logType="TRACES" AND message IN ("searchString1*", "searchString2*")))
``` In the below stats aggregation the max(eval(if())) functions check whether a specific event matches the condition inside your if statement. If there is at least a single event that matches the criteria for a specific 'Id' then this value will be 1. If the condition is not met for an 'Id' then it will be 0. ```
| stats max(eval(if('logType'=="REQUEST", 1, 0))) as has_request_log,
    max(eval(if('logType'=="TRACES" AND like(message, "searchString1%"), 1, 0))) as has_trace_type_1,
    max(eval(if('logType'=="TRACES" AND like(message, "searchString2%"), 1, 0))) as has_trace_type_2,
    values(message) as messages,
    latest(*) as * by Id
``` Only include the Ids where there were events from all 3 of these search criteria ```
| where 'has_request_log'==1 AND 'has_trace_type_1'==1 AND 'has_trace_type_2'==1

Alternatively, you can classify the log types before the stats aggregation and do your filtering based off of that field.

index="B" AND (logType="REQUEST" OR (logType="TRACES" AND message IN ("searchString1*", "searchString2*")))
``` Eval to classify the logs returned from your search into a field named 'event_category' ```
| eval event_category=case(
    'logType'=="REQUEST", "Request",
    'logType'=="TRACES" AND like(message, "searchString1%"), "Traces_1",
    'logType'=="TRACES" AND like(message, "searchString2%"), "Traces_2"
)
``` Group all unique values of 'event_category' seen for each Id ```
| stats values(event_category) as event_category values(message) as messages, latest(*) as * by Id
``` Only include the Ids where there were events from all 3 of these search criteria. The mvcount() function checks how many values are contained within the field; since we used values(event_category) as event_category, we only want the Ids that have all 3 unique classifications ```
| where mvcount(event_category)>=3
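The max(eval(if())) flag trick above can be mimicked in plain Python to see why it works. The events below are toy data using the logType and message field names from the search, with startswith standing in for the like() wildcard match:

```python
# Per-Id mimic of the max(eval(if())) flags: an Id passes only if it has
# a REQUEST log plus TRACES events matching both search strings.
def has_all_three(events):
    flags = {"has_request_log": 0, "has_trace_type_1": 0, "has_trace_type_2": 0}
    for e in events:
        msg = e.get("message", "")
        if e["logType"] == "REQUEST":
            flags["has_request_log"] = 1
        elif e["logType"] == "TRACES" and msg.startswith("searchString1"):
            flags["has_trace_type_1"] = 1
        elif e["logType"] == "TRACES" and msg.startswith("searchString2"):
            flags["has_trace_type_2"] = 1
    # The final `where` clause: keep the Id only if every flag reached 1.
    return all(v == 1 for v in flags.values())

events_for_id = [
    {"logType": "REQUEST", "message": "GET /api"},
    {"logType": "TRACES", "message": "searchString1 start"},
    {"logType": "TRACES", "message": "searchString2 end"},
]
print(has_all_three(events_for_id))   # prints True
```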
Thanks @richgalloway for the response! This indeed helps. Can I extend the question to also understand how I can enforce that the individual searches between the OR conditions each return results, and only then combine the results (similar to an inner join) using the Id field?
It sounds like you need the values function.

(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats values(message) as messages, latest(*) as * by Id
I am new to Splunk queries and was trying to combine results from multiple queries without using subsearches, due to the limitation restricting subsearches to 50000 results; our dataset has more than 50000 records to be considered. Below is the query I was trying:

(index="B" logType="REQUEST") OR (index="B" logType="TRACES" message="searchString1*") OR (index="B" logType="TRACES" message="searchString2*")
| stats latest(*) as * by Id

All the above queries have the Id field in the result, which matches and corresponds to a kind of correlation id between these logs. I would like the end result to show all the common fields that have the same values, but also with the message field having the consolidated message content from the individual queries made on the same index B. The message field alone can have different values between the queries and needs to be consolidated in the result. Can someone help with how this can be done? @splunk
Some of my customers are using Splunk as their SIEM solution. I have a security platform that needs to integrate with their Splunk to send security events (probably syslog) into a certain index (might be an existing or a brand new one).

I already made a PoC using HEC and successfully managed to deliver my syslog events into an index in my test Splunk account (using Splunk Cloud Platform). The setup process that my customers will have to do for the integration using HEC is to create a new data input, create a token, and eventually deliver it to me (alongside their Splunk hostname).

Now I'm wondering if this process can somehow be simplified using an app/add-on. I'm not sure exactly what functionality an add-on gives and whether I can somehow leverage it to simplify the integration onboarding process between my security product and my customers. Is there anything else I should consider? I'd love to know; I'm completely new to Splunk.

Also, in case it matters, most of my customers are using Splunk Cloud Platform, but in the future there might be customers that have Splunk Enterprise.

Thanks