All Posts



Hello @richgalloway, thanks for your help. It's odd that I didn't receive a notification when you responded.

1) It looks like it also works if I do the index search first, then the DBX query.
2) How do I put the company IDs in the brackets of the DBX query dynamically? Something like:

eval variable = ..... A, B, C, ... Z (Company ID)
where companyID in $variable$

index=company
| append
    [| dbxquery query="select * from employee where companyID in (A,B,C)"
     | stats values(*) as * by CompanyID ]
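One common pattern (a sketch, not tested against your data — the index, field, and column names here are assumptions) is to let a subsearch build the dbxquery string itself, since subsearch results are substituted into the outer search before it runs:

```
| dbxquery
    [ search index=company
      | stats values(companyID) as id
      | eval query="select * from employee where companyID in ('" . mvjoin(id, "','") . "')"
      | return query ]
```

Here `| return query` emits `query="select ..."`, which becomes the argument to dbxquery, so the IN list is built from whatever company IDs the subsearch finds.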
Thanks for your reply. It returns multiple results because there's more than one tag in the array per event. "stats count by tags{}.name" returns 1 count for each tag.

os_system_name: Microsoft Windows
os_type: Workstation
os_vendor: Microsoft
os_version: 22H2
risk_score: 747.0674438476562
severe_vulnerabilities: 1
tags: [
  { name: Asset_Workstation, type: CUSTOM }
  { name: Dept_Finance, type: SITE }
]
total_vulnerabilities: 1

Results:
tags{}.name        count
Asset_Workstation  1
Dept_Finance       1

I wasn't able to run eval or where operations on tags{}.name without getting an error, so I was stuck. I just stumbled on my answer, but I appreciate your time looking at this. I knew it had to be a simple query, but I wasn't initially able to put it together. Feel free to offer a better, more efficient way to get the results below.

(index="index_name")
| dedup id
| stats count by tags{}.name
| rename tags{}.name AS dept
| where (dept like "Dept_%")

Results:
dept          count
Dept_Finance  1
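Since the post invites a more efficient approach, one alternative (a sketch, assuming the same index and field names as above) is to filter the multivalue field before counting, so non-"Dept_" tags never reach stats. The eval error is usually because field names containing {} or . must be renamed (or single-quoted) before eval can reference them:

```
(index="index_name")
| dedup id
| rename tags{}.name AS tag_name
| eval dept=mvfilter(match(tag_name, "^Dept_"))
| mvexpand dept
| stats count by dept
```

Renaming first sidesteps the special characters, mvfilter keeps only the matching tags, and mvexpand splits them so stats counts each department separately.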
Use spath for JSON data:

| spath input=properties
| spath input=response.result custom_tags
| spath input=custom_tags
Hello, I am brand new to Splunk. After watching a short tutorial to get started, I saw that Settings => Data Inputs => Local Event Log Collection does not appear in my version of Splunk Enterprise. I have it on macOS Monterey and it seems to work fine, but I know most people use it on Windows. Please, can someone help me find how to log local events in Splunk for Mac? Thank you for your help. Noé
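For context: Local Event Log Collection is a Windows-only input (it reads the Windows Event Log), so it will never appear on macOS. On a Mac you monitor log files instead, either via Settings => Data Inputs => Files & Directories or with an inputs.conf stanza. A minimal sketch (the path, sourcetype, and index are examples — point it at whatever you actually want to collect):

```
[monitor:///var/log/system.log]
sourcetype = syslog
index = main
```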
@ITWhisperer suggests adding it to the by clause (the split-by fields, informally called "group by"). I literally just added it after by. Something like:

`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME>3000, "1", "0")
| bin span=30m _time bins=2
| stats count as "total_requests", sum(resp_time_exceeded) as long_calls by _time, kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls/total_requests)*100
| where total_requests>200 AND Percent_Exceeded>5
Thank you for asking a very nuanced question. Is that "hostname" the Splunk field "host"? It doesn't matter to the solution, though.

| stats values(result) as result by hostname
| where mvcount(result) == 1 AND result == "1"
"This returns the correct count of tags for "Dept_" but it's also including all other tags that do not begin with "Dept_"."

I don't understand this statement. Other than the group-by field tags{}.name itself,

| stats count by tags{}.name

only gives one single output, that is, count. How does it "include other tags", Dept_ or not? You can help us by illustrating the results you want and the results the search actually gives you (anonymize as needed), and explaining why the result is not what you expected.
As stated I've tried searching through all files within /etc/* (including all .conf files) for the following: "514", "tcp", "udp", "syslog", or "SC4S". I get no results. You mentioned I should check inputs.conf, but I've already done this and found nothing - could you elaborate on what exactly I should be searching for? Are there additional keywords I should try? I confirmed that the Heavy Forwarders are listening on port 514. Syslog is working... I just don't see how it's configured.  Edit: I also want to ask - what could btool find that a sudo grep search wouldn't have located?
Hi All, I have a sample JSON event in Splunk as below. Please help me understand how I can parse the custom_tags value from it. It may have multiple key-value pairs in it.

{
  "account": "xyz",
  "eventdate": "01/25/2024",
  "properties": {
    "version": "1.0",
    "requestID": "cvv",
    "response": {"statusCode": "200", "result": "{\"run_id\":465253,\"custom_tags\":{\"jobname\":\"xyz\",\"domain\":\"bgg\"}}"},
    "time": "12:55"
  }
}
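Since response.result is itself a JSON-escaped string, one approach (a sketch — untested, and the path names assume the event shape shown above) is to extract it with spath and then run spath again on the extracted string:

```
| spath path=properties.response.result output=result
| spath input=result path=custom_tags output=custom_tags
| spath input=custom_tags
```

The second and third spath calls parse the inner JSON, giving you fields like jobname and domain regardless of how many key-value pairs custom_tags contains.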
Hi Team, I tried executing the cluster-merge-buckets command on the Cluster Manager and got the following error / exception:

Command:
./splunk cluster-merge-buckets -index-name _audit -startdate 2020/01/01 -enddate 2024/01/24 -max-count 1000 -min-size 1 -max-total-size 1024

Output:
Using the following config: -max-count=1000 -min-size=1 -max-size=1000 -max-timespan=7776000
Dryrun has started. merge_txn_id=1706209703.24
[...]
peer=IDX01 processStatus=Merge_Done totalBucketsToMerge=28 mergedBuckets=28 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=868MB progress=100.0%
[...]
peer=IDX02 processStatus=Merge_Done totalBucketsToMerge=23 mergedBuckets=23 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=718MB progress=100.0%
progress=100.0% peers=2 completedPeers=2 failedPeers=0 totalBucketsToMerge=51 mergedBuckets=51 bucketsUnableToMerge=0 createdBuckets=2 totalSizeOfMergedBuckets=1586MB
(Additional space required for localizing S2 buckets up to the equivalent of sizeOfMergedBuckets for each peer)

Has anyone experienced the same before, or could you help me with a resolution?
I have events with an array field named "tags". The tags array has 2 fields for each array object, named "name" and "type". I reference this array as tags{}.name. The values being returned for one event are:

name, type
Dept_Finance, Custom
Asset_Workstation, Custom

My goal is to count the events by tags starting with "Dept_".

(index="index_name")
| dedup id
| stats count by tags{}.name

This returns the correct count of tags for "Dept_", but it also includes all other tags that do not begin with "Dept_". The Asset_Workstation tag is attached to this event; however, I don't want it to appear in the output. How can I pull records with multiple tags but exclude all tags not beginning with "Dept_" from the output? I know this is an easy thing to do, but I'm still learning SPL. Thanks for your help.
Use the transpose command to flip the table:

| stats max(gb) as GB by metric_name
| transpose header_field=metric_name
Hey everyone, I'm stumped trying to put together a query to find specific hosts that return some value but not some other possible values over a given timeframe, where each result is itself a separate log entry (and each device returns multiple results each time it does an operation). E.g., given a list of possible results, the data itself looks something like this:

(results from today:)
hostname=x result=2
hostname=x result=3
hostname=y result=1
hostname=z result=1

(results from yesterday/previous days:)
hostname=x result=1
hostname=y result=1
hostname=z result=1

and I need to find all hostnames that had a result of "1" but not results "2" or "3" over some given timeframe. So, from the data above, I'd be looking to return hostnames "y" and "z", but not "x". Unfortunately, the timeframe would be weeks, and I would be looking at many thousands of possible hostnames. The only data point I'd know ahead of time would be the list of possible results (it'd only be a handful of possibilities, but a device can potentially return some/all of them at once). Any advice on where to start? Thanks!
I know this is quite an old post... There are two parts to consider:

1. The data model constraints filter
2. Event types and tagging

Ensure you have created appropriate event types and tag(s) within Settings >> Event types, matching the data model's constraints query. The field extractions will then occur within the data model / dataset, because the events will be mapped via the tags to the appropriate data model.
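As a concrete sketch of the two pieces (the stanza name and search are hypothetical — match them to your own data and to the tags your data model's constraint expects):

```
# eventtypes.conf
[my_auth_events]
search = index=security sourcetype=myapp:auth

# tags.conf
[eventtype=my_auth_events]
authentication = enabled
```

With the "authentication" tag enabled, events matching the event type are picked up by a data model whose constraint includes tag=authentication, and the dataset's field extractions then apply to them.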
Hi @ITWhisperer,

Thanks for getting back!! Can you show what you mean in terms of where to add what you are saying?

Thanks
You have binned _time but not included it in the by clause of your stats command.
Hi, I am using the following query:

`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME>3000, "1", "0")
| bin span=30m _time bins=2
| stats count as "total_requests", sum(resp_time_exceeded) as long_calls by kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls/total_requests)*100
| where total_requests>200 and Percent_Exceeded>5

I use "| bin span=30m _time bins=2" in the query above, but the results are not shown in 30-minute increments; everything is aggregated at once. How can I refine the query so that it shows 30-minute increments?
Hi! You'll want to poke through the docs on Splunk config files: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/AboutConfigurationFiles

But the TL;DR is I would use "btool" - https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/Usebtooltotroubleshootconfigurations - and you'll want to go hunting for "inputs.conf", which is where your Splunk instances would be taking the data in, then comb through props.conf, where the sourcetypes and event parsing/transformation/routing happen. It is also common to have Splunk co-located with a syslog listener that writes logs to disk for Splunk to pick up. A quick `ss -tulpn` or `netstat -tulpn` will show what ports, if any, are open on your Heavy Forwarders. So getting good with btool, or reviewing inputs and sourcetypes in your Splunk UI, will be key.
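One advantage of btool over grepping /etc directly: it prints the fully merged view of every configuration layer (system defaults, app defaults, app local, system local) along with the file each setting came from, so it catches stanzas in app subdirectories or default files that a non-recursive grep would miss. A sketch (paths assume a default install):

```
# Show all merged input stanzas and which file each line came from
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# Narrow to anything mentioning port 514
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep 514
```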
Hi! Can you clarify what the actual source of these logs is? Are these logs in the pod's stdout/stderr logs from /var/log/pods, or are they in some other location? What is the path to them? *nix nodes or Windows nodes? As mentioned already, OTel does fingerprint logs, and the filelog settings expect to be tailing live logs, so if this is a slightly different use case we can add a custom receiver in the "extraFilelogs" section of the Helm chart.
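For reference, adding a custom receiver through the Helm chart looks roughly like this values.yaml fragment (a sketch — in recent versions of the splunk-otel-collector chart the key is typically logsCollection.extraFileLogs, and the receiver name, path, and options here are placeholders; check the values reference for your chart version):

```
logsCollection:
  extraFileLogs:
    filelog/my-app:
      include:
        - /var/log/myapp/*.log
      start_at: beginning
      include_file_path: true
```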
Any time there's a large volume of alerts or data, you just have to find a way to summarize it, then break it apart and focus on one thing at a time to see what is actually going on. I'd start by reviewing what signatures are coming in. Is the majority of certain types? Is that type actually a concern in your environment? Are these alerts actually things that are concerning, or is something set up strangely on the firewall? You will likely need to work with the firewall team to ensure that the threat detections on their side are set up in a useful way and you're not getting alerts for expected traffic. So, lots of things to do here, but start by breaking down the information into chunks. I'd focus on types of signatures and known assets/networks.
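A starting point for that summarize-then-drill-down step (a sketch — the index, sourcetype, and field names are assumptions; adjust them to whatever your firewall add-on extracts):

```
index=firewall sourcetype=pan:threat earliest=-24h
| stats count by signature, severity, src_zone
| sort - count
```

Reviewing the top signatures by volume quickly shows whether a handful of rules, or one noisy network segment, accounts for most of the alerts.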