All Posts

 This returns the correct count of tags for "Dept_" but it's also including all other tags that do not begin with "Dept_". I don't understand this statement. Other than the group-by field tags{}.name itself, | stats count by tags{}.name gives only a single output, namely count. How does it "include other tags", Dept_ or not? You can help us by illustrating the results you want and the results the search actually gives you (anonymize as needed), and explaining why the result is not what you expected.
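For what it's worth, here is a self-contained illustration (synthetic data via makeresults) of how stats count by treats a multivalue field: each value becomes its own group row.

```spl
| makeresults
| eval name=mvappend("Dept_Finance", "Asset_Workstation")
| stats count by name
```

This produces one row per value ("Dept_Finance" and "Asset_Workstation"), each with count=1, which is why non-Dept_ values show up unless they are filtered out first.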
As stated, I've tried searching through all files within /etc/* (including all .conf files) for the following: "514", "tcp", "udp", "syslog", or "SC4S". I get no results. You mentioned I should check inputs.conf, but I've already done this and found nothing. Could you elaborate on what exactly I should be searching for? Are there additional keywords I should try? I confirmed that the heavy forwarders are listening on port 514. Syslog is working; I just don't see how it's configured.

Edit: I also want to ask: what could btool find that a sudo grep search wouldn't have located?
Hi All, I have a sample JSON event in Splunk as below. Please help me understand how I can parse the custom_tags value from it; it may have multiple key-value pairs.

{
  account: xyz,
  eventdate: 01/25/2024,
  properties: {
    version: 1.0,
    requestID: cvv,
    response: {"statusCode":"200", "result":"{\"run_id\":465253,\"custom_tags\":{\"jobname\":\"xyz\",\"domain\":\"bgg\"}}"},
    time: 12:55
  }
}
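A sketch of one possible approach, assuming the event is valid JSON at index time and the escaped inner JSON lives at properties.response.result (paths taken from the sample): use spath twice, once to pull out the escaped string and once to parse it.

```spl
| spath path=properties.response.result output=result_json
| spath input=result_json path=custom_tags.jobname output=jobname
| spath input=result_json path=custom_tags.domain output=domain
```

If custom_tags can hold arbitrary keys, `| spath input=result_json path=custom_tags` extracts the whole object for further processing.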
Hi Team, I tried executing the cluster-merge-buckets command on the Cluster Manager and got the following error/exception:

Command:
./splunk cluster-merge-buckets -index-name _audit -startdate 2020/01/01 -enddate 2024/01/24 -max-count 1000 -min-size 1 -max-total-size 1024

Output:
Using the following config: -max-count=1000 -min-size=1 -max-size=1000 -max-timespan=7776000
Dryrun has started. merge_txn_id=1706209703.24
[...] peer=IDX01 processStatus=Merge_Done totalBucketsToMerge=28 mergedBuckets=28 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=868MB progress=100.0%
[...] peer=IDX02 processStatus=Merge_Done totalBucketsToMerge=23 mergedBuckets=23 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=718MB progress=100.0%
progress=100.0% peers=2 completedPeers=2 failedPeers=0 totalBucketsToMerge=51 mergedBuckets=51 bucketsUnableToMerge=0 createdBuckets=2 totalSizeOfMergedBuckets=1586MB (Additional space required for localizing S2 buckets up to the equivalent of sizeOfMergedBuckets for each peer)

Has anyone experienced the same earlier, or could anyone help me with a resolution?
I have events with an array field named "tags". The tags array has 2 fields for each array object, named "name" and "type". I reference this array as tags{}.name. The values being returned for one event are:

name, type
Dept_Finance, Custom
Asset_Workstation, Custom

My goal is to count the events by tags starting with "Dept_".

(index="index_name") | dedup id | stats count by tags{}.name

This returns the correct count of tags for "Dept_", but it's also including all other tags that do not begin with "Dept_". The Asset_Workstation tag is attached to this event, but I don't want it to appear in the output. How can I pull records with multiple tags but exclude all tags not beginning with "Dept_" from the output? I know this is an easy thing to do, but I'm still learning SPL. Thanks for your help.
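One possible approach, as a sketch (the index name and id field come from the question): filter the multivalue field down to the "Dept_" values before counting.

```spl
index="index_name"
| dedup id
| rename tags{}.name as tag_name
| eval dept_tags=mvfilter(match(tag_name, "^Dept_"))
| stats count by dept_tags
```

Renaming first avoids having to quote the tags{}.name field inside eval; mvfilter keeps only the values matching the regex, so Asset_Workstation drops out before stats runs.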
Use the transpose command to flip the table. | stats max(gb) as GB by metric_name | transpose header_field=metric_name  
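As a self-contained illustration of the flip (synthetic data via makeresults; the field names mirror the snippet above):

```spl
| makeresults count=3
| streamstats count as n
| eval metric_name="metric_".n, gb=n*10
| stats max(gb) as GB by metric_name
| transpose header_field=metric_name
```

Before the transpose there is one row per metric_name; after it, each metric_name becomes a column header with its GB value underneath.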
Hey everyone, I'm stumped trying to put together a query to find specific hosts that return some value but not some other possible values over a given timeframe, where each result is itself a separate log entry (and each device returns multiple results each time it does an operation). E.g., given a list of possible results, the data itself looks something like this:

(results from today:)
hostname=x result=2
hostname=x result=3
hostname=y result=1
hostname=z result=1

(results from yesterday/previous days:)
hostname=x result=1
hostname=y result=1
hostname=z result=1

and I need to find all hostnames that had a result of "1" but not results "2" or "3" over some given timeframe. So, from the data above, I'd be looking to return hostnames "y" and "z", but not "x". Unfortunately, the timeframe would be weeks, and I would be looking at many thousands of possible hostnames. The only data point I'd know ahead of time would be the list of possible results (it'd only be a handful of possibilities, but a device can potentially return some/all of them at once). Any advice on where to start? Thanks!
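One common pattern for this, as a sketch (the index name is a placeholder; hostname and result are the fields from the question): collect each host's distinct results with stats, then filter on the resulting multivalue field.

```spl
index=mydata result IN (1, 2, 3)
| stats values(result) as results by hostname
| where isnotnull(mvfind(results, "^1$")) AND isnull(mvfind(results, "^(2|3)$"))
```

Because stats collapses everything down to one row per hostname, this scales to thousands of hosts over a multi-week timeframe; set the window with the time picker or earliest/latest.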
I know this is quite an old post... There are two parts to consider:

1. The data model constraints filter
2. Event types and tagging

Ensure you have created appropriate event types and tag(s) within Settings >> Event types per the data model constraints query. The field extractions will then occur within the data model / dataset, because the events will be mapped via the tags to the appropriate data model.
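As a sketch of what that pairing looks like on disk (all names here are hypothetical; the tag must match whatever the data model's constraint search expects, e.g. tag=authentication):

```conf
# eventtypes.conf
[my_auth_events]
search = index=security sourcetype=my:auth

# tags.conf
[eventtype=my_auth_events]
authentication = enabled
```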
Hi ITwhispper, thanks for getting back to me! Can you show what you mean, i.e. where exactly I would add what you're describing? Thanks!
You have binned _time, but you have not included it in the by clause of your stats command, so the bins are collapsed back into a single aggregate.
Hi, I am using the following query:

`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME>3000, "1", "0")
| bin span=30m _time bins=2
| stats count as "total_requests", sum(resp_time_exceeded) as long_calls by kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls/total_requests)*100
| where total_requests>200 AND Percent_Exceeded>5

I'm getting results as shown below. Although the query includes `| bin span=30m _time bins=2`, the results are not broken out into 30-minute increments; everything is aggregated at once. How can I refine the query so that it shows 30-minute increments instead?
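A sketch of the likely fix (same query as above; the only change is adding _time to the stats by clause so the 30-minute bins survive the aggregation):

```spl
`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME>3000, "1", "0")
| bin span=30m _time
| stats count as total_requests, sum(resp_time_exceeded) as long_calls by _time, kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls/total_requests)*100
| where total_requests>200 AND Percent_Exceeded>5
```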
Hi! You'll want to poke through the docs on Splunk config files: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/AboutConfigurationFiles

But the tl;dr is that I would use "btool" - https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/Usebtooltotroubleshootconfigurations - and you'll want to go hunting for "inputs.conf" (this is where your Splunk instances take data in), then comb through props.conf, which is where the sourcetypes and event parsing/transformation/routing happen. It is also common to have Splunk co-located with a syslog listener that writes logs to disk for Splunk to pick up. A quick `ss -tulpn` or `netstat -tulpn` will show what ports, if any, are open on your heavy forwarders. So getting good with btool, or reviewing inputs and sourcetypes in your Splunk UI, will be key.
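For example (assuming a standard $SPLUNK_HOME install; btool prints the merged view of every configuration layer, including app defaults, which a grep of a single directory can miss):

```shell
# Show all merged input stanzas, annotated with the file each setting came from
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# Narrow down to syslog-style listeners
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -iE '514|udp|tcp'
```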
Hi! Can you clarify what the actual source of these logs is? Are they in the pod's stdout/stderr logs from /var/log/pods, or in some other location? What is the path to them? *nix nodes or Windows nodes? As mentioned already, OTel does fingerprint logs, and the filelog settings expect to be tailing live logs, so if this is a somewhat different use case we can add a custom receiver in the "extraFilelogs" section of the helm chart.
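As a rough sketch of what such an addition can look like in the chart values (the key casing, receiver name, and path here are assumptions; check the splunk-otel-collector chart's values reference for the exact structure):

```yaml
logsCollection:
  extraFileLogs:
    filelog/my-custom-log:              # hypothetical receiver name
      include: [/var/log/myapp/*.log]   # hypothetical path
      start_at: beginning
```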
Anytime there's a large amount of alerts or data, you just have to find a way to summarize it, then break it apart and focus on one thing at a time to see what is actually going on. I'd start by reviewing what signatures are coming in. Is the majority of a certain type? Is that type actually a concern in your environment? Are these alerts actually concerning, or is something set up strangely on the firewall? You will likely need to work with the firewall team to ensure that the threat detections on their side are set up in a useful way and that you're not getting alerts for expected traffic. So, there are lots of things to do here, but start by breaking the information down into chunks. I'd focus on types of signatures and known assets/networks.
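For the review step, a starting-point sketch in SPL (the index, sourcetype, and field names such as signature and severity are placeholders for however your firewall data is onboarded):

```spl
index=firewall sourcetype=my:fw:threats
| stats count by signature, severity
| sort - count
```

Sorting by count surfaces the noisiest signatures first, which is usually where tuning pays off fastest.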
Hey! Yeah, the OTel file exporter can be used: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/fileexporter

You could also configure OTel to filter the data you want and send it to a file and HEC simultaneously: https://github.com/signalfx/splunk-otel-collector/tree/main/examples/otel-logs-routing

The Splunk OTel distro has these components: https://github.com/signalfx/splunk-otel-collector/blob/main/docs/components.md
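A minimal sketch of the file exporter wiring in the collector config (the path and the receiver/exporter names in the pipeline are assumptions for illustration; see the fileexporter README for rotation and format options):

```yaml
exporters:
  file:
    path: /var/log/otel/output.json   # placeholder path

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [file, splunk_hec]
```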
Hi @scelikok, I have done all the things that you described in this thread, but it is still not sending logs. Could it be an issue with time or with the cluster, or is there any other configuration we have to do in the helm chart? Your help will be appreciated. Thanks!
As far as I know, this isn't a feature that's available built-in. However, you can make dashboards for it using the notable index. https://lantern.splunk.com/?title=Security%2FUCE%2FPrioritized_Actions%2FCyber_frameworks%2FAssessing_and_expanding_MITRE_ATT%26CK_coverage_in_Splunk_Enterprise_Security outlines some searches that would help you gather that information.
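For instance, a sketch of a coverage-style search against the notable index (the annotation field name follows the usual ES convention, but verify it against your own notable events):

```spl
index=notable
| stats count by annotations.mitre_attack.mitre_technique_id
| sort - count
```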
My organization has a handful of heavy forwarders that were configured to listen to syslog sources through udp://514. This was set up by a 3rd party, and now we are trying to understand the configuration. Searching the heavy forwarders' /etc/* recursively for "514", "tcp", "udp", "syslog", or "SC4S" returns no relevant results. We know syslog is working, because we have multiple sources that are pointed at the heavy forwarders using udp over port 514 and their data is being indexed. Curiously, when a new syslog source is pointed at the HFs, a new index with a random name pops up in our LastChanceIndex. We have no idea how any of this is configured - the index selection, or the syslog listener. We usually create an index that matches the name given, since we've never been able to find the config to set it manually. Any suggestions on how syslog might be set up, or what else I could try searching for?
Hi, I'm trying to run the sample code below to send a message to Splunk, but I'm getting the error "Host not found". Am I doing this right? I'm able to ping the Splunk server (171.134.154.114) from my dev Linux server, and a curl command works successfully (I can see my message in the Splunk dashboard).

doc/html/boost_asio/example/http/client/sync_client.cpp - 1.47.0

./sync_client 171.134.154.114 /services/collector
arg[1]:171.134.154.114
Exception: resolve: Host not found (authoritative) [asio.netdb:1]
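For comparison, a typical HEC curl invocation looks like this (the 8088 port, https scheme, and token header are assumptions based on a default HEC setup; the token placeholder is hypothetical):

```shell
curl -k "https://171.134.154.114:8088/services/collector" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from curl"}'
```

It may be worth checking whether the C++ client is resolving the same scheme and port that the working curl command uses.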
I am having trouble creating a proper drilldown action between two custom ITSI entity dashboards. They both work fine when called by clicking the entity name in the Service Analyzer. The two entity dashboards show data from two custom entity types with some relation to each other, and I want to create navigation between the two dashboards.

I created a normal drilldown action to call the related dashboard. This works somehow, but the token is not handled correctly. For example, I defined the token parameter host = $click.value2$, and in the target dashboard I see |search host=$click.value2$ instead of the real value that should have been handed over in the token. When I use the dashboards outside of ITSI, the drilldown action works fine.

It looks to me like ITSI uses some scripts of its own, and the handover is not made directly to the other entity dashboard but somehow through the entity (_key) and the defined entity type. It would be great if somebody could shed some light on that!