All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hey all, I have a question regarding license enforcement. We currently have a "50 GB (No enforcement) Enterprise Term license" and have been exceeding it for 10 days. I have already read the "License Enforcement FAQ" and other posts, but it's not 100% clear to me whether there will be enforcement in our environment after 45 warnings in 60 days. I understand that below 100 GB there is conditional enforcement, but is this also the case with the "no enforcement" key we have? What will happen to our environment / search function if we exceed the volume more than 45 times in 60 days? Best regards, Cimey
Hello @judithsr , in that context, what would happen when there is a 50 GB Enterprise license with the "no enforcement" key? (so below 100 GB) Will search still be turned off in that case? Best regards
Hi @PickleRick , I also tried to associate the transformation to a sourcetype and it doesn't work.

In props.conf:

[cisco:ios]
TRANSFORMS-relay_hostname = relay_hostname

[cisco:ise:syslog]
TRANSFORMS-relay_hostname = relay_hostname

[f5:bigip:ltm:tcl:error]
TRANSFORMS-relay_hostname = relay_hostname

[f5:bigip:syslog]
TRANSFORMS-relay_hostname = relay_hostname

[fortigate_event]
TRANSFORMS-relay_hostname = relay_hostname

[fortigate_traffic]
TRANSFORMS-relay_hostname = relay_hostname

[infoblox:audit]
TRANSFORMS-relay_hostname = relay_hostname

[infoblox:dhcp]
TRANSFORMS-relay_hostname = relay_hostname

[infoblox:dns]
TRANSFORMS-relay_hostname = relay_hostname

[infoblox:file]
TRANSFORMS-relay_hostname = relay_hostname

[juniper:junos:firewall]
TRANSFORMS-relay_hostname = relay_hostname

[juniper:junos:switch]
TRANSFORMS-relay_hostname = relay_hostname

[pan:system]
TRANSFORMS-relay_hostname = relay_hostname

[pan:traffic]
TRANSFORMS-relay_hostname = relay_hostname

[pan:userid]
TRANSFORMS-relay_hostname = relay_hostname

[pps_log]
TRANSFORMS-relay_hostname = relay_hostname

In transforms.conf:

[relay_hostname]
REGEX = /var/log/remote/([^/]+)/.*
FORMAT = relay_hostname::$1
WRITE_META = true
SOURCE_KEY = MetaData:Source
REPEAT_MATCH = false

I also tried:

[relay_hostname]
INGEST_EVAL = relay_hostname = replace(source, "(/var/log/remote/)([^/]+)(/.*)","\2")

but both of these failed! Thank you for your support; do you have any other idea where to look for the issue? Ciao. Giuseppe
How can I get SVC usage for each installed app and add-on in Splunk Cloud?
I'm not sure that approach would work? Are you able to share some SPL with me so I can fully understand what your approach is. Here's some sample data. I've worked out what the values should be using Excel with a good ol' fashioned COUNTIF.

DepOrArr | displayed_flyt_no | ASRT | asrt_epoch | ATOT_ALDT | runway_epoch | queue
A | flight1 | | | 29/02/2024 00:52 | 1709167953 |
A | flight2 | | | 29/02/2024 01:41 | 1709170889 |
A | flight3 | | | 29/02/2024 05:08 | 1709183310 |
A | flight4 | | | 29/02/2024 05:33 | 1709184834 |
A | flight5 | | | 29/02/2024 05:36 | 1709185003 |
A | flight6 | | | 29/02/2024 05:40 | 1709185247 |
A | flight7 | | | 29/02/2024 05:51 | 1709185889 |
A | flight8 | | | 29/02/2024 06:01 | 1709186519 |
A | flight9 | | | 29/02/2024 06:04 | 1709186679 |
D | flight10 | 29/02/2024 05:49 | 1709185740 | 29/02/2024 06:08 | 1709186895 | 3
D | flight11 | 29/02/2024 05:57 | 1709186220 | 29/02/2024 06:10 | 1709187000 | 3
D | flight12 | 29/02/2024 06:03 | 1709186580 | 29/02/2024 06:14 | 1709187280 | 3
A | flight13 | | | 29/02/2024 06:19 | 1709187540 |
D | flight14 | 29/02/2024 06:15 | 1709187300 | 29/02/2024 06:29 | 1709188186 | 1
D | flight15 | 29/02/2024 06:16 | 1709187360 | 29/02/2024 06:31 | 1709188261 | 2
D | flight16 | 29/02/2024 06:19 | 1709187540 | 29/02/2024 06:32 | 1709188338 | 3
D | flight17 | 29/02/2024 06:22 | 1709187720 | 29/02/2024 06:34 | 1709188485 | 3
D | flight18 | 29/02/2024 06:31 | 1709188260 | 29/02/2024 06:42 | 1709188973 | 3
A | flight19 | | | 29/02/2024 06:44 | 1709189074 |
D | flight20 | 29/02/2024 06:32 | 1709188320 | 29/02/2024 06:46 | 1709189180 | 4
D | flight21 | 29/02/2024 06:38 | 1709188680 | 29/02/2024 06:47 | 1709189278 | 3
A | flight22 | | | 29/02/2024 06:49 | 1709189378 |
D | flight23 | 29/02/2024 06:27 | 1709188020 | 29/02/2024 06:51 | 1709189475 | 9
A | flight24 | | | 29/02/2024 06:52 | 1709189531 |
D | flight25 | 29/02/2024 06:36 | 1709188560 | 29/02/2024 06:54 | 1709189648 | 7
A | flight26 | | | 29/02/2024 06:55 | 1709189707 |
D | flight27 | 29/02/2024 06:43 | 1709188980 | 29/02/2024 06:56 | 1709189807 | 8
A | flight28 | | | 29/02/2024 06:57 | 1709189868 |
D | flight29 | 29/02/2024 06:45 | 1709189100 | 29/02/2024 06:59 | 1709189970 | 9
A | flight30 | | | 29/02/2024 07:01 | 1709190080 |
D | flight31 | 29/02/2024 06:46 | 1709189160 | 29/02/2024 07:03 | 1709190229 | 11
D | flight32 | 29/02/2024 06:49 | 1709189340 | 29/02/2024 07:04 | 1709190292 | 10
D | flight33 | 29/02/2024 06:47 | 1709189220 | 29/02/2024 07:06 | 1709190373 | 12
D | flight34 | 29/02/2024 06:53 | 1709189580 | 29/02/2024 07:07 | 1709190447 | 9
D | flight35 | 29/02/2024 06:50 | 1709189400 | 29/02/2024 07:09 | 1709190577 | 12
A | flight36 | | | 29/02/2024 07:10 | 1709190630 |
D | flight37 | 29/02/2024 06:56 | 1709189760 | 29/02/2024 07:12 | 1709190720 | 10
A | flight38 | | | 29/02/2024 07:13 | 1709190798 |
D | flight39 | 29/02/2024 06:55 | 1709189700 | 29/02/2024 07:14 | 1709190892 | 13
A | flight40 | | | 29/02/2024 07:15 | 1709190939 |
D | flight41 | 29/02/2024 06:57 | 1709189820 | 29/02/2024 07:16 | 1709191019 | 13
D | flight42 | 29/02/2024 06:45 | 1709189100 | 29/02/2024 07:18 | 1709191096 | 22
A | flight43 | | | 29/02/2024 07:18 | 1709191123 |
D | flight44 | 29/02/2024 07:04 | 1709190240 | 29/02/2024 07:20 | 1709191225 | 12
D | flight45 | 29/02/2024 07:06 | 1709190360 | 29/02/2024 07:21 | 1709191299 | 12
A | flight46 | | | 29/02/2024 07:22 | 1709191364 |
D | flight47 | 29/02/2024 07:07 | 1709190420 | 29/02/2024 07:24 | 1709191474 | 13
A | flight48 | | | 29/02/2024 07:25 | 1709191548 |
D | flight49 | 29/02/2024 06:59 | 1709189940 | 29/02/2024 07:27 | 1709191640 | 20
A | flight50 | | | 29/02/2024 07:28 | 1709191701 |
D | flight51 | 29/02/2024 06:58 | 1709189880 | 29/02/2024 07:29 | 1709191786 | 22
A | flight52 | | | 29/02/2024 07:31 | 1709191881 |
D | flight53 | 29/02/2024 07:10 | 1709190600 | 29/02/2024 07:34 | 1709192073 | 17
D | flight54 | 29/02/2024 07:11 | 1709190660 | 29/02/2024 07:35 | 1709192137 | 17
A | flight55 | | | 29/02/2024 07:36 | 1709192194 |
D | flight56 | 29/02/2024 07:12 | 1709190720 | 29/02/2024 07:38 | 1709192299 | 19
A | flight57 | | | 29/02/2024 07:38 | 1709192339 |
D | flight58 | 29/02/2024 07:16 | 1709190960 | 29/02/2024 07:40 | 1709192441 | 17
A | flight59 | | | 29/02/2024 07:41 | 1709192511 |
D | flight60 | 29/02/2024 07:04 | 1709190240 | 29/02/2024 07:43 | 1709192613 | 28
A | flight61 | | | 29/02/2024 07:44 | 1709192692 |
D | flight62 | 29/02/2024 07:17 | 1709191020 | 29/02/2024 07:46 | 1709192788 | 20
D | flight63 | 29/02/2024 07:19 | 1709191140 | 29/02/2024 07:47 | 1709192845 | 19
A | flight64 | | | 29/02/2024 07:48 | 1709192888 |
D | flight65 | 29/02/2024 07:23 | 1709191380 | 29/02/2024 07:49 | 1709192986 | 18
D | flight66 | 29/02/2024 07:27 | 1709191620 | 29/02/2024 07:50 | 1709193048 | 17
A | flight67 | | | 29/02/2024 07:51 | 1709193089 |
Hi @isoutamo , the use of the [default] stanza is a must because I have many sourcetypes and I would like to avoid writing a stanza for each of them. For this reason I also tried to use [source::/var/log/*], but it didn't work. Anyway, there isn't any other HF before this one, because this is an rsyslog server that receives syslogs. Thank you; is there any other check that I could try? Now I'm testing with a fixed string to understand whether the issue is in the regex or in the [default] stanza. Ciao. Giuseppe
Hi Alessandro. If that's your actual copy-pasted excerpt from props.conf, you have transforms-rebuild = group1 instead of TRANSFORMS-rebuild = group1 (yes, case does matter here).
The current bundle directory contains a large lookup file that might cause bundle replication to fail. The path to the directory is /opt/splunk/var/run/sh-i-***********.splunlcloud.com-17*********.bundle. It's a cloud environment. I checked the HF and found nothing on the path to tar. There seems to be nothing on the community for cloud environments; I found some solutions for on-prem. Thanks for your help.
UFs are supported on a relatively wide range of equipment and OS versions (and even if the current UF doesn't support your older hardware or OS release, you can still use an older version of the UF within the compatibility boundaries - and sometimes even beyond that, but I wouldn't advise running UFs that old anyway). If I'm not mistaken, the error is from the service trying to connect to a running splunkd instance. Check your splunkd.log to see what's going on. Also - how did you install that forwarder? RPM? Or did you just unpack the tgz?
Any update on this? We have the same issue here.
I think I'd approach it from a completely different side. I wouldn't try to track down single ATOT_ALDT occurrences. Just count how many ATOT_ALDTs occurred before "your" ASRT and then check how many there were at the time of your ATOT_ALDT. Depending on whether you include your own ATOT_ALDT in that count or not, you might need to correct the result by one, but the answer to your question would simply be count@ATOT_ALDT - count@ASRT. So use streamstats to count ATOT_ALDTs, then use stats/eventstats or even transaction to match ASRT with ATOT_ALDT for a single flight and calculate the difference in counts.
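Untested against your data, but based on the field names in your sample (DepOrArr, displayed_flyt_no, asrt_epoch, runway_epoch), the idea above could be sketched roughly like this - each departure is expanded into two pseudo-rows (one at its ASRT, one at its ATOT_ALDT), all rows are sorted by time, a running count of runway movements is kept, and the queue is the difference between the counts at the two timestamps (minus one, to exclude the flight's own movement):

```
<your base search>
| eval times=if(DepOrArr="D",
      mvappend("ASRT,".asrt_epoch, "ATOT,".runway_epoch),
      "ATOT,".runway_epoch)
| mvexpand times
| eval marker=mvindex(split(times, ","), 0),
       t=tonumber(mvindex(split(times, ","), 1))
| sort 0 t
| streamstats count(eval(marker="ATOT")) AS movements
| stats range(movements) AS movement_diff BY displayed_flyt_no DepOrArr
| eval queue=if(DepOrArr="D", movement_diff - 1, null())
```

This is a sketch of the counting technique, not a drop-in answer - check the off-by-one correction against a few known rows (e.g. flight10 should come out as 3) before trusting it.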
Argh. That's ugly data. You first need to extract the array part:

| spath inboundErrorSummary{}

Then you have to split it into separate rows:

| mvexpand inboundErrorSummary{}

And then you have to parse the JSON again:

| spath input=inboundErrorSummary{}

At this point you'll have separate fields called "name" and "value" on each result row, and you'll be able to do stats/chart/timechart/whatever you want with it.
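Putting those steps together with an aggregation at the end (the hourly span is an assumption based on the desired output in the question; adjust it as needed), the whole pipeline could look like:

```
<your base search>
| spath inboundErrorSummary{}
| mvexpand inboundErrorSummary{}
| spath input=inboundErrorSummary{}
| timechart span=1h sum(value) BY name
```

This should give one column per error name (400BadRequestMalformedHeader, 501NotImplementedMethod, otherErrorResponses) with the summed values per hour.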
To monitor Apache Hadoop clusters with Splunk Enterprise 6 or later versions, you can follow these steps:

1. Configure data ingestion: Use the Hadoop Connect feature in Splunk Enterprise to collect data from the Hadoop cluster. This feature allows you to ingest data from various sources, including Hadoop Distributed File System (HDFS), YARN, MapReduce, and Hive.

2. Set up Hadoop monitoring inputs: Configure Splunk to monitor the Hadoop cluster by specifying the relevant log files and metrics to collect. You can use the Hadoop App for Splunk, which provides pre-configured inputs and dashboards specifically designed for monitoring Hadoop clusters.

3. Monitor performance and troubleshoot issues: Leverage the monitoring capabilities of Splunk Enterprise to gain insights into the performance of your Hadoop cluster. Use the pre-built dashboards and reports to visualize and analyze various metrics, such as resource utilization, job status, data ingestion rates, and error logs. This allows you to identify bottlenecks, troubleshoot issues, and optimize cluster performance.

4. Set up alerts and notifications: Configure alerts in Splunk Enterprise to receive real-time notifications when specific events or conditions occur in the Hadoop cluster. This enables you to proactively address issues and ensure the smooth operation of the cluster.

By following these steps, you can effectively monitor Apache Hadoop clusters using Splunk Enterprise 6 or later versions. For more detailed instructions and best practices, you can refer to the Splunk community forum, where users have discussed how to monitor Hadoop with Splunk. Hope this helps! For more details visit our website: https://www.strongboxit.com
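For step 2, a minimal inputs.conf sketch for a forwarder on a Hadoop node might look like the following - note that the log path, sourcetype, and index names here are assumptions for illustration (where your distribution writes its logs varies; check your hadoop-env.sh / HADOOP_LOG_DIR):

```
# inputs.conf - hypothetical example; adjust path/index/sourcetype to your environment
[monitor:///var/log/hadoop]
recursive = true
sourcetype = hadoop:log
index = hadoop
disabled = false
```

After deploying this to the forwarder, the NameNode/DataNode/ResourceManager logs under that directory would be searchable in the chosen index.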
I didn't notice that it was in the [default] stanza. I'm not sure, but I seem to recall there was something about it and applying a "default transform". Stupid question, but just to be on the safe side - you don't have any HF before this one? (So you're doing all this on the parsing component, not just getting already-parsed data from upstream, right?) Oh, and remember that if you're using indexed extractions, they are parsed at the UF, so your transforms won't work on them later. Anyway, assuming that it's done in the proper spot in the path, I'd try something like this to verify that the transform is being run at all:

[relay_hostname]
REGEX = .
FORMAT = relay_hostname::constantvalue
WRITE_META = true
SOURCE_KEY = MetaData:Source
REPEAT_MATCH = false

Anyway, with (/var/log/remote/)([^/]+)(/.*) you don't have to capture either the first or the last group. You only need to capture the middle part, so your regex can just as well be /var/log/remote/([^/]+)/.*
Yes, eventstats can indeed sometimes be used when you need to retain the original events, but remember that eventstats is a "heavier" command than a single stats (it has to keep all the events and add the summarized data to each of them, so it potentially needs far more resources than a simple stats; it's also not as well distributable).
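To illustrate the difference with a toy example (the bytes and host field names here are just placeholders): stats collapses the results to one row per group, while eventstats keeps every event and annotates it, which is what lets you compare each event against its group's aggregate:

```
<your base search>
| eventstats avg(bytes) AS avg_bytes BY host
| where bytes > 2 * avg_bytes
```

A plain `stats avg(bytes) BY host` would discard the individual events, so the `where` comparison above would be impossible - that per-event annotation is exactly what you pay the extra resources for.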
Hi,

I have multiple events with the following JSON object:

{
  "timeStamp": "2024-02-29T10:00:00.673Z",
  "collectionIntervalInMinutes": "1",
  "node": "plgiasrtfing001",
  "inboundErrorSummary": [
    { "name": "400BadRequestMalformedHeader", "value": 1 },
    { "name": "501NotImplementedMethod", "value": 2 },
    { "name": "otherErrorResponses", "value": 1 }
  ]
}

I am trying to extract the name/value pairs from the inboundErrorSummary array and display the sum total of all the values with the same name, plotted over time. So the output should be something like:

Date | 400BadRequestMalformedHeader | 501NotImplementedMethod | otherErrorResponses
2024-02-29T10:00:00 | 1 | 2 | 1
2024-02-29T11:00:00 | 10 | 40 | 50

Even a total count of each name field would also work. I am quite new to Splunk queries, so I hope someone can help and also explain the steps of how it's done. Thanks in advance.
At least in the past I have had some issues using [default]. The end result was that I had to move those settings to the actual sourcetype definition; otherwise they didn't take effect as I was hoping. Also, CLONE_SOURCETYPE has some caveats when you want to manipulate it. I think @PickleRick had a case last autumn where we tried to solve the same kind of situation?
Hi @isoutamo , yes, I have some CLONE_SOURCETYPE, but I applied the transformation in props.conf in the default stanza:

[default]
TRANSFORMS-abc = fieldname

and this should be applied to all the sourcetypes. Maybe I could try to apply it to the source:

[source::/var/log/remote/*]
TRANSFORMS-abc = fieldname

Ciao. Giuseppe
Based on your example data etc. this works:

| makeresults
| eval source="/var/log/remote/abc/def.xyx"
| eval relay_hostname = replace(source, "/var/log/remote/([^/]+)/.*","\1")

So it should also work in props.conf! Are you absolutely sure that those sourcetype names are correct in your props.conf and that there aren't any CLONE_SOURCETYPE settings etc. which could lead down the wrong path? You should also check that there are no host or source definitions which override that sourcetype definition.
Hi @isoutamo , I tried your solution:

[relay_hostname]
INGEST_EVAL = relay_hostname = replace(source, "(/var/log/remote/)([^/]+)(/.*)","\2")

with no luck. As I said, I suspect that I'm trying to extract the new field from the source field, which maybe isn't extracted yet! I also tried a transformation at search time, with the same result. Thank you and ciao. Giuseppe