All Posts

Hi @livehybrid , Thank you for your answer. FYI, I have separated the indexes per device or vendor because each device has a different data retention policy. Because of that, I need to calculate the daily data volume and configure a stanza for each index in indexes.conf. For example:

[idx_fgt] (180 days searchable)
[idx_windows] (365 days searchable)

Can I use *license_usage.log* per index for this situation? Thanks & best regards.
Hi @gcusello , Thank you for your answer. FYI, I have separated the indexes per device or vendor because each device has a different data retention policy. Because of that, I need to calculate the daily data volume and configure a stanza for each index in indexes.conf. For example:

[idx_fgt] (180 days searchable)
[idx_windows] (365 days searchable)

Do you have any suggestion? Thanks & best regards.
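For illustration, a minimal sketch of the approach being asked about, assuming the standard license_usage.log Usage fields (b is bytes indexed, idx is the index name) and that this runs against the license manager's _internal data:

index=_internal source=*license_usage.log* type=Usage (idx=idx_fgt OR idx=idx_windows)
| eval GB=round(b/1024/1024/1024, 3)
| timechart span=1d sum(GB) as daily_GB by idx

The matching indexes.conf stanzas would then set retention per index (180 days = 15552000 seconds, 365 days = 31536000 seconds):

[idx_fgt]
frozenTimePeriodInSecs = 15552000

[idx_windows]
frozenTimePeriodInSecs = 31536000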
"Thanks a lot for the detailed info — I really appreciate it! I'm fully on board and diving into it. Great to have your attention on this. By the way, the DS server is running on Linux."
I had a similar issue. https://regexr.com helped me figure it out.
Hi @SCK  I do not know much about Snowflake, but it seems you might be able to create a User Defined Function (UDF) and then use Python to call the Splunk REST API to pull your data. If this isn't an option, you might be able to achieve the same results by using something like Amazon S3 Sink Alert Action For Splunk to send your output from Splunk into S3 before then importing it into Snowflake.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
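For illustration, a minimal sketch of the REST side of that idea in Python, with a hypothetical host, credentials, and search; the /services/search/jobs/export endpoint streams results back without having to poll a search job:

import requests

# Stream Splunk search results as JSON over the REST API.
# Host, credentials, and the search string are placeholders.
resp = requests.post(
    "https://splunk.example.com:8089/services/search/jobs/export",
    auth=("api_user", "api_password"),
    data={
        "search": "search index=main earliest=-24h | table _time host _raw",
        "output_mode": "json",
    },
    stream=True,
    verify=True,  # point at your CA bundle if the cert is internally signed
)
resp.raise_for_status()
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))  # one JSON object per result row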
Hi @Kim  Are you able to post the streamfwd logs to see if there is anything in there which might suggest why it isn't re-establishing the connection to the indexers listed? Does a restart of streamfwd reinstate the connection to the other indexer nodes?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
That's a great solution. If you know you don't need/want events with status_code=3xx, then you can (but don't have to) filter them out in the base query. Filtering out events and fields you know you don't need will help the search perform better.
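For illustration, a sketch of that base-query filter; the index/sourcetype here are placeholders, the evals come from the related post further down this page, and the wildcard form assumes status_code is extracted as a string:

index=web sourcetype=access_combined NOT status_code=3*
| eval TwoXXonly=if(status_code>=200 and status_code<300,1,0)
| eval FourXXonly=if(status_code>=400 and status_code<500,1,0)
| eval FiveXXonly=if(status_code>=500 and status_code<600,1,0)
| stats sum(TwoXXonly) as Total_2xx, sum(FourXXonly) as Total_4xx, sum(FiveXXonly) as Total_5xx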
Thank you. I would set this as the solution as well, but I can only mark one. Thank you for your time.
Thank you both for the prompt response. This is the impression I was under as well when speaking with our Splunk rep. Now that we are upgrading, the mismatch of OS versions will only last a week or two at most, but I wanted to confirm that data availability/searchability wouldn't be affected. Thank you both for the resources and the in-depth answers given.
Ah! Okay, this is because you're using earliest/latest rather than the time picker; we can fix that. Try the below table section instead:

| table Source [
    | makeresults count=7
    | streamstats count as month_offset
    | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now()
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch=relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search
]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
You can find earlier discussions on this topic within Answers. It's... a bit tricky. There are at least several different things at play here.

1. The technicalities - as Splunk brings with it most of the things it requires and generally uses just the "bare OS" level of what the OS provides, there shouldn't be a problem with running different OS versions (even different distros - I ran a combined CentOS/SuSE environment for some time). It should work.

2. The maintainability - as different distros and different releases have some different mechanisms (like different startup scripts, different ways of configuring the system and so on), a mixed environment is much more prone to errors and misconfiguration.

3. The supportability - the official docs say that all cluster members must run the same "OS and version". And here's where it gets really tricky - there is no single official explanation of what this means, so while technically it could just mean that all boxes must be Linux-based and running a 64-bit OS version (and that could really be the bare minimum to make the cluster work), it can also be understood as "all boxes must use the same Linux distro and they all must be running the same release".

So, long story short - from the technical point of view it usually doesn't make much of a difference whether you're running RHEL9 across your whole environment or if some boxes are still at RHEL8 (if you already have a RHEL8 environment and want to migrate to RHEL9, at some point some boxes will already be migrated and some will not), but if you raise a support case and support finds out that you have a mixed setup, they might want to tell you to "get your environment in order and align your OS versions".
Hi @Abass42  From my experience you can run a Splunk cluster (indexer or search head) with a mix of RHEL 7 and RHEL 8 hosts, including having your cluster manager on RHEL 8 while some peers remain on RHEL 7, as long as all Splunk nodes are running the same supported Splunk version. The underlying OS version does not affect Splunk clustering compatibility, provided both OS versions are supported by the Splunk version in use, although it becomes much more complicated if the underlying OSes are from different families, e.g. Windows vs. Linux!

That is from a technical standpoint, though. While mixed OS versions may be supported during migration periods, the recommended long-term state is to standardize all nodes in the cluster on the same, newer supported OS version. You run the risk of being in an unsupported state if you remain on mixed versions.

OS differences alone should not impact search or cluster management functionality; Splunk communicates via supported network protocols, not OS-specific mechanisms. You have possibly already seen this, but it's worth reviewing: https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/9.2/plan-your-splunk-enterprise-installation/system-requirements-for-use-of-splunk-enterprise-on-premises

I would also recommend that, after migrating all nodes to RHEL 8, you revalidate ulimits etc. (see the sketch after this post). Avoid running mismatched Splunk software versions across cluster nodes/indexers where possible, to avoid different performance across different nodes. If you use custom scripts or apps, validate their dependencies (Python, OS libraries) for OS compatibility.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
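For illustration, the kind of ulimit settings meant above, as a sketch with assumed values in the ballpark of Splunk's documented recommendations (verify against the system requirements for your version and your own sizing):

# /etc/security/limits.conf entries for the user running Splunk
# (values are assumptions, not a recommendation for your environment)
splunk soft nofile 64000
splunk hard nofile 64000
splunk soft nproc 16000
splunk hard nproc 16000

After migrating, the effective values can be confirmed by running ulimit -n and ulimit -u in a shell as the splunk user.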
I am upgrading from RHEL 7 to RHEL 8 in light of the end of support for RHEL 7. We have a clustered environment, with two sites per cluster for each of the SH and indexer clusters. All Splunk servers are on 9.2.0.1.

My question is: can we run a RHEL 8 cluster master and have a mixed environment of RHEL 8 and RHEL 7 servers within the cluster? I know there is a hierarchy for the servers, but I wasn't sure to what extent the OS affected the application. With the upgrade, I might have:

- a RHEL 8 indexer cluster manager while the indexers themselves are on RHEL 7;
- a RHEL 8 SH cluster manager while the SHs may be on RHEL 7.

How the in-place upgrade goes will determine how many servers I upgrade at once. These are all Azure or VMware servers. Would any search functionality for any of the search peers be affected by differing OS versions? Thank you for any clarity.
@livehybrid , I tried the same query as you suggested; not sure why it is giving me data only for May:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand
    where (Message_Log.filter.routeDirection="inbound")
    AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*")
    earliest=-6mon@mon latest=now
    by _time
| eval Source="Email"
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source [
    | makeresults count=60
    | streamstats count as month_offset
    | addinfo
    | eval start_epoch=info_min_time, end_epoch=info_max_time
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch=relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search
]
Also remember that you should not have your environment sized "tightly" - RF and SF should be smaller than your number of indexers. Otherwise your cluster will not be able to rebalance data in case of indexer failure.
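For illustration, a minimal sketch of the manager-side settings this refers to; with, say, five indexer peers, replication_factor=3 and search_factor=2 leave headroom to re-replicate buckets after a peer failure (values here are assumptions, not recommendations):

# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 3
search_factor = 2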
Ok. Back to square one - what (in terms of the business goal of your search, not the technical means you're trying to use) is your search supposed to achieve?
Hello, colleagues. I'm using an independent stream forwarder installed on Ubuntu 22.04.05 as a service. After updating to 8.1.5, bytes_in, bytes_out, packets_in, and packets_out are always equal to zero. If I stop the service, change /opt/streamfwd/bin/streamfwd from 8.1.5 back to 8.1.3, and start the service again, everything is ok. Has anybody run into this? Thanks.

{
  app_tag: PANA-L7-PEN : xxxxxxxxx
  bytes_in: 0
  bytes_out: 0
  dest_ip: x.x.x.x
  dest_port: 55438
  endtime: 2025-05-28T15:01:26Z
  event_name: netFlowData
  exporter_ip: x.x.x.x
  exporter_time: 2025-May-28 15:01:26
  exporter_uptime: 3148584010
  flow_end_reason: 3
  flow_end_rel: 0
  flow_start_rel: 0
  fwd_status: 64
  input_snmpidx: 168
  netflow_elements: [ ... ]
  netflow_version: 9
  observation_domain_id: 1
  output_snmpidx: 127
  packets_in: 0
  packets_out: 0
  protoid: 6
  selector_id: 0
  seqnumber: 2278842767
  src_ip: x.x.x.x
  src_port: 9997
  timestamp: 2025-05-28T15:01:26Z
  tos: 0
}
Hello, colleagues. I am using an independent streamfwd as a service installed on Linux Ubuntu 22.04.05. Streamfwd gets its settings from the stream app, including the indexers list. Everything is ok and streamfwd balances data across all indexers, but if I make a push from the master node to the indexer cluster and the indexers reboot, the data balancing breaks; after that, streamfwd sends data to just one indexer. I can't find how to fix this. Please help. Thanks.
Did you resolve this issue, @gazoscreek? I currently have the same problem when upgrading from 9.2 to 9.4.
Hi Rich, since I am breaking them into separate columns, I used an if condition:

| eval TwoXXonly=if(status_code>=200 and status_code<300,1,0)
| eval FourXXonly=if(status_code>=400 and status_code<500,1,0)
| eval FiveXXonly=if(status_code>=500 and status_code<600,1,0)
| stats sum(TwoXXonly) as Total_2xx, sum(FourXXonly) as Total_4xx, sum(FiveXXonly) as Total_5xx by date_only, org, cId, pPath, apie, apiPct, envnt
| table <list of fields>

Say, for example, in my data today I don't have 3xx events, but if they show up tomorrow, do I need to explicitly filter them out, as I don't need them at all? I have not used status_code in the by clause. Just confused - should I use a filter to explicitly exclude 3xx?