All Posts

Hi @Kim

Please can you share details on setting up indexer discovery for an independent streamfwd installation, as you suggest? This isn't something I'm particularly familiar with; I didn't think it was possible, as streamfwd.conf only takes a list of indexers and an HEC token? https://docs.splunk.com/Documentation/StreamApp/8.1.5/DeployStreamApp/InstallStreamForwarderonindependentmachine

Is there an associated reference for the known issue so I can look into this further?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

Hi @Priya

Please can you confirm: you say that logs stop appearing. Is it that logs you were previously able to see are no longer visible? Or do logs come in (and stay visible) but new ones then stop arriving?

If logs are being indexed but do not remain searchable for long, this could indicate an issue with the indexes.conf configuration (e.g. archiving/freezing too soon). If logs start being indexed but then seem to pause (and the old logs are still available/visible in Splunk), this suggests a blockage either receiving the logs or sending the logs.

What is the source of the logs? Can you check the _internal logs for any errors, specifically around ingestion? Can you see the _internal logs for the hosts sending your data?

Sorry for all the questions, but this will help us understand the problem better and prevent too much speculation!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
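
If it helps, a starting point for the ingestion-error check might look like this (just a sketch; the component list is an assumption you should widen or narrow for your environment):

index=_internal sourcetype=splunkd log_level IN (WARN, ERROR) component IN (TcpInputProc, TcpOutputProc, AggregatorMiningProcessor, DateParserVerbose)
| stats count by host, component

And to confirm the sending hosts are still reporting in at all:

index=_internal | stats latest(_time) as last_seen by host
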
Hi @thanh_on,

Different data retention is one of the most common reasons to have different indexes. In this case you have to define the frozenTimePeriodInSecs option in indexes.conf for each index you created. In your case, 180 days is 15552000 seconds and 365 days is 31536000 seconds:

[idx_fgt]
<other settings>
frozenTimePeriodInSecs = 15552000

[idx_windows]
<other settings>
frozenTimePeriodInSecs = 31536000

Past this period, data can be deleted or moved offline (copied to a different location). Remember that retention policies are applied at the index level on buckets; in other words, you could have data that exceeds the retention period because it sits in a bucket where at least one event has a timestamp inside the retention period. Only when the latest event in a bucket exceeds the retention period is the bucket deleted or moved offline.

Ciao.
Giuseppe
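
If you want to see how this plays out on disk, dbinspect shows the time span of each bucket per index (a quick sketch; substitute your own index name):

| dbinspect index=idx_fgt
| eval age_days = round((now() - endEpoch) / 86400, 1)
| table bucketId, state, startEpoch, endEpoch, age_days

A bucket only freezes once endEpoch (its latest event) passes the retention period, so startEpoch can sit well outside it.
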
In addition to @PrewinThomas's breakdown method, I can suggest relative_time to take advantage of Splunk's format strings.

| eval offset = replace('Time', "Ended: (\d+d)(\d+h)(\d+m)(\d+s)", "+\1+\2+\3+\4")
| eval time_sec = relative_time(0, offset)

relative_time's offset requires a + or a - before every time unit, so we transform 0d1h55m0s into +0d+1h+55m+0s.
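
You can sanity-check the transformation with a throwaway run (a minimal example; the field name Time matches the original question):

| makeresults
| eval Time = "Ended: 0d1h55m0s"
| eval offset = replace('Time', "Ended: (\d+d)(\d+h)(\d+m)(\d+s)", "+\1+\2+\3+\4")
| eval time_sec = relative_time(0, offset)

This produces offset=+0d+1h+55m+0s and time_sec=6900, the expected duration in seconds.
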
The Splunk index is flowing, but it is not reflected in the application. We are currently investigating an issue where logs stop appearing in the UI after a short period of time. For example, in the apps_log, logs are visible for a few minutes but then stop showing up. This behavior is inconsistent across environments: in some, logs are visible as expected, while in others they're missing entirely. The Splunk index appears to be receiving the data, but it's not being reflected in the application UI. We're not yet sure what's causing this discrepancy and would appreciate any insights or assistance you can provide.

There are several ways to do this. A traditional method is to backfill every day.

index IN (index1, index2, index3, index4)
| bin span=1d _time
| chart count over _time by index
| append
    [ makeresults
    | timechart span=1d@d count
    | fields - count]
| stats values(*) as * by _time
| fillnull

Note I replaced your last line with fillnull. (This command is worth learning.)

Another somewhat sneaky way to do this depends on the real stats you perform. If it is a simple stat such as count, you can just "sneak in" something that always has some value, such as the _internal index.

index IN (index1, index2, index3, index4, _internal)
| bin span=1d _time
| chart count over _time by index
| fields - VALUE_internal
| fillnull

@ma620k

Did you define it as an output variable in the custom code block's configuration? Your variable is likely not being exported because of this.

Reference - https://docs.splunk.com/Documentation/SOARonprem/6.3.1/Playbook/CustomFunction

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!

@Kim

This is a known issue with independent Streamfwd when used with an indexer cluster: after a cluster bundle push and indexer restarts, Streamfwd may lose its knowledge of all available indexers and start sending all data to a single indexer.

Workaround - restart Streamfwd after indexer restarts.

Solution - use indexer discovery. Indexer discovery is the recommended way for forwarders (including Streamfwd) to dynamically learn the available indexers from the master node; see the sketch below.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
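
For reference, on a standard forwarder indexer discovery is configured in outputs.conf along these lines (placeholder values throughout; whether an independent streamfwd installation honors this is exactly the open question in this thread, so treat it as a starting point rather than a confirmed recipe):

[indexer_discovery:cluster1]
master_uri = https://<manager_node>:8089
pass4SymmKey = <key>

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group
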
@thanh_on

Yes, you can use license_usage.log to calculate the daily data volume per index. A simple query to check by index:

index=_internal source=*license_usage.log type="Usage" idx=*
| bin _time span=1d
| stats sum(b) as bytes by _time, idx
| eval GB=round(bytes/1024/1024/1024, 2)

@rafiq_rehman

These errors mean the new member's KV Store is still in standalone mode.

1. Initialize the SHC config on the new member:

splunk init shcluster-config \
  -auth <admin_user>:<admin_pass> \
  -mgmt_uri https://<new_member_host>:8089 \
  -replication_port <replication_port> \
  -replication_factor <factor> \
  -conf_deploy_fetch_url https://<deployer_host>:8089 \
  -secret <pass4SymmKey> \
  -shcluster_label <label>

2. Add the new member to the cluster:

splunk add shcluster-member -current_member_uri https://<existing_member_or_captain>:8089

3. Confirm pass4SymmKey and mgmt_uri in $SPLUNK_HOME/etc/system/local/server.conf
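
Once the member is added and restarted, you can verify that it has joined and that the KV store has left standalone mode (both are standard splunk CLI commands; run them on the new member with admin credentials):

splunk show shcluster-status -auth <admin_user>:<admin_pass>
splunk show kvstore-status -auth <admin_user>:<admin_pass>
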
@dtaylor

strptime expects a date/time string, not a duration. Your field (Ended: 0d1h55m0s) is a duration (days, hours, minutes, seconds), not an absolute date/time.

Try the below:

| rex field=Time "Ended: (?<days>\d+)d(?<hours>\d+)h(?<minutes>\d+)m(?<seconds>\d+)s"
| eval duration = (days*86400) + (hours*3600) + (minutes*60) + seconds

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
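
As a quick check with the example value: (0*86400) + (1*3600) + (55*60) + 0 = 6900 seconds, which matches the result the question expects.
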
Hopefully I've only got a small problem this time, but I've had no luck fixing it despite hours of trying. All I'm trying to do is convert a string time field to unix using strptime. This is my time field:

Ended: 0d1h55m0s

I've been trying to convert it to unix using the following command:

| eval time_sec = strptime('Time', "Ended: %dd%Hh%Mm%Ss")

For clarity, this is the full search:

| inputlookup metrics.csv
| eval occurred=strftime(strptime(occurred,"%a, %d %b %Y %T %Z"), "%F %T %Z")
| eval closed=strftime(strptime(closed,"%a, %d %b %Y %T %Z"), "%F %T %Z")
| eval time_sec = strptime('Time', "Ended: %dd%Hh%Mm")
| where strptime(occurred, "%F %T") >= strptime("2025-05-01 00:00:00", "%F %T") AND (isnull(closeReason) OR closeReason="Resolved")
| fillnull value=Resolved closeReason

The example time I've posted above, 0d1h55m0s, should ideally convert to 6900 (seconds).

My bad, I removed Splunk and just rebuilt it fresh. Still the same issue: the KV store has been stuck in "starting" status. This is what I see in mongod.log:

2025-05-29T03:22:15.751Z I CONTROL [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2025-05-29T03:22:15.747Z I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured

It's the existing SHC; I'm just trying to add a new member. The captain is working and port 8191 is also open. This is what I see in the mongod log:

2025-05-29T03:22:15.751Z I CONTROL [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2025-05-29T03:22:15.747Z I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured

Hi @livehybrid,

Thank you for your answer. FYI, I have separated the indexes by device or vendor because each device has different data retention policies. Because of this, I need to calculate the daily data volume and configure a stanza for each index in indexes.conf. For example:

[idx_fgt] (180 days searchable)
[idx_windows] (365 days searchable)

Can I use license_usage.log per index for this situation?

Thanks & best regards.

Hi @gcusello,

Thank you for your answer. FYI, I have separated the indexes by device or vendor because each device has different data retention policies. Because of this, I need to calculate the daily data volume and configure a stanza for each index in indexes.conf. For example:

[idx_fgt] (180 days searchable)
[idx_windows] (365 days searchable)

Do you have any suggestions?

Thanks & best regards.

"Thanks a lot for the detailed info — I really appreciate it! I'm fully on board and diving into it. Great to have your attention on this. By the way, the DS server is running on Linux."
Had a similar issue. https://regexr.com helped me figure it out.

Hi @SCK

I do not know much about Snowflake, but it seems you might be able to create a User-Defined Function (UDF) and then use Python to call the Splunk REST API to pull your data; see the sketch below.

If this isn't an option then you might be able to achieve the same results by using something like the Amazon S3 Sink Alert Action for Splunk to send your output from Splunk into S3 before then importing it into Snowflake.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
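
If you do go the REST route, the export endpoint is the usual starting point. A minimal sketch from the command line (the host, credentials, and search string are placeholders to substitute):

curl -k -u <user>:<password> https://<splunk_host>:8089/services/search/jobs/export \
  -d search="search index=main earliest=-24h | head 100" \
  -d output_mode=json

The same call could be made from Python inside a Snowflake UDF with any HTTP client, subject to whatever external network access Snowflake permits.
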
Hi @Kim

Are you able to post the streamfwd logs to see if there is anything in there which might suggest why it isn't re-establishing the connection to the indexers listed?

Does a restart of streamfwd reinstate the connection to the other indexer nodes?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing