All Posts
Hi @Pooja1  There are a few things to cover off here. I guess the first is: who did the migration? Usually Splunk PS will check that all scheduled searches are running cleanly and without errors before handing over.
Regarding the search - I see there isn't much difference between them, mainly the index you're collecting into.
How have you determined that the search *isn't* running? Have you seen any specific errors in _internal/_audit regarding the search?
Has the proofpoint_summary index been created in Splunk Cloud?
Who is the search owned by - is this a service account/nobody/a specific user?
Do you, and the search owner, have access to the proofpoint_summary index?
Please let me know if you're able to provide some of the answers to this, as it will help pinpoint the issue.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
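As a concrete starting point for the _internal/_audit check mentioned above, searches along these lines can show what the scheduler thinks of the saved search (a sketch; the savedsearch_name value is a placeholder for your actual saved search name):
index=_internal sourcetype=scheduler savedsearch_name="<your saved search>"
| stats count by status
index=_audit action=search savedsearch_name="<your saved search>"
| table _time user info total_run_time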
Hi Team,
On May 20th, we successfully migrated from Splunk On-Prem to Splunk Cloud. We have a scheduled search that runs every 31 minutes, which was functioning correctly in the on-prem environment. However, after the migration, the same search query is no longer working in the cloud environment.
On-prem:
index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id [search index=summary sourcetype=proofpoint_stash earliest=-48m | fields internal_message_id | dedup internal_message_id | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=summary addtime=true source=proofpoint sourcetype=proofpoint_stash
Cloud:
index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id [search index=summary sourcetype=stash earliest=-48m | fields internal_message_id | dedup internal_message_id | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=proofpoint_summary addtime=true source=proofpoint sourcetype=stash
Thanks
Try checking their GitHub page. They have documentation there. https://github.com/livehybrid/TA-aws-trusted-advisor   Regards
Hi @gcusello , As you can see in the 3rd screenshot, the sample time that I ingested is 2025-05-27 17:38:07.991, but in the 2nd screenshot the timestamp changed to 2025-05-23 05:25:50.795 in the field named Loo_time in the results, and I don't know the reason. Also, I have used only the epoch time from the lookup when comparing with the index data, which is already in epoch time. This was my only concern. Can you please help me fix this?
Thanks!
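To help narrow this down, the two timestamps can be rendered side by side; this is only a sketch and assumes the lookup field Loo_time holds epoch seconds:
| eval lookup_time_readable = strftime(Loo_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval event_time_readable = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| table _time event_time_readable Loo_time lookup_time_readable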
Dear @gcusello , Thank you for your advice. Following your recommendation, I did these steps (please correct me if I'm wrong):
1) Using the Splunk clustering daily license usage as the daily data volume (example: 10GB per day)
2) Using the Splunk sizing site to calculate against the retention policy requirement
3) Configuring indexes.conf as you recommended, for each volume:
[volume:hotwarm]
path = /mnt/hotwarm_disk
# 100G
maxVolumeDataSizeMB = 102400
[volume:cold]
path = /mnt/cold_disk
# 200G
maxVolumeDataSizeMB = 204800
# Frozen disk: /mnt/frozen_disk is 410G
[idx]
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
# 90 days searchable
frozenTimePeriodInSecs = 7776000
coldToFrozenDir = /mnt/frozen_disk/defaultdb/frozendb
Thanks & Best regards.
Thank you so much! Understood.
Hi @thanh_on , yes, obviously: I hinted only at the retention period.
Only one hint: don't define the capacity of each index, but create a volume that will contain all your indexes and define the max volume dimension. In this way, you can dynamically manage index dimensions.
For volume creation and configuration see https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Indexesconf#indexes.conf.spec
This is an example:
### This example demonstrates the use of volumes ###
# volume definitions; prefixed with "volume:"
[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:cold1]
path = /mnt/big_disk
# maxVolumeDataSizeMB not specified: no data size limitation on top of the
# existing ones
[volume:cold2]
path = /mnt/big_disk2
maxVolumeDataSizeMB = 1000000
# index definitions
[idx1]
homePath = volume:hot1/idx1
coldPath = volume:cold1/idx1
# thawedPath must be specified, and cannot use volume: syntax
# choose a location convenient for reconstitution from archive goals
# For many sites, this may never be used.
thawedPath = $SPLUNK_DB/idx1/thaweddb
[idx2]
# note that the specific indexes must take care to avoid collisions
homePath = volume:hot1/idx2
coldPath = volume:cold2/idx2
thawedPath = $SPLUNK_DB/idx2/thaweddb
[idx3]
homePath = volume:hot1/idx3
coldPath = volume:cold2/idx3
thawedPath = $SPLUNK_DB/idx3/thaweddb
[idx4]
datatype = metric
homePath = volume:hot1/idx4
coldPath = volume:cold2/idx4
thawedPath = $SPLUNK_DB/idx4/thaweddb
metric.maxHotBuckets = 6
metric.splitByIndexKeys = metric_name
Ciao. Giuseppe
For Linux systems you can execute touch on the folder that the logs reside in.
Location of files: /mnt/mymountedazurefile/myfolder/logfile.json
Mounted shared drive: /mnt/mymountedazurefile
Command: touch /mnt/mymountedazurefile/myfolder
Splunk detects the new files instantly. It is a better solution than restarting the splunk service.
We have a Splunk server that has a mounted share from Azure Files. All the app services in Azure write to the same shared disk for log files. This is a temporary solution until we migrate application logging to Azure Event Hub.
A useful command for debugging: /opt/splunk/bin/splunk list inputstatus | grep <filename>
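For reference, a minimal monitor stanza for that mounted path might look like the following (a sketch; the index name is a placeholder, and crcSalt is only worth adding when many files share identical headers):
[monitor:///mnt/mymountedazurefile/myfolder]
disabled = 0
index = <your_index>
sourcetype = _json
crcSalt = <SOURCE>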
Dear @gcusello  Thank you for your advice. I think frozenTimePeriodInSecs alone is not enough; we also need to define homePath.maxDataSizeMB, coldPath.maxDataSizeMB and maxTotalDataSizeMB, then sum all index capacities to define the disk capacity for our retention policies. For example:
[idx_fgt]
<other settings>
# ~100GB
homePath.maxDataSizeMB = 101200
# 250GB
coldPath.maxDataSizeMB = 256000
maxTotalDataSizeMB = 357200
frozenTimePeriodInSecs = 15552000
[idx_windows]
<other settings>
# ~200GB
homePath.maxDataSizeMB = 201200
# ~350GB
coldPath.maxDataSizeMB = 356000
maxTotalDataSizeMB = 557200
frozenTimePeriodInSecs = 31536000
Summing [idx_fgt] and [idx_windows], we get for each indexer instance:
~300GB capacity for the ../hot_warm/ volume
~600GB capacity for the ../cold/ volume
Our final goal is to calculate the additional disk capacity needed on the indexer instances. That's why, as in the title, we need to calculate the daily data volume.
Any more suggestions from you?
Thanks & best regards.
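To measure the daily data volume per index from the license logs, a common starting point is something like this (a sketch; run it wherever _internal from the license manager is searchable):
index=_internal source=*license_usage.log* type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as GB_per_day by idx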
Hello!  Solution - Use Indexer Discovery - could you please tell me how to configure this for streamfwd? As I understand it, streamfwd takes the indexer list from the stream app... After restarting streamfwd the problem goes away, you are right. But this is not a solution in my perfectionist world.)
Now I have the following settings:
cat /opt/streamfwd/local/inputs.conf
[streamfwd://streamfwd]
splunk_stream_app_location = https://x.x.x.x:8000/en-us/custom/splunk_app_stream/
disabled = 0
cat /opt/streamfwd/local/streamfwd.conf
[streamfwd]
port = 8889
ipAddr = x.x.x.x
httpEventCollectorToken = xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
netflowReceiver.0.port = 9996
netflowReceiver.0.decoder = netflow
netflowReceiver.0.ip = 0.0.0.0
netflowReceiver.0.decodingThreads = 4
Thanks.
I am facing the same issue. Can someone please guide me on this? Just adding version="1.1" to the dashboard source removes the dashboard name listed in the jQuery scan, but I am unaware if that is the expected workaround.
Also, please explain this note mentioned under Remediation: "Note: The above mentioned changes must happen in the default folder, and those changes must come by uploading a new TARBALL"
Hi @Kim  Please can you share details on setting up indexer discovery for an independent streamfwd installation, as you suggest? This isn't something I am particularly familiar with; however, I didn't think this was possible, as streamfwd.conf only takes a list of indexers and a HEC token? https://docs.splunk.com/Documentation/StreamApp/8.1.5/DeployStreamApp/InstallStreamForwarderonindependentmachine
Is there an associated reference for the known issue so I can look into this further?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Priya  Please can you confirm: you say that logs stop appearing - is it that logs you were previously able to see are no longer visible? Or that logs start coming in (and remain visible) but then stop arriving?
If logs are being indexed but are not searchable for very long, this could indicate an issue with the indexes.conf configuration (e.g. archiving/freezing too soon).
If logs start being indexed but then seem to pause (and the old logs are still available/visible in Splunk), this suggests a blockage either receiving the logs or sending the logs.
What is the source of the logs?
Can you check the _internal logs for any errors, specifically around ingestion?
Can you see the _internal logs for the hosts sending your data?
Sorry for all the questions, but this will help understand the problem better and prevent too much speculation!
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
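Two starting-point searches for that kind of check (sketches; adjust the hosts and time range to your environment):
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by component
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name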
Hi @thanh_on , different data retentions are one of the most common reasons to have different indexes. In this case, you have to define the frozenTimePeriodInSecs option in indexes.conf for each index you created. In your case, 180 days is 15552000 seconds and 365 days is 31536000 seconds:
[idx_fgt]
<other settings>
frozenTimePeriodInSecs = 15552000
[idx_windows]
<other settings>
frozenTimePeriodInSecs = 31536000
Once this period has passed, data can be deleted or moved offline (copied to a different location). Remember that retention policies are applied at the index level on buckets; in other words, you could have data that exceed the retention period because they are in a bucket where at least one event has a timestamp inside the retention period. When the most recent event in the bucket exceeds the retention period, the bucket is deleted or moved offline.
Ciao. Giuseppe
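To see how the bucket boundaries line up with the retention period, dbinspect can help (a sketch, using idx_fgt as the example index):
| dbinspect index=idx_fgt
| eval newest_event_age_days = round((now() - endEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch newest_event_age_days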
In addition to @PrewinThomas's breakdown method, I can suggest relative_time to take advantage of Splunk's format strings.
| eval offset = replace('Time', "Ended: (\d+d)(\d+h)(\d+m)(\d+s)", "+\1+\2+\3")
| eval time_sec = relative_time(0, offset)
relative_time's offset requires a + or a - before every time unit. So, we transform 0d1h55m0s to +0d+1h+55m.
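If you want to sanity-check the conversion on a sample value, a throwaway search built on the same approach works (a sketch):
| makeresults
| eval Time = "Ended: 0d1h55m0s"
| eval offset = replace('Time', "Ended: (\d+d)(\d+h)(\d+m)(\d+s)", "+\1+\2+\3")
| eval time_sec = relative_time(0, offset)
Here time_sec should come out as 6900, i.e. 1 hour 55 minutes expressed in seconds.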
The Splunk index is flowing, but it is not reflected in the application. We are currently investigating an issue where logs stop appearing in the UI after a short period of time. For example, in the apps_log, logs are visible for a few minutes but then stop showing up. This behavior is inconsistent across environments - in some, logs are visible as expected, while in others they're missing entirely. The Splunk index appears to be receiving the data, but it's not being reflected in the application UI. We're not yet sure what's causing this discrepancy and would appreciate any insights or assistance you can provide.
There are several ways to do this. A traditional method is to backfill every day.
index IN (index1, index2, index3, index4)
| bin span=1d _time
| chart count _time over index
| append [ makeresults | timechart span=1d@d count | fields - count]
| stats values(*) as * by _time
| fillnull
Note I replaced your last line with fillnull. (This command is worth learning.)
Another somewhat sneaky way to do this depends on the real stats you perform. If it is a simple stat such as count, you can just "sneak in" something that always has some value, such as index _internal.
index IN (index1, index2, index3, index4, _internal)
| bin span=1d _time
| chart count _time over index
| fields - VALUE_internal
| fillnull
@ma620k  Did you define it as an output variable in the custom code block's configuration? Your variable is likely not being exported because of this.
Reference - https://docs.splunk.com/Documentation/SOARonprem/6.3.1/Playbook/CustomFunction
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
@Kim  This is a known issue with independent Streamfwd when used with an indexer cluster: after a cluster bundle push and the subsequent indexer restarts, Streamfwd may lose its knowledge of all available indexers and start sending all data to a single indexer.
Workaround - restart Streamfwd after the indexers restart.
Solution - use Indexer Discovery. Indexer Discovery is the recommended way for forwarders (including Streamfwd) to dynamically learn the available indexers from the manager (master) node.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
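For a standard forwarder, indexer discovery is configured in outputs.conf roughly as below (a sketch; the manager URI, pass4SymmKey and group name are placeholders, and whether an independent streamfwd honours outputs.conf this way is exactly the open question in this thread):
[indexer_discovery:cluster1]
pass4SymmKey = <key configured on the cluster manager>
master_uri = https://<cluster-manager>:8089
[tcpout:cluster1_group]
indexerDiscovery = cluster1
[tcpout]
defaultGroup = cluster1_group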