All Posts


Hi @lakshman239, I am working on something similar. I have a dashboard which simply shows all the indexes, their sourcetypes, source counts and host counts using a tstats query:

| tstats values(sourcetype) as sourcetypes dc(host) as hosts_count dc(source) as sources_count where index = * by index

I am thinking of using a rest query to populate the indexes in a dropdown, and then using the selected index in the tstats search to show the sourcetypes/hosts for the index you pick in the dropdown. Can you explain how you are suggesting feeding the output of the rest API into tstats?

| rest splunk_server=local /services/cluster_blaster_indexes/sh_indexes_manager/ | stats count by title | fields - count

P.S. I am fairly new to Splunk!
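For illustration only, assuming a dashboard dropdown whose token is called index_tok (the token name is a placeholder, not from the original post): the dropdown would be populated by the rest query above, and the panel search would substitute the selected value into the tstats query, e.g.:

| tstats values(sourcetype) as sourcetypes dc(host) as hosts_count dc(source) as sources_count where index=$index_tok$ by index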
Hello @Paul_Szoke , I looked into the app further and it seems it wasn’t developed by Splunk, but by a third-party developer called Rojo Consultancy. Given that, I’m not sure if anyone from Splunk would be able to list it again. The best approach would be to reach out directly to the app developer with a strong use case. You can contact them at hello@rojoconsultancy.com.
1. Yes, Splunk PS was involved, and yes, it is the same query; only the summary index was changed.
2. In on-prem many fields are showing, but in Cloud only 5-6 fields are showing.
3. proofpoint_summary was created just to get the diff between the summary index and proofpoint_summary, and yes, the user has access to it.
4. proofpoint_summary was created in Cloud.
Hi @Nawab

That should work too - it's using a regex to match, so I guess 127.0.0.1 will also match.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @BB2

You could use the following to search for a failed attempt or a successful delete:

(index=_internal "You do not have the capability to delete") OR (index=_audit action=delete_by_keyword info=granted)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I just added "blacklist 127.0.0.1", and when I ran list monitor, everything under 127.0.0.1 was gone. Is that correct, or should I use your method?
Hi @Nawab

Modify the inputs.conf file on your Splunk forwarder to add a blacklist entry for the specific directory you want to exclude. Locate the stanza for your directory input, e.g. [monitor:///opt/syslog/Fortigate], and add the blacklist line.

[monitor:///opt/syslog/Fortigate]
disabled = false
# ... other settings like index, sourcetype ...
blacklist = /opt/syslog/Fortigate/127\.0\.0\.1/.*

This configuration tells Splunk to monitor the /opt/syslog/Fortigate directory but ignore any files or subdirectories within the /opt/syslog/Fortigate/127.0.0.1 path. The \. escapes the dots in the IP address, and /.* ensures that everything within that specific directory is excluded. After saving the changes to inputs.conf, you must restart the Splunk forwarder for the changes to take effect. Check out the following inputs.conf docs on blacklisting files: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#:~:text=way.%0A*%20No%20default.-,blacklist%20%3D%20%3Cregular%20expression%3E,-*%20If%20set%2C%20files

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
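As a rough way to confirm the exclusion is working (a sketch only; it assumes the source field keeps the original file path and searches all indexes, so narrow index=* to your Fortigate index if you know it), you could check whether any new events still arrive from that path:

index=* source="/opt/syslog/Fortigate/127.0.0.1/*" earliest=-15m
| stats count by host, source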
I have configured syslog-ng to listen on multiple ports and save the logs into folders named after the source IP, with a heavy forwarder sending the logs to the indexers. In one case, 127.0.0.1 is sending to the syslog-ng server as loopback, and now I want to remove this IP from my input configs. Suppose I have the folder /opt/syslog/Fortigate; under Fortigate I have multiple Fortigates sending logs, and we may add a new Fortigate there in future. One host is 127.0.0.1, and I want to remove it from my inputs. What should I do?
Absolutely @mchoudhary

So, what we are doing here is using a subsearch within the "table" command to generate the list of months you are interested in. Not many people realise it, but you can use a subsearch in a lot more places than as part of the original search, e.g. to derive a variable for a timechart span, or in our case to list some fields for your table command. Regarding the subsearch, this is what it is doing (see the assembled sketch below):

1. | makeresults count=7
Generates 7 dummy events (rows) to work with in the pipeline (6 months ago + the current month).

2. | streamstats count as month_offset
For each of the 7 rows, assigns a sequential number in month_offset (from 1 to 7). This will be used to generate one value per month, going backwards in time.

3. | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now()
start_epoch calculates the epoch time at the start of the month, six months ago (-6mon@mon: go back 6 months, then snap to the beginning of the month). end_epoch is the current epoch time. This sets the time range: from the start of 6 months ago until now.

4. | eval start_month=strftime(start_epoch, "%Y-%m-01")
Formats start_epoch into a string representing the first day of the starting month (e.g., "2024-11-01").

5. | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
For each row, this creates a timestamp for a month in the range, incrementing from start_month by (month_offset - 1) months. month_offset runs 1 to 7, so the months generated are start_month + 0, +1, +2, ..., +6 months. This way, you get the start-of-month epoch for each month in the range.

6. | where month_epoch <= end_epoch
Filters out any months whose starting epoch is greater than now (in case the 7 generated months go slightly into the future).

7. | eval month=strftime(month_epoch, "%b")
Converts month_epoch into a "short month name" format (e.g., "Jan", "Feb", etc).

8. | stats list(month) as search
Aggregates the results into a single row, with the months as a list, titled "search".

This is then returned from the subsearch as a list which is consumed by the table command. If you ran the subsearch by itself, you would get a single row with the month names listed in the "search" field.

Please let me know if you have any further questions on this! I'm really pleased to have got to the bottom of it!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
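Putting those steps together, here is a minimal sketch of the whole thing as described above; the leading base search and the other fields passed to table are placeholders, since they are not shown in this thread:

<your base search>
| table <your other fields>
    [| makeresults count=7
     | streamstats count as month_offset
     | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now()
     | eval start_month=strftime(start_epoch, "%Y-%m-01")
     | eval month_epoch=relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
     | where month_epoch <= end_epoch
     | eval month=strftime(month_epoch, "%b")
     | stats list(month) as search]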
Hello,

Our company has gone through an audit, and one of the auditors has asked us to monitor attempts to delete records in Splunk. I did some research and found the search below, which would do the trick. The issue is that if I set up an alert with this, the alert triggers because the previous run of this alert's search is saved, and we get alerted on that search since the word delete is in it.

index=_audit action=search | regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"

Is there a way to ignore this search string when doing a search? Or has anybody been able to set up an alert for attempts to delete records? We only have 4 admins with the can_delete role, but the auditors want to be sure that if an admin tries to delete records, there will be an alert.
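Not part of the original post, but one common approach (sketched here with a placeholder alert name) is to exclude the alert's own scheduled runs by filtering on the savedsearch_name field in _audit, for example:

index=_audit action=search NOT savedsearch_name="Delete Attempt Alert"
| regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"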
I found with v9.4.1 that if I click on any cell in the last row and then press the down arrow on my keyboard, I am able to get to the rest of the rows.
@livehybrid It worked like magic! Thank you so much. Also, if you could explain the logic behind it, I would be grateful
Hi @Pooja1

There are a few things to cover off here. I guess the first is: who did the migration? Usually Splunk PS will check that all scheduled searches are running without errors and cleanly before handing over.

Regarding the search - I see there isn't much difference between them, mainly the index you're collecting into.

- How have you determined that the search *isn't* running? Have you seen any specific errors in _internal/_audit regarding the search (see the example scheduler search below)?
- Has the proofpoint_summary index been created in Splunk Cloud?
- Who is the search owned by - is this a service account/nobody/a specific user?
- Do you, and the search owner, have access to the proofpoint_summary index?

Please let me know if you're able to provide some of the answers to this as it will help pinpoint the issue.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
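For illustration, a sketch of the kind of scheduler check mentioned above (replace the savedsearch_name value with the real name of your scheduled search):

index=_internal sourcetype=scheduler savedsearch_name="<your search name>"
| stats count by status, reason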
Hi Team,

On May 20th, we successfully migrated from Splunk On-Prem to Splunk Cloud. We have a scheduled search that runs every 31 minutes, which was functioning correctly in the on-prem environment. However, after the migration, the same search query is no longer working in the cloud environment.

On-prem:

index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id
    [search index=summary sourcetype=proofpoint_stash earliest=-48m
    | fields internal_message_id
    | dedup internal_message_id
    | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=summary addtime=true source=proofpoint sourcetype=proofpoint_stash

Cloud:

index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id
    [search index=summary sourcetype=stash earliest=-48m
    | fields internal_message_id
    | dedup internal_message_id
    | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=proofpoint_summary addtime=true source=proofpoint sourcetype=stash

Thanks
Try checking their GitHub page. They have documentation there. https://github.com/livehybrid/TA-aws-trusted-advisor   Regards
Hi @gcusello,

As you can see in the 3rd screenshot, the sample time that I ingested is 2025-05-27 17:38:07.991, but in the 2nd screenshot the timestamp changed to 2025-05-23 05:25:50.795 in the field named Loo_time in the results, and I don't know the reason. Also, I have used only the epoch time from the lookup while comparing it with the index data, which is already in epoch time. This was my only concern - can you please help me fix this?

Thanks!
Dear @gcusello,

Thank you for your advice. Per your recommendation, I did these steps (please correct me if I'm wrong):

1. Use the Splunk clustering daily license usage as the daily data volume (example: 10GB per day)
2. Use the Splunk sizing site to calculate against the retention policy requirement
3. Configure indexes.conf as you recommended for each volume:

[volume:hotwarm]
path = /mnt/hotwarm_disk
maxVolumeDataSizeMB = 102400
# 100G

[volume:cold]
path = /mnt/cold_disk
maxVolumeDataSizeMB = 204800
# 200G

# Frozen disk: /mnt/frozen_disk is 410G

[idx]
homePath = volume:hotwarm/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
frozenTimePeriodInSecs = 7776000
# 90 days searchable
coldToFrozenDir = /mnt/frozen_disk/defaultdb/frozendb

Thanks & Best regards.
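As a rough sanity check on those numbers (assuming the common rule of thumb that indexed data occupies roughly half of its raw size on disk): 10 GB/day x 90 days is about 900 GB of raw data, or roughly 450 GB on disk, against 100 GB hot/warm + 200 GB cold = 300 GB of volume capacity, so the volume limits rather than frozenTimePeriodInSecs would likely be what pushes buckets to frozen.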
Thank you so much! Understood.
Hi @thanh_on,

yes, obviously: I hinted only at the retention period.

Only one hint: don't define the capacity of each index; instead, create a volume that will contain all your indexes and define the max volume dimension. In this way, you can dynamically manage index dimensions. For volume creation and configuration see https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Indexesconf#indexes.conf.spec

This is an example:

### This example demonstrates the use of volumes ###

# volume definitions; prefixed with "volume:"
[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:cold1]
path = /mnt/big_disk
# maxVolumeDataSizeMB not specified: no data size limitation on top of the
# existing ones

[volume:cold2]
path = /mnt/big_disk2
maxVolumeDataSizeMB = 1000000

# index definitions

[idx1]
homePath = volume:hot1/idx1
coldPath = volume:cold1/idx1
# thawedPath must be specified, and cannot use volume: syntax
# choose a location convenient for reconstitution from archive goals
# For many sites, this may never be used.
thawedPath = $SPLUNK_DB/idx1/thaweddb

[idx2]
# note that the specific indexes must take care to avoid collisions
homePath = volume:hot1/idx2
coldPath = volume:cold2/idx2
thawedPath = $SPLUNK_DB/idx2/thaweddb

[idx3]
homePath = volume:hot1/idx3
coldPath = volume:cold2/idx3
thawedPath = $SPLUNK_DB/idx3/thaweddb

[idx4]
datatype = metric
homePath = volume:hot1/idx4
coldPath = volume:cold2/idx4
thawedPath = $SPLUNK_DB/idx4/thaweddb
metric.maxHotBuckets = 6
metric.splitByIndexKeys = metric_name

Ciao.
Giuseppe