All Posts


I have an audit table with before and after records of changes made to a user table. Every time an update is made to the user table, a record is logged to the audit table with the current value for each field, and a second record is logged with the new value for each field. So if a record was disabled and that was the only change made, the two records would look like this:

Mod_type | user ID | Email          | change # | Active
OLD      | 123     | Me@hotmail.com | 152      | No
NEW      | 123     | Me@hotmail.com | 152      | Yes

I need to match the two records by user ID and change # so I can find all the records where specific changes were made, such as going from inactive to active, or where the email address changed, etc. I've looked into selfjoin, appendpipe, etc., but none of them seem to be what I need. I'm trying to say "give me all the records where the Active field was changed from "No" to "Yes" and the Mod_Type is "New"". Thanks for any help.
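In case it helps, here is a minimal sketch of one way to pair the rows with stats rather than a join. The field names user_id, change_num, Mod_type, and Active (and the index name) are assumptions, so adjust them to your actual extractions:

index=your_audit_index
| stats values(eval(if(Mod_type=="OLD", Active, null()))) as old_active
        values(eval(if(Mod_type=="NEW", Active, null()))) as new_active
        by user_id change_num
| where old_active="No" AND new_active="Yes"

The stats command collapses the OLD and NEW rows for the same user ID and change # onto one result row, and the where clause keeps only the pairs where Active flipped from "No" to "Yes".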
Are you adding it as a new member like @PrewinThomas shows? And are you using the correct, consistent format for the host names that you used for the other nodes, i.e. FQDN, short names, or IPs?
Apologies @BB2
How about just:
index=_audit action=delete_by_keyword
You will get granted if it succeeded, or denied if they didn't have permission.
I don't know if you're aware, but you can set deleteIndexesAllowed for a role. For the can_delete role this is set to *, which means any index, but it DOES NOT cover _* indexes. So the can_delete role wouldn't be able to delete _internal or _audit data.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
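For reference, roughly what that setting looks like in authorize.conf (a sketch; the role name below is hypothetical, and the shipped can_delete role already carries equivalent settings):

[role_my_delete_role]
delete_by_keyword = enabled
deleteIndexesAllowed = *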
@Poojitha were you ever able to get this sorted out?
No, that didn't help. I tested deleting data on our test server, but your search did not return any results. I get the error below when trying to delete. I updated your search with what is in bold, but it also did not return any results.
Error in 'delete' command: You have insufficient privileges to delete events.
I found a way to search for failed deletes by adding info=failed. That would take care of failed attempts to delete data, but it would not be helpful if an admin performed a delete in a search.
index=_audit action=search info=failed | regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"
Hi @lakshman239, I am working on something similar. I have a dashboard which simply shows all the indexes, their sourcetypes, source counts and host counts using this tstats query:
| tstats values(sourcetype) as sourcetypes dc(host) as hosts_count dc(source) as sources_count where index = * by index
I am thinking of using a rest query to populate the indexes in a dropdown, and then using the selected index in tstats to show the sourcetypes/hosts for whichever index you pick. Can you explain how you suggest feeding the output of the REST API into tstats?
| rest splunk_server=local /services/cluster_blaster_indexes/sh_indexes_manager/ | stats count by title | fields - count
P.S. I am fairly new to Splunk!
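For what it's worth, a sketch of how the two pieces could be wired together: use your rest search to populate a dashboard dropdown (with title as both the label and value fields), and give the dropdown a token, assumed here to be named index_tok. The panel search then references the token:

| tstats values(sourcetype) as sourcetypes dc(host) as hosts_count dc(source) as sources_count where index=$index_tok$ by index

The token name and dropdown setup are assumptions; the key idea is that the rest output only drives the input, while tstats just receives the chosen index value.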
Hello @Paul_Szoke , I looked into the app further and it seems it wasn’t developed by Splunk, but by a third-party developer called Rojo Consultancy. Given that, I’m not sure if anyone from Splunk would be able to list it again. The best approach would be to reach out directly to the app developer with a strong use case. You can contact them at hello@rojoconsultancy.com.
1. Yes, Splunk PS was involved, and yes, it is the same query; only the summary index is changed.
2. In on-prem many fields are showing, but in Cloud only 5-6 fields are showing.
3. proofpoint_summary was created just to get the diff between the summary index and proofpoint_summary, and yes, the user has access to it.
4. proofpoint_summary was created in Cloud
Hi @Nawab
That should work too; since the blacklist is treated as a regex, I guess 127.0.0.1 will also match.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @BB2
You could use the following to search for a failed attempt or a successful delete:
(index=_internal "You do not have the capability to delete") OR (index=_audit action=delete_by_keyword info=granted)
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I just pasted blacklist = 127.0.0.1, and when I ran list monitor, all 127.0.0.1 entries are removed. Is that correct, or should I use your method?
Hi @Nawab
Modify the inputs.conf file on your Splunk forwarder to add a blacklist entry for the specific directory you want to exclude. Locate the stanza for your directory input, e.g. [monitor:///opt/syslog/Fortigate], and add the blacklist line.

[monitor:///opt/syslog/Fortigate]
disabled = false
# ... other settings like index, sourcetype ...
blacklist = /opt/syslog/Fortigate/127\.0\.0\.1/.*

This configuration tells Splunk to monitor the /opt/syslog/Fortigate directory but ignore any files or subdirectories within the /opt/syslog/Fortigate/127.0.0.1 path. The \. escapes the dots in the IP address, and /.* ensures that everything within that specific directory is excluded. After saving the changes to inputs.conf, you must restart the Splunk forwarder for the changes to take effect.
Check out the following inputs.conf docs on blacklisting files: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#:~:text=way.%0A*%20No%20default.-,blacklist%20%3D%20%3Cregular%20expression%3E,-*%20If%20set%2C%20files
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I have configured syslog-ng to listen on multiple ports and save the logs in folders named by IP, with a HF to send the logs to the indexers. In one case I have 127.0.0.1 sending as loopback to the syslog-ng server, and now I want to remove this IP from my input configs. Let's suppose I have the folder /opt/syslog/Fortigate; under Fortigate I have multiple Fortigates sending logs, and we may add a new Fortigate here in the future. One host is 127.0.0.1, and I want to remove this from my inputs. What should I do?
Absolutely @mchoudhary
So, what we are doing here is using a subsearch within the "table" command to generate the list of months you are interested in. Not many people realise it, but you can use a subsearch in a lot more places than as part of an original search, e.g. to derive a variable for a timechart span, or, in our case, to list some fields for your table command.
Regarding the subsearch, this is what it is doing:
1. | makeresults count=7
Generates 7 dummy events (rows) to work with in the pipeline (6 months ago + the current month).
2. | streamstats count as month_offset
For each of the 7 rows, assigns a sequential number in month_offset (from 1 to 7). This will be used to generate one value per month, going backwards in time.
3. | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now()
start_epoch calculates the epoch time at the start of the month, six months ago. -6mon@mon: go back 6 months, then snap to the beginning of the month. end_epoch is the current epoch time. This sets the time range: from the start of 6 months ago until now.
4. | eval start_month=strftime(start_epoch, "%Y-%m-01")
Formats start_epoch into a string representing the first day of the starting month (e.g., "2024-11-01").
5. | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
For each row, this creates a timestamp for a month in the range, incrementing from start_month by (month_offset - 1) months. Since month_offset runs 1 to 7, the months generated are start_month + 0, +1, +2, ..., +6 months. This way, you get the start-of-month epoch for each month in the range.
6. | where month_epoch <= end_epoch
Filters out any months whose starting epoch is greater than now (in case the 7 generated months go slightly into the future).
7. | eval month=strftime(month_epoch, "%b")
Converts month_epoch into a short month name format (e.g., "Jan", "Feb", etc.).
8. | stats list(month) as search
Aggregates the results into a single row, with the months as a list, titled "search".
This is then returned from the subsearch as a list which is consumed by the table command. If you ran the subsearch by itself, you would see the list of month names it produces.
Please let me know if you have any further questions on this! I'm really pleased to have got to the bottom of it!
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
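For reference, here are the steps above assembled into one command (a sketch; your base search, and any fixed fields you want listed before the month columns, go ahead of the subsearch in the table command):

... your base search ...
| table [
    | makeresults count=7
    | streamstats count as month_offset
    | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now()
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search
  ]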
Hello,
Our company has gone through an audit, and one of the auditors has asked us to monitor attempts to delete records in Splunk. I did some research and found the search below, which would do the trick. The issue is that if I set up an alert with this, the alert is triggered because the previous search for this alert is saved, and we get alerted on that search because the word delete is in it.
index=_audit action=search | regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"
Is there a way to ignore this search string when doing a search? Or has anybody been able to set up an alert for attempts to delete records? We only have 4 admins with the can_delete role, but the auditors want to be sure that if an admin tries to delete records, there will be an alert.
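One possible refinement, as a sketch: _audit search events carry a savedsearch_name field, so the alert's own scheduled runs could be excluded by name. The alert name below is a placeholder:

index=_audit action=search NOT savedsearch_name="Monitor delete attempts"
| regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"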
I found with v9.4.1 that if I click on any cell in the last row and then press the down key on my keyboard, I am then able to get to the rest of the rows.
@livehybrid It worked like magic! Thank you so much. Also, if you could explain the logic behind it, I would be grateful.
Hi @Pooja1
There are a few things to cover off here. I guess the first is: who did the migration? Usually Splunk PS will check that all scheduled searches are running cleanly and without errors before handing over.
Regarding the search - I see there isn't much difference between them, mainly the index you're collecting into.
How have you determined that the search *isn't* running? Have you seen any specific errors in _internal/_audit regarding the search?
Has the proofpoint_summary index been created in Splunk Cloud?
Who is the search owned by - is this a service account, nobody, or a specific user? Do you, and the search owner, have access to the proofpoint_summary index?
Please let me know if you're able to provide some of the answers to this, as it will help pinpoint the issue.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
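As a starting point for the _internal check mentioned above (a sketch; replace the placeholder with the saved search's actual name in Splunk Cloud):

index=_internal sourcetype=scheduler savedsearch_name="<your proofpoint collect search>"
| stats count by status

If the scheduler is running the search, you should see status values such as success or skipped; if nothing comes back at all, the search may not have been migrated as enabled or scheduled.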
Hi Team,
On May 20th, we successfully migrated from Splunk On-Prem to Splunk Cloud. We have a scheduled search that runs every 31 minutes, which was functioning correctly in the on-prem environment. However, after the migration, the same search query is no longer working in the cloud environment.

On-prem:
index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id [search index=summary sourcetype=proofpoint_stash earliest=-48m | fields internal_message_id | dedup internal_message_id | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=summary addtime=true source=proofpoint sourcetype=proofpoint_stash

Cloud:
index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id [search index=summary sourcetype=stash earliest=-48m | fields internal_message_id | dedup internal_message_id | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=proofpoint_summary addtime=true source=proofpoint sourcetype=stash

Thanks
Try checking their GitHub page. They have documentation there. https://github.com/livehybrid/TA-aws-trusted-advisor   Regards