All Posts
Are you sure you're adding an instance with the same Splunk version and KV store version (and engine) as the rest of the SHC?
As usual with similar "monitoring" searches, this will not find cases where delete isn't invoked directly. The obvious way to invoke it indirectly would be via a macro.
@rahulkumar  Since the client is pushing Sentinel logs to your Splunk HEC endpoint, you can filter out unwanted events within Splunk to reduce the indexed data volume, using a null queue to discard events before they're indexed. Configure props.conf and transforms.conf on your heavy forwarder to route unwanted events to the null queue, preventing them from consuming your Splunk license.
https://docs.splunk.com/Documentation/Splunk/9.4.2/Forwarding/Routeandfilterdatad
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues
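A minimal sketch of that null-queue routing, assuming a hypothetical sourcetype `azure:sentinel` and an example pattern for the events to drop (adjust both to your actual data):

```
# props.conf (on the heavy forwarder)
[azure:sentinel]
TRANSFORMS-drop_noise = sentinel_null_queue

# transforms.conf
[sentinel_null_queue]
# REGEX runs against _raw by default; this pattern is illustrative only
REGEX = "severity"\s*:\s*"Informational"
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are routed to the null queue before indexing, so they never count against the license.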
Thanks @kiran_panchavat, it helped a lot, I can look into this. But if the client denies the Azure credentials, which has a high chance of happening, is there any other way to do this?
@rahulkumar  If the client is willing to set up the Azure AD application and provide you with the necessary credentials (Client ID, Client Secret, Tenant ID, Workspace ID, etc.), you can configure the Splunk Add-on to pull logs from the Log Analytics workspace, Event Hub, or Blob Storage without needing direct Azure access.
Hi @kiran_panchavat, thanks for replying. The concern is that using the Splunk Add-on for Microsoft Cloud Services in Splunk Enterprise also needs Azure-side configuration, I think, and I will have no access to that since it's on the client side; they will just provide us with the data. Can you clear up my doubt about whether the add-on needs Azure-side configuration as well?
This does not answer the question. I know how to set this. I don't want to explicitly list every possible domain; I want a wildcard for the sake of maintenance.
@rahulkumar  Instead of HEC, the Azure team can configure Sentinel to stream logs to an Azure Event Hub, which Splunk can then pull using the Splunk Add-on for Microsoft Cloud Services or a custom Azure Function.   https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data  https://www.splunk.com/en_us/blog/platform/splunking-azure-event-hubs.html  https://splunkbase.splunk.com/app/3110 
Hello, I just want to know: if I have Sentinel logs forwarded to Splunk via HEC directly, is there any other way to get these logs? I am given the Sentinel logs directly in Splunk and have no access to Azure. I do not want to use HEC because of the huge amount of unfiltered data. Is there any way to resolve this, or can I ask the Azure team to do something that gives me filtered data, even if I have to use HEC in the end?
@drodman29  @livehybrid has already explained this in the community post linked below; kindly take a look. Solved: Why does my Email Allowed Domain List in Alert Act... - Splunk Community
After upgrading to version 9.4, I have attempted to configure a list of acceptable domains in alert_actions.conf. My environment has a *wide* variety of acceptable email sub-domains which share the same base. However, the domain matching appears to be strict, and wildcards are not matching. For example, users may have emails like:

a@temp.mydomain.com
b@perm.mydomain.com

Setting an allowed domain like *.mydomain.com does not match the users, and they are removed from alerts and reports. Does anyone have a workaround other than adding every possible sub-domain?
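For reference, the configuration being attempted would look roughly like this (assuming the allowedDomainList setting under the [email] stanza; exact setting names can vary by version, so check the alert_actions.conf spec for your release):

```
# alert_actions.conf
[email]
# Strict matching: each sub-domain apparently has to be listed explicitly
allowedDomainList = temp.mydomain.com, perm.mydomain.com

# Attempted wildcard form, which does not appear to match:
# allowedDomainList = *.mydomain.com
```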
I am curious then why I do not get any results if I search for info=denied. I had attempted to delete a record several times and got the insufficient-privileges error, but nothing showed up in audit.log:

index=_audit action=delete_by_keyword info=denied
These are the logs I get when doing a delete before having the capability, then adding the capability, then being able to delete with the new capability.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @BB2, that doesn't get logged when the capability is assigned; it is logged when someone attempts to use the capability. info=denied means they weren't successful running | delete, and info=granted means they were successful. Hopefully this clears it up.
Hi @bgresty, this behavior is almost certainly caused by ingest-time processing rules configured in props.conf and transforms.conf on the indexer(s) receiving data for Index 2.

Look for stanzas in props.conf that match the sourcetype, host, or source of the data being sent to Index 2; these stanzas may contain TRANSFORMS attributes that point to stanzas in transforms.conf. In transforms.conf, look for stanzas referenced by props.conf that use a REGEX to evaluate the source field and then route events to the null queue (queue = nullQueue) or otherwise discard them. A regex condition might be inadvertently matching sources shorter than 14 characters.

Compare the props.conf and transforms.conf files on the indexers handling data for Index 1 and Index 2 to identify the difference. You can use the splunk btool command to inspect the effective configuration:

splunk btool props list <your_sourcetype> --debug
splunk btool transforms list --debug

This will show you which configuration files are contributing to the settings for your sourcetype and transforms.
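As an illustration of the failure mode (stanza name is hypothetical), a transform like this would silently drop any event whose source is shorter than 14 characters:

```
# transforms.conf (hypothetical example, not a recommended config)
[drop_short_sources]
# The source metadata value carries a "source::" prefix
SOURCE_KEY = MetaData:Source
REGEX = ^source::.{0,13}$
DEST_KEY = queue
FORMAT = nullQueue
```

If something like this exists on the Index 2 indexers but not on the Index 1 indexers, it would exactly explain the 14-character threshold you're seeing.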
Hi, we've encountered some unusual behaviour when ingesting data and are at a loss as to what might be causing it. We have two presumably identical indexes ingesting identically structured messages from different regions, via a separate HEC for each. Index 1 has no issue; all messages are ingested without a problem. On Index 2, events only appear if the source attribute of the message is 14 or more characters long, e.g.:

{
   other_data: ...
   source: 12345678901234
}

Any ideas?
That will only tell me when the delete_by_keyword capability has been assigned. It will not tell me if someone deletes data from the main index. I saw the role get assigned with that search when I added the can_delete role to the admin role.
I have an audit table with before and after records of changes made to a user table. Every time an update is made to the user table, a record is logged to the audit table with the current value for each field, and a second record is logged with the new value for each field. So if a record was enabled and that was the only change made, the two records would look like this:

Mod_type | user ID | Email          | change # | Active
OLD      | 123     | Me@hotmail.com | 152      | No
NEW      | 123     | Me@hotmail.com | 152      | Yes

I need to match the two records by user ID and change # so I can find all the records where specific changes were made, such as going from inactive to active, or where the email address changed, etc. I've looked into selfjoin, appendpipe, etc., but none of them seem to be what I need. I'm trying to say "give me all the records where the Active field was changed from "No" to "Yes" and the Mod_type is "NEW"". Thanks for any help.
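One common approach for this kind of pairing is to pivot the OLD/NEW rows onto a single result with stats and then filter. A sketch, where the index name and the field names (user_ID, change_num, Mod_type, Active) are assumptions based on the table above and may need adjusting to your actual extractions:

```
index=your_audit_index
| stats values(eval(if(Mod_type="OLD", Active, null()))) as old_active
        values(eval(if(Mod_type="NEW", Active, null()))) as new_active
        by user_ID change_num
| where old_active="No" AND new_active="Yes"
```

Grouping by user_ID and change_num collapses each OLD/NEW pair into one row, so the where clause can compare the before and after values directly; the same pattern works for spotting email changes by comparing old/new Email values instead.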
You are adding it as a new member like @PrewinThomas shows? And are you using the same, correct format for the host names as you used for the other nodes, i.e. FQDNs, short names, or IPs?
Apologies @BB2, how about just:

index=_audit action=delete_by_keyword

You will get granted on success, or denied if they didn't have permission. I don't know if you're aware, but you can set deleteIndexesAllowed for a role; for the can_delete role this is set to *, which means any index but DOES NOT cover _* indexes. So the can_delete role wouldn't be able to delete _internal or _audit data.
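A rough sketch of how that might look for a custom role in authorize.conf (the role name and index names here are hypothetical, and the exact syntax should be checked against the authorize.conf spec for your version):

```
# authorize.conf
[role_audit_deleter]
importRoles = user
# Grant the | delete capability
delete_by_keyword = enabled
# Restrict deletes to specific indexes; note that even "*" would
# not cover internal (_*) indexes such as _internal or _audit
deleteIndexesAllowed = main;app_logs
```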