
I have the following dashboard table:

<table>
  <title>messageIdTok=$messageIdTok$ userIdTok=$userIdTok$ subjectTok=$subjectTok$</title>
  <search base="baseSearch">
    <query>| search subject="*$subjectTok$*" UserId="$userIdTok$" ipAddress="$ipAddressTok$" messageId="$messageIdTok$"
| table _time,UserId,subject,Operation,messageId,ClientInfoString,ipAddress</query>
  </search>
  <option name="count">10</option>
  <option name="dataOverlayMode">none</option>
  <option name="drilldown">cell</option>
  <option name="percentagesRow">false</option>
  <option name="rowNumbers">false</option>
  <option name="totalsRow">false</option>
  <option name="wrap">true</option>
  <drilldown>
    <condition field="subject">
      <set token="subjectTok">$row.subject$</set>
      <set token="messageIdTok">*</set>
      <set token="userIdTok">*</set>
      <set token="ipAddressTok">*</set>
    </condition>
    <condition field="messageId">
      <set token="messageIdTok">$click.value$</set>
      <set token="subjectTok">*</set>
      <set token="userIdTok">*</set>
      <set token="ipAddressTok">*</set>
    </condition>
    <condition field="UserId">
      <set token="messageIdTok">*</set>
      <set token="subjectTok">*</set>
      <set token="userIdTok">$row.UserId$</set>
      <set token="ipAddressTok">*</set>
    </condition>
  </drilldown>
</table>

In the "messageId" drilldown condition, I'm using $click.value$. But when I do this, clicking the cell always sets messageIdTok to the value in the first column (_time in this case). I also notice that when I have the table option "drilldown" set to "cell", moving the mouse over the table highlights the cell I'm hovering over as well as the first cell of that row (video here). Any suggestions on how to correct this behavior?
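For reference, in Simple XML tables $click.value$ carries the value of the leftmost column of the clicked row (here _time), while $click.value2$ carries the value of the cell that was actually clicked. A minimal rewrite of the messageId condition using $click.value2$ (using $row.messageId$ instead should behave the same way):

<condition field="messageId">
  <set token="messageIdTok">$click.value2$</set>
  <set token="subjectTok">*</set>
  <set token="userIdTok">*</set>
  <set token="ipAddressTok">*</set>
</condition>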
I understand this better now. Using the search you provided will find anyone who did delete a record; I can then go into Splunk and search that timeframe for who actually performed the delete:

index=_audit action=delete_by_keyword info=granted

I can set up an alert using this search for any failed attempts to delete a record:

index=_audit action=search info=failed | regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"
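If it helps, a minimal savedsearches.conf sketch for such an alert (the stanza name, schedule, and action here are placeholders to adjust):

[Failed delete attempts]
search = index=_audit action=search info=failed | regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
actions = email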
What do you mean by "how to set that up"?
Could you tell me how to set that up? Or can you point me to documentation on how to do it?
Hi @drodman29

Unfortunately it isn't possible to use wildcards in the allowedDomainList for emails. Check out the following snippet of code where the checks are made:

domains.extend(sec.EMAIL_DELIM.split(ssContent['action.email.allowedDomainList']))
domains = [d.strip() for d in domains]
domains = [d.lower() for d in domains]
recipients = [r.lower() for r in recipients]
for recipient in recipients:
    dom = recipient.partition("@")[2]
    if not dom in domains:
        logger.error("For subject=%s, email recipient=%s is not among the allowedDomainList=%s" % (ssContent.get('action.email.subject'), recipient, ssContent.get('action.email.allowedDomainList')))
    else:
        validRecipients.append(recipient)

This takes the value of allowedDomainList, splits it, converts everything to lowercase, then checks whether the part after the "@" (the domain) is in the list of domains. There is no regex matching, so wildcarding isn't possible.
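To make the behavior concrete, here is a tiny standalone sketch of the same membership test (illustrative only, not the shipped code):

# Illustrative only: mirrors the plain list-membership check quoted above.
domains = [d.strip().lower() for d in "*.mydomain.com,other.com".split(",")]
recipient = "a@temp.mydomain.com"
dom = recipient.partition("@")[2]

print(dom in domains)  # False: "temp.mydomain.com" is compared literally to "*.mydomain.com"

# Wildcard-style matching would need an explicit suffix test, e.g.:
print(any(dom == d.lstrip("*.") or dom.endswith("." + d.lstrip("*.")) for d in domains))  # True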
Are you sure the instance you're adding has the same Splunk version and KV store version (and engine) as the rest of the SHC?
As usual with similar "monitoring" searches, this will not find cases where delete isn't invoked directly. The obvious way to invoke it indirectly would be via a macro.
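To illustrate (macro name invented): a user could define in macros.conf

[hidden_delete]
definition = delete

and run their search as ... | `hidden_delete`, so the literal word "delete" never appears in the search string that the regex above inspects.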
@rahulkumar

Since the client is pushing Sentinel logs to your Splunk HEC endpoint, you can filter out unwanted events within Splunk to reduce the indexed data volume, using a null queue to discard events before they're indexed. Configure props.conf and transforms.conf on your heavy forwarder to route unwanted events to the null queue, preventing them from consuming your Splunk license.

https://docs.splunk.com/Documentation/Splunk/9.4.2/Forwarding/Routeandfilterdatad
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues
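A minimal sketch, assuming the Sentinel events arrive with a sourcetype named azure:sentinel and that the unwanted events can be identified by a raw-text pattern (both are placeholders to adjust):

# props.conf
[azure:sentinel]
TRANSFORMS-null = drop_sentinel_noise

# transforms.conf
[drop_sentinel_noise]
REGEX = (?i)EventTypeYouDontWant
DEST_KEY = queue
FORMAT = nullQueue

Events whose raw text matches the REGEX are routed to the null queue and never indexed, so they do not count against the license.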
Thanks @kiran_panchavat, it helped a lot; I can look into this. But if the client denies access to the Azure credentials, which seems quite likely, is there any other way to do this?
@rahulkumar  If the client is willing to set up the Azure AD application and provide you with the necessary credentials (Client ID, Client Secret, Tenant ID, Workspace ID, etc.), you can configure the Splunk Add-on to pull logs from the Log Analytics workspace, Event Hub, or Blob Storage without needing direct Azure access.
Hi @kiran_panchavat, thanks for replying. My concern is that using the Splunk Add-on for Microsoft Cloud Services in Splunk Enterprise also requires Azure-side configuration, and I will have no access to that since it's on the client side; they will just provide us with the data. Could you clear up my doubt about whether the add-on needs Azure-side configuration as well?
This does not answer the question. I know how to set this. I don't want to explicitly list every possible domain; I want a wildcard for the sake of maintenance.
@rahulkumar

Instead of HEC, the Azure team can configure Sentinel to stream logs to an Azure Event Hub, which Splunk can then pull using the Splunk Add-on for Microsoft Cloud Services or a custom Azure Function.

https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data
https://www.splunk.com/en_us/blog/platform/splunking-azure-event-hubs.html
https://splunkbase.splunk.com/app/3110
Hello, I have Sentinel logs forwarded to Splunk via HEC directly. Is there any other way to get these logs? I am given the Sentinel logs directly in Splunk and have no access to Azure. I do not want to use HEC because of the huge amount of unfiltered data. Is there any way to resolve this issue, or can I ask the Azure team to do something that gives me filtered data, even if I have to use HEC in the end?
@drodman29, @livehybrid has already explained this in the community post linked below; kindly take a look. Solved: Why does my Email Allowed Domain List in Alert Act... - Splunk Community
After upgrading to version 9.4 I have attempted to configure a list of acceptable domains in alert_actions.conf. My environment has a *wide* variety of acceptable email sub-domains which share the same base. However, the domain matching appears to be strict and wildcards are not matching. For example, users may have emails like:

a@temp.mydomain.com
b@perm.mydomain.com

Setting an allowed domain like *.mydomain.com does not match the users, and they are removed from alerts and reports. Does anyone have a workaround other than adding every possible sub-domain?
I am curious then why I do not get any results if I search for info=denied. I had attempted to delete a record several times and got the insufficient-privileges error, but nothing showed up in audit.log:

index=_audit action=delete_by_keyword info=denied
These are the logs I get when doing a delete before having the capability, then adding the capability, then being able to delete with the new capability.
Hi @BB2

That doesn't get logged when the capability is assigned; it is logged when the capability is attempted to be used.

info=denied means they weren't successful running | delete
info=granted means they were successful.

Hopefully this clears it up.
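For a quick overview of who attempted deletes and whether they were granted, something like this should work (user, action, and info are standard _audit fields):

index=_audit action=delete_by_keyword | stats count by user, info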
Hi @bgresty

This behavior is almost certainly caused by ingestion-time processing rules configured in props.conf and transforms.conf on the indexer(s) receiving data for Index 2.

Look for stanzas in props.conf that match the sourcetype, host, or source of the data being sent to Index 2. These stanzas may contain TRANSFORMS-<class> attributes that point to stanzas in transforms.conf. In transforms.conf, look for stanzas referenced by props.conf that use REGEX (often with SOURCE_KEY = MetaData:Source) to evaluate the source field and then route events to the null queue (DEST_KEY = queue, FORMAT = nullQueue) or otherwise discard them. A regex condition might be inadvertently matching sources shorter than 14 characters.

Compare the props.conf and transforms.conf files on the indexers handling data for Index 1 and Index 2 to identify the difference. You can use the splunk btool command to inspect the effective configuration:

splunk btool props list <your_sourcetype> --debug
splunk btool transforms list --debug

This will show you which configuration files are contributing to the settings for your sourcetype and transforms.
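For reference, here is a hypothetical pair of stanzas (names invented) that would produce exactly this symptom, using the documented route-everything-to-null-then-keep pattern:

# props.conf
[your_sourcetype]
TRANSFORMS-routing = drop_all, keep_long_sources

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_long_sources]
SOURCE_KEY = MetaData:Source
# MetaData:Source values look like "source::<path>", so this keeps
# only events whose source path is at least 14 characters long
REGEX = ^source::.{14,}
DEST_KEY = queue
FORMAT = indexQueue

Transforms in a class run in order, so events whose source is shorter than 14 characters never match the keep rule and remain in the null queue.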