All Topics


Hi, I am somewhat confused by the documentation for Role-Based Field Filtering: https://docs.splunk.com/Documentation/Splunk/9.0.1/Security/planfieldfiltering. According to the documentation, restricted commands (such as tstats) can return sensitive data that a role with field filters might not be allowed to access, and it is very risky if someone with malicious intentions tries to use them to circumvent role-based field filtering. The documentation provides a workaround: assign one of two capabilities to the role that has the field filter. One of those capabilities is run_commands_ignoring_field_filter. Here is my question: if user_A has a role that includes the run_commands_ignoring_field_filter capability and has field filtering configured, and user_A runs tstats to search for data that includes a field requiring masking, what happens in the result? Does it show the sensitive data or the masked data? Thank you in advance. For reference, the documentation says: "These commands can return sensitive data that a role with field filters might not be allowed to access. They might pose a potential security risk for your organization if someone with malicious intentions tries to use them to circumvent role-based field filtering. As a result, the Splunk platform restricts these commands when used by people with roles that are configured with field filtering."
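For context, a minimal authorize.conf sketch of the configuration being asked about (the role name field_filtered_role and the field/mask values are hypothetical; verify the exact syntax against the authorize.conf spec for your Splunk version). The role both masks a field and holds the capability that exempts it from the restriction on commands like tstats:

[role_field_filtered_role]
# Replace the ssn field's value with a literal mask for this role (hypothetical field/mask)
fieldFilter-ssn = XXXXX
# Allow the role to run commands (tstats, mstats, ...) that are otherwise
# blocked for field-filtered roles; note these commands bypass the filter itself
run_commands_ignoring_field_filter = enabled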
Hi, I have a scenario where files need to be transferred both inbound and outbound at 2 AM daily. I need to create an alert when files are present in inbound by 2 AM but missing in outbound by 2 AM. Here is my query below; please help: index=cas source="/bin/var/logs/log" File 1 OR File 2 OR File 3 OR File 4 Inbound — for the outbound condition, "Inbound" changes to "Outbound", and "File 1" represents the file that is getting transferred.
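A hedged sketch of one way to do this (assuming each event contains the literal terms Inbound or Outbound plus a file-name token, as in the query above): classify each event by file and direction, then keep only files seen inbound but never outbound. Scheduled shortly after 2 AM over the previous day's window, the alert triggers whenever results are returned:

index=cas source="/bin/var/logs/log" ("File 1" OR "File 2" OR "File 3" OR "File 4")
| eval direction=case(searchmatch("Inbound"), "inbound", searchmatch("Outbound"), "outbound")
| eval file=case(searchmatch("File 1"), "File 1", searchmatch("File 2"), "File 2", searchmatch("File 3"), "File 3", searchmatch("File 4"), "File 4")
| stats count(eval(direction="inbound")) as inbound_count, count(eval(direction="outbound")) as outbound_count by file
| where inbound_count > 0 AND outbound_count = 0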
Hello, I wanted to ask if there is a way to delete reports created by Enterprise Security. There are reports created by Enterprise Security that we will never use, and I would just like to clean up the reports menu.
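As a starting point, a hedged sketch for inventorying those reports first (assuming ES is installed under its usual app directory, SplunkEnterpriseSecuritySuite): list the unscheduled saved searches living in the ES app via REST, then remove or unshare the unwanted ones through Settings > Searches, reports, and alerts:

| rest /servicesNS/-/SplunkEnterpriseSecuritySuite/saved/searches splunk_server=local
| search is_scheduled=0
| table title eai:acl.app eai:acl.sharing eai:acl.owner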
Figuring out the best add-on(s) to ingest security data related to O365/Azure is an exercise in insanity... Can we get some clarification and/or consolidation for this, since all 5 of these add-ons are developed by Splunk or Microsoft?

Microsoft Graph Security API Add-On for Splunk: https://splunkbase.splunk.com/app/4564
https://learn.microsoft.com/en-us/graph/api/resources/security-api-overview?view=graph-rest-1.0#alerts
Alerts from the following providers are available:
- Azure Active Directory Identity Protection
- Microsoft 365 Default Cloud App Security Custom Alert
- Microsoft Defender for Cloud Apps
- Microsoft Defender for Endpoint
- Microsoft Defender for Identity
- Microsoft Sentinel (formerly Azure Sentinel)

Splunk Add-on for Microsoft Security: https://splunkbase.splunk.com/app/6207
- Microsoft 365 Defender incidents and alerts OR Microsoft Defender for Endpoint alerts.

Splunk Add-on for Microsoft Office 365: https://splunkbase.splunk.com/app/4055
- All service policies, alerts and entities visible through the Microsoft cloud application security portal.
- All audit events and reports visible through the Microsoft Graph API endpoints. This includes all log events and reports visible through the Microsoft Graph API.

Splunk Add-on for Microsoft Cloud Services: https://splunkbase.splunk.com/app/3110
- mscs:azure:security:alert

Splunk Add-on for Microsoft Azure: https://splunkbase.splunk.com/app/3757
- Azure Security Center Alerts & Tasks

EDIT: There's also the Microsoft Defender Advanced Hunting Add-on for Splunk (https://splunkbase.splunk.com/app/5518), but the Splunk Add-on for Microsoft Security also seems to cover Advanced Hunting: https://docs.splunk.com/Documentation/AddOns/released/MSSecurity/Releasenotes#New_features
Hi all, I have a timestamp in a format I haven't dealt with before, and I am struggling to get it converted to my timezone using the offset. In raw event form it looks like this: "TimeGenerated": "2022-10-25T04:21:50.2975103Z". I have also attached a screenshot of how Splunk is indexing it. My second question is: how would I configure the sourcetype to have Splunk use the TimeGenerated field as _time automatically? I've attached a second screenshot with the sourcetype as well. Any help or links would be greatly appreciated!
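A hedged props.conf sketch for the second question (the sourcetype name my:azure:sourcetype is hypothetical; substitute your own). The 7-digit sub-second fraction is matched with %7N, and because the trailing Z means UTC, TZ = UTC lets Splunk convert _time into each user's configured display timezone:

[my:azure:sourcetype]
# Anchor timestamp extraction to the TimeGenerated key
TIME_PREFIX = \"TimeGenerated\":\s*\"
# ISO 8601 with 7 sub-second digits and a literal trailing Z
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7NZ
TZ = UTC
MAX_TIMESTAMP_LOOKAHEAD = 40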
I'm not having any luck finding the functional differences between a lookup created in Splunk core (Settings > Lookups > Add new) that lives in the ES app context, and a managed lookup created from the Content Management page (ES > Configure > Content Management > Create New Content). I have created and experimented with both, and I can't find any functional difference. The documentation describes how to create managed lookups, but I'm not finding anything on what the point is.
Hi Splunkers, I'm trying to extract some fields using the "Extract Fields" option under the log, using the regex method. In the "Select Fields" step, when I select a field that I would like to extract, it freezes for a couple of minutes and returns the following message: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings." But I'm not extracting multiple fields; it's just one field, and yet the error still appears. Here is the log sample I used: 2022-10-26T20:10:11+03:00 192.168.xxx.xxx TRP|No Caller ID received: Line: 8 Slot: 2 Port: 12 — I was just trying to extract the "TRP". I have tried different ways to solve this issue: I tried the "I prefer to write the regular expression myself" option in the "Select Method" step, entered the regex, and hit "Preview", but it just got stuck. I tried another log sample with no luck. I tried a totally different log from a totally different index but ended up with the same error message. I even restarted Splunk, but no luck either! What am I missing here?
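As a workaround that skips the extraction wizard entirely, a hedged rex sketch (the field name component is hypothetical): capture the token immediately before the first pipe, which for the sample event above yields TRP:

index=your_index sourcetype=your_sourcetype
| rex "^\S+\s+\S+\s+(?<component>[^|]+)\|"

If it works interactively, the same pattern can be made permanent as an EXTRACT-component stanza entry in props.conf for the sourcetype.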
Hi, I have the below query: index=Config source="Java/path/log.csv" inbound — CSV files are supposed to be delivered on an hourly basis, within 13 minutes past the hour (e.g., a file delivered at 12:12 is on time). I need to create an alert if any of the files are delivered after 13 minutes past the hour (e.g., 12:14 should create an alert).
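A hedged sketch (assuming _time reflects when the file was delivered): compute the minute-of-hour for each event and keep only late deliveries. Scheduled hourly, the alert triggers whenever results are returned:

index=Config source="Java/path/log.csv" inbound
| eval minute_of_hour = tonumber(strftime(_time, "%M"))
| where minute_of_hour > 13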
Hi, we upgraded Splunk from 8.2.6 to 9.0.1 recently and have one big internal app (dashboards, views, field extractions) which is failing the app readiness check. It reports that the app is using Python 2 and is not compatible with Python 3, and suggests either uninstalling or updating it. How do I update this internal app (this is not a Splunkbase app, builder app, or plugin)? Is there any documentation or steps anyone can provide?
Hello, we're standing up a Splunk heavy forwarder (HF) on AWS via EC2. With a 50 GB/day ingest, what's the lowest vCPU/RAM configuration we can use, and would scaling it up later affect functionality? Thanks!
We have a Splunk UI that allows users to export a certain set of rows from a lookup. The caveat is that each row might produce multiple output rows via quite complex logic. So I wonder whether to use the mv* set of functions, maybe coupled with the map command, or to develop a Python function to do it. Any suggestions?
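For scale, a hedged sketch of the multivalue approach (the lookup name my_lookup and the semicolon-delimited row_spec column are hypothetical stand-ins for your expansion logic): build a multivalue field per row, then fan it out with mvexpand, which stays inside one search and avoids map's per-row subsearch overhead:

| inputlookup my_lookup
| eval output_rows = split(row_spec, ";")
| mvexpand output_rows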
Hello, we're coming from another platform that we used to keep track of incidents. To keep things more consistent after the transition, we'd like the IDs of new cases to start at a higher number than the default of 1, if possible. Is this something that can be done? I haven't seen anything in the settings or documentation. Thanks!
I have a timechart of count by field:

| timechart count by field_name limit=0

I would like to divide each value in the statistics table by the mean of that field.

Current output:

Time  A  B
1     1  4
2     2  5
3     3  6

Desired output:

Time  A    B
1     0.5  0.8
2     1    1
3     1.5  1.2

I can use a foreach to perform an operation on every column, but I am having trouble configuring a subquery within that to calculate the mean and divide by it.
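A hedged sketch of one way around it: since the foreach template only runs streaming operations, precompute each column's mean beforehand with eventstats avg(*), then divide column by column using foreach's wildcard match segment:

| timechart count by field_name limit=0
| eventstats avg(*) as avg_*
| fields - avg__time
| foreach avg_* [ eval <<MATCHSEG1>> = '<<MATCHSEG1>>' / '<<FIELD>>' ]
| fields - avg_*

Here <<FIELD>> expands to each avg_X column and <<MATCHSEG1>> to the X it wildcard-matched, so every original column is divided by its own mean; the fields - avg__time guard drops the average computed over _time itself.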
Hi all, I need help identifying which fields are being captured in the description of notables. I have used $filename$ in one of my PA use cases, but it is not capturing the value in the description. The actual field in the logs is filename. Please help me figure out how to solve this.
Hello everyone, I am ingesting data from Azure Event Hubs into Splunk using the Splunk Add-on for Microsoft Cloud Services. Now I am looking for an app I can use to visualize these events. Can anyone please suggest any pre-built dashboards for Event Hub data? Thanks,
I'm working on a query with the goal of determining the percentage rate of request/response event pairs that match by way of a common field value. The events share the same index, sourcetype and source; the field name, however, is different. The field in the request event is called "ID" while the field in the response event is called "InResponseTo". NOTE: The response event also contains a field called "ID" which should be ignored.

Here is a sample of each type of event (note that these two events share the matching value "_907b4184-e85c-41f2-9a32-c1c735f01510")...

Request:

Oct 26 17:32:29 ServerX knick.knack [10/26/22, 17:32:20:292 EDT] 00018baf id=00000000 om.tivoli.am.fim.trustserver.sts.utilities.IDMappingExtUtils I traceString INFO --saml20-access-policy-utilities.js - [axamf4EhUmaVjkwwd+akl10BbjbDS1vVg6YJhu2F2E8=]:[protocolContext.getAuthnRequest()] <samlp:AuthnRequest xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" AssertionConsumerServiceURL="https://xyz.com" Destination="https://foobar.com" ForceAuthn="true" ID="_907b4184-e85c-41f2-9a32-c1c735f01510"</samlp:Response>

Response:

Oct 26 17:32:29 ServerX knick.knack [10/26/22, 17:32:20:455 EDT] 00018baf id=00000000 .am.fim.saml20.types.SAML20HTTPPostBrowserResponseWriterImpl 1 getStringMessage Request XML message: <samlp:Response xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://dundermifflin.com" ID="FIMRSP_1635e05b-0184-1dc4-803d-f94b11a6d04e" InResponseTo="_907b4184-e85c-41f2-9a32-c1c735f01510"</samlp:Response>

The development flow:
1. Look at all request and response events for a given timeframe.
2. Sometimes, duplicate events show up, so they would need to be deduplicated.
3. If the same value is found for both "ID" and "InResponseTo", that's considered a 100% success rate. Mark it as a match. The sample events provided above would be an example of this.
4. If the same value is not found (i.e., there's a request event but no matching response event), that's considered a 0% success rate. Mark it as a non-match.
5. Add up the number of matches and divide the sum by the total number of request events found. Multiply the quotient by 100 to determine the percentage rate.
6. The results will be visualized using a simple line chart (timechart) showing the average percentage rate over time.

I'm able to compare the overall number of requests against the overall number of responses (regardless of matching field values) to produce a percentage rate, as shown here:

index=zig sourcetype=zag samlp:AuthnRequest
| timechart count as RequestCount
| appendcols [ search index=zig sourcetype=zag samlp:Response | timechart count as ResponseCount ]
| eval ResponseRequestRatio = round(((ResponseCount/RequestCount) * 100),2)
| table _time, RequestCount, ResponseCount, ResponseRequestRatio

How would I modify this query to accommodate the additional rules? I'm guessing that some sort of count could be used to determine the percentage rate, with a "1" for matches and a "0" (zero) for non-matches.
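A hedged sketch of one approach (the helper fields type, join_id, and matched are hypothetical): normalize the correlation key on both sides, dedup, group per key, and timechart the match rate:

index=zig sourcetype=zag (samlp:AuthnRequest OR samlp:Response)
| eval type=if(like(_raw, "%samlp:AuthnRequest%"), "request", "response")
| eval join_id=if(type="request", ID, InResponseTo)
| dedup type join_id
| stats values(type) as types, min(_time) as _time by join_id
| where isnotnull(mvfind(types, "request"))
| eval matched=if(mvcount(types)=2, 1, 0)
| timechart span=1h avg(matched) as MatchRate
| eval MatchRate=round(MatchRate * 100, 2)

Requests with a paired response yield matched=1, orphaned requests yield 0, and orphaned responses are filtered out, so averaging matched per time bucket gives the match percentage. The response event's own ID field is never consulted, because join_id takes InResponseTo on the response side.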
Hi, I understand that Zoom will add a new feature, the "Secret Token", to replace the "Verification Token", which will be retired in August 2023. Will the current Splunk Connect for Zoom add-on work with this feature, or is there a new version scheduled that covers it? Thank you! Luis Felipe
Hi, even though I have read some of the documentation, I have difficulty understanding the difference between a macro and an eventtype. I use macros essentially for index + sourcetype aggregation, but can somebody clearly explain to me the difference between a macro and an eventtype? Thanks
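A hedged sketch of the two side by side (the stanza names web_data and web_errors are hypothetical). A macro is search text expanded in place wherever you invoke it with backticks, and it can take arguments; an eventtype is a saved classification that tags every matching event with an eventtype field you can then search and report on:

macros.conf:
[web_data]
definition = index=web sourcetype=access_combined

eventtypes.conf:
[web_errors]
search = index=web sourcetype=access_combined status>=500

Usage: `web_data` status=404 expands the macro inline, while eventtype=web_errors matches tagged events, so you can also do stats count by eventtype — something a macro cannot give you.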
Dear community, I am using DB Connect with Hive/Spark. I would like to know the benefits of using the Livy Thrift server over Hive/Spark with DB Connect. And, if it is possible and makes sense, what would be the best way to configure DB Connect to connect to the Livy Thrift server? Thanks a lot
Hi all, I'm wondering if it is possible to do a 10-minute search from when you see an event, instead of using fixed 10-minute windows such as | bin _time span=10m as window, since that just aligns windows to the minutes of the hour?
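A hedged sketch using streamstats, whose time_window option slides relative to each event rather than snapping to wall-clock boundaries (the result field name events_last_10m is hypothetical; events must be sorted by _time, which the default search order satisfies):

index=your_index
| streamstats time_window=10m count as events_last_10m

Each event then carries the count of events that occurred in the 10 minutes up to and including it, which you can filter or alert on.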