All Posts

I am working with event data in Splunk where each event contains a command with multiple arguments. I'm extracting these arguments and their associated values using regex, resulting in multi-value fields within Splunk. However, I'm encountering a challenge where some arguments do not have an associated value, and for these cases I would like to set their values to `true`. Here's the SPL I'm using for extraction:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"

What I need is to refine this SPL so that, after extraction, any argument without a value is automatically assigned a value of `true`. After setting the default values, I would then like to use `mvexpand` to separate each argument-value pair into its own event. Could you provide guidance on how to adjust my regex or SPL command to accomplish this within Splunk?
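One pattern that might work here (a sketch, not tested against your data): capture each flag together with its optional value as a single token, `mvexpand` first, and only then split and default. Keeping flag and value in one field matters because two separate multi-value fields of different lengths lose track of which flags had no value. The character class is trimmed for illustration and assumes values never start with a dash:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<pair>--?[\w\-.@$#]+(?:\s+(?!-)[^\s]+)?)"
| mvexpand pair
| rex field=pair "^(?<flag>--?[^\s]+)(?:\s+(?<value>.+))?$"
| eval value=coalesce(value, "true")

After the `mvexpand`, each event holds exactly one pair, so the second `rex` and the `coalesce` operate on single values rather than misaligned multi-value fields.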
Will this add-on integrate with devices managed in Aruba Central as well?
Yes, it does work with Google Cloud Storage buckets, as GCS is S3-compatible. You can use S3 interoperability to create a Kubernetes secret to authenticate with SmartStore. The operator's smartstore or appFramework features can be used for the configuration.
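For reference, a rough sketch of what that can look like with the Splunk Operator; the secret name, bucket path, and volume name are illustrative, and the exact spec fields should be checked against the operator version you run. The access/secret keys are GCS HMAC keys created via the bucket's S3 interoperability settings:

kubectl create secret generic gcs-s3-secret \
  --from-literal=s3_access_key=<HMAC access key> \
  --from-literal=s3_secret_key=<HMAC secret key>

Then in the Custom Resource spec, something along these lines:

smartstore:
  volumes:
    - name: gcs-vol
      path: <bucket-name>/smartstore
      endpoint: https://storage.googleapis.com
      secretRef: gcs-s3-secret
  indexes:
    - name: main
      volumeName: gcs-vol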
These logs are collected via a scripted input using a .bat file. There are several lines in one event; I only showed six lines per event, but the repetition is the same, with more lines in between PrivateKey and Issuer.
Great.  Those two searches should be easy to combine into one.  Unfortunately, I've thought about this and I'm not sure I have quite enough information yet, because I feel there's a *lot* still left unsaid.  So it would be great if you could describe the use case in a little more detail just using words and English, ignoring how you think the Splunk solution will be formulated. I'm guessing something like: "whenever a new gz file is created, we need to check if that file was also processed or not and send an email with that information as an alert."  That leaves as open questions:

- how long is the time period involved
- how often will you have this alert scheduled for (different from the first question!)
- is it a 1 to 1 relationship between "create" events and "processing" events
- what's the maximum time difference between those two events
- does it matter more if a file gets created but not processed, or does that situation matter less, or is this actually the only thing that matters
- do you already have the filename being extracted as a field in these two events
- how often do you expect the pair of messages (daily? hourly? hundreds per second?)

The reason for so many questions is that there are quite a few ways to approach this; some may be better in certain circumstances, some may be better in others. All in all, the details matter, but I'm sure if we get good answers to those (and perhaps a sample of the two events too) we'll get you on your way soon.
Or is it possible that the issue is related to the same lookup file being referenced for the next input dropdown, subsequently causing the issue?
I have a lookup file. It has 2 columns, Service and Entity, and 500+ rows. Service has 34 unique values and Entity has 164.  I have a dashboard where, for a search, I want to use values from this lookup as input to the search criteria. I have the following logic. I get the dropdown values for "Service" without any issues, but not for "Entity", even though it's the same lookup file and the same logic.   Any ideas? Snippet:

<input type="dropdown" token="Service" searchWhenChanged="true">
  <label>Service</label>
  <search>
    <query>
      |inputlookup metadata.csv | dedup service | stats dc(service) by service
    </query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <search>
    <query>
      |inputlookup metadata.csv | dedup entity | stats dc(entity) by entity
    </query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
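One thing worth checking, offered as a possibility rather than a diagnosis: field names returned by `inputlookup` are case-sensitive and must match the CSV headers exactly, so if the columns are really named Service and Entity (capitalized), a query against lowercase entity returns nothing. A simpler populating query to try, assuming capitalized headers:

|inputlookup metadata.csv | stats count by Entity | fields Entity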
@scelikok last question: do you have support documentation where Splunk indicates that setting RF and SF equal to the indexer count is not a best practice?
We have a Splunk query that pulls down a list of values daily.  We are looking to see if we can use Splunk to find the field value that is new today but was not present yesterday, and show it in a stats table. How can this be accomplished?  The idea is:

Yesterday - the Splunk DB Connect query pulls back a result of 5 log lines, all containing the field "name". field=name values - Bob, Kat, Abe, Doug, Sam

Today - the Splunk DB Connect query pulls back a result of 6 log lines, all containing the field "name". field=name values - Bob, Kat, Abe, Doug, Sam, Jim (new value found)

So we would like to show a stats table or alert that would let us know "Jim" is a new field value for name that did not exist yesterday.
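One common shape for this, sketched with a placeholder index name: search across both days, tag each event as today or yesterday, then keep names that only appear today.

index=your_db_index earliest=-1d@d latest=now
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) as days by name
| where mvcount(days)=1 AND days="today"

The final `where` keeps only names whose events all fall in today's window, i.e. values with no counterpart yesterday.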
Hi, I am trying to divide the logs into different events based on the scenario below. I currently have one single event:

Issuer : hjlhjk
a: xyz
PrivateKey : abc
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

Now I want it as two events:

event1:
Issuer : hjlhjk
a: xyz
PrivateKey : abc

event2:
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

How can I get this? I tried the line breaking below, which is not working:

[sourcetype]
LINE_BREAKER = ([\r\n]+)(PrivateKey)

[sourcetype]
BREAK_ONLY_BEFORE = Issuer
SHOULD_LINEMERGE = false
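A sketch of one way this is often handled, assuming every record starts with an Issuer line: break before Issuer using a zero-width lookahead. The first capture group in LINE_BREAKER is discarded, so a plain `(PrivateKey)` capture would work, but a `(Issuer)` capture would eat the Issuer text; the lookahead keeps it at the start of the new event:

[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Issuer)

Also note that since this is a scripted input, these settings have to be applied where the data is parsed (the indexer or a heavy forwarder), not on a universal forwarder.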
index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats avg(responseTime), max(responseTime) by client_id, servicePath, respStatus

The above query gives me the output as a table with one row per respStatus value. I want to split the respStatus column into 3 columns, so the table looks something like this:

clientID | Service Path | Success count | Error Count | Exception Count | Avg Resp time | Max Resp time
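One way to get that layout, sketched against the same field names: count each status inline with eval conditions instead of grouping by respStatus, so each status becomes its own column.

index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats count(eval(respStatus="SUCCESS")) as "Success count",
        count(eval(respStatus="ERRORS")) as "Error Count",
        count(eval(respStatus="EXCEPTIONS")) as "Exception Count",
        avg(responseTime) as "Avg Resp time",
        max(responseTime) as "Max Resp time"
  by client_id, servicePath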
Hi @evinasco08, yes, that's normal and correct. Sorry for my typo, I edited my reply. I advise keeping RF=2 and SF=2 with 3 indexers.
search1: index="*" sourcetype="*" "Generating Event gz File for*"
search2: index="*" sourcetype="*" "File Processed*"

Only if search1's result count is greater than 0 should search2's alert trigger an email alert.
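For what it's worth, a rough sketch of how the two could be combined into a single alert, assuming the filename is extractable from both messages and the real index/sourcetype replace the wildcards; the window and schedule depend on the answers to the questions in the earlier reply:

index="*" sourcetype="*" ("Generating Event gz File for*" OR "File Processed*")
| eval stage=if(searchmatch("Generating Event gz File for"), "created", "processed")
| rex field=_raw "(?:Generating Event gz File for|File Processed)\s+(?<file>\S+)"
| stats values(stage) as stages by file
| where mvcount(stages)=1 AND stages="created"

The result is the set of files that were created but never processed within the search window, which is what the alert would fire on.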
@scelikok thank you. Then, is it normal that the RF and SF appear as "Not Met" until the buckets finish replicating, and that afterwards the master node would show "Search Factor is Met" and "Replication Factor is Met"? Is that correct? Besides, you advised me to apply RF=2 and SF=3, but the Replication Factor cannot be less than the Search Factor.
I've been working to recreate a query in Splunk from Microsoft Defender for Endpoint that shows what files users have copied to USB drives. The query works like this:

Step 1: Get all USB mount events.
Step 2: Get all file creation events on drives that are not C.
Step 3: Join the above two data sources by Device ID.
Step 4: Match drive letters and make sure the USB mount time is less than the file create time.

Here's Microsoft's query: Microsoft-365-Defender-Hunting-Queries/Exfiltration/Files copied to USB drives.md at master · microsoft/Microsoft-365-Defender-Hunting-Queries · GitHub

In Splunk I get to step three and then I'm not able to filter values based on that. Below is my query so far. Any suggestions would be helpful.

index=atp category="AdvancedHunting-DeviceFileEvents" properties.InitiatingProcessAccountName!="system" properties.ActionType="FileCreated" properties.FolderPath!="C:\\*" properties.FolderPath!="\\*"
| fields properties.ReportId, properties.DeviceId, properties.InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn, properties.FileName, properties.FolderPath, properties.SHA256, properties.Timestamp, properties.SensitivityLabel, properties.IsAzureInfoProtectionApplied
| rename properties.ReportId as ReportId, properties.DeviceId as DeviceId, properties.InitiatingProcessAccountDomain as InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName as InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn as InitiatingProcessAccountUpn, properties.FileName as FileName, properties.FolderPath as FolderPath, properties.SHA256 as SHA256, properties.Timestamp as Timestamp, properties.SensitivityLabel as SensitivityLabel, properties.IsAzureInfoProtectionApplied as IsAzureInfoProtectionApplied
| eval Timestamp_epoch = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| sort DeviceId, Timestamp desc
| join type=inner left=L right=R where L.DeviceId = R.DeviceId
    [search index=atp category="AdvancedHunting-DeviceEvents" properties.ActionType="UsbDriveMounted"
    | spath input=properties.AdditionalFields
    | fields properties.DeviceId, properties.DeviceName, DriveLetter, properties.Timestamp, ProductName, SerialNumber, Manufacturer
    | sort properties.DeviceId, properties.Timestamp desc
    | rename properties.DeviceId as DeviceId, properties.DeviceName as DeviceName, properties.Timestamp as MountTime
    | eval MountTime_epoch = strptime(MountTime, "%Y-%m-%dT%H:%M:%S.%6N%Z")]
| table L.FolderPath, R.DriveLetter, R.MountTime, R.MountTime_epoch, L.Timestamp, L.Timestamp_epoch
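For step four, one hedged suggestion: with the aliased join, the left/right fields carry dotted prefixes, so they need single quotes inside eval/where to be read as field names. Something along these lines, assuming DriveLetter holds a bare letter like "E" and FolderPath starts with that drive letter:

| eval FileDrive=substr('L.FolderPath', 1, 1)
| where FileDrive='R.DriveLetter' AND 'L.Timestamp_epoch' > 'R.MountTime_epoch'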
What is it specifically about those events that you want to get rid of them?  (The "why" isn't important; what we probably need is "what in that event is the important bit that tells you that you can get rid of it".)

ALSO

The formatting of that event may have been broken - if you can edit your post and paste it in again, but this time use the </> button to paste it in as code, that might be helpful!

But pretending anything from `comm="elasticsearch"` can be gotten rid of, then...

1) Read the first section of this on discarding certain events and keeping the rest; it's not long, but it's the pattern we'll use here.  https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues

2) For this case, you'll want to create a local/props.conf entry in either the TA you are messing around with, or possibly in a new, specific tiny app you build just for these fixes:

[source::/var/log/audit/audit.log]
TRANSFORMS-null = setnull

3) Then, as the docs say, you'll want a local/transforms.conf entry like this one:

[setnull]
REGEX = comm="elasticsearch"
DEST_KEY = queue
FORMAT = nullQueue

I don't believe the quotes need escaping in that REGEX line, though I reserve the right to be wrong about that.  Test, see if it works and let us know!
That specific error is usually caused by having a Search Head Cluster and trying to edit configs on a Search Head Member directly, instead of editing them on the Deployer and then deploying them. See this for more information: https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/PropagateSHCconfigurationchanges

If that does not seem to be the problem here, then reply back with a few more specifics!
Hello, where does Splunk get the data from CrowdStrike to form the Splunk drilldown dashboards under Detections and Events called "CrowdStrike Detections Allowed/Blocked Breakdown" and "CrowdStrike Events Allowed/Blocked Breakdown"? My confusion is that in the CrowdStrike Falcon console I don't see the terms "Blocked/Allowed" being used for detections or events, so I need to know how Splunk is correlating those drilldown dashboard sections to CrowdStrike. What data does Splunk use from CrowdStrike to create those Blocked/Allowed sections in Splunk?
My first reasonable thought is that you just need to rewrite the two searches and combine them into one. We can help with this!  What do you have for search 1 and search 2 right now? (Don't forget to use the <code> button to paste searches, and if you have to obfuscate a bit of it, feel free - but try to keep the same structure to the searches!)
Hey everyone! We just started using Splunk ES; we got it up and running fairly well, and I have a couple of questions I'm hoping to get some guidance on, or maybe a point in the right direction. I would like to somehow set up the ability for analysts to run local scripts as adaptive responses that use dynamic user input as variables to query external APIs. Another scenario I was hoping we could use would be using specific tokens/fields as the dynamic variables for these scripts, and just giving the analysts the output in the adaptive response when they are run. Are any of these scenarios possible with ES? We have tried to find a way to do this but so far have not come up with any successful implementation. Is there any documentation on implementing something like this? Any help would be very much appreciated!
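Not an authoritative answer, but custom adaptive response actions in ES are typically built on Splunk's custom alert action framework: a Python script in the app's bin directory that Splunk invokes with --execute and feeds a JSON payload on stdin, whose "configuration" block carries the action's parameters (which can be populated from tokens/fields on the notable event). A minimal sketch of the script side, with a hypothetical parameter name lookup_value:

import json
import sys

def main():
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        # Splunk passes the runtime payload as JSON on stdin
        payload = json.loads(sys.stdin.read())
        config = payload.get("configuration", {})
        # Hypothetical parameter defined in alert_actions.conf,
        # populated from a token/field by the analyst
        target = config.get("lookup_value", "")
        sys.stderr.write("INFO querying external API for %s\n" % target)
        # ... call the external API here and write the output ...
        sys.exit(0)

if __name__ == "__main__":
    main()

Messages written to stderr end up in Splunk's internal logs, which is the usual way to surface output/diagnostics from these actions; the parameter itself would be declared in alert_actions.conf plus a setup HTML/dashboard so analysts can supply the value at run time.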