All Posts

I have a lookup file. It has 2 columns, Service and Entity, and 500+ rows. Service has 34 unique values and Entity has 164. I have a dashboard where I want to use values from this lookup as input to the search criteria. I have the following logic. I get the dropdown values for "Service" without any issues, but not for "Entity", even though it's the same lookup file and the same logic. Any ideas?

Snippet:

<input type="dropdown" token="Service" searchWhenChanged="true">
  <label>Service</label>
  <search>
    <query>| inputlookup metadata.csv | dedup service | stats dc(service) by service</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <search>
    <query>| inputlookup metadata.csv | dedup entity | stats dc(entity) by entity</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
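A hunch worth checking (field names in Splunk lookups are case-sensitive, and the post describes the columns as "Service" and "Entity" with capital letters while the queries use lowercase): make sure the field name in the query matches the CSV header exactly. Also, `dedup` followed by `stats dc() by` is redundant. A minimal sketch of a simpler populating query, assuming the header really is lowercase `entity`:

```
| inputlookup metadata.csv
| stats count by entity
| fields entity
```

If the header is actually capitalized, `| stats count by Entity | fields Entity` would be the equivalent.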
@scelikok one last question: do you have support documentation where Splunk indicates that setting RF and SF equal to the indexer count is not a best practice?
We have a Splunk query that pulls down a list of values daily. We are looking to see if we can use Splunk to find a field value that is new today but was not present yesterday, and show it in a stats table. How can this be accomplished?

The idea is:

Yesterday - the Splunk DB Connect query pulls back a result of 5 log lines, all containing the field "name". field=name values - Bob, Kat, Abe, Doug, Sam

Today - the Splunk DB Connect query pulls back a result of 6 log lines, all containing the field "name". field=name values - Bob, Kat, Abe, Doug, Sam, Jim (new value found)

So we would like a stats table or alert that would let us know "Jim" is a new value of the field "name" that did not exist yesterday.
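The comparison above can be sketched as a single search that buckets events into today vs. yesterday and keeps only names seen today alone (`index=your_index` is a placeholder; adjust it to wherever the DB Connect results are indexed):

```
index=your_index earliest=-1d@d latest=now
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) AS days BY name
| where mvcount(days)=1 AND days="today"
```

Any rows returned are values of `name` present today but not yesterday, which also makes this usable as an alert (trigger when the result count is greater than 0).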
Hi, I am trying to divide the logs into different events based on the scenario below. I have one single event currently:

Issuer : hjlhjk
a: xyz
PrivateKey : abc
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

Now I want it as two events:

event1:
Issuer : hjlhjk
a: xyz
PrivateKey : abc

event2:
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

How can I get this? I tried the line breaking below, which is not working:

[sourcetype]
LINE_BREAKER = ([\r\n]+)(PrivateKey)

[sourcetype]
BREAK_ONLY_BEFORE = Issuer
SHOULD_LINEMERGE = false
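One likely fix (a sketch, untested against the real feed): the attempted pattern puts the break at `PrivateKey`, which is the wrong boundary, since each new event should start at `Issuer`. A lookahead breaks on the newline before `Issuer` while keeping the `Issuer` line at the start of the new event:

```
[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Issuer\s*:)
```

Note that `BREAK_ONLY_BEFORE` only applies when `SHOULD_LINEMERGE = true`, so only one of the two approaches should be configured at a time.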
index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats avg(responseTime), max(responseTime) by client_id, servicePath, respStatus

The above query gives me one row per respStatus value. I want to split the respStatus column into 3 columns, so that the table looks like this:

clientID | Service Path | Success count | Error Count | Exception Count | Avg Resp time | Max Resp time
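One way to get that single-row-per-client layout (a sketch built directly on the query above, using SPL's `count(eval(...))` pattern) is to count each status inline instead of grouping by respStatus:

```
index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats count(eval(respStatus="SUCCESS")) AS "Success count",
        count(eval(respStatus="ERRORS")) AS "Error Count",
        count(eval(respStatus="EXCEPTIONS")) AS "Exception Count",
        avg(responseTime) AS "Avg Resp time",
        max(responseTime) AS "Max Resp time"
        by client_id, servicePath
```

Note the avg/max here are computed across all statuses per client/path; if per-status averages are needed, `chart` over a combined field would be an alternative.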
Hi @evinasco08, Yes, that's normal and correct. Sorry for my typo; I edited my reply. I advise keeping RF=2 and SF=2 with 3 indexers.
search1: index="*" sourcetype="*" "Generating Event gz File for*"
search2: index="*" sourcetype="*" "File Processed*"

Only if search1 returns more than 0 results should the search2 alert trigger an email alert.
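A sketch of one way to express this as a single alert search (assuming the two strings never occur in the same event; the `index="*" sourcetype="*"` wildcards are kept from the question but should be narrowed in practice):

```
index="*" sourcetype="*" ("Generating Event gz File for" OR "File Processed")
| eval kind=if(searchmatch("Generating Event gz File for"), "generated", "processed")
| stats count(eval(kind="generated")) AS generated, count(eval(kind="processed")) AS processed
| where generated > 0 AND processed > 0
```

The alert can then simply trigger when the number of results is greater than 0.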
@scelikok thank you. Then, is it normal that the RF and SF appear as "Not Met" until the buckets finish replicating, after which the master node shows "Search Factor is Met" and "Replication Factor is Met"? Is that correct? Besides, you advised me to apply RF=2 and SF=3, but the Replication Factor cannot be less than the Search Factor.
I've been working to recreate a query in Splunk from Microsoft Defender for Endpoint that shows what files users have copied to USB drives. The query works like this:

Step 1: Get all USB mount events.
Step 2: Get all file creation events on drives that are not C.
Step 3: Join the above two data sources by Device ID.
Step 4: Match drive letters and make sure the USB mount time is less than the file create time.

Here's Microsoft's query: Microsoft-365-Defender-Hunting-Queries/Exfiltration/Files copied to USB drives.md at master · microsoft/Microsoft-365-Defender-Hunting-Queries · GitHub

In Splunk I get to step three and then I'm not able to filter values based on that. Below is my query so far. Any suggestions would be helpful.

index=atp category="AdvancedHunting-DeviceFileEvents" properties.InitiatingProcessAccountName!="system" properties.ActionType="FileCreated" properties.FolderPath!="C:\\*" properties.FolderPath!="\\*"
| fields properties.ReportId, properties.DeviceId, properties.InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn, properties.FileName, properties.FolderPath, properties.SHA256, properties.Timestamp, properties.SensitivityLabel, properties.IsAzureInfoProtectionApplied
| rename properties.ReportId as ReportId, properties.DeviceId as DeviceId, properties.InitiatingProcessAccountDomain as InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName as InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn as InitiatingProcessAccountUpn, properties.FileName as FileName, properties.FolderPath as FolderPath, properties.SHA256 as SHA256, properties.Timestamp as Timestamp, properties.SensitivityLabel as SensitivityLabel, properties.IsAzureInfoProtectionApplied as IsAzureInfoProtectionApplied
| eval Timestamp_epoch = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| sort DeviceId, Timestamp desc
| join type=inner left=L right=R where L.DeviceId = R.DeviceId
    [search index=atp category="AdvancedHunting-DeviceEvents" properties.ActionType="UsbDriveMounted"
    | spath input=properties.AdditionalFields
    | fields properties.DeviceId, properties.DeviceName, DriveLetter, properties.Timestamp, ProductName, SerialNumber, Manufacturer
    | sort properties.DeviceId, properties.Timestamp desc
    | rename properties.DeviceId as DeviceId, properties.DeviceName as DeviceName, properties.Timestamp as MountTime
    | eval MountTime_epoch = strptime(MountTime, "%Y-%m-%dT%H:%M:%S.%6N%Z")]
| table L.FolderPath, R.DriveLetter, R.MountTime, R.MountTime_epoch, L.Timestamp, L.Timestamp_epoch
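For step 4, a hedged sketch of the missing filter (assuming the mount event's `DriveLetter` is a prefix of `FolderPath`, e.g. "E:" vs "E:\\file.txt"; single quotes are needed in `where` because the joined field names contain dots):

```
| where like('L.FolderPath', 'R.DriveLetter' . "%") AND 'R.MountTime_epoch' < 'L.Timestamp_epoch'
```

Placed just before the final `table`, this keeps only file creations on the mounted drive letter that happened after the mount time.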
What is it specifically about those events that makes you want to get rid of them? (The "why" isn't important; what we probably need is "what in that event is the important bit that tells you that you can get rid of it".)

ALSO: the formatting of that event may have been broken - if you can edit your post and paste it in again, this time using the </> button to paste it in as code, that might be helpful!

But pretending anything with `comm="elasticsearch"` can be gotten rid of, then...

1) Read the first section of this on discarding certain events and keeping the rest; it's not long, but it's the pattern we'll use here: https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues

2) For this case, you'll want to create a local/props.conf entry in either the TA you are messing around with, or possibly in a new, specific tiny app you build just for these fixes:

[source::/var/log/audit/audit.log]
TRANSFORMS-null = setnull

3) Then, as the docs say, you'll want a local/transforms.conf entry like this one:

[setnull]
REGEX = comm="elasticsearch"
DEST_KEY = queue
FORMAT = nullQueue

I don't believe the quotes need escaping in that REGEX line, though I reserve the right to be wrong about that. Test, see if it works, and let us know!
That specific error is usually caused by having a Search Head Cluster and trying to edit configs directly on a Search Head Member, instead of editing them on the Deployer and then deploying the change. See this for more information: https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/PropagateSHCconfigurationchanges If that does not seem to be the problem here, then reply back with a few more specifics!
Hello, Where does Splunk get the data from CrowdStrike to form the Splunk drilldown dashboards under Detections and Events called "CrowdStrike Detections Allowed/Blocked Breakdown" and "CrowdStrike Events Allowed/Blocked Breakdown"? My confusion is that in the CrowdStrike Falcon console I don't see the terms "Blocked/Allowed" being used for detections or events, so I need to know how Splunk is correlating those drilldown dashboard sections to CrowdStrike. What data does Splunk use from CrowdStrike to create those Blocked/Allowed sections?
My first reasonable thought is that you just need to rewrite the two searches and combine them into one. We can help with this!  What do you have for search 1 and search 2 right now? (Don't forget to use the <code> button to paste searches, and if you have to obfuscate a bit of it, feel free - but try to keep the same structure to the searches!)
Hey everyone! We just started using Splunk ES and got it up and running fairly well, and I have a couple of questions I'm hoping to get some guidance on, or maybe a point in the right direction. I would like to somehow set up the ability for analysts to run local scripts in an adaptive response that use dynamic user input as variables to query external APIs. Another scenario I was hoping we could use would be using specific tokens/fields as the dynamic variable for these scripts, and just giving the analyst the output in the adaptive response when they are run. Are any of these scenarios possible with ES? We have tried to find a way to do this but so far have not come up with any successful implementation. Is there any documentation on implementing something like this? Any help would be very much appreciated!
Hi, I have two Splunk searches, search-1 and search-2. I have to create a Splunk alert for search-2 based on search-1: if the search-1 count is greater than 0, then trigger the search-2 alert.

Regards,
vch
Hello, How can I click a button or a link to run a search and download a CSV file in Dashboard Studio? At this time, I have to click the magnifying glass to open a search, then click "Export" to download the CSV file. I don't have access to the REST API or Splunk Developer. Please suggest. Thank you for your help.
I figured out the issue. The API fields needed to be double quoted or the reference broke. I assume it has something to do with the message being a JSON object. Outside of that minor syntax issue, your solution worked. Thank you!
Hi @evinasco08, It may take some time for the third indexer to get replicated copies from the other indexers and make them searchable. Did you wait long enough for these operations to finish? It is normal that your search and replication factors are not met while migrating, because the cluster has only two copies of some buckets. You can monitor this process on the Bucket Status page; you should have seen a lot of pending buckets. The cluster will reach a complete state after these fix-ups are finished. After rolling back to RF=2 and SF=2, excess buckets are normal: the cluster manager was trying to replicate buckets to match the RF=3, SF=3 state, and when you rolled back, those third copies became excess. If you want to keep RF=2 and SF=2, you can simply and safely remove the excess buckets from the Bucket Status page. Setting RF and SF equal to the indexer count is not a best practice, because if any of your indexers has a problem or restarts, your cluster will not be able to reach a complete state, since it no longer has enough peers. I advise keeping RF=2 and SF=2 with 3 indexers.
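For reference, these settings live in server.conf on the cluster manager; a minimal sketch of the recommended values (`mode = manager` on recent Splunk versions, `mode = master` on older ones):

```
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
```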
Hi @Amit.Bisht, Thanks for letting me know! Could you come back here and share the outcome of the Support case?
Good afternoon, I have a Splunk architecture:

1 search head
2 indexers in a cluster
1 master node / license server
1 Monitoring Console / deployment server
2 heavy forwarders
SF=2, RF=2

I added a new indexer to the cluster, and after that tried to change the RF and SF, both to 3. But when I change the values from Splunk Web on the master node and restart the instance, the platform shows me the following message:

Then I did a rollback, returning to SF=2 and RF=2, and everything was normal, but the bucket status shows I need to change the SF and RF, and I need to know if this will fix the issues with the indexes. Regards