All Posts
Hi @richgalloway, I tried this query but am not able to see any results:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData"
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| where fileName="TRIM.UNB.D082923.T045920"
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
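A likely reason this returns nothing: every capture group in the rex is empty, e.g. (?<totalOutputRecords>), so each group matches zero characters and the whole pattern only matches if the log literally contains "totalOutputRecords=,busDt=,...". A sketch of the same rex with [^,]+ inside each group, assuming the values themselves contain no commas:

```
| rex "totalOutputRecords=(?<totalOutputRecords>[^,]+),busDt=(?<busDt>[^,]+),fileName=(?<fileName>[^,]+),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>[^,]+),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>[^,]+),totalClosingBal=(?<totalClosingBal>[^,]+),totalRecordsWritten=(?<totalRecordsWritten>[^,]+),totalRecords=(?<totalRecords>[^,]+)"
```

It is also worth confirming that $Regions$ resolves to a valid sourcetype before investigating the rex.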
Hi @Dustem, let me make sure I understand: you want to discover whether, in a given time period, there was a Windows EventCode=4769 but not a Windows EventCode=4770, is that correct? I suppose you have a common ID to correlate the events. If this is your requirement, you could try something like this:

index=wineventlog EventCode IN (4769,4770)
| stats dc(EventCode) AS EventCode_count BY TGT_Id
| where EventCode_count=1

Ciao. Giuseppe
Dear Splunkers, I am currently facing an issue. We have a lookup on the SHC with some location information, e.g. location.csv:

location
DE
EN

The goal is to ingest data on the indexers only when the location in the events also appears in the lookup. The solution works with ingest_eval and lookup filtering. The question right now is: is it possible to manage this lookup at the SH level and give certain roles permission to add/remove locations on demand? E.g., I update the lookup on the SH and it is replicated to the lookup on the indexer cluster as well. How can I achieve this? Kind Regards
Looping is not supported yet in the platform. There was an announcement at .Conf23 that a Loop Block is coming soon; I would highly recommend waiting for that. Also, @nongingerale, why not just pass all the items into the child playbook and then loop through the values inside that playbook? That is certainly the best practice here. I appreciate that I don't know your use case, but there are many ways to avoid building bespoke loops by using the platform's capabilities.
Hi guys, I want to detect when a service ticket request (Windows event code 4769) occurs and neither of the following corresponding events appears before it: 1. A user ticket (TGT) request, Windows event code 4768. 2. A ticket renewal request, Windows event code 4770.
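A hedged sketch of one way to search for this, assuming the three event codes share a correlating field (TGT_Id is an assumption here; substitute the real correlation field): keep only IDs that produced a 4769 and nothing else in the search window.

```
index=wineventlog EventCode IN (4768, 4769, 4770)
| stats values(EventCode) AS codes BY TGT_Id
| where mvcount(codes)=1 AND codes="4769"
```

Note this only checks for the absence of 4768/4770 within the time range, not that they came strictly before the 4769; enforcing the ordering would need an additional step, e.g. comparing min(_time) per EventCode or using streamstats.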
@Erick995 SOAR will initiate the playbook automation in the order the events are received in the platform. The only thing that may affect this is severity-based prioritisation, e.g. if event 2 has a higher severity than event 1, event 2 would be processed first. I am confused as to why you need it to work this way, as I would expect all event information for a use case to be in one container rather than spread across several. Maybe you could get Splunk to aggregate and fire a single event through?
Hi guys, I want to detect when more than 10 different ports on the same host are probed or scanned within a 15-minute window, and raise an alarm when this is triggered 5 times in a row. If the same pattern is triggered for three consecutive days, another alarm should be triggered.
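As a hedged sketch of the first condition only (the index and the field names dest and dest_port are assumptions; substitute your own), a search scheduled every 15 minutes could count distinct ports per host:

```
index=network
| stats dc(dest_port) AS port_count BY dest
| where port_count > 10
```

The "5 times in a row" and "three consecutive days" conditions are usually layered on top, for example by writing these results to a summary index and running a second scheduled search with streamstats over the summary, or via alert throttling; the right mechanism depends on your setup.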
Are there any security-related concerns here, as this file contains the authToken? Can it be misused in any possible way?
I am also receiving the same error after installing and asking an end user to access it. Can anyone please let me know what extra capabilities are required to access the add-on and add inputs?

Note: The admin user is able to access the add-on app and create inputs. The issue is with the end user (we can't give admin privileges to the end user).

Regards, Ramesh Babu Chedulla
Hello, thank you for this idea. I will try this solution this week. Thanks, Flenwy
Hello, thanks for your reply. I cannot attach the real logs, but here is an example. The log starts with a timestamp, like so:

08:30:23 Started by Sarit Shvartzman
Raw
Raw
Raw
08:32:34 Finished:

I want all of this to be in one event, instead of how it is now, where it breaks on every row.
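If the requirement is to start a new event only at a "Started" timestamp line, this is typically configured in props.conf on the indexer or heavy forwarder. A minimal sketch based purely on the example above (your_sourcetype is a placeholder, and the assumption is that every new block begins with "HH:MM:SS Started"):

```
[your_sourcetype]
SHOULD_LINEMERGE = false
# Break only before a line like "08:30:23 Started by ..."; only the newline
# in the first capture group is consumed, so the timestamp stays with the
# new event, and everything up to the next "Started" line (including the
# "Finished" line) remains in one event.
LINE_BREAKER = ([\r\n]+)\d{2}:\d{2}:\d{2} Started\b
TIME_FORMAT = %H:%M:%S
```

If the real logs have a different start marker, the regex after the capture group would need to match that instead.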
OK. The tstats command specifies a dataset a bit differently than the from command does. So you should be doing:

| tstats count from datamodel=internal_server.server

And it's irrelevant whether it's a docker container or any other way of deploying Splunk, because the commands work the same way regardless of how the software is deployed.
Splunk should automatically be capturing that time into the _time field.  If you still need to extract it into a field though, try :  | rex field=_raw "^(?<time_field>[^\s]+)\s"  
Try looking to see if it has already been extracted - this is usually in a field called _time
Hi All, I have the two logs below.

First log:
2023-09-05 00:17:56.987 [INFO ] [pool-3-thread-1] ReadControlFileImpl - Reading Control-File /absin/CARS.HIERCTR.D090423.T001603

Second log:
2023-09-05 03:55:15.808 [INFO ] [Thread-20] FileEventCreator - Completed Settlement file processing, CARS.HIER.D090423.T001603 records processed: 161094

I want to capture the trailing file identifiers for both logs. My current queries:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Reading Control-File /absin/CARS.HIERCTR."

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Completed Settlement file processing, CARS.HIER."
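One way to pull those trailing identifiers out, sketched with rex (the field names ctrl_file, data_file, and records are my own, and the patterns assume each filename runs up to the next whitespace):

```
index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" ("Reading Control-File /absin/CARS.HIERCTR." OR "Completed Settlement file processing, CARS.HIER.")
| rex "Reading Control-File /absin/(?<ctrl_file>CARS\.HIERCTR\.\S+)"
| rex "Completed Settlement file processing, (?<data_file>CARS\.HIER\.\S+) records processed: (?<records>\d+)"
| table _time ctrl_file data_file records
```

Each rex only populates its fields on the matching events, so both log types can be handled in a single search.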
Hi @smanojkumar, in this case, please try this: | rex "OS\=\"*(?<OS>[^,\"]*).*OSRelease\=\"*(?<OSRelease>[^,\"]*)" that you can test at https://regex101.com/r/SQFX88/1 Ciao. Giuseppe
Without "":

info_search_time=1693969036.181, OS=Linux, isBo=false, isFo=false, SCOPE=Unknown, isVIP=false, OSType=Linux, isCACP=false, isCMDB=false, isLost=false, Country=Unknown, isIndus=false, isMcAfee=true, isStolen=false, OSRelease=Unknown,

With "":

info_search_time=1693969036.181, OS="Windows Server 2019 Standard", isBo=true, isFo=false, SCOPE="IN", isVIP=false, OSType=Win, isCACP=false, isCMDB=true, isLost=false, Country=Germany, isIndus=false, isMcAfee=true, isStolen=false, OSRelease="EL Server 7.4 (Maipo", mcafee_LastCommunication="2023-09-05 20:30:35",
Hey @PickleRick, I went into the settings for the dataset and enabled acceleration (via Edit Acceleration), and the dataset shows up as accelerated in the list of datasets. Shouldn't that have resolved the issue? Also, why do you say it's irrelevant on a docker image? Is it not supposed to work on docker? Do you know of any documentation describing this?
Hi @smanojkumar, if you don't have quotes, you need to be sure about the log format to find a different rule. Could you share some samples of your logs with and without quotes? Ciao. Giuseppe
Hi @gcusello, thanks for your response! In rare cases we don't have " " around OS and OSRelease. What would be a regex that extracts the value in both cases, like OS="Windows", OS=Windows, OSRelease="jhvdhjc", OSRelease=nsvcv? Thanks in advance! Manoj Kumar S