All Posts

Under that subject line, the detail says: "You do not have necessary authorization to access and use this application : App Content Manager. Access to all of its features has been restricted. If you believe this is in error, or if you require access for a specific reason, please reach out to your Splunk administrator for further assistance." But I am the Splunk admin. This app is quite new and not supported by Splunk, so I am trying to get the authors' insights, or to hear from anyone who has experience with it. Much appreciated!
Thanks for the help, Paul! I have tried your tips:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint Atmtransaction*"
| rename msg.message as message
| eval Status=if(like(message,"%Work Flow Passed | for endpoint Atmtransaction%"),"SUCCESS", "FAIL")
| table _time, message, Status

I have now added the correct message (workflow Passed), however the Status is still showing as FAIL...
I have a customer that wants to disable alerting Mon-Fri 5 PM - 6 AM and all day Sat-Sun. It appears that I can only have one schedule per Health Rule. Is it possible to have multiple schedules per Health Rule? Thanks, S/
Correct. This is also known as De Morgan's laws, which relate union and intersection under negation.
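For example, in SPL (the index and status values below are made up for illustration), these two searches return the same events by De Morgan's laws: negating a union of conditions is equivalent to intersecting the negated conditions.

index=web NOT (status=404 OR status=503)
index=web NOT status=404 NOT status=503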
The missing capability is admin_all_objects; it needs to be granted to your role.
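If you manage roles in configuration files rather than through Settings > Roles, a minimal sketch in authorize.conf would look like this (the role name below is just a placeholder):

# hypothetical role name - grant the capability to whichever role needs it
[role_app_content_admin]
admin_all_objects = enabled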
I'm a bit new to Splunk; apologies if I miss anything obvious. I'm looking to selectively block events meeting certain criteria from being indexed.

Here's the current setup:

Splunk Universal Forwarder 9.1.4.0
Windows Server 2019

And the conf:

& 'C:\Program Files\SplunkUniversalForwarder\bin\btool.exe' inputs list
...
[WinEventLog://Security]
blacklist1 = REDACTED
blacklist2 = EventCode="4688" Message="New Process Name: (?i)C:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi)).exe"
blacklist3 = REDACTED
disabled = 0
evt_dc_name =
evt_dns_name =
evt_resolve_ad_obj = 0
host = REDACTED
index = REDACTED
interval = 60
...

Now here's what I see:

No errors around processing this blacklist (if I use an invalid regex, it grumbles).
So many splunk process events. So many.

Not clear on why this blacklist is not working. Any suggestions?

In Splunk, if I show source for the log, I get this:

06/18/2024 01:49:56 PM
LogName=Security
EventCode=4688
EventType=0
ComputerName=REDACTED
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=3063451653
Keywords=Audit Success
TaskCategory=Process Creation
OpCode=Info
Message=A new process has been created.

Creator Subject:
  Security ID: S-1-5-18
  Account Name: REDACTED
  Account Domain: REDACTED
  Logon ID: 0x3E7

Target Subject:
  Security ID: S-1-0-0
  Account Name: -
  Account Domain: -
  Logon ID: 0x0

Process Information:
  New Process ID: 0x1e4c
  New Process Name: C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe
  Token Elevation Type: %%1936
  Mandatory Label: S-1-16-16384
  Creator Process ID: 0x35e4
  Creator Process Name: C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe
  Process Command Line: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe" --ps2

Token Elevation Type indicates the type of token that was assigned to the new process in accordance with User Account Control policy.

Type 1 is a full token with no privileges removed or groups disabled. A full token is only used if User Account Control is disabled or if the user is the built-in Administrator account or a service account.

Type 2 is an elevated token with no privileges removed or groups disabled. An elevated token is used when User Account Control is enabled and the user chooses to start the program using Run as administrator. An elevated token is also used when an application is configured to always require administrative privilege or to always require maximum privilege, and the user is a member of the Administrators group.

Type 3 is a limited token with administrative privileges removed and administrative groups disabled. The limited token is used when User Account Control is enabled, the application does not require administrative privilege, and the user does not choose to start the program using Run as administrator.

And finally, if I match that source to the regex string, it matches, which... should that not mean the event would be blacklisted? Is there any debug-level logging or tooling I should check that might reveal what this is actually doing or not doing? It seems like it should "just work", but, again, I am quite new with Splunk. Thanks for any help, and apologies if this is something obvious that I have missed!
Thank you for the response, @gcusello. I'll give it a try and see how it goes. It's strange that they make it so hard/confusing to buy their product!  
[ { "Parameters": null, "ID": 2185, "Category": null, "OriginatingHost": null, "OriginatingOSUser": null, "OriginatingPIUser": null, "ProcessID": 5300, "Priority": 10, "ProcessHost": null, "ProcessOSUser": "SYSTEM", "ProcessPIUser": null, "Source1": "piarcset", "Source2": "Historical", "Source3": null, "SplunkTime": "1718122575.10669", "Severity": "Warning" } ] "TimeStamp": "\/Date(1718122575106)\/", "Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 11-Jun-24 12:14:45, End: 11-Jun-24 12:16:15, Mode: 5, Status: [-11059] No Good Data For Calculation", "ProgramName": "piarchss", "Category": null, "OriginatingHost": null, "OriginatingOSUser": null, "OriginatingPIUser": null, "ProcessID": 5300, "Priority": 10, "ProcessHost": null, "ProcessOSUser": "SYSTEM", "ProcessPIUser": null, "Source1": "piarcset", "Source2": "Historical", "Source3": null, "SplunkTime": "1718122570.13029", "Severity": "Warning" }, { "Parameters": null, "ID": 2185, "TimeStamp": "\/Date(1718122570130)\/", "Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 11-Jun-24 12:14:40, End: 11-Jun-24 12:16:10, Mode: 5, Status: [-11059] No Good Data For Calculation", "ProgramName": "piarchss", "Category": null, "OriginatingHost": null, "OriginatingOSUser": null, "OriginatingPIUser": null, "ProcessID": 5300, "Priority": 10, "ProcessHost": null, "ProcessOSUser": "SYSTEM", "ProcessPIUser": null, "Source1": "piarcset", "Source2": "Historical", "Source3": null, "SplunkTime": "1718122565.16875", "Severity": "Warning" }, { "Parameters": null, "ID": 2185, "TimeStamp": "\/Date(1718122565168)\/", "Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 11-Jun-24 12:14:35, End: 11-Jun-24 12:16:05, Mode: 5, Status: [-11059] No Good Data For Calculation", "ProgramName": "piarchss", "Category": null, "OriginatingHost": null, "OriginatingOSUser": null, "OriginatingPIUser": null, "ProcessID": 5300, "Priority": 10, "ProcessHost": null, "ProcessOSUser": "SYSTEM", "ProcessPIUser": null, "Source1": "piarcset", "Source2": "Historical", "Source3": null, "SplunkTime": "1718122564.42661", "Severity": "Warning" }, { "Parameters": null, "ID": 2185, "TimeStamp": "\/Date(1718122564426)\/", "Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 11-Jun-24 12:14:34, End: 11-Jun-24 12:16:04, Mode: 5, Status: [-11059] No Good Data For Calculation", "ProgramName": "piarchss", "Category": null, "OriginatingHost": null, "OriginatingOSUser": null, "OriginatingPIUser": null, "ProcessID": 5300, "Priority": 10, "ProcessHost": null, "ProcessOSUser": "SYSTEM", "ProcessPIUser": null, "Source1": "piarcset", "Source2": "Historical", "Source3": null, "SplunkTime": "1718122555.14693", "Severity": "Warning" }, { "Parameters": null, "ID": 2185, "TimeStamp": "\/Date(1718122555146)\/", "Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 11-Jun-24 12:14:25, End: 11-Jun-24 12:15:55, Mode: 5, Status: [-11059] No Good Data For Calculation", "ProgramName": "piarchss", "Category": null, "OriginatingHost": null, "OriginatingOSUser": null, "OriginatingPIUser": null, "ProcessID": 5300, "Priority": 10, "ProcessHost": null, "ProcessOSUser": "SYSTEM", "ProcessPIUser": null, "Source1": "piarcset", "Source2": "Historical", "Source3": null, "SplunkTime": "1718122550.12819", "Severity": 
"Warning" },
@robertlynch2020 - As per the app documentation, there is no direct configuration parameter as of today (2024-06-18) to make that change:
* https://docs.splunk.com/Documentation/Timeline/1.6.2/TimelineViz/TimelineXML
I hope this helps!
Please provide several _raw events as raw text, so the community can help you write the line-breaking configuration.
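For reference, a line-breaking configuration usually ends up as a props.conf stanza along these lines; this is only a sketch with a placeholder sourcetype, and the LINE_BREAKER pattern would need to be adjusted once sample events are available:

# hypothetical sourcetype - replace with your own
[your_sourcetype]
# disable line merging and break events on newlines (adjust the capture group as needed)
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000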
I wish I had a better answer for you, but after doing some testing, phantom.update() just doesn't seem to want to work from within a custom function. There are other functions which have the same problem but it's usually called out in the documentation.  What you've written works perfectly from within a custom code block in a playbook. You may just need to make a single block playbook you can call from a parent if you're planning to use this in multiple places.
Thanks @gcusello. Does that mean the TA and App are installed on the SH for Splunk Cloud Victoria? If that's the case, then it should work as is, shouldn't it? And for Splunk Cloud Classic, it seems like the KV store approach does not work, is that right?
Retracted. @richgalloway's solution will work.
To run the alert at 7:30pm, use a cron schedule of 30 19 * * *. To set the search window, use earliest=-1d@d+19h latest=@d+19h
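Put together in savedsearches.conf, that looks roughly like the sketch below (the stanza name and search are placeholders; the same values can be set in the UI as the alert's cron schedule and time range):

# hypothetical alert - only the scheduling settings matter here
[My 7.30pm Alert]
search = index=main sourcetype=my_data
cron_schedule = 30 19 * * *
dispatch.earliest_time = -1d@d+19h
dispatch.latest_time = @d+19h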
This looks like a corrupted / non-standard version of JSON. (It would be helpful for you to share the unformatted version of the log, since that is what the rex will be working with!) Try something like this:

| rex mode=sed "s/\"log\": {[^}]*}/\"log\": {}/g"
Hi @akgmail,
what do you mean with "%+" in strftime?
As @ITWhisperer said, now() and _time are in epoch time, so you can compare them; please try this (modifying your search):

index=testdata sourcetype=testmydata
| eval diff=tostring(round((now()-_time)/60), "duration"), currentEventTime=strftime(_time,"%Y-%m-%d %H:%M:%S"), currentTimeintheServer=strftime(now(),"%Y-%m-%d %H:%M:%S")
| table currentEventTime currentTimeintheServer diff index _raw

Ciao.
Giuseppe
In conditionals, use "like" instead:

| makeresults
| eval msg.message=mvappend("Work Flow Passed | for endpoint XYZ","STATUS - FAILED")
| mvexpand msg.message
``` SPL above is to create sample data only ```
| rename msg.message as message
| eval Status=if(like(message,"%Work Flow Passed | for endpoint XYZ%"),"SUCCESS", "FAIL")
| table _time, message, Status

It also helps to rename fields with paths to avoid the need for quoting them.
1. If you have your SSO/MFA data ingested and parsed correctly (ideally using Splunk's TAs), most TAs come with out-of-the-box tags that can be used to search for the data type.

Simple example - this will search for authentication data across your defined indexes and present the results (the tags identify authentication data). You can add your own sourcetypes as well:

index=linux OR index=Windows OR index=my_SSO_data tag=authentication

You can find the tags via the GUI (the easy way), or by inspecting the TA itself (eventtypes and tags).

2. If you have not ingested the data, then you need to ensure the below. Example: Okta SSO/MFA - Okta provides authentication data somewhere, in logs or an API; you then need to onboard this data into Splunk, ensure there is a TA that helps with the parsing and tagging, then analyse the data to see what it gives you and run various queries to get the results you are looking for. Windows event logs normally give you authentication data based on AD / logon events; Azure AD / Entra provides it as well, so if you use these you would again need to ingest that data into Splunk first and then run queries.

Side note: using Splunk you can check which TAs have tags for authentication:

| rest splunk_server=local services/configs/conf-tags
| rename eai:acl.app AS app, title AS tag
| table app tag authentication

This will show you the eventtypes which are associated with tags:

| rest splunk_server=local services/configs/conf-eventtypes
| rename eai:acl.app AS app, title AS eventtype
| table app search eventtype
There might be a more fluid way to do this, but one idea would be to make your alert a two-step process:
1) Add "| addinfo" to your search to get the search SID, and have the alert log an event with that SID instead of sending email.
2) Create a second alert that makes the alerting decision by searching for that newly logged event, and either use "| rest /services/search/jobs/<SID>" or search the _internal or _audit indexes to get metadata about that search.
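A rough sketch of step 1, assuming a hypothetical summary index named alert_meta and a placeholder base search; the alert writes its own SID out with collect instead of emailing:

index=main your_base_search_here
| addinfo
| eval alert_sid=info_sid
| stats count by alert_sid
| collect index=alert_meta

For step 2, the follow-up alert can read the logged SID back from alert_meta and pull the job metadata, for example with | rest /services/search/jobs/<SID> (substituting the actual SID), or by searching _audit for that SID.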
Thank you a lot for your feedback. Indeed, after hours of testing and troubleshooting, I put the props on the UF as well and IT WORKED!