I wish I had a better answer for you, but after doing some testing, phantom.update() just doesn't seem to work from within a custom function. There are other functions with the same problem, but it's usually called out in the documentation. What you've written works perfectly from within a custom code block in a playbook. You may just need to make a single-block playbook you can call from a parent if you're planning to use this in multiple places.
Thanks @gcusello. Does that mean the TA and app are installed on the SH for Splunk Cloud Victoria? If that's the case, then it should work as is, shouldn't it? And for Splunk Cloud Classic, it seems like the KV store approach does not work, is that right?
Retracted. @richgalloway 's solution will work.
To run the alert at 7:30pm, use a cron schedule of 30 19 * * *. To set the search window, use earliest=-1d@d+19h latest=@d+19h
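As a sanity check on what those time modifiers resolve to, here's a small Python sketch (not Splunk itself, just an emulation of the @d snap and offsets; the firing date is hypothetical):

```python
from datetime import datetime, timedelta

def snap_to_day(ts):
    """Emulate Splunk's @d snap: truncate to midnight."""
    return ts.replace(hour=0, minute=0, second=0, microsecond=0)

# Hypothetical firing time for the cron schedule 30 19 * * *
fired = datetime(2024, 5, 2, 19, 30)

# earliest=-1d@d+19h: go back one day, snap to midnight, add 19 hours
earliest = snap_to_day(fired - timedelta(days=1)) + timedelta(hours=19)
# latest=@d+19h: snap to midnight, add 19 hours
latest = snap_to_day(fired) + timedelta(hours=19)

print(earliest, latest)  # 2024-05-01 19:00:00 2024-05-02 19:00:00
```

So each run searches exactly the 24 hours ending at 7:00pm on the day it fires, leaving a 30-minute buffer for late-arriving events.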
This looks like a corrupted / non-standard version of JSON (it would be helpful for you to share the unformatted version of the log, since that is what the rex will be working with!). Try something like this:

| rex mode=sed "s/\"log\": {[^}]*}/\"log\": {}/g"
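To see what that sed-style substitution does, here's a quick Python sketch using re.sub with the same pattern (the sample event is made up):

```python
import re

# Made-up raw event containing a nested "log" object
raw = '{"ts": "2024-01-01", "log": {"level": "INFO", "msg": "x"}, "ok": true}'

# Equivalent of: rex mode=sed "s/\"log\": {[^}]*}/\"log\": {}/g"
cleaned = re.sub(r'"log": \{[^}]*\}', '"log": {}', raw)
print(cleaned)  # {"ts": "2024-01-01", "log": {}, "ok": true}
```

Note the [^}]* trick only works while the nested object is flat — if the "log" object itself contains a }, the match stops early.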
Hi @akgmail, what do you mean by "%+" in strftime? As @ITWhisperer said, now() and _time are in epoch time, so you can compare them. Please try this (modifying your search):

index=testdata sourcetype=testmydata
| eval diff=tostring(round((now()-_time)/60), "duration"), currentEventTime=strftime(_time,"%Y-%m-%d %H:%M:%S"), currentTimeintheServer=strftime(now(),"%Y-%m-%d %H:%M:%S")
| table currentEventTime currentTimeintheServer diff index _raw

Ciao. Giuseppe
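The key point is that epoch values are plain Unix timestamps, so subtraction just works; a small Python sketch with a made-up event age:

```python
import time
from datetime import timedelta

now = time.time()
_time = now - 3725          # hypothetical event timestamp, 1h 2m 5s ago

diff_seconds = now - _time  # both are epoch values, so plain subtraction works
print(timedelta(seconds=round(diff_seconds)))  # 1:02:05
```

This is the same arithmetic the eval performs before formatting with strftime or tostring.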
In conditionals, use "like" instead:

| makeresults
| eval msg.message=mvappend("Work Flow Passed | for endpoint XYZ","STATUS - FAILED")
| mvexpand msg.message
``` SPL above is to create sample data only ```
| rename msg.message as message
| eval Status=if(like(message,"%Work Flow Passed | for endpoint XYZ%"),"SUCCESS", "FAIL")
| table _time, message, Status

It also helps to rename fields with paths to avoid the need for quoting them.
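Splunk's like() uses SQL-style wildcards (% for any run of characters, _ for a single character), not the * globbing used in the base search. A rough Python emulation, just to illustrate the matching rules:

```python
import re

def splunk_like(value, pattern):
    """Rough emulation of Splunk's like(): % = any run, _ = one char."""
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, value) is not None

print(splunk_like("Work Flow Passed | for endpoint XYZ",
                  "%Work Flow Passed | for endpoint XYZ%"))  # True
print(splunk_like("STATUS - FAILED", "%Work Flow Passed%"))  # False
```

This is why the original eval with "*...*" failed: inside eval, * is just a literal asterisk, so the string comparison never matches.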
1. If you have your SSO/MFA data ingested and parsed correctly, and you are using Splunk's TAs, most of them come with out-of-the-box tags that can be used to search for the data type.

Simple example - this will search for authentication data across your defined indexes and present the results (the tags search for authentication data). You can add your own sourcetypes as well:

index=linux OR index=Windows OR index=my_SSO_data tag=authentication

You can find the tags via the GUI (the easy way), or inspect the TA itself (eventtypes and tags).

2. If you have not ingested the data yet, then you need to ensure the below. Example: Okta SSO/MFA - Okta would provide authentication data somewhere, in logs or via API. You then need to onboard this data into Splunk, ensure there is a TA that helps with the parsing and tagging, then analyse the data to see what it gives you, and run various queries to get the results you are looking for. Windows event logs normally give you authentication data, based on AD / logon events; the same applies to Azure AD / Entra, so if you used these you would again need to ingest that data into Splunk first and then run queries.

Side note: using Splunk you can check which TAs have tags for authentication:

| rest splunk_server=local services/configs/conf-tags
| rename eai:acl.app AS app, title AS tag
| table app tag authentication

This will show you the eventtypes which are associated with tags:

| rest splunk_server=local services/configs/conf-eventtypes
| rename eai:acl.app AS app, title AS eventtype
| table app search eventtype
There might be a more fluid way to do this, but one idea would be to make your alert a two-step process:

1) Add "| addinfo" to your search to get the search SID, and have the alert log an event with that SID instead of sending email.

2) Create the alert and make your alert decision by searching for the new event log, and either using "| rest /services/search/jobs/<SID>", or searching the _internal or _audit indexes to get metadata about that search.
Thank you a lot for your feedback. Indeed, after hours of testing and troubleshooting... I put the props on the UF as well and IT WORKED!
You've already done what is necessary. A TCP connection to the indexer(s) is all you need. Forwarders are one-way devices. They send data to indexers, but do not obtain search results. Searches and their results go through a search head.
Hi all -

I am trying to create what I would think is a relatively simple conditional statement in Splunk.

Use case: I merely want to know if a job has passed or failed; the only thing that is maybe tricky about this is that the only messages we get for pass or fail look like:

msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"

I have tried to create a conditional statement based on the messaging, but I either return a NULL value or the wrong value. If I try:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=if('message.msg'="*Work Flow Passed | for endpoint XYZ*","SUCCESS", "FAIL")
| table _time, Status

then it just shows Status as FAIL (which I know is objectively wrong, because the only message produced for this event is "work flow passed...", which should evaluate to TRUE and display "SUCCESS"). If I try another way:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=case(msg.message="*Work Flow Passed | for endpoint XYZ*", "SUCCESS", msg.message="*STATUS - FAILED*", "FAIL")
| table _time, Status

I receive a NULL value for the Status field... If it helps, this is how the event looks when I don't add any conditional statement or table:

How can I fix this?? Thanks!
Hi @gcusello, @ITWhisperer - when adding the regular expression indicated by @ITWhisperer to the search, the fragment still appears in the results, and I need to remove it.
Hi Giuseppe, thanks for the response. The issue is that for both internal and external indexes, the event count and current size are not showing any value. You mentioned the default search path; could you please shed some light on that? Maybe I can explore that option.
Hi @jose_sepulveda , ok, please, check my solution or the one from @ITWhisperer that's similar. Ciao. Giuseppe
Hi @gcusello, I have a service developed in Java that is dockerized. A shared Tomcat image is used that is adding these fragments to the service's output logs, which are the ones I really need to view in Splunk. For this reason the response is no longer valid JSON, the visualization is presented as a string, and I need to resolve that situation.
Hi @adrifesa95, how did you install the app? If you followed the instructions at https://docs.splunk.com/Documentation/SSE/3.8.0/User/Intro, open a case with Splunk Support, because this is a Splunk-supported app. Ciao. Giuseppe
Hi @Namo, what's the search you ran? Did you insert the name of the index in your main search, or at least index=*? Maybe the index you're using isn't in the default search path, so you don't find anything. Ciao. Giuseppe
Hi @jose_sepulveda, first of all: do you want to remove a part of your logs before indexing, or at search time? If at index time, remember that in this way you change the format of your logs, so the add-ons might not work correctly! Anyway, why do you want to remove a part of your logs? Your request doesn't seem to be about obfuscation. Anyway, you can do this using the SEDCMD command in props.conf with a substitution regex like the following:

SEDCMD = s/([^\}]+\})(.*)/$2/g

Ciao. Giuseppe
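For reference, here's what that substitution does to a made-up event, sketched with Python's re.sub (which behaves like the g-flagged sed expression here):

```python
import re

# Made-up event: a non-JSON prefix ending at the first "}" before the payload
raw = 'tomcat-noise {stray} {"real": "payload"}'

# Equivalent of: SEDCMD = s/([^\}]+\})(.*)/$2/g
cleaned = re.sub(r'([^\}]+\})(.*)', r'\2', raw)
print(cleaned)  # ' {"real": "payload"}' (leading space is kept)
```

Since [^\}]+ cannot cross a }, the regex strips everything up to and including the first } and keeps the rest of the event.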
| rex mode=sed "s/log: {[^}]*}/log: {}/g"