All Posts

Retracted. @richgalloway's solution will work.
To run the alert at 7:30pm, use a cron schedule of 30 19 * * *. To set the search window, use earliest=-1d@d+19h latest=@d+19h.
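If you configure this in savedsearches.conf rather than through the UI, a minimal sketch could look like this (the stanza name and the search itself are placeholders, not from your environment):

[my_730pm_alert]
search = index=main sourcetype=my_data | stats count
cron_schedule = 30 19 * * *
dispatch.earliest_time = -1d@d+19h
dispatch.latest_time = @d+19h
enableSched = 1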
This looks like a corrupted / non-standard version of JSON. (It would be helpful if you shared the unformatted version of the log, since that is what the rex will be working with!) Try something like this:

| rex mode=sed "s/\"log\": {[^}]*}/\"log\": {}/g"
Hi @akgmail, what do you mean by "%+" in strftime? As @ITWhisperer said, now() and _time are in epoch time, so you can compare them. Please try this (modifying your search):

index=testdata sourcetype=testmydata
| eval diff=tostring(round((now()-_time)/60), "duration"), currentEventTime=strftime(_time,"%Y-%m-%d %H:%M:%S"), currentTimeintheServer=strftime(now(),"%Y-%m-%d %H:%M:%S")
| table currentEventTime currentTimeintheServer diff index _raw

Ciao. Giuseppe
In conditionals, use "like" instead:

| makeresults
| eval msg.message=mvappend("Work Flow Passed | for endpoint XYZ","STATUS - FAILED")
| mvexpand msg.message
``` SPL above is to create sample data only ```
| rename msg.message as message
| eval Status=if(like(message,"%Work Flow Passed | for endpoint XYZ%"),"SUCCESS", "FAIL")
| table _time, message, Status

It also helps to rename fields whose names contain paths, to avoid the need for quoting them.
1. If you have your SSO/MFA data ingested and parsed correctly, and you are using Splunk's TAs, most of them come with out-of-the-box tags that can be used to search for the data type.

Simple example - this will search for authentication data across your defined indexes and present the results (the tags search for authentication data). You can add your sourcetypes as well:

index=linux OR index=Windows OR index=my_SSO_data tag=authentication

You can find the tags via the GUI (the easy way), or inspect the TA itself (eventtypes and tags).

2. If you have not ingested the data, then you need to ensure the below. Example: Okta SSO/MFA - Okta provides authentication data somewhere, in logs or an API; you then need to onboard this data into Splunk, ensure there is a TA that helps with the parsing and tagging, then analyse the data to see what it gives you and run various queries to get the results you are looking for.

Windows event logs normally give you authentication data, based on AD/logon events. Microsoft also provides Azure AD/Entra logs, so if you used these you would again need to ingest that data into Splunk first and then run queries.

Side note: using Splunk you can check which TAs have tags for authentication:

| rest splunk_server=local services/configs/conf-tags
| rename eai:acl.app AS app, title AS tag
| table app tag authentication

This will show you the eventtypes which are associated with tags:

| rest splunk_server=local services/configs/conf-eventtypes
| rename eai:acl.app AS app, title AS eventtype
| table app search eventtype
There might be a more fluid way to do this, but one idea would be to make your alert a two-step process:

1) Add "| addinfo" to your search to get the search SID, and have the alert log an event with that SID instead of sending email.

2) Create the alert and make your alert decision by searching for the new event log, and either using "| rest /services/search/jobs/<SID>" or searching the _internal or _audit indexes to get metadata about that search.
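Roughly, step 1 could look like this sketch (the index names, eventtypes, and collect destination are placeholder assumptions, not your actual environment):

index=my_index (eventtype=A OR eventtype=B) ``` placeholder for your real A-vs-B validation search ```
| stats count AS event_count
| addinfo
| eval sid=info_sid
| collect index=alert_audit

The step-2 alert can then read index=alert_audit, take the sid field, and check that job's metadata (for example in _audit) before deciding whether to fire.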
Thank you a lot for your feedback. Indeed, after hours of testing and troubleshooting, I put the props on the UF as well and IT WORKED!
You've already done what is necessary. A TCP connection to the indexer(s) is all you need. Forwarders are one-way devices. They send data to indexers, but do not obtain search results. Searches and their results go through a search head.
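For reference, that one-way connection is what a minimal outputs.conf on the forwarder defines (the group name, host names, and port are placeholders):

# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997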
Hi all -

I am trying to create what I would think is a relatively simple conditional statement in Splunk.

Use case: I merely want to know if a job has passed or failed; the only thing that is maybe tricky about this is that the only messages we get for pass or fail look like:

msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"

I have tried to create a conditional statement based on the messaging, but I either return a NULL value or the wrong value. If I try:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=if('message.msg'="*Work Flow Passed | for endpoint XYZ*","SUCCESS", "FAIL")
| table _time, Status

then it just shows Status as FAIL (which I know is objectively wrong, because the only message produced for this event is "work flow passed...", which should evaluate to TRUE and display "SUCCESS"). If I try another way:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=case(msg.message="*Work Flow Passed | for endpoint XYZ*", "SUCCESS", msg.message="*STATUS - FAILED*", "FAIL")
| table _time, Status

I receive a NULL value for the Status field...

If it helps, this is how the event looks when I don't add any conditional statement or table: [screenshot not included]

How can I fix this? Thanks!
Hi @gcusello, @ITWhisperer - when adding the regular expression indicated by @ITWhisperer to the search, the fragment still appears in the result; I need it removed.
Hi Giuseppe, thanks for the response. The issue is for both internal and external indexes: the event count and current size are not showing any value. You mentioned the default search path; could you please shed some light on that? Maybe I can explore that option.
Hi @jose_sepulveda, OK, please check my solution or the similar one from @ITWhisperer. Ciao. Giuseppe
Hi @gcusello, I have a service developed in Java that is dockerized. A shared Tomcat image is used, and it is adding these fragments to the service's output logs, which are the ones I really need to view in Splunk. For this reason the response is no longer valid JSON, the visualization is presented as a string, and I need to resolve that situation.
Hi @adrifesa95, how did you install the app? If you followed the instructions at https://docs.splunk.com/Documentation/SSE/3.8.0/User/Intro, open a case with Splunk Support, because this is a Splunk-supported app. Ciao. Giuseppe
Hi @Namo, what's the search you ran? Did you insert the name of the index in your main search, or at least index=*? Maybe the index you're using isn't in the default search path, so you don't find anything. Ciao. Giuseppe
Hi @jose_sepulveda, first of all: do you want to remove a part of your logs before indexing or at search time? If at index time, remember that in this way you change the format of your logs, so the add-ons might not work correctly! Anyway, why do you want to remove a part of your logs? Your request doesn't seem to be an obfuscation. You can do this using the SEDCMD command in props.conf with a substitution regex like the following (note SEDCMD needs a class name, and sed-style replacements use \2 rather than $2):

SEDCMD-remove_prefix = s/([^\}]+\})(.*)/\2/g

Ciao. Giuseppe
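As a minimal sketch of where that setting lives, assuming the events arrive with a hypothetical sourcetype my_java_logs:

# props.conf on the indexer or heavy forwarder (applied at index time)
[my_java_logs]
# sed-style substitution: drops everything up to and including the first closing brace
SEDCMD-remove_prefix = s/([^\}]+\})(.*)/\2/g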
| rex mode=sed "s/log: {[^}]*}/log: {}/g"
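A quick self-contained way to test this (makeresults and the shortened sample event are only for illustration):

| makeresults
| eval _raw="log: {dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}{\"logtopic\":\"x\",\"appname\":\"y\"}"
| rex mode=sed "s/log: {[^}]*}/log: {}/g"

Note this deliberately leaves an empty log: {} prefix; if the empty braces must go as well, a variant such as s/log: {[^}]*}{/log: {/g (an assumption, not tested against every shape of your events) would merge them into the JSON's opening brace.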
I have a scheduled search/alert. It validates that for every Splunk event of type A, there is a type B. If it doesn't see a corresponding B, it will alert. Occasionally I am getting false alerts because Splunk is not able to reach one or more indexers. I'll see the message "The following error(s) occurred while the search ran. Therefore, search results might be incomplete." along with additional details. That means the search doesn't get back all the events, which may include a type B event and cause a false alert to fire. Since Splunk knows it wasn't able to communicate with all the indexers, I'd like to abort the search. Is there anything sort of like the "addinfo" command where I can add information about whether getting all the data was successful, so that I can do a where clause on it and remove all my rows if there were errors? How can I prevent an alert from firing if I didn't get all the results back from the indexers?
I need to filter a part of a log using regex. I have the following log:

log: {dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

I need to remove this fragment:

{dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}

so that the visible log is the following:

log: {"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

There are also outputs where the part I need to filter has fewer fields, or none at all, like these:

log: {dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

log: {}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

In these last two examples I still need to filter the following, respectively:

{dx.trace_sampled=true}
{}

so that the output is finally clean and leaves only what I need:

log: {"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

I hope you can help me, please.