All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all - I am trying to create what I would think is a relatively simple conditional statement in Splunk.

Use case: I merely want to know whether a job has passed or failed; the only thing that is maybe tricky about this is that the only messages we get for pass or fail look like:

msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"

I have tried to create a conditional statement based on the messaging, but I either return a NULL value or the wrong value. If I try:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=if('message.msg'="*Work Flow Passed | for endpoint XYZ*","SUCCESS", "FAIL")
| table _time, Status

then it just shows Status as FAIL (which I know is objectively wrong, because the only message produced for this event is "Work Flow Passed...", which should evaluate to TRUE and display "SUCCESS"). If I try another way:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=case(msg.message="*Work Flow Passed | for endpoint XYZ*", "SUCCESS", msg.message="*STATUS - FAILED*", "FAIL")
| table _time, Status

I receive a NULL value for the Status field. If it helps, this is how the event looks when I don't add any conditional statement or table. How can I fix this? Thanks!
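A note on why both attempts misfire (hedged, not tested against this data): the eval `=` operator compares literal strings, so the `*` characters are not treated as wildcards inside `if()` or `case()` - they only act as wildcards in the base search. One common workaround is the `searchmatch()` eval function, which does apply search-style matching, for example:

```
index=*app_pcf cf_app_name="mddr-batch-integration-flow" "Work Flow Passed | for endpoint XYZ" OR "STATUS - FAILED"
| eval Status=case(searchmatch("Work Flow Passed"), "SUCCESS", searchmatch("STATUS - FAILED"), "FAIL")
| table _time, Status
```

Also note that field names containing dots, such as msg.message, must be wrapped in single quotes ('msg.message') when referenced inside eval, otherwise eval reads the literal text rather than the field value.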
Hi @gcusello, @ITWhisperer - when I add the regular expression indicated by @ITWhisperer to the search, the fragment still appears in the result, and I need it removed.
Hi Giuseppe, thanks for the response. The issue is for both internal and external indexes: the event count and current size are not showing any value. You mentioned the default search path; could you please shed some light on that? Maybe I can explore that option.
Hi @jose_sepulveda, OK - please check my solution, or the one from @ITWhisperer, which is similar. Ciao. Giuseppe
Hi @gcusello, I have a service developed in Java that is dockerized. A shared Tomcat image is used, and it is adding these fragments to the service's output logs, which are the ones I really need to view in Splunk. Because of this, the output is no longer valid JSON, the visualization is presented as a string, and I need to resolve that situation.
Hi @adrifesa95, how did you install the app? If you followed the instructions at https://docs.splunk.com/Documentation/SSE/3.8.0/User/Intro, open a case with Splunk Support, because this is a Splunk-supported app. Ciao. Giuseppe
Hi @Namo, what's the search you ran? Did you insert the name of the index in your main search, or at least index=*? Maybe the index you're using isn't in the default search path, so you don't find anything. Ciao. Giuseppe
Hi @jose_sepulveda, first of all: do you want to remove a part of your logs before indexing, or at search time? If at index time, remember that this changes the format of your logs, so add-ons may no longer work correctly! Anyway, why do you want to remove a part of your logs? Your request doesn't seem to be about obfuscation. In any case, you can do this using the SEDCMD setting in props.conf with a substitution regex like the following:

SEDCMD = s/([^\}]+\})(.*)/$2/g

Ciao. Giuseppe
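As a quick sanity check outside Splunk (hedged: this uses Python's re engine as a stand-in for Splunk's sed-style substitution, and the sample line below is invented), the expression above drops everything up to and including the first closing brace - which also removes the leading "log: " text:

```python
import re

# Hypothetical sample line with the unwanted {dx.*} prefix.
line = 'log: {dx.trace_id=xxxxx, dx.trace_sampled=true}{"logtopic":"x","appname":"y"}'

# Same pattern as the SEDCMD: capture up to the first '}', keep only the rest.
cleaned = re.sub(r'([^}]+\})(.*)', r'\2', line)
print(cleaned)  # note: the 'log: ' prefix is stripped along with the braces
```

If "log: " should survive, the pattern would need to capture and keep that prefix explicitly.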
| rex mode=sed "s/log: {[^}]*}/log: {}/g"
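To see what this sed expression actually produces (a sketch, with Python's re standing in for Splunk's sed mode and invented sample values): it replaces the whole `log: {...}` prefix with a literal `log: {}`, so an empty pair of braces remains in every case:

```python
import re

# Invented sample lines covering the three shapes from the question.
samples = [
    'log: {dx.trace_id=xxxxx, dx.span_id=yyyyy}{"logtopic":"x"}',
    'log: {dx.trace_sampled=true}{"logtopic":"x"}',
    'log: {}{"logtopic":"x"}',
]

# Equivalent of: | rex mode=sed "s/log: {[^}]*}/log: {}/g"
cleaned = [re.sub(r'log: \{[^}]*\}', 'log: {}', s) for s in samples]
for c in cleaned:
    print(c)  # every variant normalizes to: log: {}{"logtopic":"x"}
```

If the leftover `{}` is unwanted (as in the follow-up reply), replacing with `log: ` instead of `log: {}` removes the braces entirely: `s/log: {[^}]*}/log: /g`.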
I have a scheduled search/alert. It validates that for every Splunk event of type A, there is a type B. If it doesn't see a corresponding B, it will alert. Occasionally I am getting false alerts because Splunk is not able to reach one or more indexers. I'll see the message "The following error(s) occurred while the search ran. Therefore, search results might be incomplete." along with additional details. That means the search doesn't get back all the events, which may include a type B event and cause a false alert to fire. Since Splunk knows it wasn't able to communicate with all the indexers, I'd like to abort the search. Is there anything sort of like the "addinfo" command where I can add information about whether getting all the data was successful, so that I can do a where clause on it and remove all my rows if there were errors? How can I prevent an alert from firing if I didn't get all the results back from the indexers?
I need to filter a part of a log using regex. I have the following log:

log: {dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

I need to remove this fragment from the output:

{dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}

so that the visible log is the following:

log: {"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

There are also outputs where the part I need to filter has fewer fields, or none at all, like these:

log: {dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

log: {}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

In these last two examples I still need to filter the following, respectively:

{dx.trace_sampled=true}
{}

so that the output is finally clean and leaves only what I need:

log: {"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

I hope you can help me, please.
You're right! My mistake - I didn't read the entire query. Thanks for pointing it out!
Beautiful. Thank you, this worked and now I understand how to pass the time in when it gets stripped out earlier.
I am new to Splunk and observing that the event count and current size show 0, even though we can search on the index and it has data. Any insights will be helpful.
That is because the timechart command requires the _time field, and you are removing it with the first stats command. Try this:

[My search here]
| stats earliest(eval(if(eventType="BEGIN",_time,""))) AS Begin_time latest(eval(if(eventType="END",_time,""))) AS End_time BY UUID processName
| eval ResponseTime=End_time-Begin_time
| eval _time = Begin_time
| timechart span=10m avg(ResponseTime) by processName
The date_wday field is being created with the eval command on the second line. I'll break it down for you:

| eval date_hour = strftime(_time, "%H")
| eval date_wday = strftime(_time, "%A")
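For reference, the strftime format specifiers used here behave like the standard C/Python ones - %H is the zero-padded 24-hour hour and %A is the full weekday name. A small sketch using Python's datetime with a made-up timestamp:

```python
from datetime import datetime

# Hypothetical sample timestamp: 2024-05-29 14:05:00
t = datetime(2024, 5, 29, 14, 5, 0)

print(t.strftime("%H"))  # what date_hour would hold, e.g. "14"
print(t.strftime("%A"))  # what date_wday would hold, e.g. "Wednesday"
```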
I don't know if my approach is the right way to go. As I learned, JOINs allow only 50,000 records to be joined, and I expect way more events to be joined to the filtered transactions.
Hi, I am looking to set up an alert which is supposed to run every weekday at 7:30 PM. The search window for the alert query should be from 7 PM the previous day to 7 PM the current day. How can I set up this alert? Thanks
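One way this could be expressed (a sketch, not verified against any particular environment): a cron schedule of `30 19 * * 1-5` fires at 7:30 PM Monday through Friday, and Splunk's relative time modifiers can pin the window to the 7 PM boundaries:

```
Cron expression for the alert schedule: 30 19 * * 1-5

Time range for the search:
earliest = -1d@d+19h    (snap to midnight yesterday, add 19h = 7:00 PM the previous day)
latest   = @d+19h       (snap to midnight today, add 19h = 7:00 PM the current day)
```

The snap-to syntax (`@d`) followed by an offset (`+19h`) is standard Splunk relative time notation; adjust for time zone behavior on your search head.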
Thank you everyone for taking the time to read this. I am new to Splunk and interested in learning more. I have a project at home that has to do with viewing authentication traffic on a given network.

The challenge I face: I need to view what authentication method is being used to access what resource on the network, for a given index and sourcetype. For example, Windows systems do not have an attribute solely representing whether the access to the node was SSO or MFA; all I get is an event ID 4624. Windows Event ID 4624, successful logon - Dummies guide, 3 minute read (manageengine.com)

My understanding is that I have to gather a few attributes and make an educated guess about what access method was used. I was hoping to find a one-liner lol that will show me what resource is using what authentication method. Any help would be appreciated, and virtual drinks are on me if we strike gold.
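As a rough starting point (hedged: the exact index, sourcetype, and field names depend on how your Windows add-on extracts the events, so treat all names below as assumptions), event 4624 typically carries a logon type and an authentication package (e.g. Kerberos, NTLM, Negotiate) that can at least narrow down the method per machine:

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4624
| stats count by ComputerName, Logon_Type, Authentication_Package
```

Combining Logon_Type (interactive, network, remote interactive, etc.) with Authentication_Package is what usually drives the "educated guess" described above; there is no single field that says SSO vs MFA.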
I asked in a previous thread for help getting response time based on the time differential between two events connected by a UUID (Solved: Re: Measuring time difference between 2 entries - Splunk Community), which is working perfectly. I turned that into an average response time grouped by a particular transaction type (processName), and that's working fine as well, but I would very much like to use this as a timechart, and I can't seem to get it working. From what I understand, the fact that I am using stats strips out the _time field which timechart uses, but I am not sure how to work around that. My query goes as follows:

[My search here]
| stats earliest(eval(if(eventType="BEGIN",_time,""))) AS Begin_time latest(eval(if(eventType="END",_time,""))) AS End_time BY UUID processName
| eval ResponseTime=End_time-Begin_time
| stats avg(ResponseTime) by processName

I've tried a number of things that didn't work, including changing the final stats to:

| timechart span=10m Avg(ResponseTime) by processName

While this did perform a search, it generated no result whatsoever. I won't bore everyone with my multiple failures. My query gives me basically:

ProcessName    Avg(Response_time)
Process1       0.5
Process2       0.6
Process3       0.7

My goal is to get this as a timechart visualization with a span of 10 minutes. Any suggestions? Thanks