Hi @parthiban, you have two solutions: define a throttle time, so that if the device hasn't come back online after the throttle period, you get a reminder that the device is still offline; or save the offline and online events in a summary index and use that to check the condition. The first is the easier solution, and it is also useful to be sure you don't lose track of the status. The second is just a little more complicated. Ciao. Giuseppe
Hi @gcusello, I don't think my point was clear. This pertains to heartbeat monitoring for a specific device. When the device goes offline, we cannot predict when it will come back online. In this case, how do we set the throttle time?
Yes, sorry for the typo. The 3rd log has a different requestId; I mistakenly pasted the same requestId.
Splunk is not a substitute for a spreadsheet application like Excel, where you merge cells for visual effect. It organizes data like a database. As @glc_slash_it and @bowesmana explained, you either split by col1 or by VM. You must ask yourself: do you want to achieve an Excel-like visual (split by col1), or do you want to maintain the data logic (split by VM)? If the Excel-like effect is more important than the data logic, the closest you can get to emulating a merged cell as in your illustration is to use the list function to retain the order of VM and col2, like

| stats list(*) as * by col1
| eval col2 = if(mvindex(col2, 0) == mvindex(col2, -1), mvindex(col2, 0), col2)
| table VM col*

Using the emulation given by @bowesmana, your mock data will give you

VM        col1   col2
vm4 vm5   bike   Fazer thunder
vm1 vm2   car    sedan
vm3       plane  Priv

This is the closest to your mock results. It then becomes your job to convince your users that there is an invisible split line between vm4 and vm5, vm1 and vm2, etc.
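For readers outside Splunk, here is a minimal Python sketch of the same "merged cell" emulation: group rows by col1, list the VMs, and collapse col2 to a single value only when the group agrees (mirroring the SPL comparison of mvindex(col2, 0) and mvindex(col2, -1)). The rows are the mock data from the thread; group ordering follows first appearance, not Splunk's sort.

```python
from collections import OrderedDict

# Mock data from the thread, not real Splunk output.
rows = [
    {"VM": "vm1", "col1": "car",   "col2": "sedan"},
    {"VM": "vm2", "col1": "car",   "col2": "sedan"},
    {"VM": "vm3", "col1": "plane", "col2": "Priv"},
    {"VM": "vm4", "col1": "bike",  "col2": "Fazer"},
    {"VM": "vm5", "col1": "bike",  "col2": "thunder"},
]

# Emulate `stats list(*) as * by col1`: group rows by col1, keeping order.
groups = OrderedDict()
for r in rows:
    groups.setdefault(r["col1"], []).append(r)

merged = []
for col1, grp in groups.items():
    vms = [r["VM"] for r in grp]
    col2_vals = [r["col2"] for r in grp]
    # Collapse col2 only when the first and last values match,
    # as the SPL eval does with mvindex.
    col2 = col2_vals[0] if col2_vals[0] == col2_vals[-1] else col2_vals
    merged.append({"VM": vms, "col1": col1, "col2": col2})

for row in merged:
    print(row)
```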
Hi @gcusello, as I mentioned, with a one-hour timeframe for example, we may not know exactly when the device will come online. Until the device comes online, there is no need to trigger multiple alerts for the offline condition. In this case, how will throttling work?
@Dharani - Do you want to see only the last event per RequestId? (i.e., is only the latest error per request the right info?)
Hi @parthiban, if you don't want a new alert triggered for an hour after a triggered alert, you have to enable "Throttle". Ciao. Giuseppe
Hi @gcusello, if we configure it like this and, for example, the device goes OFFLINE for the next hour, will we receive an alert every 5 minutes? If yes, that does not fulfill my requirement; I only want the notification to be sent once. The same applies to the ONLINE condition as well. If it is not possible in a single search, we can split it into two different searches: one for the OFFLINE condition alert and another for the ONLINE condition alert. Is this possible?
Hi @AL3Z, the run frequency depends on the maximum delay that is acceptable to you in discovering the triggered alert: one day, one hour, I don't know; it depends on your requirements. Ciao. Giuseppe
@gcusello, how can we complete this to mark a threshold? Alert across all my searches' alerts if the count is > 10 for the last 7 days; the counts read (189, 186, 167, 167, 89, 74, 60, 59, 56, 46, 35, 32, 28, 26, 20, 19, 17, 14, 11). How often do we need to run this in a day?
Hi @AL3Z, run this search and click on "Save as". Ciao. Giuseppe
Hi @parthiban, this is the procedure:
run this search in the search dashboard of the app where you want to store the alert, using the correct time period,
click on "Save as",
choose "Alert",
insert the required fields:
Title: <the name of your alert to display>
Description: not mandatory
Permissions: Shared in App
Alert Type: Scheduled, Run on Cron Schedule
Time range: you should have 5 minutes
Cron expression: */5 * * * *
Expires: 24 hours
Trigger alert when number of results is greater than 0
Trigger: Once
Throttle: not flagged
Trigger actions:
Add to triggered alerts
Send email: fill in all the fields.
Save.
Ciao. Giuseppe
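As an alternative to the UI steps above, the same alert can be defined directly in savedsearches.conf. This is only a hedged sketch: the stanza name, search string, and email recipient are placeholders, and the key names should be verified against the savedsearches.conf reference for your Splunk version.

```
[Device OFFLINE alert]
search = <your search here>
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
action.email = 1
action.email.to = you@example.com
```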
@gcusello, how do I configure this search as a scheduled alert? The threshold should be 2 seconds... Thanks
Sample logs:

1. IBroker call failed, sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 response={"ErrorCode":0,"UserMessage":null,"DeveloperMessage":null,"DocumentationUrl":null,"LogId":null,"ValidationErrors":"Invalid product ","Parameters":null}

2. sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable

3. sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable

The first 2 logs should be eliminated because they share the same requestId; the 3rd log should be shown.
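Given the correction elsewhere in the thread that the 3rd log actually carries a different requestId, the filtering requirement reduces to "keep only events whose requestId appears exactly once". A minimal Python sketch of that logic, assuming the 3rd requestId is a made-up placeholder (the original paste repeated the same ID by mistake):

```python
import re
from collections import Counter

# Condensed versions of the sample logs; the third requestId
# (9f1d0a7c-0001) is a hypothetical stand-in for the corrected value.
logs = [
    "IBroker call failed, sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422",
    "sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable",
    "sessionId=855762c0-9a6b, requestId=9f1d0a7c-0001, request=PUT responseStatus=422 ErrorMessage: unprocessable",
]

def request_id(line):
    """Extract the requestId field from a raw log line."""
    m = re.search(r"requestId=([\w-]+)", line)
    return m.group(1) if m else None

# Count occurrences of each requestId, then keep only the unique ones.
counts = Counter(request_id(line) for line in logs)
unique_logs = [line for line in logs if counts[request_id(line)] == 1]

print(unique_logs)
```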
Hi, in our environment we use Windows security logs for our security purposes. To reduce licensing costs, I'm considering switching the renderXml setting to false. I'm wondering if this is advisable, especially given our focus on security use cases. Could you highlight the major distinctions between using XML and non-XML formats for these logs? Thanks.
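For reference, the setting in question is renderXml on the Windows event log input in inputs.conf on the forwarder. A sketch of the relevant stanza, using the standard Security channel; verify the key names and the licensing impact of each rendering mode against the inputs.conf documentation for your version before changing it:

```
[WinEventLog://Security]
disabled = 0
renderXml = false
```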
@bowesmana, I like this logic, but it could be hectic to use in my current environment. Thanks. Regards,
Hi team, I have the following search, and I want to trigger an alert when the condition is 'OFFLINE'. Note that we receive logs every 2 minutes, and the alert should be triggered only once; subsequent alerts should be suppressed. Similarly, when the condition becomes 'ONLINE', I want to trigger an alert only once, with subsequent alerts being suppressed. I hope my requirement is clear.

index="XXXX" invoked_component="YYYYY" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Egypt"
| stats count(eval(onlineStatus="OFFLINE")) AS offline_count count(eval(onlineStatus="ONLINE")) AS online_count
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "ONLINE",
    offline_count>0 AND online_count=0, "OFFLINE",
    offline_count>0 AND online_count>0 AND online_count>offline_count, "OFFLINE",
    offline_count>0 AND online_count>0 AND offline_count>online_count, "OFFLINE",
    offline_count=0 AND online_count=0, "No data")
| search condition="OFFLINE" OR condition="ONLINE"
| table condition
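In Splunk this "alert once per status change" requirement is usually handled with per-result throttling on a field value or a lookup holding the last known state; the underlying logic is just a state-transition check. A minimal Python sketch of that idea, using made-up poll results rather than Splunk output:

```python
def alerts_on_transition(statuses):
    """Return the statuses that should trigger an alert: one per change.

    Repeated identical statuses are suppressed, so a device that stays
    OFFLINE for an hour fires only the first OFFLINE alert, and the next
    alert fires only when the status flips back to ONLINE.
    """
    fired = []
    last = None
    for status in statuses:
        if status != last:
            fired.append(status)  # fire once; repeats are suppressed
        last = status
    return fired

# Hypothetical sequence of 2-minute heartbeat polls.
polls = ["ONLINE", "ONLINE", "OFFLINE", "OFFLINE", "OFFLINE", "ONLINE"]
print(alerts_on_transition(polls))
```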
You can't merge cells inside a Splunk table, but you can blank out the second and subsequent duplicates, as in this example:

| makeresults format=csv data="VM,col1,col2
vm1,car,sedan
vm2,car,sedan
vm3,plane,Priv
vm4,bike,Fazer
vm5,bike,thunder"
| stats values(col*) as col* by VM
| streamstats count as c1 by col1
| streamstats count as c2 by col2
| eval col1=if(c1>1, null(), col1)
| eval col2=if(c2>1, null(), col2)
| fields - c1 c2
@jianzgao - If you are just starting work on a new solution, I wouldn't recommend using C#, as there have been no changes to that SDK for a long time. You would end up maintaining the library yourself, fixing all the issues similar to this one. I have personally used the Python SDK, and it's the most widely used one, if you are comfortable with Python. I hope this helps! Kindly upvote if this helps you!
@Muthu_Vinith - If the answer helps you, kindly upvote, and if it resolves your question, accept it by clicking on "Accept as Solution".