All Posts


If enabled, acknowledgements are returned within the connection established from the forwarder downstream (to an intermediate forwarder or directly to an indexer). There is no need for another connection.  
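For context, indexer acknowledgement is enabled per target group in the forwarder's outputs.conf. A minimal sketch (the stanza name and server addresses are placeholders, not from the thread):

```
# outputs.conf on the forwarder (hypothetical target group)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```

The acknowledgement travels back over the same outbound TCP connection the forwarder opened, which is why no extra inbound connection is needed.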
Based on your latest update, the problem should be restated as: remove events whose requestId has a corresponding ValidationErrors value of "Invalid product". (I assume that the trailing space in the sample data is a typo.) Is this correct? In the format illustrated in the sample data, Splunk should have given you compliant JSON in ValidationErrors. Process this first, then literally implement the restated objective.

| spath input=response
| stats values(*) as * by sessionId request requestId responseStatus
| where NOT ValidationErrors == "Invalid product"

Your sample data will leave you with:

sessionId      request  requestId      responseStatus  DeveloperMessage  DocumentationUrl  ErrorCode  LogId  Parameters  UserMessage  ValidationErrors
855762c0-9a6b  PUT      bc819b42-6655  422

This is the emulation used to test the method:

| makeresults
| fields - _time
| eval data = mvappend("IBroker call failed, sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 response={\"ErrorCode\":0,\"UserMessage\":null,\"DeveloperMessage\":null,\"DocumentationUrl\":null,\"LogId\":null,\"ValidationErrors\":\"Invalid product\",\"Parameters\":null}", "sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable", "sessionId=855762c0-9a6b, requestId=bc819b42-6655, request=PUT responseStatus=422 ErrorMessage: unprocessable")
| mvexpand data
| rename data AS _raw
| extract
``` data emulation above ```
First, it seems to me that (master!="yoda" AND master!="mace" AND master="Jinn") and master="Jinn" are semantically identical. Is this correct? (I'm unfamiliar with the Jedi lore.) I'll assume it to be true in the following. Second, what is preventing you from doing, for example,

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" master="Jinn"
| eval name=coalesce(Jname, Sname)
| stats values(name) as names by saber_color strengths
| where mvcount(names)=1

or even

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" master="Jinn"
| eval name=coalesce(Jname, Sname)
| stats values(*) as * by saber_color strengths
| where mvcount(names)=1

This way, you will have all columns preserved. Third, could you explain "unable to utilize the index drill down for each in the search otherwise the query is 75% white noise"? Are you trying to use "Automatic" as the drilldown action? Anything "automatic" is really Splunk's guess. If you have something specific in mind, you will want to write a custom drilldown instead.
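For readers who want to sanity-check the grouping logic outside Splunk, here is a rough Python emulation of the coalesce/stats/mvcount pipeline above. The field names follow the question; the sample rows are made up for illustration:

```python
from collections import defaultdict

# Mock events from both indexes: Jname comes from the jedi index,
# Sname from the sith index (made-up sample rows).
events = [
    {"Jname": "Obi-Wan", "Sname": None, "saber_color": "blue", "strengths": "defense"},
    {"Jname": None, "Sname": "Dooku", "saber_color": "red", "strengths": "dueling"},
    {"Jname": "Anakin", "Sname": None, "saber_color": "blue", "strengths": "defense"},
]

# eval name=coalesce(Jname, Sname)
for e in events:
    e["name"] = e["Jname"] if e["Jname"] is not None else e["Sname"]

# stats values(name) as names by saber_color strengths
groups = defaultdict(set)
for e in events:
    groups[(e["saber_color"], e["strengths"])].add(e["name"])

# where mvcount(names)=1 -- keep only groups with a single distinct name
singles = {k: v for k, v in groups.items() if len(v) == 1}
print(singles)  # {('red', 'dueling'): {'Dooku'}}
```

The blue/defense group has two distinct names and is dropped, which mirrors what `where mvcount(names)=1` does in SPL.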
Hi, we also experience this issue: we observed that the initial mail received by a user is only displayed 24 hours after the user received it. We need help with a mitigation; something close to real time would also be good.
Might be that there is indeed another issue. Keep us posted if there is something going on that is potentially hitting other users as well.
Hi @parthiban, you have two solutions:
1. define a throttle time, so if the device hasn't come back online after the throttle period, you have a reminder that the device is offline;
2. save the offline and online events in a summary index and use it to check the condition.
The first is the easier solution, and it is also a good way to be sure you don't forget the status. The second is just a little more complicated. Ciao. Giuseppe
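The throttle approach maps onto the alert.suppress settings in savedsearches.conf. A minimal sketch, assuming a one-hour quiet period and per-host suppression (the stanza name and period are placeholders, adjust to your alert):

```
# savedsearches.conf (hypothetical alert definition)
[Device offline alert]
alert.suppress = 1
alert.suppress.period = 60m
# Suppress per device, so a different device going offline can still alert
alert.suppress.fields = host
```

With `alert.suppress.fields`, throttling is keyed per device rather than globally, which matters when several devices are monitored by the same alert.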
Hi @gcusello  I don't think my point was clear. This pertains to heartbeat monitoring for a specific device. When the device goes offline, we cannot predict when it will come online. In this case, how do we set the throttle time?
Yes, sorry for the typo. The 3rd log has a different requestId; I mistakenly pasted the same requestId.
Splunk is not a substitute for a spreadsheet application like Excel, where you merge cells for visual effect. It organizes data like a database. As @glc_slash_it and @bowesmana explained, you either split by col1 or by VM. You must ask yourself: do you want to achieve an Excel-like visual (split by col1), or do you want to maintain data logic (split by VM)? If the Excel-like effect is more important than data logic, the closest you can come to emulating a cell merge as in your illustration is to use the list function to retain the order of VM and col2, like

| stats list(*) as * by col1
| eval col2 = if(mvindex(col2, 0) == mvindex(col2, -1), mvindex(col2, 0), col2)
| table VM col*

Using the emulation given by @bowesmana, your mock data will give you

VM        col1   col2
vm4 vm5   bike   Fazer thunder
vm1 vm2   car    sedan
vm3       plane  Priv

This is the closest to your mock results. It then becomes your job to convince your users that there is an invisible split line between vm4 and vm5, vm1 and vm2, etc.
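For anyone who wants to see the list/mvindex trick outside Splunk, here is a rough Python emulation of the pipeline above, using mock rows shaped like the question's data (the rows themselves are made up):

```python
from collections import defaultdict

# Mock rows mirroring the question: (VM, col1, col2).
rows = [
    ("vm4", "bike", "Fazer"),
    ("vm5", "bike", "thunder"),
    ("vm1", "car", "sedan"),
    ("vm2", "car", "sedan"),
    ("vm3", "plane", "Priv"),
]

# stats list(*) as * by col1 -- keep VM and col2 in arrival order
grouped = defaultdict(lambda: {"VM": [], "col2": []})
for vm, col1, col2 in rows:
    grouped[col1]["VM"].append(vm)
    grouped[col1]["col2"].append(col2)

# eval col2 = if(mvindex(col2, 0) == mvindex(col2, -1), mvindex(col2, 0), col2)
# i.e. collapse col2 to a single value when first and last entries match
for g in grouped.values():
    if g["col2"][0] == g["col2"][-1]:
        g["col2"] = g["col2"][0]

print(grouped["car"])   # {'VM': ['vm1', 'vm2'], 'col2': 'sedan'}
print(grouped["bike"])  # {'VM': ['vm4', 'vm5'], 'col2': ['Fazer', 'thunder']}
```

This shows the "merged cell" effect: groups whose col2 values all agree collapse to one value, while mixed groups keep the full multivalue list.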
Hi @gcusello  As I mentioned, with, for example, a one-hour timeframe, we may not know exactly when the device will come online. Until the device comes online, there is no need to trigger multiple alerts for the offline condition. How will throttling work for this case?
@Dharani - Do you want to see only the last event per requestId? (That is, is only the latest error per request the right information?)
Hi @parthiban, if you don't want a new alert triggered for an hour after a triggered alert, you have to enable "Throttle". Ciao. Giuseppe
Hi @gcusello  If we configure it like this and, for example, the device goes OFFLINE for the next hour, will we receive an alert every 5 minutes? If yes, that does not fulfill my requirement; I only want the notification to be sent once. The same applies to the ONLINE condition as well. If it is not possible in a single search, we can split it into two different searches: one for the OFFLINE condition alert and another for the ONLINE condition alert. Is this possible?
Hi @AL3Z, the run frequency depends on the maximum delay that is acceptable for you in discovering the triggered alert: one day, one hour, I don't know; it depends on your requirements. Ciao. Giuseppe
@gcusello , how can we set a threshold to alert if the count > 10? The alert counts for all my search alerts over the last 7 days read (189,186,167,167,89,74,60,59,56,46,35,32,28,26,20,19,17,14,11). How often do we need to run this in a day?
Hi @AL3Z, run this search and click on "Save as". Ciao. Giuseppe
Hi @parthiban , this is the procedure: run this search in the search dashboard of the app where you want to store the alert, using the correct time period, click on "Save as", choose "Alert", and insert the required fields:
Title: <the name of your alert to display>
Description: not mandatory
Permissions: Shared in App
Alert Type: Scheduled, Run on cron schedule
Time range: you should have 5 minutes
Cron expression: */5 * * * *
Expires: 24 hours
Trigger alert when: number of results is greater than 0
Trigger: Once
Throttle: not flagged
Trigger actions: Add to triggered alerts; Send email (fill in all the fields).
Save.
Ciao. Giuseppe
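For reference, the same alert saved through the UI ends up as a stanza in savedsearches.conf; a rough sketch of the equivalent settings (the stanza name and email address are placeholders):

```
# savedsearches.conf (hypothetical equivalent of the UI steps above)
[My device alert]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
alert.expires = 24h
action.email = 1
action.email.to = you@example.com
```

Editing the .conf directly is optional; the UI procedure above produces the same result.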
@gcusello , how do I configure this search as a scheduled alert? The threshold should be 2 seconds.... Thanks
sample logs:
1. IBroker call failed, sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 response={"ErrorCode":0,"UserMessage":null,"DeveloperMessage":null,"DocumentationUrl":null,"LogId":null,"ValidationErrors":"Invalid product ","Parameters":null}
2. sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable
3. sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable
The first 2 logs should be eliminated because they share the same requestId; the 3rd log should be shown.
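Outside Splunk, the intended filter (drop every event whose requestId also appears on an "Invalid product" response, per the restated objective in this thread) can be sketched in Python. The third log uses requestId bc819b42-6655, per the follow-up correction in the thread; the JSON payload is abbreviated for readability:

```python
import re

logs = [
    'IBroker call failed, sessionId=855762c0-9a6b, requestId=bc819b42-6646, '
    'request=PUT responseStatus=422 response={"ValidationErrors":"Invalid product"}',
    'sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT '
    'responseStatus=422 ErrorMessage: unprocessable',
    # 3rd log: requestId differs, per the follow-up correction in the thread
    'sessionId=855762c0-9a6b, requestId=bc819b42-6655, request=PUT '
    'responseStatus=422 ErrorMessage: unprocessable',
]

def request_id(line):
    """Pull the requestId field out of a raw log line."""
    m = re.search(r"requestId=([\w-]+)", line)
    return m.group(1) if m else None

# Collect requestIds that carry an "Invalid product" validation error...
bad_ids = {request_id(l) for l in logs if "Invalid product" in l}

# ...then drop every event that shares one of those requestIds.
kept = [l for l in logs if request_id(l) not in bad_ids]
print(len(kept))  # 1 -- only the bc819b42-6655 event survives
```

This mirrors the two-pass logic of the SPL answer: first identify the offending requestIds, then filter all events against that set.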
Hi, In our environment, we utilize Windows security logs for our security purposes. To reduce licensing costs, I'm considering switching the render XML setting to false. I'm wondering if this is advisable, especially given our focus on security use cases. Could you highlight the major distinctions between using XML and non-XML formats for these logs? Thanks.