All Posts


Thanks @richgalloway  and @gcusello 
Thank you all. So, we have the concept of regions, and our Splunk architecture revolves around it. Let's say the European one: it has all the Splunk data of Europe in the European indexer cluster. Because of that I asked the question of whether each region should have its own cluster master or whether they can share one. If they share, how can I figure out how many buckets the cluster handles, so that we won't reach the one-million-bucket mark?
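As a rough sketch of how one could count the buckets the cluster manager is tracking (run on the cluster manager itself; on newer Splunk versions the endpoint is cluster/manager/buckets rather than cluster/master/buckets):

| rest splunk_server=local /services/cluster/master/buckets count=0 ``` run this on the cluster manager; endpoint name varies by version ```
| stats count AS total_buckets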
Hi @gcusello  No, I don't want a continuous alert for offline... I want to trigger a message on the first offline and the first online. Thanks for understanding.
Hi @parthiban , notifying when the status is offline isn't a problem, but after the first offline, do you want the alert to keep firing "offline", or do you want a message when it comes back online? If you want a message every time you have an offline and the following online, you could try something like this:

<your_search>
| stats count(eval(status="offline")) AS offline_count
        count(eval(status="online")) AS online_count
        earliest(eval(if(status="offline",_time,null()))) AS offline
        earliest(eval(if(status="online",_time,null()))) AS online
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online>offline, "Offline but newly online",
    offline_count>0 AND online_count>0 AND online<offline, "Offline",
    offline_count=0 AND online_count=0, "No data")
| table condition

In this way you can choose the conditions to trigger the alert. Ciao. Giuseppe
@richgalloway Below is the SPL used:

index="*****" host="sclp*" source="*****" "BOLT_ARIBA_ERROR_DETAILS:" "1-57d28402-9058-11ee-83b7-021a6f9d1f1c" "5bda7ec9"
| rex "(?ms)BOLT_ARIBA_ERROR_DETAILS: (?<details>\[.*\])"
| spath input=details output=ERROR_MESSAGE path={}.ERROR_MESSAGE
| spath input=details output=PO_NUMBER path={}.PO_NUMBER
| spath input=details output=MW_ERROR_CODE path={}.MW_ERROR_CODE
| spath input=details output=INVOICE_ID path={}.INVOICE_ID
| spath input=details output=MSG_GUID path={}.MSG_GUID
| spath input=details output=INVOICE_NUMBER path={}.INVOICE_NUMBER
| spath input=details output=UUID path={}.UUID
| spath input=details output=DB_TIMESTAMP path={}.DB_TIMESTAMP
| table ERROR_MESSAGE PO_NUMBER MW_ERROR_CODE INVOICE_ID MSG_GUID INVOICE_NUMBER UUID DB_TIMESTAMP
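If each spath output above is a parallel multivalue field (same length and order per event), one hedged way to get one row per array element is to expand on an index and pick the matching value from each field with foreach, roughly like this (appended between the last spath and the final table):

| eval n=mvrange(0, mvcount(ERROR_MESSAGE)) ``` assumes all extracted fields have the same number of values ```
| mvexpand n
| foreach ERROR_MESSAGE PO_NUMBER MW_ERROR_CODE INVOICE_ID MSG_GUID INVOICE_NUMBER UUID DB_TIMESTAMP
    [ eval <<FIELD>>=mvindex('<<FIELD>>', n) ]
| fields - n
| table ERROR_MESSAGE PO_NUMBER MW_ERROR_CODE INVOICE_ID MSG_GUID INVOICE_NUMBER UUID DB_TIMESTAMP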
It's probably better to split the data before the table is created.  Please share the current SPL.
Hi @gcusello  Let me clarify. We receive device status logs every 2 minutes from AWS Cloud. These logs indicate both online and offline statuses. If a device goes offline, we continuously receive offline logs until it comes back online, at which point we receive online logs for that specific device. My requirement is to trigger a critical alert for the end user when a particular device goes offline, and subsequently to notify the end user when the device comes back online. Based on this, I need to create the alert. Is this possible? Also, I have already shared example logs in this conversation. Moreover, this type of alert is already working in another observability application, and now we are migrating to Splunk. I hope this clarifies my requirement. Please let me know if anything else is required.
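For reference while the suggestions here are evaluated, a minimal sketch of a transition-only search (the device and status field names are assumptions) that returns only the events where a device's status changed compared with its previous event:

<your_search>
| sort 0 device _time ``` ascending time per device; "device" and "status" are assumed field names ```
| streamstats current=f last(status) AS prev_status by device
| where (status="offline" AND prev_status="online") OR (status="online" AND prev_status="offline")
| table _time device status prev_status

Alerting on the offline transition, and separately on the online transition, would then give one message per state change rather than a continuous stream.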
Hi, thanks for the reply. We can't make changes to the dropdown, as the dropdown fields are populated from an SPL query and the dropdown is used by another panel in the dashboard. I need assistance to change the panel's SPL query itself, so that it picks the query based on the dropdown field. Otherwise, could you help me change the JSON code to manage both SPL queries according to the dropdown menu? Thanks, Abhineet Kumar
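One pattern that keeps the dropdown and the other panels untouched is to branch inside the panel's own SPL on the dropdown token. This is only a sketch: $mode$, the two menu values, and the field names are placeholders for whatever the dashboard actually uses:

index=your_index ``` $mode$ is a hypothetical dropdown token; replace values and fields with your own ```
| eval split_field=case("$mode$"=="By Host", host,
                        "$mode$"=="By Source", source)
| stats count by split_field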
Many thanks for your time and insights @ITWhisperer  it works as expected.
We got the output in a table, but all the values are in one column for each field of the output table. We want to split the values into rows. Below is the output table for reference. Please help us split it.
base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day
| eval Percentage_Difference = ((Yesterday - Today) / Yesterday) * 100
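A small hedged variant, in case a User_Id only has events on one of the two days (so the Yesterday or Today column would be null):

base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day
| fillnull value=0 Yesterday Today ``` zero-fill so the subtraction always has both columns ```
| eval Percentage_Difference = if(Yesterday=0, null(), ((Yesterday - Today) / Yesterday) * 100)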
Sorry, my bad, it should be 50% variance. Today = 2, yesterday = 4: ((Yesterday count - Today count) / Yesterday count) * 100 = ((4 - 2) / 4) * 100 = (2 / 4) * 100 = 50%.
Hi @parthiban, probably there's a misunderstanding on the condition to check: I understood that you want to check whether status="recovery" or status="down", and I checked for those statuses, but what is your requirement? With your search you check status="down" and status="online"; is that the requirement? Ciao. Giuseppe
The command is addinfo, not add_info. The problem with using "$field1.earliest$" and "$field1.latest$" is that they can contain strings rather than epoch times, whereas addinfo provides the epoch times derived from the timepicker.
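For reference, a minimal sketch of what addinfo exposes (info_min_time and info_max_time are the epoch boundaries of the search time range):

<your_search>
| addinfo ``` adds info_min_time, info_max_time, info_search_time to each result ```
| eval range_seconds = info_max_time - info_min_time
| table _time info_min_time info_max_time range_seconds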
Universal forwarders (UF) usually do not need to listen on any port since they typically read local files.  They can opt to read TCP data on any port or Splunk protocol on port 9997. A UF must be able to connect to indexers on port 9997.  If you have several UFs, it's a good idea to use a Deployment Server (DS) to manage them.  UFs talk to the DS on port 8089.
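As a rough sketch of where those ports show up on the UF side (hostnames and the output group name are placeholders):

# outputs.conf on the UF - send data to the indexers on 9997
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# deploymentclient.conf on the UF - talk to the DS on 8089
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089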
How is 2 missing 100%? 100% of what?
I tried a different server in the same IP range to test, and the method you mentioned worked just fine. So it was the port being blocked. Thanks for the help.
Make sure the LM is running.  Confirm 8089 is the LM's management port (it's the default).  Verify your firewalls allow connections to that port.
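If it is useful, the peer-side setting that points at the LM usually looks roughly like this in server.conf (the hostname is a placeholder; on Splunk 8.1 and later the setting is named manager_uri rather than master_uri):

# server.conf on the instance that should pull its license from the LM
[license]
master_uri = https://lm.example.com:8089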
On which instance did you install those settings?  They should be on the indexers (and heavy forwarders, if you have them).  Did you restart the instances after modifying the file?  Are you looking at new data?  The changes will not affect indexed data.  Do you have the correct host name in the stanza?  Have you tried using the sourcetype name instead of host?
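For illustration only, the two stanza forms in props.conf look roughly like this (the host pattern, the sourcetype name, and the SEDCMD are placeholders; substitute whatever setting you actually deployed):

# props.conf on the indexers / heavy forwarders - keyed on host
[host::your-app-host*]
SEDCMD-example = s/password=\S+/password=####/g

# ... or keyed on the sourcetype instead of the host
[your_sourcetype]
SEDCMD-example = s/password=\S+/password=####/g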
I added the command | addinfo and I think it works. I will do validations, but thanks a lot!