All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, HTTP 503 Service Unavailable -- {"messages":[{"type":"ERROR","text":"This node is not the captain of the search head cluster, and we could not determine the current captain. The cluster is either in the process of electing a new captain, or this member hasn't joined the pool"}]} We received this error on one of the search head cluster members. Is there any way to troubleshoot this? Please assist. Thank you.
While integrating the Speakatoo API into my project, I'm encountering a "cookies error." I'm seeking assistance and guidance on how to resolve this issue.
@Michael.Lee : If the issue is not already resolved, it's best to create a Support ticket for an end-to-end review of the agent setup. You may also refer to https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-debug-common-Linux-Private-Synthetic-Agent-issues/ta-p/51547 and How do I submit a Support ticket? An FAQ. Regards, Noopur
@David.Machacek : If the issue is not already resolved, it's best to create a Support ticket for an end-to-end review of the agent setup. See How do I submit a Support ticket? An FAQ. Regards, Noopur
@Gopinathan.Vasudevan : If the issue is not already resolved, it's best to create a Support ticket for an end-to-end review of the agent setup. You may also refer to https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-debug-common-Linux-Private-Synthetic-Agent-issues/ta-p/51547 and How do I submit a Support ticket? An FAQ. Regards, Noopur
@Ramesh.Jakka : There is no ideal sequence. App Agents connect to the Analytics Agent to publish data. There could be multiple reasons why agents stop reporting data. It's best to create a Support ticket for an end-to-end review of the agents. See How do I submit a Support ticket? An FAQ. Regards, Noopur
@Abhiram.Sahoo : Test your Events Service endpoint connectivity from the SAP Agent machine: curl http(s)://<ES URL>:<Port Number>/_ping -- expected response: pong
Hi @Dustem, let me check my understanding: 4768 or 4770 should occur before the 4769, and you want an alert if they are missing or don't occur first. Is that correct? Ciao. Giuseppe
I have 2 questions here (I am using Splunk Cloud): 1. Is there a way I can import a CSV file into a Splunk dashboard and display the view? For example, we are trying to show order data as a dashboard in Splunk. 2. I am looking to import logs into Splunk using REST API calls; how can I do that? I haven't used this before. For example, if that can be done, we can leverage the OMS APIs, or extract the OMS DB data through TOSCA and load the summary information into Splunk.
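For question 2, one common approach is Splunk's HTTP Event Collector (HEC), which accepts events over REST. Below is a minimal sketch using only the Python standard library; the host, port, token, and index are placeholder assumptions you would replace with your own HEC configuration.

```python
import json
import urllib.request

# Placeholder values -- substitute your own Splunk HEC endpoint and token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_payload(event, index="main", sourcetype="_json"):
    """Build the JSON body the HTTP Event Collector expects."""
    return json.dumps({"event": event, "index": index, "sourcetype": sourcetype})

def send_event(event):
    """POST one event to HEC with the Splunk token auth header."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(event).encode("utf-8"),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

A call like send_event({"order_id": 42, "status": "shipped"}) would then make the event searchable in the target index.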
Thank you for the help. I was able to extract the fields now. When I run query 1, I found that event_name "pending-transfer" with task_id 3 has event_id "1274856" repeated three times in a row, which means there is no increment in the event_id. However, when I run query 2 for the same event_name "pending-transfer", it gives no output. Technically, query 2 should send an alert (I created the alert to run every minute, but still no alert was triggered), because there is no change in the event_id between the events at 9/4/22 10:02:39 PM and 9/4/22 09:57:39 PM. Not sure if I am missing something.

Query 1: Alert if there is an increment

| stats list(_time) as _time list(event_id) as event_id by event_name task_id
| where mvindex(_time, 0) > mvindex(_time, -1) AND mvindex(event_id, 0) > mvindex(event_id, -1) OR mvindex(_time, 0) < mvindex(_time, -1) AND mvindex(event_id, 0) < mvindex(event_id, -1)
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Below is the output I get when I run query 1:

Time                  event_name        task_id  event_id
9/4/22 10:02:39 PM    pending-transfer  3        1274856
9/4/22 09:57:39 PM    pending-transfer  3        1274856
9/4/22 09:52:39 PM    pending-transfer  3        1274856
9/4/22 09:47:39 PM    pending-transfer  3        1274851
9/4/22 09:37:39 PM    pending-transfer  3        1274849

Query 2: Alert if there is NO increment

| stats list(_time) as _time list(event_id) as event_id by event_name task_id
| where mvindex(event_id, 0) = mvindex(event_id, -1)
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Thank you
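Note that query 2 compares only the newest and oldest event_id in the whole search window (mvindex 0 vs -1); in the output above the oldest is 1274851/1274849, not 1274856, so the condition is false even though the latest three values are identical. A Python sketch of exactly that first-vs-last comparison, for illustration (field and function names here are assumptions, not Splunk APIs):

```python
from collections import defaultdict

def find_no_increment(rows):
    """rows: (time, event_name, task_id, event_id) tuples, newest first.
    Mirror query 2's condition: flag each (event_name, task_id) group
    whose newest and oldest event_id are identical, i.e. no increment."""
    groups = defaultdict(list)
    for _time, name, task, eid in rows:
        groups[(name, task)].append(eid)
    return [key for key, ids in groups.items() if ids[0] == ids[-1]]
```

Running it against the sample above (where the oldest event_id differs) returns nothing for pending-transfer, matching query 2's empty output; to alert on "no change in the last N events" you would need to restrict the window (or the list) to those N events first.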
I need the following case to be searched (past-3-days count = 0 and today count > 0):

          past 3 days   today
field1    0             4
field2    0             1
...

Then show the table: _time    field    _raw
You'll need to be a bit more specific about what you mean by the count for each field, but you could do something like this:

index=... earliest=-3d@d latest=now
| bin _time span=1d
``` Calculate the count for a field by day ```
| stats count by _time field
``` Now calculate today's value and the total ```
| stats sum(eval(if(_time=relative_time(now(), "@d"), count, 0))) as today sum(count) as total
``` And set a field to TRUE or FALSE to alert ```
| eval alert=if(today>0 AND total-today=0, "TRUE", "FALSE")

Does this fit what you're trying to do?
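The alerting condition in that last eval (today's count is positive while the previous days' total is zero) can be sketched in Python for clarity; the function name and input shape are illustrative assumptions, not part of Splunk:

```python
from datetime import date, timedelta

def should_alert(daily_counts, today):
    """daily_counts maps date -> event count over the search window.
    Alert when today's count is > 0 and every earlier day's count is 0,
    mirroring: alert = today > 0 AND total - today = 0."""
    today_count = daily_counts.get(today, 0)
    past_total = sum(c for d, c in daily_counts.items() if d < today)
    return today_count > 0 and past_total == 0
```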
Not during this period; the user did not have 4768 or 4770 events prior to this period either.
I want to search for a user who has 4769 events over a continuous period but no 4768 or 4770 events during that same time, rather than simply users with 4769 events and no 4768 or 4770 events at all.
How do I calculate the count for each field over the past 3 days? If the count for all 3 days is 0 and the count for today is greater than 0, then the search should trigger an alert that shows the log.
Yes, correct. And I can see the sendemail logs for other alerts in the internal logs, which look good, but I don't see any sendemail logs for this alert in the internal logs.
I apologize for giving wrong information. IPv6 is 128-bit, not 64-bit. Given this lookup table and the advanced option match_type CDIR is set as CIDR(ip):

expected             ip
true                 2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00/128
test mask            2001:db8:3333:4444:5555:6666::2101/128
test without mask    2001:db8:3333:4444:5555:6666::2101

This search now gives the correct output:

| makeresults
| fields - _time
| eval ip=mvappend("2001:db8:3333:4444:5555:6666:0:2101", "2001:db8:3333:4444:5555:6666::2101", "2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00")
| mvexpand ip
| lookup ipv6test ip

expected     ip
test mask    2001:db8:3333:4444:5555:6666:0:2101
test mask    2001:db8:3333:4444:5555:6666::2101
true         2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00

Hope this helps.
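The same CIDR matching (including treating the compressed spelling 2001:db8:...::2101 and the expanded 2001:db8:...:0:2101 as the same address) can be checked with Python's standard ipaddress module, as a quick way to sanity-check lookup entries outside Splunk:

```python
import ipaddress

def cidr_match(ip_str, cidr_str):
    """True if ip_str falls inside cidr_str. ipaddress normalizes
    compressed and expanded IPv6 spellings before comparing."""
    return ipaddress.ip_address(ip_str) in ipaddress.ip_network(cidr_str, strict=False)
```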
I'm assuming that the source and destination are in the same event. geostats will not expand multivalue fields, so you will first have to duplicate the events and then run geostats, like this:

index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" (access_device.ip!="NULL" OR auth_device.ip!="NULL")
| eval ip=mvappend('access_device.ip', 'auth_device.ip')
| fields ip
| mvexpand ip
| iplocation ip
| geostats count by City
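The mvappend + mvexpand step just turns one event with two IP fields into one row per IP. A Python sketch of that expansion (the event dict shape is an assumption for illustration; this version also drops the "NULL" placeholders):

```python
def expand_ips(events):
    """Emit one IP per row, one row per non-NULL device IP in each event,
    mirroring eval ip=mvappend(...) followed by mvexpand ip."""
    rows = []
    for event in events:
        for field in ("access_device.ip", "auth_device.ip"):
            ip = event.get(field)
            if ip and ip != "NULL":
                rows.append(ip)
    return rows
```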
That's because at index time (when Splunk ingests data), fields like UserKey_ABC.job1 don't exist. They are extracted at search time by some mechanism, and do not exist on the indexer.
You did not show the top-level nodes. (And it's always a bad idea to use screenshots to show data; use raw text.) If your upper array node is indeed called tokenData, Splunk should have extracted something like tokenData{}.tokenData, tokenData{}.tokenId, etc. To spread them out, first reach that array with spath. That converts the JSON array to an ordinary multivalue field tokenData{}, so you can use mvexpand. Lastly, use spath again on each element to extract single-value tokenData and tokenId.

| spath path=tokenData{}
| mvexpand tokenData{}
| spath input=tokenData{}

Hope this helps.
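The spath + mvexpand sequence above amounts to flattening a JSON array into one record per element. The equivalent in plain Python, using a hypothetical sample event for illustration:

```python
import json

def expand_token_data(raw_event):
    """Return one flat record per element of the top-level tokenData array,
    mirroring spath path=tokenData{} followed by mvexpand."""
    return list(json.loads(raw_event)["tokenData"])

# Hypothetical event shaped like the poster's data.
sample = '{"tokenData": [{"tokenData": "abc", "tokenId": 1}, {"tokenData": "def", "tokenId": 2}]}'
```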