All Posts

Hi @pck1983, the timestamp format is defined for each sourcetype in props.conf (for more info see https://docs.splunk.com/Documentation/ITSI/4.17.0/Configure/props.conf), which you deploy to the Forwarders that ingest that log and to the Search Head. The timestamp format variables are described at https://docs.splunk.com/Documentation/SCS/current/Search/Timevariables In your case, you have to set:

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S

Ciao. Giuseppe
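If you want to sanity-check a TIME_FORMAT string before deploying it, you can test it with strptime in a throwaway search. A sketch - the sample month is written in English here because %b matches locale month abbreviations and "Mai" in the original log is German, and note the format carries no year, so Splunk has to infer one, which is one reason old logs can end up with a surprising _time:

| makeresults
``` parse the sample timestamp text with the proposed TIME_FORMAT, then render the result back as text ```
| eval sample="May 08 13:32:16"
| eval parsed=strptime(sample, "%b %d %H:%M:%S")
| eval readable=strftime(parsed, "%F %T")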
Hello, I have a few questions about time in Splunk. Here is an entry from an older logfile, where the _time field and the timestamp in the log do not match!

4/30/23 1:32:16.000 PM   Mai 08 13:32:16 xxxxxx sshd[3312558]: Failed password for yyyyyyyy from 192.168.1.141 port 58744 ssh2

How could that happen? How does Splunk come up with the time fields? And how does it handle files which contain no timestamps? Is the index time used then? There are a few things which I do not fully understand - maybe there is some article in the documentation which explains that in detail, but I have not found it with a quick search. Could someone please clarify how Splunk handles that, or link to an article? Thanks!
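On the "files with no timestamps" part: Splunk tries several fallbacks (a timestamp in the source name, the previous event's time, the file modification time, the current time), and you can also force the behavior explicitly. A minimal props.conf sketch, assuming a hypothetical sourcetype name:

[logs_without_timestamps]
# CURRENT = don't look for a timestamp in the event; stamp each event with the indexer's clock at index time
DATETIME_CONFIG = CURRENT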
@ITWhisperer Thanks for the reply. Given I use $product_brand$ in the conditional panel now, I still need to set the condition for displaying the panel. At the <condition> tag, how can I set it to accept multiple values? The above method only accepts a single value at one time; I want it to be: if $product_brand$ is in any of the product brands ["A", "B", "C"], display the panel, and if it is not one of those 3, don't display it. Any nudge in the right direction? Many thanks.
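In Simple XML this is usually done with the match attribute of <condition> plus a depends attribute on the panel. A sketch, assuming $product_brand$ comes from a dropdown input and using a hypothetical show_panel token ($value$ refers to the input's newly selected value):

<input type="dropdown" token="product_brand" searchWhenChanged="true">
  <change>
    <!-- regex test: true when the selected value is exactly A, B, or C -->
    <condition match="match($value$, &quot;^(A|B|C)$&quot;)">
      <set token="show_panel">true</set>
    </condition>
    <!-- fallback: any other value hides the panel -->
    <condition>
      <unset token="show_panel"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_panel$">
  <!-- panel content here; it renders only while show_panel is set -->
</panel>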
Ah, the original design did not consider the possibility of mixed increment and no-increment.  Now, to deal with this, you will need to tell us whether you want to catch any duplicate regardless of interleave, or whether you want to catch only "consecutive" events that duplicate event_id, because the two use cases are very different. If only consecutive duplicate event_id should trigger the alert, you can do

| delta event_id as delta
| stats list(_time) as _time values(delta) as delta by event_id event_name task_id
| where delta == "0"
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

To test this use case, I constructed the following extended test dataset based on your illustration.

Time                 _time                event_id  event_name        task_id
9/4/22 10:03:39 PM   2022-09-04 22:03:39  1274851   pending-transfer  3
9/4/22 10:02:39 PM   2022-09-04 22:02:39  1274856   pending-transfer  3
9/4/22 09:57:39 PM   2022-09-04 21:57:39  1274856   pending-transfer  3
9/4/22 09:52:39 PM   2022-09-04 21:52:39  1274856   pending-transfer  3
9/4/22 09:47:39 PM   2022-09-04 21:47:39  1274851   pending-transfer  3
9/4/22 09:37:39 PM   2022-09-04 21:37:39  1274849   pending-transfer  3

And the result is a single row:

event_id  event_name        task_id  _time                                                                     delta
1274856   pending-transfer  3        2022-09-04 22:02:39.000,2022-09-04 21:57:39.000,2022-09-04 21:52:39.000   0 5

If, on the other hand, the alert should be triggered no matter which other event_id's are in between, you should do

| stats list(_time) as _time by event_id event_name task_id
| where mvcount(_time) > 1
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Using the same test dataset as illustrated above, you should see two outputs:

event_id  event_name        task_id  _time
1274851   pending-transfer  3        2022-09-04 22:03:39.000,2022-09-04 21:47:39.000
1274856   pending-transfer  3        2022-09-04 22:02:39.000,2022-09-04 21:57:39.000,2022-09-04 21:52:39.000

Here is data emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "Time event_name task_id event_id
9/4/22 10:03:39 PM pending-transfer 3 1274851
9/4/22 10:02:39 PM pending-transfer 3 1274856
9/4/22 09:57:39 PM pending-transfer 3 1274856
9/4/22 09:52:39 PM pending-transfer 3 1274856
9/4/22 09:47:39 PM pending-transfer 3 1274851
9/4/22 09:37:39 PM pending-transfer 3 1274849"
| multikv
| eval _time = strptime(Time, "%m/%d/%y %I:%M:%S %p")
| fields - linecount _raw
``` data emulation above ```
Hi, we received this error on one of the search head cluster members:

HTTP 503 Service Unavailable -- {"messages":[{"type":"ERROR","text":"This node is not the captain of the search head cluster, and we could not determine the current captain. The cluster is either in the process of electing a new captain, or this member hasn't joined the pool"}]}

Is there any way to troubleshoot this? Please assist. Thank you.
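Two checks that are usually suggested first for this. From the CLI on any cluster member, to see the current captain, member states, and whether an election is in progress:

splunk show shcluster-status

And in the internal logs on the affected member (the keyword filter here is a starting point, not an exact component name - adjust to what your splunkd.log actually shows):

index=_internal sourcetype=splunkd SHCRaftConsensus OR captain
| sort - _time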
While integrating the Speakatoo API into my project, I'm encountering a "cookies error." I'm seeking assistance and guidance on how to resolve this issue.
@Michael.Lee : If the issue is not already resolved, it's best if you create a Support ticket for an end-to-end review of the agent setup. You may also refer to https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-debug-common-Linux-Private-Synthetic-Agent-issues/ta-p/51547
How do I submit a Support ticket? An FAQ
Regards, Noopur
@David.Machacek : If the issue is not already resolved, it's best if you create a Support ticket for an end-to-end review of the agent setup.
How do I submit a Support ticket? An FAQ
Regards, Noopur
@Gopinathan.Vasudevan : If the issue is not already resolved, it's best if you create a Support ticket for an end-to-end review of the agent setup. You may also refer to https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-debug-common-Linux-Private-Synthetic-Agent-issues/ta-p/51547
How do I submit a Support ticket? An FAQ
Regards, Noopur
@Ramesh.Jakka : There is no ideal sequence. App Agents connect to the Analytics Agent to publish. There could be multiple reasons why agents stop reporting data. It's best if you create a Support ticket for an end-to-end review of the agents.
How do I submit a Support ticket? An FAQ
Regards, Noopur
@Abhiram.Sahoo : Test your Events Service endpoint connectivity on the SAP Agent machine:

curl http(s)://<ES URL>:<Port Number>/_ping

Expected response: pong
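For example, a concrete invocation might look like this (hypothetical host; 9080 is a common Events Service API port, but use whatever port your Events Service actually listens on):

curl http://analytics.example.com:9080/_ping
# a healthy Events Service answers with the plain text: pong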
Hi @Dustem, let me understand: 4768 or 4770 should be before the 4769, and you want an alert if they are missing or they aren't before - is that correct? Ciao. Giuseppe
I have 2 questions here: I am using Splunk Cloud.
1. Is there a way I can import a csv file into a Splunk dashboard and display the view? Ex: we are trying to show order data as a dashboard in Splunk.
2. I am looking to import logs into Splunk using REST API calls; how can I do it? I haven't leveraged it earlier. Ex: if that can be done, we can leverage OMS APIs or extract the OMS DB data through TOSCA and load the summary information into Splunk.
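For what it's worth, the two patterns usually suggested for these: upload the CSV as a lookup and query it with inputlookup, and push logs over REST with the HTTP Event Collector (HEC). A sketch, where orders.csv and the order_status field are hypothetical:

| inputlookup orders.csv
``` orders.csv is a hypothetical lookup file uploaded under Settings > Lookups > Lookup table files ```
| stats count by order_status

And sending one event to HEC over REST (the /services/collector/event endpoint and the "Authorization: Splunk <token>" header are standard; the host, token, and payload below are made up):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": {"order_id": 42, "status": "shipped"}, "sourcetype": "oms:orders"}'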
Thank you for the help. I was able to extract the fields now. When I run query 1, I have found that event_name "pending-transfer" with a task_id of 3 has event_id "1274856" repeated three times in a row, which means that there is no increment in the event_id. However, when I run query 2 for the same event_name "pending-transfer", it doesn't give any output. Technically, query 2 should send an alert (I have created the alert to run every minute but still NO alert was triggered) because there is no change in the event_id for the events that were triggered at 9/4/22 10:02:39 PM and 9/4/22 09:57:39 PM. Not sure if I am missing something.

Query 1: Alert if there is an increment

| stats list(_time) as _time list(event_id) as event_id by event_name task_id
| where mvindex(_time, 0) > mvindex(_time, -1) AND mvindex(event_id, 0) > mvindex(event_id, -1) OR mvindex(_time, 0) < mvindex(_time, -1) AND mvindex(event_id, 0) < mvindex(event_id, -1)
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Below is the output that I am getting when I run query 1:

Time                 event_name        task_id  event_id
9/4/22 10:02:39 PM   pending-transfer  3        1274856
9/4/22 09:57:39 PM   pending-transfer  3        1274856
9/4/22 09:52:39 PM   pending-transfer  3        1274856
9/4/22 09:47:39 PM   pending-transfer  3        1274851
9/4/22 09:37:39 PM   pending-transfer  3        1274849

Query 2: Alert if there is NO increment

| stats list(_time) as _time list(event_id) as event_id by event_name task_id
| where mvindex(event_id, 0) = mvindex(event_id, -1)
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Thank you
I need the following case to be searched: (past-3-days count=0 and today count>0)

          past-3-days  today
field1    0            4
field2    0            1
.....

Then show the table: _time  field  _raw
You'll need to be a bit more specific when you say count for each field, but you could do something like this (note the final stats needs a by field clause to keep one row per field):

index=... earliest=-3d@d latest=now
| bin _time span=1d
``` Calculates the count for a field by day ```
| stats count by _time field
``` Now calculate today's value and the total ```
| stats sum(eval(if(_time=relative_time(now(), "@d"), count, 0))) as today sum(count) as total by field
``` And set a field to be TRUE or FALSE to alert ```
| eval alert=if(today>0 AND total-today=0, "TRUE", "FALSE")

Does this fit what you're trying to do?
Not during this period, but the user did not have 4768 or 4770 events prior to this period.
I want to search for users with 4769 events over a continuous period who have no 4768 or 4770 events during that same period - not simply users with 4769 events and no 4768 or 4770 events at all.
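A sketch of the shape such a search usually takes (the index name and the EventCode/user field names are assumptions based on standard Windows security logs in Splunk; adjust to your data):

index=wineventlog EventCode IN (4768, 4769, 4770)
``` collect all Kerberos-related event codes each user produced inside the search window ```
| stats values(EventCode) as codes by user
``` keep only users whose codes in this window are exactly {4769} ```
| where mvcount(codes) = 1 AND codes = "4769"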
How to calculate the count for each field in the past 3 days? If the count for all 3 days is 0, and the count for today is greater than 0, then the command triggers an alert that shows the log.
Yes, correct. And I saw the sendemail logs for other alerts in the internal logs - those look good. But I don't see sendemail logs for this alert in the internal logs.
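For reference, a sketch of the internal-log search typically used to check this (narrow it with your alert's name as a keyword):

index=_internal sendemail
``` email activity and errors from the sendemail command land in the _internal index; add the alert name to filter ```
| sort - _time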