All Posts

Did you check the values of Report_Id I mentioned earlier, @mdsnmss? Are they repeating or all unique?
Try removing panels one at a time until you find the one that is causing the problem (there may be more than one), then look to fix that one.
Hi Team, We have a requirement to forward the archived data to external storage (a GCS bucket). I have reviewed the Splunk documentation but haven't had any luck with this. Kindly assist me in forwarding the archived data to the GCS bucket.
Good morning Ryan, for the first doc link you passed to me, this is the description of the processing time: I must say it doesn't state whether the value is expressed in seconds or milliseconds.

In the second link, the processing time I'm finding is the following, and it's about the pre-built metric, which was clear to me was expressed in seconds:

My question was the following: I'm in the Analytics section, investigating the sap_idoc_table. I double-click one single row and I see a field called PROCESSING_TIME = 0011123, like the following. How can I know in what unit this is expressed? (From what piece of documentation, or from where in the product?)

In AppD I went into the default dashboard provided for the IDoc and double-clicked on the table with the title "Idoc Errors". There it seems we have the same field, but mapped as follows:

Am I correct to assume that this field is equal to the one called "PROCESSING_TIME" seen in the Analytics engine? If so, I'm wondering how come here I can see it mapped and explicitly declared, while in Analytics I can only see the value as a string of text.

Best regards
The Splunk App for AWS security dashboard shows '0' data, and I need help fixing this issue. When I try to run/edit the query, it shows an error as below:
Hello to all dear friends. Does Splunk have a setting to serve only HTTP version 2.0? Thank you in advance.
I deployed Splunk Universal Forwarder 9.1.1 on Linux servers running on VPC VSIs in IBM Cloud. Some servers are RHEL7, others are RHEL8. These servers send logs to a Heavy Forwarder. After deployment, memory usage became too high on each server, and one of the servers went down because of a memory leak. CPU usage is also high, as expected, while the splunk process is running. For example, one server's CPU usage increased by 30% and it consumed 5.7GB of memory out of 14GB after the splunk process started. How can I reduce the resource usage?
Thank you. With a huge dashboard it looks like I am hitting the maximum number of concurrent searches Splunk allows, so I was trying to see if I could combine searches. Would append [search…] be started as a new concurrent search?
Will this change the timezone in the output to SGT? We want the output to be shifted to SGT and then formatted as "%Y-%m-%d %H:%M:%S".
Hi @Satyapv, as @yuanliu said, I don't understand why you'd put inhomogeneous results in the same search. Anyway, you could use the append command, but you'll have empty values in the columns of the other search:

index=IndexA
| stats Count(X) AS X Avg(Y) AS Y BY XYZ
| append [ search index=IndexB | stats Count(K) AS K Max(M) AS M by KM ]

Ciao. Giuseppe
Hi @mlevsh, the easiest way is asking to remove that rule, because it isn't useful! Anyway, you should list all the existing indexes in the WHERE condition:

| tstats count where index IN (index1,index2,index3) by index host
| fields - count

To avoid repeating this list in every search, you could also put all these indexes in a macro or an eventtype and use it in your searches. Ciao. Giuseppe
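As a minimal sketch of the macro approach (the macro name all_indexes is hypothetical; it would be defined under Settings > Advanced search > Search macros with a definition such as index IN (index1,index2,index3)):

| tstats count where `all_indexes` by index host
| fields - count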
Hi @Mien, if the days on which you're receiving less data aren't weekends, you should check whether there are scheduled activities or downtime on those systems on those days. In addition, you should check whether this behaviour happens every week or only in one. Then compare the /opt/splunk/var/log/splunk/metrics.log file sizes to understand whether the issue is on the Splunk side or on the system side. Ciao. Giuseppe
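A possible sketch for that comparison, run against the indexers' _internal index, is to chart the daily volume reported by metrics.log per host and see which hosts drop on which days:

index=_internal source=*metrics.log* group=per_host_thruput
| timechart span=1d sum(kb) AS kb_indexed BY series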
Sorry, but SGT corresponds to UTC+8. If you want to change the time format from the displayed one to "%Y-%m-%d %H:%M:%S", you should use eval with the time functions:

| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S")

Ciao. Giuseppe
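If the output also needs to be shifted from UTC to SGT (UTC+8), a minimal sketch is to parse the raw timestamp, add the 8-hour offset, and then format it; the field name eventTime is an assumption for the CloudWatch field holding values like 2023-06-30T17:17:52Z:

| eval event_epoch=strptime(eventTime,"%Y-%m-%dT%H:%M:%S")
| eval event_time_sgt=strftime(event_epoch + 8*3600,"%Y-%m-%d %H:%M:%S")

Alternatively, since strftime(_time, ...) renders _time in the timezone configured for the searching user, setting that user's timezone preference to Asia/Singapore would also produce SGT output without a manual offset.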
I tried implementing the Slack app, but I'm unable to send alerts from Splunk, so can you guide me through how to use the app to send alerts without using a webhook?
Hi Team, I'm currently receiving AWS CloudWatch logs in Splunk using the add-on. I'm developing a use case and need to utilize the "event Time" field from the logs. I require assistance in converting the event Time from UTC to SGT. Sample event Time values are in UTC+0:

2023-06-30T17:17:52Z
2023-06-30T21:29:53Z
2023-06-30T22:32:53Z
2023-07-01T00:38:53Z
2023-07-01T04:50:52Z
2023-07-01T05:53:55Z
2023-07-01T06:56:54Z
2023-07-01T07:59:52Z
2023-07-01T09:02:56Z
2023-07-01T10:05:54Z
2023-07-01T11:08:53Z
2023-07-01T12:11:53Z

End result: UTC+0 converted to SGT (UTC+8). Expected output format is "%Y-%m-%d %H:%M:%S".
Yes, I came across that. So is there an alternative way to run Python scripts?
Forget Splunk.  If there are no common fields between indices, can you illustrate what the stats result would look like?  Please show some sample tables of field values in each index (in text, anonymized as needed).  Then, illustrate the corresponding output table (also in text) that you envision from the two data tables.  If anonymizing data is difficult, illustrate mock data tables and calculate the desired output table by hand, so volunteers can understand your use case.

Let me also point out that your illustrated mock code, "Stats Count (X) Avg(Y) by XYZ", is confusing because you mentioned no field named XYZ.  The other mock code, "stats Count (K) Max(M) by K M", also doesn't make sense, because when you group by M, Max(M) can only have the value of that group's M, unless K and M do not appear in the same event, in which case Max(M) is null.
If any of the code, including your initial code, gives output that doesn't suit your needs, please post sample data (anonymized as needed) that leads to such output, the actual output (anonymized as needed) from that code, and explain what the desired output should look like (and how the desired output differs from the actual output, if that is not painfully obvious). Your initial code performs transaction on user.  After excluding closed transactions, what remains in the stream are events with eventcode 4769 that do not have those three eventcodes for the same user, as well as events with eventcodes that are not those three.  Isn't this what you asked for?
Yep, that's a valid and nice bit of SPL (the eval(score>0)).  Or the "!=" should also do the trick:

| stats count(eval(score!=0)) as Total_Non_Zero_Vuln by ip

The stats and eval commands give us so many options, very nice!
I tried what you suggested, but I was unable to get the results I expected. To resolve the issue, I had to disable the Java log enrichment feature in Dynatrace OneAgent to stop OneAgent from injecting {dt.trace_id=837045e132ad49311fde0e1ac6a6c18b, dt.span_id=169aa205dab448fc, dt.trace_sampled=true} into my logs. Now things are back to normal.