All Posts

If you are using an on-premises Controller, you can log in to your database and execute this query. For those using a SaaS Controller, enabling the Audit Report feature is recommended. Below are articles providing more detailed information on both scenarios. https://docs.appdynamics.com/appd/onprem/23.x/latest/en/controller-deployment/administer-the-controller/controller-audit-log
How do I calculate the incident end-day SLA hours in Splunk? For example, if Inc_Resolved_Time="27.02.2024 08:00" and TeamWorkTimings="Mon-Fri 7AM to 6PM", then the end-day SLA hours should be 1 business hour. Could you please help me with a Splunk query to achieve this result? I am trying to use this query but am not getting proper results:

| eval Ending_Day_SLA_Hours=if(((incidentEndTime1-incidentStartTime1)<86400 AND Inc_Open_Days=Inc_Resolved_Actual_Days), 0,
    if((Inc_Resolve_Date=Weekend_1 OR Inc_Resolve_Date=Weekend_2 OR Inc_Resolve_Date=Weekend_3 OR Inc_Resolve_Date=Weekend_4), 0,
    if((incidentEndTime1>Inc_SLA_Start_Day_Epoch AND incidentEndTime1>Inc_SLA_End_Day_Epoch), (660-((incidentEndTime1-Inc_SLA_End_Day_Epoch)/1440))/60,
    if(incidentEndTime1<Inc_SLA_Start_Day_Epoch, 11, 0))))
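One possible alternative shape for the end-day calculation, offered as a minimal sketch rather than a fix for the query above: derive the working window from the resolved timestamp itself instead of pre-computed weekend fields. This assumes Inc_Resolved_Time uses the DD.MM.YYYY HH:MM format shown and a fixed Mon-Fri 7AM-6PM window (11 working hours):

| eval resolved_epoch=strptime(Inc_Resolved_Time, "%d.%m.%Y %H:%M")
| eval resolved_dow=strftime(resolved_epoch, "%a")
| eval work_start=relative_time(resolved_epoch, "@d") + 7*3600
| eval work_end=relative_time(resolved_epoch, "@d") + 18*3600
| eval Ending_Day_SLA_Hours=case(
    resolved_dow="Sat" OR resolved_dow="Sun", 0,
    resolved_epoch<=work_start, 0,
    resolved_epoch>=work_end, 11,
    true(), round((resolved_epoch-work_start)/3600, 2))

For the example above, 27.02.2024 08:00 falls one hour after the 7AM window start, so Ending_Day_SLA_Hours comes out as 1.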
I'm trying to build a query to give real-time results for a value, but there is a time delay between when the data is sent and when it is indexed. This means when I do a real-time query for the last 60s, I get 20s of data and 40s of blank. I'd like to load the last 60s of received data in real time, not the data received in the last 60s. Any ideas? I've tried

index=ind sourcetype=src (type=instrument) | where temperature!="" | timechart span=1s values(temperature)

and

index=ind sourcetype=src (type=instrument) | where temperature!= NULL | timechart span=1s values(temperature)

No luck with either.
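One approach worth trying, as a sketch (index and sourcetype names taken from the question): anchor the 60s window to the newest event you actually have rather than to wall-clock time, and run it over a slightly wider time range (say, the last 5 minutes) so delayed events are included:

index=ind sourcetype=src type=instrument temperature=*
| eventstats max(_time) as latest_time
| where _time >= latest_time - 60
| timechart span=1s values(temperature)

eventstats finds the newest timestamp in the result set, and the where clause keeps only the 60 seconds leading up to it, so the chart always shows the last 60s of received data regardless of indexing lag.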
Hi @uagraw01, the results mean that you have some queuing, but it's not critical (you don't have 100%). Add the host with missed logs to your search and check whether there is queue congestion on that host. Ciao. Giuseppe
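For reference, a common way to check per-host queue fill from the internal metrics (a sketch; replace the host placeholder):

index=_internal source=*metrics.log group=queue host=<your_host>
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 2)
| timechart span=5m avg(fill_pct) by name

Sustained values near 100 for a queue indicate congestion on that host.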
Hi, I want to extract the value c611b43d-a574-4636-9116-ec45fe8090f8 from the below. Could you please let me know how I can do this using rex field=httpURL? httpURL: /peerpayment/v1/payment/c611b43d-a574-4636-9116-ec45fe8090f8/performAction
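One way to do this, as a sketch (the capture-group name payment_id is arbitrary, and the pattern assumes a standard lowercase UUID between /payment/ and the next path segment):

| rex field=httpURL "/payment/(?<payment_id>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"

The extracted value lands in a new field called payment_id.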
Hi @Fadil.CK, If you have not seen this yet, we have published this information on our AppD Docs site: https://docs.appdynamics.com/paa/en/appdynamics-end-of-support-notices/end-of-support-notice-disabling-tls-1-0-and-1-1 Let me know if this helps answer your questions.
Hi @Satish.Babu, Given how old this post is, it's unlikely to get a reply. I'm searching for any relevant existing content for you. If I find anything I'll share it here. If you happen to find any info, can you please also share it? If you don't have any luck, you can create a ticket with AppDynamics Support (for contractual customers only) How do I submit a Support ticket? An FAQ 
I also have the same question. The new Outlook client makes links not clickable, I guess for security reasons. I want to make my link in the Splunk email alert clickable again, but I cannot find any way of doing so.
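One thing that may be worth checking, offered as an assumption rather than a confirmed fix for the new Outlook client: Splunk's email alert action can send HTML instead of plain text, and HTML mail generally keeps links clickable. In savedsearches.conf for the alert (also settable in the alert's email options in the UI):

# An assumption: switches the alert's email from plain text to HTML
action.email.content_type = html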
I have a similar/same problem; the logs explicitly state that the certificate validation failed due to a self-signed cert in the chain. Your idea was: "As a temporary test, try disabling certificate verification in Splunk (not recommended for production!)." How would I do this for the Dynatrace plugin?
Hi, can you share the SPL command to display: date, SSID, Device, host? Thx
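Assuming those are already extracted fields and that _time stands in for date, a minimal sketch (index and sourcetype are placeholders):

index=<your_index> sourcetype=<your_sourcetype>
| table _time SSID Device host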
Hi guys, any success on this? Also, I was checking the Server module, and the documentation says remediation actions are not available for servers. Any idea how we can call a remediation script on servers? Regards, Gopikrishnan R.
Impala logs contain data at the query, server, and pool (global, shared) levels. The data is mixed together in the log entries for each query, and the only way to extract it is to combine the rows from a single query into a transaction and extract the fields from there. Using timechart for count(query_id), last(reserved), last(max_mem) provides a nice, accurate time-sample of the state of that one server. The query_id counts and reserved values can be combined to produce useful pool-level values for the number of running queries and the total reserved memory allocated in the pool. The max_mem value is per-pool and will have a single value (e.g., 123456) across all hosts in the pool. It may change over time, but all of the pool members will show the same value at any given moment. At that point, last(max_mem) will have identical values for each host in the pool -- the value is common to all pool members and will invariably be the same for log transactions acquired from the various hosts. If there were some way to simply take values(max_mem-*) from the timechart, then I'd have a valid sample of the pool-level max_mem value. There may be a way to do this with addcols, or there may be a keyword I don't know about, similar to "addcols fieldname=x x-*", that will summarize multiple fields into a single value; if so, then that command would work. I can't be the first person trying to summarize values from multiple levels of a data hierarchy in a timeslice, I think?
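There is a keyword close to what is described: foreach accepts wildcards over split-by columns, so the per-host max_mem columns can be collapsed into a single pool-level field. A sketch, assuming the timechart is split by host so the columns come out named "max_mem: <host>":

... | timechart span=1m count(query_id) as queries, last(reserved) as reserved, last(max_mem) as max_mem by host
| eval pool_max_mem=null()
| foreach "max_mem: *" [ eval pool_max_mem=coalesce(pool_max_mem, '<<FIELD>>') ]

Since every host reports the same per-pool value, coalescing across the wildcard columns just picks that common value into pool_max_mem.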
Hi, I have multiple searches that follow a naming convention like "Server1_Monitoring", "Server2_Monitoring", and so on. I used an asterisk in the search name stanza to match all of them in the savedsearches.conf file, like:

[Server*_Monitoring]
dispatch.ttl = 3p

I restarted the search head after the change, but it didn't work. Is there any way to avoid listing all the searches explicitly in savedsearches.conf? Thank you!
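For what it's worth, I don't believe wildcards are supported in savedsearches.conf stanza names. If the setting should apply to every saved search in the app, a [default] stanza is one option, sketched below; note it affects all searches in that app's scope, not just the *_Monitoring ones:

[default]
dispatch.ttl = 3p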
You could use btool to look at what is applied to your sourcetype, but if something is also applied by source or host, those settings will override the sourcetype definitions. Unfortunately, I don't know of any tool that can show you which of those are actually applied ;-(
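For reference, the btool check looks something like this, run on the instance that parses the data (the sourcetype name is a placeholder):

$SPLUNK_HOME/bin/splunk btool props list <your_sourcetype> --debug

The --debug flag shows which .conf file each effective setting comes from.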
Hi @Anna.Friedel, It looks like the Community has not jumped in yet.  If you found a solution to your question we’d appreciate it if you came back and shared what you learned.  
Hi All, We are deploying the Splunk Universal Forwarder at the moment, which is going well, but I'm now looking at getting it installed on our Citrix infrastructure. In our environment, we have "golden images" where we make changes. Once published (using PVS), the new image is deployed to the Citrix servers in that specific delivery group. When the (non-persistent) servers in that group perform their nightly reboots, they pick up the golden image via PVS.

Using the clone-prep command works, and the new client comes through in Forwarder Management without any issues, which I'm happy with as it is working as expected. But as these servers reboot every night, I'm finding that duplicate entries for the same servers are created when the reboot completes and Splunk connects to the deployment server. I'm presuming this is because the GUID changes every time these servers reboot. Is there a way to ensure the same GUID is used for a given hostname so that duplicate records aren't created in the Forwarder Management console? Or is there an option somewhere for Splunk to identify a duplicate hostname and remove it automatically?

For example, this is how it works:
SERVERMAIN01 - Citrix maintenance server where golden images are attached and changes can be made.
SERVERAPP01 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.
SERVERAPP02 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.
SERVERAPP03 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.

So essentially, I'm getting duplicate clients in Forwarder Management for SERVERAPP01/02/03 every night, which will just build up over time unless I manually intervene. Hope this all makes sense and someone can point me in the right direction; I've searched around for a while and can't find any posts about this. Cheers,
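Not a confirmed fix, but one idea to test: the forwarder's GUID lives in etc/instance.cfg, so if each non-persistent server restores its own saved copy at boot (from a persistent drive or share) before the forwarder service starts, the GUID should stay stable per hostname. A hypothetical startup-script sketch; the share path and install location are assumptions:

REM Run before the SplunkForwarder service starts.
REM Restore this host's saved instance.cfg (which holds the forwarder GUID)
REM if one exists; otherwise save the freshly generated one for future reboots.
set PERSIST=\\fileserver\splunk-guids\%COMPUTERNAME%
set SPLUNK_ETC=C:\Program Files\SplunkUniversalForwarder\etc
if exist "%PERSIST%\instance.cfg" (
    copy /Y "%PERSIST%\instance.cfg" "%SPLUNK_ETC%\instance.cfg"
) else (
    mkdir "%PERSIST%"
    copy /Y "%SPLUNK_ETC%\instance.cfg" "%PERSIST%\instance.cfg"
)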
Hi, did you manage to solve this?   Thanks!
If a case function returns no value it's because none of the expressions matched.  Adding a default expression | eval foo = case(..., 1==1, "???") will help flag edge cases that don't match the other expressions. In this instance, it seems the first expression needs some wildcards unless you're looking for an exact match. | eval Status=case(like('message',"%Exchange Rates Process Completed. File sucessfully sent to Concur%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")  
Without some sample events, it is difficult to determine what might be wrong with your search. Having said that, I noticed that the first option in your case function for Status does not have any wildcards in the pattern for the like function. Could this be the issue?
The x-axis of a chart is usually the first field / column in the result events used for the chart. Check your search query to ensure that the fields are in the correct order.
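For example, forcing the field order with table so that host becomes the x-axis (field names here are placeholders):

index=<your_index>
| stats count by status host
| table host status count

table reorders the columns so host comes first, and the chart then uses it as the x-axis.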