All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @Fadil.CK, If you have not seen this yet, we have published this information on our AppD Docs site: https://docs.appdynamics.com/paa/en/appdynamics-end-of-support-notices/end-of-support-notice-disabling-tls-1-0-and-1-1 Let me know if this helps answer your questions.
Hi @Satish.Babu, Given how old this post is, it's unlikely to get a reply. I'm searching for any relevant existing content for you. If I find anything, I'll share it here. If you happen to find any info, can you please also share it? If you don't have any luck, you can create a ticket with AppDynamics Support (for contractual customers only): How do I submit a Support ticket? An FAQ
I also have the same question. The new Outlook client makes links not clickable, I guess for security reasons. I want to make it so that my link in the Splunk email alert becomes clickable again, but I cannot find any way of doing so.
I have a similar (or the same) problem: the logs explicitly state that the certificate validation failed due to a self-signed cert in the chain. Your idea was:   "As a temporary test, try disabling certificate verification in Splunk (not recommended for production!)."   How would I do this for the Dynatrace plugin?
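For context on what "disabling certificate verification" usually means under the hood: many Splunk add-on inputs are Python-based, and skipping verification amounts to using an SSL context with hostname checking and chain validation turned off. The sketch below is a generic Python illustration of that mechanism, not the Dynatrace add-on's actual setting (which is not confirmed here):

```python
import ssl

def make_context(verify: bool) -> ssl.SSLContext:
    """Build an SSL context; verify=False skips hostname checks and
    chain validation (temporary debugging only, never for production)."""
    ctx = ssl.create_default_context()
    if not verify:
        # Order matters: check_hostname must be disabled before
        # verify_mode can be set to CERT_NONE.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

insecure = make_context(verify=False)
print(insecure.verify_mode == ssl.CERT_NONE)  # → True
```

If the add-on exposes a "verify SSL" checkbox or setting in its input configuration, that setting typically toggles exactly this behavior.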
Hi, can you share the SPL command to display: date, SSID, Device, host? Thanks!
Hi guys, Any success on this? Also, I was checking the Server module, and the documentation says remediation actions are not available for servers. Any idea how we can call a remediation script on servers? Regards, Gopikrishnan R.
Impala logs contain data at the query, server, and pool (global, shared) levels. The data is mixed together in the log entries for each query, and the only way to extract it is to combine the rows for a single query into a transaction and extract the fields from there.

Using timechart with count(query_id), last(reserved), and last(max_mem) provides a nice, accurate time-sample of the state of one server. The query_id counts and reserved values can be combined to produce useful pool-level values for the number of running queries and the total reserved memory allocated in the pool.

The max_mem value is per-pool and will have a single value (e.g., 123456) across all hosts in the pool. It may change over time, but all of the pool members will show the same value at any given moment. At that point, last(max_mem) will have identical values for each host in the pool -- the value is common to all pool members and will invariably be the same across log transactions acquired from the various hosts.

If there were some way to simply take values(max_mem-*) from the timechart, then I'd have a valid sample of the pool-level max_mem value. There may be a way to do this with addcols, or there may be a keyword I don't know about, something like "addcols fieldname=x x-*", that summarizes multiple fields into a single value; if so, that command would work. I can't be the first person trying to summarize values from multiple levels of a hierarchy of data in a timeslice, surely?
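The "collapse identical per-host columns into one pool value" idea can be sketched outside SPL. Assuming a timechart split by host produces one max_mem-&lt;host&gt; column per pool member (hypothetical field names), all carrying the same pool-level number, the collapse is just "assert they agree, keep one":

```python
def pool_value(row: dict, prefix: str = "max_mem-"):
    """Collapse max_mem-* columns into the single pool-level value,
    raising if the hosts ever disagree (per the logs, they shouldn't)."""
    vals = {v for k, v in row.items() if k.startswith(prefix) and v is not None}
    if len(vals) != 1:
        raise ValueError(f"expected one pool-wide value, got {vals}")
    return vals.pop()

# One timechart row: every host reports the same pool-level max_mem.
row = {"_time": "2024-01-01T00:00", "max_mem-hostA": 123456,
       "max_mem-hostB": 123456, "max_mem-hostC": 123456}
print(pool_value(row))  # → 123456
```

The disagreement check matters: if the hosts ever diverge (e.g., a config change mid-timeslice), silently picking one value would hide it.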
Hi, I have multiple searches that follow a naming convention like "Server1_Monitoring", "Server2_Monitoring", and so on. I used an asterisk in the search name stanza to match all of them in the savedsearches.conf file, like:

[Server*_Monitoring]
dispatch.ttl=3p

I restarted the search head after the change, but it didn't work. Is there any way to avoid listing all the searches explicitly in savedsearches.conf? Thank you!
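If wildcard stanza names turn out not to be honored in savedsearches.conf, one hedged workaround is to generate the explicit stanzas from the naming convention with a small script and deploy the resulting conf fragment. A minimal sketch (search names and settings here are taken from the question; the approach itself is an assumption, not a documented feature):

```python
import fnmatch

def render_stanzas(search_names, pattern, settings):
    """Emit an explicit savedsearches.conf stanza for every search
    name matching the shell-style pattern."""
    lines = []
    for name in sorted(search_names):
        if fnmatch.fnmatch(name, pattern):
            lines.append(f"[{name}]")
            lines.extend(f"{k}={v}" for k, v in settings.items())
            lines.append("")
    return "\n".join(lines)

conf = render_stanzas(
    ["Server1_Monitoring", "Server2_Monitoring", "Other_Search"],
    "Server*_Monitoring",
    {"dispatch.ttl": "3p"},
)
print(conf)
```

The generated fragment can then be dropped into the app's local/savedsearches.conf, keeping the pattern logic in one place instead of hand-maintaining every stanza.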
You could use btool to see what is applied to your sourcetype, BUT if something also applies to the source or host, those will override the sourcetype definitions. Unfortunately, I don't know of any tool that can show you which of those are applied ;-(
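The override behavior the post describes can be modeled as a simple layered merge. This is a toy sketch mirroring the post's claim (host- and source-based stanzas winning over sourcetype-based ones), not a full reproduction of Splunk's documented precedence rules, and the stanza contents are hypothetical:

```python
def effective_settings(by_sourcetype, by_source, by_host):
    """Merge props settings in increasing precedence order,
    per the post's description: sourcetype < source < host."""
    merged = dict(by_sourcetype)   # lowest precedence in this sketch
    merged.update(by_source)       # source-based stanza overrides sourcetype
    merged.update(by_host)         # host-based stanza overrides both
    return merged

eff = effective_settings(
    by_sourcetype={"TIME_FORMAT": "%Y-%m-%d", "LINE_BREAKER": r"([\r\n]+)"},
    by_source={"TIME_FORMAT": "%s"},   # a source stanza shadows the sourcetype's value
    by_host={},
)
print(eff["TIME_FORMAT"])  # → "%s"
```

This is why btool output for the sourcetype alone can be misleading: a setting can look correct there yet be shadowed by a source or host stanza at ingest time.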
Hi @Anna.Friedel, It looks like the Community has not jumped in yet.  If you found a solution to your question we’d appreciate it if you came back and shared what you learned.  
Hi All, We are deploying the Splunk Universal Forwarder at the moment, which is going well, but I'm looking at getting it installed on our Citrix infrastructure. In our environment, we have "Golden Images" where we make changes. Once published (using PVS), the new image is deployed to the Citrix servers in that specific delivery group. When the servers (non-persistent) in that group perform their nightly reboots, they pick up the golden image via PVS.

Using the clone prep command works, and the new client comes through in Forwarder Management without any issues, which I'm happy with as it is working as expected. But as these servers reboot every night, I'm finding that duplicate entries for the same servers are created when the reboot completes and Splunk connects to the deployment server. I'm presuming this is because the GUID changes every time these servers reboot, but I want to know if there is a way to ensure the same GUID is used for a given hostname, so that duplicate records are avoided in the Forwarder Management console. Or is there some option where Splunk identifies a duplicate hostname and removes it automatically?

For example, this is how it works:
SERVERMAIN01 - Citrix maintenance server where golden images are attached and changes can be made.
SERVERAPP01 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.
SERVERAPP02 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.
SERVERAPP03 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.

So essentially, I'm getting duplicate clients in Forwarder Management for SERVERAPP01/02/03 every night, which will just build up over time unless I manually intervene, which takes up my time. Hope this all makes sense and someone can point me in the right direction, as I've searched around for a while and can't find any posts about this.
Cheers,
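Until a GUID-pinning answer turns up, the buildup itself can be handled by periodically deduplicating client records by hostname, keeping only the most recent one. A hedged sketch of that cleanup logic (the record fields hostname/guid/last_seen are illustrative, not the deployment server's actual schema):

```python
def dedupe_clients(records):
    """Keep only the most recently seen record per hostname."""
    latest = {}
    for rec in records:
        host = rec["hostname"]
        if host not in latest or rec["last_seen"] > latest[host]["last_seen"]:
            latest[host] = rec
    return sorted(latest.values(), key=lambda r: r["hostname"])

# Two records for SERVERAPP01 (old GUID vs. post-reboot GUID):
clients = [
    {"hostname": "SERVERAPP01", "guid": "aaa", "last_seen": 1},
    {"hostname": "SERVERAPP01", "guid": "bbb", "last_seen": 2},
    {"hostname": "SERVERAPP02", "guid": "ccc", "last_seen": 1},
]
print(dedupe_clients(clients))
```

In practice this would run against whatever client inventory the deployment server exposes (e.g., via its REST interface), deleting the stale entries the sketch filters out.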
Hi, did you manage to solve this?   Thanks!
If a case function returns no value, it's because none of the expressions matched. Adding a default expression

| eval foo = case(..., 1==1, "???")

will help flag edge cases that don't match the other expressions. In this instance, it seems the first expression needs some wildcards, unless you're looking for an exact match.

| eval Status=case(like('message',"%Exchange Rates Process Completed. File sucessfully sent to Concur%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
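The wildcard point is easy to see with a minimal Python model of SPL's like() semantics (% matches any run of characters, _ matches one character) applied to a message like the asker's:

```python
import re

def spl_like(value: str, pattern: str) -> bool:
    """Toy model of SPL like(): % is a multi-char wildcard,
    _ a single-char wildcard; anything else matches literally."""
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, value) is not None

msg = "INFO: Exchange Rates Process Completed. File sucessfully sent to Concur"

# Without wildcards the pattern must equal the entire field value:
print(spl_like(msg, "Exchange Rates Process Completed"))      # → False
# With % on both ends it matches the phrase anywhere in the field:
print(spl_like(msg, "%Exchange Rates Process Completed%"))    # → True
```

So a like() pattern that is only a fragment of the real message will silently never match unless it is wrapped in % wildcards.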
Without some sample events, it is difficult to determine what might be wrong with your search. Having said that, I noticed that the first option in your case function for Status does not have any wildcards in the pattern for the like function. Could this be the issue?
The x-axis of a chart is usually the first field / column in the result events used for the chart. Check your search query to ensure that the fields are in the correct order.
The runanywhere example I shared shows it working. However, this is based on the events that you shared, so if it isn't working for your real data, there is likely to be some discrepancy between your real data and the sample events that you shared. This is why it is important to share accurate representative examples of your data. Check your actual field names and event structure and modify the search accordingly.
Thank you for the suggestion. But as I mentioned, the current Splunk instance has been set up such that it won't let me use ./splunk start to accept the license. The only way I can start Splunk is with systemctl start Splunkd.
Currently, we are using the ITSI Module along with the Splunk_TA_snow add-on to create incidents on ServiceNow, and this is working as expected. We have a new requirement to create TASKs along with the incidents. We went through the ServiceNow add-on's scripts and the documentation, and we couldn't find anything that could help us.

My questions are:
1. Do we have this feature within the current scope of the add-on?
2. If not, can this be customized?
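One possible customization angle, offered as a hedged sketch rather than a confirmed capability of Splunk_TA_snow: ServiceNow's Table API can create records in any table, including sc_task, via POST /api/now/table/&lt;table&gt;. The instance name, sys_id, and the "parent" field linkage below are all hypothetical placeholders:

```python
import json

def build_task_request(instance: str, incident_sys_id: str, short_desc: str):
    """Build (but do not send) a ServiceNow Table API request that
    would create an sc_task record linked to an existing incident."""
    url = f"https://{instance}.service-now.com/api/now/table/sc_task"
    payload = {
        "short_description": short_desc,
        # Linking via "parent" is an assumption; the right reference
        # field depends on the instance's task/incident data model.
        "parent": incident_sys_id,
    }
    return url, json.dumps(payload)

url, body = build_task_request("dev12345", "abc123", "Follow-up task from ITSI")
print(url)
```

If the add-on's alert-action scripts can be extended, a call shaped like this (with proper authentication) could fire after the incident is created, using the returned incident sys_id.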
Hi guys, In this case statement I am getting JobType values, but I am not getting a Status value. I already mentioned the keyword above in the query, so why am I not getting it?

index="mulesoft" applicationName="s-concur-api" environment=DEV timestamp ("onDemand Flow for concur Expense Report file with FileID Started" OR "Exchange Rates Scheduler process started" OR "Exchange Rates Process Completed. File successfully sent to Concur")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.payload.TargetFileName as TargetFileName
| eval JobType=case(like('message',"%onDemand Flow for concur Expense Report file with FileID Started%"),"OnDemand", like('message',"%Exchange Rates Scheduler process started%"),"Scheduled", true(), "Unknown")
| eval Status=case(like('message',"Exchange Rates Process Completed. File sucessfully sent to Concur"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
| table JobType Status
We had PS create a report, but I can't seem to figure out what setting he used to show a time-based chart without a time-based command. He didn't use a dashboard; the graphic only shows on the report. I want the ability to do a similar type of visualization, but I can't figure out what setting causes the visual output.