All Posts

Hi! We have installed extensions (23.10.0, compatible with 4.4.1.0) for our Azure Function Apps (.NET Core 6.0.24) to collect data and monitor them. The functions are collecting data and they show up as tiers in the Application Flow Map. The problem is that we cannot see the outbound connections they are making, and we would like to see them (arrows pointing to other services that are not monitored via AppDynamics agents). The list of remote services is empty as well. Automatic discovery is turned on in Backend Detection for each type, so it looks fine from the controller's end.

We still have some older parts of the infrastructure monitored under a different AppDynamics application, and there we can see an Azure App Service (JVM OpenJDK 64-Bit Server VM 11.0.8) with an installed App Agent (Server Agent 21.11.2.33305) that shows outbound connections without any extra settings in the controller (I have compared those settings for both AppDynamics applications). What is more interesting, the Azure Function App from the new AppDynamics application and the Azure App Service from the old one both make HTTP calls to the same Azure App Service, yet we can only see those calls on the flow map of the old application.

Are there extra app agent settings/properties that need to be set for the outbound calls and a more precise flow map to show up (one where arrows point to other services that also appear under Remote Services)? Or what else could cause the remote services/outbound connections not to show up? Please note that I have looked through the Backend Detection, Remote Services, and Flow Map documentation, as well as the available topics on the Cisco AppDynamics Community, and I haven't found a solution to my problem. Best Regards
Thanks for the reply. I've tried with the option initCrcLength = 1024, but still not all the files have been synced. There are still more pending.
Saved my day
No. Macros are expanded at search time, not while the results are processed, so a macro as the email action recipient won't work. You can use result-based tokens instead: https://docs.splunk.com/Documentation/Splunk/9.1.2/Alert/EmailNotificationTokens EDIT: OK, so you can use a macro or a lookup to generate a recipient field in the search results, and then use that result as a token in the alert's recipient setting.
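A rough sketch of the result-token approach (the index, field names, and addresses here are illustrative, not from your environment): have the alert search itself emit a recipient field, then reference it as a token in the email action.

```
index=app_logs log_level=ERROR
| stats count AS error_count BY host
| eval recipients="devops@example.com,oncall@example.com"
```

In the alert's email action, set the "To" field to $result.recipients$ so the address list comes from the first result row. Replacing the eval with a lookup (or a macro that expands to one) lets you maintain the recipient list in one place.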
Wild idea, maybe I can do this with a macro definition? I'll play around with it and see if it works 
Thanks for that information
Dears, we have two applications called Cortex and IST, related to credit card processing and management, which are provided by the FIS vendor. These two apps consist of two parts (Java and C). We successfully monitored the Java part, but we aren't able to monitor the C part, as the source code doesn't exist on our side and is not provided by the FIS vendor. Has anyone succeeded in monitoring them, or does anyone have an idea how to do that?
We have found the solution: Change the Dashboard Studio design from dark to light. Now the fonts are black
I think it was actually a problem with the "security" angle, though I can't remember. So I'll keep my fingers crossed for some creative suggestions.
Technically, data from a UF does _not_ come to Splunk Cloud via HEC; it is sent with the S2S protocol embedded in HTTP requests. But it does connect over the HTTP protocol and needs an authentication token. Otherwise anyone could send data to your environment.
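For reference, this is roughly what the HTTP-based output looks like in outputs.conf on the forwarder (the hostname and token are placeholders, and the exact stanza settings should be checked against your Splunk version's outputs.conf spec):

```
# outputs.conf on the Universal Forwarder (httpout available in 8.1+)
[httpout]
# S2S payload carried over HTTP, authenticated with a token
httpEventCollectorToken = <your-token-here>
uri = https://example.splunkcloud.com:443
```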
Thank you for the clarification. Also, may I know what the universal forwarder credentials are used for when the data comes via HEC?
HEC inputs on Cloud are TLS-enabled by default.
Thanks, yes I know it works with CSS, but I use the new Dashboard Studio with JSON.
That sounds like a problem with your email system, which should be handled by your mail admins.
Close, but without the <> part (unlike crcSalt = <SOURCE>, where the <SOURCE> string must literally be written that way if you use that option). And you'd typically want a higher value if you have a constant header. Something like initCrcLength = 1024, for example.
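A sketch of how that might look in inputs.conf (the monitor path and index are illustrative):

```
# inputs.conf on the forwarder
[monitor:///var/log/myapp/*.log]
index = main
# Hash the first 1024 bytes (instead of the default 256) when computing
# the file CRC, so files sharing a long constant header are still
# recognized as distinct files rather than already-seen ones.
initCrcLength = 1024
```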
Hello, I see. You mean something like this? initCrcLength = <256>
Yes, 100% agreed, and I have tried to do this, though for some reason the "splunk" sender was not allowed access to distribution lists, and using group inboxes would not achieve the desired outcome.
While there are probably solutions within Splunk itself, I suppose the easiest solution to manage would be to create distribution lists in your company email system and simply manage the recipients of the reports through membership in those lists.
I am fairly confident that there is a clever workaround for this, though I am not 100% sure how. I have alerts stored in apps on a deployer which make use of the email action when triggered. If I need to add/remove recipients from an email alert, I have to manually edit several different recipient lists for several different alerts. What I want is a clever way to set up some sort of "list" of recipients, which I could name "developers" for instance, so that instead of having 20 email addresses as recipients in the alert I could do something like "$devops$", and then edit the recipients in a single location for all alerts instead of in each one separately. I hope this is a clear enough explanation of what I am hoping is possible, and I welcome all suggestions.