Not sure if this got solved, but I was able to get it formatted using the following:
| inputlookup sc_vuln_data_lookup
| eval first_found = strftime(first_found, "%c")
| eval last_found = strftime(last_found, "%c")
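(For reference, the "%c" format string renders the epoch value in the locale's full date-and-time format, e.g. Thu Jul 18 09:30:00 2019.)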
https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall Here is the correct link. No one mentioned that it is the same TA for both. Have you tried this before? As per the documentation it should be downloaded directly from Splunkbase, but I can't find it. The only thing I found is "Splunk-add-on-for-windows", but I'm not sure if that's it or not. Thanks
Hi @PickleRick, your solution was correct! I only changed one very little detail in the "drop_dead_all" stanza, because I have to remove only the cloned events, not all of them; otherwise any new flows would be deleted without being cloned (see the sketch below). But your solution is great! Thank you very much for your help; I hope to have the opportunity to return the favor in the future, because you solved a very important issue for my job. On this occasion, may I take advantage of your knowledge: if you have expertise with WinLogBeat, would you please take a look at my question: https://community.splunk.com/t5/Getting-Data-In/Connect-winlogbeat-log-format-to-Splunk-TA-Windows/m-p/669363 ? Thank you again. Ciao. Giuseppe
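To illustrate the change described above, a minimal transforms.conf sketch; the stanza name comes from the post, but the regex and comments here are hypothetical, not the actual solution from the thread:

# transforms.conf
[drop_dead_all]
# Before: REGEX = . sent every original event to the nullQueue,
# so new flows were discarded before they could be cloned.
# After: match only the events that were selected for cloning.
REGEX = <same pattern used by the cloning stanza>
DEST_KEY = queue
FORMAT = nullQueue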
Hi. In fact my problem is not really a problem; it is the normal behavior of Splunk. Before, I had a single search head and all the .conf files were in the local directory to override the default settings. When I migrated the search head into a search head cluster, I kept this principle. However, Splunk's philosophy and best practice is that the deployer should deploy files that are not meant to be changed locally on the search heads, and these files must therefore be put in the default directory. To resolve my problem I had to move the files from the local directory to the default directory and then run the apply shcluster-bundle command. Now it works as expected.
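As a minimal sketch of that procedure (the app name and member URI are placeholders, not from the original post):

# On the deployer: move the overriding .conf files from local to default
mv $SPLUNK_HOME/etc/shcluster/apps/<my_app>/local/*.conf \
   $SPLUNK_HOME/etc/shcluster/apps/<my_app>/default/
# Then push the updated bundle to the search head cluster members
splunk apply shcluster-bundle -target https://<sh_member>:8089 -auth admin:<password>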
Hi! We have installed extensions (23.10.0, compatible with 4.4.1.0) for our Azure Function Apps (.NET Core 6.0.24) to collect data and monitor them. The functions are collecting data and they show up as tiers in the Application Flow Map.

The problem is that we cannot see the outbound connections they are making, and we would like to see them (arrows pointing to other services that are not monitored via AppDynamics agents). The list of remote services is empty as well. Automatic discovery is turned on in Backend Detection for each type, so it looks fine from the controller's end.

We still have some old parts of the infrastructure monitored under a different AppDynamics application, and there we can see an Azure App Service (JVM OpenJDK 64-Bit Server VM 11.0.8) with an installed AppAgent (Server Agent 21.11.2.33305) that shows outbound connections without any extra settings in the controller (I have compared the settings for both AppDynamics applications). What is more interesting, the Azure Function App from the new AppDynamics application and the Azure App Service from the old AppDynamics application are both making HTTP calls to the same Azure App Service, yet we can see those calls on the flow map only for the old application.

Are there some extra app agent settings/properties that should be set up for the outbound calls and a more precise flow map to show up? Or what could be the reason for the remote services/outbound connections not showing up (a flow map with arrows pointing to other services that also appear in Remote Services in AppDynamics)? Please note that I have looked through the Backend Detection/Remote Services and Flow Map documentation as well as the available topics on the Cisco AppDynamics Community, and I haven't found a solution to my problem. Best Regards
No. Macros are expanded at search time, not while the results are being processed, so a macro as the email action's recipients won't work. You can use result-based tokens instead: https://docs.splunk.com/Documentation/Splunk/9.1.2/Alert/EmailNotificationTokens EDIT: OK, so you can use a macro or a lookup to generate a recipient field in the search results, and then use that field as a token in the alert's email settings.
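For example, a minimal sketch, assuming a lookup file alert_recipients.csv with fields app and recipient (these names are hypothetical, not from the original thread):

index=app_logs log_level=ERROR
| stats count by app
| lookup alert_recipients.csv app OUTPUT recipient

Then set the email action's "To" field to the token $result.recipient$, which is filled from the first row of the search results when the alert fires.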
Dears,
We have two applications, Cortex and IST, which are related to credit card processing and management and are provided by the vendor FIS.
These two apps consist of two parts (Java and C). We successfully monitored the Java part, but we aren't able to monitor the C part, as the source code doesn't exist and is not provided by the FIS vendor.
Is there anyone who has succeeded in monitoring them, or has any idea about how to do that?
Technically, data from a UF does _not_ come to Splunk Cloud via HEC but is sent with the S2S protocol embedded in HTTP requests. But it does connect with the HTTP protocol and needs an authentication token; otherwise anyone could send data to your environment.
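For illustration, on the UF side this HTTP-based output is configured in outputs.conf along these lines (the URI and token values are placeholders):

# outputs.conf on the universal forwarder
[httpout]
httpEventCollectorToken = <token issued for your Splunk Cloud environment>
uri = https://<your-stack>.splunkcloud.com:443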