Query: index=abc mal_code=xyz TERM(application) OR (TERM(status) TERM(success)) NOT (TERM(unauthorized) TERM(time) TERM(mostly)) site=SOC
|stats count by Srock
|stats sum(count) as Success
|appendcols
[search index=abc mal_code=xyz (TERM(unauthorized) TERM(time) TERM(mostly)) NOT (TERM(status) TERM(success)) site=SOC
|stats count by ID
|fields ID
|eval matchfield=ID
|join matchfield [search index=abc mal_code=xyz site=SOC "application"
|stats count by Srock
|fields Srock
|eval matchfield=Srock]
|stats count(matchfield) as Failed]
|eval Total=Success+Failed
|eval SuccessRate=round(Success/Total*100,2)
|table *

From the above query I am getting data only for one site, but I want data for both sites, SOC and BDC. I tried site=* but it's not working. Any help would be appreciated.
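One way to get both sites, sketched under the assumption that the base search is otherwise correct: search both sites at once and carry site through every stats (and the appendcols subsearches) so each site's Success and Failed counts stay separate. The field names Srock and site come from the query above; the rest is illustrative, not a tested fix.

index=abc mal_code=xyz TERM(application) OR (TERM(status) TERM(success)) NOT (TERM(unauthorized) TERM(time) TERM(mostly)) (site=SOC OR site=BDC)
| stats count by site Srock
| stats sum(count) as Success by site

Note that appendcols pastes columns side by side without aligning rows by any key, so once there are two sites, joining the subsearch results on site (or computing Success and Failed in a single search with eval-based conditional counting) is usually safer than appendcols.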
The log line will be: 05:02:05.213 Txt 46000 008a456b37de5982_ETC_RFG: (Q056) play this message id:announcement/4637825, duration:58 and I am expecting a table like: 008a456b37de5982 ETC RFG 4637825
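A rex sketch that pulls those three pieces out of that line; the capture names ID, SYS, GRP, and MSGID are my own placeholders, not field names from the original post:

| makeresults
| eval _raw="05:02:05.213 Txt 46000 008a456b37de5982_ETC_RFG: (Q056) play this message id:announcement/4637825, duration:58"
| rex "(?<ID>[0-9a-f]+)_(?<SYS>[A-Z]+)_(?<GRP>[A-Z]+):.*announcement\/(?<MSGID>\d+)"
| table ID SYS GRP MSGID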
Thanks Dorowo, based on your comment, after trying to fix this for hours I found the solution:
1. Download the agent from https://accounts.appdynamics.com/downloads
2. Run npm install /my/local/path/appdynamics-nodejs-standalone-linux-x64-v21-24.1.0.9734.tgz
Please elaborate on "it doesn't work". What results are you expecting and what do you get? What is that screenshot intended to show? I see the name for the second app was mis-entered in app.conf and that both of the first two apps should have check_for_updates=false. It's not clear how the screenshot demonstrates anything not working.
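For reference, the setting mentioned above goes in each custom app's app.conf. A minimal sketch (I believe the key belongs in the [package] stanza, but verify against the app.conf spec for your Splunk version):

[package]
check_for_updates = false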
I suspect multiple LMs will cause issues. There's no real need for more than a single LM. If the LM goes away, the clients will continue to function normally for a few days - which should be more than enough time to stand up a new LM.
I can confirm that the checkpoint data is stored in the KV Store on the forwarder. The checkpoint is the last timestamp retrieved from the Azure REST API. So if you use a new forwarder, the data will be ingested again (duplicate data).
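If a forwarder swap does cause re-ingestion, one search-time mitigation is to suppress exact duplicates until the overlap ages out. A sketch only; the index and sourcetype here are placeholders for whatever your Azure input writes to:

index=azure sourcetype=azure:*
| dedup _raw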
The field name "search" is given special treatment when returned in a subsearch in that the field name is not returned, so instead of the subsearch being ((request_id="valueA") OR (request_id="valueB")), it becomes (("valueA") OR ("valueB")). The same goes for field name "query".
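That special treatment can also be used deliberately: rename a field to search inside the subsearch and its bare values come back as raw-term filters against the outer search. A minimal sketch with placeholder index and field names:

index=web [ search index=app status=failed
| dedup request_id
| rename request_id as search
| fields search ]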
And in the end I want it to display the days out of the 12 months the users logged in. So if a user logged in 4 times in one day it should count it as 1 day.

If you are aggregating the number of days over 12 months, why do you use timechart? That splits the output into individual days the user logged on, so the count is the number of times the user logged on each day, i.e., 4 times. This is the aggregate:

index=windows source="WinEventLog:Security" EventCode=4624 host IN (Server1, Server2) Logon_Type IN (10, 7)
| bucket _time span=1d@d
| eval Account_Name = mvindex(Account_Name,1)
| stats dc(_time) as count by Account_Name
I wasn't sure if having multiple different license managers would cause any violations. Ideally, we do not like the idea of having a single point of failure for our license manager and are looking to implement redundancy. Is this possible, or will it cause issues?
It doesn't work for me. I am using the official Mimecast for Splunk app, and I created two custom apps called Mimecast for LiveSOC and Mimecast for, but neither works. What name do you recommend I use for that?
How many hosts are typically returned? If there are not many, you can just use metadata to filter the index search. This would meet your original requirement. (Note the leading pipe: metadata is a generating command, so it needs one inside a subsearch.)

[| metadata type=hosts
| where recentTime < now() - 10800
| stats values(host) as host]
| dedup host

If there are too many, performance can be a concern. (You can also add other filters in addition to | metadata.) As @PickleRick noted, you probably don't want to send raw events, especially not a lot of them, in e-mail. In theory, you SHOULD have this "somenumber-somename-ks-srx" extracted into a field if it means something. Haven't you? Assuming the field name is somefield:

[| metadata type=hosts
| where recentTime < now() - 10800
| stats values(host) as host]
| dedup host
| table host _time somefield
Can you share your SPL and data? This example works:

| makeresults
| eval line="SOMEALPHA9876NUMERIC_ETC_RFG: play this message: announcement/12345678"
| rex field=line "(?<ID>\w+)_ETC_RFG:.*/(?<NUM>\d+)"
@EPitch I don't believe there is a break-on-condition function to abort the search, but what you could do is turn on sampling at an appropriately large ratio so you run the search on a subset of the data. This will be quicker: if you get >10 then you don't need to re-run, but if you get <10, you will need to re-run at a lower sampling ratio. I'm not sure this solves the problem, in that if you don't expect or want >10 then you will always end up running the search with a 1:1 ratio.

The other alternative is to craft your search criteria to use the TERM() directive if possible, and if these data fields can be reduced to TERM elements then you can even use tstats. See this .conf presentation: https://conf.splunk.com/files/2020/slides/PLA1089C.pdf

So maybe you can do

index=blah sourcetype=blah (TERM(name=Name1) TERM(ip=IP1) TERM(id=id1)) OR...

but you will have to know your data well to know if the terms exist as real terms in the data, and you need to understand major and minor breakers in the data. If all the search criteria can be converted to TERM then you could do

| tstats count where index=blah sourcetype=blah (TERM(name=Name1) TERM(ip=IP1) TERM(id=id1)) OR... by PREFIX(name=) PREFIX(ip=) PREFIX(id=)
| rename *= as *
We are experiencing the same thing. The clients are showing up in the client_events logs checking in and phoning home on the deployment server. But after updating to 9.2 they aren't appearing under the Settings>Forwarder Management page on the DS. We have not made any changes to the forwarders yet.
Hi @Fernando.Moreira,
Thanks for asking your question on the Community. I found this AppD Docs page that might be helpful. https://docs.appdynamics.com/sap/en/set-up-sap-netweaver-systems/set-up-sap-abap-agent
Hi @Jahnavi.Vangari,
Thanks for following up. Since the Community did not jump in and help, at this point it might be best to contact Support.
See the FAQ: How do I submit a Support ticket?