Hi @Cleber.Penteado,
My apologies, any Self-service free trial started after Feb 2024 no longer converts to Lite. If you want to activate your free license again, please contact Sales by going here: https://www.appdynamics.com/company/contact-us
You can read about the changes we made to Trial and Lite here: https://community.appdynamics.com/t5/Knowledge-Base/AppDynamics-Trial-account-setup-getting-help-and-post-trial/ta-p/53018
Hi All, I have created a lookup table, Status.csv, which contains all the ticket statuses and whether or not each one is SLA relevant. However, because the data used to create the table was incorrect, the values for all the statuses are wrong. I want to update the data for these statuses and add a few more status values to the lookup table. How do I do that? Please suggest.
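A minimal sketch of one way to do this with inputlookup and outputlookup, assuming hypothetical column names Status and SLA_Relevant and hypothetical status values (adjust to the real columns and values in Status.csv):
| inputlookup Status.csv
| eval SLA_Relevant=case(Status="Resolved", "No",
                         Status="In Progress", "Yes",
                         true(), SLA_Relevant)
| append
    [| makeresults
     | eval Status="On Hold", SLA_Relevant="No"
     | fields Status SLA_Relevant]
| outputlookup Status.csv
The eval corrects the existing rows, the append subsearch adds the new status rows, and outputlookup writes the result back over the original file.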
Hi @adrifesa95, are you receiving Splunk internal logs from the HF and UFs in Splunk Cloud? How did you configure the outputs.conf on the HF and on the UFs? Ciao. Giuseppe
We are writing log statements in Java and then reviewing the info and exception alerts. Our team is then running a Splunk search to count log statements by category. Many of our log statements can share multiple categories. We are using this reference for key-value pairs: https://dev.splunk.com/enterprise/docs/developapps/addsupport/logging/loggingbestpractices/ So in our log statements, we are doing LOG.info("CategoryA=true , CategoryG=true"); Of course, we aren't going to write "Category=false" in any logger, since that is implied by the statement. Is this overall a good method to count values in Splunk by category, or do you recommend a better practice?
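For reference, a hedged sketch of one way to count these in Splunk, assuming automatic key=value extraction picks up the Category*=true pairs; the index and sourcetype names here are hypothetical:
index=app_logs sourcetype=java_app
| stats count(eval(CategoryA="true")) AS CategoryA_count
        count(eval(CategoryG="true")) AS CategoryG_count
Because a single event can carry several Category*=true pairs, each category is counted independently, which matches the multi-category logging pattern described above.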
Hi @avii7326, sorry, but I don't understand the purpose of this search: you have the same search in the first part, with results aggregated using stats, so in one row you have three values: Total, Success, and Error. Then in the append search, using the same search, you have many events listed with the table command. And there isn't any correlation between the two parts of the search. What output are you expecting? Ciao. Giuseppe
Hi @Shubham.Kadam,
I hear you have a call this Friday with AppDynamics. Can you share any learnings from that call here as a reply, as they relate to the question you asked?
Hi @sajo.sam,
Did you see the reply from @Rajesh.Ganapavarapu? Can you confirm whether it helped? If it did, click the "Accept as Solution" button; if not, continue the conversation.
Hello,
I have a problem: I can't see the Windows logs in Splunk Cloud.
My architecture is as follows: UF -> HF -> Splunk Cloud.
I get the logs on the HF, because I can see them by doing packet inspection with tcpdump. So port 9997 is open, but the events are not being forwarded to the cloud.
These are my inputs.conf files:
/opt/splunk/etc/apps/Splunk_TA_windows/local/
###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
[WinEventLog://Security]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true
[WinEventLog://System]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml=true
host=WinEventLogForwardHost
index=mx_windows
/opt/splunk/etc/system/local/inputs.conf
[splunktcp://9997]
index=mx_windows
disabled = 0
[WinEventLog://ForwardedEvents]
index=mx_windows
disabled = 0
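For comparison, the HF also needs an outputs.conf that points at Splunk Cloud; in practice this is normally delivered by the Splunk Cloud forwarder credentials app, which also carries the required TLS settings. A minimal sketch with a hypothetical stack hostname:
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997
If no such tcpout group exists on the HF, nothing is forwarded to the cloud, regardless of what arrives on 9997.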
Hi @Amadou, as I said, you have to find the conditions to search (in other words, the words, strings, or field values to search for); then you can use the stats command to find the number of occurrences grouped, e.g., by host and user. For example, on Windows, if you want an alert when failed logins are greater than 5, you could run:
index=wineventlog EventCode=4625
| stats count BY host user
| where count>5
Ciao. Giuseppe
Hi @dataisbeautiful, try adding the eval after the timechart, renaming the aggregated fields so the eval can reference them:
index=indx sourcetype=src (Instrument="a" OR Instrument="b")
| timechart values(a) AS a values(b) AS b span=1s
| eval c = a - b
Ciao. Giuseppe
Hi all, I'd like to plot the difference between two values on a timechart.
Example data:
_time   a    b
t       10   1
t+1s    11   1.5
t+2s    12   2
Expected resulting data:
_time   a    b    c
t       10   1    9
t+1s    11   1.5  9.5
t+2s    12   2    10
I'm using the query:
index=indx sourcetype=src (Instrument="a" OR Instrument="b")
| eval c = a - b
| timechart values(a) values(b) values(c) span=1s
Any ideas where I'm going wrong?
I'm using the global time in a dashboard search as suggested above:
"queryParameters": {
    "earliest": "$global_time.earliest$",
    "latest": "$global_time.latest$"
}
It works fine if the user selects presets or relative time, but if the user picks a date range, I get an error like this:
Any ideas on how to avoid this date format issue?
How should I refine this query so that I can get all the fields in one table without using join, append, or any other subsearch?
(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| stats count(eval(match(_raw, "Sending POST consents to *"))) as Total,
count(eval(match(_raw, "Create / Update Consents done"))) as Success,
count(eval(match(_raw, "Error in sync-consent-dataFlow:*"))) as Error
| eval ErrorRate = round((Error / Total) * 100, 2)
| table Total, Success, Error, ErrorRate
| append
[ search (index=whcrm OR index=whcrm_int) (sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*"))
| rex field=message ": (?<json>\{[\w\W]*\})$"
| rename properties.correlationId as correlationId
| rename properties.gcid as GCID
| rename properties.gcid as errorcode
| rename properties.entity as entity
| rename properties.country as country
| rename properties.targetSystem as target_system
| table correlationId GCID errorcode entity country target_system
]
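A hedged sketch of one way to avoid the append: keep the per-event fields and compute the totals in the same pass with eventstats. The field names are taken from the query above (the duplicated properties.gcid rename is left out); adjust as needed:
(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| rex field=message ": (?<json>\{[\w\W]*\})$"
| rename properties.correlationId as correlationId, properties.gcid as GCID, properties.entity as entity, properties.country as country, properties.targetSystem as target_system
| eventstats count(eval(match(_raw, "Sending POST consents to"))) as Total,
             count(eval(match(_raw, "Create / Update Consents done"))) as Success,
             count(eval(match(_raw, "Error in sync-consent-dataFlow:"))) as Error
| eval ErrorRate = round((Error / Total) * 100, 2)
| table correlationId GCID entity country target_system Total Success Error ErrorRate
Every row then carries the per-event fields plus the same Total, Success, Error, and ErrorRate values, without a second search.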
I am trying to forward data from a UF to a few indexers, but the indexers have dynamic IPs which keep changing. How does the UF know where to forward the data, and how can I tackle this problem? Also, can someone explain what SmartStore is and how it works?
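Two hedged sketches of how outputs.conf on the UF could handle this; all hostnames and the key are hypothetical. Option 1 references the indexers by DNS name, so IP changes are picked up when the forwarder reconnects; option 2 (only if the indexers are clustered) uses indexer discovery, where the UF asks the cluster manager for the current peer list:
# Option 1: DNS names instead of IPs
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# Option 2: indexer discovery via the cluster manager
[indexer_discovery:cluster1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <your_key>

[tcpout:discovered_indexers]
indexerDiscovery = cluster1
For option 2, defaultGroup would point at discovered_indexers instead of primary_indexers.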
Hi, it seems that when you are using output_mode=json those f=xyz filters don't work. Instead of those you must use jq, as @deepakc already proposed.
curl -ksu $UP 'https://localhost:8089/servicesNS/-/-/admin/macros?count=4&output_mode=json' | jq '.entry[].name'
"3cx_supply_chain_attack_network_indicators_filter"
"7zip_commandline_to_smb_share_path_filter"
"abnormally_high_aws_instances_launched_by_user___mltk_filter"
"abnormally_high_aws_instances_launched_by_user_filter" You could/should leave comment on doc page where output_mode has defined and add information that if you are using json mode then f=xyz doesn't work. Doc team is really helpful to update that kind of notes into real documentation. r. Ismo
Here is what I found. When using that connection type, we needed to add the option authenticationScheme=NTLM (which enables NTLMv2 authentication); then, in our environment, we made sure SSL was enabled (encrypt=true) and added the option trustServerCertificate=true. After that, the connection could be saved and worked fine.
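For illustration only, a hypothetical JDBC URL combining those options, assuming the Microsoft SQL Server JDBC driver (host, port, and database name are placeholders; any additional NTLM-related properties your driver requires are configured as usual):
jdbc:sqlserver://dbserver.example.com:1433;databaseName=mydb;authenticationScheme=NTLM;encrypt=true;trustServerCertificate=true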