Hey all,
Tech stack: Next.js 13 (pages router)
I've been following the guide https://docs.appdynamics.com/display/GOVAPM234/Add+Custom+User+Data+to+a+Page+Browser+Snapshot to set custom attributes.
On the initial page I load the AppDynamics script provided below
window['adrum-start-time'] = new Date().getTime()
;((config) => {
  config.appKey = 'XXX'
  config.adrumExtUrlHttp = 'http://cdn.appdynamics.com'
  config.adrumExtUrlHttps = 'https://cdn.appdynamics.com'
  config.beaconUrlHttp = 'http://syd-col.eum-appdynamics.com'
  config.beaconUrlHttps = 'https://syd-col.eum-appdynamics.com'
  config.useHTTPSAlways = true
  config.xd = { enable: true }
  config.resTiming = { bufSize: 200, clearResTimingOnBeaconSend: true }
  config.maxUrlLength = 512
  config.userEventInfo = {
    PageView: getAppDynamicsUserInfo(),
    VPageView: getAppDynamicsUserInfo(),
  }
})(window['adrum-config'] || (window['adrum-config'] = {}))
getAppDynamicsUserInfo is a function attached to window; it always returns a sessionId attribute and, when available, an additional customerId attribute.
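The original function isn't shown in the post, so this is only a hypothetical sketch of what it might look like, assuming the session id is always known at load time and the customer id is only set later (the sessionId/customerId variable names and placeholder values are illustrative):

```javascript
// Hypothetical sketch of getAppDynamicsUserInfo -- the real implementation
// is not shown in the post.
var sessionId = 'abc-123';   // placeholder: assumed always available
var customerId = null;       // placeholder: assumed set later, e.g. after login

function getAppDynamicsUserInfo() {
  var info = { sessionId: sessionId };
  if (customerId) {
    // Attach customerId only once it is known.
    info.customerId = customerId;
  }
  return info;
}
```

One thing worth noting: in the config snippet above the function is invoked immediately (getAppDynamicsUserInfo()), so the object assigned to userEventInfo is frozen at its initial-page-load values, which could be why a customerId set on a later page never shows up.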
On the initial page load, the sessionId is sent and is visible in the AppDynamics Analyze view. When I reach the page where the customerId becomes available, it is not sent to AppDynamics.
If I inspect window["adrum-config"] or use ADRUM.conf.userConf, I can see both sessionId and customerId.
In the script above I've tried setting only PageView and, separately, only VPageView.
In terms of methods of loading the above script, I've used the Next.js Script component and tried the following:
Load the above as an external script file on different pages (different react components)
Load the above in different versions of the same script file (different names) on different pages
Added the above script into a React component and loaded the component on different pages
I've also tried using the AJAX method to intercept HTTP calls. It intercepts the HTTP call but does not result in the user data being sent to AppDynamics.
In addition to setting it via config.userEventInfo as above, I've tried the following variants as well.
(function (info) {
  info.PageView = getAppDynamicsUserInfo
  info.VPageView = getAppDynamicsUserInfo
})(config.userEventInfo || (config.userEventInfo = {}))

(function (info) {
  info.PageView = getAppDynamicsUserInfo()
  info.VPageView = getAppDynamicsUserInfo()
})(config.userEventInfo || (config.userEventInfo = {}))
Any help is appreciated, thank you
Forcing a TTY when launching splunkforwarder was a bad workaround. Why does it have the isatty restriction in the first place? Can't we manage the forwarder with another process manager like supervisor, which wouldn't have a TTY environment?
It is ridiculous to require a terminal TTY to start splunkforwarder. If that's the case, how could the splunkforwarder process be controlled by a process manager like supervisor? tty: true is a workaround, not a good solution.
The above values didn't work for me for some reason. When I used, for example, SYSTEM,(?:[^,]*,){18}([^,]*) it worked (the same regex as yours, but with a different suffix after the "}").
If it's a single line per event then the standard line breaker ([\r\n]+) should work fine. What settings are you using now and how are they failing you? This Splunk Lantern article may shed some light on the process: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Configuring_new_source_types
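For reference, a minimal props.conf stanza for a single-line-per-event sourcetype might look like the sketch below; the sourcetype name is a placeholder, and timestamp settings would still need to be added for your data:

```
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

With SHOULD_LINEMERGE disabled, each newline-delimited chunk becomes its own event, which is usually what you want for single-line logs.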
Hi Paul, That was what I was suspecting: the service account's permission to access ServiceNow. The only problem I have is getting the other team (ServiceNow) to provide info for my troubleshooting, as they are denying that the issue is on their end. Since the service account is an AD account, there should surely be a security group assigned to it. I have actually pointed out that the service account did not have any group assigned to it, so it's possible the account does not have permission to access ServiceNow. We have had similar issues before where some AD users' security groups went missing after an incident. I will go down this path and check the permissions again. Thanks for the advice.
I ran the query below for 02:00 to 03:00 and posted the output, then ran the same query for 03:00 to 04:00 and posted that output. I want a single query that compares the previous hour (02:00 to 03:00 data) with the current hour (03:00 to 04:00 data) and calculates the percentage difference.

| mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application

Output for 02:00 to 03:00:

_time             application  Trans
2022-01-22 02:00  app1         3456.000000
2022-01-22 02:00  app2         5632.000000
2022-01-22 02:00  app3         5643.000000
2022-01-22 02:00  app4         16543.00000

Output for 03:00 to 04:00:

_time             application  Trans
2022-01-22 03:00  app1         8753.000000
2022-01-22 03:00  app2         342.000000
2022-01-22 03:00  app3         87653.000000
2022-01-22 03:00  app4         8912.00000
Below are the sample logs. I am not sure how to write the props line breaker; can anyone help with this?

A0C0A0H8~~AB~ABCg.C~AB~Wed Jan 11 19:11:17 IST 2021~C~0.00~0.00~0.01~Z~1HTYYY
B0C0A0K8~~AB~ABCUHg.C~AB~Mon Jan 10 20:11:17 IST 2021~C~0.00~0.00~0.01~Z~1HTYYY1245
D0C01010~~CD~SDRg.D~HH~Thu Jan 20 11:11:17 IST 2021~C~0.00~0.00~0.01~Z~1140AU
A0C01212~~AB~ABCg.C~AB~Wed Jan 11 19:11:17 IST 2021~C~0.00~0.00~0.01~Z~1HTYYY
For future folk finding this thread: IMDSv2 support was added in 8.1.5, and the config is under the [imds] stanza in server.conf. Set it like this, give the instance a role with access to S3, and SmartStore will Just Work™:

[imds]
imds_version = v2
Do you get any results from the search in the API call? You could start the search simple, like index=yourindex | head 5, and ensure there are 5 results returned. Then build it up step by step until you find the part of the SPL search which is not working.
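As a sketch of what that incremental testing could look like against the Splunk REST export endpoint (the hostname, credentials, and index are placeholders; ad-hoc SPL submitted over REST must begin with the "search" keyword unless it starts with a generating command like "|"):

```javascript
// Hypothetical sketch: sanity-check an SPL query via the Splunk REST API.
// Host and credentials are placeholders -- substitute your own.
const BASE = "https://splunk.example.com:8089";

// Ad-hoc SPL sent to the REST search endpoints must start with the
// "search" keyword, unless it begins with a generating command ("|").
function normalizeSpl(spl) {
  const s = spl.trim();
  return s.startsWith("search") || s.startsWith("|") ? s : "search " + s;
}

async function runSearch(spl) {
  const res = await fetch(BASE + "/services/search/jobs/export?output_mode=json", {
    method: "POST",
    headers: {
      Authorization: "Basic " + Buffer.from("admin:changeme").toString("base64"),
    },
    body: new URLSearchParams({ search: normalizeSpl(spl) }),
  });
  return res.text(); // export streams one JSON object per result line
}
```

Starting with runSearch("index=yourindex | head 5") and growing the SPL one pipe at a time should narrow down which clause drops the results.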
It might be possible to tweak the indexer queue size and change the indexes.conf settings, as per this other community post: https://community.splunk.com/t5/Splunk-Enterprise/The-index-processor-has-paused-data-flow-How-to-optimize/m-p/633597 According to the post it should be fixed in version 9.1, but perhaps something has made the issue resurface.
I've seen this too. Despite these 1Password logs being labeled as "ERROR", they show debug messages that only say the app made a POST request to the 1Password endpoint. This looks like a case of the 1Password app developers mistakenly routing DEBUG messages through the ERROR log function. The messages can be ignored, and the developers can be asked to ensure that DEBUG messages are logged at the DEBUG level.
You could create a lookup definition for that lookup and, under the Advanced options, set the pattern field to match as a wildcard: the Match type must contain "WILDCARD(pattern)". Then you can run a lookup against this lookup definition using the url field and return the "type" field values. type will be a multi-value field, so you can eval mvcount on it to get the count of values:

| makeresults
| eval url = "xxxxabcxxxxx.google.com"
| lookup testcsv pattern as url output type as type
| eval count = mvcount(type)
Can I ask the reason why your custom function must call another custom function? You might be better served by making a playbook with multiple custom functions, or copy-pasting the code from the other custom function into your main custom function.
Does the cloud-hosted environment have any kind of log export functionality? It may be possible to configure it to send logs to the Splunk HTTP Event Collector. Or, instead of pushing the logs, you can pull the logs using an API if it is offered by your cloud-hosted environment.
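If the environment can push logs, the HTTP Event Collector side is a small JSON envelope sent with a "Splunk <token>" Authorization header. A minimal sketch, where the URL, token, and sourcetype are placeholders for your own collector:

```javascript
// Hypothetical sketch: forward one exported log line to Splunk HEC.
// URL and token are placeholders -- substitute your collector's values.
const HEC_URL = "https://splunk.example.com:8088/services/collector/event";
const HEC_TOKEN = "<your-hec-token>";

// HEC expects a JSON envelope with the event payload; metadata such as
// sourcetype, index, and host are optional fields alongside it.
function buildHecPayload(message, sourcetype) {
  return JSON.stringify({ event: message, sourcetype: sourcetype });
}

async function sendToHec(message) {
  const res = await fetch(HEC_URL, {
    method: "POST",
    headers: { Authorization: "Splunk " + HEC_TOKEN },
    body: buildHecPayload(message, "cloud:export"),
  });
  return res.ok;
}
```

The pull-via-API alternative would instead poll the provider's export API on a schedule (for example from a scripted or modular input) and index whatever it returns.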