All Posts



Hi Viji, Why don't you upload these dSYM files automatically as part of your build? For iOS: https://docs.appdynamics.com/appd/22.x/latest/en/end-user-monitoring/mobile-real-user-monitoring/instrument-ios-applications/upload-the-dsym-file#id-.UploadthedSYMFilev22.5-enable-dsym For Android: https://docs.appdynamics.com/appd/22.x/latest/en/end-user-monitoring/mobile-real-user-monitoring/instrument-android-applications/customize-the-android-build/automatically-upload-mapping-files Thanks, Cansel
Hi Ashok, If you have Analytics, you can use ADQL to create a Top 10 wait states widget. Thanks, Cansel
Splunk does not recommend any specific load balancer. It specifies only that the load balancer use layer-7 (application-level) processing and that user sessions be "sticky" or "persistent."
Hi Jian, Is your controller SaaS or on-premises? Thanks, Cansel
Hi Dietrich, This log pattern is a very generic one. Unfortunately, there are many possible causes, but they are mostly related to the controller side. Can you please run the following grep commands against the server files hosted in the /appdynamics/controller/server folder:

grep -i "Buffer Overflow" server* | wc -l
grep -i "dropping event" server* | wc -l
grep -i "Caused by: java.nio.BufferOverflowException" server* | wc -l

Then let's see what the root cause of your problem is. Thanks, Cansel
Hi Shwetha, Can you please give some details about your scenario? What is your aim in using the Java agent? Thanks, Cansel
@badrinath_itrs No, as I am not able to log in to the UI.
Hi, I have deployed a search head cluster with 3 members and one deployer. The Splunk documentation recommends running a third-party hardware or software load balancer in front of the clustered search heads. Does Splunk recommend any particular load balancer as most compatible?
Hi @Praz_123, Do you know if the user is a Splunk local user or an LDAP user?
Edit your lookup file and reload it, or use outputlookup to overwrite/update it.
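As a minimal sketch of the outputlookup approach — the lookup file name and field values below are hypothetical placeholders, not from the original thread — you could append a missing row like this:

```spl
| makeresults
| eval host_ip="10.1.2.3", company="ExampleCo", score=0
| fields - _time
| outputlookup append=true my_lookup.csv
```

With append=true the existing rows are kept and the new row is added; without it, outputlookup overwrites the whole lookup file with the search results.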
Hi @ITWhisperer, How do I do that? Any steps, please? Thanks. Regards, Siva Kumar
| where isnotnull(vuln) OR isnotnull(score) OR isnotnull(company)
It looks like your host IP may not be in your lookup - please add the relevant information to the lookup.
Hi, I am not able to log in to any of the servers (CM, SH, and more). When I enter the username and password, it shows "Login Failed". What could be the reason, and how can I troubleshoot from the backend?
Hello, You can use a direct webhook URL to the Pushcall API, like: https://pushcall.me/api/call?api_key=XXXXXXXX&to=<Phone Just navigate to your account at https://pushcall.me, enter your phone number, and copy the URL. Then configure an alert in Splunk with a webhook action.
Hi all, a customer asked me whether it's possible to show an alias instead of the hostname in the Monitoring Console dashboards. I know it's easy to do this in normal Splunk searches, but is it possible in the Monitoring Console dashboards (e.g. Summary, Overview, or Instances)? Ciao. Giuseppe
Hi @LearningGuy, the solution for your "Expected Result" is the one hinted at by @ITWhisperer. Alternatively, you can get the last table simply by adding (vuln=* OR company=*) to your main search. Ciao. Giuseppe
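For illustration only — the original search isn't shown, so the index, sourcetype, and field list below are hypothetical placeholders — the filter would go directly into the base search:

```spl
index=main sourcetype=vuln_scan (vuln=* OR company=*)
| table host_ip vuln score company
```

Filtering in the base search this way discards non-matching events before any later pipeline stages run, rather than filtering after the fact with `where`.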
Hi @Hami-g, your regex isn't correct; please try this: ^(?:[^:\n]*:){8}\d+\s+bytes\s(?P<BYTES>\w+\s+) You can test it at https://regex101.com/r/BGPGr9/1 Ciao. Giuseppe
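In a Splunk search, that pattern would typically be applied with the rex command to extract the named capture group as a field. A minimal sketch, assuming hypothetical index and sourcetype names:

```spl
index=main sourcetype=netstats
| rex "^(?:[^:\n]*:){8}\d+\s+bytes\s(?P<BYTES>\w+)"
| table _raw BYTES
```

Note: this sketch drops the trailing \s+ from inside the capture group so the extracted BYTES value has no trailing whitespace; keep it if you specifically need the whitespace captured.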
> I don't want to send data to remote storage and then bring it back onto the indexer for archiving locally.

And:

> When a bucket rolls from warm to frozen, cache manager will download the warm bucket from the indexes prefix within the S3 bucket to one of the indexers, splunk will then take path to the bucket and pass it to the cold to frozen script for archiving which places the archive in the S3 bucket under archives.

Can you elaborate a bit on this? At first you mention you don't want to upload to S3 and then download for archiving locally, but that appears to be how you solved the problem. I see that archives go back to S3, so it's not archiving locally in terms of where archives get stored, but it is archiving locally in terms of where it happens (as in, you still pay S3 egress fees, which I thought was the main reason for coming up with a workaround/solution). Wouldn't it be better to just leave them there?

> When archiving is successful, cache manager will delete the local and remote copies of the warm bucket.

Your data is eventually still on S3, and it would be evicted from cache for good (presumably no one needs to search it, which is why it's being archived), so what's the benefit?

> Smartstore will roll the buckets to frozen by default unless you set frozen time to 0 which will leave all warm buckets in S3. I didn't want that as a long term solution

I wonder why. I like the creative approach, but I'm curious about the non-technical value (cost, special use case, business rules, something else) you get in return for the possibly additional/unnecessary egress fees.
Can I run the AppDynamics PHP agent on an Alpine Docker image?