Hi team, I have configured my OTel Collector to send trace data from the adservice (OTel demo service) to AppDynamics over a proxy. My problem is that AppDynamics doesn't show any ingested data in the OTel section ("No Data available"). The collector logs show no errors. This is my collector config:

config: |
  receivers:
    otlp:
      protocols:
        grpc:
        http:
  processors:
    resource:
      attributes:
        - key: appdynamics.controller.account
          action: upsert
          value: "company"
        - key: appdynamics.controller.host
          action: upsert
          value: "company.saas.appdynamics.com"
        - key: appdynamics.controller.port
          action: upsert
          value: 443
    batch:
      send_batch_size: 90
      timeout: 30s
  exporters:
    otlphttp:
      endpoint: "https://some-agent-api.saas.appdynamics.com"
      headers: {"x-api-key": "<some-api-key>"}
    logging:
      verbosity: detailed
      sampling_initial: 10
      sampling_thereafter: 5
  extensions:
    zpages:
  service:
    telemetry:
      logs:
        level: debug
    extensions: [zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [resource, batch]
        exporters: [logging, otlphttp]
env:
  - name: HTTPS_PROXY
    value: proxy.company.com:8080
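Not from the original post, just the two things I would check first in a setup like this: the otlphttp exporter appends the signal path (e.g. /v1/traces) to the endpoint itself, so the endpoint should stay path-free as above, and proxy values are less ambiguous with an explicit scheme. A sketch of the env block, reusing the same hypothetical proxy host:

env:
  - name: HTTPS_PROXY
    value: "http://proxy.company.com:8080"   # explicit scheme avoids ambiguity
  - name: NO_PROXY
    value: "localhost,127.0.0.1"             # keep local traffic off the proxy

If data still doesn't arrive, the zpages extension already configured here (http://localhost:55679/debug/tracez by default) shows whether spans are reaching the collector at all.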
Good day. What screen do users see when they click the link to a poll and attempt to reply after the poll has already reached its maximum number of responses (for example, the 100-reply limit)? Will they still be able to see the results chart showing how everyone else answered the questions? That is the outcome I am hoping for. Many thanks.
Hi, how are you? Thank you for the community! I have tried to search logs using the API as described in Creating searches using the REST API - Splunk Documentation. It seems complex but possible; in my experience, though, I have not managed to get it working so far. How do I search in Splunk using the API? Here is what I found: https://community.splunk.com/t5/Building-for-the-Splunk-Platform/How-to-collect-debug-logs-for-apps-on-Splunk-Cloud/m-p/586144 . Kind regards, Tiago
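For what it's worth, a minimal sketch of the documented two-step flow (create a search job, then fetch its results); host, port, and credentials are placeholders:

# 1. Create a search job; the <sid> comes back in the response
curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
     -d search="search index=_internal | head 5"

# 2. Once the job is done, fetch the results (substitute the sid from step 1)
curl -k -u admin:changeme \
     "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"

Note that the search string must begin with the search command (or a leading pipe), or job creation fails.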
How do I make a Splunk REST API sid remain unchanged across runs?
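If the goal is a predictable sid, job creation accepts an optional id parameter, so repeated runs can reuse the same identifier; a sketch with placeholder credentials (as far as I know, an existing job with that id must have finished or been deleted first):

curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
     -d id=my_fixed_sid \
     -d search="search index=_internal | head 5"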
Hello comrades! I just wonder: does Splunk detect log similarity by pattern? Many thanks.
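Not at index time, as far as I know, but at search time the cluster command groups events by textual similarity (this is also what backs the Patterns tab in the search UI). A minimal example against a placeholder index:

index=_internal | cluster showcount=true t=0.8 | table cluster_count, _raw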
I've tried to enable boot-start on *nix and Windows, but after the machine reboots, the Splunk Forwarder still does not start automatically. Does anyone have a solution for this?
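In case a baseline helps, these are the usual steps, run with admin rights; the install path, service-account name, and service name are assumptions that match default installs:

# *nix: register the forwarder with init/systemd (run from the forwarder install)
$SPLUNK_HOME/bin/splunk enable boot-start -user splunkfwd -systemd-managed 1

# Windows: the forwarder is a service, so set it to start automatically
sc config SplunkForwarder start= auto

On systemd hosts it is also worth confirming the generated unit is enabled (systemctl is-enabled SplunkForwarder).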
Hello, How do I add a dropdown or a text input at an arbitrary location in Dashboard Studio? I tried to place one inside the rectangle in the middle of my dashboard, but it stayed at the top of the dashboard below the title. I tried moving the "inputs" section in the JSON source code, but that didn't seem to work. Also, whenever I made changes in the source code, I wasn't able to revert them easily the way I could in a classic dashboard. Please advise. Thank you.
Hi, I am new to Splunk and am looking for a search that can identify duplicate field values. We have an issue in Tenable where assets have duplicate asset IDs. My initial search is: index=tenable sourcetype=tenable:io:assets | stats count by hostnames, agent_uuid This lists hostnames with their unique IDs in a table, but I need to show only hostnames that share the same agent_uuid. I don't know whether I need to export this to a lookup table and compare the agent_uuid values from there to show just the duplicates, but I was hoping for a more straightforward search. Thank you.
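A sketch that keeps only agent_uuid values shared by more than one hostname, using the field names from the post:

index=tenable sourcetype=tenable:io:assets
| stats dc(hostnames) AS host_count values(hostnames) AS hostnames BY agent_uuid
| where host_count > 1

If instead the same hostname can appear under several agent_uuid values, swap the roles of the two fields.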
Hi all, I am trying to get Azure AD B2C to work as a SAML provider for Splunk. Has anyone managed to get this to work? Please advise; I have followed all the available online resources but nothing is working.
The Splunk DLTK 5.1.0 documentation states the following: "No indexer distribution. Data is processed on the search head and sent to the container environment. Data cannot be processed in a distributed manner, such as streaming data in parallel from indexers to one or many containers. However, all advantages of search in a distributed Splunk platform deployment still exist." Does this imply that data from Splunk is not distributed (e.g., via data parallelism) among multiple containers in the Kubernetes execution environment during the training or inference phase? Further, is the distribution only vertical (multi-CPU or multi-GPU within a single container), or can jobs also scale horizontally (multiple containers, each working on a partition of the data)? Finally, to execute TensorFlow, PyTorch, Spark, or Dask jobs, do the required operators/services need to be pre-installed (the Spark K8s operator, for example) before submitting jobs from the Splunk Jupyter notebook, or are these services set up during DLTK app installation and configuration in Splunk? I would appreciate any input on the above. Thanks in advance!
Identify load and performance anomalies across all nodes in a tier with the Tier Metric Correlator. Use discoveries to create alerts that trigger events in your ITSM platform.

CONTENTS | Introduction | Video | Resources | About the presenter

Video length: 2 min 49 sec

See how to simplify troubleshooting the seemingly random and uncorrelated issues that arise in complex app landscapes. Using the AppDynamics Tier Metric Correlator to link unknown unknowns to business context, you can quickly and with minimal effort find both the problem and its impact.

NOTE | If this feature does not appear in your Controller instance, you can request that it be enabled by AppDynamics Support. For on-premises environments, the AppDynamics Tier Metric Correlator must be turned on at the Controller level.

Additional Resources

Learn more about the Tier Metric Correlator in the documentation:
AppDynamics SaaS 23.x | Tier Metric Correlator
AppDynamics APM Platform 23.x | Tier Metric Correlator
AppDynamics On-premise | Tier Metric Correlator

About the presenter: Scott C. Young

Scott Young joined AppDynamics as a Sales Engineer in 2015 and has supported the pre-sales organization as an SE and SE Manager. He now leads the AppDynamics WW Field Architecture Organization. This team includes a group of highly skilled and experienced Observability architects whose goals are "To be AWESOME at what we do, so we can LOVE what we do" while helping the SE and Sales teams architect solutions to some of the most complex Observability challenges.
Hi @All, I want to extract the correlation_id from the payload below; can anyone help me write a rex command?

{"message_type": "INFO",
 "processing_stage": "Deleted message from queue",
 "message": "Deleted message from queue",
 "correlation_id": "['321e2253-443a-41f1-8af3-81dbdb8bcc77']",
 "error": "",
 "invoker_agent": "arn:aws:sqs:eu-central-1:981503094308:prd-ccm-incontact-ingestor-queue-v1",
 "invoked_component": "prd-ccm-incontact-ingestor-v1",
 "request_payload": "",
 "response_details": "{'ResponseMetadata': {'RequestId': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'x-amzn-trace-id': 'Root=1-652700cc-f7ed3cf574ce28da63f6625d;Parent=865f4dad6eddf3c1;Sampled=1', 'date': 'Wed, 11 Oct 2023 20:08:51 GMT', 'content-type': 'text/xml', 'content-length': '215', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}",
 "invocation_timestamp": "2023-10-11T20:08:51Z",
 "response_timestamp": "2023-10-11T20:08:51Z",
 "original_source_app": "YMKT",
 "target_idp_application": "",
 "retry_attempt": "1",
 "custom_attributes": {"entity-internal-id": "", "root-entity-id": "", "campaign-id": "", "campaign-name": "", "marketing-area": "", "lead-id": "", "record_count": "1", "country": ["India"]}}
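A sketch that targets the bracketed, single-quoted value exactly as it appears in this sample (adjust if the quoting varies across events):

... | rex field=_raw "\"correlation_id\":\s*\"\['(?<correlation_id>[^']+)'\]\""

Since the event is JSON, running spath first and then a simpler rex on the extracted correlation_id field would also work.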
Hello, how do I put comments in the Splunk Dashboard Studio source? In a classic Splunk dashboard I can add comments to the source using <!-- comment -->. In the new Splunk Dashboard Studio, I tried adding a comment using /* comment */, but I got the error "Comments are not permitted in JSON." Comments only work in the data configuration query editor. Thank you so much
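That error is expected: the JSON format itself has no comment syntax. Two workarounds, neither an official commenting feature: keep notes in the dashboard's (or a panel's) description string, and use SPL's backtick comments inside queries, which survive because they live inside a JSON string value:

{
  "title": "My dashboard",
  "description": "NOTE: thresholds below mirror the Q3 SLA doc"
}

index=web | stats count by status ```comment: group by HTTP status```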
On a column chart, is it possible to hide/unhide legend values by clicking on them? For example, if I click www3 in the legend, this action will hide www3 and I'll see only www1 and www2 on the chart.
I have been tasked with cleaning up the catchall directory in the syslog directory of our Heavy Forwarders. The path is /var/syslog/catchall/. I plan on grouping servers/directories based on the kind of logs being received. I just wanted to ask what kind of logs are usually expected to end up in this directory?
I am creating a continuous error alert in Splunk. I have been working on constructing a search query to group different error types. I have made several attempts and explored multiple approaches; however, I have had trouble effectively grouping the error types within the query. Can anybody help me with this?
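Hard to be specific without sample events, but the usual shape is to extract an error-type field (the index and rex pattern below are placeholders) and then aggregate on it:

index=app_logs log_level=ERROR
| rex "(?<error_type>\w+(?:Exception|Error))"
| stats count BY error_type
| where count > 10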
I have a standalone Splunk Enterprise (not Splunk Cloud) set up to work with some log data that is stored in an AWS S3 bucket. The log data is in TSV format, each file has a header row at the top with the field names, and each file is gzipped. I have the AWS TA installed (https://splunkbase.splunk.com/app/1876). I followed the instructions in the documentation (Introduction to the Splunk Add-on for Amazon Web Services - Splunk Documentation) for setting up a Generic S3 input, but no fields are being extracted and the timestamps are not being recognized. The data does ingest, but it is all just raw rows from the TSVs, and the header row is being indexed as an event as well. The timestamps in Splunk are just _indextime even though there is a column called "timestamp" in the data. Does anyone have suggestions on how I can get Splunk to recognize the timestamps and show the field names that appear in the header row?
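A props.conf sketch for this shape of data; the sourcetype name and the TIME_FORMAT are assumptions, and since INDEXED_EXTRACTIONS is applied where the file is first parsed, it must be deployed on the instance running the S3 input:

[my_tsv_logs]
INDEXED_EXTRACTIONS = tsv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

With structured extraction enabled, the header row is consumed as field names rather than indexed as an event.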
How do I get the exception from the tables below? The exception is John, who is not in the HR table.

User list from the servers:

Name    ID
Bill    23
Peter   24
John    25

HR Table:

Name    ID
Bill    23
Peter   24
Anita   27
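One way, assuming the server list is indexed and the HR table is available as a lookup file (the index, sourcetype, and hr_table.csv name are placeholders):

index=servers sourcetype=server_users
| search NOT [| inputlookup hr_table.csv | fields Name]
| table Name, ID

The subsearch returns the HR names as an OR-ed filter, so anyone absent from the lookup (John here) survives the NOT; field-value matching in search is case-insensitive, so "john" vs "John" is not a problem.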
Hello all, I installed a Splunk add-on on my heavy forwarder just to test it first, and it worked fine. After that I copied it (the entire directory) to the deployment server and pushed it to the heavy forwarder because, you know, I want to manage everything from the deployment server (trying to be organized). The issue is that from the heavy forwarder GUI, when I click on the app icon it doesn't load: it gives me "500 Internal Server Error" (with the picture of the confused horse) and I get these error messages in the internal logs: "ERROR ExecProcessor [2341192 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/myapp_hf/bin/app.py" HTTP 404 Not Found -- Action forbidden." I forgot to mention that I changed the name of the original app in app.conf. I can't figure out why it is not working. Thanks for your help, Kaboom1
Hi, I'm trying to use the REST API to get and post saved searches that are alerts, but for some reason it only returns data for reports. Has anyone else had this problem? GET https://<host>:<mPort>/services/saved/searches https://<host>:<mPort>/services/saved/searches/{name}
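A frequent cause is the namespace: /services/saved/searches only lists objects visible in the requesting user's default user/app context, so alerts saved in another app (or privately by another owner) get filtered out. The servicesNS form with wildcards widens the scope; credentials here are placeholders:

curl -k -u admin:changeme \
     "https://<host>:<mPort>/servicesNS/-/-/saved/searches?count=0&output_mode=json"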