All Posts


My bad - the LogText column has the keyword (connected or disconnected) along with other text, so it will need some kind of wildcard lookup for either of these two words. I'm looking to extract rows 4 and 5, which have the "disconnected" text and where there isn't an associated connected row within, say, 120 secs.

Row  Time       LogText
1    7:00:00am  text connected text
2    7:30:50am  text disconnected text
3    7:31:30am  text connected text
4    8:00:10am  text disconnected text
5    8:10:30am  text disconnected text
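A sketch of one way to express that in SPL, assuming the event timestamp is _time, that the state can be pulled out of LogText with a simple match, and that all events belong to one device (the index, sourcetype, and field names are placeholders; multiple devices would need a by clause on the streamstats):

index=main sourcetype=gatelogs ("connected" OR "disconnected")
| eval state=if(match(LogText, "disconnected"), "disconnected", "connected")
| sort 0 - _time
| streamstats current=f window=1 last(state) as next_state last(_time) as next_time
| where state="disconnected" AND (isnull(next_time) OR next_state!="connected" OR (next_time - _time) > 120)

With the events sorted newest-first, streamstats current=f window=1 hands each row the chronologically next event, so the where clause keeps disconnected events that are the last event overall, are followed by another disconnect, or are not reconnected within 120 seconds.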
Yes, that's what I did and it's working now - thanks for your advice. Cheers
Hello, I am configuring statsd to send custom metrics from an AWS EC2 instance (on which splunk-otel-collector.service is running) to Splunk Observability Cloud, in order to monitor these custom metrics. I followed the steps in https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver to set up the statsd receiver:

receivers:
  statsd:
    endpoint: "localhost:8125"    # default
    aggregation_interval: 60s     # default
    enable_metric_type: false     # default
    is_monotonic_counter: false   # default
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "timing"
        observer_type: "summary"

I have a problem setting up the service pipeline for this statsd receiver. The GitHub doc shows the following exporter configuration, but I am not sure how this will work:

exporters:
  file:
    path: ./test.json

service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [file]

I also tried setting "receivers: [hostmetrics, otlp, signalfx, statsd]" and "exporters: [signalfx]" in the service section of agent_config.yaml, as shown below. After running "systemctl restart splunk-otel-collector.service", the collector stops sending any metrics to Splunk Observability Cloud; when I remove statsd again (receivers: [hostmetrics, otlp, signalfx]), it resumes sending metrics.

# pwd
/etc/otel/collector
# ls
agent_config.yaml  config.d  fluentd  gateway_config.yaml  splunk-otel-collector.conf  splunk-otel-collector.conf.example  splunk-support-bundle.sh

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]

What is the correct/supported exporter for the statsd receiver? Thanks
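For reference, a minimal sketch of how those pieces might fit together in agent_config.yaml, assuming the signalfx exporter (which the default metrics pipeline already uses) is the intended destination. Note that a receiver referenced in a pipeline must also be declared under the top-level receivers: block; a missing declaration is a common reason the collector fails after adding statsd:

receivers:
  statsd:
    endpoint: "localhost:8125"   # must be declared here, not only in the pipeline

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]      # same exporter the default agent config uses

Checking "journalctl -u splunk-otel-collector" after the restart should show the exact configuration error if the pipeline still fails to load.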
You were correct, this solved the issue
Requirement: We need to monitor the Customer Decision Hub (CDH) portal, including Campaigns and Dataflows, using Real User Monitoring (RUM) in AppDynamics.

Steps Taken: We injected the AppDynamics JavaScript agent code into the UserWorkForm HTML fragment rule. This is successfully capturing OOTB (Out-of-the-Box) screens but is not capturing Campaigns-related screens.

Challenges: Pega operates as a Single Page Application (SPA), which complicates page load event tracking for Campaigns screens. Additionally, the CDH portal lacks a traditional front-end structure (HTML/CSS/JS), as Pega primarily serves server-generated content, which may restrict monitoring.

Has anyone here successfully implemented such an integration? What are the best practices for passing this kind of contextual data from Pega to AppDynamics? Looking forward to your insights! Best regards,
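Not a full answer, but for SPA-style screens the JavaScript agent's virtual-page monitoring usually has to be switched on explicitly. A sketch of the kind of configuration involved, assuming the agent's SPA2 flag is available in your agent version (verify the exact option names against your controller's documentation before relying on this):

<script type="text/javascript">
  // Must run before adrum.js loads; enables SPA2 virtual-page detection
  window["adrum-start-time"] = new Date().getTime();
  window["adrum-config"] = {
    spa: { spa2: true }
  };
</script>
<script src="//cdn.appdynamics.com/adrum/adrum-latest.js"></script>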
Where can I get a trial copy of AppDynamics on-prem (EUM and Controller) for evaluation purposes for a few weeks?
@richgalloway @PickleRick @isoutamo Thank you all for your responses. This information is very helpful for me to better understand this topic.
Thank you for your response and the provided documentation. I’ve already followed the steps, but encountered communication issues and I had to reset the configuration in order to restore connectivity. Could you please provide a more detailed procedure or tailored guidance for my case to help me securely configure TLS/SSL?
Hi @ktn01, the only solution is to apply INGEST_EVAL rules to your input instead of a python script. Ciao. Giuseppe
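For context, a minimal sketch of what such a rule might look like, assuming the events arrive on a sourcetype you can target and that your Splunk version has the json_delete() eval function (8.1+); the sourcetype and key names below are hypothetical:

props.conf:

[my:json:sourcetype]
TRANSFORMS-dropkeys = strip_json_keys

transforms.conf:

[strip_json_keys]
INGEST_EVAL = _raw=json_delete(_raw, "unwanted_key", "another_unwanted_key")

Because INGEST_EVAL runs at index time on the component that first parses the data (indexer or heavy forwarder), it can trim events that a scripted input never sees, such as HEC traffic.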
Yes, you can achieve this by using a Python script as a scripted input in Splunk. You can read the data using Python, perform the modifications as you described (decoding the JSON, updating the dictionary, and re-encoding it), and output the modified data. Here's how it works:

1. Create a Python script:
- Read the incoming data.
- Apply the necessary transformations.
- Print the modified JSON to standard output (stdout).

2. Configure a scripted input in Splunk:
- Go to Settings > Data Inputs > Scripts.
- Add a new scripted input and select your Python script.
- Set a cron schedule for when the script should run.

The script will run at the configured intervals, fetch the data, apply your changes, and send the transformed data to Splunk for indexing (see the sketch after this post).

Important consideration: the main limitation is that data ingestion will depend on the cron schedule of the scripted input, so real-time or very frequent data processing might not be achievable. Adjust the schedule as needed based on your data update frequency.
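A minimal sketch of the scripted input described above, assuming the source is a local file with one JSON object per line; the path and key names are hypothetical:

#!/usr/bin/env python3
# Scripted input sketch: read JSON events, drop noisy keys, emit the
# slimmed events to stdout, which Splunk indexes.
import json

SOURCE_FILE = "/var/data/events.json"          # hypothetical path
KEYS_TO_DROP = {"debug_info", "raw_payload"}   # hypothetical keys to remove

def main():
    with open(SOURCE_FILE, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)        # decode the JSON
            for key in KEYS_TO_DROP:
                event.pop(key, None)        # drop unwanted keys if present
            print(json.dumps(event))        # re-encode; Splunk indexes stdout

if __name__ == "__main__":
    main()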
Thank you for your reply. I can't pre-process the events before ingestion into Splunk because they are sent directly by an appliance to a HEC input. Christian
Hi @ktn01, yes, it's possible, but it isn't related to Splunk because it pre-processes data before ingestion: I did it for a customer. Pay attention to one issue: by changing the format of your logs, you have to completely rebuild the parsing rules for your data, because the standard parsing rules are no longer applicable to the new data format. Ciao. Giuseppe
Is it possible to use a Python script to perform transforms during event indexing? My aim is to remove keys from JSON files to reduce volume. I'm thinking of using a Python script that decodes the JSON, modifies the resulting dict, and then encodes the result in a new JSON that will be indexed.
| eval Device=Device_name.":".src_ip
| table Device state_to count primarycolor secondarycolor info_min_time info_max_time
@luizlimapg, thanks for your reply. Is there any confirmation after the curl call, or a way to check whether the password was added successfully? Is there any other way to add a password?

I found that this should be possible using the same curl, but I got an error:

curl -k -u user https://localhost:8089/servicesNS/nobody/app/storage/passwords/
Enter host password for user 'user':
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
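For reference, a sketch of the create-and-verify flow against that REST endpoint, assuming the management port is actually serving TLS - the ssl3_get_record:wrong version number error usually means the server side answered in plain HTTP, so it is worth checking whether splunkd has enableSplunkdSSL disabled or a proxy sits in front of port 8089. The app name and credentials below are placeholders:

# Create (POST) a credential; a successful call returns an XML entry for the new object
curl -k -u admin https://localhost:8089/servicesNS/nobody/myapp/storage/passwords \
  -d name=svc_user -d password='s3cret'

# Verify it was stored by listing the endpoint
curl -k -u admin https://localhost:8089/servicesNS/nobody/myapp/storage/passwords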
Hi, I'm trying to build a dashboard for user gate access. How can I visualise this with live data? I am looking for some inbuilt visualisations that would help with this, something like a missilemap but for users moving from one gate to another.
Sorry for having been annoying; I'm stopping this behavior.
"Best practice" depends heavily on use case. There are some general best practices but they might not be suited well to a particular situation at hand. That's why I suggest involving a skilled profes... See more...
"Best practice" depends heavily on use case. There are some general best practices but they might not be suited well to a particular situation at hand. That's why I suggest involving a skilled professional who will review your detailed requirements and suggest appropriate solution.
Please stop UP-ing the thread. You haven't found a similar issue in old threads, noone seems to be able to help you here right now. It's time to engage support. Posting "UP" once a week only clutters... See more...
Please stop UP-ing the thread. You haven't found a similar issue in old threads, and no one seems to be able to help you here right now. It's time to engage support. Posting "UP" once a week only clutters the forum. Thanks for understanding.
Thank you for your detailed response! If I were to implement a Heavy Forwarder (HF) in my architecture, would this be the correct approach? Additionally, would it be considered a best practice for forwarding data to an external system?
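If it helps, forwarding from a heavy forwarder to a non-Splunk system is commonly done with a dedicated tcpout group that sends raw (uncooked) data, since third-party receivers cannot parse Splunk's cooked protocol. A minimal outputs.conf sketch, with a hypothetical group name and destination:

[tcpout:external_system]
server = external.example.com:5140
sendCookedData = false

Whether this counts as best practice still depends on the receiving system and on your routing and filtering requirements.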