All Topics

Hello, I am configuring statsd to send custom metrics from an AWS EC2 instance (on which splunk-otel-collector.service is running) to Splunk Observability Cloud so I can monitor these custom metrics. I have followed the steps in https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver to set up the statsd receiver:

receivers:
  statsd:
    endpoint: "localhost:8125"   # default
    aggregation_interval: 60s    # default
    enable_metric_type: false    # default
    is_monotonic_counter: false  # default
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "timing"
        observer_type: "summary"

My problem is setting up the service section for the statsd receiver. The GitHub doc shows the following configuration for the exporters, but I am not sure how this will work:

exporters:
  file:
    path: ./test.json

service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [file]

I also tried adding statsd to the metrics pipeline in the service section of agent_config.yaml ("receivers: [hostmetrics, otlp, signalfx, statsd]" with "exporters: [signalfx]"), as shown below. When I restart the service with "systemctl restart splunk-otel-collector.service", the collector stops sending any metrics to Splunk Observability Cloud; when I remove statsd again (receivers: [hostmetrics, otlp, signalfx]), it starts sending metrics again.

# pwd
/etc/otel/collector
# ls
agent_config.yaml  config.d  fluentd  gateway_config.yaml  splunk-otel-collector.conf  splunk-otel-collector.conf.example  splunk-support-bundle.sh
#

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]

What is the correct/supported exporter for the statsd receiver? Thanks
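For reference, a minimal sketch of how the statsd receiver would be wired into the existing agent pipeline, assuming the receiver block is declared under the top-level receivers: section of agent_config.yaml (not only referenced in service:) and reusing the signalfx exporter the agent already sends metrics with; if the collector still stops exporting after a restart, the startup log usually names the offending key:

receivers:
  statsd:
    endpoint: "localhost:8125"

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]

# Inspect why the collector rejected the previous config after a restart:
# journalctl -u splunk-otel-collector --no-pager | tail -50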
Requirement: We need to monitor the Customer Decision Hub (CDH) portal, including Campaigns and Dataflows, using Real User Monitoring (RUM) in AppDynamics.

Steps taken: We injected the AppDynamics JavaScript agent code into the UserWorkForm HTML fragment rule. This is successfully capturing OOTB (out-of-the-box) screens but is not capturing Campaigns-related screens.

Challenges: Pega operates as a Single Page Application (SPA), which complicates page-load event tracking for Campaigns screens. Additionally, the CDH portal lacks a traditional front-end structure (HTML/CSS/JS), as Pega primarily serves server-generated content, which may restrict monitoring.

Has anyone here successfully implemented such an integration? What are the best practices for passing this kind of contextual data from Pega to AppDynamics? Looking forward to your insights! Best regards,
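A heavily hedged sketch of the JS agent snippet with SPA monitoring enabled and a custom-data hook; the appKey/beacon values are placeholders, and the way the current Campaign/Dataflow screen name is read on the client is purely hypothetical and would have to come from whatever Pega actually exposes in the browser:

<script type="text/javascript">
  window["adrum-start-time"] = new Date().getTime();
  window["adrum-config"] = {
    appKey: "<EUM-APP-KEY>",                        // placeholder
    beaconUrlHttps: "https://<your-eum-collector>", // placeholder
    spa: { spa2: true },                            // track virtual pages instead of full page loads
    userEventInfo: {
      "VPageView": function (context) {
        // hypothetical: derive the current CDH screen name and attach it as custom data
        var screen = document.title || "unknown";
        return { userData: { pegaScreen: screen } };
      }
    }
  };
</script>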
Where can I get a trial copy of the AppDynamics on-premises version (EUM and Controller) for evaluation purposes for a few weeks?
Is it possible to use a Python script to perform transforms during event indexing? My aim is to remove keys from JSON files to reduce volume. I'm thinking of using a Python script that decodes the JSON, modifies the resulting dict, and then encodes the result into a new JSON document that will be indexed.
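A minimal sketch of the transformation step described above, with a hypothetical drop_keys list; note that Splunk's built-in index-time transforms are regex-based (props.conf/transforms.conf), so a script like this would normally run before the data reaches Splunk, or inside a scripted/modular input, rather than in the indexing pipeline itself:

import json

def strip_keys(raw_event, drop_keys):
    """Decode a JSON event, drop unwanted keys, and re-encode it compactly."""
    event = json.loads(raw_event)
    for key in drop_keys:
        event.pop(key, None)                          # ignore keys that are absent
    return json.dumps(event, separators=(",", ":"))   # compact encoding saves a little more volume

# Hypothetical usage
print(strip_keys('{"host": "web01", "debug_blob": "xxxxx", "msg": "ok"}', ["debug_blob"]))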
Hi, I am trying to build a dashboard for user gate access. How can I visualise this with live data? I am looking for built-in visualisations that could help here, something like a missile map, but for users moving from one gate to another.
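If the flow between gates is the interesting part, a Sankey-style diagram (available as a custom visualization on Splunkbase) may fit; a hedged sketch of a search that produces the source/target/count shape such a visualization expects, assuming hypothetical field names user and gate:

index=gate_access
| sort 0 user _time
| streamstats current=f window=1 last(gate) as previous_gate by user
| where isnotnull(previous_gate)
| stats count by previous_gate, gate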
Hi everyone, I've recently integrated Lansweeper (cloud) data into my Splunk Cloud instance, but over the past few days I've been encountering some ingestion issues. I used this add-on: https://splunkbase.splunk.com/app/5418

Specifically, the data source intermittently stops sending data to Splunk without any clear pattern. Here's what I've checked so far: my configuration seems fine, and the polling interval is set to 300 seconds. The ingestion behavior appears inconsistent, as seen in the attached image. Based on the type of data Lansweeper generates, I wouldn't expect this inconsistency. While double-checking my configuration, I noticed an error, yet the source still manages to ingest data sporadically at certain hours.

Has anyone experienced similar issues, or could anyone provide guidance on how to debug this further? Thanks in advance for your help!
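A hedged starting point for debugging, assuming the add-on logs its collection errors through Splunk's internal index like most modular-input add-ons (the exact source name may differ on your stack):

index=_internal source=*lansweeper* (ERROR OR WARN OR CRITICAL)
| timechart span=1h count by source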
Hi all, I am currently facing an issue in my Splunk environment. We need to forward data from Splunk to a third-party system, specifically Elasticsearch. For context, my setup consists of two indexers, one search head, and one deployment server. Could anyone share the best practices for achieving this? I’d appreciate any guidance or recommendations to ensure a smooth and efficient setup. Thanks in advance!
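A minimal sketch of the documented outputs.conf route for sending raw data to a non-Splunk system, with a hypothetical Logstash/Elasticsearch TCP listener; it would typically live on a heavy forwarder or the indexers, and routing only selected sourcetypes would additionally need props/transforms (_TCP_ROUTING):

# outputs.conf
[tcpout:elastic]
server = logstash.example.com:5044
sendCookedData = false    # send raw events that a non-Splunk receiver can parse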
Every time we have to force replication on the SH nodes of a SH cluster, inputs.conf replicates and overwrites the hostname. Is there any way to blacklist a .conf file by location to prevent it from replicating when you do a forced resync of the SH nodes?
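A hedged sketch of the setting that appears to cover this, assuming your Splunk version exposes conf_replication_include.* under [shclustering] (check server.conf.spec for your release before relying on it); it would need to be set on each cluster member:

# server.conf
[shclustering]
conf_replication_include.inputs = false    # exclude inputs.conf from configuration replication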
I've been working on a search that I *finally* managed to get working. It looks for events generated by a provided network switch and port name and then gives me all the devices that have connected to that specific port over a period of time. Fortunately, most of the device data is included alongside the events which contain the switch/port information... that is, everything except the hostname. Because of this, I've tried to use the join command to perform a second search through a second data set which contains the hostnames for all devices that have connected to the network, and match those hostnames based on the shared MAC address field. The search works, and that's great, but it can only cover a time period of about a day or so before the subsearch breaks past the 50k event limit. Is there any way I can get rid of the join command and maybe use the stats command instead? That's what similar posts to this one seem to suggest, but I have trouble wrapping my head around how the stats command can be used to correlate data from two different events from different data sets... in this case the dhcp_host_name getting matched to the corresponding device in my networking logs. I'll gladly take any assistance. Thank you.

index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| eval time=strftime(_time,"%Y-%m-%d %T")
| join type=left left=L right=R max=0 where L.src_mac=R.src_mac L.IP_Address=R.src_ip
    [| search index="indexB" source="/var/logs/devices.log"
     | fields src_mac src_ip dhcp_host_name]
| stats values(L.time) AS Time, count as "Count" by L.src_mac R.dhcp_host_name L.IP_Address L.SwitchID L.Port_Id
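A hedged sketch of the stats-based pattern (untested, field names taken from the post, keyed on src_mac only; add the IP check if you need the stricter match): search both indexes at once, normalise the MAC in the indexA events, then let stats collapse everything that shares a src_mac so the DHCP hostname lands on the same row as the switch/port data.

(index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13)
OR (index="indexB" source="/var/logs/devices.log")
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=if(index=="indexA", lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6")), src_mac)
| stats values(dhcp_host_name) as dhcp_host_name, values(IP_Address) as IP_Address, values(SwitchID) as SwitchID, values(Port_Id) as Port_Id, sum(eval(if(index=="indexA",1,0))) as Count, max(_time) as last_seen by src_mac
| where Count > 0
| eval Time=strftime(last_seen, "%Y-%m-%d %T")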
I've piped a Splunk log query extract into a table showing disconnected and connected log entries sorted by time. NB row 1 is fine. Row 2 is fine because it connected within 120 sec. Now I want to show "disconnected" entries with no subsequent "connected" row, say within a 120 sec time frame. So I want to pick up rows 4 and 5. Can someone advise on the Splunk query format for this?

Table = Connect_Log

Row   Time        Log text
1     7:00:00am   connected
2     7:30:50am   disconnected
3     7:31:30am   connected
4     8:00:10am   disconnected
5     8:10:30am   disconnected
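A hedged sketch, assuming a field called log_text holding "connected"/"disconnected": sort the events newest-first so streamstats can see the chronologically next entry, then keep disconnects whose next entry is not a connect within 120 seconds. It only inspects the immediately following event, which is enough to flag rows 4 and 5 in the example; if the data covers several hosts, add a by clause to streamstats.

<your Connect_Log search>
| sort 0 - _time
| streamstats current=f window=1 last(_time) as next_time, last(log_text) as next_event
| where log_text="disconnected" AND (isnull(next_time) OR next_event!="connected" OR (next_time - _time) > 120)
| sort 0 _time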
Hi folks! I want to create a custom GeneratingCommand that makes a simple API request, but how do I save the API key in passwords.conf? I have a default/setup.xml file with the following content:

<setup>
  <block title="Add API key(s)" endpoint="storage/passwords" entity="_new">
    <input field="password">
      <label>API key</label>
      <type>password</type>
    </input>
  </block>
</setup>

But when I configure the app, the password (API key) is not saved in the app folder (passwords.conf). And if I need to add several API keys, how can I assign names to them and get information from the storage? I doubt this code will work:

try:
    app = "app-name"
    settings = json.loads(sys.stdin.read())
    config = settings['configuration']
    entities = entity.getEntities(['admin', 'passwords'], namespace=app, owner='nobody',
                                  sessionKey=settings['session_key'])
    # grab the first stored credential (Python 3: items() is a view, not a list)
    i, c = list(entities.items())[0]
    api_key = c['clear_password']
    # user, api_key = c['username'], c['clear_password']
except Exception as e:
    yield {"_time": time.time(), "_raw": str(e)}
    self.logger.fatal("ERROR Unexpected error: %s" % e)
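For the "several named keys" part, a hedged sketch using the splunklib SDK that generating commands built on splunklib can reach via self.service; the realm/username values are arbitrary labels you choose when creating the credential:

# Inside the GeneratingCommand subclass (assumes the app bundles splunklib)
def get_api_key(self, realm, username):
    """Return the clear-text secret stored under (realm, username), or None."""
    for cred in self.service.storage_passwords:
        if cred.realm == realm and cred.username == username:
            return cred.clear_password
    return None

# Creating a named credential once (e.g. from a setup handler):
# self.service.storage_passwords.create(password="<secret>", username="prod_key", realm="my_app")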
Hi all, I have a CSV lookup with the below data:

Event_Code
AUB01
AUB36
BUA12

I want to match it against a dataset which has a field named Event_Code with several values. I need to extract the count per day of the event codes that match the CSV lookup. My query:

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| lookup dataeventcode.csv Event_Code
| timechart span=1d dc(Event_Code)

However, the result shows a count across all event codes per day instead of matching the event codes from the CSV and then giving the total count per day.
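A hedged sketch of one common pattern: use the lookup as a filter via a subsearch, then count per day (swap "count by Event_Code" for a plain count if you only want the daily total); assumes dataeventcode.csv is readable via inputlookup:

index=abc
| rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|"
| search [| inputlookup dataeventcode.csv | fields Event_Code ]
| timechart span=1d count by Event_Code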
Dear experts, why is the following line

| where my_time>=relative_time(now(),"-1d@d") AND my_time<=relative_time(now(),"@d")

accepted as a valid statement in a search window, but as soon as I use exactly this code in a dashboard, I get the error message "Error in line 100: Unencoded <"? The dashboard code validator somehow fails on the <= comparison. >= works, as does =, but not <=. We're on Splunk Cloud.
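Assuming a Simple XML dashboard: the query is embedded in an XML document, so a literal < has to be escaped as &lt; (a literal > is harmless to the XML parser, which is why >= works); a sketch of the escaped form, or alternatively the whole query can be wrapped in a CDATA section:

<query>
  ... | where my_time>=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d")
</query>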
I'd like to take an existing Splunkbase app and make a few changes to the bin files. What is the proper way to do this in Splunk Cloud?  I'm guessing I should modify it, save it to a tar.gz and upload/install it as a custom app?
Hello, I would like to confirm whether it is necessary to add other components (such as indexers and search heads) as clients to our Deployment Server in our Splunk environment; I know this is necessary for the forwarders. We currently have a Deployment Server set up, but we are unsure whether registering the other instances as clients is a required step, or whether any specific configurations need to be made for each type of component. Thank you in advance. Best regards,
Hello, is it possible to send logs (for example /var/log/GRPCServer.log) directly to Splunk Observability Cloud using the Splunk Universal Forwarder? If yes, how can we configure the Universal Forwarder to send logs directly to Splunk Observability Cloud, given that we don't have an IP address/hostname for Splunk Observability Cloud and port 9997 is not open on the Splunk Observability Cloud end? In general, to configure the Universal Forwarder for Splunk Enterprise/Cloud we would follow the steps below:

1. Add the IP address/hostname where the logs should be sent: ./splunk add forward-server <IP/HOST_NAME>:9997
2. Add the file whose logs should be collected: ./splunk add monitor /var/log/GRPCServer.log

Thank you
Would anyone be able to help me with one more thing, please? I have a number display dashboard which represents the BGP flap details as #Device_name & #BGP peer IP; however, I cannot add the time of the BGP flap to the number display.

Current query:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| dedup Device_name,src_ip
| stats count by Device_name,src_ip,state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor

Is there something we can add to display the flap time in the same number display?
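A hedged sketch: keep the newest flap timestamp per device/peer with latest(_time) and format it, so the value is available to the visualization (how it is surfaced, e.g. as part of a label field, depends on the single-value/trellis setup); the dedup from the original may no longer be needed since stats already collapses per device/peer:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count, latest(_time) as last_flap by Device_name, src_ip, state_to
| eval last_flap=strftime(last_flap, "%Y-%m-%d %H:%M:%S")
| eval label=Device_name." / ".src_ip." @ ".last_flap
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor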
Hello Splunkers!! I have reassigned all the knowledge objects of 5 users to the admin user. After that, those users are no longer visible in the user list when I log in with the "Admin User" account. Please help me identify the root cause and fix this. Thanks in advance.
Hello, I wanted to get some more detailed information, so I opened this case. It is about alert settings. I have set Threshold to '90', Trigger to 'Immediately', and Alert when to 'Above'. With these settings, does the alarm fire starting from 90.1? I remember that in the beginning, when I set it to 90, it was registered as 89, and it is currently set up that way, so I would like to know whether an alert fires at 89.1. If an alarm does occur at 89.1, I need to fix it as soon as possible. Please reply. Thank you!!!
This is my first time using Splunk Cloud, and I'm trying to perform field extraction directly on the heavy forwarder before indexing the data. I created REPORT and TRANSFORMS stanzas in props.conf, with transforms.conf configured using a regex expression that is tested and works in Splunk Cloud through the field extractor, but it does not work when I try to use it on the HF. Are there any limitations on data extraction when using a heavy forwarder with Splunk Cloud?
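One thing worth noting: REPORT is a search-time setting and is evaluated on the search tier (Splunk Cloud), so placing it on the HF has no effect; only index-time rules run there. A minimal hedged sketch of an index-time extraction on the HF, with hypothetical stanza/field names (indexed fields generally also need a matching fields.conf entry on the search tier):

# props.conf (heavy forwarder)
[my_sourcetype]
TRANSFORMS-extract_user = extract_user_indexed

# transforms.conf (heavy forwarder)
[extract_user_indexed]
REGEX = user=(\w+)
FORMAT = user::$1
WRITE_META = true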