All Posts


I did it on each search head separately and it worked! Too bad there's no way to do it from the master and deploy it to them, but at least it works. Thanks for the help!
Hello @bishida, I have gone through the repo for the statsdreceiver but was not able to configure it successfully. My config:

receivers:
  statsd:
  statsd/2:
    endpoint: "localhost:8127"
    aggregation_interval: 70s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

I tried the configuration above but it did not work, and I am not sure how Splunk Observability Cloud will know to listen on port 8127.

Let me explain my use case in detail. I have a couple of EC2 Linux instances on which a statsd server is running; a Golang application generates some custom gRPC metrics and sends them to statsd on port UDP:8125. Now I want these custom gRPC metrics sent on to Splunk Observability Cloud, so that I can monitor them there. For this we need a connection between the EC2 Linux instances and Splunk Observability Cloud, so that it can receive these custom gRPC metrics. Since we don't have any hostname/IP address for Splunk Observability Cloud, we have to use some agent to do this; I think we can use "splunk-otel-collector.service". Currently I am able to capture the predefined metrics such as "^aws.ec2.cpu.utilization", system.filesystem.usage, etc. in my Splunk Observability Cloud, but now I also want the custom gRPC metrics handled the same way.

Before this, I had a setup with multiple EC2 Linux instances running a statsd server, plus a separate Splunk Enterprise EC2 instance that collected all the metrics. Splunk Enterprise provides commands to connect instances to it ("./splunk enable listen 9997" and "./splunk add <destination_hostname>:9997"), and I was using the configuration below to do so:

"statsd": {
    "statsd_max_packetsize": 1400,
    "statsd_server": "destination_hostname",
    "statsd_port": "8125"
},

I want to achieve the same thing with Splunk Observability Cloud. Can you please explain in detail how we can connect EC2 instances to Splunk Observability Cloud to send the custom gRPC metrics from the Golang application arriving on port UDP:8125 (statsd)? If using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver is the only way, then what changes do I need to make in the configuration files for the custom metric collection (and where in that directory do they go), including hostnames, port names, and which files to edit? Thanks
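My guess is that the receiver also has to be wired into a metrics pipeline in the collector's agent config. A sketch of what I mean (the batch processor and signalfx exporter names assume the default splunk-otel-collector config; the endpoint is a placeholder for my setup):

```yaml
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"    # listen where the Golang app already sends statsd
    aggregation_interval: 60s

service:
  pipelines:
    metrics/statsd:
      receivers: [statsd]
      processors: [batch]       # assumes the batch processor defined in the default config
      exporters: [signalfx]     # assumes the signalfx exporter defined in the default config
```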
Hello, I have a Splunk Connect for Syslog (SC4S) server that retrieves logs from a source and transmits them to Splunk indexers. In order to reduce the number of events, I want to filter the logs at the SC4S level. Note that SC4S uses syslog-ng for filtering and parsing. The use case is as follows: when an event arrives on the SC4S server and contains the IP address 10.9.40.245, the event should be dropped. Does anyone have any idea how to create this filter on SC4S? Thank you.
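In plain syslog-ng terms (outside of SC4S), what I'm trying to achieve would look something like the following; the source name is a placeholder and I don't know where SC4S expects such a filter:

```
# Generic syslog-ng sketch, not SC4S-specific; s_net is a hypothetical source name
filter f_drop_ip {
    match("10.9.40.245" value("MESSAGE") type(string) flags(substring));
};

log {
    source(s_net);
    filter(f_drop_ip);
    flags(final);    # matching events stop here; no destination, so they are dropped
};
```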
Well, unfortunately, as I stated above, "neither fixes the issue." It doesn't matter how I configure the UF props.conf or the SH props.conf; Splunk refuses to parse the JSON properly, even though other JSON data feeds work just fine. Guess I'll have to open another ticket with Splunk.
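For completeness, the combination I've been attempting is along these lines (the sourcetype name is a placeholder):

```
# props.conf on the UF (structured-data extraction happens at the forwarder)
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

# props.conf on the search head (avoid double extraction of the same fields)
[my_json_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false
```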
Changed everything as mentioned before, but no data is collected into O11y Cloud.
Hi @dorHerbesman, beware: the file is web-features.conf, not web.features.conf, but maybe it's just a mistyping. Anyway, what do you mean by "Splunk master instance"? You have to do this on the Search Heads, not on other instances. Ciao. Giuseppe
Hi @ameet, the 32-bit UF should run on a 64-bit OS, but it's always better to have the correct version! You can install over the existing installation, even though it's better to remove the old one first. To avoid this kind of issue, it's always best to plan the installation: create (e.g. in Excel) a list of destination systems with the OS and architecture of each one, so you don't install the wrong version. Ciao. Giuseppe
Hello, I am going through the steps for updating Splunk SOAR (unprivileged) from the site documentation, but when I copy the new package to the Splunk-soar folder and try to start the phantom service, I encounter the error: Phantom Startup failed: postgresql-11
I have tried pasting your code into /opt/splunk/etc/system/local/web.features.conf on my Splunk master instance and restarting it (including a rolling restart), with no luck. Maybe I should put it somewhere in my shcluster/apps? Any other suggestions?
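What I was hoping for is a way to push it from the SHC deployer, along these lines (a sketch; the app name is made up and credentials are placeholders):

```shell
# On the SHC deployer, a sketch; "web_features_app" is a hypothetical app name
mkdir -p $SPLUNK_HOME/etc/shcluster/apps/web_features_app/local
cp web-features.conf $SPLUNK_HOME/etc/shcluster/apps/web_features_app/local/
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_member>:8089 -auth admin:<password>
```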
Hi!! I'm very new to Splunk and just want some advice. I accidentally installed the 32-bit version of the universal forwarder on my test Linux machine. Is it fine to install the 64-bit version on top without removing the 32-bit version, and will this cause issues later? I'm also running Splunk Web on the same Linux machine. Any advice or suggestions, please. Amit
I don't think this is related to my case. If the change is replicated between 2 SHs, why is it not replicated on the remaining SH? All 3 SHs are sending logs to the indexers.
Hello, we have a Splunk indexer cluster with two search heads and would like to use this add-on in the cluster: https://splunkbase.splunk.com/app/4055 We installed the add-on on the search head without ES and on all indexers via the Cluster Manager app. Then we set up all the inputs for the add-on on the search head, but we could not select the index "M365"; we could only enter it manually. The problem now is that this index is not being filled by the indexers! What are we doing wrong here?
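Our understanding so far is that the index also has to be defined on the indexers themselves, e.g. via an indexes.conf pushed from the Cluster Manager. A sketch of what we tried (default paths; the app location is our assumption):

```
# indexes.conf in a cluster-manager app (e.g. manager-apps/<app>/local), a sketch
[m365]
homePath   = $SPLUNK_DB/m365/db
coldPath   = $SPLUNK_DB/m365/colddb
thawedPath = $SPLUNK_DB/m365/thaweddb
repFactor  = auto   # required for the index to replicate across the cluster
```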
Hi @, please see this: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Clustersandsummaryreplication Ciao. Giuseppe
yes, sh is sending logs to indexers
Hi Experts, has anyone achieved SNMP polling of a network device from a RedHat-based Splunk HF? I am trying to follow the documentation below but end up getting some errors related to databases and connections: Collect data with input plugins | Telegraf Documentation
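The Telegraf configuration I'm trying to get working is roughly the following (the agent address, HEC host, and token are placeholders for my environment):

```toml
# telegraf.conf sketch; values in <> and the agent IP are placeholders
[[inputs.snmp]]
  agents = ["udp://192.0.2.1:161"]
  version = 2
  community = "public"

  [[inputs.snmp.field]]
    oid  = "RFC1213-MIB::sysUpTime.0"
    name = "uptime"

# Forward to a Splunk HEC endpoint on the HF using the splunkmetric serializer
[[outputs.http]]
  url = "https://<hf_host>:8088/services/collector"
  data_format = "splunkmetric"
  splunkmetric_hec_routing = true
  [outputs.http.headers]
    Authorization = "Splunk <hec_token>"
```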
Hi @Nawab, did you configure your SHs to send logs to the Indexers? Ciao. Giuseppe
I have a SHC of 3 search heads. I changed some fields in a data model on 1 SH. The change is replicated on the 2nd SH, but the 3rd SH does not have the same fields, even though that SH was the captain. I ran the resync command but still have the same issue.
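For reference, the resync step I ran was along these lines (a sketch; credentials omitted):

```
# On the out-of-sync member
$SPLUNK_HOME/bin/splunk resync shcluster-replicated-config

# Then check member/captain state from any member
$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:<password>
```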
Please find the requested screenshots