All Posts



Hi @ameet, the 32-bit UF should run on a 64-bit OS, but it's always better to have the correct version! You can install over the existing installation, although it's better to remove the old one first. To avoid this kind of issue, it's always better to plan the installation, creating (e.g. in Excel) a list of destination systems with the OS and architecture of each one, so you don't install the wrong version. Ciao. Giuseppe
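If it helps, a quick way to confirm the host architecture and which build is actually installed (a sketch, assuming the default /opt/splunkforwarder install path):

    # host architecture: x86_64 means 64-bit
    uname -m
    # inspect the installed forwarder binary (reports ELF 32-bit or 64-bit)
    file /opt/splunkforwarder/bin/splunkd
    # the UF also reports its own version and build
    /opt/splunkforwarder/bin/splunk version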
Hello, I am going through the steps of updating Splunk SOAR Unpriv from the site documentation, but when I copy the new package to the splunk-soar folder and try to start the phantom service, I encounter the error: Phantom Startup failed: postgresql-11
I have tried pasting your code into /opt/splunk/etc/system/local/web.features.conf on my Splunk master instance and restarting it (including a rolling restart), with no luck. Maybe I should put it somewhere in my shcluster/apps? Any other suggestions?
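If this is a search head cluster, files edited locally under etc/system/local are not replicated to the other members; configuration is normally pushed from the deployer. A sketch, with a hypothetical app name my_web_features:

    # on the deployer (app name is hypothetical)
    mkdir -p $SPLUNK_HOME/etc/shcluster/apps/my_web_features/local
    # put your web.features.conf stanza in that local/ directory, then push:
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_member>:8089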
Hi!! I'm very new to Splunk and just want some advice. I accidentally installed a 32-bit version of the universal forwarder on my test Linux machine. Is it fine to install the 64-bit version on top without removing the 32-bit version, and will this cause issues later? I'm also running Splunk Web on the same Linux machine too. Any advice or suggestions, please. Amit
I don't think this is related to my case. If the change is replicated between 2 SHs, why is it not replicated on the remaining SH? All 3 SHs are sending logs to the indexers.
Hello @bishida, I have gone through the repo for the statsdreceiver but I was not able to configure it successfully.

    receivers:
      statsd:
      statsd/2:
        endpoint: "localhost:8127"
        aggregation_interval: 70s
        enable_metric_type: true
        is_monotonic_counter: false
        timer_histogram_mapping:
          - statsd_type: "histogram"
            observer_type: "gauge"
          - statsd_type: "timing"
            observer_type: "histogram"
            histogram:
              max_size: 100
          - statsd_type: "distribution"
            observer_type: "summary"
            summary:
              percentiles: [0, 10, 50, 90, 95, 100]

I tried the configuration above but it was not working; I am not sure how Splunk Observability Cloud will know to listen on port 8127.

Let me explain my use case in detail: I have a couple of EC2 Linux instances on which a statsd server is running, generating custom gRPC metrics from a golang application on port UDP:8125 (statsd). Now I want to send these custom gRPC metrics to Splunk Observability Cloud so that I can monitor them there. For this we need to make a connection between the EC2 Linux instances and Splunk Observability Cloud, and Splunk Observability Cloud should be able to receive these custom gRPC metrics. As we don't have any hostname/IP address for Splunk Observability Cloud, we have to use some agent; I think we can use "splunk-otel-collector.service".

Currently I am able to capture the predefined metrics such as "^aws.ec2.cpu.utilization", system.filesystem.usage etc. in my Splunk Observability Cloud, but now I also want the custom gRPC metrics in the same way.

Before this setup I had multiple EC2 Linux instances on which the statsd server was running, and a separate Splunk Enterprise EC2 instance that collected all the metrics. Splunk Enterprise provides commands to connect instances ("./splunk enable listen 9997" and "./splunk add forward-server <destination_hostname>:9997"), and I was using the configuration below to do so.

    "statsd": {
        "statsd_max_packetsize": 1400,
        "statsd_server": "destination_hostname",
        "statsd_port": "8125"
    },

I want to achieve the same thing using Splunk Observability Cloud. Can you please explain in detail how we can connect EC2 instances with Splunk Observability Cloud to send custom gRPC metrics from a golang application running on port UDP:8125 (statsd)? If using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver is the only way, then what changes do I need to make in the configuration files for the custom metric collection (what has to be added, and where in that directory), the hostname, the port names mentioned in any files, etc., in detail? Thanks
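On the port question: Splunk Observability Cloud itself never listens on 8127; the splunk-otel-collector running on each EC2 instance does, and only once the statsd receiver is wired into a metrics pipeline. A minimal sketch of the relevant parts of the collector's agent_config.yaml, assuming the signalfx exporter and standard processors from the default agent config are already present (the 8125 endpoint comes from the post; everything else is illustrative):

    receivers:
      statsd:
        endpoint: "0.0.0.0:8125"    # the collector listens here for statsd packets
        aggregation_interval: 60s
        enable_metric_type: true

    service:
      pipelines:
        metrics:
          # append statsd to whatever receivers the default config already lists
          receivers: [hostmetrics, statsd]
          processors: [memory_limiter, batch, resourcedetection]
          exporters: [signalfx]

With this, the golang application keeps sending to localhost:8125 exactly as before; only the consumer changes from the old statsd server to the collector, which forwards the metrics to Observability Cloud via its configured exporter.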
Hello, we have a Splunk indexer cluster with two search heads and would like to use this add-on in the cluster: https://splunkbase.splunk.com/app/4055 We installed the add-on on the search head without ES and on all indexers via the Cluster Manager app. Then we set up all the inputs for the add-on on the search head, but could not select the index “M365”, only enter it manually. The problem now is that this index is not being filled by the indexers! What are we doing wrong here?
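One thing to check, assuming the index was never actually created on the indexer tier: typing an index name into an input does not create it on the peers; it has to be defined in an indexes.conf pushed from the Cluster Manager, for example (app name is hypothetical, and note that index names must be lowercase):

    # on the cluster manager: etc/manager-apps/<your_app>/local/indexes.conf
    [m365]
    homePath   = $SPLUNK_DB/m365/db
    coldPath   = $SPLUNK_DB/m365/colddb
    thawedPath = $SPLUNK_DB/m365/thaweddb
    repFactor  = auto

    # then push it to the peers:
    $SPLUNK_HOME/bin/splunk apply cluster-bundle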
Hi @, please see this: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Clustersandsummaryreplication Ciao. Giuseppe
Yes, the SH is sending logs to the indexers.
Hi Experts, has anyone achieved SNMP polling of a network device from a RedHat-based Splunk HF? I am trying to follow the documentation below but end up getting some errors related to databases and connections. Collect data with input plugins | Telegraf Documentation
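For reference, a minimal Telegraf SNMP input that polls a device and writes to a file the HF could monitor; the agent address and community string are placeholders, and the numeric OID avoids needing MIB files installed:

    # telegraf.conf (sketch)
    [[inputs.snmp]]
      agents = ["udp://192.0.2.10:161"]
      version = 2
      community = "public"
      [[inputs.snmp.field]]
        name = "uptime"
        oid = ".1.3.6.1.2.1.1.3.0"

    [[outputs.file]]
      files = ["/var/log/telegraf/snmp_metrics.out"]
      data_format = "influx"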
Hi @Nawab, did you configure your SHs to send logs to the Indexers? Ciao. Giuseppe
I have a SHC of 3 search heads. I changed some fields in a data model on 1 SH. It is replicated on the 2nd SH, but the 3rd SH does not have the same fields, even though that SH was the captain. I ran the resync command but still have the same issue.
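For anyone hitting the same thing, the commands usually involved are (a sketch; run on the member that is out of sync):

    # check replication status across the cluster
    $SPLUNK_HOME/bin/splunk show shcluster-status
    # pull the replicated configuration baseline from the captain
    $SPLUNK_HOME/bin/splunk resync shcluster-replicated-config

If the resync completes but the data model fields still differ, comparing "splunk btool datamodels list --debug" output on each member can show whether a local copy is shadowing the replicated one.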
Please find the requested screenshots
(Screenshots: Report schedule, Search query, Index, Summary index)
We faced the same challenge, but in our case we had to migrate the OS version and the Splunk version at the same time:
- We created an indexer cluster with the new OS version and Splunk version. This cluster ingests the new data.
- We kept a single instance of the old read-only indexer cluster to query historical data.
- The search head cluster was connected to all peers: the indexer cluster and the standalone indexer.
- Once the retention times had expired, we decommissioned the old indexer.
@yuanliu $t_time.latest$ comes from an input selector. As I want to always have the @d timestamp, your proposal must be changed slightly. Below is my untested proposal for how a solution could look, based on an if evaluation:

    index="abc" search_name="def"
        [| makeresults
         | eval earliest=relative_time($t_time.latest$, "-1d@d")
         | eval latest=if("$t_time.latest$" == "now", relative_time(now(), "@d"), relative_time($t_time.latest$, "@d"))
         | fields earliest latest
         | format]
    | table _time zbpIdentifier

However, for me the @bowesmana proposal is easier to understand.
@bowesmana Exactly what I was looking for, thank you. 
1. I have the time attribute added as required.
2. I have set the summarization period to run once every 5 mins (*/5 * * * *), and the old-summaries cleanup is the default 30 mins.
3. Added a summary range earliest time of 91 days.
4. Adding summariesonly=true doesn't give any results, even for a 1-hour range.
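For anyone debugging the same thing, a quick probe to see whether the acceleration summaries contain anything at all (My_DM is a hypothetical data model name):

    | tstats summariesonly=true count from datamodel=My_DM where earliest=-1h by _time span=5m

If this returns nothing while the same search with summariesonly=false returns data, the summaries have simply not been built for that range yet.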