All Posts


Hi @, please see this: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Clustersandsummaryreplication Ciao. Giuseppe
Yes, the SH is sending logs to the indexers.
Hi Experts, has anyone achieved SNMP polling of a network device from a RedHat-based Splunk HF? I am trying to follow the documentation below but keep getting errors related to databases and connections. Collect data with input plugins | Telegraf Documentation
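For illustration, a minimal telegraf.conf along the lines of that documentation might look like the sketch below: an SNMP input polling one device and an HTTP output sending the metrics to a Splunk HEC endpoint. The agent address, OID, HEC URL and token are placeholders, not values from this thread.

# telegraf.conf -- hedged sketch, all addresses and tokens are placeholders
[[inputs.snmp]]
  agents = ["udp://192.0.2.10:161"]    # hypothetical network device
  version = 2
  community = "public"

  [[inputs.snmp.field]]
    oid = ".1.3.6.1.2.1.1.3.0"         # sysUpTime as a numeric OID, so no MIB lookup is needed
    name = "uptime"

[[outputs.http]]
  url = "https://splunk-hf.example.com:8088/services/collector"   # hypothetical HEC endpoint
  data_format = "splunkmetric"
  splunkmetric_hec_routing = true
  [outputs.http.headers]
    Content-Type = "application/json"
    Authorization = "Splunk 00000000-0000-0000-0000-000000000000"

If the errors mention MIBs or database files, sticking to numeric OIDs as above is one way to rule the MIB lookup out as the cause.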
Hi @Nawab, did you configure your SHs to send logs to the indexers? Ciao. Giuseppe
I have an SHC of 3 search heads. I changed some fields in a data model on one SH. The change is replicated on the 2nd SH, but the 3rd SH does not have the same fields, even though that SH was the captain. I ran the resync command but still have the same issue.
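For reference, a hedged sketch of the commands usually involved here, run on the member that is out of sync (assuming the resync command meant above is resync shcluster-replicated-config):

# check which member is captain and whether replication is healthy
$SPLUNK_HOME/bin/splunk show shcluster-status

# pull the captain's replicated configuration onto this member
$SPLUNK_HOME/bin/splunk resync shcluster-replicated-config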
Please find the requested screenshots
Screenshots attached: Report schedule, Search query, Index, Summary index
We faced the same challenge, but in our case we had to migrate the OS version and the Splunk version at the same time:
1. We created an indexer cluster with the new OS version and Splunk version. This cluster ingests the new data.
2. We kept a single instance of the old indexer cluster, read-only, to query historical data.
3. The search head cluster was connected to all peers: the new indexer cluster and the standalone indexer.
4. Once the retention times had expired, we decommissioned the old indexer.
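A rough sketch of the search-head-side wiring that setup implies is below; the hostnames and key are placeholders and exact stanza names vary by Splunk version, so treat it as an illustration rather than the poster's actual config.

# server.conf on each search head -- point the SHC at the new cluster manager
[clustering]
mode = searchhead
manager_uri = https://cm-new.example.com:8089
pass4SymmKey = <indexer cluster key>

# distsearch.conf -- add the old, now standalone, indexer as an ordinary search peer
[distributedSearch]
servers = https://old-idx.example.com:8089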
Hello @bishida, I have gone through the repo for the statsdreceiver but I was not able to configure it successfully.

receivers:
  statsd:
  statsd/2:
    endpoint: "localhost:8127"
    aggregation_interval: 70s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

I tried to configure the above but it was not working, and I am not sure how Splunk Observability Cloud will know to listen on port 8127.

Let me explain my use case in detail. I have a couple of EC2 Linux instances on which a statsd server is running; it generates custom gRPC metrics from a Golang application on port UDP:8125 (statsd). I want to send these custom gRPC metrics to Splunk Observability Cloud so that I can monitor them there. For this we need a connection between the EC2 Linux instances and Splunk Observability Cloud; since we don't have any hostname/IP address for Splunk Observability Cloud, we have to use some agent, and I think we can use "splunk-otel-collector.service".

Currently I am able to capture the predefined metrics such as "^aws.ec2.cpu.utilization", system.filesystem.usage etc. in Splunk Observability Cloud, but now I also want the custom gRPC metrics in the same way.

Before this setup I had multiple EC2 Linux instances on which the statsd server was running, plus a separate Splunk Enterprise EC2 instance that collected all the metrics. Splunk Enterprise provides commands to connect instances ("./splunk enable listen 9997" and "./splunk add <destination_hostname>:9997"), and I was using the configuration below to do so.

"statsd": {
  "statsd_max_packetsize": 1400,
  "statsd_server" : "destination_hostname",
  "statsd_port" : "8125"
},

I want to achieve the same thing with Splunk Observability Cloud. Can you please explain in detail how we can connect EC2 instances to Splunk Observability Cloud to send the custom gRPC metrics from the Golang application running on port UDP:8125 (statsd)? If using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver is the only way, then what changes do I need to make in the configuration files for the custom metric collection (what has to be added, and where), and which hostnames and ports need to be mentioned in which files? Thanks
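If it helps, here is a minimal sketch of how the statsd receiver is typically wired into the splunk-otel-collector agent config (usually /etc/otel/collector/agent_config.yaml). The port, interval and pipeline name below are assumptions on my part; the processor and exporter names are the ones the default agent config ships with.

receivers:
  statsd:
    endpoint: "0.0.0.0:8125"        # listen where the Go app already sends its statsd packets
    aggregation_interval: 60s
    enable_metric_type: true

service:
  pipelines:
    metrics/statsd:                  # hypothetical pipeline name
      receivers: [statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]          # ships the metrics to Splunk Observability Cloud

The key point is that Observability Cloud never listens on 8125/8127 itself: the collector running on each EC2 instance listens locally and forwards the metrics using the realm and access token already configured in its environment.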
@yuanliu $t_time.latest$ comes from an input selector. As I always want the @d timestamp, your proposal must be changed slightly. Below is my untested proposal of how a solution could look, based on an if evaluation:

index="abc" search_name="def"
    [| makeresults
     | eval earliest=relative_time($t_time.latest$,"-1d@d")
     | eval latest=if("$t_time.latest$" == "now", relative_time(now(), "@d"), relative_time($t_time.latest$,"@d"))
     | fields earliest latest
     | format]
| table _time zbpIdentifier

However, the @bowesmana proposal is easier for me to understand.
@bowesmana Exactly what I was looking for, thank you. 
1. I have the time attribute added as required.
2. I have set the summarization period to run once every 5 minutes (*/5 * * * *), and the old-summaries clean-up is the default 30 minutes.
3. Added a summary range earliest time of 91 days.
4. Adding summariesonly=true doesn't give any results, even for just 1 hour.
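For comparison, this is roughly what those settings look like in datamodels.conf, together with a quick check search; the data model name is a placeholder and the values are my reading of the steps above, not the poster's actual config.

# datamodels.conf (local) -- "My_DataModel" is a placeholder name
[My_DataModel]
acceleration = 1
acceleration.earliest_time = -91d
acceleration.cron_schedule = */5 * * * *

# quick check that summaries have actually been built:
# | tstats summariesonly=true count from datamodel=My_DataModel by _time span=5m

If that tstats search returns nothing while summariesonly=false returns events, the acceleration summaries simply haven't been built yet for the range being searched.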
I have an application on Splunkbase and want to rename it along with its commands and custom action. I have updated the app name by renaming the folder and updating the app ID, and I've also updated the commands and custom action with the new name.

While testing it on my local Splunk instance I observed that the existing application isn't replaced by the new one, since the folder name and app name/ID differ from the older version. I believe that is fine, as I can ask users to remove it from their instances, but I want the saved searches as well as the local data of the older app to be available in the renamed (newer) app, and I'm unable to find an appropriate way of doing so.

Lastly, there was a post in the community where the solution was to clone the local data from the older app to the newer app, but that isn't feasible for me as I don't have access to the instances of the users who have the older app installed. Can someone please help me with this?

Also, I had a few other questions related to older applications: What is the procedure for deleting an already existing application on Splunkbase? Is emailing Splunk support the only way? I tried app archiving but it doesn't restrict users from installing it. Is there a way to transfer the old Splunk application or account to a new account, or any alternative to emailing the Splunk support team?

TL;DR: How can I replace the already installed application on the user's end with the newly renamed application in Splunk? Since the names of the applications differ, Splunk installs a separate app for the new name instead of updating the existing one. If users are already using the existing application and have its saved configurations and searches, how can we migrate them to the newly renamed application?
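One hedged way to handle the migration on a user's instance (this is a sketch, not an established Splunkbase mechanism; "old_app" and "new_app" are placeholder app IDs) is to document a small copy step that moves the old app's local knowledge objects into the renamed app before the old one is removed:

# run on the user's Splunk instance -- adjust the placeholder app IDs first
OLD="$SPLUNK_HOME/etc/apps/old_app"
NEW="$SPLUNK_HOME/etc/apps/new_app"

# carry over user-created knowledge objects (saved searches, lookups, local settings)
cp -r "$OLD/local/." "$NEW/local/"
cp "$OLD/metadata/local.meta" "$NEW/metadata/local.meta"   # review permissions afterwards

# then remove the old app and restart Splunk
rm -rf "$OLD"
"$SPLUNK_HOME/bin/splunk" restart

Note that saved searches which call the old command names would still need a rename pass inside the copied savedsearches.conf.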
Hi @Cramery_, could you share a sample of your complete logs (anonymized if necessary)? Anyway, when there's a backslash it's always a problem, because in Splunk you may need more backslashes than you use on regex101.com. Do you need to use the regex in a search or in conf files? If in conf files, use the same number of backslashes that you use on regex101; if in a search, add one more level of backslashes. Ciao. Giuseppe
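As an illustration of that rule (the sourcetype and field names below are made up), a pattern meant to match a value beginning with C:\Windows\ could look like this in each place:

# on regex101.com and in conf files (props.conf) -- PCRE receives the string as-is
[my:sourcetype]
EXTRACT-parent = (?<parent>C:\\Windows\\[^\r\n]+)

# in an SPL search, the quoted string is unescaped once more, so each backslash is doubled again
... | rex field=parent "(?<parent2>C:\\\\Windows\\\\.+)"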
Hi, I ran into a very odd and specific issue. I try to regex-filter a field, let's call it "parent". The field has the following structure (not the actual field I want to regex, but it's easier to show the issue this way, so options like "just use .*" won't work):

C:\\Windows\\System32\\test\\

I try to regex this field like "C:\\\\Windows\\\\System32\\\\test\\\\" and this does not work. But as soon as I replace that second folder with a wildcard, "C:\\\\Windows\\\\.*\\\\test\\\\", it works. And this happens across all fields: no matter which field with a path I take, as soon as I spell out the second folder it immediately stops working. I also tried different special characters, all numbers and letters, space, tab etc., tried changing the "\\\\", and tried adding ".*System32.*", but nothing works out. Has anyone else ever run into this issue and found a solution?
The env variable OTEL_EXPORTER_OTLP_TRACES_HEADERS has not been created. Regarding OTEL_OTLP_EXPORTER_ENDPOINT, it was set to http://localhost:4317; I have now changed it to 4318. Regarding SPLUNK_ACCESS_TOKEN, I had not changed this value to the new one; it is changed now. I will restart the application, generate traffic again, and let you know.
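For reference, a hedged sketch of how those variables are commonly exported when the app sends traces to a local Splunk OTel Collector over OTLP/HTTP (note the spec's variable name is OTEL_EXPORTER_OTLP_ENDPOINT). The token values are placeholders, and whether the headers variable is needed at all depends on whether you export via the local collector, which normally injects the token itself:

# OTLP/HTTP endpoint of the local collector
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"

# usually only needed when exporting directly to the Splunk ingest endpoint, not via a local collector
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="x-sf-token=<new access token>"

# token used by the Splunk distribution / local collector
export SPLUNK_ACCESS_TOKEN="<new access token>"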
I tried this as well and increased the depth_limit in limits.conf on the HF, under the Tenable add-on's local directory, but it is still not working.

[rex]
depth_limit = 10000

My character limit is 9450 characters total in an event. Still not working.
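One hedged way to confirm the override is actually being picked up is to ask btool which limits.conf wins:

# shows the effective [rex] settings and the file each value comes from
$SPLUNK_HOME/bin/splunk btool limits list rex --debug

Also worth checking: if the rex in question is a search-time extraction from the TA, it runs on the search head/indexers rather than the HF, so the override may need to live there instead.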