All Posts

I'm not sure what you mean by "subcluster". You can have HEC inputs directly on the indexers (with a load balancer in front of them), or you can have a farm of heavy forwarders (HFs) with HEC inputs sending to the cluster.
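For reference, a minimal sketch of what a HEC input on the indexers could look like in inputs.conf (the stanza name, token, and index below are placeholders; in a cluster you would typically push this from the Cluster Manager):

[http]
disabled = 0
port = 8088

[http://my_hec_input]
token = <your-token-guid>
index = main
disabled = 0

With an external load balancer in front of port 8088 on each indexer, clients can then post HEC data directly to the cluster.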
Thanks, this worked like a charm!
You may just want to copy/paste a small sample of anything interesting in those logs. Can you also check /var/log/syslog or /var/log/messages for potentially interesting errors coming directly from the OTel collector?
The logs that I found are /var/log/apache2/access.log and error.log. I don't know how to attach all logs here.
I think you have some options.

You could configure a statsd receiver on your OTel collector and then send your metrics to that receiver's listening endpoint: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver

If it's possible to export OTLP formatted metrics and/or logs, you could send them to the OTLP receiver's listening endpoint on your OTel collector (port 4317 for grpc or 4318 for http).

For logs that you can collect from disk, you could use a Splunk universal forwarder to get your logs into Splunk Cloud or Splunk Enterprise. Or, you could use an OTel filelog receiver to collect those logs from disk and send to Splunk Cloud/Enterprise via an HEC (http event collector) endpoint: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/filelog-receiver.html
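A minimal sketch of the statsd receiver option, assuming the Splunk distribution of the OTel collector with its default signalfx metrics exporter already configured (the port and aggregation interval are only illustrative):

receivers:
  statsd:
    endpoint: "0.0.0.0:8125"
    aggregation_interval: 60s

service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [signalfx]

Your application's statsd client would then point at port 8125 on the collector host.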
We keep failing these days when we have major spikes in ingestion, primarily through HEC. What would be a good and efficient way to detect major up/down spikes in data ingestion?
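One way to sketch this, assuming your search head can query the _internal index, is to baseline indexing throughput from metrics.log and flag intervals that deviate sharply (the span, window, and 3-sigma threshold are illustrative):

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=15m sum(kb) as total_kb
| streamstats window=24 avg(total_kb) as baseline stdev(total_kb) as sd
| eval upper=baseline+3*sd, lower=baseline-3*sd
| where total_kb > upper OR total_kb < lower

A similar search scoped to the indexes or sourcetypes fed by your HEC inputs can be saved as an alert.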
On Splunk Cloud we can receive HEC ingestion directly in the cloud, whereas on-prem we run distinct subclusters for HEC and struggle to scale them up, with multiple downtime incidents. I wonder whether it's possible for an on-prem installation to send HEC data directly to the indexers.
OK, perhaps 4318 was correct. Can you look at logs on the host where the OTel collector is running and share any errors you find? You might see exporting errors there or some clues as to why your telemetry isn't making it to the back end.
We have a case where the data resides under /usr/feith/log/*.log and the Splunk process can read these files; however, when I log in to the Unix server I cannot navigate into this directory as the splunk user. What's going on?

bash-4.4$ whoami
splunk
bash-4.4$ pwd
/usr/feith
bash-4.4$ \ls -tlr
total 388
...
drwxr-xr-x. 2 feith feith 4096 Dec 12 12:17 lib
drwx------. 19 feith feith 4096 Dec 13 01:00 log
bash-4.4$ cd log/
bash: cd: log/: Permission denied
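The "drwx------. 19 feith feith" entry means /usr/feith/log is mode 700 and owned by feith, so only feith (or root) can traverse it; splunkd may still read the files if it runs as root or was started under a different account. One possible fix, a sketch assuming root access and a filesystem with POSIX ACL support:

# run as root: grant the splunk user read/traverse access, including a default ACL for new files
setfacl -R -m u:splunk:rX /usr/feith/log
setfacl -R -d -m u:splunk:rX /usr/feith/log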
Nothing.  
Done. The outputs follow.

luizpolli@PCWIN11-LPOLLI:~$ cat /etc/php/8.1/apache2/php.ini|grep OTEL
OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true

luizpolli@PCWIN11-LPOLLI:~$ env|grep OTEL
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true

luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl restart apache2
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-12-13 19:02:36 CET; 5s ago
       Docs: https://httpd.apache.org/docs/2.4/
    Process: 211814 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
   Main PID: 211818 (apache2)
      Tasks: 7 (limit: 18994)
     Memory: 14.7M
     CGroup: /system.slice/apache2.service
             ├─211818 /usr/sbin/apache2 -k start
             ├─211819 /usr/sbin/apache2 -k start
             ├─211820 /usr/sbin/apache2 -k start
             ├─211821 /usr/sbin/apache2 -k start
             ├─211822 /usr/sbin/apache2 -k start
             ├─211823 /usr/sbin/apache2 -k start
             └─211824 /usr/sbin/apache2 -k start

Dec 13 19:02:36 PCWIN11-LPOLLI systemd[1]: Starting The Apache HTTP Server...
Dec 13 19:02:36 PCWIN11-LPOLLI systemd[1]: Started The Apache HTTP Server.
Hello Team, I have successfully set up Splunk Observability Cloud to monitor Amazon Web Services through Amazon CloudWatch and can now observe all AWS services via IAM role. Additionally, I have a gRPC application running on an AWS EC2 instance, which generates custom metrics using a StatsD server via Golang. I would like to send these custom metrics to Splunk Observability Cloud to monitor the health of the gRPC application, along with the logs it generates. On my AWS Linux machine, I can see that the host monitoring agent is installed and the splunk-otel-collector service is running. Could you please advise on the method to send the custom metrics and logs generated by the StatsD server from the Golang gRPC application to Splunk Observability Cloud for monitoring? Thank you.
Prefer the SH side to what? Have or not have INDEXED_EXTRACTIONS? Depends on what? The needs are simple: we want Splunk not to show duplicated fields when displaying JSON data in tables, and to be consistent. What is really confusing (and aggravating) about this is that we have other JSON feeds coming in, and those are working just fine. However, each config appears to be different. Sometimes the UF has the props.conf. Sometimes the SH has the props.conf. Sometimes the props.conf has "INDEXED_EXTRACTIONS = JSON", sometimes it doesn't. The whole thing is really confusing as to why Splunk sometimes works properly and sometimes doesn't. The JSON file is rather large, so I put it on Pastebin: https://pastebin.com/VwkcdLLA Appreciate your attention to this confusing issue. Let me know if you have any other questions.
Can we try using the grpc endpoint (http://localhost:4317) instead of the http endpoint (http://localhost:4318)? It's possible that PHP uses grpc by default. 
Hi bishida, The token has been rotated. As a matter of fact, I tried to set it up to send directly, but it was configured with the OTel collector before, with the same result for the env variables. Image below. The idea is to configure it against a locally running OTel collector instance. As mentioned before, it is not working either locally via the OTel collector or directly to o11y cloud.
Hello Splunk Community,

I am running Splunk Enterprise version 9.2.3.

Steps to reproduce:
1. Make a config change to an app on the Cluster Manager - $SPLUNK_HOME/etc/master-apps/<custom_app>/local/indexes.conf
2. Validate and Check Restart from the Cluster Manager GUI.

Bundle information:
- Updated Time shows a date/time from last month (did not update)
- The Active Bundle ID did not change

I am unable to make changes to apps and have them pushed to the indexers.

Note: there are other issues. All three of my clustered indexers are in Automatic Detention, and I am seeing these messages in the GUI:
- Search peer xxx has the following message: The minimum free disk space (1000MB) reached for /opt/splunk/var/run/splunk/dispatch.
- Search peer xxx has the following message: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.

Ultimately, I am trying to push a change to the frozenTimePeriodInSecs setting to reduce stored logs and free up space.

Thanks for your help
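For reference, once the bundle push works again, the retention change itself is small. A sketch of master-apps/<custom_app>/local/indexes.conf, where the index name and the 30-day value are placeholders:

[your_index]
# 2592000 seconds = 30 days; events older than this roll to frozen
# (deleted unless coldToFrozenDir is set)
frozenTimePeriodInSecs = 2592000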
Hi All,

I am trying to create a summary index for Cisco ESA Textmail logs. I will then rebuild the Email data model using the summary index. The scheduled search is running correctly, but when I try to search the summary index I get no events returned. How does one check that events are going into the summary index correctly?

Steps taken:
1. Created a new index called email_summary
2. Created a scheduled search to run every 15 minutes
3. In the settings, ticked 'Enable summary indexing'

Saved search:

index=email sourcetype=cisco:esa:textmail
| stats values(action) as action, values(dest) as dest, values(duration) as duration, values(file_name) as file_name, values(message_id) as message_id, values(recipient) as recipient, dc(recipient) as recipient_count, values(recipient_domain) as recipient_domain, values(src) as src, values(src_user) as src_user, values(src_user_domain) as src_user_domain, values(message_subject) as subject, values(tag) as tag, values(url) as url, values(user) AS user, values(vendor_product) as vendor_product, values(vendor_action) as filter_action, values(reputation_score) as filter_score BY internal_message_id

Thanks, Dave
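A quick way to sketch the check, assuming the defaults for the summary-indexing action (events it writes land with sourcetype=stash and source set to the saved search name), is to search the summary index over All time and then confirm in the scheduler log that the action actually ran; the saved search name below is a placeholder:

index=email_summary sourcetype=stash
| stats count by source

index=_internal sourcetype=scheduler savedsearch_name="<your saved search name>"
| table _time status result_count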
All-in-one environment. The user account on the machine I used was ec2-user. I assume, but am not sure, that was the user used to do the original install. Honestly, I didn't try to start the instance after the 9.2.4 upgrade. I was on 9.0, applied the 9.2.4 upgrade, and then immediately applied the 9.3.2 upgrade. I used the tgz upgrade file and followed the 9.3.2 upgrade documentation. Again, total noob here.
Your token is exposed in your post. Please be sure to replace that token ASAP. As for troubleshooting, I noticed you have 2 different values for OTEL_EXPORTER_OTLP_ENDPOINT. Is your goal to point your telemetry to a locally running instance of the OTel collector or do you want to bypass a collector and send directly to the Observability Cloud backend? 
It depends. In most cases I prefer the SH side. But this depends on your JSON and your needs. Can you show your raw event on disk before indexing?
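To make the earlier point about duplicated JSON fields concrete, a sketch of the usual search-head-side props.conf (the sourcetype name is a placeholder): when INDEXED_EXTRACTIONS = json is set on the UF/indexer, the search head should not extract the same fields a second time.

[your:json:sourcetype]
# search-time JSON extraction only (use this when INDEXED_EXTRACTIONS is NOT set upstream)
KV_MODE = json

# if INDEXED_EXTRACTIONS = json IS set on the UF, use these instead to avoid duplicate field values
# KV_MODE = none
# AUTO_KV_JSON = false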