All Posts


Glad it worked out.  JSON allows for semantic expression.  The more traditional "Splunk" trick is to use string concatenation, then split after stats.  The tojson command is present in all Splunk versions; in this case, it is also very concise. If you remove the rest of the search after that chart, you'll see something like this:

_raw | false | true
{"lastLogin":"2024-12-12T23:42:47","userPrincipalName":"yliu"} |  | 28
{"lastLogin":"2024-12-13T00:58:38","userPrincipalName":"splunk-system-user"} | 290 | 150

The intent is to construct a chart that will render the desired table layout while retaining all the data needed to produce the final presentation. (This is why I asked for a mockup table, so I know how you want to present the data.  Presentation does influence the solution.)
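For reference, the concatenation trick mentioned above looks roughly like this (a sketch: the field names are taken from the sample JSON, and the by-field accountEnabled and the "##" separator are assumptions -- substitute your own):

```
| eval key = userPrincipalName . "##" . lastLogin
| chart count over key by accountEnabled
| eval userPrincipalName = mvindex(split(key, "##"), 0)
| eval lastLogin = mvindex(split(key, "##"), 1)
| fields - key
```

The separator just needs to be a string that never appears in the concatenated field values, so the split after stats is unambiguous.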
Unless you start Splunk on each of those intermediate versions, it doesn't perform the conversions and other migration actions that are needed before the next update. You have now done a direct update from 9.0.x to 9.3.2, and this is not a supported path. Usually Splunk is installed as root, but it should run as the splunk (or another non-root) user. Have you looked at what the logs say, especially migration.log and splunkd.log?
I see:

$ ps -ef | grep splunk
splunk 2802446 2802413 0 Dec08 ? 00:00:08 [splunkd pid=2802413] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]

Meaning, the splunk user runs splunkd on the host, yet when I sudo to the splunk user, I don't have access to these log files, even though they are being ingested.
I love the idea of HEC inputs directly on the indexers (with an LB in front of them)!
You have no _time in your output fields. See https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/UseSireportingcommands#Summary_indexing_of_data_without_timestamps
We have more than one instance of S1 configured in the SentinelOne app on our SH. We do NOT have the S1 TA installed anywhere else. We have noticed that you can only specify a single "SentinelOne Search Index" in the base configuration. We have more than one index because we have configured each of our instances to go to a different index. Because of this, the only index where events are typed and tagged properly is the index we have selected in the app. Does anyone know how we can get around this and have the events in the other indexes typed and tagged correctly?
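One common workaround (a sketch only: the stanza name, index names, and sourcetype below are placeholders -- copy the real eventtype stanza from the app's default/eventtypes.conf) is to override the app's eventtype in a local/eventtypes.conf on the SH so its search covers every S1 index:

```
# local/eventtypes.conf on the search head
# [sentinelone] is a placeholder stanza name; copy the real stanza from the
# app's default/eventtypes.conf and change only the index clause.
[sentinelone]
search = index IN (s1_prod, s1_dev) sourcetype=sentinelone*
```

Since tags are bound to eventtypes, the app's existing tags should then apply to events in all of the listed indexes.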
And what user does your splunkd run with?
I'm not sure what you mean by "subcluster". You can have HEC inputs directly on indexers (and have an LB in front of them) or can have a farm of HFs with HECs sending to the cluster.
Thanks this worked like a charm! 
You may just want to copy/paste a small sample of anything interesting in those logs. Can you also check /var/log/syslog or /var/log/messages for potentially interesting errors coming directly from the OTel collector?
The logs that I found are /var/log/apache2/access.log and error.log. I don't know how to attach all logs here.
I think you have some options.

You could configure a statsd receiver on your OTel collector and then send your metrics to that receiver's listening endpoint: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver

If it's possible to export OTLP-formatted metrics and/or logs, you could send them to the OTLP receiver's listening endpoint on your OTel collector (port 4317 for gRPC or 4318 for HTTP).

For logs that you can collect from disk, you could use a Splunk universal forwarder to get your logs into Splunk Cloud or Splunk Enterprise. Or, you could use an OTel filelog receiver to collect those logs from disk and send them to Splunk Cloud/Enterprise via an HEC (HTTP Event Collector) endpoint: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/filelog-receiver.html
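A minimal sketch of the first option, wiring a statsd receiver into the collector's metrics pipeline (this assumes the default splunk-otel-collector layout, where the signalfx exporter ships metrics; adjust to your own config):

```yaml
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"      # default statsd port; match your app's target
    aggregation_interval: 60s     # how often accumulated metrics are flushed

service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [signalfx]       # assumed from the default agent config
```

After restarting the collector, anything your app sends in statsd line format to port 8125/UDP should appear as metrics in the configured backend.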
We fail again and again these days when we have major spikes in ingestion, primarily with HEC. What would be a good and efficient way to detect major up/down spikes in data ingestion?
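One common approach (a sketch; the window size and the 3-sigma threshold are illustrative, not a recommendation) is to compare current throughput from the internal metrics log against a rolling baseline:

```
index=_internal source=*metrics.log group=per_sourcetype_thruput
| timechart span=5m sum(kb) AS kb
| streamstats window=12 avg(kb) AS avg_kb stdev(kb) AS stdev_kb
| eval spike = if(abs(kb - avg_kb) > 3 * stdev_kb, 1, 0)
| where spike = 1
```

This flags 5-minute buckets that deviate sharply (up or down) from the previous hour's average; wrapping it in a scheduled alert gives you spike detection.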
On Splunk Cloud, we can receive HEC ingestion directly in the cloud, whereas on-prem we install distinct subclusters for HEC and struggle to scale them up, with multiple downtime incidents. I wonder whether it's possible for an on-prem installation to send HEC data directly to the indexers.
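Sending to HEC on the indexers uses the same standard endpoint as anywhere else. A sketch of a test event (the LB address, token, and index below are placeholders -- substitute your own; the curl line is left commented out since it needs a reachable endpoint):

```shell
# Placeholders: substitute your own LB address, HEC token, and target index.
HEC_URL="https://idx-lb.example.com:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"
PAYLOAD='{"event": "hello from HEC", "sourcetype": "hec_test", "index": "main"}'
echo "$PAYLOAD"
# Uncomment to actually send the event:
# curl -k "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" -d "$PAYLOAD"
```

With HEC enabled on the indexers themselves and an LB distributing across them, no separate HF subcluster is required for this path.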
OK, perhaps 4318 was correct. Can you look at logs on the host where the OTel collector is running and share any errors you find? You might see exporting errors there or some clues as to why your telemetry isn't making it to the back end.
We have a case where the data resides under /usr/feith/log/*.log and the Splunk process can read these files; however, when I log in to the Unix server, I cannot navigate into this directory as the splunk user. What's going on?

bash-4.4$ whoami
splunk
bash-4.4$ pwd
/usr/feith
bash-4.4$ \ls -tlr
total 388
...
drwxr-xr-x. 2 feith feith 4096 Dec 12 12:17 lib
drwx------. 19 feith feith 4096 Dec 13 01:00 log
bash-4.4$ cd log/
bash: cd: log/: Permission denied
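The drwx------ mode means only the owner (feith), root, or a process with the CAP_DAC_READ_SEARCH capability can enter that directory, which would explain why splunkd reads the files while an interactive splunk shell cannot. A small sketch reproducing the symptom, with a possible ACL-based fix commented out (paths here are throwaway examples, and setfacl assumes POSIX ACL support on the filesystem):

```shell
# Reproduce: a mode-700 directory is enterable only by its owner (or root,
# or a process holding CAP_DAC_READ_SEARCH).
mkdir -p /tmp/feith-demo/log
chmod 700 /tmp/feith-demo/log
stat -c '%a' /tmp/feith-demo/log
# Possible fix without changing the owner's permissions (if ACLs are available):
# setfacl -m u:splunk:rx /tmp/feith-demo/log
```

Checking which user splunkd actually runs as (ps -o user= -p <splunkd pid>) is the first thing to verify here.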
Nothing.  
Done. Follow the outputs.

luizpolli@PCWIN11-LPOLLI:~$ cat /etc/php/8.1/apache2/php.ini | grep OTEL
OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$ env | grep OTEL
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl restart apache2
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-12-13 19:02:36 CET; 5s ago
       Docs: https://httpd.apache.org/docs/2.4/
    Process: 211814 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
   Main PID: 211818 (apache2)
      Tasks: 7 (limit: 18994)
     Memory: 14.7M
     CGroup: /system.slice/apache2.service
             ├─211818 /usr/sbin/apache2 -k start
             ├─211819 /usr/sbin/apache2 -k start
             ├─211820 /usr/sbin/apache2 -k start
             ├─211821 /usr/sbin/apache2 -k start
             ├─211822 /usr/sbin/apache2 -k start
             ├─211823 /usr/sbin/apache2 -k start
             └─211824 /usr/sbin/apache2 -k start

Dec 13 19:02:36 PCWIN11-LPOLLI systemd[1]: Starting The Apache HTTP Server...
Dec 13 19:02:36 PCWIN11-LPOLLI systemd[1]: Started The Apache HTTP Server.
Hello Team, I have successfully set up Splunk Observability Cloud to monitor Amazon Web Services through Amazon CloudWatch and can now observe all AWS services via IAM role. Additionally, I have a gRPC application running on an AWS EC2 instance, which generates custom metrics using a StatsD server via Golang. I would like to send these custom metrics to Splunk Observability Cloud to monitor the health of the gRPC application, along with the logs it generates. On my AWS Linux machine, I can see that the host monitoring agent is installed and the splunk-otel-collector service is running. Could you please advise on the method to send the custom metrics and logs generated by the StatsD server from the Golang gRPC application to Splunk Observability Cloud for monitoring? Thank you.
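For context on what the statsd receiver expects: StatsD is a plain-text UDP protocol, one metric per line. A sketch (the metric name and port 8125 are assumptions -- match your Golang app's metric names and the receiver's configured endpoint; the nc line is commented out since it needs a running receiver):

```shell
# A StatsD counter increment is a single text line: name:value|type
METRIC="grpc.request.count:1|c"
echo "$METRIC"
# To actually send it to a local statsd/OTel receiver over UDP:
# echo "$METRIC" | nc -u -w0 localhost 8125
```

If the splunk-otel-collector on the EC2 instance has a statsd receiver configured, pointing the app's StatsD client at that endpoint gets the custom metrics into Splunk Observability Cloud; the filelog receiver or a universal forwarder covers the application logs.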
Prefer the SH side to what? Have or not have INDEXED_EXTRACTIONS? Depends on what?

The needs are simple: we want Splunk not to show duplicated fields when displaying JSON data in tables, and to be consistent.

What is really confusing (and aggravating) about this is that we have other JSON feeds coming in, and those are working just fine. However, each config appears to be different. Sometimes the UF has the props.conf; sometimes the SH has the props.conf. Sometimes the props.conf has "INDEXED_EXTRACTIONS = JSON"; sometimes it doesn't. The whole thing is really confusing as to why Splunk sometimes works properly and sometimes doesn't.

The JSON file is rather large, so I put it on Pastebin: https://pastebin.com/VwkcdLLA

I appreciate your attention to this confusing issue. Let me know if you have any other questions.