I'm trying to use the recently released 8.1.0 Universal Forwarder to send logs over HTTP:
https://docs.splunk.com/Documentation/Forwarder/8.1.0/Forwarder/Configureforwardingwithoutputs.conf#...
I have my outputs.conf configured as described in that documentation:
[httpout]
httpEventCollectorToken = [my_hec_token]
uri = http://[my_splunk_url]:8088
batchSize = 65536
batchTimeout = 5
I am also able to curl the HTTP Event Collector and successfully test the endpoint from the machine running the Universal Forwarder:
curl -k http://[my_splunk_url]:8088/services/collector/event -H "Authorization: Splunk [my_hec_token]" -d '{"event": "hello world"}'
{"text":"Success","code":0}
However, when I start the Universal Forwarder, it shows the following error in splunkd.log:
10-20-2020 14:41:40.989 +0000 ERROR S2SOverHttpOutputProcessor - HTTP 404 Not Found
10-20-2020 14:41:50.103 +0000 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
I have tried using https (although I know that the HEC endpoint in this case does not use https), and I have tried appending the /services/collector/event or /services/collector URL paths to the uri in the config, but with any of these changes I instead get a 502 error in the log.
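For example, one of the variants I tried in outputs.conf (same placeholders as above):
[httpout]
httpEventCollectorToken = [my_hec_token]
uri = http://[my_splunk_url]:8088/services/collector/event
batchSize = 65536
batchTimeout = 5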
How can I troubleshoot this?
Hello @nunoaragao, unfortunately I no longer have access to the Splunk UF to perform a check.
We never had access to the third-party Splunk instance we were sending the data to.
By the way, I didn't quite understand which issue you are facing.
Please remember that in outputs.conf you don't have to specify the HEC endpoint explicitly (/services/collector/s2s), just the base URI (https://yourdomain.com), so:
uri = https://yourdomain.com
and not:
uri = https://yourdomain.com/services/collector/s2s
Hi @edoardo_vicendo , thanks for the reply.
Yeah, there's no issue with sending the data. Like you, we managed to crack it.
But the HEC that receives the data is also receiving from an appliance and an AWS Firehose, on two other input tokens. Using the Splunk search I sent, I'm able to see metrics for connections, bytes ingested and parsing errors for those two other tokens, but NONE for the token used by the UF sending S2S over HTTP.
You are welcome.
I would try checking based on what is written here:
https://docs.splunk.com/Documentation/Splunk/latest/Data/TroubleshootHTTPEventCollector
In particular:
1- Check if HEC token is enabled (I guess so :-))
2- Verify if ACK is enabled
3- Look at the log file directly on the machine
$SPLUNK_HOME/var/log/introspection/splunk/http_event_collector_metrics.log
4- Run a more general query
index="_introspection" token
5- Enable DEBUG logging (a sketch of one way to do this is below)
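For step 5, a minimal sketch of one way to turn on DEBUG for the output processor seen in the errors above (assuming the standard log-local.cfg override on the UF; the category name is taken from the splunkd.log lines in this thread):
# $SPLUNK_HOME/etc/log-local.cfg
[splunkd]
category.S2SOverHttpOutputProcessor=DEBUG
# then restart the forwarder
$SPLUNK_HOME/bin/splunk restart
Remember to set it back to INFO afterwards, since DEBUG is very verbose.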
We have solved the issue with this config.
Note: in server.conf it is better to first test with proxy_rules = * and then restrict it.
server.conf
[proxyConfig]
http_proxy = http://ip:port
https_proxy = http://ip:port
proxy_rules = *
no_proxy = localhost, 127.0.0.1, ::1
outputs.conf
[httpout]
httpEventCollectorToken = XXXX-XXXX-XXXX-XXXX-XXXX
uri = https://yourdomain.com
We had to put the Splunk UF in DEBUG mode, and it turns out Splunk appends "/services/collector/s2s" by itself, so there is no need to add it in the httpout uri config:
12-21-2021 19:01:38.193 +0100 DEBUG S2SOverHttpOutputProcessor - S2SHttp Json transaction uri=https://yourdomain.com/services/collector/s2s, with sending size: 373645
Hi @edoardo_vicendo ,
Do you still have your working setup?
Do you find that introspection logs from the HEC receiver instances report metrics for tokens used by "/services/collector/raw" and "/services/collector/event", but not "/services/collector/s2s" ?
index="_introspection" component=HttpEventCollector data.token_name=*
I dislike replying to my own comment, but I got an answer from Splunk Support.
HTTP Event Collector does NOT log metrics for UFs sending data over HTTP. This is tracked in internal ticket SPL-239230, "No metrics are sent to the http_event_collector_metrics.log", which has been in the backlog since 2023.
@edoardo_vicendo we are facing the same issue, but I still see the same error even after adding the proxy config under server.conf:
ERROR S2SOverHttpOutputProcessor - HTTP 502 Bad Gateway
Here's my outputs.conf file:
[httpout]
httpEventCollectorToken = ###khldkhfkahl979797####
uri = https://10.x.x.x:443
batchSize = 32768
batchTimeout = 10
It's a Network Load Balancer on AWS; are you using the same kind of load balancer?
@prakash007 You posted this over a year ago, but I'd like to know: did you manage to solve this issue?
I have a similar setup and I am getting HTTP 502 Bad Gateway as well.
Hi @prakash007
You probably don't need to declare the port in the uri config; 443 is the default for https connections.
By the way, even with the correct configuration I posted previously we were getting an HTTP 502 Bad Gateway error. Our use case was to export some logs from an on-premise data center to a third-party Splunk installation hosted in AWS, with a load balancer and a WAF in front of it. Those components were managed by the third-party admin, and as far as I know they had to modify the WAF rules to get rid of the HTTP 502.
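If you need to work out where the 502 comes from, a minimal sketch of a manual test from the UF host through the same proxy (the proxy address, domain and token placeholders are simply the ones used earlier in this thread):
curl -k -x http://ip:port https://yourdomain.com/services/collector/s2s -H "Authorization: Splunk XXXX-XXXX-XXXX-XXXX-XXXX" -d 'test'
A 502 here points at the proxy / load balancer / WAF path rather than at the forwarder configuration, while a response like {"count":0,"text":"Invalid data format","code":6} means the HEC endpoint itself is reachable.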
Same issue here, were you able to solve it?
12-16-2021 16:23:59.872 +0100 ERROR S2SOverHttpOutputProcessor [1631141 parsing] - HTTP 502 Bad Gateway
Which Splunk Enterprise Version are you running?
httpout on UFs requires Splunk Enterprise (or Cloud) to run on 8.1.x as well.
8.1 introduced a new HEC endpoint to which the UFs send their data over http: /services/collector/s2s
That explains why your troubleshooting on the /event endpoint worked.
Sadly you cannot use curl to send test data to the /s2s endpoint in the same way as you could with the /event endpoint, as Splunk expects a different format on /s2s.
But if the endpoint is available, a curl with the right token in the header should at least give you this response:
{"count":0,"text":"Invalid data format","code":6}
Hope this helps!