Are you also on a cloud trial? These could just be momentary "server busy" errors. Be sure to check Splunk's internal logs (index=_internal source=*splunkd.log* HttpInputDataHandler) to see if the payload hit a 503 or similar and was then retried. It is expected that HEC clients handle backpressure and timeouts, so from time to time you may see a failed send; as long as the retry succeeds, that's normal, unless you scale up your indexing layer to handle more traffic uninterrupted. The error says "timeout reached", so it could be that Splunk was too busy to answer (especially on a standalone trial or small test boxes). Also, please confirm the full HEC URL you are using. I believe you need the full URL https://http-inputs.foo.splunkcloud.com/services/collector/event (or the trial equivalent). OP looks like they configured just the cloud URL on port 8088, which is not a correct URL for HEC.
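As a sketch, the internal-log check mentioned above might start like this (HttpInputDataHandler is the standard splunkd.log component for HEC; the grouping fields are just a suggestion):

```spl
index=_internal source=*splunkd.log* component=HttpInputDataHandler log_level=ERROR
| stats count by host, log_level
```

If you see bursts of errors that stop on their own, the clients were most likely retrying successfully.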
Same problem, and the index is correct:

# docker logs -f sc4s
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.34.1
Configuring the health check port to: 8080
[2025-01-21 13:54:30 +0000] [129] [INFO] Starting gunicorn 23.0.0
[2025-01-21 13:54:30 +0000] [129] [INFO] Listening at: http://0.0.0.0:8080 (129)
[2025-01-21 13:54:30 +0000] [129] [INFO] Using worker: sync
[2025-01-21 13:54:30 +0000] [138] [INFO] Booting worker with pid: 138
starting syslog-ng

No errors on startup, but these sc4s:events messages keep coming. I have no idea what they are and they are annoying. The index is correct.
Hi @josephp, did you define the field extraction for the critC field on the Search Head? If yes, are the grants for this field set to App or Global? If App, are you in the same App? Ciao. Giuseppe
Hi, we recently migrated from a standalone Search Head to a clustered one. However, we are having issues running some search commands. For example, this query is not working on the new SH cluster:

sourcetype=dataA index=deptA | where critC > 25

On the old search head, this query runs fine and we see the results as expected. But on the SH cluster, it doesn't yield anything. I have run the "sourcetype=dataA index=deptA" search by itself, and both environments see the same events. I am not sure why the search with "| where critC > 25" would work on the standalone SH but not on the cluster. Any help would be appreciated. Thank you
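A quick diagnostic sketch you could run in both environments: the where clause silently drops events where critC is missing or non-numeric, so comparing extraction between the two search heads is a reasonable first step (field names here match the query above; the output field names are just examples):

```spl
sourcetype=dataA index=deptA
| eval hasCritC=if(isnotnull(critC), "yes", "no"), isNumeric=if(isnum(tonumber(critC)), "yes", "no")
| stats count by hasCritC, isNumeric
```

If the cluster shows hasCritC="no" where the standalone SH shows "yes", the field extraction likely was not migrated (or its sharing is App-scoped on the cluster).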
It is clean at startup:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.34.1
Configuring the health check port to: 8080
[2025-01-21 13:36:54 +0000] [135] [INFO] Starting gunicorn 23.0.0
[2025-01-21 13:36:54 +0000] [135] [INFO] Listening at: http://0.0.0.0:8080 (135)
[2025-01-21 13:36:54 +0000] [135] [INFO] Using worker: sync
[2025-01-21 13:36:54 +0000] [138] [INFO] Booting worker with pid: 138
starting syslog-ng
I am getting this all the time. The index exists, I can test it with curl, and when sc4s starts it shows it is able to connect, so this is annoying. What else can I check? It is not well documented.
There are several possible approaches, but each of them has its own drawbacks. The most obvious three are:

1) Use eventstats to add a count to events, then sort and limit by the count value (might be memory-intensive, as I said earlier).

2) Use a subsearch to find the count, then search your whole body of data for those events (if you can't use "fast" commands like tstats for your subsearch, you might hit all the subsearch-related problems; also, you're effectively digging twice through your whole data set).

3) Add more values() aggregations to your stats, listing specific fields (might cause problems with "linking" values from different fields, especially if potentially empty fields are involved).
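As a hedged sketch of approach 1, assuming the grouping field is called host and a threshold of 10 (both are placeholders for your own field and limit):

```spl
<your_search>
| eventstats count AS groupCount BY host
| sort 0 - groupCount
| where groupCount > 10
```

Every event carries its group's count, so the full events survive the filtering, which is exactly why this can get memory-intensive on large data sets.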
Hi. Would it be possible for us to regularly read the statistics from the Protection Group Runs via the Splunk Add-on? These fields, which are also available via Helios, are of interest to us:

Start Time
End Time
Duration
Status
Sla Status
Snapshot Status
Object Name
Source Name
Group Name
Policy Name
Object Type
Backup Type
System Name
Logical Size Bytes
Data Read Bytes
Data Written Bytes
Organization Name

This would make it much easier for us to create the necessary reports in Splunk. Thank you very much.
Check the logs on the receiving end (the server you're connecting to). You can dump the traffic and check whether the TLS negotiation is happening properly, but I suspect it proceeds up to the point where you're refused by the receiving end. The question is why, and the answer should be in your splunkd.log.
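As a starting-point sketch, the splunkd.log check on the receiving side could look like this (the keyword list is only a suggestion for surfacing TLS-related failures):

```spl
index=_internal source=*splunkd.log* log_level=ERROR (SSL OR TLS OR certificate OR handshake)
| stats count by component
```

Once you know which component is complaining, drill into its raw events for the exact certificate or handshake error.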
Hi @deckard1984, do you know the stats command (https://docs.splunk.com/Documentation/SCS/current/SearchReference/StatsCommandOverview)?

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| stats count latest(_time) AS _time values(activity) AS activity BY Computer process_name

Ciao. Giuseppe
Hi @Ste, with my above solution you can reach your target; otherwise, you can use a subsearch (less performant):

<your_search>
    [ search <your_search>
| where stoerCode IN ("K02")
| stats count as periodCount by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier ]
| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
I prefer the other solution. Ciao. Giuseppe
Right now I have a table list with fields populated, where one process_name repeats across multiple hosts with the same EventID:

index=main_sysmon sourcetype=xmlwineventlog process_exec=test EventCode=11 dest=hosts*
| strcat "Event ID: " EventID " (" signature ")" timestampType
| strcat "EventDescription: " EventDescription " | TargetFilename: " TargetFilename " | User: " User activity
| strcat EventDescription ": " TargetFilename " by " User details
| eval attck = "N/A"
| table Computer, UtcTime, timestampType, activity, Channel, attck, process_name

I want a total sum of counts per host and process_name, with all activity (or target file names) listed under each. For example:

Computer | UTC | timestamp | activity  | process_name   | count
1        |     |           | File list | same - repeats | missing value
2        |     |           | File list | same - repeats | missing value
Maybe this will give you what you are looking for: use stats to include all the fields, and if you don't want the count in the table, add a fields command afterwards, like | fields - periodCount

| stats count as periodCount by zbpIdentifier zbpIdentifier_bp importZeit_uF
| sort -periodCount
Hi @splunklearner, access grants are managed in Splunk at the index level, so the best approach is to create different indexes for different grants. Otherwise, you can put all the events in the same index and, when you create roles, put a filter on each one, e.g. one role can see only events in index X with sourcetype A or source B. Ciao. Giuseppe
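As a sketch, such a per-role restriction could be expressed in authorize.conf (the role name and the index/sourcetype values are examples; srchIndexesAllowed and srchFilter are the standard settings for role-level index access and search filters):

```
[role_dept_a]
srchIndexesAllowed = x
srchFilter = (sourcetype=A OR source=B)
```

Keep in mind that srchFilter restricts what searches return, while index-level separation also controls retention and storage, which is why separate indexes are usually the cleaner approach.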
Hello,
The configuration files contain the following:
[sslConfig]
enableSplunkdSSL = true
sslPassword = value
sslRootCAPath = /path/to/ca/cert
serverCert = /path/to/srv/cert
caTrustStore = splunk
caTrustStorePath = path/to/trust/ca
caPath = path/to/trust/c
caCertFile = path/to./ca
Yes, the connection was to the management port.
The self-signed certificate was only for the web interface (and I have no issues regarding that).
However, the problem lies between the components of the architecture.