Hi @ripleyramirocastillo93
If you want to allow more than one index under one HEC token, add the config below under the token's stanza in inputs.conf:
[http://<HEC token name>]
indexes = <comma-separated list of the indexes the token is allowed to send data to>
If you are using Splunk Cloud, you have to raise a support case to have this config added under the HEC token stanza.
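For example, a token allowed to write to three indexes could look like this (the token name and index names here are made up for illustration):

```ini
[http://my_hec_token]
token = <generated token value>
indexes = main, web, security
# optional: default index used when an event does not specify one
index = main
```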
I hope this will help you.
Thanks,
Dixit
Hi @anandhalagarasan
You can compare the capabilities of the user who cannot view notable events or open the Incident Review dashboard with those of the users who can access it.
You can refer to this Splunk documentation on ES capabilities:
https://docs.splunk.com/Documentation/ES/5.3.1/Install/ConfigureUsersRoles
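As a starting point for the comparison, a search like the following (run by a user with access to the REST endpoint; the role name is a placeholder) lists a role's capabilities and inherited roles:

```
| rest /services/authorization/roles splunk_server=local
| search title="ess_analyst"
| table title imported_roles capabilities
```

Running it for each user's role makes it easy to spot which ES capability is missing.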
Thanks,
Dixit
Hi @jmauleon
Splunk provides one active Splunk Cloud trial per customer, so you may simply have to wait.
You can also clear your browser cache and retry, or try accessing it from another browser; in my case it took 2 days before the instance opened successfully.
Thanks,
Dixit
Hi @jmauleon
I created the same free-trial Splunk Cloud instance and saw the same "Authenticating..." message when I first tried to open it. This is expected behavior, as the UI takes time to come up, so please try again after a few hours.
When I retried after some time (a few hours to a day later), I was able to open the UI successfully.
Thanks,
Dixit
Hi @willemjongeneel
To connect a Splunk Universal Forwarder to an indexer over SSL, add the certificate configuration to the UF's outputs.conf:
[tcpout]
defaultGroup = my_indexers
[tcpout:my_indexers]
server = <indexer DNS name>:<receiving port>
sslCertPath = *******
sslRootCAPath = *********
sslPassword = ********
sslCommonNameToCheck = ********
sslVerifyServerCert = true
useClientSSLCompression = true
Hi @daniel333
You can use Logstash to forward data from Filebeat to Splunk via the HTTP Event Collector (HEC):
Reference: https://medium.com/ibm-garage/how-to-forward-events-from-logstash-to-splunk-4f2608041feb
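A minimal sketch of the Logstash side, using the standard http output plugin; the URL and token are placeholders for your own HEC endpoint and token:

```
output {
  http {
    url         => "https://your-splunk-host:8088/services/collector/event"
    http_method => "post"
    format      => "json"
    headers     => ["Authorization", "Splunk <your-hec-token>"]
  }
}
```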
Hi @rpasupuleti
If you want to install add-ons that are available on Splunkbase, you have to raise a support case so that the Splunk CloudOps team can install the requested add-ons on your Splunk Cloud instance.
Hi @packland
The UF does not do any parsing; in your data pipeline it just forwards the data to the HF, the HF parses it, and then forwards it to the indexer for indexing. If you want to apply any sourcetype-based extraction, you can do it at index time or at search time; in your pipeline I think index-time extractions belong on the HF, so the sourcetype is applied before the data is indexed.
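As an illustration of an index-time sourcetype override on the HF, a config along these lines could be used (the source path, transform name, and sourcetype here are all hypothetical):

```ini
# props.conf on the HF -- names are hypothetical
[source::/var/log/myapp/*.log]
TRANSFORMS-set_sourcetype = set_myapp_sourcetype

# transforms.conf on the HF
[set_myapp_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::myapp:web
```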
Hi @st4ple
Pushing a cluster bundle to the indexers may require a rolling restart, but it depends on the configuration in your bundle: for some settings the indexers only need a reload rather than a restart (this is handled automatically when you apply the bundle). If your configuration does require a rolling restart, you can't just stage the bundle on all the indexers and schedule the restart activity for some other time.
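As a sketch, the usual sequence on the cluster manager looks like this (run from $SPLUNK_HOME/bin; exact behavior depends on your version and bundle contents):

```
# validate the staged bundle and check whether it will need a restart
splunk validate cluster-bundle --check-restart

# push the bundle to the peers (triggers a reload or rolling restart as needed)
splunk apply cluster-bundle

# watch the push / restart progress
splunk show cluster-bundle-status
```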
Hi @season88481
This is the issue where the event field passed to Splunk is empty, which is probably caused by a log record containing a blank message value.
I think you need to add the filter below to the |- block of the output.conf configMap. It resolves the blank-event issue by also handling logs with an empty value (such logs then show up as "E" in Splunk):
# ensure we do not have empty line logs, they cannot be ingested by Splunk and result in 400 response from
# the Splunk HEC
<filter tail.containers.**>
@type jq_transformer
jq 'if .record.log == "\n" then .record.log = "E" else .record.log = .record.log end | .record'
</filter>
Hi @rhendle
You can use aws:sqs, because per the app documentation it is the default for this input, so I suggest using the config recommended by the app documentation.
Hi @jobayer
To get inputs working in the Splunk Add-on for AWS on a heavy forwarder, first add the AWS account and IAM role from which you want to send logs to Splunk, with the correct configuration (access key, secret access key, IAM role ARN, and all the other required information). After this setup, all your inputs will recognize your AWS account and IAM role and will work smoothly.
Hi @anank134
Option 1:
You can use the Install app from file option on the Apps -> Manage Apps page; download the app from Splunkbase (https://splunkbase.splunk.com/app/1448/) and install it from there.
Option 2:
You can extract the downloaded app .tgz file from Splunkbase into the /opt/splunk/etc/apps/ folder and restart Splunk on your Splunk server; this installs the app.
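Option 2 boils down to commands like these (SPLUNK_HOME and the package file name are assumptions; adjust them for your install and the app you downloaded):

```
# extract the Splunkbase package into the apps folder, then restart Splunk
SPLUNK_HOME=/opt/splunk
tar -xzf website-monitoring.tgz -C "$SPLUNK_HOME/etc/apps/"
"$SPLUNK_HOME/bin/splunk" restart
```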
Hi @tthonest
Try below steps to resolve the issue:
First check whether rsyslog is running on your server; if it is, syslog-ng will fail to start because, by default, both listen on the same port. To disable rsyslog, run: systemctl disable rsyslog (and stop a running instance with systemctl stop rsyslog).
Run syslog-ng --syntax-only to verify that there are no syntax errors in your config, since any syntax error will prevent syslog-ng from starting. You can also run /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid in the foreground to check your customized syslog-ng.conf for typos or syntax errors. If there are errors, fix them and restart syslog-ng.
Hi @sivauser
On a Splunk Cloud instance, if port 9997 is down or not listening, you need to raise a support ticket so that the Splunk Cloud networking team can investigate the issue.
You can also test the connection from your forwarder to the Splunk Cloud IP addresses on port 9997 (for example with telnet); that way you can confirm whether port 9997 is up and listening.
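If telnet is not available on the forwarder, a small script like this performs the same TCP reachability check (the host name below is a placeholder for your Splunk Cloud input endpoint):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# placeholder host -- replace with your Splunk Cloud indexer address
print(port_open("inputs.example.splunkcloud.com", 9997))
```

A True result means the port is reachable from that machine; False points at DNS, routing, or firewall issues rather than your forwarder config.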
Hi @FraserC1
Option 1:
If you use a UF, you can send data directly to Splunk Cloud, but the UF will not parse the data; it only forwards it to the Splunk Cloud indexers. For that you just have to put the config in the UF's outputs.conf, and in this case parsing and indexing are done by the Splunk Cloud indexers.
Option 2:
Using only a HF is the better option: it parses the data and sends it to Splunk Cloud for indexing, so in this case you don't need a UF at all, and you put the same config in outputs.conf as in option 1.
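In either case the outputs.conf on the forwarder looks roughly like this (the group name and host are placeholders; in practice Splunk Cloud provides a forwarder credentials app that sets this up for you, including the required certificates):

```ini
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.<your-stack>.splunkcloud.com:9997
```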