All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

If the intention is cloning all data to both destinations and you're okay with the double license ingest, you just need to configure outputs similar to the example below. There could be other TLS settings to include, but a comma-delimited list in [tcpout] will duplicate all logs to both groups listed, and each group can have its own independent cert settings.

Another method is to create a "050_clone_app" with just the [tcpout] stanza, referencing the exact names of the tcpout groups in the 100 UF cloud app and in your other outputs app for on-prem. That way it's modular, can be managed with a deployment server, and when you're ready to cut one destination out you just delete the "050" app and the outputs you no longer want. We do this all the time to migrate from one Splunk to another, with a clone period during migration and testing.

outputs.conf

[tcpout]
defaultGroup = cloud_indexers, onprem_indexers

[tcpout:cloud_indexers]
server = 192.168.7.112:9998, idx2, idx3, etc
clientCert = $SPLUNK_HOME/etc/auth/kramerCerts/SplunkServerCert.pem
(retain settings from the UF Cloud 100 app)

[tcpout:onprem_indexers]
server = 192.168.1.102:9998, idx2, idx3, etc
clientCert = $SPLUNK_HOME/etc/auth/kramerCerts/SplunkServerCert.pem
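The "050_clone_app" idea mentioned above could be sketched roughly like this (the app name and group names here are illustrative assumptions; the group names must match the exact tcpout group names already defined in your existing outputs apps):

```ini
# 050_clone_app/local/outputs.conf  -- hypothetical app/path names
[tcpout]
defaultGroup = cloud_indexers, onprem_indexers

# The [tcpout:cloud_indexers] and [tcpout:onprem_indexers] stanzas stay in
# their own apps (e.g. the 100 UF cloud app and the on-prem outputs app);
# this app only overrides defaultGroup. Deleting this app ends the clone
# period and reverts to whichever defaultGroup the remaining apps define.
```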
Your config looks good, so it could be that the certs were not prepared correctly, or Splunk cannot read them. Splunk's docs are not clear or accurate on cert prep. The server cert on the indexer must contain the leaf cert followed by the private key, and that's it. Any intermediate or root certs are simply referenced by the sslRootCAPath in server.conf.

Ensure those cert files are all readable and owned by the 'splunk' user, and chmod them to 640 to be safe. Make sure you can cat the cert and root CA as the splunk user on the indexer. By the way, for log encryption Splunk only uses the server cert (the indexer cert) to encrypt the logs.

As others mentioned, use the openssl command and check the cert results from it:

$SPLUNK_HOME/bin/splunk cmd openssl s_client -connect <your_indexer>:<port> -showcerts

Also search the internal logs on both the indexer and the UF for TLS errors:

grep -i 'tls\|ssl' /opt/splunk/var/log/splunk/splunkd.log
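The "leaf cert followed by the private key" layout can be sketched with a throwaway self-signed pair (all file names and the CN here are hypothetical, just to illustrate the concatenation order and permissions):

```shell
# Generate a disposable key + leaf cert purely for illustration
openssl req -x509 -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.crt \
  -days 1 -subj "/CN=idx.example.com" 2>/dev/null

# Splunk's server cert file: leaf certificate first, then the private key
cat leaf.crt leaf.key > SplunkServerCert.pem
chmod 640 SplunkServerCert.pem

# Sanity-check that the combined file still parses as a certificate
openssl x509 -in SplunkServerCert.pem -noout -subject
```

Intermediate and root certs are deliberately left out of this file; per the post above, they belong wherever sslRootCAPath in server.conf points.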
I don't think I can as a partner, but I frequently submit changes to their docs and post in Slack.
You can just create a .../local/inputs.conf with stanzas and attributes that override the default config, like this:

[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
_TCP_ROUTING = default-autolb-group
index = _internal

[monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
_TCP_ROUTING = default-autolb-group
index = _internal
The InsightVM API would be the better route, but I don't see that add-on on Splunk Cloud. I would recommend not using the Nexpose app, because it can cause problems on your Nexpose appliance.
Hi Andy, we recently hit a similar issue. We can search the data, but the app's dashboards are not populating it. I have installed the app on the search heads and the heavy forwarder as well. I can see a props.conf in the app's default configs. Do you mean the props.conf needs updating there?
I use Splunk to monitor a basic text file on multiple Windows servers with the following stanza in inputs.conf:

[monitor://C:\Windows\System32\logfiles\Ansible.log]
disabled = 0
sourcetype = Ansible
index = sw
interval = 10

This always works at first, and I can find all the events inside Splunk. But that Ansible.log file is regularly updated by PowerShell or a scheduled task or something similar, and over time several servers will have 0 events for that Ansible.log file. In the file system the file has been updated recently, but the Splunk Universal Forwarder just doesn't send the updates, although those servers do have events from other sourcetypes. Restarting the SplunkForwarder service, restarting the server, and upgrading the Universal Forwarder do not fix the issue. The file is a simple raw text file (typically UTF-8, but I've tried multiple formats). I've made sure permissions are correct and the account which runs the SplunkForwarder service has read rights. What else can I do to have the forwarder send updates to that file?
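One frequent culprit for this symptom (an assumption here, not confirmed from the post) is a file that gets rewritten in place so that its first 256 bytes match a CRC the forwarder has already recorded, which makes the forwarder treat it as already-indexed content. A sketch of the documented mitigation, widening the CRC window via initCrcLength (the value below is illustrative):

```ini
[monitor://C:\Windows\System32\logfiles\Ansible.log]
disabled = 0
sourcetype = Ansible
index = sw
# Assumption: the log is periodically recreated with a similar header, so the
# default 256-byte CRC collides with a previously seen file. Widening the CRC
# window makes the forwarder re-evaluate the file.
initCrcLength = 1024
```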
Hi @Ryan.Paredez , yes, thank you. It was a problem of long-running queries in the backend DB of the controllers. The problem was solved within a few hours by raising an urgent PRE ticket to the AppDynamics engineers. Regards, Alberto
Hi @Tran.Vinh , @sunil.vanmullem  Can you offer any help to Clarence based on your experience with this?
I just happened to use something similar and am getting an SSL error: "Error: Certificate validation error: Self signed certificate in certificate chain". Although I set it to be ignored in the playbook, it still ends with "couldn't complete request".
Hello @shaunm001 , can you please check the internal logs specific to this input? Are there any ERROR entries present?
Has anyone tried to use this process for upgrading the Splunk Universal forwarder? 
Hi @Alberto.Astolfi, Thanks for sharing this with the community. Have you heard back from Support yet? 
Try something like this:

<html>
<style>
#hide_number_distribution .highcharts-data-label text {
    display: none !important;
}
#hide_number_distribution .highcharts-series-0 .highcharts-data-label text {
    display: block !important;
}
#hide_number_distribution .highcharts-series-0 path,
#hide_number_distribution .highcharts-legend .highcharts-series-0 text {
    display: none !important;
}
</style>
</html>
Have you seen https://github.com/splunk/splunk-ansible ? There are some good docs and examples within the repo that you can use to get started! I hope this helps! Will
Hi, I am looking for Ansible playbooks to deploy a Splunk master, indexer, search head, and forwarder. Can anyone provide insights?
In my dashboard, when one of the dropdowns is changed I need to reset the values in the other dropdowns to the default value (*); this can easily be done using the <change> handler, so no issues there. The problem arises when the user clicks a link to the dashboard with pre-populated parameters for the dropdowns (the user is taken to a specific state of the dropdowns). Loading the dashboard with the incoming HTTP parameters for the dropdowns also triggers the <change> handler, and thus resets all of the selected dropdowns.

My question is: how can I prevent the <change> handler from triggering on the initial load of the dashboard? Once the dashboard has loaded, I want the <change> handler to trigger when the user changes certain dropdowns.

I tried the following approach. In the dropdown for which I want to suppress the <change> trigger, a condition was added to check that the token $FirstLoad$ is set to "Done":

<change>
  <condition match="tostring($FirstLoad$) == &quot;Done&quot;">
    <set token="form.PipelineName">*</set>
    <set token="form.LabelName">*</set>
  </condition>
</change>

In the heaviest search I set the token when it completes:

<done>
  <set token="FirstLoad">Done</set>
</done>

The thinking was that on initial load the $FirstLoad$ token will not yet be set, which should prevent the <change> from firing, but as soon as the $FirstLoad$ token is updated to "Done", the <change> is allowed to fire. Very frustrating. Anyway, maybe I am missing something simple? Any ideas are appreciated.
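One variation worth trying (a sketch only, not a verified fix): gate the token on a trivially cheap search rather than the heaviest one, so the guard flips almost immediately after the dashboard initializes rather than whenever the slow search happens to finish. The search id below is hypothetical.

```
<search id="first_load_gate">
  <!-- Cheap gating search: | makeresults completes almost instantly,
       but its <done> still fires only after dashboard initialization,
       so URL-driven input changes should land before the guard is set. -->
  <query>| makeresults</query>
  <done>
    <set token="FirstLoad">Done</set>
  </done>
</search>
```

The assumption here is that the URL-parameter-driven <change> events fire before this search's <done>; if they race, the heavier-search approach in the post would have the same problem only worse.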
Hi @danielbb , there are many Proofpoint modules and many ways to collect logs (syslog, scripts, etc.); see here for guidance: https://www.proofpoint.com/us/partners/splunk Ciao. Giuseppe
Looking at Splunkbase, there are quite a lot of Proofpoint apps/TAs; which one should I install in order to connect to the Proofpoint endpoint and receive the data?
I think that app was removed in 9.3. What is confusing is that you said this was a new build as opposed to an upgrade - maybe you are pushing it and it shouldn't be there? https://docs.splunk.com/Documentation/SecureGateway/3.5.15/ReleaseNotes/Releasenotes