All Posts

Hi @erick4x4 , Splunk, by default, doesn't index the same logs twice, so if this file is always the same, or has the same first 256 chars, it isn't read. For more info, see https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Inputsconf, searching for crcSalt. Anyway, you could use a larger initCrcLength parameter to check more than the first 256 chars, or write the file with a different name (e.g. including date and/or time) and use the crcSalt = <SOURCE> parameter. Ciao. Giuseppe
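For illustration, a minimal inputs.conf sketch of those two options (the monitored path and the 1024 length are hypothetical):

[monitor:///opt/exports/daily_report.log]
# checksum more than the default first 256 characters
initCrcLength = 1024
# add the file path to the CRC so renamed files (e.g. with date/time in the name) are re-read
crcSalt = <SOURCE>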
Hi @takuyaikeda , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Now the overlay line and the "Total" value in the legend are hidden. The overlay values to display are still not visible.
@khj unfortunately changing the font size is not supported yet. Here’s a link outlining the available features: Add text, links, and images with Markdown - Splunk Documentation
I'm trying to resize text in a pie chart or column chart in a Splunk Dashboard Studio dashboard, but I'm not finding a way to do it. Does anyone know if there's a way to resize text?
@aravind If you're a Splunk Partner you surely have a Partner Portal account and can see your Channel Manager. https://splunk.my.site.com/partner/s/
Thank you for providing valuable information. While we were able to gather information related to the scheduled search execution results using the query you provided, we were unable to obtain the logs detected by the search execution. However, using your query as a reference, we were able to achieve our desired outcome via the server's CLI, so I would like to report that the issue has been resolved. The commands we are executing on the server are as follows:

curl -sS -k -u '<ID:PW>' https://localhost:8089/services/search/jobs/export \
  -d search='search index=_audit "user=splunk-system-user" "info=completed" NOT "result_count=0" NOT "savedsearch_name=\"\"" earliest=-2h | stats count by timestamp savedsearch_name search_id' \
  -d output_mode=csv \
| while read line ; do
    echo "${line}" | cut -d ',' -f 1
    echo "${line}" | cut -d ',' -f 2
    sid=`echo ${line} | cut -d ',' -f 3 | sed "s/'//g"`
    curl -sS -k -u '<ID:PW>' https://localhost:8089/services/search/jobs/export \
      -d search="|loadjob ${sid}" \
      | grep "<field k='_raw'>" \
      | sed -e 's/<\/*field[^>]*>//g' -e 's/<\/*v[^>]*>//g'
  done

(I understand that this is not the optimal solution, but at least I was able to obtain the necessary information with this one-liner.) Giuseppe, grazie mille
Hello Splunk Support, We are using Splunk Cloud in our company and we need the contact details of our Splunk Cloud Account Manager to update our internal records. Could you please provide us with the name, email, and contact information of our assigned account manager?
Our environment is on Google Cloud Platform, with a SH cluster of 3 search heads. Earlier, the issue was that notable index data was getting stored locally on each search head. To fix this, we created the notable index on the indexer cluster and then forwarded the SH data toward the indexer cluster using the "indexer discovery" method. Now the problem is that the configuration (props.conf & transforms.conf) that was responsible for redirecting the data to the local notable index (on each SH) is not taking effect to forward the data into the notable index created on the indexer cluster. However, internal index data is now forwarding to the indexer cluster.
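For reference, a hedged sketch of the search head outputs.conf that this setup implies; the URI, key, and group names are placeholders, and the discovery attribute is manager_uri on 9.x (master_uri on older releases):

# outputs.conf on each search head
[indexer_discovery:idxc]
manager_uri = https://<cluster-manager>:8089
pass4SymmKey = <discovery_key>

[tcpout:idxc_group]
indexerDiscovery = idxc
useACK = true

[tcpout]
defaultGroup = idxc_group

# stop the SHs from also indexing the forwarded data locally
[indexAndForward]
index = false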
Hi @Esky73 , thank you for your insights. Can you provide details on the custom alert action that runs a batch job restart? Is the script run from Splunk against the Control-M server, or is the script present on the Control-M server with Splunk having a way to trigger it externally?
@tt-nexteng The password to the RSA private key that is in the server certificate file is wrong. This generates an error similar to the following:

Can't read key file /opt/splunkforwarder/etc/auth/server.pem
02-04-2025 03:53:19.898 +0000 ERROR HTTPServer [13291 HTTPDispatch] - Error setting up TLS, TLS will not be enabled. file=server Exception: Can't read key file /opt/splunkforwarder/etc/auth/server.pem SSL error code=101077092 message="error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt"
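A quick way to confirm this on the forwarder, as a sketch (the path comes from the error above; the command prompts for the key passphrase):

/opt/splunkforwarder/bin/splunk cmd openssl rsa -in /opt/splunkforwarder/etc/auth/server.pem -check -noout

A wrong passphrase reproduces the same "bad decrypt" message; if the key decrypts cleanly, update sslPassword in the relevant config to match.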
Hi Clarence, what Content-Type are you using in the headers? With the latest release of the Controller, there is a specific Content-Type you need to use, not the protobuf one as before.
The errors from the UF log:

02-04-2025 03:53:19.270 +0000 WARN SSLOptions [0 MainThread] - server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
02-04-2025 03:53:19.274 +0000 ERROR SSLCommon [0 MainThread] - Can't read key file /opt/splunkforwarder/etc/auth/server.pem SSL error code=101077092 message="error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt"
02-04-2025 03:53:19.274 +0000 ERROR ServerConfig [0 MainThread] - Couldn't initialize SSL Context for HTTPClient in ServerConfig
02-04-2025 03:53:19.274 +0000 INFO ServerConfig [0 MainThread] - disableSSLShutdown=0
02-04-2025 03:53:19.658 +0000 INFO ProxyConfig [13255 MainThread] - Successfully initialized enable_tls_proxy=0 from server.conf for splunkd.
02-04-2025 03:53:19.658 +0000 INFO loader [13255 MainThread] - TLS proxy is not enabled. Will not start the TLS proxy server.
02-04-2025 03:53:19.798 +0000 INFO TcpOutputProc [13334 parsing] - Initializing connection for non-ssl forwarding to 176.32.83.56:9997
02-04-2025 03:53:19.886 +0000 INFO loader [13291 HTTPDispatch] - Setting SSL configuration.
02-04-2025 03:53:19.886 +0000 INFO loader [13291 HTTPDispatch] - Server supporting SSL versions=TLS1.2
02-04-2025 03:53:19.898 +0000 ERROR HTTPServer [13291 HTTPDispatch] - Error setting up TLS, TLS will not be enabled. file=server Exception: Can't read key file /opt/splunkforwarder/etc/auth/server.pem SSL error code=101077092 message="error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt"
If the intention is cloning all data to both and you're okay with the double license ingest, you just need to configure outputs similar to this below. There could be other TLS settings to include, but adding a comma-delimited list in [tcpout] will duplicate all logs to both groups listed, which can each have their own independent cert settings. Another method is to create a "050_clone_app" with just the [tcpout] stanza, calling the exact name of the tcp group in the 100 UF cloud app and your other outputs app for your on-prem. That way it's modular, can be managed with a DS, and when you're ready to cut one out you just delete the "050" app and the outputs you no longer want. We do this all the time to migrate from one Splunk to another with a clone period during migration and testing.

outputs.conf

[tcpout]
defaultGroup = cloud_indexers, onprem_indexers

[tcpout:cloud_indexers]
server = 192.168.7.112:9998, idx2, idx3, etc
clientCert = $SPLUNK_HOME/etc/auth/kramerCerts/SplunkServerCert.pem
# retain settings from the UF Cloud 100 app

[tcpout:onprem_indexers]
server = 192.168.1.102:9998, idx2, idx3, etc
clientCert = $SPLUNK_HOME/etc/auth/kramerCerts/SplunkServerCert.pem
Your config looks good, so it could be the certs were not prepared correctly, or Splunk cannot read them. Splunk's docs are not clear or accurate for cert prep. The server cert on the indexer must have the leaf cert followed by the private key, and that's it. Any intermediate or root certs are simply referenced by the sslRootCAPath in server.conf. Ensure those cert files are all readable and owned by the 'splunk' user, and chmod them to 640 to be safe. Make sure you can cat the cert and root using the splunk user on the indexer. By the way, for log encryption, Splunk only uses the server cert (indexer cert) to encrypt the logs. As others mentioned, use the openssl command and check the cert results from it:

$SPLUNK_HOME/bin/splunk cmd openssl s_client -connect <your_indexer>:<port> -showcerts

Also search the internal logs on both the indexer and the UF for TLS errors:

cat /opt/splunk/var/log/splunk/splunkd.log | grep -i 'tls\|ssl'
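To make that cert order concrete, a sketch of building the combined server cert (filenames are hypothetical):

# leaf certificate first, then its private key; nothing else in this file
cat indexer_leaf_cert.pem indexer_private_key.key > /opt/splunk/etc/auth/mycerts/server.pem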
I don't think I can as a partner, but I frequently submit changes to their docs and post in Slack
You can just create a .../local/inputs.conf with stanzas and attributes that override the default config like this:

[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
_TCP_ROUTING = default-autolb-group
index = _internal

[monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
_TCP_ROUTING = default-autolb-group
index = _internal
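To verify the merged result on the forwarder, a quick check (a sketch; btool's --debug flag shows which file each attribute comes from):

$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug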
InsightVM will be better through the API, but I don't see the add-on on Splunk Cloud. I recommend not using the Nexpose one, because it will mess up your Nexpose appliance.
Hi Andy, Recently, we have had a similar issue. We can search the data, but the app's dashboards are not populating the data. I have installed the app on the search heads and the heavy forwarder as well. I can see props.conf in the app's default configs. Do you mean props.conf needs updating there?
I use Splunk to monitor a basic text file on multiple Windows servers with the following stanza in inputs.conf:

[monitor://C:\Windows\System32\logfiles\Ansible.log]
disabled = 0
sourcetype = Ansible
index = sw
interval = 10

This always works at first and I can find all the events inside Splunk. But that Ansible.log file is regularly updated by PowerShell or a scheduled task or something similar, and over time several servers will have 0 events for that Ansible.log file. In the file system the file has been updated recently, but the Splunk Universal Forwarder just doesn't send the updates, although those servers have events from other sourcetypes. Restarting the SplunkForwarder service, restarting the server, and upgrading the Splunk Universal Forwarder do not fix the issue. The file is a simple raw text file (typically UTF-8, but I've tried multiple formats). I've made sure permissions are correct and the account that runs the SplunkForwarder service has read rights. What else can I do to have the SplunkForwarder send updates for that file?
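Not a definitive fix, but two diagnostics that might narrow this down on an affected server (paths assume a default UF install):

:: show what the tailing processor currently thinks of each monitored file
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus

:: look for TailReader / WatchedFile messages mentioning the file
findstr /i /c:"Ansible.log" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"

If inputstatus reports the file as fully read even though it has grown, the writer may be recreating it with identical leading bytes, in which case the crcSalt / initCrcLength approach mentioned in an earlier post applies.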