All Posts

Thanks for suggesting this. Let me try it and let you know the result.
Hi @Ramachandran

To force the omhttp module to use HTTP instead of HTTPS, you need to specify the usehttps parameter and set it to off:

action(type="omhttp"
  server="172.31.25.126"
  serverport="8088"
  usehttps="off"
  uri="/services/collector/event"
  headers=["Authorization: Splunk <token>"]
  template="RSYSLOG_SyslogProtocol23Format"
  queue.filename="fwdRule1"
  queue.maxdiskspace="1g"
  queue.saveonshutdown="on"
  queue.type="LinkedList"
  action.resumeRetryCount="-1"
)

The usehttps parameter controls whether the module uses HTTPS or HTTP to connect to the server. By default it is set to on, which means HTTPS is used; setting it to off forces the module to use HTTP. Additionally, you should use serverport instead of port to specify the port number. The behavior you're seeing is expected if you only set the port to 8088 without configuring the protocol, because the default protocol is HTTPS.

https://www.rsyslog.com/doc/v8-stable/configuration/modules/omhttp.html

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

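If it helps with verification, a quick manual check that the HEC endpoint accepts plain HTTP on 8088 (the token and event text are placeholders) would be something like:

curl http://172.31.25.126:8088/services/collector/event \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "rsyslog HEC connectivity test"}'

A healthy HEC should answer with a small JSON "Success" acknowledgement. If this plain-HTTP request fails, HEC itself may still be listening only on HTTPS (enableSSL at its default in the [http] stanza of inputs.conf), in which case the Splunk side needs adjusting as well.
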
Hi @amit2312

If you want to extract this as part of a search then you can do the following:

| rex "Service ID - (?<Service_ID>\S+)$"

To convert your rex to an automatic extraction, add the regex either as a REPORT extraction (props.conf plus transforms.conf) or as an inline EXTRACT in props.conf. The REPORT version looks like this:

== props.conf ==
[yourSourcetype]
REPORT-service_id = service_id_extraction

== transforms.conf ==
[service_id_extraction]
REGEX = Service ID - (?<Service_ID>\S+)$

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

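As a rough end-to-end sketch of the search-time version (the index and sourcetype names are placeholders, not from the original post):

index=your_index sourcetype=yourSourcetype "Service ID - "
| rex "Service ID - (?<Service_ID>\S+)$"
| stats count by Service_ID
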
Hi All,

I am very new to Splunk and faced an issue while extracting an alphanumeric value with no predefined length. For example:

2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455

I am trying to get the Service ID value, which comes at the end of the line. Thanks a lot in advance.

Regards,
AKM

I'm forwarding logs from an EC2 instance using rsyslog with the omhttp module to a Splunk HEC endpoint running on another EC2 instance (IP: 172.31.25.126) over port 8088. My rsyslog.conf includes:

module(load="omhttp")
action(type="omhttp"
  server="172.31.25.126"
  port="8088"
  uri="/services/collector/event"
  headers=["Authorization: Splunk <token>"]
  template="RSYSLOG_SyslogProtocol23Format"
  queue.filename="fwdRule1"
  queue.maxdiskspace="1g"
  queue.saveonshutdown="on"
  queue.type="LinkedList"
  action.resumeRetryCount="-1"
)

Problem:
Even though I've explicitly configured port 8088, I get this error:

omhttp: suspending ourselves due to server failure 7: Failed to connect to 172.31.25.126 port 443: No route to host

It seems like omhttp is still trying to use HTTPS (port 443) instead of plain HTTP on port 8088.

Questions:
1. How do I force the omhttp module to use HTTP instead of HTTPS?
2. Is there a configuration parameter to explicitly set the protocol scheme (http vs https)?
3. Is this behavior expected if I just set the port to 8088 without configuring the protocol?

Any insights or examples are appreciated. Thanks!

Hi, Thanks a lot for your help, it really helped. Regards, AKM
Thank you all for your replies! It helps!
Hi,

If you set otel.exporter.otlp.endpoint then you shouldn't have to set anything for the logs endpoint or the profiler logs endpoint, because by default they append /v1/logs to your OTLP endpoint. It looks like you set your profiler logs endpoint but didn't include /v1/logs, which is what I think is causing your exporting error.

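To illustrate, a minimal sketch assuming the Splunk OTel Java agent with the http/protobuf exporter; the host and port are placeholders, and splunk.profiler.logs-endpoint is the property name as I recall it, so verify against your agent version:

# Prefer setting only the base OTLP endpoint and letting the agent derive /v1/logs:
-Dotel.exporter.otlp.endpoint=http://<collector-host>:4318

# If you do override the profiler logs endpoint explicitly, include the full path:
-Dsplunk.profiler.logs-endpoint=http://<collector-host>:4318/v1/logs
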
I don't believe this is correct. Splunk uses the splunk.secret file for encrypting and decrypting passwords and other sensitive info in its configuration files. Splunk uses different schemes for protecting credentials:

$6 (SHA-512): used for hashing passwords.
$7 (encryption): requires the splunk.secret file for decryption.

The $6 hash is what makes this portable and useful with automation. You can generate a password hash using:

splunk hash-passwd <somePassword>

Then you can run something like this before you start Splunk (the quoted 'EOF' keeps the shell from expanding the $6$... hash):

cat <<'EOF' > $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
HASHED_PASSWORD = $6$TOs.jXjSRTCsfPsw$2St.t9lH9fpXd9mCEmCizWbb67gMFfBIJU37QF8wsHKSGud1QNMCuUdWkD8IFSgCZr5.W6zkjmNACGhGafQZj1
EOF

Alternatively, you can create and export a user-seed.conf file with the same information, put it in Ansible Vault, and have it placed in $SPLUNK_HOME/etc/system/local as part of the automation (a sketch follows below). None of the hosts that user-seed.conf is distributed to needs the same splunk.secret, since this is hash matching, not decryption.

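For the Ansible Vault route, a minimal sketch using the standard ansible.builtin.copy module; the /opt/splunk path, the splunk owner/group, and the vaulted variable splunk_admin_hashed_password are assumptions for illustration:

- name: Seed Splunk admin credentials before first start
  ansible.builtin.copy:
    dest: /opt/splunk/etc/system/local/user-seed.conf
    owner: splunk
    group: splunk
    mode: "0600"
    content: |
      [user_info]
      USERNAME = admin
      HASHED_PASSWORD = {{ splunk_admin_hashed_password }}
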
Hi @lakshman239

I would ask the firewall team to check/show that the traffic is not being blocked. I spent a lot of time with a customer recently who told me they had a direct connection to Splunk Cloud, when in fact it went via some Palo Alto firewalls which were occasionally blocking; when it blocked, it gave that exact error. If the usual netcat/openssl tests (examples below) show that your connectivity should be okay, it's hard to pinpoint.

Check out https://community.splunk.com/t5/Deployment-Architecture/Connection-problems-with-Universal-Forwarder-for-Linux-ARM-and/m-p/232759 which has some more detail too.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

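Following up on the netcat/openssl tests mentioned above, they would look something like this (the hostname is a placeholder for your stack's inputs endpoint):

# Basic TCP reachability to the indexing port
nc -vz inputs1.<your-stack>.splunkcloud.com 9997

# TLS handshake check; if the certificate chain returned isn't the one you expect
# from Splunk Cloud, an inspecting device is likely sitting in the path
openssl s_client -connect inputs1.<your-stack>.splunkcloud.com:9997 </dev/null
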
That is what I wanted to confirm. Do you have any suggestion for another way to send logs from the UF to Logstash? I have tested TCP, which is working, but it also sends the Splunk UF's internal logs to Logstash, which I then need to filter at the Logstash level.
There's something odd in the interaction between the <event> display, the table command, and the fields control. I have an example dashboard which does this search:

index=_internal user=*
| table _time index sourcetype user *
| eval Channel=user

Yet the Channel column is not even shown, even though it is in the <fields> statement. If I change the table to a fields statement or remove it completely, it works. Is there any reason you are adding the table command there? It doesn't really serve any purpose, as you are controlling display with the <fields> statement.

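For illustration, a minimal Simple XML sketch of the fields-only approach (the query, time range, and field list are placeholders, not the original dashboard):

<dashboard>
  <label>Event display example</label>
  <row>
    <panel>
      <event>
        <search>
          <query>index=_internal user=* | eval Channel=user</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <fields>_time, index, sourcetype, user, Channel</fields>
      </event>
    </panel>
  </row>
</dashboard>
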
Hi,

I have installed Splunk 9.4.2 on-prem and downloaded and installed the 'splunkuf' app from the Splunk Cloud universal forwarder package. Upon restarting the Splunk instance, it throws the following errors. I just want to ensure the internal logs reach the cloud before I configure the server with custom apps/add-ons.

05-14-2025 13:05:23.918 +0000 ERROR TcpOutputFd [2377196 TcpOutEloop] - Connection to host=18.xx:9997 failed. sock_error = 104. SSL Error = No error

I have checked connectivity from the on-prem instance to inputs1.*.splunkcloud.com:9997 using curl/telnet and openssl, and the firewall team confirmed the ports are open. Any thoughts on what I could be missing or suggestions to troubleshoot?

thanks
laks
@bengoerz - does that mean we shouldn't SSL-inspect the traffic from the on-prem Splunk instance to Splunk Cloud, to avoid sock_error = 104? thx
@tech_g706  You’re welcome! I’m glad to hear the props configuration worked as expected
Hi @RdomSplunkUser7

I think ultimately this depends on what your searches are doing. If there is a risk of pulling in duplicate data then dedup is a good option, or you could look at using something like:

| stats latest(fieldName) as latestFieldName

It really depends on your search(es). If you'd like to share the SPL we might be able to help further.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

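For illustration, the two approaches look something like this (the index, key, and field names are placeholders):

Keep only the most recent event per key:
index=your_index | dedup transaction_id

Or keep just the latest value of a field per key:
index=your_index | stats latest(status) as latest_status by transaction_id
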
Which dashboard?  Is it custom or Splunk-provided?
Hi,

We just upgraded Splunk to version 9.4.2 and in the dashboards we noticed that all the text is now wrapped; before, the string was cut off with ... at the end. Do you know how to revert this auto-wrapping?

Thank you
Hi @sainag_splunk

In AppDynamics there is no such option. I need this for AppDynamics Dash Studio, so please suggest an approach for that. Thanks.

Regards,
Gopikrishnan R.
Just list the fields that you want after the table command:
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Table
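For example (the index and field names here are purely illustrative):

index=_internal user=*
| table _time index sourcetype user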