All Posts

I’m forwarding logs from an EC2 instance using rsyslog with the omhttp module to a Splunk HEC endpoint running on another EC2 instance (IP: 172.31.25.126) over port 8088. My rsyslog.conf includes:

```
module(load="omhttp")
action(type="omhttp"
    server="172.31.25.126"
    port="8088"
    uri="/services/collector/event"
    headers=["Authorization: Splunk <token>"]
    template="RSYSLOG_SyslogProtocol23Format"
    queue.filename="fwdRule1"
    queue.maxdiskspace="1g"
    queue.saveonshutdown="on"
    queue.type="LinkedList"
    action.resumeRetryCount="-1"
)
```

### Problem

Even though I’ve explicitly configured port 8088, I get this error:

```
omhttp: suspending ourselves due to server failure 7: Failed to connect to 172.31.25.126 port 443: No route to host
```

It seems like omhttp is still trying to use HTTPS (port 443) instead of plain HTTP on port 8088.

### Questions

1. How do I force the omhttp module to use HTTP instead of HTTPS?
2. Is there a configuration parameter to explicitly set the protocol scheme (http vs https)?
3. Is this behavior expected if I just set the port to 8088 without configuring the protocol?

Any insights or examples are appreciated. Thanks!
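For what it's worth, a minimal sketch of the action parameters that appear to control the scheme, based on my reading of the rsyslog omhttp documentation (parameter names vary between rsyslog versions, so treat these as assumptions to verify against your build):

```
module(load="omhttp")
action(
    type="omhttp"
    server="172.31.25.126"
    serverport="8088"                      # omhttp documents "serverport" rather than "port"
    usehttps="off"                         # defaults to "on", which would explain the port 443 attempts
    restpath="services/collector/event"    # omhttp's name for the URI path
    httpheaders=["Authorization: Splunk <token>"]
)
```

If omhttp silently ignores unknown parameter names such as port and uri, it would fall back to its HTTPS defaults, which matches the error above.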
Hi, Thanks a lot for your help; it really helped. Regards, AKM
Thank you all for your replies! It helps!
Hi, If you set otel.exporter.otlp.endpoint, then you shouldn’t have to set anything for the logs endpoint or the profiler logs endpoint, because they should, by default, append /v1/logs to your OTLP endpoint. It looks like you set your profiler logs endpoint but didn’t include /v1/logs, which is what I think is causing your exporting error.
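To illustrate, a hedged sketch of how the properties relate, assuming the Splunk OTel Java agent with an http/protobuf OTLP exporter (the collector host is a placeholder):

```
# Base endpoint only: signal-specific paths are derived automatically,
# so logs end up at http://collector:4318/v1/logs
otel.exporter.otlp.endpoint=http://collector:4318

# If the profiler logs endpoint is overridden explicitly,
# the /v1/logs path has to be part of the value
splunk.profiler.logs-endpoint=http://collector:4318/v1/logs
```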
I don't believe this is correct. Splunk uses the splunk.secret file for encrypting and decrypting passwords and other sensitive info in its configuration files. Splunk uses different prefixes for stored secrets:

- $6 (SHA-512): used for hashing passwords.
- $7 (encryption): requires the splunk.secret file for decryption.

This is what makes hashed passwords portable and useful with automation. You can generate a password hash using:

```
splunk hash-passwd <somePassword>
```

Then you can run something like this before you start Splunk:

```
# Quote the heredoc delimiter so the $6$... hash is written literally
cat <<'EOF' > $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
HASHED_PASSWORD = $6$TOs.jXjSRTCsfPsw$2St.t9lH9fpXd9mCEmCizWbb67gMFfBIJU37QF8wsHKSGud1QNMCuUdWkD8IFSgCZr5.W6zkjmNACGhGafQZj1
EOF
```

Alternatively, you can create and export a user-seed.conf file with the same information, put it in Ansible Vault, and then have it placed in $SPLUNK_HOME/etc/system/local as part of the automation. None of the hosts that user-seed.conf is being distributed to have to have the same splunk.secret, since it's just hash-matching, not decrypting.
Hi @lakshman239

I would ask the firewall team to check/show that the traffic is not being blocked. I spent a lot of time with a customer recently who told me they had a direct connection to Splunk Cloud, when in fact it went via some Palo Alto firewalls which were occasionally blocking; when it blocked, it gave that exact error. If the usual netcat/openssl tests show your connectivity is okay, it's hard to pinpoint. Check out https://community.splunk.com/t5/Deployment-Architecture/Connection-problems-with-Universal-Forwarder-for-Linux-ARM-and/m-p/232759 which has some more detail too.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
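As a concrete example of those tests (the hostname is a placeholder for your actual inputs endpoint):

```
# TCP reachability to the Splunk Cloud input port
nc -vz inputs1.example.splunkcloud.com 9997

# TLS handshake check; an unexpected certificate chain here
# often points at an SSL-inspecting device in the path
openssl s_client -connect inputs1.example.splunkcloud.com:9997
```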
That is what I wanted to confirm. Do you have any suggestions for another way to send logs from the UF to Logstash? I have tested TCP, which is working, but somehow it is also sending the Splunk UF's internal logs to Logstash, which I need to filter later at the Logstash level.
There's something odd in the interaction between the <event> display and the table command and the fields control. I have an example dashboard which does this search:

```
index=_internal user=* | table _time index sourcetype user * | eval Channel=user
```

Yet the Channel column is not even shown, even though it is in the <fields> statement. If I change the table to a fields statement, or remove it completely, it works. Is there any reason you are adding the table command there? It doesn't really serve any purpose, as you are controlling display with the <fields> statement.
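For comparison, a minimal sketch of the fields-based variant of that search (one way to write it; the field order is illustrative):

```
index=_internal user=*
| eval Channel=user
| fields _time index sourcetype user Channel
```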
Hi,

I have installed Splunk 9.4.2 on-prem and downloaded and installed the 'splunkuf' app from the Splunk Cloud universal forwarder package. Upon restarting the Splunk instance, it throws the following error. I just want to ensure the internal logs reach the cloud before I configure the server with custom apps/add-ons.

```
05-14-2025 13:05:23.918 +0000 ERROR TcpOutputFd [2377196 TcpOutEloop] - Connection to host=18.xx:9997 failed. sock_error = 104. SSL Error = No error
```

I have checked connectivity from the on-prem instance to inputs1.*.splunkcloud.com:9997 using curl/telnet and openssl, and the firewall team confirmed the ports are open. Any thoughts on what I could be missing, or suggestions to troubleshoot?

thanks
laks
@bengoerz - does that mean we shouldn't SSL-inspect the traffic from the on-prem Splunk instance to Splunk Cloud, to avoid sock_error = 104? thx
@tech_g706 You’re welcome! I’m glad to hear the props configuration worked as expected.
Hi @RdomSplunkUser7

I think ultimately this depends on what your searches are doing. If there is a risk of pulling in duplicate data then dedup is a good option, or you could look at using something like stats latest(fieldName) as latestFieldName. It really depends on your search(es). If you'd like to share the SPL we might be able to help further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
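As a rough illustration with hypothetical field names (event_id, status, and host are assumptions, not fields from the original question):

```
| dedup event_id

| stats latest(status) as latest_status by host
```

The first keeps one (the most recent) event per event_id; the second collapses the results to the single latest value per host.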
Which dashboard?  Is it custom or Splunk-provided?
Hi,

We just upgraded Splunk to version 9.4.2, and in the dashboards we noticed that all the text is wrapped. Before, the string was cut and "..." appeared at the end. Do you know how to revert this auto wrapping?

Thank you
Hi @sainag_splunk

In AppDynamics there is no such option. I need this for AppDynamics Dash Studio; please suggest something for that. Thanks.

Regards,
Gopikrishnan R.
Just list the fields that you want after the table command https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Table  
@gopu I've never used AppDynamics; in Dash Studio, go to the JSON source code for each markdown component.
Hello! You are correct. I had to dig into it and found out that the primaryGroupID is considered an "implicit membership." It is uncommon to change, but Guest is 514, as an example. The issue happens with the Guest user account as well, since it is (traditionally) only a member of the security group called Domain Guests. I was able to confirm this using the Windows LDP tool. Apparently I just never had to use LDAP to actually query for all memberships in the past; it was always through third-party tools, which would include even the "implicit" memberships.
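For anyone querying this directly, a hedged sketch of an LDAP filter that catches both explicit and implicit Domain Guests members (the DN components are placeholders for your domain):

```
(|(memberOf=CN=Domain Guests,CN=Users,DC=example,DC=com)(primaryGroupID=514))
```

The memberOf clause finds explicit members, while primaryGroupID=514 picks up accounts (like Guest) whose membership is only implicit and therefore never appears in memberOf.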
Hello! I maintain Splunk reports. Some of the Pivot reports are based on a dataset that is generated from a simple search. Duplicate values were not taken into account in the generation. Due to an error, there were two data sources for a few weeks, which resulted in identical duplicate rows in the dataset. In the future, duplicate rows can be removed from the dataset with a simple dedup. However, are there any best practices to fix this?
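For the forward-looking part, a minimal sketch of that dedup step with a hypothetical field name (row_id stands in for whatever combination of fields uniquely identifies a row in your dataset):

```
<base search> | dedup row_id
```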
1. Normally a non-admin user should not have this capability. It is normally used for maintaining credentials which are used for third-party integrations (modular inputs, custom alert actions).

2. This works for credentials managed in the official Splunk way. If, for some reason, an addon developer decided to do something "their own way" (for example, decided that for each run of an input it will pull credentials from a GitHub project; no, that's not a real example, but nothing forbids an addon author from inventing anything, no matter how stupid), that will most probably not be limited by this capability.

3. Obviously, if credentials are stored for automated use, you should have additional controls implemented on the destination system mitigating the risk of abuse of those credentials. Their use should of course be based on the rule of least required privilege, and ideally they should be limited per IP. At the very least, if there is no other way, their use on the destination system should be monitored and reviewed regularly.
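On point 2, "the official Splunk way" generally means the storage/passwords REST endpoint; a minimal sketch of listing what an app has stored there (host, port, app name, and credentials are placeholders):

```
# Requires the list_storage_passwords capability
curl -k -u admin:changeme \
    "https://localhost:8089/servicesNS/nobody/myapp/storage/passwords"
```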