All Posts

Hi,

I have installed Splunk 9.4.2 on-prem and downloaded and installed the 'splunkuf' app from the Splunk Cloud universal forwarder package. Upon restarting the Splunk instance, it throws the following error. I just want to ensure the internal logs reach the cloud before I configure the server with custom apps/add-ons.

05-14-2025 13:05:23.918 +0000 ERROR TcpOutputFd [2377196 TcpOutEloop] - Connection to host=18.xx:9997 failed. sock_error = 104. SSL Error = No error

I have checked connectivity from the on-prem instance to inputs1.*.splunkcloud.com:9997 using curl/telnet and openssl, and the firewall team confirmed the ports are open. Any thoughts on what I could be missing, or suggestions to troubleshoot?

Thanks, laks
@bengoerz - does that mean we shouldn't SSL-inspect the traffic from the on-prem Splunk instance to Splunk Cloud, to avoid sock_error = 104? thx
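For what it's worth, one way to check whether TLS inspection is in the path is to look at the certificate chain the endpoint actually presents (replace inputs1.example.splunkcloud.com with your stack's hostname):

openssl s_client -connect inputs1.example.splunkcloud.com:9997 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject

If the issuer shown is your corporate inspection/proxy CA rather than the CA used by Splunk Cloud, the forwarder will fail certificate validation and the session gets torn down, which often shows up as sock_error = 104 (ECONNRESET).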
@tech_g706  You’re welcome! I’m glad to hear the props configuration worked as expected.
Hi @RdomSplunkUser7

I think ultimately this depends on what your searches are doing. If there is a risk of pulling in duplicate data then dedup is a good option, or you could look at using something like stats latest(fieldName) as latestFieldName (see the illustration below). It really depends on your search(es). If you'd like to share the SPL we might be able to help further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
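To illustrate with a hypothetical field called fieldName, split by host:

| dedup host fieldName

keeps the first matching raw event per host/fieldName combination (the most recent one, in default search order), while

| stats latest(fieldName) as latestFieldName by host

collapses the results to one row per host containing only the latest value. dedup preserves whole events; stats keeps only the aggregated fields, so the right choice depends on whether you still need the raw events downstream.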
Which dashboard?  Is it custom or Splunk-provided?
Hi,

We just upgraded Splunk to version 9.4.2 and in the dashboards we noticed that all the text is now wrapped; before, the string was cut and "..." appeared at the end. Do you know how to revert this auto-wrapping?

Thank you
Hi @sainag_splunk

In AppDynamics there is no such option. I need this for AppDynamics Dash Studio; please suggest something for that. Thanks.

Regards,
Gopikrishnan R.
Just list the fields that you want after the table command https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Table  
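For example, with hypothetical field names:

... | table Timestamp, host, EventID

returns only those three columns, in that order; any field not listed is dropped from the results.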
@gopu  I've never used AppDynamics. In Studio, go to the JSON source code for each markdown panel.
Hello! You are correct. I had to dig into it and found out that the primaryGroupID is considered an "implicit membership." It is uncommon to change, but Guest is 514, as an example. The issue happens with the Guest user account as well, since it is (traditionally) only a member of the security group called Domain Guests. I was able to confirm this using the Windows LDP tool. Apparently, I just never had to use LDAP to actually query for all memberships in the past; I was always using third-party tools, which would include even the "implicit" memberships.
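For reference, if you ever do need a pure LDAP query that catches both explicit and implicit members, you can OR the memberOf check with the group's RID (514 = Domain Guests, 513 = Domain Users; the DN below is a placeholder):

(|(memberOf=CN=Domain Guests,CN=Users,DC=example,DC=com)(primaryGroupID=514))

This matches accounts that are listed as members as well as accounts whose primary group is Domain Guests.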
Hello! I maintain Splunk reports. Some of the Pivot reports are based on a dataset that is generated by a simple search. Duplicate values were not taken into account in the generation. Due to an error, there were two data sources for a few weeks, which resulted in identical duplicate rows in the dataset. In the future, duplicate rows can be removed from the dataset with a simple dedup. However, are there any best practices for fixing this?
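For context, going forward I was planning to add something like this to the generating search (field list illustrative; use whatever combination uniquely identifies a row):

... | dedup _time, host, source, _raw

so the next scheduled run replaces the duplicated rows with a clean set, but I'm wondering whether there is a better-established way to repair the historical data.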
1. Normally a non-admin user should not have this capability. It is normally used for maintaining credentials which are used for third-party integrations (modular inputs, custom alert actions).

2. This works for credentials managed in the official Splunk way (see the example below). If, for some reason, an add-on developer decided to do something "their own way" (for example, decided that for each run of an input it will pull credentials from a GitHub project; no, that's not a real example, but nothing forbids an add-on author from inventing anything, no matter how stupid), that will most probably not be limited by this capability.

3. Obviously, if credentials are stored for use in an automated way, you should have additional controls implemented on the destination system mitigating the risk of abuse of those credentials. Their use should of course be based on the rule of least required privilege, and ideally they should be limited per IP. At the very least, if there is no other way, their use on the destination system should be monitored and reviewed regularly.
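For context, the "official Splunk way" referred to in point 2 is the storage/passwords REST endpoint, which is what capabilities like list_storage_passwords gate. A quick way to see what a given role can read (run as that user):

| rest /services/storage/passwords splunk_server=local
| table title, realm, username

A user whose role lacks the capability simply gets no results back, rather than the stored credentials.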
@livehybrid  @kiran_panchavat  thank you very much. I set the props like this, and field extractions are fine now.

[netiq]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<\d+>
TIME_PREFIX = rt=
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
KV_MODE = auto
TRUNCATE = 99999
@ljvc I appreciate the information you were able to provide, this is helpful. On a side note I do have an active case open with Splunk support on this topic. Their latest update was that this has been a reported issue, and that they expect it to be addressed in ES 8.2 per an internal JIRA ticket.
Yes, that is correct. We are using admon with the default settings. I'll give that a go. Also, if I wanted to limit it by that and also by the destination IP, would I use a transforms.conf for that? Many thanks
Hi @Mobyd

Please could you confirm - is this using an admon:// input? If so, you should be able to specify a "startingNode", which would be the OU that you would like to monitor (see the sketch below). https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf#:~:text=startingNode%20%3D%20%3Cstring%3E

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
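A minimal sketch of what that could look like in inputs.conf (the stanza name, DC host, and DN are placeholders for your environment):

[admon://TargetOU]
targetDc = your-dc.example.com
startingNode = OU=Servers,DC=example,DC=com
monitorSubtree = 1

With startingNode set, admon starts monitoring at that OU rather than the directory root, and monitorSubtree = 1 includes everything beneath it.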
Hi @tech_g706

The issue here is that syslog arrives with the default syslog sourcetype, and you shouldn't start applying field extractions to a default sourcetype. Do you use the syslog input for any other feeds?

If you don't use it for other feeds, then one thing you could do is change/specify the sourcetype in the syslog input stanza to something specific, like "netiq:log", and then apply your relevant props/transforms based on this sourcetype.

However, if you are using the syslog input for other feeds too, then you would need to use some other props/transforms to determine IF it is NetIQ and then apply props accordingly, such as changing the sourcetype for that data (see the sketch below).

The other thing you might want to look at is Splunk Connect for Syslog (SC4S), which supports the CEF format that NetIQ is sending; check out the relevant SC4S docs here.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
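A rough sketch of that props/transforms rewrite (the regex is a placeholder; anchor it on something unique to the NetIQ CEF header):

props.conf:

[syslog]
TRANSFORMS-netiq_st = netiq_sourcetype

transforms.conf:

[netiq_sourcetype]
REGEX = CEF:\d+\|NetIQ
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::netiq

This runs at index time, so only events matching the regex get re-sourcetyped to netiq and pick up your props, while everything else keeps the plain syslog sourcetype.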
Hi,

I am trying to gather data from a specific organisational unit in Active Directory and ignore everything else. I have tried with a transforms.conf to allow it, but it didn't seem to work. I could sort of get it to work by writing a block for everything else, but it's a bit of a pain as the environment is shared. Anyone had any experience doing this sort of thing?
Hi! Yes, here is the complete search:

$case_token$ sourcetype=hayabusa $host_token$ $level_token$ $rule_token$
| table Timestamp, host, Computer, Level, Channel, RecordID, EventID, Ruletitle, Details, *

Channel is added as a field in the table command, as well as specified in the code:

<fields>Timestamp, host, Computer, Level, Channel, RecordID, EventID, RuleTItle, Details</fields>
@tech_g706  The default syslog sourcetype is too generic and often leads to improper parsing, as it’s not tailored to specific log formats like NetIQ. Instead, create a custom sourcetype to handle the unique structure of NetIQ logs.

1. Review the Answers post: Solved: What are the best practices for defining source ty... - Splunk Community

References:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Listofpretrainedsourcetypes
https://kinneygroup.com/blog/splunk-magic-8-props-conf/