All Posts

I believe (although I rarely use the event visualisation) that you must specify a | fields a b c... in your SPL to get fields from the event to show up in the event panel as fields. The XML <fields> element is only a way to limit the display of the fields already available from the search, so in order to get those fields there in the first place, you must use the SPL fields command to specify the fields you want. Using the table command is not the right way.
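A minimal sketch of how the two pieces fit together in Simple XML, with hypothetical index, sourcetype, and field names (my_index, my_sourcetype, Level, RuleTitle): the fields command keeps the fields available to the event renderer, and the <fields> element then narrows what is displayed.

<panel>
  <event>
    <search>
      <query>index=my_index sourcetype=my_sourcetype | fields _time host Level RuleTitle</query>
      <earliest>-24h</earliest>
      <latest>now</latest>
    </search>
    <fields>host Level RuleTitle</fields>
  </event>
</panel>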
The Splunk fix is known as SPL-270280. A fix has been included in the latest version, 9.4.2, and backported to the supported older releases 9.3.4, 9.2.6, and 9.1.9: https://splunk.my.site.com/customer/s/article/Splunk-vulnerability-libcurl-7-32-0-8-9-1-DoS-CVE-2024-7264-TEN-205024
_raw is like ... \"products\": [\"foo\", \"bar\"], ...
It's not that httpout is not supported for Logstash; it's that Logstash cannot do S2S. Yes, it is confusing, but despite sharing some of the low-level mechanics, S2S over HTTP (which is what httpout is) has nothing to do with "normal" HEC.
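To make the distinction concrete, here is a hedged sketch; host names, ports, and the token are placeholders. The first block is the outputs.conf httpout stanza (S2S carried over HTTP, understood only by Splunk receivers); the second is a "normal" HEC request, a plain JSON POST that a non-Splunk receiver could be built to accept.

# outputs.conf - httpout is S2S over HTTP, Splunk-to-Splunk only
[httpout]
httpEventCollectorToken = 11111111-2222-3333-4444-555555555555
uri = https://splunk-receiver.example.com:8088

# "normal" HEC - a simple JSON POST to the collector endpoint
curl -k https://hec.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "hello", "sourcetype": "manual"}'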
You can make events generated by local inputs be sent to just one output group, but it will not be pretty. You need to set the _TCP_ROUTING key for each input stanza that you want to selectively manage. That means adding it to every single one of Splunk's own inputs. I'd just create a separate app and create an inputs.conf in that app containing just this one setting per input stanza (a sketch follows below). EDIT: And one more thing - you cannot use both tcpout and httpout at the same time.
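A minimal sketch of that overlay app, with hypothetical app, stanza, and group names; the group name must match a [tcpout:...] stanza in outputs.conf.

# etc/apps/route_internal/local/inputs.conf - one _TCP_ROUTING per stanza
[monitor:///opt/splunk/var/log/splunk/splunkd.log]
_TCP_ROUTING = internal_group

[monitor:///opt/splunk/var/log/splunk/metrics.log]
_TCP_ROUTING = internal_group

# etc/apps/route_internal/local/outputs.conf
[tcpout:internal_group]
server = indexer1.example.com:9997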
So I tried this but ended up with the same problem: UF --> HF (routing) --> LS (writing to a file). httpout is definitely not working/supported for Logstash.
Exactly - stopping internal logs at the UF level does not work; however, at the Logstash level it worked. But yeah, via HEC it does not seem possible so far. Still waiting for others to respond - maybe we'll crack something amazing here collectively. Thank you for the response though.
Thank you for your response. I have tried the below, but with that also the same problem.

codec => plain { charset => "UTF-8" }
codec => plain { charset => "UTF-16LE" }
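For context, a hedged sketch of where such a codec sits in a Logstash pipeline; the port and output path are placeholders.

input {
  tcp {
    port  => 5514
    codec => plain { charset => "UTF-16LE" }   # must match what the sender actually emits
  }
}
output {
  file {
    path => "/tmp/splunk_events.log"
  }
}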
Are you sure that your character sets are correctly defined? Based on your example, it seems that you have at least UTF escaped characters, and probably real UTF or some other encoding in your file?
I have used Event Hubs with Splunk without issues, e.g. with AKS and other logs. Just use this app to ingest those: https://splunkbase.splunk.com/app/3110
Any progress here?
Hi! Thank you for your response. When I take out the table command, only the _time, host, Level, and RuleTitle fields show up. The fields I have included in <fields></fields> don't all show up.
You used httpout, which doesn't use this option at all, so I completely missed that.
Well, I was using this already, as mentioned in my original post.
For anyone that would like to see this work better, please consider voting for my idea here to support long query URLs: https://ideas.splunk.com/ideas/EID-I-2569 To me, this is not uncommon at all; it's a daily problem that I have to work around. (I'm aware of the current solutions and already use them.)
Hi @gheller

The latest docs are at https://docs.tenable.com/integrations/Splunk/Content/Welcome.htm which they have recently updated; there is a great diagram showing where things should be installed.

In short, the Tenable Add-On for Splunk should be installed on your SH and HF (with inputs created on the HF, or pushed out via your deployment server to the HF if appropriate - a sketch follows below), and then install the Tenable App for Splunk on just the SH.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
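If you do push the add-on from a deployment server, a minimal sketch of the server class; the class, host, and app directory names here are hypothetical and need to match your environment.

# serverclass.conf on the deployment server
[serverClass:tenable_hf]
whitelist.0 = hf01.example.com

[serverClass:tenable_hf:app:TA-tenable]
restartSplunkd = true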
Good find, @PickleRick! The docs do imply one should set sendCookedData=false when sending to third-party systems. @vikas_gopal Please try that and report the results.
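For reference, a hedged sketch of what that looks like in outputs.conf; the group name and Logstash endpoint are placeholders.

[tcpout:logstash_group]
server = logstash.example.com:5514
sendCookedData = false   # send raw, uncooked data so a non-Splunk receiver can read it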
CLONE_SOURCETYPE works on all events on which it is fired, regardless of the REGEX value. In other words, you cannot limit its scope. If you assign a transform with CLONE_SOURCETYPE to a sourcetype, source, or host, it will clone your events without any filtering. And yes, the docs on CLONE_SOURCETYPE are a bit misleading and confusing.
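To illustrate, a minimal sketch with hypothetical sourcetype and transform names; per the above, every event of sourcetype original_st gets cloned into cloned_st, whatever REGEX says.

# transforms.conf
[clone_everything]
REGEX = .
CLONE_SOURCETYPE = cloned_st

# props.conf
[original_st]
TRANSFORMS-clone = clone_everything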
Depends on what you mean by "require HF". Modular inputs must be run on a "full" Splunk Enterprise instance, so in that sense it requires an HF because it won't run on a UF. Technically, you can run the modular input on an all-in-one instance without spinning up a separate HF. While you could also run it directly on an indexer or SH, that's not a recommended architecture - those roles are best left alone to do what they do.