Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @gcusello , thank you very much. It's frustrating that I can't check on my case, but I'll wait a little longer.
Hi @Hiroshi , there's a similar issue on the Partner Portal; I suppose it's under maintenance. Ciao. Giuseppe
Why can't I open the Support Portal page? I am having trouble referencing a case.
Please check syslogSourceType and reconfigure it:

syslogSourceType = <string>
* Specifies an additional rule for handling data, in addition to that provided by the 'syslog' source type.
* This string is used as a substring match against the sourcetype key. For example, if the string is set to "syslog", then all sourcetypes containing the string 'syslog' receive this special treatment.
* To match a sourcetype explicitly, use the pattern "sourcetype::sourcetype_name". Example: syslogSourceType = sourcetype::apache_common
* Data that is "syslog" or matches this setting is assumed to already be in syslog format.
* Data that does not match the rules has a header, optionally a timestamp (if defined in 'timestampformat'), and a hostname added to the front of the event. This is how Splunk software causes arbitrary log data to match syslog expectations.
* No default.

outputs.conf - Splunk Documentation
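For example, a minimal sketch of one of the syslog output stanzas from this thread (the server address, port and the fortigate sourcetype are placeholders; check what sourcetype your events really have):

[syslog:syslogGroup1]
server = aa.aaa.aa.a:514
type = udp
# explicit match: events with this exact sourcetype are assumed to already be
# in syslog format, so no additional header is prepended
syslogSourceType = sourcetype::fortigate

If the events are actually indexed with a different sourcetype, neither the substring nor the explicit match applies, and Splunk adds the header described above.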
Hi @kvm
More details needed please:
1) Splunk version?
2) Cloud or on-prem?
3) Dynatrace version?
4) UF (and HF) version?
5) Splunk's own SSL certificate (the default certificate on Linux) or a third-party SSL certificate?
Hi Team,
We can see latency in our logs. Log ingestion is via syslog: Network devices --> Syslog server --> Splunk.
Using the query below, we see a minimum of 10 minutes and a maximum of 60 minutes of log latency:

index="ABC" sourcetype="syslog" source="/syslog*"
| eval indextime=strftime(_indextime,"%c")
| table _raw _time indextime

What should our next steps be to check where the latency is introduced and how to fix it?
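For reference, a breakdown of the lag by host and source along these lines might help narrow it down (same index/sourcetype assumptions as the query above; this is only a sketch):

index="ABC" sourcetype="syslog" source="/syslog*"
| eval lag_sec = _indextime - _time
| stats count avg(lag_sec) AS avg_lag_sec max(lag_sec) AS max_lag_sec BY host source
| sort - max_lag_sec

This only shows where the delay is largest; it does not by itself say whether the delay sits on the network devices, the syslog server, or the Splunk input.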
hello, this is the current outputs.conf, but the header is still not gone:

[tcpout-server://xxxx..xxx:9997]
[tcpout-server://yyy.yyy.yyy:9997]
[tcpout-server://zz.zzz.zzz:9997]

[tcpout:default-autolb-group]
server = xx.xxx.xxx:9997,yyy.yyy.yyy:9997,zz.zzz.zzz:9997
disabled = false

[syslog]
#defaultGroup = syslogGroup2

[syslog:syslogGroup1]
server = aa.aaa.aa.a.:514
type = udp
syslogSourceType = fortigate

[syslog:syslogGroup2]
server = bb.bbb.bbb:517
type = udp
syslogSourceType = fortigate

Can you give me an example of how I could fix it?
Thank you very much
Giulia
index=_introspection sourcetype=splunk_resource_usage host IN ("hostname") component=Hostwide
| eval total_cpu_usage=('data.cpu_system_pct' + 'data.cpu_user_pct')
| eval Tenant=case(match(host,"name"),"Core",match(host,"name"),"Enterprise Security",match(host,"name"),"Critical Reports",match(host,"hostname"),"Mgmt",match(host,"hostname"),"IDX",match(host,"hostname"),"AWE",match(host,"hostname"),"ABC",1==1,host)
| eval Env=case(match(host,"hostname"),"Prod",match(host,"hostname"),"E2E",match(host,"hostname"),"ABC",1==1,splunk_server)
| fields host_zone Tenant _time total_cpu_usage
| table host_zone Tenant _time total_cpu_usage
| search host_zone="pr" Tenant="Core"
| bin span=24h aligntime=@d _time
| stats Perc90(total_cpu_usage) AS cpu_usage BY _time
| trendline sma2(cpu_usage) AS trend
| fields * trend
Hi @wm , why are you using crcSalt = <SOURCE>? It's usually used to force re-indexing of already indexed data and usually isn't needed. Try deleting it. Ciao. Giuseppe
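For example, a sketch of the same monitor stanza with crcSalt removed (path, index and sourcetype are the ones from your inputs.conf):

[monitor://D:\temp\zkstats*.json]
disabled = false
followTail = 0
index = abc
sourcetype = zk_stats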
this is inputs.conf:

[monitor://D:\temp\zkstats*.json]
crcSalt = <SOURCE>
disabled = false
followTail = 0
index = abc
sourcetype = zk_stats

and props.conf:

[zk_stats]
KV_MODE = json
INDEXED_EXTRACTIONS = json

However, my search (index=abc sourcetype=zk_stats) is not getting new events. That is, when new files such as zkstats20240824_0700 come in, they are not being indexed.
Please feel free to share your current outputs.conf. If you use the [syslog] stanza to forward the data to your third-party system, no additional header should be added by Splunk. Forward data to third-party systems - Splunk Documentation
You could try to check the current tailing status: Solved: Is there some way to see the current tailing statu... - Splunk Community. The mgmt port must be opened temporarily on the UFs.
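For example, a rough sketch (hostname and credentials are placeholders, and it assumes the default management port 8089 is temporarily reachable):

curl -k -u admin:<password> https://<uf_hostname>:8089/services/admin/inputstatus/TailingProcessor:FileStatus

This lists the files the tailing processor knows about and how far it has read each one.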
Hi, I am currently dealing with some logs being forwarded via syslog to a third-party system. The question is whether there is an option to prevent Splunk from adding an additional header to each message before it is forwarded. In other words, is there a way to disable the additional syslog header when forwarding, so that the third-party system receives the original message without the header? Any ideas? Can you give me a practical example? I am trying to test by modifying the outputs.conf. Thanks, Giulia
I found the problem. When you select an index, by default you must select one of the indexes on that instance of Splunk Enterprise. This means that you cannot select an index that you have configured on a search peer but not distributed to the rest of the deployment. The indexes I tried to use were on the indexer instance, not on the search instance. Once I created the index on the search instance, I could see my data.
Hi,
you can calculate the average timespan between events using:

| tstats count as event_count, latest(_time) as latest_time, earliest(_time) as earliest_time by host, index
| eval total_time_spans = latest_time - earliest_time
| eval average_time_span = total_time_spans / (event_count - 1)
| stats avg(average_time_span) as avg_time_span by host, index

But beware: this only makes sense for hosts/indexes that report regularly. It will not work if, for example, a host sends 1k events once a day.
Okay, the inputs.conf looks fine. Is the main index definitely empty even if you search over all time? Could you check the internal logs on the affected Splunk Universal Forwarder for any issues?
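For example, a sketch of a search over the forwarder's internal logs (the host value is a placeholder for the affected UF):

index=_internal host=<uf_hostname> source=*splunkd.log* (log_level=WARN OR log_level=ERROR)
| stats count BY component log_level
| sort - count

Warnings from components such as TailReader or WatchedFile would point at a problem with the monitored files themselves.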
Have you tried using a KV store instead of a CSV? As far as I know, CSV lookups don't work with custom Python commands.
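A rough sketch of what defining a KV store lookup could look like (the collection, lookup and field names here are made up for illustration):

collections.conf
[my_lookup_collection]
field.user = string
field.status = string

transforms.conf
[my_kvstore_lookup]
external_type = kvstore
collection = my_lookup_collection
fields_list = _key, user, status

The custom command can then read and write the collection through the lookup name or via the KV store interfaces in the Splunk SDK.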
When I upgraded the ITSI app to 4.18.1, the Services option in the Configuration dropdown went missing. Reference screenshot:
Hi @rickymckenzie10 , yes, it's possible to filter audit logs from some servers, but your approach isn't correct: the blacklist option excludes whole files from indexing, not individual events within a file. If you want to drop only some events read from a file, the only solution is to filter the logs on the Indexers before indexing ( https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues ). In other words, this kind of filtering isn't possible on Forwarders. The only logs that can be filtered on Forwarders are WinEventLogs, which isn't your case. Ciao. Giuseppe
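The linked docs boil down to a props/transforms pair on the Indexers (or on a Heavy Forwarder in front of them); a rough sketch, where the sourcetype and the regex are placeholders to adapt to your audit logs:

props.conf
[linux_audit]
TRANSFORMS-filter_audit = drop_unwanted_audit

transforms.conf
[drop_unwanted_audit]
# events matching this regex are routed to the nullQueue, i.e. discarded
REGEX = <pattern matching the events you do not want to index>
DEST_KEY = queue
FORMAT = nullQueue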
Hi Team, we are currently using Python 3.9.0 for Splunk app development. Is that OK, or is there a better version of Python to recommend for developing Splunk apps? Thanks, Alankrit