All Posts

index=_introspection sourcetype=splunk_resource_usage host IN ("hostname") component=Hostwide
| eval total_cpu_usage=('data.cpu_system_pct' + 'data.cpu_user_pct')
| eval Tenant=case(match(host,"name"),"Core",match(host,"name"),"Enterprise Security",match(host,"name"),"Critical Reports",match(host,"hostname"),"Mgmt",match(host,"hostname"),"IDX",match(host,"hostname"),"AWE",match(host,"hostname"),"ABC",1==1,host)
| eval Env=case(match(host,"hostname"),"Prod",match(host,"hostname"),"E2E",match(host,"hostname"),"ABC",1==1,splunk_server)
| fields host_zone Tenant _time total_cpu_usage
| table host_zone Tenant _time total_cpu_usage
| search host_zone="pr" Tenant="Core"
| bin span=24h aligntime=@d _time
| stats perc90(total_cpu_usage) AS cpu_usage BY _time
| trendline sma2(cpu_usage) AS trend
| fields * trend
Hi @wm, why are you using crcSalt=<SOURCE>? It's usually used to force reindexing of data that has already been indexed, and in most cases it isn't useful. Try deleting it. Ciao. Giuseppe
This is inputs.conf:

[monitor://D:\temp\zkstats*.json]
crcSalt = <SOURCE>
disabled = false
followTail = 0
index = abc
sourcetype = zk_stats

props.conf:

[zk_stats]
KV_MODE = json
INDEXED_EXTRACTIONS = json

However, my search index=abc sourcetype=zk_stats is not returning new events. That is, when new files such as zkstats20240824_0700 come in, they are not indexed.
Please feel free to share your current outputs.conf. If you use the [syslog] stanza to forward the data to your third-party system, no additional header should be added by Splunk. Forward data to third-party systems - Splunk Documentation
You could try checking the current tailing status: Solved: Is there some way to see the current tailing statu... - Splunk Community. The mgmt port must be opened temporarily on the UFs.
Hi, I am currently dealing with some logs being forwarded via syslog to a third-party system. The question is whether there is an option to prevent Splunk from adding an additional header to each message before it is forwarded. There should be a way to disable the additional syslog header when forwarding, so that the third-party system receives the original message without the header. Any ideas? Can you give me a practical example? I am trying to test by modifying outputs.conf. Thanks, Giulia
I found the problem. When you select an index, by default you must select one of the indexes on that instance of Splunk Enterprise. This means that you cannot select an index that you have configured on a search peer but not distributed to the rest of the deployment. The indexes I tried to use were on the indexer instance, not on the search instance. Once I created the index on the search instance, I could see my data.
Hi,

you can calculate the average timespan between events using:

| tstats count as event_count, latest(_time) as latest_time, earliest(_time) as earliest_time by host, index
| eval total_time_span = latest_time - earliest_time
| eval average_time_span = total_time_span / (event_count - 1)
| stats avg(average_time_span) as avg_time_span by host, index

But beware: this only makes sense if you have regularly reporting hosts/indexes. It will not work well if a host, e.g., sends 1k events in a single burst once a day.
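As a minimal sketch of the arithmetic those eval lines perform (hypothetical data, not a Splunk API): the average gap between n events is the total span divided by the number of gaps, n - 1.

```python
def average_time_span(epoch_times):
    """Return the average gap in seconds between consecutive events."""
    if len(epoch_times) < 2:
        return None  # fewer than two events means no gaps to average
    # Equivalent of latest(_time) - earliest(_time) in the tstats search
    total_time_span = max(epoch_times) - min(epoch_times)
    return total_time_span / (len(epoch_times) - 1)

# Four events 10 s apart: span of 30 s over 3 gaps
print(average_time_span([0, 10, 20, 30]))  # 10.0
```

This also shows why the caveat matters: a host sending 1k events in one burst per day has a tiny computed gap even though it only reports daily.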
Okay, the inputs.conf looks okay. Is the main index definitely empty even if you search over All Time? Could you check the internal logs on the affected Splunk Universal Forwarder for any issues?
Have you tried using a KV store instead of CSV? As far as I know, CSV lookups don't work with custom Python commands.
When I upgraded the ITSI app to 4.18.1, the Services option in the Configuration dropdown went missing. Reference screenshot:
Hi @rickymckenzie10, yes, it's possible to filter audit logs from some servers, but your approach isn't correct: the blacklist option is for not indexing entire files, not for dropping some events from a file. If you want to drop only some events read from a file, the only solution is to filter logs on the indexers before indexing ( https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues ). In other words, this filtering isn't possible on forwarders. The only logs that can be filtered on forwarders are WinEventLogs, but that isn't your case. Ciao. Giuseppe
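As a sketch of what that indexer-side filtering might look like (the sourcetype, stanza name, and hostnames here are hypothetical; see the linked docs for the authoritative syntax), a props.conf/transforms.conf pair can route unwanted events to nullQueue so they are discarded before indexing:

```ini
# props.conf (on the indexers / heavy forwarders, not on UFs)
[linux:audit]
TRANSFORMS-drop_hosts = drop_audit_from_hosts

# transforms.conf
[drop_audit_from_hosts]
# The host metadata value has the form "host::<hostname>"
REGEX = host::(serverA|serverB)
SOURCE_KEY = MetaData:Host
DEST_KEY = queue
FORMAT = nullQueue
```

Events whose host matches the regex are routed to nullQueue and never indexed; everything else flows through unchanged.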
Hi Team, we are currently using Python 3.9.0 for Splunk app development. Is that OK, or can a better Python version for developing Splunk apps be suggested? Thanks, Alankrit
Dear Splunkers, I am planning to get dashboard visualizations for Citrix NetScaler. We have the Splunk Add-on for Citrix NetScaler to collect logs from the NetScaler appliance. However, I can't find any app available for Citrix NetScaler that leverages the logs collected by the add-on and shows them in visualizations. Any suggestions, please? Thanks.
It would be easier if you could influence limits.conf.  Is this a limitation in Splunk Cloud?

Here is how I assess the situation. If you need to use join, your method is semantically equivalent to using one index search on the right. Append + stats may or may not be faster; your search also has some inefficiencies. But as long as current performance is acceptable, you don't have to worry. It is more profitable to investigate why your right set is greater than 50K.

You have to remember: any solution has to be evaluated against the specific dataset and specific use case.  There is no universal formula.  On the last point, you need to carefully review your indexed data.  There are several ways to reduce the number of rows on the right.

First of all, do you really have more than 50K unique combinations of ip, risk, score, contact?  I ask because you really only care about these fields.  My speculation is that you have many rows with identical combinations.  Here is a simple test for you:

index=risk
| stats count by ip risk score contact
| stats count

Is this count really greater than 50K?  If not, this would be easier to maintain and possibly more efficient:

| inputlookup host.csv
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk
      | fields ip risk score contact
      | dedup ip risk score contact ]
| table ip host risk score contact

Another possible cause of too many rows on the right could be that there are too many ip's in the index that are missing from the lookup.  If that is the case, you can further restrict the index search by the lookup, i.e.,

| inputlookup host.csv
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk
        [ inputlookup host.csv
          | rename ip_address as ip
          | fields ip ]
      | fields ip risk score contact
      | dedup ip risk score contact ]
| table ip host risk score contact

There could be other ways to reduce the amount of data.  They all depend on your dataset and use case.
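A toy illustration (hypothetical rows, not Splunk) of why deduplicating the right side of the join shrinks it: many raw events can share the same (ip, risk, score, contact) combination, but the join only needs one row per combination.

```python
# Raw events, with a duplicated (ip, risk, score, contact) combination
raw_events = [
    {"ip": "10.1.0.5", "risk": "high", "score": 80, "contact": "ops"},
    {"ip": "10.1.0.5", "risk": "high", "score": 80, "contact": "ops"},  # duplicate
    {"ip": "10.2.0.9", "risk": "low",  "score": 10, "contact": "sec"},
]

# Equivalent of `| dedup ip risk score contact`: keep the first row per combo
seen = set()
deduped = []
for row in raw_events:
    key = (row["ip"], row["risk"], row["score"], row["contact"])
    if key not in seen:
        seen.add(key)
        deduped.append(row)

print(len(raw_events), "->", len(deduped))  # 3 -> 2
```

If your real data has a high duplication rate, the deduped right side can fall well under the 50K limit even when the raw event count does not.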
Hi @Ryan.Paredez - any idea on this? Thanks
Hi @yuanliu, I appreciate your help.  I accepted your solution.

I tried your solution and it worked; however, the subsearch on the index hit the 50k max rows limit, so I split the join based on subnets. Is this the right way to do it?

See below: I split it into 3 joins, each with a filter based on a subnet. The issue is that I don't know whether a subnet will hit the 50k max rows in the future, in which case I would have to adjust manually. Do you have any other suggestion? Thanks again.

| inputlookup host.csv
| rename ip_address as ip
| eval source="csv"
| join max=0 type=left ip
    [ search index=risk
      | fields ip risk score contact
      | search ip="10.1.0.0/16"
      | eval source="risk1" ]
| join max=0 type=left ip
    [ search index=risk
      | fields ip risk score contact
      | search ip="10.2.0.0/16"
      | eval source="risk2" ]
| join max=0 type=left ip
    [ search index=risk
      | fields ip risk score contact
      | search ip="10.3.0.0/16"
      | eval source="risk3" ]
| table ip, host, risk, score, contact
Hi, I'm trying to integrate Dynatrace with Splunk using the Dynatrace Add-on for Splunk. However, after the configuration I'm getting the error below: (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1143)'))). Has anyone experienced this, or does anyone know how to solve this certificate issue? FYI, I have updated the SSL certificate on both Splunk and Dynatrace, but it didn't help.
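"unable to get local issuer certificate" usually means the Python client cannot build the server's certificate chain up to a CA it trusts, e.g. because the intermediate/root CA is missing from the client's bundle. A minimal sketch (the bundle path is hypothetical, and this is not the add-on's own code) of how a Python SSL context can be pointed at a CA bundle that contains the issuer:

```python
import ssl

# The default context verifies certificates and hostnames
context = ssl.create_default_context()

# If the Dynatrace endpoint's issuer is not in the system store, load the
# CA chain explicitly (path is a placeholder for your environment):
# context.load_verify_locations(cafile="/path/to/ca-bundle.pem")

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

Disabling verification would also silence the error, but keeping CERT_REQUIRED and supplying the correct chain is the safer fix.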
Truncated Output (the message was too long):

C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  [SSL]
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  allowSslRenegotiation = true
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  certLogMaxCacheEntries = 10000
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  certLogRepeatFrequency = 1d
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  ecdhCurves = prime256v1, secp384r1, secp521r1
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  host = <Full Computer Name>
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  index = default
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  logCertificateData = true
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  sslQuietShutdown = false
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  sslVersions = tls1.2

C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  [WinEventLog://Application]
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
host = <Full Computer Name>
index = default
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  interval = 60
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  start_from = oldest

C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  [WinEventLog://DNS Server]
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
host = <Full Computer Name>
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  index = main
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  interval = 60
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  start_from = oldest

C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  [WinEventLog://Directory Service]
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
host = <Full Computer Name>
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  index = main
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  interval = 60
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  start_from = oldest

C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  [WinEventLog://ForwardedEvents]
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  current_only = 0
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
host = <Full Computer Name>
index = default
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  interval = 60
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  start_from = oldest

C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  [WinEventLog://Security]
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  checkpointInterval = 5
C:..\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf  current_only = 0
C:..\SplunkUniversalForwarder\etc\system\local\inputs.conf  disabled = 0
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:..\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
host = <Full Computer Name>
index = default
First, thanks for clearly illustrating raw input, desired output, and the logic to get from there.  Transaction is still the easiest way to go.  You just need to keep track of which value belongs to which eventtype.

Many people here are familiar with the traditional technique of using string concatenation.  I will show a more semantic approach afforded by the JSON functions introduced in 8.1.

| rename _raw as temp ``` only if you want to preserve _raw for later ```
| tojson eventtype, field1
| transaction startswith="eventtype=get" endswith="eventtype=update"
| eval _raw = split(_raw, " ")
| eval Before = json_extract(mvindex(_raw, 0), "field1"), After = json_extract(mvindex(_raw, 1), "field1")
| rename temp as _raw ``` only if you want to preserve _raw for later ```
| fields Before, After

Note: The above is not completely semantic, as I am also using the side effect of Splunk's default lexical order.

Here is an emulation for you to play with and compare with real data.

| makeresults format=csv data="_time, eventtype, sessionid, field1
10:06, update, session2, newvalue3
10:05, get, session2, newvalue2
09:15, update, session1, newvalue2
09:12, get, session1, newvalue1
09:10, get, session1, newvalue1
09:09, update, session1, newvalue1
09:02, get, session1, oldvalue1
09:01, get, session1, oldvalue1
08:59, get, session1, oldvalue1"
| eval _time = strptime("2024-08-22T" . _time, "%FT%H:%M")
``` data emulation above ```

Output from the above search gives:

Before     After      _time
newvalue2  newvalue3  2024-08-22 10:05:00
newvalue1  newvalue2  2024-08-22 09:12:00
oldvalue1  newvalue1  2024-08-22 09:02:00
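A plain-Python sketch (hypothetical event tuples, not a Splunk API) of the pairing logic the transaction command performs here: each "update" event is paired with the latest preceding "get", yielding the Before/After values.

```python
# Events in ascending time order: (time, eventtype, field1)
events = [
    ("08:59", "get", "oldvalue1"),
    ("09:01", "get", "oldvalue1"),
    ("09:02", "get", "oldvalue1"),
    ("09:09", "update", "newvalue1"),
    ("09:10", "get", "newvalue1"),
    ("09:12", "get", "newvalue1"),
    ("09:15", "update", "newvalue2"),
    ("10:05", "get", "newvalue2"),
    ("10:06", "update", "newvalue3"),
]

pairs = []
last_get = None
for t, etype, value in events:
    if etype == "get":
        last_get = (t, value)  # remember the latest "get" seen so far
    elif etype == "update" and last_get:
        pairs.append({"_time": last_get[0], "Before": last_get[1], "After": value})
        last_get = None        # transaction closed; wait for the next "get"

for p in pairs:
    print(p["_time"], p["Before"], "->", p["After"])
# 09:02 oldvalue1 -> newvalue1
# 09:12 newvalue1 -> newvalue2
# 10:05 newvalue2 -> newvalue3
```

The three emitted pairs match the three rows of the emulated search output above.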