All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


On Splunk Enterprise 9.0.4, we are using the Proofpoint Isolation TA to download Isolation data into Splunk from the Proofpoint Isolation cloud. However, when we activated SSL decryption on the URLs at our firewall (for other, necessary reasons), the TA stopped working, giving these errors in the logs:

2024-01-09 19:09:52,554 WARNING pid=9240 tid=MainThread file=connectionpool.py:urlopen:811 | Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)'))': /api/v2/reporting/usage-data?from=2023-11-29T01%3A17%3A33.000&to=2024-01-10T01%3A09%3A52.188&pageSize=10000

2024-01-09 19:09:52,657 ERROR pid=9240 tid=MainThread file=base_modinput.py:log_error:309 | Call to send_http_request failed: HTTPSConnectionPool(host='urlisolation.com', port=443): Max retries exceeded with url: /api/v2/reporting/usage-data?from=2023-11-29T01%3A17%3A33.000&to=2024-01-10T01%3A09%3A52.188&pageSize=10000 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

The error makes sense, since the firewall's certificate is not (yet) a trusted root for this Splunk instance. How do I properly configure Splunk (or perhaps the Python client) to recognize this firewall root certificate as valid, or, at the very least, to stop validating the certificates presented by the outside server? The latter would be my least-preferred choice, obviously.
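One direction worth exploring, sketched below in Python (the language Splunk add-ons run on), is making the client trust the firewall's root CA rather than disabling verification. The file path here is a made-up illustration, not a setting taken from the Proofpoint TA itself; many add-ons use the bundled `requests` library, which honors the REQUESTS_CA_BUNDLE environment variable, and lower-level clients can be handed an `ssl` context that loads the extra CA:

```python
import os
import ssl

# Hypothetical path to the firewall's root CA exported in PEM format --
# adjust to wherever you place it on the heavy forwarder / search head.
FIREWALL_CA = "/opt/splunk/etc/auth/firewall_root_ca.pem"

# Option 1: the bundled `requests` library honors this environment variable,
# so add-ons built on it will verify against the firewall CA.
os.environ["REQUESTS_CA_BUNDLE"] = FIREWALL_CA

# Option 2: for raw ssl/urllib use, build a context that trusts the extra CA
# while keeping certificate verification switched on.
def make_context(ca_file=None):
    """Return a verifying SSL context, optionally trusting an extra CA."""
    ctx = ssl.create_default_context()
    if ca_file and os.path.exists(ca_file):
        ctx.load_verify_locations(cafile=ca_file)
    return ctx

ctx = make_context()  # no extra CA on this machine; verification stays on
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Either way, verification remains enabled; only the trust store grows, which is preferable to turning validation off.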
You can try putting a makeresults command at the start of your third search and it may work. Assuming the tokens $Denominator$ and $Numerator$ both populate as expected, running this SPL on your dashboard should do it:

| makeresults
| eval result=round((tonumber("$Numerator$")/tonumber("$Denominator$"))*100)."%"
| fields - _time

(Screenshot of a proof of concept on a local instance omitted.)
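For what it's worth, the arithmetic in that eval can be sanity-checked outside Splunk. This is a small Python sketch of the same computation with made-up token values; one caveat is that Python's round() uses banker's rounding on exact .5 values, which may differ from SPL's round(), so the two only match away from .5 boundaries:

```python
def ratio_percent(numerator: str, denominator: str) -> str:
    """Mimic the SPL eval: round((tonumber(num)/tonumber(den))*100)."%"."""
    return f"{round(float(numerator) / float(denominator) * 100)}%"

# Example token values (made up):
print(ratio_percent("30", "40"))  # 75%
print(ratio_percent("1", "3"))    # 33%
```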
Hi @yvan-rostand As per my understanding, you should use props.conf and transforms.conf as well. Maybe you could also try this idea — forwarding data to third-party systems: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd
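As a sketch of the props/transforms idea from the linked docs (the stanza names, sourcetype, and receiver host below are assumptions for illustration; the general mechanism is a transform that sets _SYSLOG_ROUTING to a syslog output group):

```ini
# props.conf -- attach a routing transform to the sourcetype (name assumed)
[my_sourcetype]
TRANSFORMS-route_third_party = send_to_syslog

# transforms.conf -- match every event and point it at a syslog output group
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party_syslog

# outputs.conf -- the third-party receiver (host/port assumed)
[syslog:third_party_syslog]
server = third-party.example.com:514
```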
I have two data sources or searches that return a number. They are used to supply data to radial components. I've ticked the box so both are also available as tokens, Numerator and Denominator. I'd like a dashboard component that expresses the ratio of those numbers as a percent. How do I do this? I've tried creating a third search that returns the value, but that does not work:

| eval result=round("$Denominator$" / "$Numerator$" * 100)."%"
Hi @4nton10 Going through geostats may be a long route; maybe try "choropleth" maps instead: https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/ChoroplethGenerate
Hi, I have created a custom metric to monitor tablespace usage for Oracle databases. It selects two columns, the tablespace name and the used percent: "select tablespace_name, used_percent from dba_tablespace_usage_metrics". In the metrics browser it shows me a list of items, which are the tablespaces. On the health rule I try to specify the relative metric path, but it is not being evaluated. I don't want to use the first option because new tablespaces are constantly created, and I would like this to work dynamically. My intention is to send an alert when the used_percent column is above a certain threshold for any of the tablespaces.
Oddly, even Call Record Monitoring is not an option under the Teams menu in the Microsoft 365 App for Splunk. The only options are:

Teams Activity Overview
Teams Security Monitoring
Teams Activity Report
Teams Activity Audit
This is a really neat problem! Does doing something like this get you where you are trying to go?

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach mode=multivalue nums
    [ | eval moving_window_size=3,
        summation_json=if(
            mvcount(mvindex(nums,cnt,cnt+('moving_window_size'-1)))=='moving_window_size',
            mvappend(
                'summation_json',
                json_object(
                    "set", mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)),
                    "sum", sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))
                )
            ),
            'summation_json'
        ),
        cnt='cnt'+1 ]

(End result shown in a screenshot, omitted.) I'm sure this can be standardized more, and I'm not sure how you want the final results formatted, but you should be able to parse out the final MV JSON objects to get what you need out of them.

Update: With the addition of the field "moving_window_size" it is a bit more standardized. And here it is in a slightly different format (summations associated with their own fields):

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach mode=multivalue nums
    [ | eval moving_window_size=3,
        iter_count="iteration_".'cnt',
        summation_json=if(
            mvcount(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))=='moving_window_size',
            if(
                isnull(summation_json),
                json_object('iter_count', sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))),
                json_set(summation_json, 'iter_count', sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1))))
            ),
            'summation_json'
        ),
        cnt='cnt'+1 ]
| fromjson summation_json
| fields - summation_json, iter_count, cnt
| fields + nums, iteration_*

And this SPL tries to simulate your original use case (with some additional context added to the output):

| makeresults count=1500
| eval low=1, high=100, rand=round(((random()%'high')/'high')*('high'-'low')+'low')
| stats list(rand) as nums
| eval cnt=0
| foreach mode=multivalue nums
    [ | eval moving_window_size=5,
        summation_json=if(
            mvcount(mvindex(nums,cnt,cnt+('moving_window_size'-1)))=='moving_window_size',
            mvappend(
                'summation_json',
                json_object(
                    "set", mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)),
                    "sum", sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1))),
                    "average", sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))/'moving_window_size',
                    "min", min(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1))),
                    "max", max(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))
                )
            ),
            'summation_json'
        ),
        cnt='cnt'+1 ]
| eval average_sum=sum(mvmap(summation_json, tonumber(spath(summation_json, "sum"))))/mvcount(summation_json),
       min_sum=min(mvmap(summation_json, tonumber(spath(summation_json, "sum")))),
       max_sum=max(mvmap(summation_json, tonumber(spath(summation_json, "sum"))))

You can see by the screenshot (omitted) that I hit some Splunk limits when trying to put together an MV field with 1,500 entries (it truncates to 250). But other than that it seems to work.
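If it helps to see the intended result outside SPL, here is a tiny Python sketch of the same overlapping-window sum that the foreach emulates (window of 3 over 1..10, matching the small example; the function name is my own):

```python
def sliding_sums(nums, window=3):
    """Overlapping-window sums: [1,2,3] -> 6, [2,3,4] -> 9, and so on."""
    return [sum(nums[i:i + window]) for i in range(len(nums) - window + 1)]

print(sliding_sums(list(range(1, 11))))  # [6, 9, 12, 15, 18, 21, 24, 27]
```

With a window of 5 over 1,500 values this yields 1,496 sums, with no MV-field truncation limit to worry about.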
Hello, The description is not very descriptive; hopefully, the example and data will be. I have a list of 1500 numbers. I need to calculate the sum in increments of 5 numbers. However, the numbers will overlap (be used more than once). Using this code of only 10 values:

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach nums
    [| eval nums_set_of_3 = mvindex(nums,cnt,+2)
     | eval sum_nums_{cnt} = sum(mvindex(nums_set_of_3,cnt,+2))
     | eval cnt = cnt + 1]

The first sum (1st value + 2nd value + 3rd value, or 1 + 2 + 3) = 6. The second sum (2nd value + 3rd value + 4th value, or 2 + 3 + 4) = 9. The third sum would be (3rd value + 4th value + 5th value, or 3 + 4 + 5) = 12. And so on. The above code only makes it through one pass, the first sum. Thanks and God bless, Genesius
Hi everyone. I am generating a cluster map that makes a count by log_subtype, and the map itself shows me the country and the latitude and longitude data. The question is whether I can replace the latitude and longitude data with the name of the country. I have the query as follows:

| iplocation client_ip
| geostats count by log_subtype
Just want to pop in and let you know that I think the SPL you shared is not actually capturing the minutes >55, but is including the minutes between 0 and 9, just because of the way that Splunk buckets time windows. You should be able to see this demonstrated with this SPL:

<base_search>
| eval true_minute=strftime(_time, "%M"), true_hour=strftime(_time, "%H")
| bin span=10m _time
| eval bucketed_minute=strftime(_time,"%M"), bucketed_hour=strftime(_time, "%H")
| where 'bucketed_minute'>54 OR 'bucketed_minute'<6
| dedup true_minute

(Results screenshot omitted.) To stay in the spirit of the simpler SPL and to build on your methodology, I think something like this would do the trick. Here is sample code as a POC of the minutes being bucketed properly:

<base_search>
| where tonumber(strftime(_time, "%M"))<=5 OR tonumber(strftime(_time, "%M"))>=55
| eval date_minute=strftime(_time, "%M"),
       date_hour=strftime(_time, "%H"),
       original_time=strftime(_time, "%Y-%m-%d %H:%M:%S"),
       snap_time=case(
           'date_minute'>=55, strftime(relative_time(_time, "@h+1h"), "%Y-%m-%d %H:%M:%S"),
           'date_minute'<=5, strftime(relative_time(_time, "@h"), "%Y-%m-%d %H:%M:%S")
       )
| fields - _time
| fields + original_time, snap_time, date_hour, date_minute

And to use this method with your original ask, it would look something like this:

<base_search>
| where tonumber(strftime(_time, "%M"))<=5 OR tonumber(strftime(_time, "%M"))>=55
| eval date_minute=strftime(_time, "%M"),
       date_hour=strftime(_time, "%H"),
       _time=case(
           'date_minute'>=55, relative_time(_time, "@h+1h"),
           'date_minute'<=5, relative_time(_time, "@h")
       )
| stats count as count by _time

Examples: both 07:57 AM and 08:04 AM would fall into the 8:00 AM bucket in the stats count by.
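The snapping logic in that case() can be illustrated outside Splunk as well. A minimal Python sketch (the timestamps are made-up examples): minutes :55-:59 snap forward to the next hour, minutes :00-:05 snap back to the hour, and everything else is filtered out:

```python
from datetime import datetime, timedelta

def snap_to_hour(ts: datetime):
    """Snap :55-:59 forward to the next hour and :00-:05 back to the hour;
    return None for minutes outside the window (they get filtered out)."""
    top_of_hour = ts.replace(minute=0, second=0, microsecond=0)
    if ts.minute >= 55:
        return top_of_hour + timedelta(hours=1)
    if ts.minute <= 5:
        return top_of_hour
    return None

print(snap_to_hour(datetime(2024, 1, 9, 7, 57)))  # 2024-01-09 08:00:00
print(snap_to_hour(datetime(2024, 1, 9, 8, 4)))   # 2024-01-09 08:00:00
```

Both example times land in the same 8:00 bucket, mirroring the relative_time("@h+1h") / relative_time("@h") pair above.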
The problem is Splunk can't do that.  An HF can forward to another Splunk instance or to a syslog receiver.  They cannot send directly to a storage device/service.
Looking at the conf files and navigation for app https://splunkbase.splunk.com/app/3786, it looks like the "Teams Call QoS" dashboard should be located, from the App navigation menu, at:

Teams ==> Call Record Monitoring ==> Teams Call QoS

There is also a file at <app>/default/data/ui/views/teams_call_qos.xml in the most recent version I just looked at, so I'm not really sure why it wouldn't be available.
Hi, I am trying to forward logs from a heavy forwarder to a GCP bucket using outputs.conf, but it has been unsuccessful (no logs seen in the bucket). I am not sure if that has to do with my config file or something else. Can anyone help me with an example? This is my outputs.conf and I don't know what is wrong:

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers
forceTimebasedAutoLB = true

[tcpout:bucket_index]
indexAndForward = true
forwardedindex.0.whitelist = my_index

[bucket]
compressed = false
json_escaping = auto
google_storage_key = "12345abcde"
google_storage_bucket = my-gcp-bucket
path = /path/my-gcp-bucket
route = bucket_index
We have both the Microsoft 365 App for Splunk and Microsoft Teams Add-on for Splunk installed in our Splunk cloud instance. However, we do not have the Teams Call QoS dashboard option seen in the screenshots here: https://splunkbase.splunk.com/app/4994. Has that feature been removed? Are we missing something?
Hi @hieuba I assume you created a custom visualization in Splunk Classic Dashboards and would now like to recreate it in Splunk Dashboard Studio; is that correct? Could you please tell us more about the custom dashboard you are looking to create in Dashboard Studio? Thanks.
Does anyone know if version 7.x of Threat Defense Manager (f.k.a. Firepower Management Center) is compatible with the latest version of Cisco's eStreamer add-on? https://splunkbase.splunk.com/app/3662
We had the same problem initially and found more details about code command usage under \TA-code\default\searchbnf.conf. We are able to decode the URL or process using:

| code method=base64 field=encodedcommand action=decode destfield=decoded_command key=abc123

but when we stats the decoded_command it gives the result as "p". I tried the base64 conversion matrix macro as well; it does the same "p" thing. Can anyone help?
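One guess at the "p" symptom, based on a common pattern rather than on this TA's internals: PowerShell's -EncodedCommand is base64 over UTF-16LE text, so the decoded bytes contain a NUL after every ASCII character, and anything that treats them as a NUL-terminated or plain-ASCII string shows only the first letter. A Python sketch of both the failure mode and the fix (the sample command is made up):

```python
import base64

# Hypothetical encoded command, produced the way PowerShell's
# -EncodedCommand does: base64 over UTF-16LE text.
encoded = base64.b64encode("ping example.com".encode("utf-16le")).decode()

raw = base64.b64decode(encoded)

# Naive decode: UTF-16LE puts a NUL after each ASCII character, so reading
# up to the first NUL yields just the first letter.
naive = raw.split(b"\x00", 1)[0].decode()
print(naive)                   # p

# Correct decode: interpret the bytes as UTF-16LE.
print(raw.decode("utf-16le"))  # ping example.com
```

If this is what is happening, decoding the field as UTF-16LE (or stripping the NUL bytes) before running stats should recover the full command.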
First off, I would suggest doing what @sshelly_splunk said if possible. If that's not possible, you can try this method with SPL.

I see this question come up a lot, and people usually respond with "it's complicated", and it is. With that said, I have been working on standardizing a solution using macros and think I have a good first iteration worked out, though I'm sure it still needs more regression testing.

Here is what the results look like using your sample timestamp, which is assumed to be GMT; because the querying user's timezone preference is set to something else, the epoch conversion isn't working as expected. The inputs of the first macro `convert_timestamp_to_epoch(3)` are:

$timestamp_field$ ----> REPORTED_DATE
$timestamp_format$ ----> %Y-%m-%d %H:%M:%S.%1N
$assumed_timezone$ ----> GMT

This first macro should convert a timestamp to a standardized epoch time, using either a timezone found in the timestamp itself or, if none is found, the "assumed_timezone" in the 3rd argument. You can also leave the 3rd argument blank, in which case the catch-all timezone is the user's configured timezone preference.

The second macro `convert_epoch_to_specific_timezone(3)` has the input args:

$epoch$ ----> standardized_epoch (the default field name of the previous macro's output)
$timestamp_format$ ----> %Y-%m-%d %H:%M:%S.%1N
$output_timezone$ ----> EST

This macro takes in an epoch value and returns a human-readable timestamp set to whatever timezone is requested in the 3rd argument (that's the idea, at least). Using the two macros together should be able to convert any timestamp to another with the desired timezone association. If you are interested in the macros, shoot me a message and I can get them packaged up for you and share.
In the meantime, why don't you try appending "+0000" to your REPORTED_DATE and converting to epoch including the timezone specifier? Example:

| eval REPORTED_DATE2=strptime('REPORTED_DATE'."+0000", "%Y-%m-%d %H:%M:%S.%1N%z")
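The same trick can be sanity-checked in Python: appending an explicit "+0000" offset anchors the parse to GMT regardless of the local timezone preference. The sample timestamp below is made up, and Python's %f stands in for SPL's %1N subsecond specifier:

```python
from datetime import datetime, timedelta, timezone

REPORTED_DATE = "2024-01-09 19:09:52.5"  # made-up sample, assumed to be GMT

# Mirror of the SPL eval: strptime over the value with "+0000" appended and
# a %z specifier, so the result is timezone-aware (UTC) rather than being
# interpreted in the caller's local timezone.
dt = datetime.strptime(REPORTED_DATE + "+0000", "%Y-%m-%d %H:%M:%S.%f%z")

print(dt.utcoffset() == timedelta(0))  # True: parsed as GMT
epoch = dt.timestamp()                 # unambiguous epoch seconds
```

Once the value is an epoch, rendering it back in any target timezone is a separate, purely presentational step.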
I've tried this, but without the "as _time". Now it works perfectly. Thank you very much!