All Posts

Nice example @dtburrows3! You can always get more than the 1500 in the example by using stats values to avoid the stats list() limit of 100, and prefixing the rand with the row index, e.g. start the search with | makeresults count=1500 | streamstats c | eval low=1, high=100, rand=printf("%05d:%d", c, round(((random()%'high')/'high')*('high'-'low')+'low')) | stats values(rand) as nums | eval cnt=0 | rex field=nums max_match=0 "(?<x>[^:]*):(?<nums>\d+)" | fields - x c ...
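As an aside, the list() cap itself is configurable in limits.conf on the search head. A minimal sketch, assuming you can edit limits.conf; the stanza and setting name are real, the value is illustrative:

# limits.conf
[stats]
# Maximum number of values the list() function will return (default 100).
# Raising this has memory implications, so the values()-plus-index
# workaround above is often the safer route.
list_maxsize = 2000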
You can't replace the lat/long, but you can add country to the log_subtype, i.e. | iplocation client_ip | eval type=log_subtype." (".Country.")" | geostats count by type
@inventsekar The page below is likewise broken. I tried the URL you suggested. That indicates that the problem is with all the other folders where the macro is present, right?
Hi All, I have created a detector that monitors our Splunk environment. I am trying to customize the alert message by passing {{dimensions.AWSUniqueId}}. When the alert notification is sent, this variable is empty. Can anyone please let me know why this is happening? Regards, PNV
Hi All, I recently installed/configured the "Microsoft Teams Add-on for Splunk" to ingest call logs and meeting info from Microsoft Teams. I have run into an issue I was hoping someone could help me with.

[What I would like to do]
Ingest call logs and meeting info from Microsoft Teams via the "Microsoft Teams Add-on for Splunk".

[What I did]
I followed the instructions and configured the "Subscription", "User Reports", "Call Reports" and "Webhook". Instructions: https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-microsoft-teams-data.html

[Issue]
"User Reports" and "Webhooks" have worked, but "Subscription" and "Call Reports" have not. As a result, Teams logs are not ingested. I have granted all of the required permissions in Teams/Azure based on the instructions.

[Error logs]
I checked the internal logs and found many errors, but reading them did not reveal a clear cause. Among the logged problems were the following:

From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA_MS_Teams#configs/conf-ta_ms_teams_settings, user=proxy.

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

[Environment]
Add-On Version: 1.1.3
Splunk Enterprise Version: 9.1.2
The Add-On is installed on Splunk Enterprise.

Are the call logs and subscriptions failing because of the errors in the error log? Or does the webhook URL have to be https to work properly? If anyone knows the reason, let me know. Any help would be greatly appreciated. Thanks,
On Splunk Enterprise 9.0.4, we are using the Proofpoint Isolation TA to download Isolation data into Splunk from the Proofpoint Isolation cloud. However, when we activated SSL decryption on the URLs at our firewall for other necessary reasons, the TA stopped working, giving these errors in the logs:

2024-01-09 19:09:52,554 WARNING pid=9240 tid=MainThread file=connectionpool.py:urlopen:811 | Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)'))': /api/v2/reporting/usage-data?from=2023-11-29T01%3A17%3A33.000&to=2024-01-10T01%3A09%3A52.188&pageSize=10000

2024-01-09 19:09:52,657 ERROR pid=9240 tid=MainThread file=base_modinput.py:log_error:309 | Call to send_http_request failed: HTTPSConnectionPool(host='urlisolation.com', port=443): Max retries exceeded with url: /api/v2/reporting/usage-data?from=2023-11-29T01%3A17%3A33.000&to=2024-01-10T01%3A09%3A52.188&pageSize=10000 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

The error makes sense, since the firewall's root certificate is not (yet) a trusted root for this Splunk instance. How do I properly configure Splunk (or, perhaps, the Python client) to recognize this firewall root certificate as valid, or at the very least to stop validating the certificates provided by the outside server? The latter would be my least-preferred choice, obviously.
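One avenue worth noting here (an assumption, not a confirmed fix for this TA): the Python requests library honors the REQUESTS_CA_BUNDLE environment variable, and splunk-launch.conf is Splunk's supported place to set environment variables for splunkd and its child processes, which include Python modular inputs. A minimal sketch, assuming the TA uses requests with default verification; the bundle path and file name below are illustrative:

# $SPLUNK_HOME/etc/splunk-launch.conf
# Point Python requests at a CA bundle that includes the firewall's
# root certificate (e.g. a copy of the default certifi bundle with the
# firewall's PEM appended). Requires a splunkd restart to take effect.
REQUESTS_CA_BUNDLE=/opt/splunk/etc/auth/firewall-ca-bundle.pem

Whether the TA's bundled copy of requests actually picks this up depends on how it builds its HTTP sessions, so treat this as a starting point rather than a guaranteed fix.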
You can try putting a makeresults command at the start of your third search and it may work. Assuming the tokens $Denominator$ and $Numerator$ both populate as expected, running this SPL on your dashboard should do it:

| makeresults
| eval result=round((tonumber("$Numerator$")/tonumber("$Denominator$"))*100)."%"
| fields - _time

(Screenshot of a POC on a local instance omitted.)
Hi @yvan-rostand As per my understanding, you should use props.conf and transforms.conf as well. Maybe you could try this idea - forward data to 3rd party systems: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd
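For illustration, a minimal sketch of the pattern that doc describes - routing a given sourcetype to a syslog receiver (Splunk cannot write to a GCP bucket directly); the group name, host, and sourcetype below are placeholders:

# outputs.conf - define a syslog output group
[syslog:my_syslog_group]
server = receiver.example.com:514

# props.conf - attach a routing transform to the sourcetype
[my_sourcetype]
TRANSFORMS-routing = route_to_syslog

# transforms.conf - send every matching event to the syslog group
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

An intermediary behind that receiver (something that can upload to GCS) would then handle the final hop to the bucket.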
I have two data sources or searches that return a number. They are used to supply data to radial components. I've ticked the box so both are also available as tokens, Numerator and Denominator. I'd like a dashboard component that expresses the ratio of those numbers as a percent. How do I do this? I've tried creating a third search that returns the value, but that does not work: | eval result=round("$Denominator$" / "$Numerator$" * 100)."%"
Hi @4nton10 Going through geostats may be a long route; maybe try "choropleth" maps instead: https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/ChoroplethGenerate
Hi, I have created a custom metric to monitor the tablespace usage for Oracle databases that selects two columns, the tablespace name and used percent: "select tablespace_name,used_percent from dba_tablespace_usage_metrics". In the metrics browser it shows me a list of items, which are the tablespaces. On the health rule I try to specify the relative metric path, but it is not being evaluated. I don't want to use the first option because new tablespaces are constantly created, and I would like this to work in a dynamic way. My intention is to send an alert when the used_percent column is above a certain threshold for any of the tablespaces.
Oddly, even the Call Record Monitoring is not an option under the Teams menu, when in the Microsoft 365 App for Splunk. The only options are:

Teams Activity Overview
Teams Security Monitoring
Teams Activity Report
Teams Activity Audit
This is a really neat problem! Does doing something like this get you where you are trying to go?

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach mode=multivalue nums
    [ | eval moving_window_size=3,
        summation_json=if(
            mvcount(mvindex(nums,cnt,cnt+('moving_window_size'-1)))=='moving_window_size',
            mvappend(
                'summation_json',
                json_object(
                    "set", mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)),
                    "sum", sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))
                    )
                ),
            'summation_json'
            ),
        cnt='cnt'+1
        ]

End result looks something like this: (screenshot omitted)

I'm sure this can be standardized more, and I'm not sure how you want the final results to be formatted, but you should be able to parse out the final MV JSON objects to get what you need out of them.

Update: With the addition of the field "moving_window_size" it is a bit more standardized. And here it is in a slightly different format (summations associated with their own fields):

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach mode=multivalue nums
    [ | eval moving_window_size=3,
        iter_count="iteration_".'cnt',
        summation_json=if(
            mvcount(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))=='moving_window_size',
            if(
                isnull(summation_json),
                json_object('iter_count', sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))),
                json_set(summation_json, 'iter_count', sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1))))
                ),
            'summation_json'
            ),
        cnt='cnt'+1
        ]
| fromjson summation_json
| fields - summation_json, iter_count, cnt
| fields + nums, iteration_*

And this SPL to try and simulate your original use-case (also added some additional context in the output):

| makeresults count=1500
| eval low=1, high=100, rand=round(((random()%'high')/'high')*('high'-'low')+'low')
| stats list(rand) as nums
| eval cnt=0
| foreach mode=multivalue nums
    [ | eval moving_window_size=5,
        summation_json=if(
            mvcount(mvindex(nums,cnt,cnt+('moving_window_size'-1)))=='moving_window_size',
            mvappend(
                'summation_json',
                json_object(
                    "set", mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)),
                    "sum", sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1))),
                    "average", sum(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))/'moving_window_size',
                    "min", min(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1))),
                    "max", max(mvindex('nums', 'cnt', 'cnt'+('moving_window_size'-1)))
                    )
                ),
            'summation_json'
            ),
        cnt='cnt'+1
        ]
| eval average_sum=sum(mvmap(summation_json, tonumber(spath(summation_json, "sum"))))/mvcount(summation_json),
    min_sum=min(mvmap(summation_json, tonumber(spath(summation_json, "sum")))),
    max_sum=max(mvmap(summation_json, tonumber(spath(summation_json, "sum"))))

You can see by the screenshot (omitted) that I hit some Splunk limits when trying to put together a MV field with 1,500 entries (truncates to 250). But other than that it seems to work.
Hello, The description is not very descriptive. Hopefully, the example and data will be. I have a list of 1500 numbers. I need to calculate the sum in increments of 5 numbers. However, the numbers will overlap (be used more than once). Using this code of only 10 values:

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach nums
    [| eval nums_set_of_3 = mvindex(nums,cnt,+2)
    | eval sum_nums_{cnt} = sum(mvindex(nums_set_of_3,cnt,+2))
    | eval cnt = cnt + 1]

The first sum (1st value + 2nd value + 3rd value, or 1 + 2 + 3) = 6. The second sum (2nd value + 3rd value + 4th value, or 2 + 3 + 4) = 9. The third sum would be (3rd value + 4th value + 5th value, or 3 + 4 + 5) = 12. And so on. The above code only makes it through one pass, the first sum. Thanks and God bless, Genesius
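As a side note (not from the thread): if each number can live in its own event rather than in one multivalue field, streamstats with a sliding window is the built-in way to express overlapping sums. A minimal sketch on the 10-value example, with window=3 matching the sums described above:

| makeresults
| eval nums=split("1,2,3,4,5,6,7,8,9,10", ",")
| mvexpand nums
| eval num=tonumber(nums)
``` sum over the current event and the two before it ```
| streamstats window=3 current=true sum(num) as moving_sum count as n
``` drop the first two rows, whose windows are incomplete ```
| where n>=3
| table num moving_sum

This yields 6, 9, 12, ... one row per full window, and should scale to 1500 values without running into multivalue-size limits.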
Hi everyone. I am generating a cluster map to make a count by log_subtype, and the map itself shows me the country and the latitude and longitude data. The question here is whether I can replace the latitude and longitude data with the name of the country.

I have the query as follows:

| iplocation client_ip
| geostats count by log_subtype
Just want to pop in and let you know that I think this SPL you shared is not actually capturing the minutes>55 but including the minutes between 0 and 9, just because of the way that Splunk buckets time windows. You should be able to see this demonstrated with this SPL:

<base_search>
| eval true_minute=strftime(_time, "%M"), true_hour=strftime(_time, "%H")
| bin span=10m _time
| eval bucketed_minute=strftime(_time,"%M"), bucketed_hour=strftime(_time, "%H")
| where 'bucketed_minute'>54 OR 'bucketed_minute'<6
| dedup true_minute

(Results screenshot omitted.) To stay in the spirit of the simpler SPL and to build on your methodology, I think something like this would do the trick. Here is sample code as a POC of the minutes being bucketed properly:

<base_search>
| where tonumber(strftime(_time, "%M"))<=5 OR tonumber(strftime(_time, "%M"))>=55
| eval date_minute=strftime(_time, "%M"),
    date_hour=strftime(_time, "%H"),
    original_time=strftime(_time, "%Y-%m-%d %H:%M:%S"),
    snap_time=case(
        'date_minute'>=55, strftime(relative_time(_time, "@h+1h"), "%Y-%m-%d %H:%M:%S"),
        'date_minute'<=5, strftime(relative_time(_time, "@h"), "%Y-%m-%d %H:%M:%S")
        )
| fields - _time
| fields + original_time, snap_time, date_hour, date_minute

And to use this method with your original ask, it would look something like this:

<base_search>
| where tonumber(strftime(_time, "%M"))<=5 OR tonumber(strftime(_time, "%M"))>=55
| eval date_minute=strftime(_time, "%M"),
    date_hour=strftime(_time, "%H"),
    _time=case(
        'date_minute'>=55, relative_time(_time, "@h+1h"),
        'date_minute'<=5, relative_time(_time, "@h")
        )
| stats count as count by _time

For example, both 07:57 AM and 08:04 AM would fall into the 8:00 AM bucket in the stats count by.
The problem is Splunk can't do that.  An HF can forward to another Splunk instance or to a syslog receiver.  They cannot send directly to a storage device/service.
Looking at the conf files and navigation for app https://splunkbase.splunk.com/app/3786, it looks like the "Teams Call QoS" dashboard should be located, from the App navigation menu, at:

Teams ==> Call Record Monitoring ==> Teams Call QoS

There is also a file at <app>/default/data/ui/views/teams_call_qos.xml in the most recent version I just looked at, so I'm not really sure why it wouldn't be available.
Hi, I am trying to forward logs from a heavy forwarder to a GCP bucket using outputs.conf, but it has been unsuccessful (no logs seen in the bucket). Not sure if that has to do with my config file or something else. Can anyone help me with an example? This is my outputs.conf and I don't know what is wrong.

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers
forceTimebasedAutoLB = true

[tcpout:bucket_index]
indexAndForward = true
forwardedindex.0.whitelist = my_index

[bucket]
compressed = false
json_escaping = auto
google_storage_key = "12345abcde"
google_storage_bucket = my-gcp-bucket
path = /path/my-gcp-bucket
route = bucket_index
We have both the Microsoft 365 App for Splunk and Microsoft Teams Add-on for Splunk installed in our Splunk cloud instance. However, we do not have the Teams Call QoS dashboard option seen in the screenshots here: https://splunkbase.splunk.com/app/4994. Has that feature been removed? Are we missing something?