All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I have seen a few of the spath topics around, but wasn't able to understand them well enough to make it work for my data. I would like to create a line chart using the pointlist values - each entry contains a timestamp in epoch milliseconds and a CPU %.

This is the search I tried, but it is not extracting the data as expected:

index="splunk_test" source="test.json" | spath output=pointlist path=series{}.pointlist{}{} | mvexpand pointlist | table pointlist

Please see the sample JSON below.

{"status": "ok", "res_type": "time_series", "resp_version": 1, "query": "system.cpu.idle{*}", "from_date": 1698796800000, "to_date": 1701388799000, "series": [{"unit": [{"family": "percentage", "id": 17, "name": "percent", "short_name": "%", "plural": "percent", "scale_factor": 1.0}, null], "query_index": 0, "aggr": null, "metric": "system.cpu.idle", "tag_set": [], "expression": "system.cpu.idle{*}", "scope": "*", "interval": 14400, "length": 180, "start": 1698796800000, "end": 1701388799000, "pointlist": [[1698796800000.0, 67.48220718526889], [1698811200000.0, 67.15981521730248], [1698825600000.0, 67.07217666403122], [1698840000000.0, 64.72434584884627], [1698854400000.0, 64.0411289094932], [1698868800000.0, 64.17585938553243], [1698883200000.0, 64.044969119166], [1698897600000.0, 63.448143595246194], [1698912000000.0, 63.80226399404451], [1698926400000.0, 63.93216493520908], [1698940800000.0, 63.983679174088145], [1701331200000.0, 63.3783379315815], [1701345600000.0, 63.45321248782884], [1701360000000.0, 63.452383398041064], [1701374400000.0, 63.46314971048991]], "display_name": "system.cpu.idle", "attributes": {}}], "values": [], "times": [], "message": "", "group_by": []}

Can you please help me with how I can achieve this? Thank you. Regards, Madhav
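A minimal sketch of one possible approach (untested; it assumes that stopping the spath path at series{}.pointlist{} returns each inner [timestamp, value] pair as a raw JSON string that rex can pick apart):

index="splunk_test" source="test.json"
| spath output=point path=series{}.pointlist{}
| mvexpand point
``` each point looks like [1698796800000.0, 67.482...]; extract the two numbers ```
| rex field=point "\[(?<ts>[\d.]+),\s*(?<cpu>[\d.]+)\]"
``` the timestamps are epoch milliseconds, so divide by 1000 for _time ```
| eval _time=tonumber(ts)/1000
| timechart span=4h avg(cpu) AS cpu_idle

The 4h span matches the 14400-second interval in the sample; adjust as needed.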
Hi, I need to find a way to present all alerts in a dashboard (Classic/Studio). Users don't want to get an email for each alert; they prefer to see all the alerts on one page (maybe in a table), together with each alert's last result, and maybe to click on an alert and get its last search. Is it possible to create an alerts dashboard? Thanks, Maayan
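A sketch of one possible starting point (untested; field names may vary by Splunk version, and it assumes the alerts are saved with "Add to Triggered Alerts" enabled so they appear in the triggered-alerts list):

| rest /services/alerts/fired_alerts/-
| table savedsearch_name trigger_time severity sid
``` savedsearch_name can drive a drilldown to the alert's search; the sid can feed loadjob to show the last result ```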
Hi, I need to find all time intervals for each machine where there is no data (no row for Name). (The goal is to create an alert if there was no data in a time interval for a machine.) For example, if we look at one day and machine X: if there was data in the intervals 8:00-10:00 and 10:00-12:00, I need to return X and the rest of the intervals (12:00-1:00, 1:00-2:00, ...). I wrote the following command:

| chart count(Name) over machine by time_interval

I get a table with all intervals and machines, where a cell is 0 if there is no data. I want to return all cells where cell=0 (I need the interval and machine where cell=0), but I didn't succeed. I also tried to save the query and do a left join, but it doesn't work. It's a very simple mission; can someone help me with that? Thanks, Maayan
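A sketch of one way to pull the zero cells back out of that chart (untested; machine and time_interval are the field names from the question):

| chart count(Name) over machine by time_interval
``` untable converts the matrix back into one row per machine/interval pair ```
| untable machine time_interval cnt
| where cnt=0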
Query should return last/latest available data when there is no data for the selected time range
I am trying to set up custom user data to capture the user id of the user from the Ajax request payload, via both the HTTP method and the URL method, but I am unable to get either method to work, and the data is not showing up in the Network tab under developer tools. Could someone help me work out what the issue could be?
Hi, I have to create correlation searches in Splunk ES. My cron schedule will be */60 * * * *. Is it better to use a real-time schedule or a continuous schedule? Is it necessary to fill in the time range (start time and end time)? Last question: if an alert event exists, does it mean that this event will be created many times in the Incident Review dashboard? I need to create just one incident for the same alert. How can I do this? Thanks in advance
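On the last question: correlation searches support throttling, which suppresses repeated notable events for the same alert within a window. A sketch of the equivalent savedsearches.conf settings (the ES editor exposes this as Throttling; the period and field list here are illustrative):

alert.suppress = 1
# suppress window: one day
alert.suppress.period = 86400s
# dedupe per entity rather than globally
alert.suppress.fields = src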
Hi All, I have created a detector that monitors a Splunk environment. I am trying to customize the alert message and pass {{dimensions.AWSUniqueId}}, but when the alert notification is sent, this variable is empty. Can anyone please let me know why this is happening? Regards, PNV
Hi All, I recently installed/configured the "Microsoft Teams Add-on for Splunk" to ingest call logs and meeting info from Microsoft Teams. I have run into an issue I was hoping someone could help me with.

[What I would like to do]
Ingest call logs and meeting info from Microsoft Teams via the "Microsoft Teams Add-on for Splunk".

[What I did]
I followed the instructions and configured the "Subscription", "User Reports", "Call Reports" and "Webhook" inputs.
Instructions: https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-microsoft-teams-data.html

[Issue]
"User Reports" and "Webhooks" have worked, but "Subscription" and "Call Reports" have not. As a result, Teams logs are not ingested. I have granted all of the required permissions in Teams/Azure based on the instructions.

[Error logs]
I checked the internal logs and found many errors, but reading them did not reveal a clear cause. Among the logged problems were the following:

From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA_MS_Teams#configs/conf-ta_ms_teams_settings, user=proxy.

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

[Environment]
Add-On Version: 1.1.3
Splunk Enterprise Version: 9.1.2
The add-on is installed on Splunk Enterprise.

Are the call log and subscription failures caused by the errors in the log? Or does the webhook URL have to be https to work properly? If anyone knows the reason, let me know. Any help would be greatly appreciated. Thanks,
On Splunk Enterprise 9.0.4, we are using the Proofpoint Isolation TA to download Isolation data into Splunk from the Proofpoint Isolation cloud. However, when we activated SSL decryption on the URLs at our firewall for other necessary reasons, the TA stopped working, giving these errors in the logs:

2024-01-09 19:09:52,554 WARNING pid=9240 tid=MainThread file=connectionpool.py:urlopen:811 | Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)'))': /api/v2/reporting/usage-data?from=2023-11-29T01%3A17%3A33.000&to=2024-01-10T01%3A09%3A52.188&pageSize=10000

2024-01-09 19:09:52,657 ERROR pid=9240 tid=MainThread file=base_modinput.py:log_error:309 | Call to send_http_request failed: HTTPSConnectionPool(host='urlisolation.com', port=443): Max retries exceeded with url: /api/v2/reporting/usage-data?from=2023-11-29T01%3A17%3A33.000&to=2024-01-10T01%3A09%3A52.188&pageSize=10000 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

The error makes sense, since the firewall's certificate is not (yet) a "trusted root" cert for this Splunk instance. How do I properly configure Splunk (or, perhaps, the Python client) to recognize this firewall root certificate as valid, or at the very least to stop validating the certificates presented by the outside server? The latter would be my least-preferred choice, obviously.
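A sketch of one possible direction (not verified against this specific TA): add-ons built with the Splunk Add-on Builder generally make their HTTPS calls through the Python requests library, which validates against the certifi CA bundle shipped inside Splunk's Python, so appending the firewall's root CA (in PEM form) to that bundle may get the intercepted chain trusted:

# path is illustrative and varies by Splunk/Python version; re-check after upgrades,
# since a Splunk upgrade can overwrite this bundle
cat firewall-root-ca.pem >> /opt/splunk/lib/python3.7/site-packages/certifi/cacert.pem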
I have two data sources (searches) that each return a number. They are used to supply data to radial components. I've ticked the box so both are also available as tokens, Numerator and Denominator. I'd like a dashboard component that expresses the ratio of those numbers as a percent. How do I do this? I've tried creating a third search that returns the value, but that does not work:

| eval result=round("$Denominator$" / "$Numerator$" * 100)."%"
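A sketch of one likely fix (assuming the tokens resolve to bare numbers): the quotes around the tokens turn them into string literals, and the search also needs an input row before eval can run, so something like this may work:

| makeresults
``` tokens are substituted textually, so no quotes around them ```
| eval result = round($Denominator$ / $Numerator$ * 100) . "%"
| table result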
Hi, I have created a custom metric to monitor the tablespace usage for Oracle databases; it selects two columns, the tablespace name and the used percent: "select tablespace_name,used_percent from dba_tablespace_usage_metrics". In the metric browser it shows me a list of items, which are the tablespaces. On the health rule I try to specify the relative metric path, but it is not being evaluated. I don't want to use the first option, because new tablespaces are constantly created and I would like this to work in a dynamic way. My intention is to send an alert when the used_percent column is above a certain threshold for any of the tablespaces.
Hello, The description is not very descriptive; hopefully the example and data will be. I have a list of 1500 numbers. I need to calculate the sum in increments of 5 numbers. However, the numbers will overlap (be used more than once). Using this code with only 10 values:

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach nums [| eval nums_set_of_3 = mvindex(nums,cnt,+2) | eval sum_nums_{cnt} = sum(mvindex(nums_set_of_3,cnt,+2)) | eval cnt = cnt + 1]

The first sum (1st value + 2nd value + 3rd value, or 1 + 2 + 3) = 6. The second sum (2nd value + 3rd value + 4th value, or 2 + 3 + 4) = 9. The third sum would be (3rd value + 4th value + 5th value, or 3 + 4 + 5) = 12. And so on. The above code only makes it through one pass, the first sum. Thanks and God bless, Genesius
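A sketch of an alternative that avoids foreach entirely: streamstats keeps a sliding window for you (untested; window=3 here to match the worked example, change window= for other sizes):

| makeresults
| eval nums=split("1,2,3,4,5,6,7,8,9,10", ",")
``` one row per number ```
| mvexpand nums
| eval num=tonumber(nums)
``` sums the current row plus the two before it; the first two rows hold partial window sums ```
| streamstats window=3 sum(num) as sum_3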
Hi everyone. I am generating a cluster map with a count by log_subtype, and in the map itself it shows me the count and the latitude and longitude data. The question here is whether I can replace the latitude and longitude data with the name of the country. I have the query as follows:

| iplocation client_ip | geostats count by log_subtype
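A sketch of one alternative, if a per-country choropleth map is acceptable instead of the cluster map (untested against this data; geo_countries is the lookup that ships with Splunk):

| iplocation client_ip
``` iplocation also returns a Country field; aggregate on it instead of coordinates ```
| stats count by Country
| geom geo_countries featureIdField="Country"

Note that this drops the log_subtype split; to keep it as a table instead, use | stats count by Country, log_subtype.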
Hi, I am trying to forward logs from a heavy forwarder to a GCP bucket using outputs.conf, but it has been unsuccessful (no logs seen in the bucket). I'm not sure if that has to do with my config file or something else. Can anyone help me with an example? This is my outputs.conf and I don't know what is wrong.

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers
forceTimebasedAutoLB = true

[tcpout:bucket_index]
indexAndForward = true
forwardedindex.0.whitelist = my_index

[bucket]
compressed = false
json_escaping = auto
google_storage_key = "12345abcde"
google_storage_bucket = my-gcp-bucket
path = /path/my-gcp-bucket
route = bucket_index
We have both the Microsoft 365 App for Splunk and the Microsoft Teams Add-on for Splunk installed in our Splunk Cloud instance. However, we do not have the Teams Call QoS dashboard option seen in the screenshots here: https://splunkbase.splunk.com/app/4994. Has that feature been removed? Are we missing something?
Does anyone know if version 7.x of Threat Defense Manager (f.k.a. Firepower Management Center) is compatible with the latest version of Cisco's eStreamer add-on? https://splunkbase.splunk.com/app/3662
How do I handle backslashes entered in a dashboard text input so the value can be used in a subsequent search?
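A sketch of one common approach in Simple XML (the token name text_tok is illustrative): reference the token with the |s filter, which wraps the value in quotes so it is passed to the search as a literal string:

<query>index=main file_path=$text_tok|s$</query>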
Hello, I need some help. Manipulating time is something I have struggled with. Below is the code I have:

((index="desktop_os") (sourcetype="itsm_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP IN ("Desktop_Support_1", "Remote_Support")
``` Convert REPORTED_DATE to epoch form ```
| eval REPORTED_DATE2=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S")
``` Keep events reported more than 12 hours ago so are due in < 12 hours ```
| where REPORTED_DATE2 <= relative_time(now(), "-12h")
| eval MTTRSET = round((now()-REPORTED_DATE2)/3600)
| dedup INCIDENT_NUMBER
| stats values(REPORTED_DATE) AS Reported, values(DESCRIPTION) AS Title, values(ASSIGNED_GROUP) AS Group, values(ASSIGNEE) AS Assignee, LAST(STATUS_TXT) as Status, values(MTTRSET) as MTTRHours, values(STATUS_REASON_TXT) as PendStatus by INCIDENT_NUMBER
| search Status IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| sort Assignee
| table Assignee MTTRHours INCIDENT_NUMBER Reported Title Status PendStatus

This code runs and gives us the results we need, but the issue is that the REPORTED_DATE field is off by 5 hours due to a time zone issue. That is a custom field from our ticketing system that is stuck on GMT, and the output looks like 2024-01-08 09:22:49.0. I need to get that field to produce the correct time for EST. I am struggling to make it work. I looked at this thread, but it is not working for us: Solved: How to convert date and time in UTC to EST? - Splunk Community. Any help is appreciated. Thanks
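A sketch of one way to pin the parse to GMT (untested): strip the trailing .0 fraction, append an explicit +0000 offset, and let strptime's %z handle the conversion; strftime then renders the epoch in the timezone configured for your Splunk user (set it to US Eastern):

| eval REPORTED_DATE2 = strptime(replace(REPORTED_DATE, "\.\d+$", "") . " +0000", "%Y-%m-%d %H:%M:%S %z")
``` renders in the user's configured timezone, e.g. EST ```
| eval REPORTED_EST = strftime(REPORTED_DATE2, "%Y-%m-%d %H:%M:%S")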
Hi, I have a log with several transactions, each one containing some events. All events in one transaction share the same ID. The events each contain some information, for example execution time, transact type, URL, login URL, etc. These fields can be in one or several of the events. I want to obtain the total transactions of each type over a spanned time, for example every 5m. I need to group the events of each transaction to extract its info.

index=prueba source="*blablabla*"
| eval Date=strftime(_time,"%Y/%m/%d")
| eval Time=strftime(_time,"%H:%M:%S")
| eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S")
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| stats values(Fecha) as Fecha, values(transactType) as transactType by ID

This is OK. If I want to count transactType, then I do:

index=prueba source="*blablabla*"
| eval Date=strftime(_time,"%Y/%m/%d")
| eval Time=strftime(_time,"%H:%M:%S")
| eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S")
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| stats values(Fecha) as Fecha, values(transactType) as transactType by ID
| stats count by transactType

The problem is when I want to obtain that over a time span. I can't simply do this, because the transactType field appears on only some of the events in a transaction:

index=prueba source="*blablabla*"
| eval Date=strftime(_time,"%Y/%m/%d")
| eval Time=strftime(_time,"%H:%M:%S")
| eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S")
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| timechart span=5m count by transactType

And the following query doesn't give me any result:

index=prueba source="*blablabla*"
| eval Date=strftime(_time,"%Y/%m/%d")
| eval Time=strftime(_time,"%H:%M:%S")
| eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S")
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| stats values(Fecha) as Fecha, values(transactType) as transactType by ID
| timechart span=5m count by transactType

I also tried this (but I don't get results):

index=prueba source="*blablabla*"
| eval Date=strftime(_time,"%Y/%m/%d")
| eval Time=strftime(_time,"%H:%M:%S")
| eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S")
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| bucket Fecha span=5m
| stats values(Fecha) as Fecha, values(transactType) as transactType by ID
| stats count by transactType

Or:

index=prueba source="*blablabla*"
| eval Date=strftime(_time,"%Y/%m/%d")
| eval Time=strftime(_time,"%H:%M:%S")
| eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S")
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| stats values(Fecha) as Fecha, values(transactType) as transactType by ID
| bucket Fecha span=5m
| stats count by transactType

How can I obtain what I want?
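A sketch of a possible fix (untested): after stats by ID there is no _time field any more (Fecha is just a string), which is why timechart and bucket return nothing useful; keeping a real epoch _time per transaction lets timechart bin the rows:

index=prueba source="*blablabla*"
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
``` keep one epoch time per transaction so timechart can bucket it ```
| stats earliest(_time) as _time, values(transactType) as transactType by ID
| timechart span=5m count by transactType

If a transaction can carry more than one transactType value, values() makes the field multivalue; first(transactType) may be the safer choice there.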
I have configured the Splunk Add-on for JMX and added the JMX server, and I was able to get JMX server data. Then I deleted Splunk, reinstalled a fresh Splunk Enterprise, and copied the Splunk Add-on for JMX app from the previous Splunk into the /etc/apps folder. But now I am getting an "internal server cannot reach" error on the configuration page, even though the input configuration is clear. Is there any option to add a JMX server other than through the web interface? And when I copy the app, why is the same JMX server configuration not applied?