All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a CSV that gets loaded weekly; the event timestamps are set at load time. However, this file has multiple time fields (first discovered, last seen, etc.). I am attempting to find those events (based on the fields) that are more than 30 days old, for example. I had this working fine until I introduced a lookup. I am attempting to show results grouped by owner (stats), but only those events that are 30 days from first discovered until now(). If I add | where Days > 30, results show every event from the file. But I know they are there... anonymized query below. What am I doing wrong?

Sample fields being eval'ed:

First Discovered: Jul 26, 2023 16:50:26 UTC
Last Observed: Jul 19, 2024 09:06:32 UTC

index=stuff source=file Severity="Critical"
| lookup detail.csv "IP Address" OUTPUTNEW Manager
| eval First_DiscoveredTS = strptime("First Discovered", "%b %d, %Y %H:%M:%S %Z"),
    Last_ObservedTS = strptime("Last Observed", "%b %d, %Y %H:%M:%S %Z"),
    firstNowDiff = (now() - First_DiscoveredTS)/86400,
    Days = floor(firstNowDiff)
| stats by Manager
| where Days > 30
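A minimal sketch of one possible fix, assuming the goal is a per-Manager count. Two things stand out in the query above: in eval, strptime("First Discovered", ...) parses the literal string rather than the field (field names containing spaces need single quotes in eval), and stats discards every field not listed in it, so Days no longer exists by the time the where clause runs. Filtering before stats avoids both problems:

index=stuff source=file Severity="Critical"
| lookup detail.csv "IP Address" OUTPUTNEW Manager
| eval First_DiscoveredTS = strptime('First Discovered', "%b %d, %Y %H:%M:%S %Z")
| eval Days = floor((now() - First_DiscoveredTS) / 86400)
| where Days > 30
| stats count by Manager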
I have installed Splunk Enterprise on an RHEL9 VM in AWS. I have tried installing via TAR and RPM. I also tried starting it as the "root" and "splunk" users, but it just won't start. It always hangs at the same point, and when that happens I can't even SSH to my VM; I have to reboot the VM to get access to it again. It stays there for about 30 minutes (maybe longer), and then I see the following. Any idea what might be going on?
Hello All, Can y'all give me advice on why my query is taking so long? In a dashboard it just times out, and in regular verbose mode it takes quite a bit of time. The purpose of the query is simply to search my index and output the results that match the urls in the lookup.

index=myindex sourcetype=mysource
| stats count by url
| fields - count
| search [| inputlookup LCL_url.csv | fields url]
| sort url

Thank you
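A hedged sketch of one alternative, assuming url carries the same name in both the events and the lookup: moving the subsearch into the base search filters events before the stats aggregation runs, rather than aggregating every url in the index first and discarding most of them afterwards:

index=myindex sourcetype=mysource [| inputlookup LCL_url.csv | fields url]
| stats count by url
| fields url
| sort url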
I'm probably very slow on the uptake here, but supposedly I was to get a link for Splunk Cloud by mail for my Google security certificate. I did not get a direct link, and neither do I seem to find any access after logging in on splunk.com.
Here is my query for checking BGP routing that goes UP and DOWN. (I only want to see the cases where the counts of UP and DOWN are not equal for the same neighbor on a router.) In this case I want to show only lines #5 and #6. How do I do that?

My query:

......
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS

#   HOST       BGP_NEIGHBOR        BGP_STATUS   count
1   Router A   neighbor 10.1.1.1   Down         1
2   Router A   neighbor 10.1.1.1   Up           1
3   Router B   neighbor 10.2.2.2   Down         1
4   Router B   neighbor 10.2.2.2   Up           1
5   Router C   neighbor 10.3.3.3   Down         2
6   Router C   neighbor 10.3.3.3   Up           1
7   Router D   neighbor 10.4.4.4   Down         2
8   Router D   neighbor 10.4.4.4   Up           2
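A minimal sketch of one approach, keeping the two rex extractions as they are: counting Up and Down separately per host and neighbor turns the mismatch into a simple where comparison, which would keep the Router C rows and drop the rest:

......
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count(eval(BGP_STATUS="Up")) as up_count, count(eval(BGP_STATUS="Down")) as down_count by HOST, BGP_NEIGHBOR
| where up_count != down_count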
Query1:

| tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotResTime=TotalResponseTime/Requests
| fields TotResTime

Query2:

| tstats count as Requests sum(attributes.latencyTime) as TotalatcyTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotlatencyTime=TotalatcyTime/Requests
| fields TotlatencyTime

We want to combine these two queries into one area chart panel. How can we do this?
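A hedged sketch, with the uriPath filter reproduced from the question exactly as written: since both queries scan the same index with the same filter, a single tstats can compute both sums at once, and splitting by _time produces the time series an area chart needs (the 5m span is an arbitrary choice):

| tstats count as Requests
    sum(attributes.ResponseTime) as TotalResponseTime
    sum(attributes.latencyTime) as TotalLatencyTime
    where index=app-index NOT attributes.uriPath("/", null, "/provider")
    by _time span=5m
| eval TotResTime=TotalResponseTime/Requests, TotLatencyTime=TotalLatencyTime/Requests
| fields _time TotResTime TotLatencyTime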
Hi All, I'm trying to build a dashboard that takes input from a dropdown field and performs a search based on the item selected. I have two inputs, one dropdown and one multiselect, and I am passing two tokens: $competency$ for the dropdown and $sub_competency$ for the multiselect.

My token sub_competency is not syncing with the dashboard. I am adding it like this: | search Sub_Competency="$sub_competency$"

| inputlookup cyber_q1_available_hours.csv
| rename "Sub- Competency" as Sub_Competency
| search Sub_Competency="$sub_competency$"
| eval split_name=split('Resource Name', ",")
| eval first_name=mvindex(split_name,1)
| eval last_name=mvindex(split_name,0)
| eval Resource_Name=trim(first_name) . " " . trim(last_name)
| stats count, values(Sub_Competency) as Sub_Competency values(Competency) as Competency values("FWD Looking Util") as FWD_Util values("YTD Util") as YTD_Util by Resource_Name
| search Competency="$selected_competency$"
| table Resource_Name, Competency, Sub_Competency, FWD_Util, YTD_Util
| sort FWD_Util

Need some urgent help on this. Thanks in advance.
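A hedged sketch, assuming a Simple XML multiselect: with several values selected, a bare Sub_Competency="$sub_competency$" expands into invalid SPL, so the usual pattern is to let the input build the whole clause via valuePrefix, valueSuffix, and delimiter, and then reference the token alone. (Note also that the query filters on $selected_competency$ while the dropdown token is described as $competency$; those names need to match.)

<input type="multiselect" token="sub_competency">
  <valuePrefix>Sub_Competency="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>

and in the search:

| search $sub_competency$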
Hello, I installed the forwarder on a Windows machine, and during the installation I selected the Windows performance monitor to collect performance data. However, I am not sure where to find this data in Splunk or which default index it is stored in.
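A hedged pointer, assuming the defaults of the Windows forwarder's bundled perfmon inputs: the events are usually given Perfmon sourcetypes and land in the default index unless inputs.conf specifies otherwise, so a broad search like this should reveal where they ended up:

index=* sourcetype=Perfmon* earliest=-1h
| stats count by index, sourcetype, source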
Hi Team, Is there an easy way to convert a Dashboard Studio dashboard to a Classic dashboard and enable the export option?
I am analysing Incident-to-Problem linkage by searching the Incident table and then using a join to the Problem table to get supporting data for linked problems. The problem I have is that with join I am close to the threshold, so for longer time periods the search fails. I have tried to use multisearch and an OR search, but I need to retain Incident results where there is no problem linked. Hope this makes sense; here is the code I have written:

| multisearch
    [search index=servicenow sourcetype="incident"]
    [search index=servicenow sourcetype="problem"]
| eval incident=if(sourcetype="incident",number,null), problem=if(sourcetype="incident",dv_problem_id,dv_number)
| stats latest(eval(if(sourcetype="incident",dv_opened_at,null()))) as inc_opened, latest(problem) as problem, latest(eval(if(sourcetype="problem",dv_state,null()))) as prb_state by incident
I am using the Splunk OTEL Collector Helm chart to send logs from my GKE pods to the Splunk Cloud Platform. I have set `UsesplunkIncludeAnnotation` to `true` to filter logs from specific pods. This setup was working fine until I tried to filter the logs being sent. I added the following configuration to my `splunk` values.yaml:

config:
  processors:
    filter/ottl:
      error_mode: ignore
      logs:
        log_record:
          - 'IsMatch(body, "GET /status")'
          - 'IsMatch(body, "GET /healthcheck")'

When I applied this configuration, the specified logs were excluded as expected, but the pod filtering stopped working: I am still receiving logs from all my pods, and the annotation is not taking effect. Additionally, the host is not displaying correctly and shows as "unknown". (I will attach a screenshot for reference.)

My questions are:
1. How can I exclude these specific logs more effectively?
2. Is there a more efficient way to achieve this filtering?
Hello, I used the Splunk REST API with the search endpoint to retrieve the latest fired alerts based on a title search. I get the fired alerts in alphabetical order but not in chronological order, since all the alerts obtained have the default field <updated>1970-01-01T01:00:00+01:00</updated>. Here's the URL and query I used:

https://<host>:<mPort>/services/alerts/fired_alerts?search=name%3DSOC%20-*&&sort_dir=desc&sort_key=updated

| rest /services/alerts/fired_alerts/
| search title="SOC - *"
| sort -updated
| table title, updated, triggered_alert_count, author

Here are the references I used:
Search endpoint descriptions - Splunk Documentation
Using the REST API reference - Splunk Documentation

So, how can I retrieve fired alerts in chronological order with a title search? Or how can I obtain a field indicating the date the alert was triggered? Thanks in advance.
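A hedged sketch, from memory of the fired-alerts endpoints: the top-level /services/alerts/fired_alerts lists one summary entry per alert (hence the epoch-zero updated field), while the per-alert instance listings expose a trigger_time; using - as a wildcard alert name is a commonly cited way to pull all instances at once. Field names may differ by version, so check the raw endpoint output before relying on them:

| rest /services/alerts/fired_alerts/-
| search title="SOC - *"
| sort -trigger_time
| table title, trigger_time, severity, sid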
This is a line of code that takes fields from the CSV file:

| lookup xxx.csv id OUTPUTNEW system time_range

I want to add one field:

| lookup xxx.csv id OUTPUTNEW system time_range count_err

When I do this, nothing is added. Why? I would appreciate your help, thanks.
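A hedged guess at the usual cause: OUTPUTNEW only writes a lookup field into events where that field does not already exist, so if count_err is already present (for example from an automatic lookup or an earlier eval), OUTPUTNEW silently skips it. Assuming count_err really is a column in xxx.csv, switching to OUTPUT overwrites any existing value:

| lookup xxx.csv id OUTPUT system time_range count_err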
Dear Experts, We are on the latest version of the ABAP agent (24.5). In the S4HANA system, we noticed a runtime error getting triggered every hour. We identified the related KPI and disabled it, but the customer needs a permanent solution, because it is related to SOST (mail monitoring):

TSV_TNEW_PAGE_ALLOC_FAILED | No more memory available to add rows to an internal table. | SAPLSX11 | LSX11F02

Any idea on a permanent solution? Thanks, Jananie
We ingested data from a device that is not added to the Network Traffic datamodel by default. This device sends data in JSON format. The data is added to the datamodel, but when I use the auto-extracted field and rename it to the already-existing field, it still shows the original name in the interesting fields.

source field = data.clientaddr
dest field = src_ip

I need this to be changed at the source level because I want one search to work for all devices; I am using the tstats command in the search. In the interesting fields it still shows data.clientaddr instead of src_ip.
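A minimal sketch, assuming a search-time field alias on the device's sourcetype is acceptable (the stanza name below is a placeholder): unlike a rename inside one search, a FIELDALIAS in props.conf applies to every search, so src_ip appears alongside data.clientaddr and is picked up by CIM datamodel mappings and tstats:

# props.conf on the search head (hypothetical sourcetype name)
[your_device_sourcetype]
FIELDALIAS-clientaddr_as_src_ip = data.clientaddr AS src_ip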
Hi All, Is there a way in Splunk Dashboard Studio to make just one column clickable in a displayed table? I have a table visualisation in Dashboard Studio and I want just one column's values to be clickable, so that clicking one displays another table (show/hide). Please let me know how we can make just the value in one column clickable. Can we? Regards, PNV
Dear Splunkers, I'm experiencing network connection issues in the Splunk AR application when trying to add a new device; please see the attached screenshot. The error description is: No internet connection - MOB-SSG-6102, and it won't generate a verification code to register the new device. I've already tried re-installing the app, but it does not help. Can you suggest anything? Thank you, BR
How can I create alerts based on this app's data received using the API? How does this app https://splunkbase.splunk.com/app/6960 alert if my data matches the intel feeds? Cyble Threat Intel
I have a problem with the data itself. I have 2 RF / 2 SF and they are working fine. I tried to roll buckets multiple times; it works for a short time and then the problem comes back again. Anyone have an idea how I can solve this issue? Thanks
Hi Team, I'm seeing the following: 22.77 as the average latency for the last 24 hours for one of the sourcetypes. What is a normal, acceptable average latency, given that the logs come through syslog -> Heavy Forwarder -> Indexers before being ingested into Splunk? Please let us know if there is any other alternative approach we can use to calculate the latency, if the one below is incorrect. Any help would be highly appreciated. Regards, VK
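A hedged sketch of the standard way to measure ingestion latency, with placeholder index and sourcetype names: _indextime records when the indexer wrote the event and _time is the event's own timestamp, so their difference is the end-to-end lag in seconds across the syslog -> Heavy Forwarder -> Indexer path. Note that if _time is parsed with a wrong timezone, the lag will look inflated even when delivery is fast:

index=your_index sourcetype=your_sourcetype earliest=-24h
| eval lag_sec = _indextime - _time
| stats avg(lag_sec) as avg_lag_sec perc95(lag_sec) as p95_lag_sec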