All Topics

Hello, I am looking for the Splunk equivalent of this SQL: SELECT transaction_id, vendor FROM orders WHERE transaction_id NOT IN (SELECT transaction_id FROM events). Right now I can construct a list of transaction_ids for orders in one search and a list of transaction_ids for events in another search, but my ultimate goal is to return order logs that do not share transaction_ids with the events log. Any help is greatly appreciated, thanks!
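One way to express SQL's NOT IN in SPL is to exclude events whose transaction_id appears in a subsearch. A minimal sketch, assuming the data lives in indexes named orders and events (adjust the index/sourcetype names to your environment):

```
index=orders
    NOT [ search index=events | dedup transaction_id | fields transaction_id ]
| table transaction_id vendor
```

The subsearch returns the distinct transaction_ids from events, and NOT excludes any orders event carrying one of them. Note that subsearch results are capped (10,000 rows by default), so for very large id sets a stats-based approach over both indexes may be safer.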
This seems a bit strange: we are running Enterprise version 8.1.5 in a search head cluster. A custom app was created for our security team to manage their dashboards, etc. The strange thing is that some of the dashboards cannot be deleted -- there is simply no delete (or move) option. I've checked all the individual nodes in the cluster; the dashboards are all in the app's local/ folder. This is on-prem Splunk Enterprise, so I can manually delete them from all the nodes as an admin, but I would like to understand what I am missing here. I searched for answers and found one post, but that was for Splunk Cloud, so I am not sure whether my issue is the same bug as in that posting. Thanks for any thoughts / discussion. Happy holidays!
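A missing delete/move option is usually a permissions or sharing issue (for example, the dashboard is shared at app or global level and the current role lacks write access, or the owner is "nobody"). As a sketch, ownership and sharing can be inspected over REST from the search bar; the app name here is a placeholder:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:acl.app="your_custom_app"
| table title eai:acl.owner eai:acl.sharing eai:acl.perms.write
```

Dashboards whose eai:acl.sharing is "app" or "global" and whose write permissions exclude your role will not offer delete in the UI even though the files sit in local/.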
I have the following data:

```
{ "remote_addr": "1.2.3.4", "remote_user": "-", "time_local": "24/Nov/2022:09:55:46 +0000", "request": "POST /myService.svc HTTP/1.1", "status": "200", "request_length": "4581", "body_bytes_sent": "4891", "http_referer": "-", "http_user_agent": "-", "http_x_forward_for": "-", "request_time": "0.576" }
```

These are nginx access logs. I have a situation where certain requests are failing and then retrying every hour or so, and I want to identify these as best I can. So:

- Return results where status!=200
- Group where remote_addr, request_length, status, and body_bytes_sent all match (I'm presuming these would be identical requests with the same values for these fields)
- Create a table of these results showing the time_local for each occurrence
- Order time_local within each row (from earliest to latest)
- Rows where the above matches aren't made should just be listed individually

This is beyond my capabilities and I got this (not very) far:

```
index=index source="/var/log/nginx/access.log"
| where status!=200
| stats list(time_local) by request_length
| sort - list(time_local)
```

This is sort of what I want but doesn't do any matching. It does group time_local by request_length, which is how I'd like the output (but including the other fields for visibility). Also, the sort doesn't work: it seems to sort by the first record in each row, and I want it to sort WITHIN the row itself. This is the output:

request_length | list(time_local)
26562 | 24/Nov/2022:16:19:20 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:15:18:02 +0000
41977 | 24/Nov/2022:16:19:20 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:15:18:02 +0000, 24/Nov/2022:13:15:06 +0000

But I want it to look more like this:

request_length | status | body_bytes_sent | remote_addr | time_local
26562 | 500 | 4899 | 1.2.3.4 | 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:15:18:02 +0000, 24/Nov/2022:16:19:20 +0000
41977 | 500 | 5061 | 6.7.8.9 | 24/Nov/2022:11:15:01 +0000, 24/Nov/2022:12:15:04 +0000, 24/Nov/2022:13:15:06 +0000, 24/Nov/2022:14:16:45 +0000, 24/Nov/2022:15:18:02 +0000, 24/Nov/2022:16:19:20 +0000
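Something close to the desired table can be sketched by sorting events ascending by time first, then grouping with stats list() over all four matching fields. Since list() preserves the order in which events arrive, the pre-sort orders the times within each row (field names are assumed from the JSON above):

```
index=index source="/var/log/nginx/access.log" status!=200
| sort 0 _time
| stats list(time_local) as time_local by request_length status body_bytes_sent remote_addr
```

Groups that match nothing else simply come out as rows of one, which covers the "list individually" case with no extra logic.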
Make sure the two scenarios below are right in your file: if you are using fonts locally, make sure the font is uploaded and the path to it is correctly linked. If you are calling a font from a web URL, make sure the path is correct and that the site serves the font when opened in a browser tab. Fonts Bee
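For the local-font case, a minimal sketch of a correctly linked @font-face rule (the font name and path are placeholders; the url must resolve from the page that loads the stylesheet):

```
@font-face {
  font-family: "MyCustomFont";
  /* path is relative to where the stylesheet is served from */
  src: url("/static/fonts/MyCustomFont.woff2") format("woff2");
}

body {
  font-family: "MyCustomFont", sans-serif;
}
```

A quick check for either scenario is to open the font URL directly in a browser tab: a 404 means the path or upload is wrong.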
Hi folks, I have an issue with a HF: I'm seeing spikes reaching 100% when sending data to Splunk Cloud. This happens roughly every 30 seconds. I think this is because of the amount of data we are sending, and it is also causing all data to arrive in Splunk Cloud with a delay; that is, _time and index time differ across all the data because of this. So I have some questions: 1. How can I check whether I'm sending a large amount of data at similar times during the day? Do you have a query or a dashboard I can use? 2. What are your recommendations for distributing the data so it is sent at different times? I really appreciate your help on this. Thanks in advance!
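To see whether data arrives in bursts and how far behind it runs, one common sketch compares _indextime with _time across the day (the index name is a placeholder):

```
index=your_index earliest=-24h
| eval lag_seconds = _indextime - _time
| timechart span=5m avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag count as events
```

Spikes in the events series show when volume bunches up, and the lag series shows whether those bursts coincide with indexing delay.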
Hi, I want a dashboard to monitor my complete Splunk environment. I want to check the _internal index every 5 minutes: if a host is not sending _internal data, its indicator should go red; otherwise it should show as running and green. Can we achieve this?
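A minimal sketch of such a health check: find the last _internal event per host and flag anything silent for more than 5 minutes (the threshold is an assumption to adjust):

```
| tstats latest(_time) as last_seen where index=_internal by host
| eval minutes_silent = round((now() - last_seen) / 60, 1)
| eval status = if(minutes_silent > 5, "red", "green")
| table host last_seen minutes_silent status
```

One caveat: a host that has never sent data will not appear at all, so for full coverage compare the result against a lookup of expected hosts.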
Hi, I have a row on a dashboard with a number of panels containing metrics. However, the panels appear offset and the metrics are not centered. Currently, the row is defined as follows:

```
<row>
  <panel id="tn">
    <title>Total</title>
    <html>
      <style>
        single{
          width: auto;
          font-size=20%;
        }
      </style>
    </html>
```

How can I fix this? Thanks,
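Two likely problems in that snippet: CSS uses a colon, not an equals sign (font-size: 20%), and `single` on its own is not a valid selector for the single-value element. A hedged sketch of a fix — the `.single-result` class is what recent Splunk versions use for single-value text, but this may vary by version, so verify with your browser's inspector:

```
<html depends="$alwaysHideCSSPanel$">
  <style>
    /* target the single-value text inside the panel with id="tn" */
    #tn .single-result {
      font-size: 200% !important;
      text-align: center;
    }
  </style>
</html>
```

The depends token is a common trick to keep the CSS-only html panel itself hidden; the token name is a placeholder that simply needs to stay unset.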
We have API requests, and I want to create statistics by request; but to do this I need to remove variable identifiers and any parameters. For example, with the following request patterns: POST /api-work-order/v1/work-orders/10611946/labours-reporting/2004131 HTTP/1.1 GET /api-work-order/v1/work-orders/10611946/labours-reporting HTTP/1.1 PUT /api-work-order/v1/work-orders/10611946 HTTP/1.1 GET /api-work-order/v1/work-orders HTTP/1.1 I need to replace the identifiers to extract: POST /api-work-order/v1/work-orders/{id}/labours-reporting/{id} GET /api-work-order/v1/work-orders/{id}/labours-reporting PUT /api-work-order/v1/work-orders/{id} GET /api-work-order/v1/work-orders
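One way to normalize these is a pair of sed-style rex replacements: strip the HTTP suffix, then replace purely numeric path segments with {id}. The field name `request` is an assumption:

```
| rex mode=sed field=request "s/ HTTP\/1\.1$//"
| rex mode=sed field=request "s/\/[0-9]+/\/{id}/g"
| stats count by request
```

If some paths legitimately contain numbers that are not identifiers, the second pattern would need tightening (for example, requiring a minimum digit count).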
Hi, let me try to explain my problem. I have a main search with a time range (typically "last 4 hours") selected with the time picker. In addition, I join a subsearch where I want to calculate the average of some values over a bigger time range (typically "last 7 days"). To do that I use the earliest and latest time modifiers in the subsearch. Is it somehow possible to get/access the values of info_min_time and info_max_time (which the addinfo command produces) for the main search from within the subsearch?
I would like to use a graph (e.g. Sankey) to visualize user navigation from page to page in an application. To elaborate on the requirement: I need a particular user's navigation through the app in the same order that they navigated. Which kind of graph can I use, and what would the appropriate query be? To give you an idea of what I have tried so far, I've attached an image.
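Flow-style visualizations such as Sankey generally need source/target pairs. A sketch that turns a per-user page stream into transition counts (the field names `user` and `page` and the index are assumptions):

```
index=app_logs
| sort 0 _time
| streamstats current=f window=1 latest(page) as prev_page by user
| where isnotnull(prev_page)
| stats count by prev_page page
```

The streamstats call attaches each event's previous page for the same user, giving prev_page → page edges. Note that a Sankey aggregates edges, so repeated visits collapse; if the strict per-user ordering itself matters, a simple time-sorted table per user may serve better than a flow diagram.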
Hi Community, I have a Splunk dashboard consisting of panels that depend on one another in a top-down manner. The panels are powered by accelerated data from data models. When I initially set the token, the first panel loads with incomplete data, then takes a few seconds to load the complete list. Until the first panel loads completely, the rest of the panels won't load and show messages like "Invalid earliest time" or "Waiting for data". Is this expected behavior, or is there something that needs to be changed? Regards, Pravin
I am trying to provide developer support for a Splunk SOAR app. How can we do that?
Hi All, I would like to visualize all the values I mentioned in the search command, like this:

```
index=* | search (Apps=value1 OR Apps=value2 OR Apps=value3) | stats count by Apps
```

Apps | count
value1 | 5
value2 | 0
value3 | 0

So, I want to see all the values I mentioned in the search even if they were not found (showing, for example, a 0 count). Is it possible? Thank you in advance. Matteo
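Since stats only counts values that actually occur, one common sketch appends a zero-count row for every expected value and then takes the maximum per value:

```
index=* (Apps=value1 OR Apps=value2 OR Apps=value3)
| stats count by Apps
| append
    [| makeresults
     | eval Apps=split("value1,value2,value3", ",")
     | mvexpand Apps
     | eval count=0
     | fields Apps count]
| stats max(count) as count by Apps
```

Values found in the data keep their real count; values with no events fall back to the appended 0.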
Is there a way to set customised colours for a sunburst visualisation on the basis of a string value?
File monitoring inputs for the Splunk Add-on for Unix and Linux. Query 1: I have installed the above-mentioned add-on to use its file monitoring inputs. When I enable the default file monitoring inputs, I get the source and sourcetype shown in the attached data, but I do not see many interesting fields for that source and sourcetype. Please assist me with the exact source and sourcetype along with the list of interesting fields the add-on will extract via field extraction. Query 2: When I updated inputs.conf with new file monitoring inputs, I am not getting data for the new input. Please let me know why, and how we can get the data from the new input files.
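For the second query: new monitor stanzas only take effect after the forwarder is restarted (or its configuration reloaded), the file path must be readable by the user Splunk runs as, and the target index must exist. A minimal sketch of a custom stanza; the path, index, and sourcetype here are placeholders:

```
[monitor:///var/log/myapp/app.log]
index = os
sourcetype = myapp:log
disabled = 0
```

If data still does not arrive, searching index=_internal on the forwarder for the file path usually shows whether the tailing processor picked the file up or skipped it.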
I am setting up a number of Kubernetes clusters in my organisation. We are using Splunk for monitoring. I have been told that I will need to ask the network team to reserve 2 CIDR ranges for pods and services on each cluster because Splunk requires it. Can anyone clarify whether Splunk actually requires every Kubernetes cluster on the network to have a unique CIDR range for pods and services?
Hello, I am trying to fetch Azure Virtual Machine metrics data using the add-on 'Splunk_TA_microsoft-cloudservices'. I created/added the Azure storage account and the inputs as stated in the add-on's docs, but I don't see any logs indexed in Splunk for them. When I check the internal index, I see an error. What does it mean, and how do I fix it?
Is there a way to monitor the creation of new Splunk users/admins?  I want to be notified if someone creates a new Splunk admin. 
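User management actions are recorded in Splunk's _audit index, so an alert can be built on user-edit events. A sketch you could start from; the exact field layout of audittrail events varies by version, so verify the field names against your own data before alerting:

```
index=_audit sourcetype=audittrail action=edit_user
| table _time user action info _raw
```

To narrow it to new admins, filter further on the raw event for the admin role (for example, `| search _raw="*admin*"`), then save the search as an alert that triggers when results are returned.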
Hello there. I hope you guys are doing great. I am facing a problem while trying to implement the mint.jar (com.splunk.mint:mint:5.0.0). It shows an error like:

```
* What went wrong:
Execution failed for task ':app:compileProdReleaseKotlin'.
> Could not resolve all artifacts for configuration ':app:prodReleaseCompileClasspath'.
   > Could not download mint.jar (com.splunk.mint:mint:5.0.0)
      > Could not get resource 'https://mint.splunk.com/gradle/com/splunk/mint/mint/5.0.0/mint-5.0.0.jar'.
         > Could not GET 'https://mint.splunk.com/gradle/com/splunk/mint/mint/5.0.0/mint-5.0.0.jar'.
            > Remote host terminated the handshake
```

Is there an archive file or any other way to solve this issue, guys? Thank you.
Hi All, I would like to know whether AppDynamics provides monitoring for Oracle Cloud. If it does, could you please point me to the reference documents?