All Topics


Hey, I would love some help. I want to build a query, to be used as a rule, that monitors DNS requests. I work with two indexes. From the first one (INDEX1) I need the fields src, query, and direction. Based on the results I get from INDEX1, I want the second index (INDEX2) to take the query field and check which category it falls under. In INDEX2 the query field is equivalent to the DOMAIN field, and the category field does not exist in INDEX1.
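A rough SPL sketch of the cross-index enrichment being described, assuming index names index1/index2 and the field names from the post (all assumptions):

```spl
index=index1 sourcetype=dns
| fields src query direction
| join type=left query
    [ search index=index2
      | rename DOMAIN AS query
      | fields query category ]
| table src query direction category
```

Note that join is subject to subsearch limits; building a lookup table from index2 (e.g. with outputlookup on a schedule) scales better for large category lists.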
If I have a histogram metric, for example request_duration_seconds_bucket, request_duration_seconds_count and request_duration_sum -- how do I plot these onto a heatmap in the Splunk Analytics workspace?
Hello everyone, I am looking for a way to assign values to variables in order to avoid repetition in my query. I want to search different resources using the same variables in the same query. I have tried the following, but it does not seem to work:

| makeresults
| eval var_1="var_1_content"
| eval var_2="var_2_content"
| search (sourcetype=var_1 OR sourcetype=var_2)

Could you please help me correct this or provide an alternative approach? Thank you for your assistance!
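For context on why the attempt above matches nothing: in SPL, `search sourcetype=var_1` compares against the literal string "var_1", not the value of an eval field. One common workaround (a sketch, not the only option; the macro name is made up) is a search macro holding the repeated values:

```conf
# macros.conf (hypothetical macro name)
[my_sourcetypes]
definition = (sourcetype="var_1_content" OR sourcetype="var_2_content")
```

It can then be reused across searches as `` `my_sourcetypes` | stats count ``.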
Bit of a reverse error here: Splunk is working when it shouldn't. I followed these instructions to run Splunk as non-root - https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Installleastprivileged

systemctl stop splunk
/opt/splunkforwarder/bin/splunk disable boot-start
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user blueq -group blueq
systemctl start splunk

Splunk is running as this user, and the user cannot view /var/log/messages:

[root@host1 ~]# ps -ef|grep splunk
blueq 137095 1 24 14:22 ? 00:00:00 splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd
blueq 137134 137095 0 14:22 ? 00:00:00 [splunkd pid=137095] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]
root 137154 6813 0 14:22 pts/0 00:00:00 grep --color=auto splunk
[root@host1 ~]# ls -l /opt/splunkforwarder/
total 172
drwxr-xr-x. 3 blueq blueq 4096 Jun 25 22:11 bin
drwxr-xr-x. 2 blueq blueq 66 Jun 25 22:11 cmake
-r--r--r--. 1 blueq blueq 57 Mar 21 09:38 copyright.txt
...
[root@host1 ~]# su - blueq
Last login: Wed Jul 10 14:24:24 AEST 2024 on pts/0
[blueq@host1 ~]$ ls -l /var/log/messages
-rw-------. 1 root root 4898581 Jul 10 14:24 /var/log/messages
[blueq@host1 ~]$ cat /var/log/messages
cat: /var/log/messages: Permission denied

Yet I see no errors in /opt/splunkforwarder/var/log/splunk/splunkd.log, and the logs are still uploaded to Splunk Cloud. Why?
Hello - I wanted to ask if anyone knows the best approach (recommended by Cisco) for monitoring an AWS RDS SQL Server instance when the AppDynamics controller is SaaS/cloud hosted. The AppDynamics documentation isn't quite clear. Is it correct to assume that the best approach is to provision an EC2 instance (or AWS WorkSpace) in my AWS environment with the appropriate VPC / RDS security group settings and install an agent there? The EC2 instance or AWS WorkSpace would connect to the RDS instance. If anyone has a step-by-step guide they can share, that would be greatly appreciated. Thanks!
Hi, I have a search result with the field message.log, and the field contains this example pattern:

/opt/out/instance/log/audit.log 2023-06-04 21:32:59,422| tid:c-NMqD-hKsPm_AEzEJQyGx4O1kY| SSO| 8e4567c0-9f3a-25a1-a22d-e6b3744559a52| 123.45.678.123 | | this-value-here| SAML20| node1-1.nodeynode.things.svc.cluster.local| IdP| success| yadayadaAdapter| | 285

I'd like to rex "this-value-here", which is always preceded by the pattern pipe-space-pipe-space and always followed by pipe-space-SAML20. I'm having trouble with the rex expression and would appreciate the assistance.
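A sketch of a rex that anchors on the two delimiters described (field name taken from the post; the capture-group name is made up, and this is only checked against the one sample line above):

```spl
| rex field=message.log "\|\s\|\s(?<extracted_value>[^|]+)\|\sSAML20"
```

Against the sample, `\|\s\|\s` matches the "| | " before the value, `[^|]+` captures "this-value-here", and `\|\sSAML20` pins the match so the later "| | " near the end of the line is not picked up.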
In the Splunk Add-on for ServiceNow, how do you update the description field in ServiceNow? Or is there a way to add a description field? Is it something I need to add under Custom fields to create it?
We have been using Splunk on a Windows server without issue. It ingested logs from VMware hosts, networking hardware, firewalls, Windows events, etc.

We created a new Splunk instance on CentOS Stream 9. It runs as the splunk user, so it couldn't bind a UDP data input to the privileged port 514. We set the input to 10514 and used port forwarding to get around that. That works for everything except our VMware hosts: their logging will not show up in the new Splunk server, while all the other devices/logs that send on UDP 514 do show up in Splunk.

The value on the VMware hosts that always worked before was udp://xxx.xxx.xxx.xxx:514. We tried the same with 10514 to no avail. Is there an issue with receiving logs from VMware hosts when port forwarding sends the data to a different port?
Hello! I'm trying to separate the latency results with eval, dividing them into 3 categories and then showing the percentage using the top command. This worked at the beginning of the project, but now I need to separate the results by hour instead of the whole day, and combining the table command with the fields from eval is not working.

Here's my search:

| eval tempo= case(
    'netPerf.netOriginLatency'<2000, "Under 2s",
    'netPerf.netOriginLatency'>2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s",
    'netPerf.netOriginLatency'>3000, "Above 3s" )
| top 0 tempo

Current output:

tempo               count   percent
Under 2s            74209   86.5 %
Between 2s and 3s   10736   12.5 %
Above 3s              803    0.9 %

The ideal scenario would be something like this:

_time              Under 2s   Between 2s and 3s   Above 3s
06/07/2024 00:00   97.3 %     2.3 %               2.3 %
06/07/2024 01:00   96.3 %     2.7 %               1.0 %

Appreciate the time and help!
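A sketch of an hourly breakdown built from the same eval (field names taken from the post; note the boundaries were tightened to >= so values of exactly 2000 or 3000 ms are not dropped by the case()):

```spl
| eval tempo=case(
    'netPerf.netOriginLatency'<2000, "Under 2s",
    'netPerf.netOriginLatency'>=2000 AND 'netPerf.netOriginLatency'<3000, "Between 2s and 3s",
    'netPerf.netOriginLatency'>=3000, "Above 3s")
| bin _time span=1h
| stats count by _time tempo
| eventstats sum(count) as total by _time
| eval percent=round(count/total*100, 1)."%"
| xyseries _time tempo percent
```

eventstats computes the per-hour total so each category's count can be turned into a percentage, and xyseries pivots the categories into columns.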
SQL monitoring - I'd like to know how to write a Splunk SPL query that alerts on the top users running long-running SQL queries on my databases. I'm currently using the MS SQL add-on for Splunk, with the included Perfmon:sqlserver:* monitors and the "mssql:agentlog" and "mssql:errorlog" sourcetypes. Thank you in advance!
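Neither the agent log nor the error log normally carries per-query durations, so any sketch here has to assume an additional input that polls something like sys.dm_exec_requests. The sourcetype, field names, and threshold below are all placeholders:

```spl
sourcetype="mssql:execrequests"
| where total_elapsed_time > 60000
| stats count AS long_queries max(total_elapsed_time) AS max_elapsed_ms BY login_name
| sort - max_elapsed_ms
| head 10
```

Saved as an alert, this would fire on the users with the slowest outstanding requests, assuming the polled data exposes those fields.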
If I increase the RF on a SmartStore enabled indexer cluster what happens on SmartStore? Is a 2nd copy actually created on SmartStore, or is the same copy referenced by more than 1 indexer in the cluster? 
I am taking the Pluralsight tutorial and have followed all the steps very carefully in the "Demo: Getting Data into Splunk" video. I first run into trouble about two minutes in. I uploaded the log file successfully and successfully set the source type as access_combined_wcookie. My Event Break and Timestamp settings are the same as what is shown in the video, but in the large viewing window to the right, mine says "No results found." In the video there are Times and Events in this pane.

I thought that perhaps I just needed to follow all the steps through to see Times and Events, so I created the new index, as per the tutorial, and submitted successfully. But then I got the same "No results found" message on the New Search screen.

I should note that the only difference between me and the tutorial video is that in the bar underneath the words "New Search," the host = my computer's name instead of "thenson-desktop." What do I need to do to see results?
Hi,

Question 1: Is it possible to use expand and collapse for table column fields in Splunk Classic dashboards?

Question 2: Is it possible to add the Excel-like feature of overriding the next column's value until we expand it (as in the image above, which did not come through) in a Splunk Classic dashboard?
I have a Cisco ESS-3300 switch with 20 1G copper ports and 4 1G fiber ports. My issue is that, out of the 24 1G ports, one of my fiber interfaces is showing err-disabled status, and one 1G copper port is also not showing connected status. How do I resolve this? Please reply as soon as possible.
My environment contains two EC2s: one primary and one warm standby. Due to a series of unfortunate events, our database on the warm standby got corrupted and Phantom would not start on it. Luckily, we had a volume backup in AWS of the SOAR directory, so it was saved.

However, after some research afterwards, we found a different method of backing up: https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Admin/BackupOrRestoreAndWarmStandby

I think I'm being dense and overthinking it, but the article mentions a "primary warm standby", and later on a "primary" + a "secondary" + a "warm standby". How many servers are in this configuration? I am not understanding how it is set up and what the secondary is referencing. Also, what is a "primary warm standby"? Would this article be helpful in the situation I described above with my failed warm standby?
I'm trying to get the percentage of a field, based on a condition (filtered by search), by another field - e.g. the percentage of 404 errors by application. So I need to get the total number of requests for each application, filter to keep only 404 errors, then count by application. At least that's the logic I used:

<unrelated part to collect proper events>
| eventstats count as total by applicationId
| search error=404
| stats count as error_404 by applicationId
| eval errorRate=((error_404/total)*100)."%"
| table applicationId, errorRate

This returns a list of applications, but no values for errorRate. Individually, I'm able to get a result for this:

| stats count as total by applicationId

And also for this:

| search error=404
| stats count as error_404 by applicationId

But something about having them together in the flow doesn't work. I also tried the following, which didn't work either. In this instance I get values for applicationId and total, so I guess there's something wrong with how I'm getting the error_404 values.

| stats count as total by applicationId
| appendcols [search error=404 | stats count as error_404 by applicationId]
| eval errorRate=((error_404/total)*100)."%"
| table applicationId, error_404, total, errorRate
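For what it's worth, the first pipeline above loses total because the final stats keeps only its own aggregates and the by-field. A common pattern (a sketch using the field names from the post) computes both counts in a single stats with count(eval(...)):

```spl
<unrelated part to collect proper events>
| stats count AS total count(eval(error=404)) AS error_404 BY applicationId
| eval errorRate=round((error_404/total)*100, 1)."%"
| table applicationId error_404 total errorRate
```

count(eval(error=404)) counts only the events matching the condition, so no second search or appendcols is needed and both counts survive into the eval.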
Hi, if you make a curl request to Splunk, web_access.log records the client as 127.0.0.1 and the user as '-'. Can we somehow correct the client field so we know who actually made the request?
Hi, I am unable to find the upload asset option inside Edit properties in Manage Apps. Even though I have the admin role, I am unable to upload an asset. Does it require any capabilities to upload an asset to Splunk Cloud?
Hi, I am uploading a .tgz file with a JS script, a PNG, and CSS inside the /appserver/static folder of my app. After uploading and installing the app in Splunk Cloud, I am unable to use the script. Any ideas on this?
Hi folks, I have a use case where I have different types of events in a single sourcetype, and I want to apply different timestamp extractions for the two event types. I am using TIME_PREFIX and MAX_TIMESTAMP_LOOKAHEAD to extract the timestamp from event #1, but the same rules won't work for event #2. Is there a way to extract the timestamp values from both events in a single sourcetype?

Event #1 - timestamp should be extracted as (Oct  9 23:57:37.887):

Oct 10 05:27:48 192.168.100.1 593155: *Oct  9 23:57:37.887: blah blah blah

Event #2 - timestamp should be extracted as (Feb 13 11:27:46):

Feb 13 11:27:46 100.80.8.22 %abc-INFO-000: blah blah blah

Current settings:

TIME_PREFIX = \s[^\s]+\s\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}\s[^\s]+:\s|\s[^\s]+\s\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}\s
MAX_TIMESTAMP_LOOKAHEAD = 30
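One possible approach, assuming event #1's desired timestamp is always introduced by an asterisk and event #2's sits at the start of the line (a sketch, not verified against the full data; the stanza name is a placeholder):

```conf
[your_sourcetype]
# Greedily skip past the last "*" when one exists (event #1);
# otherwise the optional group matches empty at line start (event #2).
TIME_PREFIX = ^(?:.*\*)?
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Because TIME_PREFIX is a regex and the group is optional, a single stanza can position the timestamp parser correctly for both event shapes without needing two sourcetypes.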