All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

After I successfully installed Splunk Enterprise on my Oracle VirtualBox VM, I got a message that says: "The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch". I then ran the (sudo du -sh /opt) command and found that the /opt directory only occupies 3.8G (less than 5.0G), even though I allocated 40G to the machine in total! My question is: how do I increase the disk space available to the /opt directory? PS: The health status of IOWait under resource usage is RED!
Hi, I have different sourcetypes (A, B, C, D). Each sourcetype has a field "Status" with the values (True, False, Error, Not available). I need a table structured like the one below:

Sourcetype | True | False | False % | False % (sparkline)
A | Count (current day) | Count (current day) | False % (current day) | Sparkline for a week time span
B | Count (current day) | Count (current day) | False % (current day) | Sparkline for a week time span
C | Count (current day) | Count (current day) | False % (current day) | Sparkline for a week time span

Please help me with the search for this.
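A hedged sketch of one way to build this, assuming the events live in an index named your_index (a placeholder) and that the week-long sparkline should track the False count, which the post doesn't fully specify:

index=your_index sourcetype IN (A, B, C, D) earliest=-7d
| stats count(eval(Status="True" AND _time>=relative_time(now(),"@d"))) as True count(eval(Status="False" AND _time>=relative_time(now(),"@d"))) as False sparkline(count(eval(Status="False")), 1d) as "False % (sparkline)" by sourcetype
| eval "False %"=round('False'/('True'+'False')*100, 2)
| table sourcetype True False "False %" "False % (sparkline)"

The count(eval(...)) clauses count only today's events, while the sparkline still spans the full 7-day window of the search.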
Hi All, we have lots of dashboards; a few of them are visited by users and some are not. We want to delete the dashboards that no user has viewed in a long time. How do I find the last timestamp at which each dashboard was viewed? I have tried the search below, but it doesn't give the last-visited timestamp:

index="_internal" user!="-" sourcetype=splunkd_ui_access
| rex field=uri "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)"
| search dashboard!="search" dashboard!="home" dashboard!="alert" dashboard!="lookup_edit" dashboard!="@go" dashboard!="data_lab" dashboard!="dataset" dashboard!="datasets" dashboard!="alerts" dashboard!="dashboards" dashboard!="reports" dashboard!="report"
| stats count by app dashboard user
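One hedged adjustment, keeping the rex and dashboard filters exactly as above and only swapping the final stats clause, since latest(_time) is what captures the most recent visit:

| stats latest(_time) as last_visited by app dashboard
| eval last_visited=strftime(last_visited, "%Y-%m-%d %H:%M:%S")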
Hi, I am trying to send data to HEC in Splunk, where our Splunk instance is mapped to a DNS name, but I am facing an issue when trying it with a curl command:

curl -k https://<dns>:8088/services/collector/event -H "Authorization: Splunk <token>" -d "{\"sourcetype\": \"_json\", \"event\":\"helloworld\"}"

curl: (7) Failed to connect to splunk-dev.accuknox.com port 8088: Timed out

Can anyone help me?
Splunk UBA search head is down. Even after restarting the UI service, the status shows as active in the CLI, but the GUI is not available. Commands used to stop/start the UI service:

sudo service caspida-ui stop
sudo service caspida-ui start

Status when checked in the CLI:

● caspida-ui.service
Loaded: loaded (/etc/init.d/caspida-ui; bad; vendor preset: enabled)
Active: active (exited) since Fri 2021-09-03 05:53:12 UTC; 6min ago

I also tried rebooting the VM, but it doesn't help. Can I please get a suggestion on how to fix this?
We are using DB Connect with the jTDS driver. When we enable the connection in DB Connect, we see the script below in SQL Diagnostic Manager every 30 minutes:

SELECT @@MAX_PRECISION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET IMPLICIT_TRANSACTIONS OFF
SET QUOTED_IDENTIFIER ON
SET TEXTSIZE 214######

May we know what this is for? Can we get rid of it, or at least change the frequency to every hour instead? In addition, we are seeing a sleeping session in SQL Diagnostic Manager. Is this usual, or is there a way to close the session after the SQL script from DB Connect runs? We are just getting data from the database on demand, but after triggering dbxquery the sleeping session occurs. Please advise.
I'm unable to use the Validate & Package function of Add-on Builder. When I run it, it says 'preparing validation', then nothing: just empty white results. All I can do is press the validate button again, with the same result. I've tried with other apps in Add-on Builder, with the same results. I have a local instance of Splunk Enterprise running, a fresh install, so v8.2.2, and Add-on Builder is on 4.0.0. I did catch a notification in Messages: Unable to initialize modular input "validation_mi" defined in the app "splunk_app_addon-builder": Introspecting scheme=validation_mi: script running failed (exited with code 1). I don't know what this means or how to fix it. Any ideas?
Hi, I need to calculate the average response time in seconds for my application. The query I am using:

index="prod*_ping*" source="*splunk-audit.log" "event=SSO" connectionid=*
| stats avg(responsetime) as AvgRespTimeInSec by connectionid

connectionid gives me the application details. Please let me know whether my query is correct for calculating the average response time in seconds.

Regards,
Madhusri R
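A hedged aside: the stats call itself computes a plain average, but whether the result is in seconds depends on the unit in which responsetime is logged, which the post doesn't state. If it is logged in milliseconds, a sketch with a conversion would look like:

index="prod*_ping*" source="*splunk-audit.log" "event=SSO" connectionid=*
| stats avg(responsetime) as avg_ms by connectionid
| eval AvgRespTimeInSec=round(avg_ms/1000, 3)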
Hi, I am trying to list the different ways to collect Active Directory data in Splunk. Unless I am mistaken, there are 2 main ways to do it:

1. Using the Splunk Supporting Add-on for Active Directory: https://splunkbase.splunk.com/app/1151/
2. Using the splunk-admon.exe process

Is that right? What are the advantages and disadvantages of these solutions, please? Is it also possible to install a connector between Splunk and AD in order to store the AD events in a KV store? Thanks in advance
After building a project/add-on based on Splunk's standard naming convention, I am facing an issue where I have to remove the prefix set by the app. Renaming it via Add-on Builder fails, and renaming it outside of the app breaks the whole app, since it runs on complex scripts internally. Any guidance would be helpful.
There are multiple sourcetypes in index="main". I'm trying to run stats on sourcetype number one, and I need a field from sourcetype number two. Is there any way?
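A hedged sketch of the usual pattern, with sourcetype and field names as placeholders since the post doesn't give them: search both sourcetypes at once and let stats pull the fields together over a shared key, rather than joining two searches.

index="main" sourcetype="typeA" OR sourcetype="typeB"
| stats count(eval(sourcetype="typeA")) as typeA_count values(field_from_typeB) as field_from_typeB by common_key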
I've got some logs I need to join and put on the same row. I've tried a few different ways and searched the community, but I can't seem to get exactly what I need. There's a log every 10 minutes for each host and each drive on said hosts (there are a lot of hosts and drives). Each log has 2 events for the same time and drive letter: one for free MB and one for percent. Basically, I need to join each pair of these two separate events based on the time, host, and drive letter of the log. Is this possible?

Base query:

index=perfmon host=host1 Category="PERFORMANCE" collection="WIN_PERF" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes"

The drive letter is extracted as "instance"; percent and MB are both extracted as "Value".

It returns these logs:

"09/02/2021 21:48:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","Free Megabytes","d:","36092.00"
"09/02/2021 21:48:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","% Free Space","d:","41.47"
"09/02/2021 22:08:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","% Free Space","C:","19.30"
"09/02/2021 22:08:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","Free Megabytes","C:","19767.00"

Desired output:

Time                 Host   Drive  FreePercent  FreeMB
09/02/2021 21:48:49  host1  d:     41.47        36092.00
09/02/2021 22:08:49  host1  C:     19.30        19767.00

Any help would be appreciated.
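A hedged sketch built on the field names given above (instance and Value), using stats rather than join to pivot the two counters onto one row per time/host/drive; the OR is parenthesized here on the assumption that both counters should pass the same preceding filters:

index=perfmon host=host1 Category="PERFORMANCE" collection="WIN_PERF" object="LogicalDisk" (counter="% Free Space" OR counter="Free Megabytes")
| eval FreePercent=if(counter="% Free Space", Value, null()), FreeMB=if(counter="Free Megabytes", Value, null())
| stats first(FreePercent) as FreePercent first(FreeMB) as FreeMB by _time host instance
| rename instance as Drive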
I have a CSV file query as follows:

| inputlookup file_1.csv

which gives the result as follows, in a single line as a single field or column:

A B C D E F G H i j k l m n o p q r s t u v w x

Now, I want to turn the above result into multiple fields named A, B, C, D, E, F, G, H. Basically, what I am trying to achieve is to convert the single field into multiple fields, with each field name and field value extracted based on space separation in the single field above:

A B C D E F G H
i j k l m n o p
q r s t u v w x
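A hedged sketch, assuming the single column is named "line" (a placeholder, since the post doesn't name it) and that every 8 space-separated tokens form one row, with the first 8 tokens being the header; the header chunk comes through as a data row, so it is filtered off at the end:

| inputlookup file_1.csv
| rex field=line max_match=0 "(?<chunk>(?:\S+\s+){7}\S+)"
| mvexpand chunk
| rex field=chunk "^(?<A>\S+)\s+(?<B>\S+)\s+(?<C>\S+)\s+(?<D>\S+)\s+(?<E>\S+)\s+(?<F>\S+)\s+(?<G>\S+)\s+(?<H>\S+)"
| where A!="A"
| table A B C D E F G H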
Hi Splunkers - We are trying to create a dashboard with conditional panels that show/hide based on token values. Easy enough. But we are also attempting to use a Submit button, and it's not working as we would like. Currently, the conditional panels show/hide as soon as a user changes the value in the dropdown input, but we would like the panels to show/hide only AFTER the user has hit the Submit button. Is this possible? FYI, we are not using Dashboard Studio for this particular dashboard.
I have two events as below:

event 1: "id=1 api=xyz apiResTime=50"
event 2: "id=1 api=xyz duration=200"

I want to plot the difference between duration and apiResTime by api. So far I have tried this:

index="my_index"
| search * "apiResponseTime"="*"
| table "api", "apiResponseTime"
| rename "api" as api1
| rename "apiResponseTime" as x
| append [search * "duration"="*" | table "api", "duration" | rename "api" as api2 | rename "duration" as y ]
| eval api_match=if(match(api1, api2),1,0) //match the apis
| eval diff=if(api_match=1,y-x,y) // get the difference y-x on match
| table api1, api2, diff

But this is not giving me the required results. Any suggestions/pointers on how I can plot (timechart) the difference between (duration - apiResponseTime) by api? The above events can occur for multiple ids.
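A hedged alternative sketch, assuming apiResTime in the events and apiResponseTime in the search refer to the same field, and that id is what ties each pair of events together (both implied but not stated in the post):

index="my_index" (apiResponseTime=* OR duration=*)
| stats max(apiResponseTime) as apiResponseTime max(duration) as duration max(_time) as _time by id api
| eval diff=duration-apiResponseTime
| timechart avg(diff) by api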
Hello, I currently have Splunk Enterprise 8.0 in production, with Universal and Heavy Forwarders on 8.0. I plan to upgrade to the latest version, 8.2.2. In which order can I upgrade?
1. Upgrade the Splunk Enterprise servers (Master, Indexer, Search Head, Deployment, Licence server) first and then the Forwarders later
2. Upgrade the Forwarders first and Splunk Enterprise after
3. Upgrade both Splunk Enterprise and the Forwarders at the same time
Hi all, I'm having some issues onboarding some new server logs into Splunk. These servers are RedHat 6 and 7 machines. I've installed the Universal Forwarder agent on them and dropped my deployment_client app into the /opt/splunkforwarder/etc/apps directory (the app has a deploymentclient.conf file in it). These servers connect to my DS fine, but then encounter an issue when trying to download the other two apps that are part of some different serverclasses (one app is the Splunk TA for Linux and the other is an app that points to my indexers). I saw this error in the splunkd log file on one of the machines:

-0500 WARN HTTPClient [18097 HttpClientPollingThread_DD738BE1-8B55-41C7-B82B-A9348CA4DF30] - Download of file /opt/splunkforwarder/var/run/all_nix_hosts/nix_forwarder_outputs_ssl-1630327417.bundle failed with status 502

I have other Linux machines that have connected to the DS and received the apps perfectly fine and are sending data. I've also done Windows servers with their respective apps, with no issues there. Any idea why this may be happening?
I want to download the Splunk User Behavior Analytics OVA for testing purposes for a client. After testing we will buy the license, but I did not find any Splunk UBA OVA. Can you please provide me with the download link for the Splunk UBA OVA? Thank you
As part of a DR plan I am writing, I need to be alerted if an instance of Splunk Enterprise / ES has been deleted, removed, or disabled. I do have the Monitoring Console in place in both. Is using the MC to keep an eye on such issues the only way? Thank you in advance. I appreciate a response.
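A hedged sketch of one alternative, assuming the instances forward their _internal logs to an indexer tier that stays up during the outage (a common pattern, but not something the post confirms): alert when a known instance stops reporting.

| metadata type=hosts index=_internal
| eval minutes_silent=round((now()-recentTime)/60)
| where minutes_silent > 15
| table host minutes_silent

Scheduled as an alert, this fires for any Splunk host whose internal logs have gone quiet, which would cover deleted, removed, or disabled instances.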
Hello, I am a freshman with Splunk. I have a problem trying to combine two or more searches into one. Pretty much, my data looks like this:

{
     "TimeStamp": "\/Date(1630425120000)\/",
     "Name": "Plan-MemoryPercentage-Maximum.json",
     "Maximum": 14
}
{
     "TimeStamp": "\/Date(1630425120000)\/",
     "Name": "Plan-MemoryPercentage-Average.json",
     "Average": 14
}

Both sets will have the same TimeStamp for the entries, and I just want a table that has the matching time stamps, a column for max, and a column for avg. So far I'm able to get a single table going with a query that looks like:

Name="Plan-MemoryPercentage-Maximum.json"
| table *
| fields TimeStamp, Maximum
| fields - _time, _raw

But I'm really struggling to figure out how to combine 2 searches into 1 table. Anyone have any ideas?
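A hedged sketch, assuming both event types sit in the same index so one search can cover them; stats then folds the two columns onto the shared TimeStamp:

Name="Plan-MemoryPercentage-Maximum.json" OR Name="Plan-MemoryPercentage-Average.json"
| stats max(Maximum) as Maximum max(Average) as Average by TimeStamp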