All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, the Splunk documentation for SAI says it's compatible with the Windows Add-on from version 5.0.1 onwards. I have version 6.0 and can't seem to get perfmon data from the app. Does anyone know how to configure this, or what data SAI uses from the Windows Add-on?
I have a dashboard I use to monitor my HFs. In this dashboard I want to add an overlay, but the value of the overlay will differ by host, so I want to just use eval overlay=max(max_size). For some reason this doesn't seem to work, even though the max_size field shows up as a number (it has the #). If I instead put a hardcoded numeric value, the SPL works as expected... what gives?

SPL (hardcoded = working):

index=_internal host=hf_1 group=queue name=tcpout_foo
| fillnull value=0 ingest_pipe
| timechart avg(current_size) by ingest_pipe
| eval threshold=512000

SPL (variable = NOT working):

index=_internal host=hf_1 group=queue name=tcpout_foo
| fillnull value=0 ingest_pipe
| timechart avg(current_size) by ingest_pipe
| eval threshold=max(max_size)
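A hedged observation on why this might fail: in SPL, eval's max() compares its arguments within a single result row, whereas computing a maximum across rows requires an aggregating command such as stats or eventstats. A minimal sketch, assuming max_size really is present on these queue events, and dropping the split-by for brevity:

```spl
index=_internal host=hf_1 group=queue name=tcpout_foo
| fillnull value=0 ingest_pipe
| eventstats max(max_size) as max_seen
| timechart avg(current_size) as current_size, max(max_seen) as threshold
```

The eventstats pass attaches the across-rows maximum to every event before timechart runs, so the threshold survives into the chart as an ordinary series that can be used as an overlay.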
Tried to start Splunk after installation.  HTTP Error 404. The requested resource is not found.        Thank you,   JT
Hello, I installed Splunk Enterprise 7 on Linux two days ago, logged on as admin, and it worked. Today, however, I receive: "Invalid username or password. Your license is expired. Please login as an administrator to update the license." How can I log on? Best regards, Altin Karaulli
Hello, I am trying to complete the lab in Module 4 of Splunk Fundamentals 1. I am trying to add a data file I downloaded for the lab, and everything works fine until I get to the review segment. I upload the file, it gets to 100%, and then it just says "processing" and never actually finishes. My Splunk instance has a red exclamation mark next to disk space and TailReader. I cleared up disk space yesterday when it had a yellow exclamation mark, and I now have 4.66 GB free. I just want to complete this course, and calling customer service was useless. Thanks
How can I use the HTTP Event Collector in the Splunk Cloud 15-day trial? Also, will my instance be disabled after the trial? I followed the instructions here, but I got {"text": "Data channel is missing", "code":10} from `https://http-inputs-prd-p-xxxxx.splunkcloud.com:443/services/collector/event` and "Could not resolve host" from https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event.
Hey All, I am trying to get the traffic for each country over a time period, with a span of 2 hours, within a date range of the last 24 hours, using a query like the one below:

Index=xyz
| bucket span=1h _time
| eval ftime=strftime(_time, "%d-%m-%Y %H:%M")
| chart count(status) as Traffic over country by ftime

What do I need to do so that the color range varies based on the percentage of traffic compared to the previous hour? For example, if traffic is less than 99 percent of the previous bin's count it should show yellow, and if it is less than 50 percent, red. (bg = background)

Country   1-05-20 01:00     1-05-20 01:00
US        100k (green bg)   30k (red bg)

Thanks in advance
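Not an authoritative answer, but one common pattern is to compute the percentage against the previous bin with streamstats before charting, and derive a range field from it; the coloring itself is then configured in the table/chart formatting, which this sketch does not cover. Assuming the country field is literally named country:

```spl
index=xyz
| bucket _time span=1h
| stats count(status) as Traffic by _time, country
| sort 0 country _time
| streamstats current=f window=1 last(Traffic) as prev by country
| eval pct = round(Traffic / prev * 100, 1)
| eval range = case(pct < 50, "red", pct < 99, "yellow", true(), "green")
```

Each row then carries the traffic, its percentage of the previous hour's count for the same country, and a range value the dashboard formatting can key on.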
Hi all, can someone help me with this problem? I'm working on a dashboard that needs to show how many users logged into the system, and I need two views for each 30 minutes:

1. Today
2. Over time (just to see whether today we are getting more users logged on than in history)

I can search each one using the earliest and latest functions, but I don't know how to join them on the same time axis. Here is an example:

my-search logon-action earliest=1 latest=now() | fields _time | bucket span=30min _time | eval hour=strftime(_time, "%H:%M") | chart count as "Over-time" over hour

Statistics came out like this:

Hour   Count
01:00  4
01:30  10
02:00  5
03:00  8
05:00  1

my-search logon-action earliest=-1@d latest=now() | fields _time | bucket span=30min _time | eval hour=strftime(_time, "%H:%M") | chart count as "today" over hour

Statistics came out like this:

hour   count
01:30  1
03:00  8

I'm using the appendcols command to get one count per 30 minutes to chart:

my-search logon-action earliest=1 latest=now() | fields _time | bucket span=30min _time | eval hour=strftime(_time, "%H:%M") | chart count as "Over-time" over hour | appendcols [ search logon-action earliest=-1@d latest=now() | fields _time | bucket span=30min _time | eval hour=strftime(_time, "%H:%M") | chart count as "today" over hour ]

and I'm getting this:

hour   Over-time   today
01:00  4           1
01:30  10          8
02:00  5
03:00  8
05:00  1

So the "today" values 1 and 8 land on the 01:00 and 01:30 rows, but they actually belong to 01:30 and 03:00. How can I fix it? I would appreciate help getting something like this:

hour   Over-time   today
01:00  4           0
01:30  10          1
02:00  5           0
03:00  8           8
05:00  1           0

Any other idea to fix it is welcome. Thank you!
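One way to sidestep the appendcols misalignment entirely, sketched on the assumption that my-search logon-action stands in for the real base search: run a single search over the long range and let a conditional count build the "today" column, so both columns share the same hour rows:

```spl
my-search logon-action earliest=1 latest=now()
| bucket span=30min _time
| eval hour=strftime(_time, "%H:%M")
| chart count as "Over-time", count(eval(_time >= relative_time(now(), "-1d@d"))) as "today" over hour
```

Because both counts are computed over the same hour field, every 30-minute slot gets a row, and slots with no logons today show 0 instead of shifting upward.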
I would like to search for two consecutive occurrences of an event (separated by, let's say, 1 minute at most). I don't care if that event happened lots of times in the past; I just want to know if two of these occurrences happened within just 1 minute. Thanks in advance for your help.
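A hedged sketch using streamstats: with results in Splunk's default newest-first order, each event can look at the previous result row (the later of the two occurrences) and keep the pair only when the gap is at most 60 seconds. Here your-search is a placeholder for the search matching the event of interest:

```spl
your-search
| streamstats current=f window=1 last(_time) as next_time
| eval gap = next_time - _time
| where gap <= 60
```

Any row surviving the where clause is the earlier event of a pair that occurred within one minute.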
Hi guys, I'm looking to deploy Splunk on AWS and curious how it translates compared to physical servers. I have around 3 TB a day, 30 concurrent users (60 total), running ES, and I'm planning to implement SmartStore in a multisite cluster (1 region, 2 AZs). Roughly I am looking at:

1 x SH, M4.10xlarge: my concern here is that the docs say it supports up to 20 concurrent users for ES, and we will have 30.
17 x IDX, i3.8xlarge: would this be sufficient for indexing/search needs?

There is still a lot more low-level detail I need to gather, so I understand this is hard to suggest accurately; I'm more interested in whether my assumptions on a loose outline are headed in the right direction. Any input would be appreciated. Thanks!
Hello, I'm trying to get the domain name alone from any given URL. Please see the list of URL formats I'm dealing with and how I want the result to be.

https://www.example.com:9090
https://abc.example-url.uk/username/test
https://test-url.web-url.ch/test
https://hello.example.co?test
https://test.example.com,

Expected output for the URL field:

www.example.com
abc.example-url.uk
test-url.web-url.ch
hello.example.co
test.example.com

I tried the following rex:

rex "Value1=https://(?P<URL>([\w]+\.[\w]+\.[\w]+))"

The above won't pull anything like abc.example-url.uk or test-url.web-url.ch. It looks like I'm missing something here. Can anyone please help?
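One hedged fix: [\w] never matches a hyphen, and pinning the pattern to exactly three dot-separated labels also breaks on hosts with two or four labels. Capturing everything after https:// up to the first port colon, slash, question mark, comma, or whitespace handles all five samples, assuming the events really carry the Value1= prefix from the original rex:

```spl
| rex "Value1=https://(?<URL>[^\s:/?,]+)"
```

Against the samples above this yields www.example.com, abc.example-url.uk, test-url.web-url.ch, hello.example.co, and test.example.com.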
Encountered the following error while trying to connect to the ACI APIC: "update: Error while posting to url=/servicesNS/nobody/TA_cisco-ACI/admin/cisco_aci_server_setup/cisco_aci_server_setup_settings". I have set SSL to "False" in C:\Program Files\Splunk\etc\apps\TA_cisco-ACI\local\password.
The testName values 'VerifyBtagsTest' and 'Test_AcceptTAndCModal' each occur twice, and I want to keep only the latest executed row. How do I do this?

base search..... | table testName Status resultMessage | where (resultMessage=="null" AND resultMessage != "Test already passed in test plan/run*") OR (Status=="Fail")

testName                                    Status   executed                  resultMessage
VerifyBtagsTest                             Fail     2020-06-13T18:17:17.701   System.NullReferenceException
BonusBalanceTrackingTest                    Fail     2020-06-13T18:10:36.249   System.NullReferenceException
Test_AcceptTAndCModal                       Fail     2020-06-13T18:10:36.249   OpenQA.Selenium.NoSuchElementException
VerifyBtagsTest                             Fail     2020-06-13T18:10:36.249   OpenQA.Selenium.NoSuchElementException
BonusBalanceTrackingTest                    Pass     2020-06-13T18:17:17.702   null
Test_AcceptTAndCModal                       Pass     2020-06-13T18:17:17.702   null
MarketBannerWithOutcomesFunctionalityTest   Pass     2020-06-13T18:15:50.825   null
BasicBannerPromotionTest                    Pass     2020-06-13T18:15:30.316   null
BelgiumLoadBankingDesktopTest               Pass     2020-06-13T18:15:20.831   null
MaltaLoadBankingDesktopTest                 Pass     2020-06-13T18:15:13.02    null
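A hedged sketch: since the executed values are ISO-8601 timestamps, they sort correctly as strings, so sorting newest-first and then deduplicating on testName keeps only each test's most recent row ("base search" is the placeholder from the question):

```spl
base search
| sort 0 - executed
| dedup testName
| table testName Status executed resultMessage
| where (resultMessage=="null" AND resultMessage != "Test already passed in test plan/run*") OR (Status=="Fail")
```

The dedup needs to come before the filtering if the goal is to judge each test only by its latest run.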
Dear Sir, I cannot access my Splunk Fundamentals course due to the reason below: "Your access to this course has expired. Access ended October 11, 2019." Please allow me to access the course. Regards, Jashim Uddin Bhuiyan
We are using DB Connect to meet AML requirements. The Splunk retention period was 1 year, but it turned out that seven years are necessary. I would therefore like to ask three questions:

1. For deletion after the retention period, is the data's age judged by the import date and time, or by the DB's timestamp?
2. Please tell us about the timing of deletion after the retention period. (Is data deleted immediately, or on a regular schedule?)
3. We would like to refer to the deletion history in Splunk Cloud. Please tell us the query to use, or can you give us the deletion history?
Hi there, I'm currently running a UF and an HF on one box, which is also a syslog collector. The HF needs to be there because some feeds require apps to pre-process data. This isn't the ideal intermediate layer, but it has to be this way for now. I have two questions around this:

1. Would it be better to remove the UF and send all events (even the feeds that don't need preprocessing) through the HF, or should I use the UF for all other feeds and run both forwarders on the same server?
2. What would be the ideal server spec for parsing 2 TB of data per day through the HF tier?

Thanks!
I'm trying to change my IP address, but when I restart Splunk, the IP in web.conf is still "127.0.0.1". I changed splunk-launch.conf too.
Hello everyone, nice to meet you. I created a dashboard that includes a Top 10 bar chart and a table (100+ rows). I also attach a PDF of the dashboard by email (Dashboard > Export > Schedule PDF Delivery). I found that the font size of the table is quite small: it shows 30+ rows per page. I understand that Splunk auto-adjusts the font size, but it is really small. I want to change the size so that other users can read it smoothly. How can I make this change? (I don't have admin rights in Splunk.) Thank you so much!
Hi All, I have set up the F5 iApp to push analytics data to Splunk, and from tcpdumps on the F5 I can see Splunk accepting the data, but I don't see any data in Splunk. I have installed the F5 Splunk add-on as suggested by the integration guide. Is there a way to check where the issue is?
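One hedged way to narrow this down is to check whether the events landed under an unexpected index or sourcetype, since F5 data arriving on the TCP input but indexed elsewhere would look exactly like "no data" in the app's dashboards:

```spl
| tstats count where index=* by index, sourcetype
```

If nothing shows up there either, the _internal index (source=*metrics.log* group=tcpin_connections) can help confirm whether the receiving TCP port is actually seeing a connection from the F5.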
We need to capture Oozie workflow failure events in the Splunk HTTP Event Collector (example: https://qa.splunk.organization.com/services/collector). To achieve this, we created a separate Oozie Java action to log the failure event to Splunk. The problem with this approach is that we have more than 100 Oozie workflows, and adding a new workflow action for Splunk to each of them is not feasible. Is there a better approach to capture Oozie workflow failures in the Splunk HTTP Event Collector?