All Topics

Hi Folks, I have the following requirement. I have a dashboard with a timepicker (backed by a token) and a bar chart panel. For example, if I choose 15 days in the timepicker, the chart shows the data in 15 bars (Oct 20th to Nov 3rd). Now suppose I click on a bar ($click.value$); it takes me to the next panel, where I want to see the data from $click.value$ back to the 15 days BEFORE it. E.g., if I click on the bar for Oct 20th, the next panel should show me data for the previous 15 days (Oct 6th to Oct 20th). Can someone help me set up the earliest and latest times through tokens for this scenario?
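A minimal Simple XML sketch of one way to wire this up, assuming $click.value$ resolves to the epoch start time of the clicked bar (typical for a _time x-axis); the token names drill_earliest/drill_latest are placeholders.

On the bar chart panel:

<drilldown>
  <set token="drill_latest">$click.value$</set>
  <eval token="drill_earliest">$click.value$ - (15 * 86400)</eval>
</drilldown>

On the target panel's search:

<search>
  <query>index=your_index ...</query>
  <earliest>$drill_earliest$</earliest>
  <latest>$drill_latest$</latest>
</search>

The <eval> subtracts 15 days' worth of seconds from the clicked timestamp, so the second panel's window runs from 15 days before the clicked bar up to the bar itself.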
Hello, I use Splunk to look at Office 365 email, but I don't see the header info relating to TLS, which we are looking for data on. How do I pull this info into Splunk? Is it in a different log? Thanks
When doing a hunting exercise on an ethical-hack system, I'm looking for an efficient way to find the unique breadcrumbs on that system compared with all the other systems in the same time window. Suppose the EH system (system 1) has processes A, B, C, D, whereas all the other systems have processes A, C, D, E, F, G, H... The result I'm looking for is process=B, which was found only on system 1. I've tried subsearches, join, etc., but seem to be running in circles. All help is much appreciated. Since the full population (everything except system 1) can be a very large dataset, it's important to make the SPL as efficient as possible.
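A single-pass sketch that avoids subsearches entirely by aggregating the whole population once with stats; the index, sourcetype, and field names (host, process) are assumptions to adapt:

index=endpoint sourcetype=process_events earliest=-24h
| stats dc(host) as host_count values(host) as hosts by process
| where host_count=1 AND hosts="system1"

Because the population is reduced to one row per process, this scales much better than join against a large dataset: any process whose distinct host count is 1 and whose only host is the EH system is unique to it.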
Hello, is it possible for the UF to remove/delete files once they've been pushed to the indexer? How would I do that? Thank you; any help will be highly appreciated.
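Splunk's batch input does exactly this: with move_policy = sinkhole, the UF indexes each file and then deletes it. A sketch for the UF's inputs.conf, with the path, sourcetype, and index as placeholders; note this only suits files that are written once and then closed, not logs that are still being appended to:

# inputs.conf on the UF -- files matching this path are consumed, then deleted
[batch:///var/log/myapp/archive/*.log]
move_policy = sinkhole
sourcetype = myapp_logs
index = main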
Hi, I have content like the sample below and I would like to extract the git URL from it. Can you please suggest how to do this using rex? Content: proj_url\x1B[0;m=https://my.test.net/sample/test.git test\x1B[0;m=abcd. The output should be: https://my.test.net/sample/test.git Any help is appreciated. Thanks.
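A sketch that anchors on the proj_url label, skips the ANSI escape bytes before the =, and captures up to .git:

| rex field=_raw "proj_url[^=]*=(?<git_url>\S+\.git)"

The [^=]* tolerates the \x1B[0;m sequence, and the greedy \S+ backtracks so the capture ends at .git, yielding git_url=https://my.test.net/sample/test.git for the sample above.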
We've recently started a Splunk Cloud instance and are attempting to send data to it locally, so we have all the steps ready to push to servers. I've followed the installation instructions pretty much everywhere a few times and still have no solution. An example of the steps taken can be found here: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2109/Admin/UnixGDI with the exception that I installed through a .dmg and my universal forwarder lives at /Applications/SplunkForwarder. I've been digging around to try to see what could've gone wrong. I haven't messed with any of the configuration files yet, just added the app with the credentials file and added a monitor to the log file. I can tail the log file locally and things print out to it fine, and the file mapping is correct. The only thing I've noticed is that if I go to $SPLUNK_HOME/etc/system/local there's no `inputs.conf` file, but I'm not sure that's even required. Does anyone have any ideas on where to even start to hunt down the issue? Also, if I run ./bin/splunk list forward-server, the forwarder successfully shows up under active.
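One way to narrow it down: the CLI writes monitor stanzas under etc/apps/search/local (or wherever the app context points), not necessarily etc/system/local, so the missing inputs.conf there isn't conclusive on its own. A sketch of commands to verify what the forwarder actually thinks it's monitoring (the log path is a placeholder):

# from /Applications/SplunkForwarder
./bin/splunk list monitor
./bin/splunk add monitor /path/to/your.log -index main -sourcetype mylog
./bin/splunk btool inputs list --debug    # shows which .conf file defines each input

If the monitor is listed and the forward-server is active, the next places to look are splunkd.log for TcpOutputProc errors and the Cloud side for whether the index named in the input actually exists.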
Hey splunksters, I don't do much PowerShelling, but I have a big list of Windows Azure servers that need to have the universal forwarder installed. Does anyone have a PowerShell script to install a forwarder on multiple remote (Azure) Windows machines? Preferably the script should check whether a forwarder is already installed and skip the machine if it is. Thanks
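A minimal PowerShell sketch, assuming WinRM is enabled on the targets, the MSI sits on a share they can all reach, and you have a deployment server; the server list file, share path, and deployment-server address are placeholders:

# servers.txt: one hostname per line
$servers = Get-Content .\servers.txt
Invoke-Command -ComputerName $servers -ScriptBlock {
    # Skip hosts where the forwarder service already exists
    if (Get-Service -Name SplunkForwarder -ErrorAction SilentlyContinue) {
        Write-Output "$env:COMPUTERNAME: forwarder already installed, skipping"
        return
    }
    Start-Process msiexec.exe -Wait -ArgumentList @(
        '/i', '\\fileshare\splunk\splunkforwarder-x64.msi',
        'AGREETOLICENSE=Yes',
        'DEPLOYMENT_SERVER="ds.example.com:8089"',
        '/quiet'
    )
}

AGREETOLICENSE and DEPLOYMENT_SERVER are documented properties of the Splunk UF MSI; pointing every forwarder at a deployment server also saves you from pushing later config changes over PowerShell.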
I have a dashboard containing two radio buttons, one for 'Current quarter' and one for 'Previous quarter'. I also have a timepicker input field for customizing the time range for a query. The following is the XML for the input fields:

<input type="radio" token="quarter_token" searchWhenChanged="true">
  <label>Select quarter...or...</label>
  <choice value="earliest=@qtr latest=@d">Current quarter</choice>
  <choice value="earliest=-1qtr@qtr latest=@qtr">Previous quarter</choice>
</input>
<input type="time" token="time_token" searchWhenChanged="true">
  <label>Select date range</label>
  <change>
    <set token="form.quarter_token"></set>
  </change>
  <default>
    <earliest>@qtr</earliest>
    <latest>@d</latest>
  </default>
</input>

My question is: how can I reset the timepicker label text to match the time range of the selected radio button? Currently, if a user selects a custom time range and then selects one of the radio buttons, the timepicker label doesn't match the radio button's time range selection. For example, can I simply do the following to reset the timepicker label?

<set token="form.time_token.earliest">@qtr</set>
<set token="form.time_token.latest">@d</set>

In the example attachment, the timepicker label shows 'Last 55 days' while the 'Previous quarter' radio button is selected. This presents an inconsistent UI. Thanks in advance, Bob
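Yes, setting form.time_token.earliest/latest is the right idea; it just has to happen in the radio input's <change> handler so it fires on selection. A sketch using <condition> blocks keyed on the choice labels (note that the timepicker's own <change>, which clears form.quarter_token, can then re-fire, so you may need to guard against the two inputs resetting each other):

<input type="radio" token="quarter_token" searchWhenChanged="true">
  <label>Select quarter...or...</label>
  <choice value="earliest=@qtr latest=@d">Current quarter</choice>
  <choice value="earliest=-1qtr@qtr latest=@qtr">Previous quarter</choice>
  <change>
    <condition label="Current quarter">
      <set token="form.time_token.earliest">@qtr</set>
      <set token="form.time_token.latest">@d</set>
    </condition>
    <condition label="Previous quarter">
      <set token="form.time_token.earliest">-1qtr@qtr</set>
      <set token="form.time_token.latest">@qtr</set>
    </condition>
  </change>
</input>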
Hi, I am trying to get the average response time for calls over 3 seconds, and have the below:

index=test sourcetype="test"
| bin span=1d _time
| table response_time
| eventstats count as Event
| eval ResponseTime=response_time/1000
| eval isOver3s=if(ResponseTime>3,1,0)
| stats values(Event) as "Event", avg(ResponseTime) as "Average Response", sum(isOver3s) as "isOver3s", max(ResponseTime) as "Max Response Time", avg(eval(ResponseTime>=3)) as avgisOver3s
| eval Percentage=round((isOver3s/Event)*100,2)
| table Event "Average Response" isOver3s Percentage "Max Response Time" avgisOver3s

However, the average response for the over-3-seconds calls comes out lower than the overall average, which is incorrect. Any help would be greatly appreciated. Thanks, Joe
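The culprit is avg(eval(ResponseTime>=3)): the inner eval yields 1 or 0, so it averages the flag (i.e., the fraction of calls over 3s), not the response times of those calls, and that fraction will almost always be smaller than the overall average. The | table response_time in the middle also throws away _time right after the bin. A reworked sketch that averages only the qualifying values (null() excludes the rest from avg):

index=test sourcetype="test"
| eval ResponseTime=response_time/1000
| eval isOver3s=if(ResponseTime>3,1,0)
| stats count as Event avg(ResponseTime) as "Average Response" sum(isOver3s) as isOver3s max(ResponseTime) as "Max Response Time" avg(eval(if(ResponseTime>3, ResponseTime, null()))) as "Avg Over 3s"
| eval Percentage=round((isOver3s/Event)*100,2)
| table Event "Average Response" isOver3s Percentage "Max Response Time" "Avg Over 3s"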
Hi, I have the Forescout Technology Add-on and the Forescout Adaptive Response Add-on installed on my ES SH. The integration is working fine with respect to retrieving events from Forescout; however, I am having a problem with the Adaptive Response Add-on. I installed the add-on, but when I restart the ES SH it gives an error message (screenshot attached). When I go into /opt/splunk/var/log/splunk and check the log file TA-forescout_response_init.log, it shows:

[splunk@dub2splk203 splunk]$ tail TA-forescout_response_init.log
2021-11-03 15:42:29 - fsct_rest_api_wrapper.py:30 - INFO - Posting new message to bulletin.
2021-11-03 15:42:29 - fsct_rest_api_wrapper.py:44 - DEBUG - REST API request succeeded
2021-11-03 17:15:17 - ta_forescout_response_init.py:35 - DEBUG - Initializing app: [TA-forescout_response]...
2021-11-03 17:15:18 - fsct_ar_actions_reader.py:34 - INFO - Read usessl: [1], verify_cert: [1] from app: [TA-forescout]
2021-11-03 17:15:18 - fsct_ta_config_reader.py:59 - DEBUG - Getting credentials configured in app: [TA-forescout].
2021-11-03 17:15:18 - fsct_ar_actions_reader.py:38 - INFO - Read fsct_emip: [dub2fst202.syncreon.local] from app: [TA-forescout]
2021-11-03 17:15:18 - fsct_ar_actions_reader.py:56 - DEBUG - Action url: https://dub2fst202.syncreon.local/splunk/actions_info?auth=CounterACT%20
2021-11-03 17:15:18 - ta_forescout_response_init.py:41 - CRITICAL - Error while getting alert actions from CounterACT: Unsuccessful Actions Info API call. Invalid status: [401] or request ID mismatch
2021-11-03 17:15:18 - fsct_rest_api_wrapper.py:30 - INFO - Posting new message to bulletin.
2021-11-03 17:15:18 - fsct_rest_api_wrapper.py:44 - DEBUG - REST API request succeeded

There is no problem with access to my CounterACT server (on-prem), as I verified that the HTTPS connection can be made. Does anybody have any experience with this add-on or this error? I'm kind of lost, and there is very little from Forescout on this. Thanks!
Suppose I have 3 date inputs and one submit button. When I open the dashboard, I select the dates one by one and click the submit button; the chart should then be displayed at the bottom of the dashboard. Please help me implement this, as it's very critical.
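A minimal form sketch of that pattern, assuming the three dates are time inputs; with submitButton="true" and autoRun="false", the panel's search (and so the chart) only runs after Submit is clicked. Token names and the query are placeholders:

<form>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="date1"><label>Date 1</label></input>
    <input type="time" token="date2"><label>Date 2</label></input>
    <input type="time" token="date3"><label>Date 3</label></input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=your_index ... | timechart count</query>
          <earliest>$date1.earliest$</earliest>
          <latest>$date1.latest$</latest>
        </search>
      </chart>
    </panel>
  </row>
</form>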
Is it possible to have a UF/HF automatically restarted when it stops working or isn't sending the expected rate of events? There have been times when a UF went down / froze on a Friday night and we found out about it on Monday!! I appreciate your feedback.
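Splunk itself won't restart a remote forwarder, but you can pair OS-level service recovery (Windows service recovery options, or systemd's Restart=on-failure via splunk enable boot-start -systemd-managed 1 on Linux) with an alert that catches forwarders going quiet. A sketch of such an alert, using each forwarder's own _internal traffic; the 15-minute threshold is arbitrary:

| tstats latest(_time) as last_seen where index=_internal by host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15

Schedule it every few minutes and any UF/HF that stops phoning home shows up long before Monday.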
Last week a large portion of our Windows hosts reported in with a different "host" value. This is causing all sorts of issues with dashboards, which think our number of monitored hosts has doubled. The issue we're seeing is that for around a week they all began being logged under their FQDN, not just the short host name. (Similar to what was seen here: https://community.splunk.com/t5/Getting-Data-In/Where-does-windows-get-its-host-field-from/m-p/17422#M2242) I've compared 2 logs from the same host with the same event ID. The only difference I can see is that dvc_nt_host differs between the 2, while dvc is the FQDN in both. Which is odd, because this line is in the props.conf of the Windows TA app:

FIELDALIAS-dvc = host as dvc, host as dvc_nt_host

So it appears that the FQDN is always available; however, sometimes it's used and sometimes it's shortened to just the hostname. I've hit a wall trying to work out what is causing this, as no changes have been made to Splunk in the last week.
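To scope the problem, a sketch that lists every short name reporting under more than one host value (assuming the short name is the first DNS label of the FQDN):

| tstats count where index=* by host
| eval short_host = mvindex(split(host, "."), 0)
| stats values(host) as variants sum(count) as events by short_host
| where mvcount(variants) > 1

That at least tells you which hosts flip between forms, which you can correlate against whatever changed outside Splunk (DNS, GPO, agent updates).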
I apologize since similar questions have been asked numerous times in the past. I have read several of them on this site as well as Splunk's own timezone article. I've tried a lot of things based on these articles, but the _time value doesn't appear to change at all.  I'm either doing something wrong or my expectations are off.  Background: We are PST. The Operating Systems for all our Splunk servers are configured for PST and are running Splunk 8.1.3. We are using a heavy forwarder to index IIS logs that are in UTC.  When searching these logs in Splunk, I would like the canned times (Last 4 Hours, Last 60 Minutes, etc.) to reflect the PST-equivalent times they occurred. So if I'm searching for something that happened 30 minutes ago in real time, "Last 60 Minutes" will contain that log.  It is my understanding that I am supposed to create/edit the props.conf on the heavy forwarder (/opt/splunk/etc/apps/iis/local/props.conf) and specify the TZ these logs files are set to: [sourcetype_name] TZ = UTC Then restart Splunk on the heavy forwarder.  This is done and I've restarted the entire Splunk farm. I've even set this in the /opt/splunk/etc/system/local/props.conf on the HF. These logs are still being indexed 7 hours into the future.  Should this be working or am I thinking about this completely wrong?  If my thinking is off-base, is it possible to accomplish what I'm attempting? Any suggestions would be appreciated.  Thank you. 
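Two things worth checking: TZ only applies to events parsed after the restart (already-indexed events keep their timestamps), and the stanza name must exactly match the sourcetype as it arrives at the parsing tier (the heavy forwarder). A quick sketch to measure the offset on newly indexed events, with the sourcetype as a placeholder:

index=iis sourcetype=your_iis_sourcetype
| eval lag_hours = round((_indextime - _time) / 3600, 1)
| stats count by lag_hours

If new events still cluster around -7 (stamped roughly 7 hours ahead of the time they were indexed), the TZ setting isn't being picked up, and btool props list your_iis_sourcetype --debug on the HF will show whether your stanza is winning.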
Hi, I am trying to calculate the percentage of two fields; however, the Perc field is not returning anything:

Index=test sourcetype=Iis
| table response_time
| eval ResponseTime=response_time/1000
| eval isOver3s=if(ResponseTime>3,1,0)
| eval Perc=round((isOver3s/Event)*100,2)
| eventstats count as Event
| stats Values(Event), sum(isOver3s)
| table Event, isOver3s, Perc

Any advice would be greatly appreciated. Thanks, Joe
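Two ordering problems: Perc is computed before Event exists (the eventstats that creates it runs later in the pipeline), and the final stats discards Perc anyway while leaving its outputs named values(Event) and sum(isOver3s), which the closing table doesn't match. A reordered sketch:

index=test sourcetype=iis
| eval ResponseTime = response_time / 1000
| eval isOver3s = if(ResponseTime > 3, 1, 0)
| stats count as Event sum(isOver3s) as isOver3s
| eval Perc = round((isOver3s / Event) * 100, 2)
| table Event isOver3s Perc

Aggregating first and deriving the percentage afterwards means every field the table references actually exists by then.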
From time to time, if you try to access the search head GUI, you get a proxy error. When this happens we can also see that the count of Apache processes increases from around 50 to 200. After around 20 minutes the Apache count drops and you can access the GUI again. Has anybody seen such behavior, and does anyone know what the reason might be?
I would like the background to be either red or green based on the text of "deviceSeverity". The value of deviceSeverity can be either "Up" or "Down". No matter what I do, the background stays grey. I am new to Splunk formatting and tried searching through the various messages here, but have not had any luck. This is the latest that I have, and I am probably over-complicating things (I just want the background to be red if deviceSeverity is "Down" and green if deviceSeverity is "Up"):

<search>
  <query>index=arcmisc dvc = $psmserver$ AND deviceProduct = "ApplicationMonitor" name="VIP Health Check Status" deviceSeverity=* | stats latest(deviceSeverity) | eval range=case(deviceSeverity == "Up", "low", deviceSeverity == "Down", "severe")</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <refresh>30s</refresh>
  <refreshType>delay</refreshType>
</search>
<option name="colorMode">block</option>
<option name="drilldown">none</option>
<option name="classField">deviceSeverity</option>
<option name="refresh.display">progressbar</option>
<option name="useColors">1</option>
<option name="charting.fieldColors">{"severe": 0xFF0000, "low": 0x00FF00, "NULL": 0xC4C4C0}</option>
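One likely culprit is in the SPL rather than the options: stats latest(deviceSeverity) without an as renames the field to latest(deviceSeverity), so the following case() sees no deviceSeverity and range is never set, leaving the default grey. A sketch of the corrected tail of the query:

| stats latest(deviceSeverity) as deviceSeverity
| eval range = case(deviceSeverity == "Up", "low", deviceSeverity == "Down", "severe", true(), "unknown")

With range populated, pointing classField at range (rather than deviceSeverity) is probably also what the coloring options expect.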
I need to upgrade our DB Connect version from 2.3.1 to 3.7.0, and then I need to install the MySQL driver. The zip files both contain the same top-level directory, C:\Program Files\Splunk\etc\apps\splunk_app_db_connect. That seems to suggest that simply installing the 3.7.0 product might overwrite existing objects (such as the local and metadata directories). Is there a process for performing the upgrade that avoids overwriting existing objects?
Hello all, Basically, I can't use the Splunk Cloud trial. It constantly throws "An internal error was detected when creating the stack."
Hi Team, we installed these apps on our License Master as part of the IT Essentials Work app:

SA-ITSI-Licensechecker
SA-UserAccess

https://docs.splunk.com/Documentation/ITEWork/4.10.2/Install/Install#Install_IT_Essentials_Work_in_a_distributed_environment

Then we saw that the license "IT Service Intelligence Internals *DO NOT COPY*" appeared. Can this license be used for production data ingestion? And can you confirm that this license is included with the IT Essentials Work app itself?