All Topics

Hello, how would I know when data last came in under any index/sourcetype? I have one query (see below) that shows me the feed status by index, but my objective is to find when data was last fed to each index/sourcetype. Any help will be highly appreciated. Thank you!

| tstats prestats=t count where earliest=-7d@d latest=-0d@d index=* by index, _time span=1d
| timechart useother=false limit=0 span=1d count by index
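A minimal sketch of one way to get the most recent event time per index and sourcetype directly with tstats (timestamp formatted for readability):

| tstats latest(_time) as last_event where index=* by index, sourcetype
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")
| sort 0 - last_event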
Hi everyone, thanks for taking the time to read this and share your knowledge; I've been struggling with this for a bit. I am having an issue making a connection from the Endpoint Cloud (Cylance) to the Splunk Heavy Forwarder pushing syslog, to then be forwarded to Splunk Cloud. When testing, UDP ports work and the connection is successful; however, the logs are still not coming into Splunk Enterprise and not appearing in Splunk Cloud either. I have configured the data input, the inputs.conf, and the index correctly. Ports 514 and 6514 TCP are open on the security side (firewalls).

My question is: for either port 514 or 6514, is TLS/SSL required by default to make a connection to these ports? Or should it connect successfully if I choose for it not to be encrypted (testing)? Even when trying a different random TCP port where the connection is successful, the dashboards in Cylance do not populate. Am I missing a piece of the puzzle? I've made sure to follow all the steps provided. Any help is appreciated. Thanks
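For reference: whether TLS is required depends on how the input stanza is defined, not on the port number. A plain [tcp://...] input accepts unencrypted connections, while a [tcp-ssl://...] stanza requires a TLS handshake (an unencrypted test can still open the socket at the TCP level while the handshake fails, so nothing gets indexed). A minimal sketch of both in a heavy forwarder's inputs.conf; the index name and certificate path are placeholders:

[tcp://514]
index = cylance
sourcetype = syslog
connection_host = ip

[tcp-ssl://6514]
index = cylance
sourcetype = syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem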
Hello, I recently upgraded Splunk Enterprise (and Heavy Forwarder) instances to 8.2.5 and 8.2.6. Both versions (maybe others too) install the Python Upgrade Readiness App 1.0 by default. Splunk then asked me to update the app to 3.1. Nicely done by Splunk, but after the restart the integrity check starts to complain about the missing files of the 1.0 version. It is annoying. Is there a way to "teach" Splunk the new version? (I know the check could be turned off completely, but I don't want to lose the information if something important ever changes.)
Hello, I have source files with very inconsistent/complex events and data structure. I wrote inline field extractions which work for most cases, but they do not extract fields as expected for some cases. I have included 3 sample events and my inline field extraction below. Any help will be highly appreciated. Thank you!

Three sample events:

June 10, 2021 10:41:39:993-0400 - INFO: 439749134|REGT|TEST|SITEMINDER|VALIDATE_ASSERTION|439749134|4deef81s-6455-460b-bf41-c126700d1e9d|2607:fb91:118e:89c9:ad53:43b0:ccce:417c|00||Application data=^CSPProviderName=IDME^givenName=KELLIE^surName=THOMPSON^dateofBirth=1975-04-25^address=21341 E Valley Vista Dr^city=Liberty June 10, 2021 10:41:39:993-0400  EDT 2021^iat= June 10, 2021 10:41:39:993-0400 EDT 2021^AppID=OLA^cspTransactionID=7bdd62bb-966a-426a-9e47-8d2a5a772162

June 10, 2021 10:42:36:991-0400 - INFO: 439741123|REGT|TEST|SITEMINDER|VALIDATE_ASSERTION|439741123|4deef81s-6455-460b-bf41-c126700d1e9d|65.115.214.106|00||Application data=^CSPProviderName=IDME^givenName=KELLIE^surName=THOMPSON^dateofBirth=1975-04-25^address=21341 E Valley Vista Dr^city=Liberty June 10, 2021 10:42:36:991-0400  EDT 2021^iat= June 10, 2021 10:42:36:991-0400 EDT 2021^AppID=OLA^cspTransactionID=7bdd62bb-966a-426a-9e47-8d2a5a772162

May 03, 2021 10:33:50:223-0400 - INFO: NON-8016|IdtokenAuth||authenticate‖lookupClaimVal is null|ERROR|SITEMINDER| QDIAUTH|vp22wsnnn012 |null|null|

My inline field extraction (working for the first 2 events but not the 3rd):

^(?P<TIMESTAMPT>.+)\s+\-\s\w+\:\s(?P<USER>.+)\|(?P<TYPE>\w+)\|(?P<SYSTEM>\w+)\|(?P<EVENT>\w+)\|(?P<EVENTID>\w+)\|(?P<SUBJECT>\w+)\|(?P<LESSION>\w+?\-?\w+?\-?\w+?\-?\w+?-\w+?)\|(?P<SRCADDR>.+)\|(?P<STATUS>\w+)\|(?P<MSG>\w*?)\|(?P<DATA>.+)
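The third event fails because \w+ cannot match an empty field (like the empty SYSTEM there) or values containing spaces and punctuation (like "authenticate‖lookupClaimVal is null"). A sketch of a more permissive pattern that treats each pipe-delimited field as "anything but a pipe" ([^|]*), keeping the original field names:

^(?P<TIMESTAMPT>.+?)\s+-\s+\w+:\s+(?P<USER>[^|]*)\|(?P<TYPE>[^|]*)\|(?P<SYSTEM>[^|]*)\|(?P<EVENT>[^|]*)\|(?P<EVENTID>[^|]*)\|(?P<SUBJECT>[^|]*)\|(?P<LESSION>[^|]*)\|(?P<SRCADDR>[^|]*)\|(?P<STATUS>[^|]*)\|(?P<MSG>[^|]*)\|(?P<DATA>.*)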
I have 2 values, let's say expected time = 6:00:00 and completion time = 08:32:44, and the expected output should be the difference of the two, i.e. (expected - completion), in 12-hour format, including a negative sign. For example: output = -2:32:44 (which is the difference between expected and completion).
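One possible sketch in SPL, assuming the two values are string fields named expected_time and completion_time and both refer to the same day:

| eval exp_sec = strptime(expected_time, "%H:%M:%S")
| eval comp_sec = strptime(completion_time, "%H:%M:%S")
| eval diff = exp_sec - comp_sec
| eval output = if(diff < 0, "-", "") . tostring(abs(diff), "duration")

tostring(<seconds>, "duration") renders HH:MM:SS, so the example values give output = -02:32:44.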
The percentage of non-high-priority searches delayed (19%) over the last 24 hours is very high and exceeded the yellow threshold (10%) on this Splunk instance. Total searches that were part of this percentage = 5927. Total delayed searches = 1141. Can anyone help me out?
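As a starting point for investigation, a sketch of a search over the scheduler logs to see which scheduled searches are not completing on schedule and why (field availability can vary by version):

index=_internal sourcetype=scheduler status!=success
| stats count by app, savedsearch_name, status, reason
| sort - count

Scheduler pressure is often concentrated in a handful of searches; the reason field usually indicates which limit is being hit.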
I'm using an HTTP Event Collector to ingest Palo Alto logs from my syslog forwarders. It's using the raw endpoint: 'https://host:8088/services/collector/raw'. I'm using the Splunk_TA_paloalto to do sourcetyping and field extraction; it also does the time extraction, which appears to work. However, my devices are in the Pacific timezone and not UTC (don't ask why... I just can't fix it). So I created a local directory and a props.conf file in there that looks like:

-bash-4.2$ pwd
/opt/splunk/etc/master-apps/Splunk_TA_paloalto/local
-bash-4.2$ cat props.conf
[pan_log]
TZ = US/Pacific

[pan:traffic]
TZ = US/Pacific

Then I apply the cluster bundle to push the timezone changes to my indexers (this is an indexer cluster). However, traffic is still received in the UTC timezone. What am I missing? Why won't the indexers correct the time?

The Palo app takes in logs using the pan_log sourcetype. It then runs transforms to set the correct sourcetype, pan:traffic or whatever type (I'm testing with just traffic logs at this point). In theory, I think it should work with just the pan_log sourcetype, as time extraction happens before transforms. But it isn't working. I also tried stanzas for [source::http:myinput], but that did nothing as well. I'm also trying to change the TIME_FORMAT and override datetime.xml. That doesn't work either. Clearly I'm missing something.
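One hedged observation, since setups vary: events arriving over the HEC raw endpoint are parsed (including timestamp and TZ handling) on the instance that hosts the HEC input, and indexers receiving already-cooked data do not re-parse it. If the HEC input lives on the syslog forwarders/HF, the TZ props may need to be deployed there rather than to master-apps on the indexers; a sketch of the same stanzas on the HEC-hosting instance, in $SPLUNK_HOME/etc/apps/Splunk_TA_paloalto/local/props.conf:

[pan_log]
TZ = US/Pacific

[pan:traffic]
TZ = US/Pacific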
Hello, I would like to do a search to filter results matching my conditions and then use a common ID field to combine the result with another source. Let's say:

SOURCE A: field ID, field x, field y
SOURCE B: field ID, field z

I want to do a search with some conditions on source A (index=A sourcetype=A' "x=value" "y<=value") and then use a join to get the value of z for the results from the main search. For now I have something like this:

index=A sourcetype=A' "x=value" "y<=value"
| join [ search index=B sourcetype=B' | fields ID | stats count by z ]

It does not seem to work.
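A sketch of how the join is usually written, assuming ID is the common field: join needs the field name, and the subsearch has to return ID itself (the stats count by z in the original discards ID, leaving nothing to join on):

index=A sourcetype=A' "x=value" "y<=value"
| join type=left ID
    [ search index=B sourcetype=B' | fields ID, z ]

If source B can return several rows per ID, adding | stats values(z) as z by ID inside the subsearch keeps one row per ID.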
Hello Splunkers! Initially I added the monitor stanza for all the inputs from various time zones, and when I checked there was a difference between _time and the time present in the event: a lag of 1 or 2 hours depending on that country's time zone versus Splunk's time zone. I then figured out that this is because Splunk looks for a timestamp in the event when parsing the data. Now I need to monitor logs received from various countries in different time zones while Splunk is in yet another time zone; can you please share your knowledge on this?

When I investigated, I found that we can set the below as per https://docs.splunk.com/Documentation/Splunk/8.2.6/Admin/Propsconf:

BREAK_ONLY_BEFORE_DATE = <boolean>
DATETIME_CONFIG = NONE

And I could see that there are options to define the time zones using TZ. Can anyone help me out please?

Example: my source, test.csv:

SYSTEMDATE,SYSTEMTIME,FAILUREMESSAGE
"2022-05-04","12.51.08", The JobA has failed
"2022-05-04","13.00.05", The JobB has failed

Data reflecting in the Splunk UI:

Time                      Event
04/05/2022 12:51:03.000   SYSTEMDATE,SYSTEMTIME,FAILUREMESSAGE
04/05/2022 11:51:08.000   "2022-05-04","14.51.08",The JobA has failed
04/05/2022 12:00:05.000   "2022-05-04","13.00.05",The JobB has failed

Only the below event reflects the current time at which the job is triggered from the application end, which is the correct one, since it has no timestamp defined:

04/05/2022 12:51:03.000   SYSTEMDATE,SYSTEMTIME,FAILUREMESSAGE

Source time zones: various countries like Italy, Romania, Cyprus etc.
Destination/Splunk time zone: BST

Many thanks! Sarah
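A sketch of per-feed timestamp handling in props.conf, assuming each country's feed is distinguishable by source path or sourcetype (the stanza name, TZ value, and lookahead below are illustrative and would be repeated per feed with the right zone):

[source::...test.csv]
TZ = Europe/Rome
TIME_PREFIX = ^"
TIME_FORMAT = %Y-%m-%d","%H.%M.%S
MAX_TIMESTAMP_LOOKAHEAD = 30

With TZ set, Splunk interprets the extracted timestamp in the feed's own zone and stores it correctly; what is then displayed in the UI follows the viewing user's timezone preference.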
We have upgraded Splunk from version 8.0.1 to 8.2.6. Post-upgrade we are observing the IOWait health status as yellow; how can we solve this issue? We didn't observe this issue before the upgrade. Attaching a snapshot for reference.
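If the underlying storage is genuinely healthy and only the indicator became stricter across the upgrade, the thresholds can be tuned per feature in health.conf; a sketch in $SPLUNK_HOME/etc/system/local/health.conf, where the indicator name is taken from memory of health.conf.spec and should be verified against your version before applying:

[feature:iowait]
indicator:avg_cpu__max_perc_last_3m:yellow = 15
indicator:avg_cpu__max_perc_last_3m:red = 30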
Hi all! I need to store more than 500,000 events in an event index and apply aggregation logic that produces metrics to display on a dashboard. I want to use a metrics index to store these metrics so I can improve the performance of the dashboard. The dashboard will have some filters that could generate n! different combinations (one combination per set of filter values).

My concern is that, to guarantee acceptable response times, I would need to generate a metric for every possible combination of the filters, and that just seems excessive. Is this the only way to achieve what I am looking for?
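One common alternative, sketched under the assumption that the dashboard filters map to event fields (region, status, and app below are placeholders, as are the index and metric names): store each filter field as a dimension on a single metric with mcollect in a scheduled search, then let mstats filter and group at query time, so no per-combination metrics are needed:

index=my_events
| bin _time span=5m
| stats count as _value by _time, region, status, app
| eval metric_name = "myapp.event_count"
| mcollect index=my_metrics

A dashboard panel then filters on dimensions directly, for example:

| mstats sum(myapp.event_count) as total where index=my_metrics AND region="EMEA" by status span=1h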
Hi, I'm trying to isolate why I'm not able to drop data from a HEC collector endpoint. I have some Docker logs I don't need to ingest. The Splunk HF is still on 7.3.8 for backwards compatibility, so I don't know if that's in play here. I checked with btool, and the files did load correctly.

inputs.conf (sidenote: when I set the "source" value it remained "httpevent", but when I changed the sourcetype the event changed correctly, which is odd):

[http://tpas_token]
disabled = 0
index = elm-tpas-spc
token = DD0D58D8-9F38-4A96-956C-XXXXXXXXXXXXXX
source = tpas-event
sourcetype = tpas-event

props.conf (sidenote: I also tried [ tpas-event ], and that also did not work):

[ source::tpas-event ]
TRANSFORMS-drop-handlers = drop-handlers

transforms.conf:

[ drop-handlers ]
REGEX = handlers.py|connection.py
DEST_KEY = queue
FORMAT = nullQueue
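One thing worth checking, sketched with the same names: conf stanza names are matched literally, so [ drop-handlers ] (with spaces inside the brackets) is a different stanza than the drop-handlers referenced by TRANSFORMS-drop-handlers, and the same applies to [ source::tpas-event ]. A version without the spaces, matching on sourcetype since the source override reportedly didn't stick:

props.conf:

[tpas-event]
TRANSFORMS-drop-handlers = drop-handlers

transforms.conf:

[drop-handlers]
REGEX = handlers\.py|connection\.py
DEST_KEY = queue
FORMAT = nullQueue

Separately (hedged), senders posting to the /services/collector/event endpoint rather than /raw are handled differently in the parsing phase, which can also affect whether index-time transforms see the data as expected.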
Hi, I want to compare the count of calls obtained in a day with the target in a lookup CSV. For example:

input CSV:
header: Label, hr1, hr2, hr3, ..., hr24
row1: LA, 1, 2, 1, 5, ..., 6

search (by date hour):
index=foo | stats count by Label, date_hour

output: LA, 0, 0, 0, ..., 5

Expected output:

Label   count (from lookup file)   count (from search)   Passed
LA      1                          1                     pass
OA      2                          1                     fail

Can someone help me write the code combining the search and the input lookup?
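A sketch of one way to combine them, assuming the lookup file is named targets.csv, its hourly columns are hr1..hr24, and hr1 corresponds to date_hour=0 (all three are assumptions to adjust):

| inputlookup targets.csv
| untable Label hour_col target_count
| eval date_hour = tonumber(replace(hour_col, "hr", "")) - 1
| join type=left Label, date_hour
    [ search index=foo | stats count as search_count by Label, date_hour ]
| fillnull value=0 search_count
| eval Passed = if(search_count >= target_count, "pass", "fail")

untable turns the one-row-per-Label lookup into one row per Label and hour, which is what makes the join possible.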
We are working to enhance our potential bot-traffic blocking and would like to see every IP that has hit AWS CloudFront with more than 3000 hits per day, with a total and a percentage of the total traffic that day. Eventually I got as far with my searches as appendpipe; this is also the point where I got stuck and need some guidance. The result I would like to get is as follows:

weekday     1.1.1.1   2.2.2.2   3.3.3.3   total traffic   perc. of all traffic
Monday      3000                          400000          0.75
Tuesday               3000      3000      400000          1.5
Wednesday   3000                          400000          0.75
Thursday    3000      4000      5000      400000          3
Friday                3000                400000          0.75
Saturday    3000                          400000          0.75
Sunday                3000                400000          0.75

This is where I got stuck with my query (and yes, the percentage is not yet included in the query below):

index=awscloudfront
| fields date_wday, c_ip
| convert auto(*)
| stats count by date_wday c_ip
| appendpipe [stats count as cnt by date_wday]
| where count > 3000
| xyseries date_wday,c_ip,cnt

Any insights / thoughts are very welcome.
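A sketch of one way to get that table without appendpipe: keep the per-day total alongside each IP count with eventstats, compute the flagged percentage, then pivot each IP into its own column with eval {c_ip} (field names as in the original query):

index=awscloudfront
| stats count by date_wday, c_ip
| eventstats sum(count) as total_traffic by date_wday
| where count > 3000
| eventstats sum(count) as flagged by date_wday
| eval perc_of_all = round(100 * flagged / total_traffic, 2)
| eval {c_ip} = count
| fields - c_ip, count, flagged
| stats max(*) as * by date_wday
| rename total_traffic as "total traffic", perc_of_all as "perc. of all traffic"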
Hi all, we're running Splunk ES. It installed smoothly and appears to be working, but on one of four search heads in a cluster we're getting a message stating "Splunk_SC_Scientific_python Disabled but required for SplunkEnterpriseSecuritySuite". In the Manage Apps menu I'm not able to enable it either - is this something I should ping support for?
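If the Manage Apps toggle won't cooperate, one hedged workaround is to set the app state directly on that member and restart (in a search head cluster this would normally go through the deployer so all members stay consistent); a sketch, in $SPLUNK_HOME/etc/apps/Splunk_SC_Scientific_python/local/app.conf:

[install]
state = enabled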
Hello, is it possible to search on a ServiceNow ticket number in ITSI Episode Review? We use the ServiceNow add-on to integrate. Tickets are added to the episode, but I don't seem to find a way to search on the SNOW ticket number in Episode Review. Can someone help me?
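For what it's worth, if the add-on writes the ticket number onto the episode, it may be findable in the underlying grouped-alerts index even where the Episode Review filter doesn't expose it; a hedged sketch, where the index name assumes a default ITSI install and the ticket number is illustrative:

index=itsi_grouped_alerts "INC0012345"
| stats values(itsi_group_id) as episode_id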
Hi, I use a search refresh like this:

<earliest>-15m</earliest>
<latest>now</latest>
<refresh>30s</refresh>
<refreshType>delay</refreshType>

I have 2 questions:
1) Does the refresh delay start from when the search is saved?
2) Is it possible to synchronize the refresh delay between 2 searches? I currently use the same refresh delay for both searches, but the refreshes don't occur at the same time.

Thanks
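On question 1: with refreshType set to delay, the countdown restarts each time the search finishes, so two searches with different runtimes will drift apart. A sketch using the other documented mode, interval, which refreshes on a fixed cadence and may keep two panels closer in sync (the query here is a placeholder):

<search>
  <query>index=_internal | timechart count</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <refresh>30s</refresh>
  <refreshType>interval</refreshType>
</search>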
We would like to collect logs from an Azure Log Analytics workspace and have configured the add-on Azure Log Analytics Kusto Grabber, but we are getting the below error while collecting the logs:

2022-05-04 08:54:52,440 ERROR pid=29779 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/ta_azure_log_analytics_kql_grabber/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/azure_log_analytics.py", line 88, in collect_events
    input_module.collect_events(self, ew)
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/input_module_azure_log_analytics.py", line 94, in collect_events
    raise e
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/input_module_azure_log_analytics.py", line 63, in collect_events
    rows = len(result.json()["tables"][0]["rows"])
KeyError: 'tables'
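The KeyError suggests the query API returned an error payload (for example an auth failure or a bad KQL query), so the response JSON has no "tables" key. A hedged sketch of a defensive check around line 63 of input_module_azure_log_analytics.py; helper.log_error is the Add-on Builder logging helper visible in the traceback, and the surrounding variable names are assumptions:

# sketch: inspect the response before indexing into it
payload = result.json()
if "tables" not in payload:
    # Azure Monitor returns an {"error": {...}} document on failure;
    # surface it in the TA log instead of raising KeyError
    helper.log_error("Log Analytics query failed: %s" % payload.get("error", payload))
    return
rows = len(payload["tables"][0]["rows"])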
Hello, I want to see the default configuration of phoneHomeIntervalInSecs on the UF. I came across Splunk docs/answers and, as suggested there, checked $SPLUNK_HOME/etc/system/default/deploymentclient.conf on both the UF and Splunk Enterprise, but was unable to locate it. Could you please help me with the exact location to validate phoneHomeIntervalInSecs?

Also, we are manually updating a new outputs.conf on the UF at splunk_home/etc/apps/deployment-apps/UFtoHF/local/outputs.conf. As per the Splunk docs, polling between the deployment server and the UF should erase manual updates on the UF, but strangely they are not erased (even though the new outputs.conf is not present on the DS) and the updates are retained. How exactly does this polling work between the DS and the UF? And why aren't the manual updates erased?
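A sketch of how to check the effective value and where it comes from with btool; if no configuration file sets it, Splunk falls back to the spec default (60 seconds per the deploymentclient.conf spec):

$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

To pin it explicitly on the UF, a minimal $SPLUNK_HOME/etc/system/local/deploymentclient.conf:

[deployment-client]
# assumption: 60s matches the shipped default; set whatever interval you need
phoneHomeIntervalInSecs = 60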
Hi, I am trying to subscribe to the RSS feed for Splunk product security announcements at https://www.splunk.com/en_us/product-security.html?locale=en_us, but I keep getting "This XML file does not appear to have any style information associated with it. The document tree is shown below." I have tried IE, Chrome, and Firefox.