All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi,

I'm trying to work out why I'm not able to drop data arriving at an HEC collector endpoint. I have some Docker logs I don't need to ingest. The Splunk HF is still on 7.3.8 for backwards compatibility, so I don't know if that's a factor here. I checked with btool, and the files did load correctly.

inputs.conf (sidenote: when I set the "source" value, it remained "httpevent", but when I changed the sourcetype the events changed correctly, which is odd):

[http://tpas_token]
disabled = 0
index = elm-tpas-spc
token = DD0D58D8-9F38-4A96-956C-XXXXXXXXXXXXXX
source = tpas-event
sourcetype = tpas-event

props.conf (sidenote: I also tried [tpas-event], and that did not work either):

[source::tpas-event]
TRANSFORMS-drop-handlers = drop-handlers

transforms.conf:

[drop-handlers]
REGEX = handlers.py|connection.py
DEST_KEY = queue
FORMAT = nullQueue
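For comparison, this is the usual shape of a sourcetype-keyed null-queue drop, with the literal dots escaped in the regex. The stanza and regex values are carried over from the post above; the poster reports already trying a sourcetype-keyed stanza, and whether events arriving via the HEC /event endpoint on a 7.3.8 HF actually pass through these index-time transforms is exactly the open question, so treat this as a sketch rather than a confirmed fix:

# props.conf
[tpas-event]
TRANSFORMS-drop-handlers = drop-handlers

# transforms.conf
[drop-handlers]
REGEX = handlers\.py|connection\.py
DEST_KEY = queue
FORMAT = nullQueue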
Hi, I want to compare the count of calls received in a day against the target in a lookup CSV. For example:

Input CSV:
  header: label, hr1, hr2, hr3, ... hr24
  row 1:  LA, 1, 2, 1, 5, ... 6

Search (by date and hour):
  index=foo | stats count by Label date hour
  output: LA, 0, 0, 0, ... 5

Expected output:

label   count (from lookup file)   count (from search)   Passed
LA      1                          1                     pass
OA      2                          1                     fail

Can someone help me write the search that combines the base search with the input lookup?
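A sketch of one way to line the two up, assuming the lookup file is named call_targets.csv (a placeholder), that hr1 corresponds to date_hour 0, and that "pass" means the searched count meets or exceeds the lookup target:

index=foo
| eval hour="hr".tostring(date_hour + 1)
| stats count AS search_count BY Label hour
| append
    [| inputlookup call_targets.csv
     | untable label hour target
     | rename label AS Label ]
| stats values(search_count) AS search_count values(target) AS target BY Label hour
| eval Passed=if(search_count >= target, "pass", "fail")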
We are working to enhance our potential bot-traffic blocking and would like to see every IP that has hit AWS CloudFront with more than 3000 hits per day, with a daily total plus the percentage of that day's total traffic. Eventually I got as far as including appendpipe in my search; this is also the point where I got stuck and need some guidance. The result I would like to get is as follows:

weekday     1.1.1.1   2.2.2.2   3.3.3.3   total traffic   perc. of all traffic
Monday      3000                          400000          0.75
Tuesday               3000      3000      400000          1.5
Wednesday   3000                          400000          0.75
Thursday    3000      4000      5000      400000          3
Friday                3000                400000          0.75
Saturday    3000                          400000          0.75
Sunday                3000                400000          0.75

This is where I got stuck with my query (and yes, the percentage is not even included in the query below):

index=awscloudfront
| fields date_wday, c_ip
| convert auto(*)
| stats count by date_wday c_ip
| appendpipe [stats count as cnt by date_wday]
| where count > 3000
| xyseries date_wday,c_ip,cnt

Any insights / thoughts are very welcome.
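For what it's worth, a sketch of one way to approach this: compute the per-day total with eventstats before applying the > 3000 cut, so the filter doesn't also remove the traffic you still want to divide by. Field names follow the query above; the rounding and the final pivot are assumptions:

index=awscloudfront
| stats count BY date_wday c_ip
| eventstats sum(count) AS total_traffic BY date_wday
| where count > 3000
| eval perc_of_total=round(100 * count / total_traffic, 2)
| table date_wday c_ip count total_traffic perc_of_total

The per-IP rows can then be pivoted into one row per weekday with xyseries date_wday c_ip count, though xyseries keeps only the pivoted data column, so the total and percentage columns would need to be re-attached afterwards.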
Hi all, we're running Splunk ES. It installed smoothly and appears to be working, but on one of the 4 search heads in the cluster we're getting a message stating "Splunk_SC_Scientific_python Disabled but required for SplunkEnterpriseSecuritySuite". In the Manage Apps menu I'm not able to enable it either. Is this something I should ping support for?
Hello, is it possible to search on a ServiceNow ticket number in ITSI Episode Review? We use the ServiceNow add-on for the integration. Tickets are added to the episode, but I can't find a way to search on the ServiceNow ticket number in Episode Review. Can someone help me?
Hi,

I use a search refresh like this:

<earliest>-15m</earliest>
<latest>now</latest>
<refresh>30s</refresh>
<refreshType>delay</refreshType>

I have 2 questions:
1) Does the refresh delay start from when the search is saved?
2) Is it possible to synchronize the refresh delay between 2 searches? Currently I use the same refresh delay on both searches, but the refreshes don't occur at the same time.

Thanks
We would like to collect logs from an Azure Log Analytics workspace and have configured the Azure Log Analytics Kusto Grabber add-on, but we are getting the error below while collecting the logs:

2022-05-04 08:54:52,440 ERROR pid=29779 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/ta_azure_log_analytics_kql_grabber/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/azure_log_analytics.py", line 88, in collect_events
    input_module.collect_events(self, ew)
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/input_module_azure_log_analytics.py", line 94, in collect_events
    raise e
  File "$SPLUNK HOME$/etc/apps/TA-azure-log-analytics-kql-grabber/bin/input_module_azure_log_analytics.py", line 63, in collect_events
    rows = len(result.json()["tables"][0]["rows"])
KeyError: 'tables'
Hello, I want to see the default configuration of "phoneHomeIntervalInSecs" on a UF. Based on Splunk docs/answers I checked $SPLUNK_HOME/etc/system/default/deploymentclient.conf on both the UF and Splunk Enterprise, but was unable to locate it. Could you please help me with the exact location to validate phoneHomeIntervalInSecs?

Also, we are manually updating a new outputs.conf on the UF at splunk_home/etc/apps/deployment-apps/UFtoHF/local/outputs.conf. As per the Splunk docs, polling between the deployment server and the UF should cause these manual updates to be erased, but strangely they are not being erased (even though the new outputs.conf is not present on the DS) and the updates are retained. How exactly does this polling work between the DS and the UF, and why aren't the manual updates being erased?
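For reference, defaults that are not written out in etc/system/default are documented in the spec file (on most installs, $SPLUNK_HOME/etc/system/README/deploymentclient.conf.spec, where phoneHomeIntervalInSecs is listed with a default of 60 seconds in recent versions; verify against your version). btool shows the value actually in effect on the forwarder, and an explicit override would look like the stanza below (the 60 is only an illustrative value):

$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

# deploymentclient.conf, e.g. in etc/system/local/ on the UF
[deployment-client]
phoneHomeIntervalInSecs = 60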
Hi, I am trying to subscribe to the RSS feed for Splunk Product Security announcements on https://www.splunk.com/en_us/product-security.html?locale=en_us but keep getting "This XML file does not appear to have any style information associated with it. The document tree is shown below." I have tried on IE, Chrome and Firefox.
Hello, I am trying to join two searches to see whether the same hash exists in the other index as well. Below is my search. The issue is that every time I run the search for the same time range, I see different results. Why?

Base search: I've tried to combine the results of three different hash fields into one.

(index=a sourcetype="a" (hash1=* OR hash2=* OR hash3=*))
| fields hash1, hash2, hash3
| table hash1, hash2, hash3
| eval hash=mvzip(mvzip('hash1','hash2',"|"),'hash3',"|")
| fields hash
| makemv hash delim="|"
| mvexpand hash

From here, I've joined the two indexes. Both indexes have the same field for the file hash, so I'm attempting to join on hash. The subsearch seems to work fine on its own:

| join type=left hash
    [| search (index=b sourcetype=b hashfile=*) OR (index=c sourcetype=c hashfile=*)
     | fields hashfile, filename, index
     | eval hash=hashfile]

Each search returns 2k+ results when run individually, but when I combine them I see only 1 result in the stats table, and on hitting run for the same time range I see a different file name every time. Why? Any help would be appreciated, thanks!
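For what it's worth, a sketch of a join-free way to correlate the two sides; join subsearches are subject to result and runtime limits and get truncated silently, which is one common reason joined output changes between runs. Index, sourcetype, and field names are taken from the post; note that coalesce keeps only the first non-null hash per event, unlike the mvzip/mvexpand approach above, so adjust if one event can carry several distinct hashes:

(index=a sourcetype="a" (hash1=* OR hash2=* OR hash3=*)) OR (index=b sourcetype=b hashfile=*) OR (index=c sourcetype=c hashfile=*)
| eval hash=coalesce(hashfile, hash1, hash2, hash3)
| stats values(index) AS indexes values(filename) AS filename BY hash
| where mvcount(indexes) > 1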
Hi All,

I've got a generic syslog app which pulls in EVERYTHING in the syslog directory with sourcetype=syslog:unconfigured.

inputs.conf:

[monitor:///var/log/syslog-ng/*/messages]
index = syslog
sourcetype = syslog:unconfigured
host_segment = 4

This is done so we can catch any new syslog devices that were not configured to go to the correct sourcetype. We have a props.conf that routes data to the right index/sourcetype depending on the hostname.

props.conf:

# InfoBlox
[source::/var/log/syslog-ng/(10.164.55.55|10.9.55.56|prodinfoblox1|prodinfoblox2)/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ=UTC

transforms.conf:

[route_to_index_infoblox]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = infoblox

[route_to_sourcetype_infoblox:file]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::infoblox:file

Now the above props.conf, with a regex matching on the host within the source, doesn't work. However, naming each source individually does, as does a basic wildcard:

# InfoBlox
[source::/var/log/syslog-ng/10.164.55.55/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ=UTC

[source::/var/log/syslog-ng/10.9.55.56/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ=UTC

[source::/var/log/syslog-ng/prodinfoblox*/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ=UTC

I've tried escaping the slashes, but that doesn't work either:

# This also doesn't work
[source::\/var\/log\/syslog-ng\/(10.164.55.55|10.9.55.56|prodinfoblox1|prodinfoblox2)\/messages]

Does anyone have any ideas how to get the regex to work in the source:: stanza? Some of these devices have up to 30 hosts, and having it all as a one-liner would make things much cleaner.

I'm also aware I can do this in transforms.conf with something like the following (see the corrected sketch below), but then I'd need the source match in two spots, which is prone to user error:

[route_to_index_infoblox]
SOURCE_KEY = Metadata:Source
REGEX = \/var\log\syslog\/(192.168.1.1|192.168.1.2|etc.)
DEST_KEY = _MetaData:Index
FORMAT = infoblox

[route_to_sourcetype_infoblox:file]
SOURCE_KEY = Metadata:Source
REGEX = \/var\log\syslog\/(192.168.1.1|192.168.1.2|etc.)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::infoblox:file

There has to be something just slightly off with my regex.
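In case it is useful as a comparison point, this is how the SOURCE_KEY variant usually looks with the regex fully escaped (literal dots escaped, every path segment present); the IP patterns are placeholders carried over from the example above, and this does not answer whether alternation works inside a props [source::...] stanza, which is the actual question here:

# transforms.conf -- match on the source path instead of the props stanza name
[route_to_index_infoblox]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/syslog-ng/(10\.164\.55\.55|10\.9\.55\.56|prodinfoblox\d+)/messages
DEST_KEY = _MetaData:Index
FORMAT = infoblox

[route_to_sourcetype_infoblox:file]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/syslog-ng/(10\.164\.55\.55|10\.9\.55\.56|prodinfoblox\d+)/messages
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::infoblox:file

# props.conf -- hook the transforms off the catch-all sourcetype
[syslog:unconfigured]
TRANSFORMS-reroute_infoblox = route_to_index_infoblox, route_to_sourcetype_infoblox:file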
There's no time in my log. I want to extract the date from the source file name at ingest time.

Source name: /var/log/data_20220507.log

How can I add a random time after that date, so that I get something like _time = 2022/05/07 11:23:22.2? I would appreciate it if you could tell me the settings for props.conf and transforms.conf.
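A sketch of one possible props/transforms pair, assuming INGEST_EVAL is available on the parsing tier (Splunk 7.2 or later), that a uniformly random time of day is acceptable, and that the sourcetype name below is a placeholder:

# props.conf
# "dated_log" is a placeholder sourcetype for files like /var/log/data_YYYYMMDD.log
[dated_log]
TRANSFORMS-set_time_from_source = set_time_from_source

# transforms.conf
[set_time_from_source]
# pull YYYYMMDD out of the source path, parse it, then add a random offset within the day
# (verify that random() is supported by INGEST_EVAL on your version)
INGEST_EVAL = _time=strptime(replace(source, ".*data_(\d{8})\.log", "\1"), "%Y%m%d") + (random() % 86400)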
Given JSON with nested hashes:

| makeresults
| eval _raw="{\"yes\":true,\"no\":false,\"a\":{\"x\":0,\"y\":0,\"z\":0},\"c\":{\"x\":1,\"y\":2,\"z\":3},\"d\":{\"x\":1,\"y\":4,\"z\":9}}"
| spath

"a", "c", and "d" are nested hashes. There are other fields, "yes" and "no", that are not hashes. What I am trying to do is filter out the non-hashes and then split into multiple rows:

Name  x  y  z
a     0  0  0
c     1  2  3
d     1  4  9

The tricky part is that the top-level field names ("yes", "no", "a", "c", "d") are not constant. However, the sub-fields "x", "y", "z" are. Thoughts?
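One sketch of an approach, assuming every hash entry shows up after spath as flattened fields of the form <name>.x / <name>.y / <name>.z, so anything without a dot in its field name can be discarded:

| makeresults
| eval _raw="{\"yes\":true,\"no\":false,\"a\":{\"x\":0,\"y\":0,\"z\":0},\"c\":{\"x\":1,\"y\":2,\"z\":3},\"d\":{\"x\":1,\"y\":4,\"z\":9}}"
| spath
| fields - _time _raw
| transpose 0
| rename column AS field, "row 1" AS value
| eval Name=mvindex(split(field, "."), 0), axis=mvindex(split(field, "."), 1)
| where isnotnull(axis)
| xyseries Name axis value

transpose flips the single result into one row per extracted field, the eval/where pair keeps only the dotted (nested) fields regardless of what the top-level names are, and xyseries pivots them back into one row per hash with x, y, z columns.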
Any idea about this error? I can't find any information on a message like this. What is the main issue behind this error?

2022-04-29 18:11:03,533+0900 process: 10780 thread: MainThread ERROR [itsi.migration] [filesave_migration_interface:96] [migration_save_single_object_to_kvstore] Exception adding image 5e6eda58d3bc8 f4ad0af: HTTP 409 Conflict -- A document with the same key and user already exists

host = ITSIOS
source = /opt/splunk/splunk/var/log/splunk/itsi_migration_queue.log
sourcetype = itsi_internal_log
Unable to perform the following search provided by Splunk to check the forwarder certificate package version:

index=_internal source=*metrics.log group=tcpout_connections name=splunkcloud*
| stats latest(_time) AS _time latest(name) AS name by host
| rex field=name "(?<output_group>splunkcloud_202[23456789]\d+)\_"
| eval fwd_config=if(isnotnull(output_group),"new","legacy")
| stats count by _time host output_group fwd_config
| reltime
| fields _time reltime host output_group fwd_config
| sort 0 fwd_config
Hello. I'm seeing a lot of articles in web searches about turning on HTTPS for HEC, but approximately zilch on turning it off. I did find this:

"Whether the HTTP Event Collector server protocol is HTTP or HTTPS. 1 indicates HTTPS is enabled; 0 indicates HTTP. The default value is 1. HTTP Event Collector shares SSL settings with the Splunk Enterprise instance and can't have enableSSL settings that differ from the settings on the Splunk Enterprise instance."

We need HEC to run without TLS, and we can live with the web UI not having TLS either if that helps with HEC. But if I put:

[http]
disabled = 0
enableSSL = 0

...into /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf and restart Splunk, HEC continues to demand HTTPS, and /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf is automatically rewritten to:

[http]
disabled = 0
enableSSL = 1

What do I need to do to make HEC use HTTP, not HTTPS? (We realize that HTTPS is more secure. For our production Splunk we'll use HTTPS, but for our team's development environments it just makes more sense to use HTTP. I've not discussed why, but I suspect HTTPS is proxied somehow.) Thanks!
Greetings. I've been trying to build a correlation search that sets a default disposition value when it runs, but so far it doesn't work as advertised. I've tried this one of two ways:

1) Manually setting it to a valid value, e.g.

| eval disposition="disposition:7"
| eval disposition_label="My cool disposition label"

2) Editing the correlation search in the advanced search editor and setting the parameter myself.

I've tried this with and without quotes, but I still always get disposition:6 set by default. I have validated that this disposition and label exist and are enabled in the Incident Review settings, and that they can be set manually. Can someone shine some light on what I'm doing wrong? Many thanks!
Hello, I was able to create dashboards back in 8.1.9. After upgrading to 8.2.6, I get a weird error anytime I try to create a new dashboard or clone one (whether classic or studio). Haven't been able to find any information on it. Any help is greatly appreciated! V/r, mello920
The MS 365 TA has been installed on the HF and all inputs have been configured, but the client is not getting the Teams information. Only O365 is supported; the Microsoft Teams TA is not supported by Splunk. What can I do to pull the Teams data?
We have a 3rd party pulling AWS logs as far back as AWS holds onto them. However, we want to be able to go back further, so we are looking at our AWS index in Splunk. We want to extract a full export of _raw for the entire index. We have access to the management port of our search head, which points to an indexer cluster holding all of the aws index data; note that the index is SmartStore enabled. What's the best way to export this programmatically? It would not scale to run the search manually in the GUI and export it. We've looked at a oneshot search via the JavaScript SDK, but it seems to be timing out even though we have baked in pagination. Thanks in advance.
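A sketch of the kind of approach that usually sidesteps the pagination and timeout issues: the search/jobs/export REST endpoint on the management port streams results as they are produced rather than building a job to page through. Host, credentials, index name, and output file below are placeholders:

# stream _raw for the whole index via the export endpoint
curl -k -u admin:changeme \
    https://search-head.example.com:8089/services/search/jobs/export \
    --data-urlencode search='search index=aws | fields _raw' \
    -d earliest_time=0 \
    -d latest_time=now \
    -d output_mode=csv > aws_raw_export.csv

For a very large index it may still be worth chunking the export by time range (one call per day or week) so a dropped connection does not force a complete restart.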