All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a search that finds matching events and counts them: index=data-kia-cer-app-n sourcetype=cer | regex "BLOCK,\d*,\d*,1" | stats count as "default". I would like a search that returns this match count as a percentage of all events. Thanks
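A sketch of one way to do this, assuming the goal is the matching count as a proportion of all events in the same index and sourcetype (untested; adapt as needed):

```
index=data-kia-cer-app-n sourcetype=cer
| eval matched=if(match(_raw, "BLOCK,\d*,\d*,1"), 1, 0)
| stats count as total sum(matched) as default
| eval percent=round(default / total * 100, 2)
```

Replacing the filtering `regex` command with an `eval`/`match` flag keeps the non-matching events in the pipeline, so both the numerator and the denominator are available to `stats`.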
This looks like a bug in the new version 20.6. On every flow map in the dashboard or in a snapshot, I keep seeing "ui:ms_transactionExitPointType_null" appearing on the link between the tiers. Does anybody know why this is happening?
We created a Splunk Add-on for one of our customers. We are able to call the REST API and index the data in our environment, but when our client installed the Add-on they get the error message below: "Error, connection aborted, 104, Connection reset by peer". Below is a screenshot of the log. We deployed the same add-on on AWS and GCP and it works fine everywhere except the client environment. We ran curl against the URL from the client server and it works fine, and the same URL is used in Splunk by our client in other apps. Can someone kindly help us here? We have exhausted all our debugging options.
Hi, I have a table which displays the duration for each category. I would like to color-code fields based on their duration. In the above screenshot, how do I write an expression to color-code the values that exceed 1 minute? I would prefer doing the color coding from the dashboard source rather than creating JS, because we don't have permission on the Splunk servers to update config or deploy JS scripts. Could someone please help me out here?
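One possible approach entirely from the dashboard source, sketched below assuming Simple XML, a column named duration, and values in seconds (both are assumptions; adjust to the real table). The format element goes inside the table element of the panel:

```xml
<format type="color" field="duration">
  <colorPalette type="expression">if(value > 60, "#DC4E41", "#53A051")</colorPalette>
</format>
```

An expression-based color palette like this needs no custom JS, so it works without filesystem access to the Splunk servers.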
Hi... I have installed the trial version of Splunk. It is very strange: I see a folder named Splunk in Program Files on C:, and I am able to create apps, run searches, build dashboards, and everything. But when I try to open the Splunk directory on C:, the folder just doesn't open. Could there be any specific reason for this? Kindly help me with this. Thanks!!
Hi, we have set up our Splunk cluster, version 7.3.3, in AWS, and we are seeing that on logging into the Splunk UI some of the web elements do not load, such as the user ID and (sometimes) the 'Search and Reporting' app. Also, when we perform any search, it displays "Cannot get username when all users are selected". Our search heads are behind an AWS NLB. Our policies do not support ALB, so we have to settle for NLB, which at this time does not support sticky sessions or persistence. I have gone through some of the Splunk Answers on this issue and learned that it is probably caused by missing sticky sessions. Is this really a missing-sticky-sessions issue? If yes, is there any alternate way to resolve it apart from moving from NLB to ALB?
Hello everyone, I need to find the common fields from two different logs, and after finding them I need to extract the result as a table. I need help with the first part of my problem. I have two different log files with the names AAA and BBB. How can I compare them and find the common fields? Thank you.
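A sketch of one way to list field names that appear in both logs, assuming the two files are searchable as source="AAA" and source="BBB" (adapt the index and source names to the real environment):

```
source="AAA" | fieldsummary | fields field | eval in_AAA=1
| append [search source="BBB" | fieldsummary | fields field | eval in_BBB=1]
| stats max(in_AAA) as in_AAA max(in_BBB) as in_BBB by field
| where in_AAA=1 AND in_BBB=1
| table field
```

`fieldsummary` emits one row per field name, so intersecting the two field lists with `stats ... by field` and a `where` filter yields only the fields present in both sources.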
Hi, I recently started using Splunk at my company. Being new to it, I don't know much about the Splunk back end. I want to know how we can feed search results into a Python script via an alert action's "run a script" option. Pointers to useful documentation would also be helpful.
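For the legacy "run a script" alert action, Splunk invokes the script with eight arguments, the eighth being the path to a gzipped CSV file containing the search results. A minimal Python sketch (the printed field names are illustrative only):

```python
import csv
import gzip
import sys


def read_results(path):
    """Read the gzipped CSV of search results that Splunk hands to the script."""
    with gzip.open(path, "rt") as f:
        return list(csv.DictReader(f))


if __name__ == "__main__" and len(sys.argv) > 8:
    # argv[8] is the path to the gzipped CSV of results for the triggering search.
    for row in read_results(sys.argv[8]):
        # Each row is a dict keyed by the result's field names.
        print(row.get("host"), row.get("_raw"))
```

Note that scripted alert actions are deprecated in favor of custom alert actions, but the results file is still delivered the same way for existing scripts.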
Hi, I tried installing Splunk Enterprise 8.0.4 on Windows 10 (64-bit). After a successful installation, when I try to open it, it shows the error below. Can you please help me resolve the issue? Thanks, Akshay
Question: I'm trying to compare SQL results between two databases using stats and xyseries. Everything works as expected when we have a unique table name in the events, but when the same table name appears multiple times (3 times) in the events, xyseries prints the table name only once instead of 3 times. So, how do we print the same table name 3 times in the output?

Below is the query I'm using:

(index="ABC_index" source=XX) OR (index="XYZ_index" source="YY")
| fillnull value=NULL
| eval Env = if(source="yy", "P", "C")
| rex field=_raw "COLUMN_LIST=\"\s+\((?P<Column_List>\w+)"
| stats count by TABLE_NAME CONSTRAINT_TYPE Column_List Env R_CONSTRAINT_NAME
| xyseries Column_List, Env, TABLE_NAME, Column_List, CONSTRAINT_TYPE R_CONSTRAINT_NAME grouped=TRUE sep="_"
| table TABLE_NAME_C TABLE_NAME_P Column_List_C Column_List_P CONSTRAINT_TYPE_C CONSTRAINT_TYPE_P R_CONSTRAINT_NAME_C R_CONSTRAINT_NAME_P
| fillnull value=NULL

Results attached: in the output, the table name is printed only two times and the third occurrence is missing. So, how do we display all 3?

Thanks for the help in advance.
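xyseries keeps one row per unique value of its row field, so repeated combinations collapse. One possible workaround (a sketch, untested against this data) is to make each repetition unique with streamstats before the xyseries:

```
| streamstats count as occurrence by Column_List Env
| eval Column_List = Column_List . "#" . occurrence
| xyseries Column_List Env TABLE_NAME CONSTRAINT_TYPE R_CONSTRAINT_NAME
```

The appended "#1", "#2", "#3" suffix keeps the three otherwise-identical rows distinct; it can be stripped afterwards with another eval or rex if a clean display value is needed.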
I am planning a migration from Windows to Linux. Following the Splunk guide, I did these steps: 1. Removed the instance from the distributed search peers. 2. Copied the whole SPLUNK_HOME folder to the new Linux server. 3. Changed the paths in the configuration files to fit the Linux format. Now I want to install Splunk via RPM package to finish the migration, but I am not sure which command will keep my configuration, alerts, reports, and dashboards from the old Splunk. Does "rpm -i --replacepkgs --prefix=/splunkdirectory/ splunk_package_name.rpm" cover my requirement? If I did anything wrong, please help. Thanks in advance.
Hi, I am working on a user dashboard and need to show batch name, SLA, and actual time. I am using Trellis, but the label is too small compared to the value (actual time). Please advise.
I downloaded the Palo Alto Networks Add-on to pull down feeds from the MineMeld service. I have configured the Palo Alto TA with the relevant credentials and output-node feed URLs per our configuration. I have configured proxy settings and can confirm that my instance is attempting a connection. However, when the Palo Alto TA runs to fetch the feeds, I see in "Splunk_TA_paloalto_minemeld_feed.log" that connectionpool.py is starting new HTTPS requests to the loopback adapter (i.e. 127.0.0.1). I have other TAs installed that have their own proxy configurations and those work as expected. Any thoughts on what might be happening here?

File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\modinput_wrapper\base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\minemeld_feed.py", line 72, in collect_events
    input_module.collect_events(self, ew)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\input_module_minemeld_feed.py", line 78, in collect_events
    kvs_entries = pull_from_kvstore(helper, name, start, stats)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\input_module_minemeld_feed.py", line 45, in inner
    ret_val = func(*args)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\input_module_minemeld_feed.py", line 120, in pull_from_kvstore
    parameters={'query': json.dumps({'splunk_source': name})})
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\modinput_wrapper\base_modinput.py", line 476, in send_http_request
    proxy_uri=self._get_proxy_uri() if use_proxy else None)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\splunk_aoblib\rest_helper.py", line 43, in send_http_request
    return self.http_session.request(method, url, **requests_args)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\requests\sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\requests\sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\requests\adapters.py", line 390, in send
    conn = self.get_connection(request.url, proxies)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\requests\adapters.py", line 290, in get_connection
    proxy_manager = self.proxy_manager_for(proxy)
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\requests\adapters.py", line 184, in proxy_manager_for
    **proxy_kwargs
File "MYINSTALL\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\requests\adapters.py", line 43, in SOCKSProxyManager
    raise InvalidSchema("Missing dependencies for SOCKS support.")
InvalidSchema: Missing dependencies for SOCKS support.

DateTimeStamp,039 INFO pid=183272 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
DateTimeStamp,352 INFO pid=154016 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
DateTimeStamp,655 INFO pid=16340 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
I'm working to upload some data sets from Kaggle in order to learn how to use Splunk, but I am unable to get the datasets fully added and am receiving this error message: Upload failed with WARN : supplied index 'Airbnb2' missing. I have downloaded the .csv files from Kaggle, and everything in the Add Data process seems to work fine up until the review page, but when I try to submit I receive the above error. I am fairly new to learning computers, and any assistance anyone can offer would be greatly appreciated.
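This error usually means the target index does not exist yet; note also that Splunk index names must be lowercase, so 'Airbnb2' would need to become something like 'airbnb2'. Creating the index first (Settings > Indexes > New Index in the UI, or an indexes.conf stanza like the sketch below, where the paths are conventional assumptions) should let the upload complete:

```ini
[airbnb2]
homePath   = $SPLUNK_DB/airbnb2/db
coldPath   = $SPLUNK_DB/airbnb2/colddb
thawedPath = $SPLUNK_DB/airbnb2/thaweddb
```

After creating the index, rerun the Add Data wizard and pick 'airbnb2' from the index dropdown on the review page.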
Hi, a recent agent install across our infrastructure has created a flood of blocked messages in the proxy logs, which is blowing out our license. Until there is a proper fix I need to stop ingesting events related to one URL; I think I just need help with the REGEX part. Logs are dropped to a syslog server running a heavy forwarder, and we run a monitor on that log file.

inputs.conf:

[monitor:///remotesyslogs/mgmt-austaiaecho00*/*.log]
disabled = false
index = star_proxy
sourcetype = cisco:wsa:squid

props.conf:

[cisco:wsa:squid]
TRANSFORMS-screen = eliminate-screenconnect

transforms.conf:

[eliminate-screenconnect]
REGEX = ?.=screenconnect
DEST_KEYi = queue
FORMAT = nullQueue

Example event from the log file:

2020-06-12T04:04:55+10:00 mgmt-austaiaecho005.casino.internal accesslogs_splunk: Info: 1591898695.320 0 10.10.216.100 TCP_DENIED/407 0 CONNECT tunnel://screenconnect.techmedia.com.au:8080/ - NONE/- - OTHER-NONE-DefaultGroup-NONE-NONE-NONE-NONE <-,-,-,"-",-,-,-,-,"-",-,-,-,"-",-,-,"-","-",-,-,-,-,"-","-","-","-","-","-",0.00,0,-,"-","-",-,"-",-,-,"-","-",-,-,"-"> - - "12/Jun/2020:04:04:55 +1000" -

Basically, for the next 4 weeks I need to drop all events containing screenconnect.techmedia.com.au.

Thanks, Mark
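As a sketch of the usual shape of such a stanza: the setting is DEST_KEY (DEST_KEYi looks like a typo), and the regex should match the raw event text with literal dots escaped:

```ini
[eliminate-screenconnect]
REGEX = screenconnect\.techmedia\.com\.au
DEST_KEY = queue
FORMAT = nullQueue
```

Since parsing happens on the heavy forwarder here, the props.conf and transforms.conf changes belong on that forwarder, followed by a restart.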
I don't know if TOU (terms of use) is the right term, but I took some screenshots of the Controller interface when we were using the AppDynamics tool to monitor our machines. They show the machines' names, the various metrics related to the infrastructure, and some of the information reported by the various agents. If I want to include these screenshots in a publication or web page, do I need to get written permission from AppDynamics, or is it covered under fair use if I credit AppDynamics on each screenshot? I think the total number of screenshots I want to show my readers is about a dozen or so. Thank you very much for your time.
Hi, I have a file with many records, but it is being indexed as a single event. Example:

20859000133104142002020052140014M101000042394286020200521012000136024001R0001400000000000039500111342817111342817211342818311342818300000000011342819911342820800000000011342837310500
2085900013320414208085904142200000000046 20200521012000136024001R0050200000000000000056211344550011344550211344551211344551200000000011344552511344553300000000011344569410500
2085900013330414206085904142200000000047 20200521012000136024001R0050200000000000000056311351275511351275511351276711351276700000000011351278411351279500000000011351293910500

My props.conf is configured as:

[Prueba]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 17
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y%m%d0%H%M%S%3Q
TIME_PREFIX = ^.{49}
category = Custom
pulldown_type = true
BREAK_ONLY_BEFORE_DATE =
disabled = false

I changed the LINE_BREAKER regex to ^.+\n but it does not work.

Regards,
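One thing worth noting: LINE_BREAKER must contain a capturing group marking the text to discard between events, so ^.+\n (which has no group) will not be honored. If the records are not actually separated by newlines in the file (e.g. only by spaces), one speculative sketch is to break on whitespace followed by the record prefix seen in the sample (the 20859 prefix is an assumption drawn purely from the example data):

```ini
[Prueba]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\s+)(?=20859)
TRUNCATE = 0
TIME_PREFIX = ^.{49}
TIME_FORMAT = %Y%m%d0%H%M%S%3Q
MAX_TIMESTAMP_LOOKAHEAD = 17
```

Checking the raw file with a hex viewer to see what byte actually separates records would confirm which breaker pattern is appropriate.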
I've tried setting the LDAP bind password in authentication.conf and restarting Splunk, creating my hashes with splunk hash-passwd 'pw', and creating hashes with other Linux commands. Nothing I do lets me set the password with the CLI or in the .conf (I'm trying not to use the Web UI; essentially I'm trying to automate the step). When I set the password in the Web UI there are no issues and I'm connected to my domain. Any ideas?
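For what it's worth, the approach that normally works for automation is to write bindDNpassword in clear text in authentication.conf and restart: splunkd hashes it in place on startup. A hash produced elsewhere generally will not work, because the encrypted value is tied to that instance's $SPLUNK_HOME/etc/auth/splunk.secret. A sketch, with the stanza name and DN values as placeholders:

```ini
[MyLDAPStrategy]
host = ldap.example.com
port = 389
bindDN = cn=splunk-svc,ou=service,dc=example,dc=com
bindDNpassword = clear-text-password-here
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = samaccountname
```

After the restart, the bindDNpassword line in the file should appear rewritten as an encrypted $1$... value.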
I'm trying to capture all the activity from one computer in real time and receive the data on my other computer, which already has Splunk Enterprise installed. Both computers are on the same network. What is the best way to make this happen?
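The usual pattern is to install a Splunk Universal Forwarder on the monitored computer and point it at the Splunk Enterprise machine, which has receiving enabled (Settings > Forwarding and receiving > Configure receiving, conventionally on port 9997). A sketch of the forwarder-side configuration, where the IP address and the monitored input are assumptions to replace with real values:

```ini
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexer

[tcpout:primary_indexer]
server = 192.168.1.10:9997

# inputs.conf on the universal forwarder (example: Windows Security event log)
[WinEventLog://Security]
disabled = 0
```

After restarting the forwarder, the events should be searchable on the Enterprise instance; "all behaviour" in practice means choosing which inputs to monitor (event logs, files, perfmon, etc.).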
I have a custom add-on in the works, and when I test with a username the script works great. However, I am stuck trying to get the custom alert action to read the alert output so that it picks up only the username.
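For a modern custom alert action, Splunk sends a JSON payload on stdin when it invokes the script with --execute; the first triggering result row is under the "result" key. A minimal Python sketch (the field name "user" is an assumption; substitute the real field from the alert's search):

```python
import json
import sys


def extract_username(payload, field="user"):
    """Return the username field from the alert payload's first result row.

    The field name "user" is an assumption; pass the real field name.
    """
    return payload.get("result", {}).get(field)


if __name__ == "__main__" and "--execute" in sys.argv:
    # Splunk delivers the alert payload as a single JSON document on stdin.
    payload = json.loads(sys.stdin.read())
    username = extract_username(payload)
    # Messages on stderr are captured in splunkd.log for debugging.
    sys.stderr.write("INFO username=%s\n" % username)
```

If every triggering row is needed (not just the first), the payload also carries a "results_file" key pointing at a gzipped CSV of all results that can be iterated the same way.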