All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I read in another thread that events are not really "deleted" as such, but are simply marked in a way that excludes them from searches; however, that may apply only to Splunk itself. So if you need to restore a deleted event and its associated containers, actions, logs, etc., is there a way to do that in SOAR/Phantom?

We have episodes creating ServiceNow tickets through a third-party interface. The episode status changes when the status is changed in ServiceNow, but when an episode is closed in Splunk, a new episode is not created; instead, new events are added to the same closed episode. Can anyone let me know what the issue could be? Regards, Manjunath R

Dear professionals, I run the search string below:

    index="hcg_oapi_prod" source="/var/log/app/rest.log"

and my result is illustrated in the figure. I have about 13 source types, e.g. "oapi:atl-customer:rest". Please help me get the size of each source type; I can only get the size of the whole index. Thank you.

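A minimal sketch of one common way to get per-sourcetype volume from the events themselves, using the length of _raw as an approximation of size (compression and index overhead are ignored):

    index="hcg_oapi_prod" source="/var/log/app/rest.log"
    | eval event_bytes=len(_raw)
    | stats sum(event_bytes) as total_bytes count by sourcetype
    | eval total_mb=round(total_bytes/1024/1024, 2)
    | sort - total_mb
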
Hi all, I am facing strange behavior for which I can't find anything in the docs. I have a source that generates comma-separated CSV files. They are indexed into a dedicated index and sourcetype. A look-alike example:

    id,service_id,product_id,shop_id,user_id,blah_blah,whatever,name,date,client_id
    1,34456789,12234,23,4,f,45678,ivan,2022-01-13 07:04:49,1
    2,34452789,12134,25,4,f,45678,ivan,2022-01-13 07:14:49,1
    3,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:14:49,1
    4,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:15:49,1
    5,34451789,12133,23,4,f,45678,ivan,2022-01-13 07:15:49,1
    6,34456789,12234,23,4,f,45678,ivan,2022-01-13 07:04:49,1
    7,34452789,12134,25,4,f,45678,ivan,2022-01-13 07:14:49,1
    8,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:14:49,1
    9,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:15:49,1

Challenge no. 1 is that the script generating the CSV can edit already-existing lines. Challenge no. 2 is that this does not always produce the same behavior. If a value is changed without altering its length (from 1 to 2, or from ivan to ivag; the same number of characters is the important part), the change is nowhere to be found in the indexed data. However, if the change alters the number of characters (say, ivan becomes johnathan), the whole file is re-indexed with the new value, causing lots of duplication. I am sure this must be documented somewhere, but I cannot find it and thus cannot really understand it. Does anyone know what is going on? (I managed to find something in the community about Splunk checking the first 256 characters of a file to decide, but I have tested changing values both before and after the 256-character threshold.) Kind regards! rd

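For what it's worth, this is consistent with how the file monitor tracks files: it fingerprints the first bytes of the file (256 by default) with a CRC and remembers a seek pointer, reading forward from that offset. An in-place edit that does not change the file length leaves both unchanged, so the edit is never read; an edit that changes the length shifts the bytes Splunk checks at its saved offset, so the check fails and the file is treated as new and re-read from the beginning, which matches the duplication you see. The fingerprinting knobs live in inputs.conf; a sketch (the monitor path is a placeholder):

    # inputs.conf (a sketch; the path is a placeholder)
    [monitor:///path/to/your/csvdir]
    # widen the head-of-file fingerprint beyond the 256-byte default
    initCrcLength = 1024
    # mix the full path into the fingerprint so files with identical headers are not confused
    crcSalt = <SOURCE>

Note these settings change how files are recognized, not how mid-file edits are detected; the monitor is designed for append-only logs, not files edited in place.
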
Hi, we have taken a Splunk Cloud trial and installed the universal forwarder on Windows, but it is not communicating with the cloud server. We are getting errors like this:

    02-21-2022 12:42:48.381 +0530 INFO DC:DeploymentClient [691880 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
    02-21-2022 12:42:59.014 +0530 INFO ProxyConfig [595472 HttpClientPollingThread_422CEEC3-132D-4E49-B8B8-20DC5A33230D] - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.

We have enabled all the ports required for communication, but it still does not connect to the cloud server. Please help us resolve this issue. Thank you.

I am trying to export data from Splunk using the Splunk CLI, as given here:

    splunk search "index=_internal earliest=09/14/2015:23:59:00 latest=09/16/2015:01:00:00" -output rawdata -maxout 200000 > c:/test123.dmp

I just want to print event fields separated by a tab character instead of the literal string '\t'. I tried something like

    ... | eval _raw=field1 + "<tabchar_copied_from_notepad>" + field2 + ...

which results in an improper character that behaves differently on different operating systems, and

    ... | eval _raw=field1 + "\t" + field2 + ...

which leaves "\t" in _raw as two regular characters, not a single tab. Is there a cleaner way to achieve this?

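One minimal sketch that sidesteps pasting a raw tab character into the query: build the tab with the urldecode eval function, since %09 is the URL encoding of TAB (field1 and field2 stand in for your real field names):

    ... | eval tab=urldecode("%09")
    | eval _raw=field1 . tab . field2
    | fields _raw
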
Hi, I have a requirement like the one below.

    TimeStamp         LoginUsers  Avg SLA  Min SLA  Max SLA
    20-02-2022 11:30  35          1        1        3.4
    20-02-2022 11:45  40          1.3      1        5.3
    20-02-2022 12:00  32          2.4      1        7.6
    20-02-2022 12:15  53          1.2      1        4.2
    20-02-2022 12:30  44          2.3      1        3.5

This dashboard panel shows data on a 15-minute span. On click of Max SLA we are supposed to open the particular Splunk log event in a new tab. May I know how we can achieve this?

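One common pattern, sketched below for a classic Simple XML dashboard: have the panel's search compute epoch boundaries per row (for example | eval earliest_t=_time, latest_t=_time+900 for a 15-minute bucket), then use a link drilldown that opens a search in a new tab only when the Max SLA cell is clicked. The index name and the earliest_t/latest_t fields here are placeholders of mine, not from the original panel:

    <drilldown>
      <condition field="Max SLA">
        <link target="_blank">search?q=search%20index%3Dyour_index&amp;earliest=$row.earliest_t$&amp;latest=$row.latest_t$</link>
      </condition>
    </drilldown>
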
Hello. Up to Splunk version 7 it was Python 2, so I was using the app below to search Elasticsearch: https://github.com/brunotm/elasticsplunk

When I upgraded Splunk to version 8 and it started using Python 3, the app could no longer run. So I am asking whether there is a way to keep using this app.
* How can it be used with Python 3? (If you have converted it successfully and have it in use, can you share it?)
* Is there an app that can replace it? (I am not going to use the Elasticsearch Data Integrator - Modular Input app.)
* If there is an app you are using with Splunk 8 (Python 3), please recommend it.

Hello, I have an SPL search which detects look-alike short and long domains. My goal is to implement a CSV lookup that allows adding exceptions to these results. For example, if "compeni" is a false positive and it is close to "company", it will keep flooding the results. So I would like a CSV lookup that can be edited over time to reduce the false positives; the column in the CSV could be something like reel_domain, containing the list of false-positive domains. Now my question is: how can I implement this idea in the search below?

    index=* src_user!="" src_user!="*company" AND src_user!="*comp.com" AND src_user!="*compan.com" AND src_user!="*compe.com" AND src_user!="*compani.com"
    | dedup src_user
    | rex field=src_user "(?:@)(?<detected_domain>[^>]*)"
    | eval domain_list=split(detected_domain, ".")
    | eval domain_list=mvfilter(len(domain_list)>3)
    | eval domain_list=mvfilter(domain_list!="filter_example")
    | eval domain_list=if(mvcount(domain_list)>3, mvindex(domain_list, -3), domain_list)
    | rename domain_list as word2
    | makemv word1
    | eval word1 = mvappend(word1, "company")
    | lookup local=t ut_levenshtein_lookup word1 word2
    | eval ut_levenshtein=mvfilter(ut_levenshtein!=0)
    | eval ut_levenshtein=min(ut_levenshtein)
    | rename ut_levenshtein as ct_long
    | rename word1 as lg_domain
    | eval word1=mvappend(word1, "comp"), word1=mvappend(word1, "compan"), word1=mvappend(word1, "compe"), word1=mvappend(word1, "compani")
    | lookup local=t ut_levenshtein_lookup word1 word2
    | eval ut_levenshtein=mvfilter(ut_levenshtein!=0)
    | eval ut_levenshtein=min(ut_levenshtein)
    | rename ut_levenshtein as ct_short
    | rename word1 as st_domain
    | rename word2 as input_dom
    | search ct_short<=1 OR ct_long<=3
    | table src_user input_dom st_domain ct_short lg_domain ct_long

Any help would be appreciated. Thanks in advance.

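A minimal sketch of one way to bolt the exception list onto the end of the search: look up the detected domain in the CSV and keep only rows with no match. It assumes a lookup definition named domain_exceptions with a single column reel_domain (both names are placeholders):

    | lookup domain_exceptions reel_domain AS input_dom OUTPUT reel_domain AS matched_exception
    | where isnull(matched_exception)
    | fields - matched_exception

Rows whose input_dom appears in the CSV get matched_exception filled in and are filtered out; editing the CSV over time then needs no change to the search itself.
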
Hi,

Snapshot: I had some alerts with script actions. The alerts simply fire when value A exceeds value B by 10 or more (e.g. value A=411 and value B=400); the alert then runs the shell script it points to. The scripts were simple shell commands which, when run, fed alerts into the event pipeline of another tool. It worked perfectly as designed until we upgraded Splunk from 7.x to 8.x. After the upgrade, the script actions option is still visible but no longer works, since the functionality has been deprecated.

Alternative: the suggested replacement is Splunk custom alert actions. I have developed a TA using the Splunk Add-on Builder. It lets users create the same alerts, with an Action section where they select the script to attach to the alert, so that whenever the alert is triggered the script runs and the event is injected into the other tool's event pipeline as before. The TA has a Python script which simply calls the shell script of choice when the condition for that script is met.

Issues:
1. When the same shell command from the script is run manually from the command line of one of the search heads, it works perfectly. (This is just to verify the command itself works.)
2. When the same command is run via the Add-on Builder validation section, it gives a syntax error on the "-" in the command. If I escape it as "\-", the syntax error goes away and the test runs with a success message; however, the event that is supposed to be injected into the other tool's event pipeline never arrives.
3. When the same command is run WITHOUT the "-option" flags, it works. (Again, just to confirm the most basic functionality.)

So what can I do to make this Python script run the commands we want, and restore the functionality we had with scripted alert actions? I suspect an issue with Python. FYI, the shell scripts and the Python script that triggers them are packaged together in the same TA; the alerts live in a different application and are globally shared, as always. We are on Splunk 8.x, and Python was upgraded to 3.x with Splunk. I checked some of the libraries and some are still at 2.7 even after the upgrade.

    # encoding = utf-8
    from __future__ import print_function
    from future import standard_library
    standard_library.install_aliases()
    import requests
    import sys, os
    import json
    import logging
    import logging.handlers
    import subprocess
    from subprocess import Popen, PIPE

    def process_event(helper, *args, **kwargs):
        helper.set_log_level(helper.log_level)
        dropdown_list = helper.get_param("dropdown_list")
        helper.log_info("dropdown_list={}".format(dropdown_list))
        helper.log_info("Alert action alert_action_test started.")

        # Basic test command: works and makes it through the event messaging queue.
        # Passing argv as a list means no shell is involved, so "-" needs no
        # "\-" escaping.
        if dropdown_list == 'abcscript.sh':
            execute = subprocess.run(
                ["/opt/OV/bin/opcmsg", "application=ABC", "object=ABC",
                 "severity=Major", "msg_text=TESTTESTTEST"],
                stdout=subprocess.PIPE, text=True)

        # Full command with the -option flags. In list form the spaces inside the
        # Description value need no quoting either, which was the part that
        # failed validation before.
        if dropdown_list == 'abcscript.sh':
            execute = subprocess.run(
                ["/opt/OV/bin/opcmsg", "application=ABC", "object=ABC",
                 "severity=Major", "msg_t=TEST",
                 "-option", "CIHint=ABC_Abstract_Node",
                 "-option", "ETIHint=ABCPortal:Major",
                 "-option", "Description=The rate of SendSubmission failures to successes exceeded the threshold in the JVM logs",
                 "-option", "category=ABC_JVM",
                 "-option", "subcategory=SendSubmissionRate"],
                stdout=subprocess.PIPE, text=True)

        # There are 4 scripts in total, so there are three more dropdown values
        # and three more commands like the ones above.

Thanks in advance!!!

Hi: How can I filter to find gender = male and age < 40, and then count? There are multiple fields and values. Thanks.

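A minimal SPL sketch, assuming the fields are literally named gender and age (both are placeholders for your actual field names, as is your_index):

    index=your_index gender="male"
    | where age < 40
    | stats count
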
We have installed the Cisco Webex Meetings Add-on for Splunk on the heavy forwarder to onboard the logs, but we are getting the connection error below. Kindly advise where I went wrong; I suspect this is related to a proxy issue.

    2022-02-20 16:36:43,766 WARNING pid=3953874 tid=MainThread file=connectionpool.py:urlopen:745 | Retrying (Retry(total=5, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb34aed6050>: Failed to establish a new connection: [Errno -2] Name or service not known')': /WBXService/XMLService
    2022-02-20 16:36:27,753 WARNING pid=3953874 tid=MainThread file=connectionpool.py:urlopen:745 | Retrying (Retry(total=6, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb34aec5d50>: Failed to establish a new connection: [Errno -2] Name or service not known')': /WBXService/XMLService
    2022-02-20 16:36:13,741 WARNING pid=3953874 tid=MainThread file=connectionpool.py:urlopen:745 | Retrying (Retry(total=9, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb34aec55d0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /WBXService/XMLService
    2022-02-20 16:36:07,939 ERROR pid=3952469 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events. Traceback (most recent call last): File "/opt/splunk/etc/apps/ta-cisco-webex-meetings-add-on-for-splunk/bin/ta_cisco_webex_meetings_add_on_for_splunk/aob_py3/urllib3/connection.py", line 157, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/opt/splunk/etc/apps/ta-cisco-webex-meetings-add-on-for-splunk/bin/ta_cisco_webex_meetings_add_on_for_splunk/aob_py3/urllib3/util/connection.py", line 61, in create_connection

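The repeated "Name or service not known" errors indicate the heavy forwarder cannot resolve the Webex hostname at all, which is consistent with outbound access only working through a proxy. One place to declare a proxy is the proxyConfig stanza in server.conf; a sketch (host and port are placeholders, and whether the add-on honors this stanza depends on the add-on, which may also expose its own proxy settings on its configuration page):

    # server.conf on the heavy forwarder
    [proxyConfig]
    http_proxy = http://your-proxy-host:8080
    https_proxy = http://your-proxy-host:8080
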
Hello guys, we have to remove some of the fields permanently. Is there any configuration file or other way to remove fields from the backend? Note: we are not looking for "fields -" to remove the fields in a search.

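If "remove permanently" means keeping the values out of newly indexed events, one hedged sketch is an index-time SEDCMD in props.conf that strips the field from _raw before it is written (the sourcetype and field name are placeholders; note that already-indexed data is immutable, so this only affects events indexed after the change):

    # props.conf on the indexer or heavy forwarder
    [your_sourcetype]
    SEDCMD-strip_field = s/unwanted_field=\S+//g
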
I want to control the number of concurrent user searches on an app-by-app basis. I know the number of concurrent executions can be controlled per role, but is it possible to control it per app? If so, please tell me how. (I suspect it might be feasible by distributing a limits.conf with base_max_searches under the app.)

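For reference, the role-based control mentioned above is done with search quotas in authorize.conf, and one approximation of per-app control is to give each app's users a dedicated role. A sketch (the role name and the numbers are placeholders):

    # authorize.conf
    [role_app_a_users]
    srchJobsQuota = 3
    cumulativeSrchJobsQuota = 10

srchJobsQuota caps concurrent searches per user holding the role, and cumulativeSrchJobsQuota caps them across all members of the role.
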
Hello splunkies! I'm trying to be an admin and I'm trying to add data manually via my inputs.conf. Please see my scenario:

Path: /logfiles/syslog/log.txt, the output of a script that contacts an internal REST API. There are two kinds of requests in this file:

1. http://localhost:8080/api/requests/xTraining.json shows data from the non-production host and should be written to index API-NPTraining.
2. http://localhost:8080/api/requests/Training.json shows data from the production host and should be written to index API-PTraining.

Both should use sourcetype ss:training. Data in this file rotates daily to log.txt.1020, log.txt.1021, etc.

I have my stanzas like this:

    # first stanza
    [monitor:///logfiles/syslog/log*.txt]
    disabled = 0
    host = http://localhost:8080/api/requests/xTraining.json
    index = API-NPTraining
    sourcetype = ss:training

    # second stanza
    [monitor:///logfiles/syslog/log*.txt]
    disabled = 0
    host = http://localhost:8080/api/requests/Training.json
    index = API-PTraining
    sourcetype = ss:training

What am I missing? Am I getting something wrong? Thank you.

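For context on why this cannot work as written: two monitor stanzas with the same path collide (their settings merge and one set wins), and host is meant to hold the originating host name, not a URL. The standard way to split one file's events across two indexes by content is index-time routing with props and transforms. A minimal sketch, under the assumption that each event contains the request URL it came from; the stanza names, the regex, and the lowercase index names are mine to adapt:

    # inputs.conf: one stanza; default everything to the non-production index.
    # log.txt* also matches the rotated copies (log.txt.1020, ...), which log*.txt would not.
    [monitor:///logfiles/syslog/log.txt*]
    disabled = 0
    index = api-nptraining
    sourcetype = ss:training

    # props.conf
    [ss:training]
    TRANSFORMS-route_training = route_p_training

    # transforms.conf: override the index when the production URL appears;
    # the leading "/" keeps the regex from also matching xTraining.json
    [route_p_training]
    REGEX = /api/requests/Training\.json
    DEST_KEY = _MetaData:Index
    FORMAT = api-ptraining
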
Hi Splunkers, I have a search with an input token that is not working in my dashboard query. t_TargetType is the token name:

    | search AFRoute=if($t_TargetType|s$ == "A","true","*")

When the token has the value A, this becomes

    | search AFRoute=if("A" == "A","true","*")

which I assume is equal to | search AFRoute="true". But when I directly run a search with | search AFRoute=if("A" == "A","true","*"), it does not behave the same as | search AFRoute="true". What is the difference between | search AFRoute=if("A" == "A","true","*") and | search AFRoute="true"? Kevin

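For what it's worth, the search command does not evaluate eval functions; functions such as if() are only evaluated by eval and where, so search treats if("A" == "A","true","*") as literal text to match rather than an expression. A sketch of the where-based equivalent of the intended logic, keeping the token usage from the original:

    | where if($t_TargetType|s$ == "A", AFRoute == "true", true())
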
Do we need to run data rebalancing after upsizing the indexer tier?

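If this refers to an indexer cluster that has grown by one or more peers, a sketch of kicking off data rebalance from the cluster manager's CLI (this assumes a clustered deployment):

    splunk rebalance cluster-data -action start
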
I work at a company in Brazil that is a Splunk Enterprise customer. I am trying to request a dev/test license to install in an environment that is already running as a test on the Splunk Enterprise 60-day trial license. Can I apply the dev/test license "on top" of this free license? And how do I request it? The site returns a 404 "page not found" error (http://splunk.com/dev-test?_ga=2.8916146.684201008.1645269262-1587620948.1640209718) https://www.splunk.com/dev-test Thank you.

I am unable to open Splunk Enterprise; I am getting the error:

    This site can’t be reached
    127.0.0.1 refused to connect.
    ERR_CONNECTION_REFUSED

I have checked the proxy and firewall. It worked after I repaired the application, but I am getting the error again. I think it stopped working after a Windows update, as Windows updated automatically after the installation and repair. Any suggestions on resolving the issue?

Hello splunkies! I'm trying to be an admin and I'm doing an exercise, but I cannot find a way to configure my inputs.conf. Here is the exercise:

Path: /logfiles/syslog/training-nix01.txt. This file is updated continuously and rolls daily to training-nix01.1, training-nix01.2, etc. Data from these files should be written to index Training with sourcetype tp:tr. Any ideas?

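A minimal inputs.conf sketch for this exercise. It monitors only the live file, since the file monitor recognizes rotated copies by their content fingerprint; the lowercase index name is deliberate because index names cannot contain uppercase letters:

    # inputs.conf
    [monitor:///logfiles/syslog/training-nix01.txt]
    disabled = 0
    index = training
    sourcetype = tp:tr
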