All Topics



I am trying to get 10 events from Splunk, but it takes more than 40 minutes, while the UI returns results in less than 1 second.

String token = "token";
String host = "splunk.mycompany.com";
Map<String, Object> result = new HashMap<>();
result.put("host", host);
result.put("token", token);
HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
Service service = new Service(result);
Job job = service.getJobs().create("search index=some_index earliest=-1h |head 10");
while (!job.isReady()) {
    try {
        Thread.sleep(500); // 500 ms
    } catch (Exception e) {
        // Handle exception here.
    }
}
// Read results
try {
    ResultsReader reader = new ResultsReaderXml(job.getEvents());
    // Iterate over events and print the _raw field
    reader.forEach(event -> System.out.println(event.get("_raw")));
} catch (Exception e) {
    // Handle exception here.
}

What could be the cause of this? This code is from the Splunk Java SDK GitHub page. The token, host, etc. have been changed from real values to stubs due to an NDA.
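It is hard to say from the snippet alone where the 40 minutes go, but one alternative worth timing is sketched below: a one-shot search blocks on the server and streams results back in a single response, removing the job-polling loop entirely. Method names follow the splunk-sdk-java documentation (oneshotSearch, ResultsReaderXml); please verify them against your SDK version.

```java
// One-shot search: the server runs the search to completion and
// streams the results back in a single response, so there is no
// job object to poll.
Args queryArgs = new Args();
queryArgs.put("earliest_time", "-1h");

InputStream stream = service.oneshotSearch("search index=some_index | head 10", queryArgs);

ResultsReaderXml reader = new ResultsReaderXml(stream);
Event event;
while ((event = reader.getNextEvent()) != null) {
    System.out.println(event.get("_raw"));
}
reader.close();
```

If the job-based form must stay, timing job creation and getEvents() separately would show whether the delay is in running the search or in fetching and parsing the results.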
Hi, we are facing an issue where we are unable to forward logs into Splunk via rsyslogd. We are forwarding as shown below:

if $syslogfacility-text == "local4" then {
    action(
        type="omfwd"
        Target="syslog.ad.crop"
        Port="5514"
        Protocol="tcp"
        ## queue.type default: Direct
        queue.type="LinkedList"
        ## queue.size default: 1000
        queue.size="100000"
        queue.filename="local4"
    )
    stop
}

Logs were getting ingested until February 8th. Please help resolve this issue. Regards, Rahul
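A few things worth checking on the rsyslog host are sketched below. The queue work directory path is an assumption; it is whatever $WorkDirectory is set to, commonly /var/lib/rsyslog on RHEL-family systems.

```shell
# Is the Splunk-side listener reachable at all from this host?
nc -vz -w 3 syslog.ad.crop 5514

# Has rsyslog suspended the omfwd action (e.g. after a broken connection)?
grep -iE 'omfwd|suspend' /var/log/messages | tail

# Is the disk-assisted queue backing up? (path depends on $WorkDirectory)
ls -lh /var/lib/rsyslog/local4*
```

Restarting rsyslogd after confirming connectivity often clears a suspended action; it is also worth comparing the ingestion stop date (Feb 8th) with any certificate, firewall, or Splunk-side input changes made on that day.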
I have installed a new indexer but I am getting the error below. It looks like the data is copied, but I also don't see the server under Indexer Clustering on the master node. Two questions:
1. How do I add the new indexer to the list?
2. How do I resolve the error message?

02-21-2022 17:52:56.508 +0200 INFO CMSlave - event=addPeer status=failure shutdown=false request: AddPeerRequest: { _id= active_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB mgmt_port=8089 name=243C06C6-E196-4E2D-A990-5AD71B271ED5 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=ilissplidx11 site=default splunk_version=7.3.4 splunkd_build_number=13e97039fb65 status=Up }
02-21-2022 17:52:56.508 +0200 ERROR CMSlave - event=addPeer start over and retry after sleep 100ms reason= addType=Initial-Add Batch SN=1/1 failed. add_peer_network_ms=3
02-21-2022 17:52:56.608 +0200 INFO CMSlave - event=addPeer Batch=1/1
02-21-2022 17:52:56.611 +0200 WARN CMSlave - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=illinissplnkmaster:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=10.232.208.35 mgmtport=8089 (reason: http client error=No route to host, while trying to reach https://10.232.208.35:8089/services/cluster/config).
[ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB mgmt_port=8089 name=243C06C6-E196-4E2D-A990-5AD71B271ED5 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=ilissplidx11 site=default splunk_version=7.3.4 splunkd_build_number=13e97039fb65 status=Up } ].
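The key line is the remote_error: the master cannot reach https://10.232.208.35:8089 ("No route to host"), which points at a network or firewall problem rather than a Splunk configuration one. A hedged sequence of checks, run from the master node (firewall commands assume a firewalld-based host; adjust for your distribution):

```shell
# 1. Can the master reach the new peer's management port?
curl -k https://10.232.208.35:8089/services/cluster/config

# 2. If not, check for a host firewall on the new indexer:
sudo firewall-cmd --list-ports          # or: sudo iptables -L -n

# 3. Open the ports the cluster needs (mgmt 8089, replication 8080):
sudo firewall-cmd --add-port=8089/tcp --add-port=8080/tcp --permanent
sudo firewall-cmd --reload
```

Once the master can reach port 8089 on the peer, the addPeer retry loop should succeed and the indexer should appear under Indexer Clustering on the master.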
I have a question on the Dev tutorial: I am unable to figure out this behavior. Is the output under DESCRIPTION expected? None of the words in "DESCRIPTION" are delimited by whitespace; is this the normal behavior?

This is Module 1 of the Splunk>Dev tutorial: https://dev.splunk.com/enterprise/tutorials/module_getstarted/

Set up the sample data bundle
To get the Eventgen sample bundle and send it to the devtutorial index, do the following steps:
Go to https://github.com/splunk/eventgen/blob/develop/tests/sample_bundle.zip and click Download to download the Eventgen sample data file, sample_bundle.zip, to your computer.

DESCRIPTION
AInterpidPanoramaofaMadScientistAndaBoywhomustRedeemBoyinAMonastery
Hello, I'm currently using two CSVs to make a report:
- index A: CSV file A
- index B: CSV file B

I'm trying to add a value from CSV file A into CSV file B. With my first search, over CSV file B, I get these values:

Owner     IP
Owner A   10.10.10.2
Owner B   10.10.10.3

I would like to add the Owner value from my first search into my second search (Owner would come from file A; IP is the same value as the IP in file A):

Owner     IP           CVE   Risk
Owner A   10.10.10.2
Owner B   10.10.10.3

How can I get the value of Owner from the CSV file and add it to my search?

Regards,
Miguel
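One hedged sketch (the index and sourcetype names below are placeholders for your two indexed CSVs): combine the two searches and let stats stitch the Owner onto each IP.

```
index=B sourcetype=csv_cve
| append [ search index=A sourcetype=csv_owner | fields IP Owner ]
| stats values(Owner) as Owner, values(CVE) as CVE, values(Risk) as Risk by IP
```

If the Owner file is small, an alternative is to upload it as a lookup table and use | lookup owner_file.csv IP OUTPUT Owner instead of the append, which avoids a subsearch entirely.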
I just noticed a difference in license usage when looking at 30-day license usage. With "no split" I am within the license limit by 60 GB or so, but with "split by", for example by index, I am way over our license limit. It differs by about 90 GB in total between "no split" and "split by". No warnings are shown about license usage, so I think "no split" shows the correct summary. Does anybody have a clue as to why?
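As far as I can tell, the no-split panel of the License Usage Report is built from the daily type=RolloverSummary records in license_usage.log, while the split-by panels sum the per-slice type=Usage records, so the two can legitimately disagree (time-zone boundaries and squashed host/source breakdowns are common causes). A hedged pair of searches to compare the raw numbers yourself:

```
index=_internal source=*license_usage.log type=RolloverSummary earliest=-30d@d
| timechart span=1d sum(eval(b/1024/1024/1024)) as daily_GB
```

```
index=_internal source=*license_usage.log type=Usage earliest=-30d@d
| stats sum(eval(b/1024/1024/1024)) as GB by idx
| addcoltotals
```

If the RolloverSummary total matches your license view, the gap is in how the Usage records are being summed (or squashed), not in actual ingestion.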
I have a dashboard with positive and negative values; it contains the difference from the month before. I want the font color green if the value is negative and red if positive. I've come to this:

<format type="color" field="verschil tov vorige maand">
  <colorPalette type="expression">if (like(value,"%-%"),"#0E9421","#EC6521")</colorPalette>
</format>

but this changes the background color, and I want to change the font color. Any idea if it is possible to change the font color in a similar way?
403 Forbidden: unable to post questions in the Splunk community. My data is masked, but why am I still not allowed to post questions?
I read in another thread that events are not really "deleted" as such; they are simply marked in a way that removes them from search. However, that may relate only to Splunk. So if you need to restore a deleted event and its associated containers, actions, logs, etc., is there a way to do that in SOAR/Phantom?
We have episodes creating ServiceNow tickets through a third-party interface. The episode status changes when the status is changed in ServiceNow. But when the episode is closed in Splunk, a new episode is not created; instead, events are updated in the same closed episode. Can anyone let me know what the issue could be? Regards, Manjunath R
Dear professionals, I run the search string below:

index="hcg_oapi_prod" source="/var/log/app/rest.log"

And this is my result, as illustrated in the figure. I have about 13 sourcetypes, e.g. "oapi:atl-customer:rest", etc. Please help me get the size of each sourcetype; I can only get the size of the index like this. Thank you.
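Splunk does not store per-event size as a field, but the length of _raw is a reasonable approximation. A sketch using the index and source from your search:

```
index="hcg_oapi_prod" source="/var/log/app/rest.log"
| eval bytes=len(_raw)
| stats sum(bytes) as total_bytes, count by sourcetype
| eval total_MB=round(total_bytes/1024/1024, 2)
| sort - total_MB
```

For licensed-ingest numbers instead of raw-event sizes, the same split is available from index=_internal source=*license_usage.log type=Usage, grouped by the st field.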
Hi all, I am facing strange behavior for which I can't find anything in the docs. I have a source that generates CSV files (comma-separated). They are indexed in a dedicated index and sourcetype. A look-alike example:

id,service_id,product_id,shop_id,user_id,blah_blah,whatever,name,date,client_id
1,34456789,12234,23,4,f,45678,ivan,2022-01-13 07:04:49,1
2,34452789,12134,25,4,f,45678,ivan,2022-01-13 07:14:49,1
3,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:14:49,1
4,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:15:49,1
5,34451789,12133,23,4,f,45678,ivan,2022-01-13 07:15:49,1
6,34456789,12234,23,4,f,45678,ivan,2022-01-13 07:04:49,1
7,34452789,12134,25,4,f,45678,ivan,2022-01-13 07:14:49,1
8,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:14:49,1
9,34451789,12134,27,4,f,45678,ivan,2022-01-13 07:15:49,1

Now, challenge no. 1 is that the script that generates the CSV can edit already-existing lines. Challenge no. 2 is that this does not always produce the same behavior. If a simple value is changed (from 1 to 2, or from ivan to ivag; importantly, the same number of characters), the change is nowhere to be found in the indexed data. However, if the change alters the number of characters (say ivan becomes johnathan), then the whole file is re-indexed with the new value, causing lots of duplication. I am sure this must be documented somewhere, but I cannot find it and thus cannot really understand it. Does anyone know what is going on? (I found something in the community about Splunk checking the first 256 characters of a file to decide, but I have tested changes both before and after that threshold.)

Kind regards!
rd
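What you found about the first 256 characters is the file-tracking CRC: to my understanding, the monitor input identifies a file by a hash of its first initCrcLength bytes (256 by default) and remembers a seek pointer for how far it has read. Monitoring is designed for append-only files, so in-place edits give the inconsistent results you see: a same-length edit behind the seek pointer never moves the pointer and is simply not re-read, while an edit that changes the file length invalidates the stored checkpoint, and Splunk re-reads the file, producing duplicates. A hedged inputs.conf sketch of the usual levers (path and names are placeholders):

```
# inputs.conf (sketch; adjust path, index, sourcetype to your setup)
[monitor:///data/exports/*.csv]
index = my_csv_index
sourcetype = my_csv
# Hash more than the default 256 bytes if many files share an identical header
initCrcLength = 1024
# crcSalt = <SOURCE> and batch inputs are other options, each with
# trade-offs around re-indexing when files are edited or rotated
```

None of these make in-place edits safe, though; the robust fix is usually to have the generating script write new/appended files rather than rewrite existing ones.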
Hi, we have taken a Splunk Cloud trial and installed the universal forwarder on Windows, but it is not communicating with the cloud server. We are getting errors like this:

02-21-2022 12:42:48.381 +0530 INFO DC:DeploymentClient [691880 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
02-21-2022 12:42:59.014 +0530 INFO ProxyConfig [595472 HttpClientPollingThread_422CEEC3-132D-4E49-B8B8-20DC5A33230D] - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.

We have enabled all the ports required for communication, but it is still not connecting to the cloud server. Please help us resolve this issue. Thank you.
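The err=not_connected line suggests the forwarder cannot open a TCP connection to the deployment server. A small sketch to verify reachability from the forwarder host (the hostnames in the comments are placeholders; substitute the endpoints from your Splunk Cloud trial's credentials package):

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoints -- substitute your actual stack's hostnames:
# print(can_connect("prd-x-xxxxx.splunkcloud.com", 8089))            # DS/mgmt port
# print(can_connect("inputs1.prd-x-xxxxx.splunkcloud.com", 9997))    # data port
```

If this returns False for the management/deployment port, the block is on the network path (proxy, firewall, or egress rules), not in the forwarder configuration; the ProxyConfig message also hints that if your site requires an HTTP proxy, it must be set in the forwarder's server.conf.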
I am trying to export data from Splunk using the Splunk CLI, as given here:

splunk search "index=_internal earliest=09/14/2015:23:59:00 latest=09/16/2015:01:00:00 " -output rawdata -maxout 200000 > c:/test123.dmp

I just want to print event fields separated by a tab character instead of the string '\t'. I tried something like

...eval _raw=field1 + "<tabchar_copied_from_notepad>" + field2 + ...

which results in an improper character that behaves differently on different OSes, and

...eval _raw=field1 + "\t" + field2 + ...

which results in "\t" appearing in _raw as two regular characters, not a single tab character. Is there a cleaner way to achieve this?
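eval does not interpret backslash escapes inside string literals, which is why "\t" comes out as two characters. A common workaround is to synthesize the tab with urldecode (%09 is the URL encoding of TAB):

```
... | eval tab=urldecode("%09")
    | eval _raw=field1.tab.field2.tab.field3
    | fields _raw
```

The CLI export then writes real tab separators into the output file. (The "." here is SPL string concatenation; "+" also works for strings, as in your examples.)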
Hi, i have an requirement as like below. TimeStamp LoginUsers Avg SLA Min SLA  Max SLA 20-02-2022 11:30 35 1 1 3.4 20-02-2022 11:45 40 1.3 1 5.3 20-02-2022 12:00 32 2.4 1 7.6 20-02-2022 12:15 53 1.2 1 4.2 20-02-2022 12:30 44 2.3 1 3.5   I have an above dashboard panel which was showing up on 15minutes span. Onclick of max SLA we are suppose to get the particular splunk log event in new tab. may  i know how we can work on this.
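In Simple XML this is done with a <drilldown> scoped to one column via <condition field=...> and a <link target="_blank">. A sketch, assuming the backing index is my_index and the table column is named max_sla (adjust the q= search to whatever uniquely identifies the event, and note $row._time$ only resolves if _time is in the table's results):

```
<table>
  <search> ... your panel search ... </search>
  <drilldown>
    <condition field="max_sla">
      <link target="_blank">search?q=search index=my_index sla=$row.max_sla$&amp;earliest=$row._time$</link>
    </condition>
    <!-- clicks on any other column do nothing -->
    <condition></condition>
  </drilldown>
</table>
```

The target="_blank" attribute is what opens the search in a new tab; without it, the drilldown replaces the current page.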
Hello. Up to Splunk version 7 it was Python 2, so I was using the app below to search Elasticsearch:

https://github.com/brunotm/elasticsplunk

When I upgraded Splunk to version 8 and started using Python 3, the app could no longer run. So I am asking if there is a way to keep using this app:
* How can it be used with Python 3? (If you have converted it successfully and have it in use, can you share it?)
* Is there an app that can replace it? (I'm not going to use the Elasticsearch Data Integrator - Modular Input app.)
* If there is an app you are using with Splunk 8 (Python 3), please recommend it.
Hello, I have an SPL search which detects lookalike short and long domains. My goal is to implement a CSV lookup that allows adding exceptions to these results. For example, if "compeni" is a false positive that is close to "company", the search will keep flooding the results with it. So I would like a CSV lookup that can be edited over time to reduce the false positives; the column in the CSV could be something like reel_domain, consisting of the list of false-positive domains. Now my question is: how can I implement this idea in the search below?

index=* src_user!="" src_user!="*company" AND src_user!="*comp.com" AND src_user!="*compan.com" AND src_user!="*compe.com" AND src_user!="*compani.com"
| dedup src_user
| rex field=src_user "(?:@)(?<detected_domain>[^>]*)"
| eval domain_list=split(detected_domain, ".")
| eval domain_list=mvfilter(len(domain_list)>3)
| eval domain_list=mvfilter(domain_list!="filter_example")
| eval domain_list=if(mvcount(domain_list)>3, mvindex(domain_list, -3), domain_list)
| rename domain_list as word2
| makemv word1
| eval word1 = mvappend(word1, "company")
| lookup local=t ut_levenshtein_lookup word1 word2
| eval ut_levenshtein=mvfilter(ut_levenshtein!=0)
| eval ut_levenshtein=min(ut_levenshtein)
| rename ut_levenshtein as ct_long
| rename word1 as lg_domain
| eval word1=mvappend(word1, "comp"), word1=mvappend(word1, "compan"), word1=mvappend(word1, "compe"), word1=mvappend(word1, "compani")
| lookup local=t ut_levenshtein_lookup word1 word2
| eval ut_levenshtein=mvfilter(ut_levenshtein!=0)
| eval ut_levenshtein=min(ut_levenshtein)
| rename ut_levenshtein as ct_short
| rename word1 as st_domain
| rename word2 as input_dom
| search ct_short<=1 OR ct_long<=3
| table src_user input_dom st_domain ct_short lg_domain ct_long

Any help would be appreciated. Thanks in advance.
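Assuming a lookup file (say domain_exceptions.csv) with a single column reel_domain listing the domains to suppress, one way is to filter them out right after the detection logic, just before the final table:

```
| lookup domain_exceptions.csv reel_domain AS input_dom OUTPUT reel_domain AS matched_exception
| where isnull(matched_exception)
```

Any domain present in the CSV then drops out of the results, and editing the CSV over time maintains the exception list with no SPL changes. The lookup has to be defined (Settings > Lookups > Lookup table files / definitions) for the name to resolve, and the file/definition name here is a placeholder.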
Hi,

Snapshot: I had some alerts with script actions. The alerts simply fire if value A exceeds value B by 10 or more, e.g. value A=411 and value B=400 triggers the alert and the script the alert points to. Whenever an alert was triggered, a specific shell script ran; each script was simply a shell command. Whenever those commands ran, they fed alerts into the event pipeline of another tool. It worked perfectly as designed until we upgraded Splunk from 7.x to 8.x. After the upgrade, the script actions option is still available but no longer works, since the functionality has been deprecated.

Alternative: The suggested replacement is Splunk custom alert actions. I have developed a TA using the Splunk Add-on Builder. This allows users to create the same alerts, where the Action section lets them select the script of choice to attach to the alert, so that whenever the alert is triggered the script runs and the event is sent into the other tool's event pipeline as before. The TA has a Python script which simply calls the chosen shell command when the condition for that script is met.

Issues:
1. When the same shell command is run manually from the command line of one of the search heads, it works perfectly. (This was a test to make sure it works.)
2. When the same shell command is run via the Add-on Builder validation section, it gives a syntax error for the "-" in the command. When I escape it as "\-", the syntax error resolves and the test runs with a success message. However, the event that is supposed to be injected into the other tool's event pipeline never arrives.
3. When the same shell command is run WITHOUT the "-option" flags, it works. (This was just to confirm the most basic functionality.)

So what can I do to make this Python script run the commands we want it to run, to restore the functionality we had with scripted alert actions?

I am suspecting an issue with Python. FYI, these shell-command scripts and the Python script that triggers them are all in one package under the same TA. The alerts are in a different application and are globally shared, as always. We are on Splunk 8.x; Python was upgraded to 3.x via the Splunk upgrade. I suspect an issue with the Python engine and libraries: I did some checks and some libraries are still at 2.7 even after the upgrade.

# encoding = utf-8
from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
import requests
import sys, os
import json
import logging
import logging.handlers
import subprocess
from subprocess import Popen, PIPE

def process_event(helper, *args, **kwargs):
    helper.set_log_level(helper.log_level)
    dropdown_list = helper.get_param("dropdown_list")
    helper.log_info("dropdown_list={}".format(dropdown_list))
    helper.log_info("Alert action alert_action_test started.")

    # Basic test command. Working, and making it through the event messaging queue.
    if dropdown_list == 'abcscript.sh':
        execute = subprocess.run(
            "/opt/OV/bin/opcmsg application=ABC object=ABC severity=Major msg_text=TESTTESTTEST",
            stdout=subprocess.PIPE, text=True)

    # Working syntax-wise (no errors in Add-on Builder) but not making it through the
    # messaging queue as desired. Note: I had to take out the Description option
    # (\-option Description='...') to make validation pass, because Python was not
    # accepting values with spaces.
    if dropdown_list == 'abcscript.sh':
        execute = subprocess.run(
            "/opt/OV/bin/opcmsg application=ABC object=ABC severity=Major msg_t=TEST "
            "\-option CIHint=ABC_Abstract_Node \-option ETIHint=ABCPortal:Major "
            "\-option Description='The rate of SendSubmission failures to successes exceeded the threshold in the JVM logs' "
            "\-option category=ABC_JVM \-option subcategory=SendSubmissionRate",
            stdout=subprocess.PIPE, text=True)

    # There are 4 scripts in total, so there are three more dropdowns and three more
    # commands like the above.

Thanks in advance!
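The "\-" escaping and the missing event both suggest the command string is being parsed in ways you do not control: subprocess.run() given a single string (without shell=True) treats the entire string as the program name, and shell=True re-introduces quoting problems for values with spaces. A hedged sketch (not the Add-on Builder's required shape, just the subprocess part): pass the command as an argument list, so each option, including Description with its spaces, reaches opcmsg verbatim with no escaping at all.

```python
import subprocess

def build_opcmsg_cmd(msg_text, description):
    # An argument list (instead of one big string) is handed to the program
    # verbatim: no shell parsing, no escaping of "-", and values containing
    # spaces survive as single arguments.
    return [
        "/opt/OV/bin/opcmsg",                     # path as posted in the question
        "application=ABC",
        "object=ABC",
        "severity=Major",
        "msg_text=" + msg_text,
        "-option", "CIHint=ABC_Abstract_Node",
        "-option", "Description=" + description,  # spaces need no quoting here
    ]

def run_opcmsg(msg_text, description):
    # shell=False is the default and is what we want with a list.
    return subprocess.run(build_opcmsg_cmd(msg_text, description),
                          stdout=subprocess.PIPE, text=True)
```

The remaining ETIHint/category/subcategory options from your command would be added to the list the same way, as "-option", "key=value" pairs.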
Hi: How can I filter to find gender = male and age < 40, and then count? There are multiple fields and values. Thanks.
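Assuming the events carry fields literally named gender and age, a minimal sketch (index name is a placeholder):

```
index=my_index gender="male"
| where age < 40
| stats count
```

where compares age numerically as long as the extracted value is numeric; if it is extracted as a string, wrap it as tonumber(age) < 40.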
We have installed the Cisco Webex Meetings Add-on for Splunk on the heavy forwarder to onboard the logs, but we are getting the connection error below. Kindly advise where I went wrong; I suspect this is related to a proxy issue.

2022-02-20 16:36:43,766 WARNING pid=3953874 tid=MainThread file=connectionpool.py:urlopen:745 | Retrying (Retry(total=5, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb34aed6050>: Failed to establish a new connection: [Errno -2] Name or service not known')': /WBXService/XMLService
2022-02-20 16:36:27,753 WARNING pid=3953874 tid=MainThread file=connectionpool.py:urlopen:745 | Retrying (Retry(total=6, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb34aec5d50>: Failed to establish a new connection: [Errno -2] Name or service not known')': /WBXService/XMLService
2022-02-20 16:36:13,741 WARNING pid=3953874 tid=MainThread file=connectionpool.py:urlopen:745 | Retrying (Retry(total=9, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb34aec55d0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /WBXService/XMLService
2022-02-20 16:36:07,939 ERROR pid=3952469 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/ta-cisco-webex-meetings-add-on-for-splunk/bin/ta_cisco_webex_meetings_add_on_for_splunk/aob_py3/urllib3/connection.py", line 157, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/opt/splunk/etc/apps/ta-cisco-webex-meetings-add-on-for-splunk/bin/ta_cisco_webex_meetings_add_on_for_splunk/aob_py3/urllib3/util/connection.py", line 61, in create_connection
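"[Errno -2] Name or service not known" is a DNS resolution failure: the heavy forwarder cannot resolve the Webex API hostname (or the proxy hostname) at all. A quick check to run on the heavy forwarder; the hostname in the comment is an assumption, so substitute the Site URL / API host the add-on is actually configured with:

```python
import socket

def resolves(hostname):
    """Return True if the OS resolver can map hostname to an address."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

# e.g. resolves("api.webex.com") -- hostname is a placeholder; use the
# host from the add-on's configuration. False means DNS (or a required
# proxy) is the problem, not the add-on itself.
```

If the host only has outbound access through a proxy, direct DNS may legitimately fail; in that case the fix is configuring the add-on's proxy settings rather than DNS.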