All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, can you please let me know which Splunk Enterprise version is the most stable release to deploy? Thank you
Hello, any ideas on how I can check RDP attempts or connections in Splunk? Many thanks
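As a starting-point sketch, assuming Windows Security event logs are already being ingested (the index and field names below vary by environment and are placeholders): successful logons are EventCode 4624, and logon type 10 (RemoteInteractive) corresponds to RDP.

index=wineventlog EventCode=4624 Logon_Type=10
| stats count by host, user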
Hello, what are the best practices for Java? Thanks.
Hi, I need to do a POST request with some params to an external REST endpoint which expects an SSL cert for authentication. If anyone has done anything like this, it would be great if you could share the steps. Thanks in advance!!
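For what it's worth, a minimal sketch of one way to do this in Python, assuming the requests library and a client certificate/key pair for mutual TLS; the URL, parameters, and file paths are placeholders:

import requests

# Hypothetical endpoint and payload -- replace with your own.
url = "https://api.example.com/v1/resource"
payload = {"param1": "value1", "param2": "value2"}

resp = requests.post(
    url,
    json=payload,
    # Client certificate and private key presented for SSL-cert authentication
    cert=("/opt/certs/client.pem", "/opt/certs/client.key"),
    # CA bundle used to verify the server's certificate
    verify="/opt/certs/ca.pem",
    timeout=30,
)
resp.raise_for_status()
print(resp.json())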
Hi, please let me know: if I upgrade the TA for Microsoft Windows Defender from the current version 1.0.0 to 1.0.6, will it cause any issues? Do I have to save the secret key and so on? Regards, Rahul
Since our last update to 8.2.2.1, the _internal index contains lots of ERROR messages for which we cannot find any information about the meaning:

ERROR ILightWeightSearchStringParser [4392 SchedulerThread] - still in inQueto=true

Does anybody know this message and can you give some information about it? Thanks
Hi, all! Here are the sources that I want to include in my search:
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log.1
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log.2
...
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log.50
How could I cover all of those sources in a simple way in my search command?
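A minimal sketch of one common way to do this, assuming all the files land in the same index (the index name is a placeholder): a wildcard on the source path matches the base file plus every rotated copy.

index=your_index source="/appvol/wlp/DIVR01HK-AS01/applogs/appl.log*"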
Greetings! How can I create tickets in Splunk and assign them to someone? Thank you in advance!
Hello, I am trying to configure Splunk Connect for Kubernetes to capture a k8s cluster's application logs. I have problems configuring the HTTPS connection to HEC. On the Heavy Forwarder, I have configured a server certificate, which has been signed by our company authority. Then, in the Splunk Connect for Kubernetes Helm values, I configure HTTPS:

splunk:
  hec:
    # host is required and should be provided by user
    host: hostname.domain.com
    # token is required and should be provided by user
    token: MY-HEC-TOKEN
    # protocol has two options: "http" and "https", default is "https"
    # For self signed certificate leave this field blank
    protocol: https

When deploying, I see the following logs on the Heavy Forwarder:

01-25-2022 09:37:16.729 +0100 WARN SSLCommon [1235867 HttpInputServerDataThread] - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='unknown CA'.
01-25-2022 09:37:16.729 +0100 WARN HttpListener [1235867 HttpInputServerDataThread] - Socket error from 10.8.199.195:55608 while idling: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

I have to configure insecureSSL: true to get the connection working and see logs on the indexer. But if I activate an HTTPS connection, I do not want it to be insecure ^^.

I am a bit confused about the Splunk Connect for Kubernetes configuration. Can I use the following to configure my company CA?

splunk:
  # Configurations for HEC (HTTP Event Collector)
  hec:
    # The PEM-format CA certificate file.
    # NOTE: The content of the file itself should be used here, not the file path.
    # The file will be stored as a secret in kubernetes.
    caFile:

Or are the keys clientCert, clientKey and caFile only used for mTLS configuration?

Thank you in advance for your answers. Regards, Nicolas.
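Reading the chart comment quoted above, caFile appears to take the PEM content of the CA certificate inline rather than a file path, which suggests a values sketch along these lines (an assumption to verify against the chart documentation, in particular whether caFile alone is enough when client certs are not required; certificate body abbreviated):

splunk:
  hec:
    host: hostname.domain.com
    token: MY-HEC-TOKEN
    protocol: https
    # PEM content of the company CA certificate, not a path (per the chart comment)
    caFile: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----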
Hi All, we are currently using the "CrowdStrike Falcon Event Streams Technical Add-On" in our instance: https://splunkbase.splunk.com/app/5082/ We recently received the alert about the jQuery update. According to the Upgrade Readiness App, this add-on does not yet support jQuery 3.5. Does anyone know the support schedule for this add-on? Regards,
I currently have a Universal Forwarder running on a Linux syslog server with a bunch of file monitors in place, such as:

[monitor:///var/log/10.10.10.99/syslog.log]
index=hp
host_segment=3
disabled=0

The index that I'm using for my new file monitor stanzas is a newly created index that I haven't used previously. I've created a couple of new deployment apps with the new file monitors and pushed them out to the UF on my syslog server. I can see other monitored files on the syslog server being forwarded into Splunk; however, I'm not seeing my new files being monitored. I've reloaded the deploy-server to ensure that the configs are being pushed out. I have also run a "./splunk btool inputs list" command, and I can see that it lists my new configuration as part of the aggregated inputs.conf. However, I'm not seeing any events for these new file monitors being forwarded into Splunk. The new index is showing 0 events received. Is there a way to list events being output by the Universal Forwarder? Also, is there a way to list events from my Universal Forwarder that are hitting the input queue on my Splunk indexer? Thanks,
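One way to check both questions from the search side is the forwarder's own introspection data in the _internal index, which a UF forwards by default; a sketch, where the host value is a placeholder for your syslog server:

index=_internal source=*metrics.log* host=your_syslog_server group=per_source_thruput
| stats sum(kb) AS kb_forwarded by series

Each series value is a monitored source path, so a new monitor that is actually reading data should show up here with a non-zero kb_forwarded.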
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" Exception in thread Thread-1:
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" Traceback (most recent call last):
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" File "/opt/splunk/lib/python3.7/threading.py", line 926, in _bootstrap_inner
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" self.run()
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" File "/opt/splunk/lib/python3.7/threading.py", line 870, in run
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" self._target(*self._args, **self._kwargs)
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" File "/opt/splunk/etc/apps/rest_ta/bin/rest.py", line 461, in do_run
01-25-2022 13:55:43.694 +0800 ERROR ExecProcessor [128178 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" endpoint_list[i] = endpoint.replace(replace_key,c['clear_password'])
Hello, when I try to show the source of an event, Splunk shows only "loading ...". I made sure that the search had finalized. We are using Splunk 8.0.8. Edit: I start a search with "index=..." and "source=...". I open an event -> Event Actions -> Show Source. The loading screen never displays any result/source, just "loading ....".
Hi, I registered for the Splunk Phantom Community Edition download 3 days ago. However, the approval is still pending and I haven't received the link so far. Please let me know how I can get the download link. Thanks in advance
Hi, my environment has multiple apps. I have a requirement to default a value into a temp field. The eval in my search works, but the EVAL in the props.conf file isn't creating the new field. Please help me troubleshoot. My conf files are below:

INPUTS ON FORWARDERS:

[monitor:///var/log/omega]
index=foo_bar_transaction
sourcetype=foo_car

PROPS ON SHC:

[foo_car]
EVAL-tempvariable = "Test_Eval"
EVAL-datacenter = if(IN(mvindex(split(host,"-"),1),"clc","dkn"),"DANGER",mvindex(split(host,"-"),1))

INDEXER:

/opt/splunk/bin/splunk cmd btool props list foo_car --debug
/opt/splunk/etc/slave-apps/INFRA_APP_logs/default/props.conf [foo_car]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/system/default/props.conf AUTO_KV_JSON = true
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/local/props.conf MAX_DAYS_HENCE = 40
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
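Worth noting when reading the btool output above: EVAL- settings in props.conf are applied at search time, so they take effect on the search head, not the indexer, and btool run on an indexer will not show them. A quick check (a sketch, assuming shell access to an SHC member) is to confirm the stanza actually landed there:

/opt/splunk/bin/splunk cmd btool props list foo_car --debug | grep EVAL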
Hi, I am trying to calculate the age of a task. The time is in the format below. What am I missing?

| makeresults
| eval Last_Checkin="2021-05-26T20:47:22Z"
| table Last_Checkin, _time
| eval Age_Days=ceiling((now()-strptime(Last_Checkin,"%Y-%m-%dT%H:%M:%S%3NZ"))/86400)
| eval CVE_Age=case(
    Age_Days<30,"A_0 to 29 Days",
    Age_Days>=365,"G_365 Days+",
    Age_Days>=180,"F_180 to 364 Days",
    Age_Days>=120,"E_120 to 179 Days",
    Age_Days>=90,"D_90 to 119 Days",
    Age_Days>=60,"C_60 to 89 Days",
    Age_Days>=30,"B_30 to 59 Days",
    0==0,"H_No Age Data")
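One likely culprit (an assumption, since the sample value carries no fractional seconds) is the %3N in the strptime format: if strptime cannot match the subsecond digits it expects, it returns null, which makes Age_Days null and CVE_Age fall through to "H_No Age Data". A sketch with the format matching the value exactly as shown:

| eval Age_Days=ceiling((now()-strptime(Last_Checkin,"%Y-%m-%dT%H:%M:%SZ"))/86400)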
Greetings, would anyone have any recommendations for forwarding events from physically isolated networks to a main network, thus providing a "single pane of glass"? The networks must be physically isolated due to security requirements. Data diode connections are approved. We cannot implement firewalls, which rules out TCP connections (and the typical Splunk forwarding protocols). Here is a reference diagram of the logical architecture. We are looking at utilizing data diodes, which in turn require UDP connections, which of course limits our options. We know how to get the traffic across the data diodes from a network configuration standpoint; the big question is what data to send and how to structure it so it can be processed correctly by the upstream Splunk indexer.

[Diagram: Reference Logical Architecture]

Our biggest issue lies in the fact that UDP ingest on the upstream Splunk server only sees the Heavy Forwarders as the source vs. the original endpoints. We've searched other forum posts with no luck on how to adapt them, if they are even adaptable for this type of scenario, given we are ingesting Windows, Linux, and syslog from each isolated network. Example: https://community.splunk.com/t5/Getting-Data-In/Keeping-Host-data-when-using-Heavy-Forwarder/m-p/232328

What happens between the Heavy Forwarders and the upstream Splunk indexer is where we need help. Any and all creative ideas are welcome! For example:
- UDP packet manipulation inserting a source hostname on the Heavy Forwarders (but how can we determine the original source hostname?)
- Flat file export on the Heavy Forwarder + ingest on the upstream Splunk indexer (but how can we leverage existing props & transforms without creating lots of custom regex?)
- Is there a better "flat file" method that would support Windows and Linux in addition to syslog (via SC4S)?
- Nah, forget it, just leave the networks separate

We are currently running Splunk Enterprise v8.2.3, on premises. Thank you in advance!
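Regarding the first idea above: if whatever arrives over UDP still carries the original hostname somewhere in the payload (for example a syslog header), one possible approach is a parse-time host override on the upstream indexer. A sketch, with the sourcetype name as a placeholder and the regex purely illustrative of a syslog-style header:

props.conf:
[your_udp_sourcetype]
TRANSFORMS-sethost = set_host_from_payload

transforms.conf:
[set_host_from_payload]
# Capture the hostname field of a syslog header and write it to the host metadata
REGEX = ^\w{3}\s+\d+\s+[\d:]+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1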
Hi, I am currently using the AWS Add-on for Splunk, and am looking to see if I can blacklist based on regex, beyond the application UI's blacklisting of event names (using the blacklist method provided by the app: https://docs.splunk.com/Documentation/AddOns/latest/AWS/CloudTrail). I have a central CloudTrail for all of my accounts and am looking to send logs from a certain account to the nullQueue so they are not ingested. The logs do have a field for the account ID. The reason is that the logs from this specific account are about 80 percent of my ingestion and are not needed. I saw this article, but as mentioned I am not able to modify these files directly because I am on Splunk Cloud: https://docs.splunk.com/Documentation/Splunk/6.4.1/Forwarding/Routeandfilterdatad#Discard_specific_events_and_keep_the_rest Since I do not have access to modify transforms.conf or props.conf, I was told I could modify the application's .conf files and send a zipped folder of the modified contents for the Splunk team to upload and install. I currently do have blacklisting implemented on event names, as this is part of the application. Is there any guidance on how I can blacklist based on a regex such as accountID=(the ID of the account I want to send to the nullQueue)?
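For reference, the route-and-filter pattern from the second link looks roughly like this when keyed on an account ID; the sourcetype, field name, and account ID below are placeholders to verify against your actual events, and on Splunk Cloud the change would need to go through the packaged-app process described above:

props.conf:
[aws:cloudtrail]
TRANSFORMS-drop_account = drop_noisy_account

transforms.conf:
[drop_noisy_account]
# Match the noisy account's ID in the raw CloudTrail JSON and discard the event
REGEX = "recipientAccountId"\s*:\s*"123456789012"
DEST_KEY = queue
FORMAT = nullQueue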
Hello. We would like to use the Splunk Add-on for AWS to ingest CloudTrail data from S3; however, we do not want to allow the entire s3.amazonaws.com namespace at the proxy, but rather only the specific S3 bucket that contains the data for ingest. Which configuration / .py files will I need to modify to force the Splunk Add-on for AWS to request bucketname.s3.us-east-2.amazonaws.com instead of just s3.amazonaws.com? Thank you!
Hello, I am getting some error messages from my props configuration for parsing timestamp data. The sample event, my props configuration, and the error message are given below. Any help will be highly appreciated. Thank you so much.

Sample event:

<?xml version="1.0" encoding="ISO99991"?>
<SDWDATA>
<MDWDATA>
<TIMESTAMP>20110630143000</TIMESTAMP>
<USERTYPE>TEST</USERTYPE>
<SESSION>zx530</SESSION>
<IPADDR>142.225.163.60</IPADDR>
<SYSTEM>CDE</SYSTEM>
<EVENTID>NAMEE</EVENTID>
<EVENTTYPE>SEARCH</EVENTTYPE>
<RETURNCODE>0102</RETURNCODE>
<ERRORMSG>None</ERRORMSG>
<ESTATCD>1</ESTATCD>
<TESTCODE>210</TESTCODE>
<FNUMBER>1321</FNUMBER>
<OUTPUTCODE>10</OUTPUTCODE>
<RCODE>ASDC</RCODE>
<NAMECTRL>TESTPWE</NAMECTRL>
<USERID>I00XY09</USERID>
<ACCESS>ngd</ACCESS>
<CAMPUSCODE>p</CAMPUSCODE>
<SRCCD>ab31</SRCCD>
<SLNR>123456</SLNR>
</MDWDATA>
<SDWDATA>

PROPS:

[ __auto__learned__ ]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]*)<MODTRANSAUDTRL>
TIME_PREFIX=<TIMESTAMP>
TIME_FORMAT=%Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD=14
TRUNCATE=1000

ERROR Message:
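One mismatch worth flagging regardless of the exact error text: the LINE_BREAKER above references <MODTRANSAUDTRL>, but the sample events begin with <MDWDATA>, so the breaker would not match the sample at all. A sketch aligned to the sample, assuming each <MDWDATA> element is one event (the stanza name is a placeholder for your sourcetype):

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<MDWDATA>
TIME_PREFIX = <TIMESTAMP>
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 14
TRUNCATE = 1000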