All Topics


So I need to run a search on a firewall index where I need to look for field values matching two lookup files, src.csv and dst_withsubnets.csv, and output the corresponding fields.

Test SPL from my lab:

| makeresults
| eval src_ip="1.1.1.1", src_translated_ip="3.3.3.3", dest_ip="192.168.1.1", dest_port=443, action="drop"
| join src_ip [| inputlookup src.csv | rename src AS src_ip]
| join dest_ip [| inputlookup dst_withsubnets.csv | rename dst AS dest_ip]
| table _time, src_ip, src_translated_ip, dest_ip, dest_port, action

src.csv:
1.1.1.1

dst_withsubnets.csv:
dst
192.168.1.0/24

As you can see, the SPL is searching for dest_ip in a lookup that only has destination subnets. To make it work, I have also added the following to transforms.conf:

[dst_withsubnets]
filename = dst_withsubnets.csv
match_type = CIDR(dst)
max_matches = 1

However, it's still not working.
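A minimal sketch of the likely fix, under one assumption: match_type = CIDR only takes effect when the lookup is invoked through its lookup definition, i.e. via the lookup command. inputlookup inside a join reads the raw CSV and compares strings, so "192.168.1.1" never equals "192.168.1.0/24". Assuming the [dst_withsubnets] stanza above is visible to your search app, something like this exercises the CIDR match (the matched_subnet field name is arbitrary):

| makeresults
| eval src_ip="1.1.1.1", src_translated_ip="3.3.3.3", dest_ip="192.168.1.1", dest_port=443, action="drop"
| lookup dst_withsubnets dst AS dest_ip OUTPUTNEW dst AS matched_subnet
| where isnotnull(matched_subnet)
| table _time, src_ip, src_translated_ip, dest_ip, dest_port, action

The same pattern applies to the real firewall search: run lookup against the events and keep only rows where the output field is non-null.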
I need to trigger an alert when a process ID is not running. Here is my query:

index=os source=ps sourcetype=ps host=gk2406 process=ora_d4 process_id=5955
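A minimal sketch of one way to alert on absence, assuming the field names above are correct: count the matching events and keep the result only when the count is zero.

index=os source=ps sourcetype=ps host=gk2406 process=ora_d4 process_id=5955
| stats count
| where count == 0

Saved as a scheduled alert that triggers when the number of results is greater than zero, this fires only when the process is missing from the ps data for the search window.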
Hello, I wrote a props.conf configuration for the following CSV file but am getting an error message. Any help will be highly appreciated. Thank you so much.

[ csv ]
SHOULD_LINEMERGE=false
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
TIME_FORMAT=%Y%m%d %H:%M:%S:%Q
HEADER_FIELD)LINE_NUMBER=1
TIMESTAMP_FIELDS=TIMESTAMP
category=Structured
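Two things in that stanza stand out, so here is a corrected sketch (assuming the sourcetype is meant to be named csv): HEADER_FIELD)LINE_NUMBER looks like a typo for HEADER_FIELD_LINE_NUMBER, and stanza names should not contain stray spaces, so [ csv ] should be [csv].

[csv]
SHOULD_LINEMERGE = false
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
TIME_FORMAT = %Y%m%d %H:%M:%S:%Q
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = TIMESTAMP
category = Structured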
Hello Splunk community,

When trying to splice multiple events so that they generate a specific output from a Splunk index, I've often been running into the "Regex: syntax error in subpattern name (missing terminator)" error.

=========================================================================

For example, these events are in a Splunk index (each line is a different Splunk event):

“This is one way to do everything”
“Regular Expressions in Splunk”
“test: 123fourfive”
“and escape characters”
“test: !A-Z”
“are an interesting exercise in”
“test: ~Lettersand Numbers”
“finding out how Regex works”
“test: What is the? AndWhen to use it!”
“in Splunk.”
“test:

This is the Splunk query:

*randomsplunkindex*|rex field=_raw “(?<OUTPUT>(?<=” “).*(?=” “test:))”

I'm trying to get the output between the two quotes, so that the output would be:

Regular Expressions in Splunk
and escape characters
are an interesting exercise in
finding out how Regex works
in Splunk.

However, I've run into the "Regex: syntax error in subpattern name (missing terminator)" error. I've tried these combinations of escape characters to avoid it:

*randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=\" \").*(?=\" \"test:))"
*randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=\" ").*(?=" \"test:))"
*randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=\" \").*(?=" "test:))"
*randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=" ").*(?=\" \"test:))"
*randomsplunkindex*|rex field=_raw "(?<OUTPUT>\(?<=" ").*(?=" "test:)\)"

Is there any way to use regular expressions with rex to extract the output when an event contains characters like " or '?
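A different angle, sketched under an assumption: if each event is a single line wrapped in quotes, it may be simpler to capture everything between the outer quotes with an anchored pattern and then discard the test: events afterwards, instead of fighting the lookaround escaping. The pattern below assumes straight quotes; if the events really contain curly quotes (“ ”) you would put those characters in the regex instead.

*randomsplunkindex*
| rex field=_raw "^\"(?<OUTPUT>.*)\"$"
| where NOT like(OUTPUT, "test:%")
| table OUTPUT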
Hello, how can I change my profile email? I don't have any subscription, so I can't create a support case. Thanks in advance.
Hi, how can I create an issue (on demand) in my issue tracker from Splunk? E.g., while searching through the logs I suddenly find two events that need work on them; I then hit a button in Splunk and it automatically creates an issue in my issue tracker and attaches those events to it.

FYI: I know an alert can do this, but an alert is an automatic process; I need this on demand.

Any idea? Thanks
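One on-demand option, sketched under assumptions: a workflow action in workflow_actions.conf can POST an event's fields to an HTTP endpoint from the event's context menu in search results. The tracker URL and POST field names below are hypothetical; substitute your tracker's real issue-creation API.

[create_tracker_issue]
label = Create issue in tracker
type = link
link.method = post
link.uri = https://tracker.example.com/api/issues
link.postargs.1.key = description
link.postargs.1.value = $_raw$
display_location = event_menu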
Dears,

Hope you're doing great. The Splunk indexer is not booting, but the other servers are booting and working (all servers are CentOS 7); we would appreciate your help.

PS: we have made an update on the vCenter domain admin user (changes to the domain admin name).

Kindly please help,
Best Regards,
Yousef H.
Hi Team, I have a situation where I want my team to have power-user access in production (for creating knowledge objects) but with no write access to KOs whose owner is "nobody". Only one user should have the write capability. Is there any way I can achieve this configuration? I do not want to create a separate role for only the one user who will have write access.
Hi, I have a compressed file that contains several files, but source only shows the compressed file. E.g., the compressed file's name is log.bz2 and it contains log1, log2, and log3. Currently source just shows log.bz2; how can I find which event belongs to which file? Something like: log.bz2 > log2. Any idea? Thanks
I'm having trouble indexing and monitoring the alerts.log file from OSSEC. I've tried manually adding "/var/ossec/alerts/alerts.log" to the data inputs with source type set to automatic and index set to default, but with no luck. When I search in the default Search and Reporting app, no alerts show up, and when I use the Reporting and Management app for OSSEC this error shows up. I've tried rebuilding the lookup table as well, but no luck. Attached are screenshots showing the file data inputs and the result from regenerating the lookup table. If anyone has any idea how to properly set up the app, please let me know. Thanks
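For comparison, a minimal monitor stanza for inputs.conf; the index and sourcetype values here are assumptions, and the OSSEC app generally expects a specific sourcetype rather than an automatic one, so check the app's documentation for the exact name it searches on.

[monitor:///var/ossec/alerts/alerts.log]
disabled = false
index = ossec
sourcetype = ossec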
Splunk 8's HEC defaults to TLSv1.2 only, but I need to allow TLSv1.1 with AES256-SHA so that puppetserver 2.7.0 can connect. So far, I've figured that in order to affect the HEC protocols I must also alter $SPLUNK_HOME/etc/system/local/web.conf. So I changed sslVersion to *, and made sure that AES256-SHA is in cipherSuite. I can verify that TLSv1.1 is allowed when using the openssl command line to connect; the same code in Puppet's splunk_hec reporter is also able to connect via TLSv1.1 when invoked from native Ruby (Ruby 2.0). But I cannot externally examine the exact cipher used, even with Wireshark.

Anyway, even with this setup on Splunk's side, I still get "ssl3_get_client_hello:no shared cipher" when puppetserver tries to connect. The difference is that puppetserver 2.7.0 runs in an outdated JRuby that uses Ruby 1.9. Nevertheless, https://ask.puppet.com/question/33316/puppet-https-connection-using-latest-tls-version-and-cipher-suites/ states "the only way to get puppet to successfully connect is to enable the AES256-SHA cipher." So I would expect the combination to be successful.

What other things do I need to change?
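One place worth checking, sketched under an assumption: HEC's TLS settings live in the [http] stanza of inputs.conf (under the splunk_httpinput app), separate from web.conf, so changes to web.conf alone may never reach the HEC listener. Something like the following, with the cipher list adjusted to your policy:

# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
sslVersions = tls1.1, tls1.2
cipherSuite = AES256-SHA:TLSv1.2:!eNULL:!aNULL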
Search 1 dashboard panel - Search 2 dashboard panel = third dashboard panel showing the difference between the two searches.

Here is my first search:

index="signa_pool" name!="Pir8Radio" | stats sum(pendingBalanceNum)

The result of the above is: 595.3440

Here is my second search:

index="signum_node" | stats latest(guaranteedBalanceNQT) as PoolBal | eval PoolBal=round(PoolBal/100000000,4)

The result of the above is: 1,904.5167

I need the third dashboard panel to compute 1,904.5167 - 595.3440 = 1,309.1727.

MY QUESTION: How can I either create a final search that equals 1,309.1727, or store previous search results as a variable to use in the third panel? I'm stuck lol, tried for about an hour, so any help would be greatly appreciated.
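A single-search sketch for the third panel, using appendcols to bolt the two results together (field names taken from the searches above; the sum is aliased so the subtraction has something to reference):

index="signum_node"
| stats latest(guaranteedBalanceNQT) AS PoolBal
| eval PoolBal=round(PoolBal/100000000,4)
| appendcols
    [ search index="signa_pool" name!="Pir8Radio"
      | stats sum(pendingBalanceNum) AS PendingBal ]
| eval Difference=round(PoolBal-PendingBal,4)
| table Difference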
How do I update SMTP credentials in the back-end config? Which config file should we update them in? And do we need to encrypt the password before updating it in the back end?
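For reference, a sketch of where these usually live: email settings sit in the [email] stanza of alert_actions.conf. In my understanding, a plaintext auth_password is encrypted by splunkd on restart, so you can set it in clear text once and let Splunk rewrite it; all values below are placeholders.

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
mailserver = smtp.example.com:587
use_tls = 1
auth_username = alerts@example.com
auth_password = changeme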
I'm running Splunk 8.2.1 in DEVTEST to learn how to use MLTK. Trial and Free licenses work; DEVTEST does not. I left the license in trial mode, and at that point I confirmed that MLTK was able to generate results for a few of the showcase items (disk space utilisation etc). Once I installed my DEVTEST license, this error is reported: Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist. I switched the Splunk instance to Free mode, ran the example (Predict Disk Utilization) again, and it began working again. The search.log is included below. Any suggestions/advice is welcomed. Dave

08-14-2021 17:02:44.511 INFO dispatchRunner [0 MainThread] - Search process mode: freestanding (build ddff1c41e5cf).
08-14-2021 17:02:44.511 INFO dispatchRunner [0 MainThread] - initing LicenseMgr in search process: nonPro=1
08-14-2021 17:02:44.511 INFO LMStackMgr [0 MainThread] - Initializing CleMgr...
08-14-2021 17:02:44.512 INFO LicenseMgr [0 MainThread] - Initing LicenseMgr
08-14-2021 17:02:44.515 INFO ServerConfig [0 MainThread] - Found no hostname options in server.conf. Will attempt to use default for now.
08-14-2021 17:02:44.515 INFO ServerConfig [0 MainThread] - Host name option is "".
08-14-2021 17:02:44.530 INFO ServerConfig [0 MainThread] - SSL session cache path enabled 0 session timeout on SSL server 300.000
08-14-2021 17:02:44.532 INFO ServerConfig [0 MainThread] - Splunk is starting with EC-SSC disabled
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - serverName=homeserver guid=xxxxxxxxxxxxxxxxxxxxxxxx
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - connection_timeout=30
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - send_timeout=30
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - receive_timeout=30
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=license_warnings_update_interval not found in licenser stanza of server.conf, defaulting=0
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - squash_threshold=2000
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - strict_pool_quota=1
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=pool_suggestion not found in licenser stanza of server.conf, defaulting=''
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=test_aws_metering not found in licenser stanza of server.conf, defaulting=0
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=test_aws_product_code not found in licenser stanza of server.conf, defaulting=0
08-14-2021 17:02:44.532 INFO LicenseMgr [0 MainThread] - Initing LicenseMgr runContext_splunkd=false
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - closing stack mgr
08-14-2021 17:02:44.532 INFO LMSlaveInfo [0 MainThread] - all slaves cleared
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Initalized license_warnings_update_interval=auto
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - License Manager supports Conditional Licensing Enforcement. For baked in CLE policies, window_period=60 days, max_violations=45, for stack size below 107374182400 bytes
08-14-2021 17:02:44.532 INFO LMLicense [0 MainThread] - Applying default enforcement policy for free
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Added policy WinSz=30 Warnings=3 MaxSize=0 isDefault=1 features= for free
08-14-2021 17:02:44.532 INFO LMLicense [0 MainThread] - Applying default enforcement policy for forwarder
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Added policy WinSz=30 Warnings=5 MaxSize=0 isDefault=1 features= for forwarder
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Skipping trial license as alternative license type in use
08-14-2021 17:02:44.534 INFO LMStack [0 MainThread] - Added type=enterprise license, from file=Splunk (2).License.lic, to stack=enterprise of group=Enterprise
08-14-2021 17:02:44.534 INFO LMLicense [0 MainThread] - Applying default CLE policy for enterprise
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - created stack='enterprise'
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - Added policy WinSz=60 Warnings=45 MaxSize=107374182400 isDefault=1 features=LocalSearch for enterprise
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - Skipping trial pool stanza as alternative license in use
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - added pool auto_generated_pool_enterprise to stack enterprise
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - added pool auto_generated_pool_forwarder to stack forwarder
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - added pool auto_generated_pool_free to stack free
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - Initialized hideQuotaWarning = "0"
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - init completed [xxxxxxxxxxxxxxxxxxxxxx,Enterprise,runContext_splunkd=false]
08-14-2021 17:02:44.534 INFO LicenseMgr [0 MainThread] - StackMgr init complete...
08-14-2021 17:02:44.534 INFO LMTracker [0 MainThread] - Setting default product type='enterprise'
08-14-2021 17:02:44.534 INFO LMTracker [0 MainThread] - this is not splunkd, will perform partial init
08-14-2021 17:02:44.534 INFO LicenseMgr [0 MainThread] - Tracker init complete...
----
08-14-2021 17:02:52.634 INFO DispatchExecutor [16924 phase_1] - BEGIN OPEN: Processor=inputlookup
08-14-2021 17:02:52.634 INFO DispatchExecutor [16924 phase_1] - END OPEN: Processor=inputlookup
08-14-2021 17:02:52.634 INFO DispatchExecutor [16924 phase_1] - BEGIN OPEN: Processor=fit
08-14-2021 17:02:52.688 INFO SearchOperator:inputcsv [16924 phase_1] - sid:1628931764.26 Successfully read lookup file 'C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\lookups\server_power.csv'.
08-14-2021 17:02:52.688 INFO PreviewExecutor [8616 StatusEnforcerThread] - Preview Enforcing initialization done
08-14-2021 17:02:52.689 INFO ReducePhaseExecutor [8616 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
08-14-2021 17:02:52.690 INFO DispatchExecutor [16924 phase_1] - END OPEN: Processor=fit
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: C:\Program Files\Splunk\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin\windows_x86_64\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: "10 in version 0.20 to 100 in 0.22.", FutureWarning)
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\df_util.py:190: FutureWarning: The join_axes-keyword is deprecated. Use .reindex or .reindex_like on the result to achieve the same functionality.
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: merged_df = pd.concat([original_df, additional_df], axis=1, join_axes=[original_df.index])
08-14-2021 17:02:54.308 ERROR ChunkedExternProcessor [16924 phase_1] - Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [16924 phase_1] - Not downloading remote search.log files. Reason: No remote event providers.
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [16924 phase_1] - Not downloading remote search_telemetry.json files. Reason: No remote event providers.
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [16924 phase_1] - Ending phase_1
08-14-2021 17:02:54.314 INFO UserManager [16924 phase_1] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.314 ERROR SearchOrchestrator [7440 searchOrchestrator] - Phase_1 failed due to : Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [8616 StatusEnforcerThread] - ReducePhaseExecutor=1 action=CANCEL
08-14-2021 17:02:54.314 INFO DispatchExecutor [8616 StatusEnforcerThread] - User applied action=CANCEL while status=0
08-14-2021 17:02:54.314 ERROR SearchStatusEnforcer [8616 StatusEnforcerThread] - sid:1628931764.26 Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.314 INFO SearchStatusEnforcer [8616 StatusEnforcerThread] - State changed to FAILED due to: Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 73, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: return lookups_parse_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 57, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise e
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 47, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise LookupNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.LookupNotFoundException: Lookup does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: During handling of the above exception, another exception occurred:
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\processors\FitBatchProcessor.py", line 202, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: namespace=self.namespace,
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\models\base.py", line 177, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: parse_model_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 75, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise ModelNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.ModelNotFoundException: Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 73, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: return lookups_parse_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 57, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise e
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 47, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise LookupNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.LookupNotFoundException: Lookup does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: During handling of the above exception, another exception occurred:
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\processors\FitBatchProcessor.py", line 202, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: namespace=self.namespace,
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\models\base.py", line 177, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: parse_model_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 75, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise ModelNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.ModelNotFoundException: Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: During handling of the above exception, another exception occurred:
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\cexc\__init__.py", line 174, in run
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: while self._handle_chunk():
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\cexc\__init__.py", line 236, in _handle_chunk
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: ret = self.handler(metadata, body)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\fit.py", line 154, in handler
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: self.controller.finalize()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\chunked_controller.py", line 241, in finalize
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: self.processor.save_model()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\processors\FitBatchProcessor.py", line 207, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: 'Error while saving model "%s": %s' % (self.algo_options['model_name'], e)
08-14-2021 17:02:54.333 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: RuntimeError: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.343 INFO UserManager [8616 StatusEnforcerThread] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.345 INFO DispatchStorageManager [7440 searchOrchestrator] - Remote storage disabled for search artifacts.
08-14-2021 17:02:54.345 INFO DispatchManager [7440 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1628931764.26', username='admin')
08-14-2021 17:02:54.346 INFO UserManager [7440 searchOrchestrator] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.410 INFO SearchStatusEnforcer [13600 RunDispatch] - SearchStatusEnforcer is already terminated
08-14-2021 17:02:54.410 INFO UserManager [13600 RunDispatch] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.410 INFO LookupDataProvider [13600 RunDispatch] - Clearing out lookup shared provider map
08-14-2021 17:02:54.436 INFO ShutdownHandler [15220 Shutdown] - Shutting down splunkd
----
08-14-2021 17:02:54.436 INFO ShutdownHandler [15220 Shutdown] - Shutdown complete in 0 microseconds
08-14-2021 17:02:54.438 INFO HealthReporter [15936 MainThread] - aggregate_ingestion_latency_health with value=0 from stanza=health_reporter will disable the aggregation of ingestion latency health reporter.
08-14-2021 17:02:54.438 ERROR dispatchRunner [15936 MainThread] - RunDispatch::runDispatchThread threw error: Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
Hi there, I have Splunk Enterprise set up on my local machine. I was able to obtain network traffic from a particular machine, captured using Wireshark (source type: CSV file). I would like to know: how do I identify the TCP SYN scan activities of a particular IP address using Splunk?
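A rough sketch of the usual approach; the index and field names below (src_ip, dest_ip, dest_port, tcp_flags) are assumptions standing in for whatever your CSV export extracted, so rename them to match. A SYN scan tends to show up as one source sending bare SYNs to many distinct ports (or hosts) in a short window:

index=wireshark_csv src_ip="192.168.1.50" tcp_flags="SYN"
| stats dc(dest_port) AS ports_probed count by src_ip, dest_ip
| where ports_probed > 100
| sort - ports_probed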
Hello,

I am trying to return only the values of certain fields to be used in a subsearch. The problem I'm encountering is that I have multiple values from different fields which I want to extract. I have 4 fields: src, src_port, dst, dst_port.

If I table out the results and use format, my search reads as:
"src"="<IP>" AND "src_port"="<port>" AND "dst"="<IP>" AND "dst_port"="<port>"

What I want is only the values:
"<ip>" AND "<port>" AND "<ip>" AND "<port>"

I've tried using:
return 10 $src $src_port $dst $dst_port
which gives me the desired output, but encases the entire output in one set of quotations, not individually as the table command would.

I've also tried:
eval query = src. " " .src_port. " " .dst. " " .dst_port
which gets me closer, but then outputs all four values encased within one set of quotations.

Can anyone help me out with the desired output?

Regards, Dan
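A sketch of one workaround, with an arbitrary field name (query): build the quoted string yourself in eval, escaping the quote characters, and hand only that field back to the outer search.

...
| eval query="\"".src."\" AND \"".src_port."\" AND \"".dst."\" AND \"".dst_port."\""
| return 10 $query

Each returned row then expands to "<ip>" AND "<port>" AND "<ip>" AND "<port>", with every value quoted individually.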
Hi, I am trying to figure this out: I have a data set where I need to compare the DNS values. The index data contains values like:

hostname1
hostname2.domain.com
hostname3.domain

whereas the csv file may also contain:

hostname2
hostname1.domain.com
hostname3.domain

I can use join to find values that MATCH exactly; however, what I am looking for is "*indexdomain*" = "*csvdomain*". After finding the matches, I then want a second table displaying the NOT matched results.

This is what I have so far, but all of the different variations are returning odd numbers of hits:

| inputlookup linuxhostnames.csv
| rename hostname as DNS
| [search index=dnsdata | stats count by DNS | table DNS]

index=dnsdata SUMMARY TRACK=AGENT
| dedup DNS
| search [ | inputlookup linuxhostnames.csv | rename hostname as DNS]
| eval result=if(like(hostname,"%".DNS."%"),"Contained","Not Contained")
| table DNS, result

Worst-case scenario, I can modify the .csv file to exclude the domain.com and just leave the hostname, but a contains/like comparison against the index is what I can't seem to figure out.

Will appreciate any guidance. Thanks
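A sketch of one way to sidestep the wildcard problem: normalize both sides to the bare host (the first dot-separated label, lowercased) and compare those instead. Index, lookup, and field names are taken from the searches above; the short_host helper field is an invention.

index=dnsdata SUMMARY TRACK=AGENT
| dedup DNS
| eval short_host=lower(mvindex(split(DNS,"."),0))
| join type=left short_host
    [| inputlookup linuxhostnames.csv
     | eval short_host=lower(mvindex(split(hostname,"."),0))
     | fields short_host hostname]
| eval result=if(isnotnull(hostname),"Contained","Not Contained")
| table DNS, result

The left join keeps every indexed DNS value, so the "Not Contained" rows fall out of the same search rather than needing a second one.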
Hi All, do we need an indexer restart on non-clustered search peers for these changes, or is reloading not enough? https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Determinerestart — in particular "coldPath.maxDataSizeMB" and "Enabling or disabling an index that contains data". I don't think Splunk throws any errors or blocks subsequent indexes.conf changes when a restart is skipped. I still need to check the logs for what happens when coldPath.maxDataSizeMB is changed, but I am sure that when disabling an index things go smoothly without a restart. Why do we need a restart then? Thanks!
I am trying to craft a search that uses the most recent source as the basis for my search. The source is a file path <C:\foo\bar.csv>. I think a subsearch is the best option because the source name is going to change weekly.

This is my subsearch, which returns one result with the file name:

index=foo | stats latest(source) AS SourceName | return $SourceName

This is the search that I am trying to use:

index=foo | eval source=[search index=foo | stats latest(source) AS SN | return $SN ]

But I am getting this error: Error in 'eval' command: The expression is malformed.

I have tested it with the file path in place of the subsearch and it does work, but there is one problem: I need to put the file path in quotes. I am thinking that things break down because the file path has \'s in it. I looked into concatenating strings to put the subsearch in quotes and found the strcat command, but that takes two fields instead of one.
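A sketch of a simpler shape: skip eval entirely and let the subsearch's implicit formatting supply the quoting. If the subsearch returns a single row with a field named source, it expands into the outer search as a properly quoted source="..." term, backslashes and all.

index=foo
    [ search index=foo
      | stats latest(source) AS source
      | fields source ]

stats already reduces to one row, and keeping the field name as source means the generated term filters on the right field, so no return or strcat is needed.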
Hi, I'm filtering Windows events from the heavy forwarder. Everything works fine and all events are filtered except for EventCode=0. Any idea why?
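For comparison, a hedged sketch of a props/transforms pair that drops only events whose EventCode is exactly 0; the props stanza name is a placeholder for your actual Windows sourcetype, and this assumes classic (non-XML) WinEventLog events. A common gotcha is an unanchored REGEX such as EventCode=0 behaving differently from the other codes once "0" appears as a substring elsewhere in the event, so anchoring the value helps.

# props.conf
[WinEventLog]
TRANSFORMS-drop_eventcode0 = drop_eventcode_0

# transforms.conf
[drop_eventcode_0]
REGEX = (?m)^EventCode=0\s*$
DEST_KEY = queue
FORMAT = nullQueue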