All Topics

Search 1 dashboard panel - Search 2 dashboard panel = third dashboard panel: the difference between two searches.

Here is my first search:

index="signa_pool" name!="Pir8Radio" | stats sum(pendingBalanceNum)

The result of the above is: 595.3440

Here is my second search:

index="signum_node" | stats latest(guaranteedBalanceNQT) as PoolBal | eval PoolBal=round(PoolBal/100000000,4)

The result of the above is: 1,904.5167

I need the third dashboard panel to show 1,904.5167 - 595.3440 = 1,309.1727.

MY QUESTION: How can I either create a final search that returns 1,309.1727, or store the previous search results as variables to use in the third panel? I'm stuck; I tried for about an hour, so any help would be greatly appreciated.
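
A minimal sketch of one way to do the subtraction in a single search for the third panel, assuming the two searches above can be combined with append (the field name PendingBal is introduced here for illustration):

index="signa_pool" name!="Pir8Radio"
| stats sum(pendingBalanceNum) as PendingBal
| append
    [ search index="signum_node"
    | stats latest(guaranteedBalanceNQT) as PoolBal
    | eval PoolBal=round(PoolBal/100000000,4) ]
| stats sum(PendingBal) as PendingBal sum(PoolBal) as PoolBal
| eval Difference=round(PoolBal-PendingBal,4)
| table Difference

The second stats collapses the two appended rows into one row, after which eval can subtract the two fields. An alternative for dashboards is to have each existing panel set a token from its result and compute the difference with an eval token in the Simple XML.
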
How do I update SMTP credentials in the backend configuration? Which config file should be updated, and do we need to encrypt the password before updating it in the backend?
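
A hedged sketch, assuming the standard email alert settings are what's meant: SMTP credentials live in the [email] stanza of alert_actions.conf (for example $SPLUNK_HOME/etc/system/local/alert_actions.conf). The hostname and username below are placeholders:

# alert_actions.conf
[email]
mailserver = smtp.example.com:587
use_tls = 1
auth_username = splunk-alerts@example.com
# The password can be written in clear text here; splunkd encrypts it
# (rewriting it as a $7$... value) the next time it starts.
auth_password = <your_password>

You normally do not need to pre-encrypt the password yourself; setting it via the UI (Settings > Server settings > Email settings) or via REST stores the encrypted form for you.
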
I'm running Splunk 8.2.1 in DEVTEST to learn how to use MLTK. Trial and Free licenses work; DEVTEST does not. I left the license in trial mode, and at that point I confirmed that MLTK was able to generate results for a few of the showcase items (disk space utilisation etc). Once I installed my DEVTEST license, this error is reported: Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist. I switched the Splunk instance to Free mode, ran the example (Predict Disk Utilization) again, and it began working again. The search.log is included below. Any suggestions/advice is welcomed. Dave

08-14-2021 17:02:44.511 INFO dispatchRunner [0 MainThread] - Search process mode: freestanding (build ddff1c41e5cf).
08-14-2021 17:02:44.511 INFO dispatchRunner [0 MainThread] - initing LicenseMgr in search process: nonPro=1
08-14-2021 17:02:44.511 INFO LMStackMgr [0 MainThread] - Initializing CleMgr...
08-14-2021 17:02:44.512 INFO LicenseMgr [0 MainThread] - Initing LicenseMgr
08-14-2021 17:02:44.515 INFO ServerConfig [0 MainThread] - Found no hostname options in server.conf. Will attempt to use default for now.
08-14-2021 17:02:44.515 INFO ServerConfig [0 MainThread] - Host name option is "".
08-14-2021 17:02:44.530 INFO ServerConfig [0 MainThread] - SSL session cache path enabled 0 session timeout on SSL server 300.000
08-14-2021 17:02:44.532 INFO ServerConfig [0 MainThread] - Splunk is starting with EC-SSC disabled
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - serverName=homeserver guid=xxxxxxxxxxxxxxxxxxxxxxxx
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - connection_timeout=30
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - send_timeout=30
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - receive_timeout=30
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=license_warnings_update_interval not found in licenser stanza of server.conf, defaulting=0
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - squash_threshold=2000
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - strict_pool_quota=1
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=pool_suggestion not found in licenser stanza of server.conf, defaulting=''
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=test_aws_metering not found in licenser stanza of server.conf, defaulting=0
08-14-2021 17:02:44.532 INFO LMConfig [0 MainThread] - key=test_aws_product_code not found in licenser stanza of server.conf, defaulting=0
08-14-2021 17:02:44.532 INFO LicenseMgr [0 MainThread] - Initing LicenseMgr runContext_splunkd=false
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - closing stack mgr
08-14-2021 17:02:44.532 INFO LMSlaveInfo [0 MainThread] - all slaves cleared
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Initalized license_warnings_update_interval=auto
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - License Manager supports Conditional Licensing Enforcement. For baked in CLE policies, window_period=60 days, max_violations=45, for stack size below 107374182400 bytes
08-14-2021 17:02:44.532 INFO LMLicense [0 MainThread] - Applying default enforcement policy for free
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Added policy WinSz=30 Warnings=3 MaxSize=0 isDefault=1 features= for free
08-14-2021 17:02:44.532 INFO LMLicense [0 MainThread] - Applying default enforcement policy for forwarder
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Added policy WinSz=30 Warnings=5 MaxSize=0 isDefault=1 features= for forwarder
08-14-2021 17:02:44.532 INFO LMStackMgr [0 MainThread] - Skipping trial license as alternative license type in use
08-14-2021 17:02:44.534 INFO LMStack [0 MainThread] - Added type=enterprise license, from file=Splunk (2).License.lic, to stack=enterprise of group=Enterprise
08-14-2021 17:02:44.534 INFO LMLicense [0 MainThread] - Applying default CLE policy for enterprise
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - created stack='enterprise'
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - Added policy WinSz=60 Warnings=45 MaxSize=107374182400 isDefault=1 features=LocalSearch for enterprise
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - Skipping trial pool stanza as alternative license in use
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - added pool auto_generated_pool_enterprise to stack enterprise
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - added pool auto_generated_pool_forwarder to stack forwarder
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - added pool auto_generated_pool_free to stack free
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - Initialized hideQuotaWarning = "0"
08-14-2021 17:02:44.534 INFO LMStackMgr [0 MainThread] - init completed [xxxxxxxxxxxxxxxxxxxxxx,Enterprise,runContext_splunkd=false]
08-14-2021 17:02:44.534 INFO LicenseMgr [0 MainThread] - StackMgr init complete...
08-14-2021 17:02:44.534 INFO LMTracker [0 MainThread] - Setting default product type='enterprise'
08-14-2021 17:02:44.534 INFO LMTracker [0 MainThread] - this is not splunkd, will perform partial init
08-14-2021 17:02:44.534 INFO LicenseMgr [0 MainThread] - Tracker init complete...
----
08-14-2021 17:02:52.634 INFO DispatchExecutor [16924 phase_1] - BEGIN OPEN: Processor=inputlookup
08-14-2021 17:02:52.634 INFO DispatchExecutor [16924 phase_1] - END OPEN: Processor=inputlookup
08-14-2021 17:02:52.634 INFO DispatchExecutor [16924 phase_1] - BEGIN OPEN: Processor=fit
08-14-2021 17:02:52.688 INFO SearchOperator:inputcsv [16924 phase_1] - sid:1628931764.26 Successfully read lookup file 'C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\lookups\server_power.csv'.
08-14-2021 17:02:52.688 INFO PreviewExecutor [8616 StatusEnforcerThread] - Preview Enforcing initialization done
08-14-2021 17:02:52.689 INFO ReducePhaseExecutor [8616 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
08-14-2021 17:02:52.690 INFO DispatchExecutor [16924 phase_1] - END OPEN: Processor=fit
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: C:\Program Files\Splunk\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin\windows_x86_64\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: "10 in version 0.20 to 100 in 0.22.", FutureWarning)
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\df_util.py:190: FutureWarning: The join_axes-keyword is deprecated. Use .reindex or .reindex_like on the result to achieve the same functionality.
08-14-2021 17:02:53.975 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: merged_df = pd.concat([original_df, additional_df], axis=1, join_axes=[original_df.index])
08-14-2021 17:02:54.308 ERROR ChunkedExternProcessor [16924 phase_1] - Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [16924 phase_1] - Not downloading remote search.log files. Reason: No remote event providers.
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [16924 phase_1] - Not downloading remote search_telemetry.json files. Reason: No remote event providers.
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [16924 phase_1] - Ending phase_1
08-14-2021 17:02:54.314 INFO UserManager [16924 phase_1] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.314 ERROR SearchOrchestrator [7440 searchOrchestrator] - Phase_1 failed due to : Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.314 INFO ReducePhaseExecutor [8616 StatusEnforcerThread] - ReducePhaseExecutor=1 action=CANCEL
08-14-2021 17:02:54.314 INFO DispatchExecutor [8616 StatusEnforcerThread] - User applied action=CANCEL while status=0
08-14-2021 17:02:54.314 ERROR SearchStatusEnforcer [8616 StatusEnforcerThread] - sid:1628931764.26 Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.314 INFO SearchStatusEnforcer [8616 StatusEnforcerThread] - State changed to FAILED due to: Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 73, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: return lookups_parse_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 57, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise e
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 47, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise LookupNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.LookupNotFoundException: Lookup does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: During handling of the above exception, another exception occurred:
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\processors\FitBatchProcessor.py", line 202, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: namespace=self.namespace,
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\models\base.py", line 177, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: parse_model_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 75, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise ModelNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.ModelNotFoundException: Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 73, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: return lookups_parse_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 57, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise e
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 47, in lookups_parse_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise LookupNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.LookupNotFoundException: Lookup does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: During handling of the above exception, another exception occurred:
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\processors\FitBatchProcessor.py", line 202, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: namespace=self.namespace,
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\models\base.py", line 177, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: parse_model_reply(reply)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\util\lookups_util.py", line 75, in parse_model_reply
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: raise ModelNotFoundException()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: util.lookup_exceptions.ModelNotFoundException: Model does not exist
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: During handling of the above exception, another exception occurred:
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\cexc\__init__.py", line 174, in run
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: while self._handle_chunk():
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\cexc\__init__.py", line 236, in _handle_chunk
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: ret = self.handler(metadata, body)
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\fit.py", line 154, in handler
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: self.controller.finalize()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\chunked_controller.py", line 241, in finalize
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: self.processor.save_model()
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: File "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\processors\FitBatchProcessor.py", line 207, in save_model
08-14-2021 17:02:54.332 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: 'Error while saving model "%s": %s' % (self.algo_options['model_name'], e)
08-14-2021 17:02:54.333 ERROR ChunkedExternProcessor [12592 ChunkedExternProcessorStderrLogger] - stderr: RuntimeError: Error while saving model "example_disk_utilization": Model does not exist
08-14-2021 17:02:54.343 INFO UserManager [8616 StatusEnforcerThread] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.345 INFO DispatchStorageManager [7440 searchOrchestrator] - Remote storage disabled for search artifacts.
08-14-2021 17:02:54.345 INFO DispatchManager [7440 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1628931764.26', username='admin')
08-14-2021 17:02:54.346 INFO UserManager [7440 searchOrchestrator] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.410 INFO SearchStatusEnforcer [13600 RunDispatch] - SearchStatusEnforcer is already terminated
08-14-2021 17:02:54.410 INFO UserManager [13600 RunDispatch] - Unwound user context: NULL -> NULL
08-14-2021 17:02:54.410 INFO LookupDataProvider [13600 RunDispatch] - Clearing out lookup shared provider map
08-14-2021 17:02:54.436 INFO ShutdownHandler [15220 Shutdown] - Shutting down splunkd
----
08-14-2021 17:02:54.436 INFO ShutdownHandler [15220 Shutdown] - Shutdown complete in 0 microseconds
08-14-2021 17:02:54.438 INFO HealthReporter [15936 MainThread] - aggregate_ingestion_latency_health with value=0 from stanza=health_reporter will disable the aggregation of ingestion latency health reporter.
08-14-2021 17:02:54.438 ERROR dispatchRunner [15936 MainThread] - RunDispatch::runDispatchThread threw error: Error in 'fit' command: Error while saving model "example_disk_utilization": Model does not exist
Hi there, I have Splunk Enterprise set up on my local machine. I was able to obtain network traffic from a particular machine, captured using Wireshark (source type: CSV file). I would like to know: how do I identify the TCP SYN scan activities of a particular IP address using Splunk?
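
A hedged sketch, assuming the Wireshark CSV export has Source, Destination, and Info columns (the Info column usually contains strings like "54321 > 80 [SYN] Seq=0 ..."; the regex, field names, and threshold below are illustrative and should be adapted to your actual extractions):

sourcetype=csv Info="*[SYN]*" NOT Info="*[SYN, ACK]*"
| rex field=Info "^(?<src_port>\d+)\s+\S+\s+(?<dest_port>\d+)\s+\[SYN\]"
| stats dc(dest_port) as distinct_ports count by Source, Destination
| where distinct_ports > 50
| sort - distinct_ports

A single source hitting many distinct destination ports with bare SYNs (and few completed handshakes) is the classic SYN scan signature; add Source="<your IP>" to the base search to focus on one address.
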
Hello, I am trying to return only the values of certain fields to be used in a subsearch. The problem I'm encountering is that I have multiple values from different fields which I want to extract. I have 4 fields: src, src_port, dst, dst_port.

If I table out the results and use format, my search reads as:

"src"="<IP>" AND "src_port"="<port>" AND "dst"="<IP>" AND "dst_port"="<port>"

What I want is only the values:

"<ip>" AND "<port>" AND "<ip>" AND "<port>"

I've tried using:

return 10 $src $src_port $dst $dst_port

which gives me the desired output, but encases the entire output in one set of quotation marks rather than quoting each value individually, as the table/format approach would.

I've also tried:

eval query = src. " " .src_port. " " .dst. " " .dst_port

which gets me closer, but outputs each set of four values encased within a single pair of quotation marks.

Can anyone help me out with the desired output?

Regards, Dan
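
A hedged sketch of one workaround: let format build the fully quoted expression, then strip the field names with a sed-style rex (the sed expression is illustrative; test it against your actual format output):

index=...
| table src src_port dst dst_port
| format
| rex mode=sed field=search "s/\"?\w+\"?=//g"

format writes its result into a field named search, e.g. ( ( src="1.2.3.4" AND src_port="443" AND ... ) ); the sed expression removes each fieldname= prefix and leaves only the individually quoted values. When this runs as a subsearch, the outer search receives the contents of the search field, which should give you "<ip>" AND "<port>" AND "<ip>" AND "<port>".
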
Hi, I am trying to figure this out. I have a data set where I need to compare DNS values. The index data contains values like:

hostname1
hostname2.domain.com
hostname3.domain

whereas the CSV file may contain:

hostname2
hostname1.domain.com
hostname3.domain

I can use join to find values that match exactly; however, what I am looking for is effectively "*indexdomain*" = "*csvdomain*". After finding the matches, I then want another table displaying the non-matching results. This is what I have so far, but all of the different variations return an odd number of hits:

| inputlookup linuxhostnames.csv
| rename hostname as DNS
| [search index=dnsdata | stats count by DNS | table DNS]

index=dnsdata SUMMARY TRACK=AGENT
| dedup DNS
| search [ | inputlookup linuxhostnames.csv | rename hostname as DNS]
| eval result=if(like(hostname,"%".DNS."%"),"Contained","Not Contained")
| table DNS, result

Worst-case scenario, I can modify the .csv file to exclude the domain.com and just leave the hostname, but a contains/like match against the index data is what I can't seem to figure out. Will appreciate any guidance. Thanks
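
A hedged sketch of one approach: normalize both sides to the bare short hostname before comparing, so hostname1 and hostname1.domain.com match (field names taken from the post; join is assumed acceptable for the data volume):

| inputlookup linuxhostnames.csv
| eval short_host=lower(mvindex(split(hostname,"."),0))
| join type=left short_host
    [ search index=dnsdata
    | stats count by DNS
    | eval short_host=lower(mvindex(split(DNS,"."),0))
    | eval in_index="Match" ]
| eval result=coalesce(in_index,"Not Matched")
| table hostname result

split(hostname,".") followed by mvindex(...,0) keeps everything before the first dot, which sidesteps the wildcard-on-both-sides problem entirely. Filtering on result="Match" or result="Not Matched" then gives you the two tables.
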
Hi All, Do we need an indexer restart on non-clustered search peers for these changes, or is reloading not enough?

https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Determinerestart

In particular, "coldPath.maxDataSizeMB" and "Enabling or disabling an index that contains data". I don't think Splunk throws any errors or blocks subsequent indexes.conf changes when this happens. I need to check the logs when a change is made to coldPath.maxDataSizeMB, but I am sure that when disabling an index, things go smoothly without a restart. Why do we need a restart then?

Thanks!
I am trying to craft a search that uses the most recent source as the basis for my search. The source is a file path <C:\foo\bar.csv>. I think that a subsearch is the best option because the source name is going to change weekly.

This is my subsearch, which returns one result with the file name:

index=foo | stats latest(source) AS SourceName | return $SourceName

This is the search that I am trying to use:

index=foo | eval source=[search index=foo | stats latest(source) AS SN | return $SN ]

But I am getting this error: Error in 'eval' command: The expression is malformed.

I have tested it using the file path instead of the subsearch and it does work, but there is one problem: I need to put the file path in quotes. I am thinking that things are breaking down because the file path has \'s in it. I tried to look into concatenating strings to put the subsearch result in quotes, and I found the strcat command, but that expects 2 fields instead of one.
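
A hedged sketch of a simpler pattern: rather than eval, use the subsearch as a filter on the base search. If the subsearch renames its result to source, its output expands to source="C:\foo\bar.csv" (quotes included) in the outer search:

index=foo
    [ search index=foo
    | stats latest(source) as source
    | fields source ]

Because the subsearch returns a field literally named source, the outer search is filtered to events whose source equals the latest path, which avoids the quoting/backslash problem in eval entirely.
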
Hi, I'm filtering Windows events from the heavy forwarder. Everything works fine and all events are filtered, except for EventCode = 0. Any idea why?
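
A hedged guess at the usual culprit, since the props/transforms aren't shown: a filter regex can fail specifically on 0 if the pattern or anchoring doesn't line up with how the event text renders that code. A minimal sketch of a nullQueue filter that matches EventCode=0 explicitly on its own line (the stanza and sourcetype names are placeholders; adjust to your config):

# transforms.conf
[drop_eventcode_0]
REGEX = (?m)^EventCode=0\s*$
DEST_KEY = queue
FORMAT = nullQueue

# props.conf
[WinEventLog:Security]
TRANSFORMS-drop0 = drop_eventcode_0

Also worth checking: if the data is XML-rendered (XmlWinEventLog), the code appears as <EventID>0</EventID> rather than EventCode=0, so a classic-format regex will never match it.
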
Hello, are there any internal logs in Splunk that show changes made to a query (saved search): who made the change, and what the change was?
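
A hedged sketch of where to look, assuming changes are made through Splunk Web or REST: edits to saved searches go through the REST API and show up in splunkd's access log, so something like this shows who POSTed to a saved-search endpoint and when (field names per the usual splunkd_access extractions):

index=_internal sourcetype=splunkd_access method=POST uri="*/saved/searches/*"
| table _time user method uri status

The audit index (index=_audit sourcetype=audittrail) records who ran which searches. Note that neither source stores a before/after diff of the SPL; to see what actually changed you would need to compare savedsearches.conf versions (e.g. from backups or version control).
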
I have an issue, and I found a posting here that I thought would fix me up, but there is something wrong and I am not sure what it is. I want to create a stacked bar chart showing a date from a datestamp field we have, an error code, and the number of devices that get that error code on that day. If I run my current search just using | timechart dc(field1), it works just fine, but uses the _time field. My datestamp field is a string with the format "2021-07-30". I tried using this code to assign the datestamp field to _time:

| eval NewTime=strptime(datestamp,"%Y-%m-%d %H:%M:%S")
| eval _time=NewTime
| timechart dc(field1) by field2

The search runs, but returns no values. Any suggestions would be helpful.
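
A likely cause, for what it's worth: the strptime format string includes %H:%M:%S, but the field only contains a date ("2021-07-30"), so strptime returns null and every event ends up with no usable _time. A sketch with the format string matched to the data:

| eval _time=strptime(datestamp,"%Y-%m-%d")
| timechart span=1d dc(field1) by field2

span=1d keeps the buckets aligned to the day granularity of the datestamp field.
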
I have an issue with the connectivity between a heavy forwarder and the deployment server. What search could I use in the GUI to diagnose the issue?
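
A hedged starting point, assuming the HF's internal logs reach your indexers: the deployment client logs its phone-home attempts under the DeploymentClient and HttpPubSubConnection components, so errors and warnings there usually show the failure reason (the host value is a placeholder):

index=_internal host=<your_hf> sourcetype=splunkd
    (component=DeploymentClient OR component=HttpPubSubConnection)
| table _time log_level component event_message

On the deployment server side, | rest /services/deployment/server/clients lists the clients that have successfully phoned home, which tells you whether the HF is reaching it at all.
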
I have Splunk set up on an air-gapped network (no internet connection). The search head is a single instance running 8.1.1. There are about 320 machines on the network: mostly Windows 10 clients, with 2016 and 2019 servers. The clients are running universal forwarder 7.1.1. I've created a couple of reports to see different things about the computers on the network (OS, whether the Print log is turned on, etc). I get very inconsistent results from these reports (and other searches I'll do). E.g., if I do a search on the Security event log for EventCode 4608 (restart event), I'll only get about 80 results (clients reboot nightly) when I should be getting closer to 300. I've searched the event logs of machines that aren't on the report and they have the event code logged, but it's not being reported to Splunk. I've checked everything I can think of: uninstalled/reinstalled the forwarder, installed different versions of the forwarder, etc. I can't figure out why one machine will report all events and another only reports some events (4624, 4627, 4634). Has anyone else had this issue? Thank you
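
A hedged triage search: compare the set of hosts that are forwarding anything at all against the set reporting the 4608 event, to separate "forwarder not connecting" from "forwarder connected but missing events" (the index name is a placeholder for wherever your Security events land):

| tstats latest(_time) as last_seen where index=_internal by host
| join type=left host
    [ search index=<wineventlog_index> sourcetype="WinEventLog:Security" EventCode=4608
    | stats count as restart_events by host ]
| where isnull(restart_events)
| convert ctime(last_seen)

Hosts that appear here are talking to the indexers but not delivering 4608s, which points at event-log permissions, WinEventLog input settings (start_from, current_only), or checkpoint issues on those specific machines rather than network problems.
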
operationName    urls                                              avg_time   max_time   count
MethodUsingGET   https://www.google.com/api/v1/571114808/CAR.202  3255       3255       2
                 https://www.google.com/api/v1/571114899
UsingGET         https://www.googleA.com/api/v1/571114888/api/    1316.889   5345       18
                 https://www.googleB.com/api/v1/571114877/api/

I would like to display only one URL per operationName, but the count (and timings) should still include all of them. Is there a way?
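
A hedged sketch, assuming the table is produced by a stats that collects urls as a multivalue field (the upstream field names url and response_time are placeholders for whatever your search actually uses):

... | stats values(url) as urls avg(response_time) as avg_time max(response_time) as max_time count by operationName
| eval urls=mvindex(urls, 0)

mvindex(urls, 0) keeps only the first URL for display, while the avg/max/count were already computed across all of the events. If urls is already multivalue in your existing results, appending just the | eval line should be enough.
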
Kindly help with the below scenario, where I need to compare two different columns created from different sourcetypes. For example:

| appendcols
    [search index="X" sourcetype="xy" | table ID, CASE_ID]
    [search index="X" sourcetype="YZ" OR sourcetype="ABC" | table Role, Name, NewID]

Now I need to match ID and NewID, which have similar values but not in the same row:

ID    NewID
123   789
456   123
789   987
987   456

The result should come back as a match for this data. I have tried many ways, like | foreach ID [eval status=if(match(ID, NewID), "YES", "NO")], but nothing worked. Please provide your suggestions.
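
A hedged thought on why the row-wise attempts fail: appendcols pairs rows purely by position, so match(ID, NewID) only ever compares values that happen to land on the same row. One alternative sketch is to compare each value against the whole of the other column by normalizing both into one field (index and sourcetype names from the post; the key/side field names are introduced for illustration):

index="X" (sourcetype="xy" OR sourcetype="YZ" OR sourcetype="ABC")
| eval key=coalesce(ID, NewID)
| eval side=if(sourcetype="xy", "ID", "NewID")
| stats values(side) as sides values(CASE_ID) as CASE_ID values(Role) as Role values(Name) as Name by key
| eval status=if(mvcount(sides) > 1, "YES", "NO")

A value that occurs as an ID in sourcetype xy and as a NewID in the other sourcetypes ends up with both entries in sides, so status=YES flags it as a match regardless of which row it appeared on.
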
This is affecting one of our HFs that we use to ingest external data via scripts, vendor-provided apps, and REST API polls. For the REST API part we use the REST API Modular Input app (https://splunkbase.splunk.com/app/1546/). The REST inputs worked without any issues when we were at Splunk Enterprise 7.1.3. After upgrading SE to 8.1.1 and the rest_ta app to 2.0.1 last weekend, none of the scheduled REST inputs worked. The problem is, this only happens on this server. The REST inputs still work on a separate dev server that was also upgraded to SE 8.1.1 and rest_ta 2.0.1.

I see the following set of error events in splunkd.log, but they only show up when I make a change to any of the REST inputs, like changing the cron schedule to force it to run at the next minute:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/splunk/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/splunk/etc/apps/rest_ta/bin/rest.py", line 447, in do_run
    endpoint_list[i] = endpoint.replace(replace_key,c['clear_password'])
  File "/opt/splunk/lib/python3.7/site-packages/splunk/entity.py", line 574, in __getitem__
    return self.properties[key]
KeyError: 'clear_password'

I do not see any errors at the times when the cron schedules are supposed to execute the API calls, so it feels like the rest_ta app itself just quit working. Honestly, I'm a bit lost trying to interpret the errors. Has anyone seen something similar, or have any tips on how to resolve this? I tried removing the app completely, restarting splunkd, then reinstalling and reconfiguring rest_ta 2.0.1 from scratch. Still none of the scheduled jobs run, and the same errors still only show up after I modify one of the REST inputs.

Here's one of the several REST inputs configured. They're all identical in that I'm only using the bundled "JSONArrayHandler" response handler to process the returning JSON data from Infoblox; it's not customized in any way.

[rest://InfoBlox_Networks]
activation_key = --snip--
auth_password = {encrypted:splunk_svc_user}
auth_type = basic
auth_user = splunk_svc_user
delimiter = :
endpoint = https://a.b.c.d/wapi/v2.6.1/network?_max_results=15000
host = a.b.c.d
http_method = GET
index = infoblox
index_error_response_codes = 1
log_level = INFO
polling_interval = 3 * * * *
request_timeout = 60
response_handler = JSONArrayHandler
response_type = json
sequential_mode = 0
sourcetype = infoblox:api:network
streaming_request = 0
Hello, I used the following search to convert the Date field in the CSV so Splunk could read it. I would like to create a chart using the Date and Amount fields, but am having no luck.

source="graph info-csv.csv" host="Tom1-PC" sourcetype="csv"
| convert timeformat="%m/%d/%Y" mktime(Date) as numdate
| reverse
| table Date Amount

Any help would be appreciated.
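
A hedged sketch of one fix: convert/mktime creates a numeric epoch field (numdate) but doesn't touch _time, and table drops the chart-friendly time axis. Assigning the parsed date to _time lets timechart do the work:

source="graph info-csv.csv" host="Tom1-PC" sourcetype="csv"
| eval _time=strptime(Date, "%m/%d/%Y")
| timechart span=1d sum(Amount) as Amount

sum(Amount) is one guess at the aggregation; avg() or values() may fit your data better.
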
I track the overall CPU usage on a server with:

index=mcadth_metrics host=IS20_DB sourcetype=PerfmonMk:CPU instance=_Total

It works well for all other servers, and it worked for this server until it went down for a reboot about 10 days ago. The server's average CPU load (%_Processor_Time) is usually around 40%, but has been reporting in Splunk at about 5% since the reboot. No config was changed over the time of the reboot, and the Splunk forwarder has been restarted with no change. The first image below shows the Splunk search for %_Processor_Time returning an average of 5.35%; the second shows actual server metrics reporting 41% utilization.
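
A hedged diagnostic, since a reboot can change which Perfmon instances exist or how they are enumerated: check what instances the forwarder has actually been sending since the reboot, and whether the per-instance values look plausible (quoting of the counter field may need adjusting for your extractions):

index=mcadth_metrics host=IS20_DB sourcetype=PerfmonMk:CPU
| stats count avg("%_Processor_Time") as avg_pct by instance

If _Total suddenly reports far fewer samples than the per-core instances, or the counter values changed scale, that points at the Perfmon object on the host rather than at Splunk; rebuilding the performance counters on the Windows server (lodctr /R) is a common fix after they become corrupted.
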
Hello, I am pretty new to Splunk and just feel lost at times. I have a question that I can't seem to find an answer for. I have data where each row contains a list of entries for timestamp and Total (see screenshot), and there are multiple rows with the same type of list. I want to turn each row into a line on a line chart, where the x-axis is the timestamp and the y-axis is the Total: sort of like overlapping line charts based on all the rows. Anyone have ideas?
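
A hedged sketch, assuming each event carries multivalue timestamp and Total fields plus something identifying the row (row_id below is a hypothetical name, and the timestamp format is a guess; adjust both to your data):

... | eval pair=mvzip(timestamp, Total)
| mvexpand pair
| eval _time=strptime(mvindex(split(pair, ","), 0), "%Y-%m-%d %H:%M:%S")
| eval Total=tonumber(mvindex(split(pair, ","), 1))
| timechart avg(Total) by row_id

mvzip/mvexpand flattens each timestamp/Total pair into its own event, and by row_id in timechart then draws one overlapping line per original row.
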
Hi, I am trying to check whether a date stored in a field in my table is within the last 24 hours from the moment the search is run. I do NOT mean the time range of the search itself; that is set to 30 days in my case and I can't change it. I want to check the value within only a specific field. For example, I receive the following date: 2021-05-13T12:02:44.000+0000, and I need to know whether it is from the last 24 hours or not. So far I am out of luck; any ideas?
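
A hedged sketch, assuming the value lives in a field called event_date (a placeholder name) and always has that exact format:

... | eval event_epoch=strptime(event_date, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval within_24h=if(now() - event_epoch <= 86400, "yes", "no")

strptime converts the string to epoch seconds (%3N for the milliseconds, %z for the +0000 offset), and now() is evaluated at search time, so the comparison is against the moment the search runs regardless of the 30-day time range.
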