Hi all,

We are trying to deploy pre-trained Deep Learning models for ESCU. DSDL has been installed and the containers are loaded successfully. The connection with Docker is also in good shape. But when running the ESCU search, I am getting the following error messages:

    MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing'
    MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}

From search.log:

    01-09-2024 09:56:44.725 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC endpoint: https://docker_host:32802
    01-09-2024 09:56:44.850 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] called with payload (2298991 bytes)
    01-09-2024 09:56:45.166 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] returned with payload (134 bytes) with status 200
    01-09-2024 09:56:45.166 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing'
    01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC parameters: [same MLTKC parameters as above]
    01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: apply ended with options [same MLTKC parameters as above]

Has anyone run into this before? We have the Golden Image CPU container running. The same errors show up in the container logs. Thanks
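The error says the container's Python environment is missing the keras_preprocessing package. As a quick diagnostic sketch (not an official DSDL tool — just a generic check you could run with the container's Python interpreter, e.g. via docker exec), you can list which of the model's required modules are actually importable:

```python
import importlib.util

def missing_modules(required):
    """Return the subset of module names that cannot be imported
    in the current Python environment."""
    return [name for name in required if importlib.util.find_spec(name) is None]

# Module reported missing in the MLTKC error above.
needed = ["keras_preprocessing"]

for mod in missing_modules(needed):
    print(f"missing: {mod} -- consider: pip install {mod}")
```

If the module is indeed missing, installing it inside the running container (or rebuilding the image with it) would be the next thing to try; note that any pip install done via docker exec is lost when the container is recreated.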
Hi @michaelteck, as I said, you can add a text input to your inputs and use it to pass a parameter to your search. The sample from @dtburrows3 could solve your requirement. Ciao. Giuseppe
Create a Firewall Summary Report that shows Inbound Allow and Inbound Deny traffic?
Hi!

We have been installing Splunk Universal Forwarder on different servers in the on-prem environment of the company where I work, to bring the logs to an index in our Splunk Cloud. We managed to do it on almost all servers running Ubuntu, CentOS and Windows. However, we are having problems on one server with Ubuntu. For the installation, we did the following, as we did for every other Ubuntu server:

1) dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
2) cd /opt/splunkforwarder/bin
3) ./splunk start
4) Insert user and password
5) Download splunkclouduf.spl
6) /opt/splunkforwarder/bin/splunk install app splunkclouduf.spl
7) ./splunk add forward-server http-inputs-klar.splunkcloud.com:443
8) cd /opt/splunkforwarder/etc/system/local
9) Define inputs.conf as:

    # Monitor system logs for authentication and authorization events
    [monitor:///var/log/auth.log]
    disabled = false
    index = spei_servers
    sourcetype = linux_secure

    # Fix bug in Ubuntu related to: "Events from tracker.log have not been seen
    # for the last 90 seconds, which is more than the yellow threshold (45 seconds).
    # This typically occurs when indexing or forwarding are falling behind or are blocked."
    [health_reporter]
    aggregate_ingestion_latency_health = 0

    [feature:ingestion_latency]
    alert.disabled = 1
    disabled = 1

    # Monitor system logs for general security events
    [monitor:///var/log/syslog]
    disabled = false
    index = spei_servers
    sourcetype = linux_syslog

    # Monitor Apache access and error logs
    [monitor:///var/log/apache2/access.log]
    disabled = false
    index = spei_servers
    sourcetype = apache_access

    [monitor:///var/log/apache2/error.log]
    disabled = false
    index = spei_servers
    sourcetype = apache_error

    # Monitor SSH logs for login attempts
    [monitor:///var/log/auth.log]
    disabled = false
    index = spei_servers
    sourcetype = sshd

    # Monitor sudo commands executed by users
    [monitor:///var/log/auth.log]
    disabled = false
    index = spei_servers
    sourcetype = sudo

    # Monitor UFW firewall logs (assuming UFW is used)
    [monitor:///var/log/ufw.log]
    disabled = false
    index = spei_servers
    sourcetype = ufw

    # Monitor audit logs (if available)
    [monitor:///var/log/audit/audit.log]
    disabled = false
    index = spei_servers
    sourcetype = linux_audit

    # Monitor file integrity using auditd (if available)
    [monitor:///var/log/audit/auditd.log]
    disabled = false
    index = spei_servers
    sourcetype = auditd

    # Monitor for changes to critical system files
    [monitor:///etc/passwd]
    disabled = false
    index = spei_servers
    sourcetype = linux_config

    # Monitor for changes to critical system binaries
    [monitor:///bin]
    disabled = false
    index = spei_servers
    sourcetype = linux_config

    # Monitor for changes to critical system configuration files
    [monitor:///etc]
    disabled = false
    index = spei_servers
    sourcetype = linux_config

10) Then:

    echo "[httpout]
    httpEventCollectorToken = <our index token>
    uri = https://<our subdomain>.splunkcloud.com:443" > outputs.conf
    cd /opt/splunkforwarder/bin
    export SPLUNK_HOME=/opt/splunkforwarder
    ./splunk restart

When going to Splunk Cloud, we don't see the logs coming from this specific server.
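Since the outputs.conf above points at a Splunk Cloud HEC endpoint, a first sanity check from the affected host is whether that endpoint answers at all. Here is a small sketch using HEC's standard health endpoint (/services/collector/health); the hostname is taken from the forward-server command above and should be replaced with your actual input host:

```python
import urllib.request

def health_url(host, port=443):
    """Build the HEC health-check URL for a Splunk input host."""
    return f"https://{host}:{port}/services/collector/health"

def hec_is_reachable(host, port=443, timeout=5):
    """Return True if the HEC health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Example (host taken from the post above; run this from the problem server):
# print(hec_is_reachable("http-inputs-klar.splunkcloud.com"))
```

If this returns False from the problem server but True elsewhere, the issue is likely network/firewall egress rather than forwarder configuration.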
So we looked at our logs, and we saw this in health.log:

    root@coas:/opt/splunkforwarder/var/log/splunk# tail health.log
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Forwarder Ingestion Latency" color=green due_to_stanza="feature:ingestion_latency_reported" node_type=feature node_path=splunkd.file_monitor_input.forwarder_ingestion_latency
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red due_to_stanza="feature:ingestion_latency" due_to_indicator="ingestion_latency_gap_multiplier" node_type=feature node_path=splunkd.file_monitor_input.ingestion_latency
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=1755 reason="Events from tracker.log have not been seen for the last 1755 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Large and Archive File Reader-0" color=green due_to_stanza="feature:batchreader" node_type=feature node_path=splunkd.file_monitor_input.large_and_archive_file_reader-0
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red due_to_stanza="feature:tailreader" due_to_indicator="data_out_rate" node_type=feature node_path=splunkd.file_monitor_input.real-time_reader-0
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red indicator="data_out_rate" due_to_threshold_value=2 measured_value=352 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." node_type=indicator node_path=splunkd.file_monitor_input.real-time_reader-0.data_out_rate
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Workload Management" color=green node_type=category node_path=splunkd.workload_management
    01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Admission Rules Check" color=green due_to_stanza="feature:admission_rules_check" node_type=feature node_path=splunkd.workload_management.admission_rules_check
    01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="Configuration Check" color=green due_to_stanza="feature:wlm_configuration_check" node_type=feature node_path=splunkd.workload_management.configuration_check
    01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="System Check" color=green due_to_stanza="feature:wlm_system_check" node_type=feature node_path=splunkd.workload_management.system_check

and this in splunkd.log:

    root@coas:/opt/splunkforwarder/var/log/splunk# tail splunkd.log
    01-09-2024 08:33:01.227 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
    01-09-2024 08:33:21.135 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
    01-09-2024 08:33:41.034 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
    01-09-2024 08:34:00.942 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
    01-09-2024 08:34:20.841 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
    01-09-2024 08:34:40.750 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
    01-09-2024 08:35:00.637 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
    01-09-2024 08:35:20.544 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
    01-09-2024 08:35:40.443 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
    01-09-2024 08:36:00.352 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out

Do you have any thoughts, or have you faced this issue in the past?
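The repeated "Cooked connection ... timed out" warnings suggest the forwarder cannot reach the indexers on port 9997 at all. A quick way to separate a network problem from a Splunk problem is a raw TCP connectivity check from the affected host — a small sketch (the IPs are the ones from the splunkd.log excerpt above):

```python
import socket

def can_connect(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the affected server, check the indexer endpoints seen in splunkd.log:
# for ip in ["54.87.146.250", "54.160.213.9", "18.214.192.43"]:
#     print(ip, can_connect(ip, 9997, timeout=3))
```

If these all come back False while other Ubuntu servers succeed, look at egress firewall rules or a proxy specific to this host rather than at the forwarder configuration.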
The documentation does not say step 3 is optional.  That you can see your data confirms it is present, but that is not the same thing as fetching the ACK. Restarting the service clears the pending ACKs and re-enables reception of data.  Fetching the ACKs will also re-enable reception without a restart. If the client cannot fetch ACKs then I suggest turning off HEC ACK.
I think the closest you can get to emulating the vanilla Splunk search bar on a dashboard is to use a time-selector input, a textbox input, and a submit button. With these three inputs, the user can select the search time window and, with the textbox, insert some sort of filter criteria — whether that be a specific field value or any other SPL that can be passed into a search elsewhere on the dashboard.

The default size of the textbox input is pretty small, so it probably wouldn't work well for full search SPL, but it should work out nicely for searching specific field values (i.e. fieldname=$textbox_input|s$). Here is an example in its simplest form — the SPL on a panel utilizing the textbox input from the dashboard:

    index=<index> sourcetype=<sourcetype> uid=$textbox_input|s$
    | stats count as count,
            earliest(_time) as earliest_epoch,
            latest(_time) as latest_epoch,
            values(host) as host
        by uid
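For reference, the three inputs described above can be wired together in Simple XML roughly like this (a minimal sketch — the index and token names are illustrative, not from the original post):

```xml
<form>
  <fieldset submitButton="true">
    <!-- time selector input: drives the search time window -->
    <input type="time" token="time_tok">
      <label>Time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <!-- textbox input: referenced as $textbox_input|s$ in panel SPL -->
    <input type="text" token="textbox_input">
      <label>Filter value</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main uid=$textbox_input|s$ | stats count by uid</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

The submitButton="true" attribute on the fieldset is what makes the search wait for the user to press the button instead of firing on every keystroke.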
But step 3 you mentioned is optional, in the sense that it's not required to request statuses for events to be indexed (I can verify my data is present and events are logged), so I didn't expect this behavior. Once the maximum number of pending events is reached, the channel for the related token goes into a busy status, which leads to loss of logs until I restart the service. I tried to increase max_number_of_acked_requests_pending_query, but that only lets me postpone the deadline, and setting a huge value could perhaps also have a negative impact on the servers' health. As I cannot control anything on the client except the channel header and the authorization header, and as the client doesn't seem to make status requests (firewall logs), I will try to set maxIdleTime under 60, since the client sends data every 60 seconds. Thanks
Thank you for your reply. I have a dashboard, and I would like to add a search bar where a user can enter a Talend job name and launch a search with a button. I would like to put it in a <fieldset> tag.
Thanks @isoutamo and @richgalloway. Closing the thread: after uploading the new Developer license, the issue got fixed and users were able to log in (it probably overrode the Dev/Test personal license). Thanks
Hi @michaelteck, let me understand: you have a dashboard with some panels, and in addition you want to add another panel in which a user can run a search using SPL and visualize the results in that same panel — is that correct? If this is your requirement, you can create a panel with a free-text input inside the same panel. Ciao. Giuseppe
Hello everyone, and Happy New Year!

I'm a newbie with Splunk, and I'm trying to make a fully dynamic dashboard with the Search & Reporting app. I work on Talend's logs. I'm looking to create a search bar for searching jobs directly, without using the drop-down menus. Is there a solution to put a search bar with a "search" button on top of the dashboard? Thanks for reading.
Hi, Has anyone else encountered a situation where the 'orig_time' field isn't showing up in the Windows event logs with Eventcode=7040?
This sounds like a good use case for the WILDCARD(keyword) capability within the advanced settings of lookup definitions. I tried it out on a local instance, and I think I got what you are looking for.

Wildcards will need to be included in the lookup itself. If you are only looking for matches against the beginning of the "Asset" field value, you can just put the wildcards at the end of the values in the lookup. (This example also adds a net-new field to the lookup to retain the original keyword value, in case it is needed elsewhere.) Under the advanced settings checkbox in the lookup definition, configure the field "keyword" to match with wildcards (you can turn off case-sensitivity too).

Note: if you decide to go with the wildcard match using the new "keyword_wildcard" field from the lookup, you will have to adjust the lookup definition's advanced settings to WILDCARD(keyword_wildcard) instead.

Example SPL:

    <base_search>
    | lookup splunk_community_keyword_association keyword as Asset OUTPUT country as match_country
    | eval country=coalesce(if(NOT match(country, "^(?i)(?:unknown|not\s+available|n\/a|na)$"), 'country', null()), 'match_country')

Full SPL to simulate:

    | makeresults
    | eval Asset="braiskdidi001", country="Britain"
    | append
        [ | makeresults
          | eval Asset="breliudusfidf002", country="Unknown" ]
    | append
        [ | makeresults
          | eval Asset="bruliwhdcjn001", country="not available" ]
    | rename country as country_from_index
    ``` lookup wildcard match against Asset field value to the keyword_wildcard field in lookup and return the country if match is found ```
    | lookup splunk_community_keyword_association keyword_wildcard as Asset OUTPUT country as country_from_lookup
    ``` evaluate new country field that uses derived country from lookup if a match is found and the country_from_index indicates that it was not found ```
    | eval coalesced_country=coalesce(if(NOT match(country_from_index, "^(?i)(?:unknown|not\s+available|n\/a|na)$"), 'country_from_index', null()), 'country_from_lookup')
    | fields + _time, Asset, country_from_index, country_from_lookup, coalesced_country

Referenced splunk_community_keyword_association.csv:

    country,keyword,keyword_wildcard
    Britain,bru,bru*
    Britain,bre,bre*
    USA,usa,usa*
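The wildcard-match-then-coalesce logic above can be sketched outside Splunk as well — a rough Python analogue of the lookup behavior (field values and patterns follow the example; the matching itself is simplified to glob patterns):

```python
from fnmatch import fnmatch

# Rows of the splunk_community_keyword_association lookup from the example.
LOOKUP = [
    {"country": "Britain", "keyword_wildcard": "bru*"},
    {"country": "Britain", "keyword_wildcard": "bre*"},
    {"country": "USA",     "keyword_wildcard": "usa*"},
]

# Placeholder values treated as "no real country", mirroring the regex in the eval.
UNKNOWNS = {"unknown", "not available", "n/a", "na"}

def country_from_lookup(asset):
    """Return the country of the first lookup row whose pattern matches the asset."""
    for row in LOOKUP:
        if fnmatch(asset.lower(), row["keyword_wildcard"]):
            return row["country"]
    return None

def coalesced_country(asset, country_from_index):
    """Prefer the indexed country unless it is an 'unknown' placeholder."""
    if country_from_index and country_from_index.lower() not in UNKNOWNS:
        return country_from_index
    return country_from_lookup(asset)

print(coalesced_country("breliudusfidf002", "Unknown"))  # → Britain
```

This mirrors the coalesce(if(NOT match(...)...)) pattern: the lookup value is only used when the indexed value is a placeholder.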
After a while, I solved my problem with an EVAL statement. My props.conf is now:

    ...
    ...
    EVAL-disposition_split = split(disposition, ";")
    LOOKUP-action = fortimail_action_lookup.csv vendor_action AS disposition_split OUTPUT action
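The fix above splits the multivalue field first, then runs the lookup against each value. The same idea can be mimicked in plain Python (a sketch only — the mapping below is a subset of the fortimail_action_lookup.csv from the question):

```python
# Subset of fortimail_action_lookup.csv: vendor_action -> action
ACTION_LOOKUP = {
    "Accept": "delivered",
    "Reject": "blocked",
    "Modify Subject": None,        # blank action in the lookup file
    "Insert Disclaimer": None,     # blank action in the lookup file
    "Defer Disposition": "delivered",
    "Quarantine": "quarantined",
}

def actions_for(disposition):
    """Split the semicolon-delimited disposition and map each value
    through the lookup, keeping only non-empty actions."""
    values = [v.strip() for v in disposition.split(";")]
    return [ACTION_LOOKUP[v] for v in values if ACTION_LOOKUP.get(v)]

# The event from the question resolves to a single action:
print(actions_for("Modify Subject;Insert Disclaimer;Defer Disposition"))  # → ['delivered']
```

This is why the EVAL-based split works: LOOKUP- configurations match each value of a multivalue field individually, so the disposition has to be split before the lookup runs.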
The steps seem pretty clear in the docs:

1) Send data to HEC.
2) Get an ACK *ID* in response.
3) Use the ACK ID to confirm the data has been written:

    "To verify that the indexer has indexed the event(s) contained in the request, query the https://<host>:<port>/services/collector/ack endpoint"

Indexers get pending queries because the client has not closed them by requesting the status.
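As a sketch, step 3 is just a POST of the ACK IDs back to the ack endpoint. The host, port, token, and channel GUID below are placeholders — check the HEC indexer-acknowledgment documentation for the exact contract for your deployment:

```python
import json
import urllib.request

def build_ack_request(host, port, token, channel, ack_ids):
    """Build the POST request asking HEC which ACK IDs have been indexed."""
    url = f"https://{host}:{port}/services/collector/ack?channel={channel}"
    body = json.dumps({"acks": ack_ids}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Splunk {token}")
    req.add_header("Content-Type", "application/json")
    return req

# Placeholder values -- substitute your own host, token, and channel GUID.
req = build_ack_request("splunk.example.com", 8088, "<hec-token>",
                        "11111111-1111-1111-1111-111111111111", [1, 3, 4])
# urllib.request.urlopen(req) would return JSON like {"acks": {"1": true, ...}}
```

Querying the status is also what closes the pending entry on the indexer side, which is why a client that never asks eventually hits the pending-request limit.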
Hello,

As I want to get my email events CIM-compliant, I'm having trouble parsing a "disposition" key-value pair. Example event:

    date=2024-01-09 time=11:59:43.258 device_id=XXXXXXXXXXXXXX log_id=0200012329 type=statistics pri=information session_id="4XXXXXXXXXXX-4XXXXXXXXXXXXX" client_name="example.com" disposition="Modify Subject;Insert Disclaimer;Defer Disposition" classifier="Data Loss Prevention" message_length="94756" subject="Test subject" message_id="xxxxxxxxxxxxxxxxxxxx@example.com" recv_time="" notif_delay="0" scan_time="0.186489" xfer_time="0.002166" srcfolder="" read_status="

I have the disposition field extracted at search time with the value "Modify Subject;Insert Disclaimer;Defer Disposition". What I need to do is separate the values into a multivalue field, and then use a lookup to determine the action. Lookup file:

    vendor_action,action
    Accept,delivered
    Reject,blocked
    Add Header,delivered
    Modify Subject,
    Quarantine,quarantined
    Discard,blocked
    Replace,
    Delay,
    Rewrite,
    Insert Disclaimer,
    Defer Disposition,delivered
    Disclaimer Body,delivered
    Disclaimer Header,delivered
    Defer,
    Quarantine to Review,quarantined
    Content Filter as Spam,
    Encrypt,
    Decrypt,
    Alternate Host,
    BCC,
    Archive,
    Customized repackage,
    Repackage,
    Notification,

In the end, the event should have a field named action, and for this example its value should be delivered. My props.conf:

    [fortimail]
    ...
    ...
    LOOKUP-action = fortimail_action_lookup.csv vendor_action as disposition OUTPUT action
    REPORT-disposition = disposition_extraction

My transforms.conf:

    [disposition_extraction]
    SOURCE_KEY = disposition
    DELIMS = ";"
    MV_ADD = true

But eventually I just end up with the original value ("Modify Subject;Insert Disclaimer;Defer Disposition"), and it doesn't get separated. What am I doing wrong?
@pdrieger_splunk any idea?
Yes, that was helpful and sorry for my delayed confirmation. 
Hello,

Thanks for your answer, but I don't have the same understanding of the Splunk documentation as you. If you were right, the HEC service would be down within a few hours after startup, or less. As explained in the Splunk documentation (see the graph), HEC responds with an ACK for each event thrown, and you can send a request for a particular event to verify its status:

    "Each time a client sends a request to the HEC endpoint using a token with indexer acknowledgment enabled (1), HEC returns an acknowledgment identifier to the client (2)."
    https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck#Query_for_indexing_status

1. The client sends an HEC request with event data.
2. HEC acks the request once the event is indexed.

HEC clients don't need to ask for status for events to get indexed (millions each day), but after a while the indexers become busy due to the maximum number of pending requests. I already increased this value, so now I need to understand why these queries stay pending. So my problem is with pending requests and why they are increasing like that. I don't see any errors in the metrics, but they don't seem to be cumulative (because Splunk Enterprise deletes status information after clients retrieve it). I cannot control the HEC client behavior beyond basic settings (for information, this is Akamai DataStream).
Hi, for the past 90 days we haven't detected any alerts triggered by the GitHub secret-scanning rule in our Splunk ES. Consequently, we're unable even to query an index. Thank you.