All Posts


Greetings @xxkenta. Were you ever able to find a viable solution for this issue? I'm having a similar situation.
Presuming your firewall is logging allow and deny events to Splunk, that those events are stored in the 'network' index, and that they have an 'action' field saying whether the traffic was allowed or blocked, then this may get you started: index=network | stats count by action
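As a sanity check of what "stats count by action" produces, the same group-and-count can be mimicked outside Splunk; a minimal Python sketch over made-up events (the field values below are hypothetical):

```python
from collections import Counter

# Hypothetical firewall events with an 'action' field.
events = [
    {"action": "allowed", "dest": "10.0.0.5"},
    {"action": "blocked", "dest": "10.0.0.9"},
    {"action": "allowed", "dest": "10.0.0.7"},
]

# Equivalent of `| stats count by action`: tally events per action value.
counts = Counter(e["action"] for e in events)
print(dict(counts))  # {'allowed': 2, 'blocked': 1}
```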
Oops, it did work; my eval was not written correctly. I think I was missing a space after the commas in the syntax below. <eval token="input_tok">replace($form.input_tok$, "(\\\\)", "\\\\\1")
Recently we upgraded FMC from 6.x to 7.x and noticed no data was being streamed into the /opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk directory. We then started getting a firewall error when testing the connection. Does anyone know if FMC 7.x is compatible with the TA-eStreamer add-on?

./splencore.sh test
Diagnostics ERROR [no message or attrs]: Could not connect to eStreamer Server at all. Are you sure the host and port are correct? If so then perhaps it is a firewall issue.
I would suggest pinging the Splunk admins, as the data is coming in with an issue, and it will always be an issue until they modify the input or sourcetype. You can add/remove whatever number of hours you need for a particular _time field, but if it gets corrected in the future, all of your searches will fail. As well, I'm not sure how things would behave if you were to drill down from a dashboard into raw data. It really is as simple as adding that TZ key/value to the sourcetype. That makes the display of data from different timezones seamless to end users. For example, searching the last 60 minutes across data sets configured in GMT and CST will display correctly to the end user if TZ is configured for the sourcetypes.
I do not have access to update that. So I was trying to figure out how to do it with SPL
Do you have the ability to modify the sourcetype for the ticketing system data? You can add a single config to the input / sourcetype:

# The following props.conf entry sets Eastern Time Zone if host matches nyc*.
[host::nyc*]
TZ = US/Eastern

Is your Splunk environment Splunk Cloud, or self-hosted? If cloud, you should be able to go to "Settings" -> "Source Types", click on the specific sourcetype, and add a key/value pair in the advanced section: key="TZ", value="US/Eastern"
The conversion of a backslash \ to a double backslash \\ inside the token is set up in the dashboard XML. I tried the following but it did not work. <eval token="input_tok">replace($form.input_tok$, "(\\\\)", "\\\\\1") </eval>
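For reference, the doubling logic the replace() is aiming for can be checked outside the dashboard XML; a minimal Python sketch of the same regex idea (remember that in the XML each literal backslash needs an extra escaping layer on top of this):

```python
import re

def double_backslashes(s: str) -> str:
    # Capture each single backslash and emit it twice.
    return re.sub(r"(\\)", r"\\\1", s)

print(double_backslashes(r"C:\temp\file"))  # C:\\temp\\file
```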
How do I escape the backslashes in a dashboard text input so the value can be used in a subsequent search?
The _time field needs to be available at the time the "| timechart" command runs. Your example:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | timechart span=5m count by transactType

is not carrying the _time field over from the raw events. The stats transformation needs some method of carrying over the _time field, so I would recommend either

| stats earliest(_time) as _time, values(Fecha) as Fecha, values(transactType) as transactType by ID | timechart span=5m count as count by transactType

OR

| stats latest(_time) as _time, values(Fecha) as Fecha, values(transactType) as transactType by ID | timechart span=5m count as count by transactType

(depending on what makes more sense for your scenario). So your full SPL would look something like this:

index=prueba source="*blablabla*"
``` The field ID is assumed to already be extracted ```
``` regex extraction of transactType field ```
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
``` transform raw events to singular events, each representing a unique ID with its own transactType values and _time value ```
| stats latest(_time) as _time, values(transactType) as transactType by ID
``` make a time series tallying up all the unique IDs belonging to the unique transactType values in 5 minute buckets ```
| timechart span=5m count as count by transactType
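To see why carrying _time through stats matters, the same two-step logic can be mimicked outside Splunk; a small Python sketch over made-up events that keeps the latest timestamp per ID (like stats latest(_time) by ID) and then counts per 5-minute bucket (like timechart span=5m):

```python
from collections import Counter

# Hypothetical events: (epoch_seconds, ID, transactType)
events = [
    (100, "A", "login"), (130, "A", "login"),
    (400, "B", "pay"),   (700, "C", "login"),
]

# Like `stats latest(_time) as _time ... by ID`: keep the latest
# timestamp and type seen for each ID.
by_id = {}
for t, id_, ttype in events:
    if id_ not in by_id or t > by_id[id_][0]:
        by_id[id_] = (t, ttype)

# Like `timechart span=5m count by transactType`: bucket the kept
# timestamp into 300-second windows and tally per type.
buckets = Counter((t // 300 * 300, ttype) for t, ttype in by_id.values())
print(dict(buckets))  # {(0, 'login'): 1, (300, 'pay'): 1, (600, 'login'): 1}
```

Without the kept timestamp (the first dict step), there is nothing left to bucket on, which is why the original stats-then-timechart returned no results.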
Hello, I need some help. Manipulating time is something I have struggled with. Below is the code I have:

((index="desktop_os") (sourcetype="itsm_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP IN ("Desktop_Support_1", "Remote_Support")
``` Convert REPORTED_DATE to epoch form ```
| eval REPORTED_DATE2=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S")
``` Keep events reported more than 12 hours ago so are due in < 12 hours ```
| where REPORTED_DATE2 <= relative_time(now(), "-12h")
| eval MTTRSET = round((now()-REPORTED_DATE2)/3600)
| dedup INCIDENT_NUMBER
| stats values(REPORTED_DATE) AS Reported, values(DESCRIPTION) AS Title, values(ASSIGNED_GROUP) AS Group, values(ASSIGNEE) AS Assignee, LAST(STATUS_TXT) as Status, values(MTTRSET) as MTTRHours, values(STATUS_REASON_TXT) as PendStatus by INCIDENT_NUMBER
| search Status IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| sort Assignee
| table Assignee MTTRHours INCIDENT_NUMBER Reported Title Status PendStatus

This code runs and gives us the results we need, but the issue is that the REPORTED_DATE field is off by 5 hours due to a time zone issue. It is a custom field from our ticketing system that is stuck on GMT, and the output looks like 2024-01-08 09:22:49.0. I need that field to produce the correct time for EST. I am struggling with making it work. I looked at this thread, but that is not working for us: Solved: How to convert date and time in UTC to EST? - Splunk Community. Any help is appreciated. Thanks
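Inside Splunk the clean fix is the TZ sourcetype setting, but for clarity, here is what the GMT-to-Eastern conversion itself looks like in plain Python; a hedged sketch that assumes the sample format shown above and that IANA time zone data is available on the machine:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def gmt_to_eastern(ts: str) -> str:
    # Sample format from the ticketing system: "2024-01-08 09:22:49.0"
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
    # The field is stuck on GMT, so attach UTC, then convert to Eastern.
    dt = dt.replace(tzinfo=timezone.utc).astimezone(ZoneInfo("US/Eastern"))
    return dt.strftime("%Y-%m-%d %H:%M:%S")

print(gmt_to_eastern("2024-01-08 09:22:49.0"))  # 2024-01-08 04:22:49
```

Note that a proper zone conversion (rather than a fixed -5h eval) also handles daylight saving time, when the offset becomes -4h.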
Hi @michaelteck , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
It works, even if I have to manage the time range. Thanks a lot!
Hi, I have a log with several transactions, and each one has some events. All events in one transaction share the same ID. The events each contain some information, for example execution time, transact type, URL, login URL, etc. These fields can be in one or several of the events. I want to obtain the total transactions of each type over a spanned time, for example every 5m. I need to group the events of each transaction to extract its info.

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID

This is OK. If I want to count transactType then I do:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | stats count by transactType

The problem is if I want to obtain that in a time span. I can't do the following, because there are some events with the transactType field within one transaction:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | timechart span=5m count by transactType

And the following query doesn't give me any result:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | timechart span=5m count by transactType

I also tried (but I don't get results):

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | bucket Fecha span=5m | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | stats count by transactType

Or:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | bucket Fecha span=5m | stats count by transactType

How can I obtain what I want?
I have configured the Splunk Add-on for JMX and added the JMX server, and I was able to get JMX server data. When I deleted Splunk and reinstalled a new Splunk Enterprise, I copied the Splunk Add-on for JMX app from the previous Splunk to the /etc/app folder. But now I am getting an "internal server cannot reach" error on the configuration page, even though the input configuration looks clear. Is there any option to add the JMX server other than the web interface? When I copy the app, why is the same JMX server configuration not applied?
Hi all, We are trying to deploy pre trained Deep Learning models for ESCU. DSDL has been installed and container are loaded successfully. Connection with docker is also in good shape.  But when running the ESCU search, I am getting the following error messages.    MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing' MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}     From search.log   01-09-2024 09:56:44.725 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC endpoint: https://docker_host:32802 01-09-2024 09:56:44.850 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] called with payload (2298991 bytes) 01-09-2024 09:56:45.166 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] returned with payload (134 bytes) with status 200 01-09-2024 09:56:45.166 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC error: /apply: ERROR: unable to initialize module. 
Ended with exception: No module named 'keras_preprocessing' 01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'} 01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: apply ended with options {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}     Has anyone run into this before?  We have Golden Image CPU running .  Following shows up in container logs.  Thanks
Hi @michaelteck , As I said, you can add a text input to your inputs and use it to give a parameter to your search. The sample from @dtburrows3 could solve your requirement. Ciao. Giuseppe
How can I create a firewall summary report that shows inbound allow and inbound deny traffic?
Hi!

We have been installing Splunk Universal Forwarder on different servers in the on-prem environment of the company where I work, to bring the logs to an index in our Splunk Cloud. We managed to do it on almost all servers running Ubuntu, CentOS and Windows. Occasionally, we are having problems on a server with Ubuntu. For the installation, we did the following, as we did for every other Ubuntu server:

dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
cd /opt/splunkforwarder/bin
./splunk start
Insert user and password
Download splunkclouduf.spl
/opt/splunkforwarder/bin/splunk install app splunkclouduf.spl
./splunk add forward-server http-inputs-klar.splunkcloud.com:443
cd /opt/splunkforwarder/etc/system/local

Define inputs.conf as:

# Monitor system logs for authentication and authorization events
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = linux_secure

# fix bug in ubuntu related to: "Events from tracker.log have not been seen for the last 90 seconds, which is more than the yellow threshold (45 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked."
[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1

# Monitor system logs for general security events
[monitor:///var/log/syslog]
disabled = false
index = spei_servers
sourcetype = linux_syslog

# Monitor Apache access and error logs
[monitor:///var/log/apache2/access.log]
disabled = false
index = spei_servers
sourcetype = apache_access

[monitor:///var/log/apache2/error.log]
disabled = false
index = spei_servers
sourcetype = apache_error

# Monitor SSH logs for login attempts
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = sshd

# Monitor sudo commands executed by users
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = sudo

# Monitor UFW firewall logs (assuming UFW is used)
[monitor:///var/log/ufw.log]
disabled = false
index = spei_servers
sourcetype = ufw

# Monitor audit logs (if available)
[monitor:///var/log/audit/audit.log]
disabled = false
index = spei_servers
sourcetype = linux_audit

# Monitor file integrity using auditd (if available)
[monitor:///var/log/audit/auditd.log]
disabled = false
index = spei_servers
sourcetype = auditd

# Monitor for changes to critical system files
[monitor:///etc/passwd]
disabled = false
index = spei_servers
sourcetype = linux_config

# Monitor for changes to critical system binaries
[monitor:///bin]
disabled = false
index = spei_servers
sourcetype = linux_config

# Monitor for changes to critical system configuration files
[monitor:///etc]
disabled = false
index = spei_servers
sourcetype = linux_config

echo "[httpout]
httpEventCollectorToken = <our index token>
uri = https://<our subdomain>.splunkcloud.com:443" > outputs.conf

cd /opt/splunkforwarder/bin
export SPLUNK_HOME=/opt/splunkforwarder
./splunk restart

When going to Splunk Cloud, we don't see the logs coming from this specific server.
So we looked at our logs and saw this in health.log:

root@coas:/opt/splunkforwarder/var/log/splunk# tail health.log
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Forwarder Ingestion Latency" color=green due_to_stanza="feature:ingestion_latency_reported" node_type=feature node_path=splunkd.file_monitor_input.forwarder_ingestion_latency
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red due_to_stanza="feature:ingestion_latency" due_to_indicator="ingestion_latency_gap_multiplier" node_type=feature node_path=splunkd.file_monitor_input.ingestion_latency
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=1755 reason="Events from tracker.log have not been seen for the last 1755 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Large and Archive File Reader-0" color=green due_to_stanza="feature:batchreader" node_type=feature node_path=splunkd.file_monitor_input.large_and_archive_file_reader-0
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red due_to_stanza="feature:tailreader" due_to_indicator="data_out_rate" node_type=feature node_path=splunkd.file_monitor_input.real-time_reader-0
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red indicator="data_out_rate" due_to_threshold_value=2 measured_value=352 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." node_type=indicator node_path=splunkd.file_monitor_input.real-time_reader-0.data_out_rate
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Workload Management" color=green node_type=category node_path=splunkd.workload_management
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Admission Rules Check" color=green due_to_stanza="feature:admission_rules_check" node_type=feature node_path=splunkd.workload_management.admission_rules_check
01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="Configuration Check" color=green due_to_stanza="feature:wlm_configuration_check" node_type=feature node_path=splunkd.workload_management.configuration_check
01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="System Check" color=green due_to_stanza="feature:wlm_system_check" node_type=feature node_path=splunkd.workload_management.system_check

and this in splunkd.log:

root@coas:/opt/splunkforwarder/var/log/splunk# tail splunkd.log
01-09-2024 08:33:01.227 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:33:21.135 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:33:41.034 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:34:00.942 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:34:20.841 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:34:40.750 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:35:00.637 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:35:20.544 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:35:40.443 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:36:00.352 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out

Do you have any thoughts, or have you faced this issue in the past?
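Those repeated timeouts usually mean TCP 9997 to the cloud indexers is blocked somewhere on the path. As a quick check outside Splunk, a plain TCP connect attempt reproduces what the forwarder is trying to do; a minimal Python sketch (the hostname below is a placeholder for your own indexer endpoints):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    # Try a plain TCP connect, the same thing the forwarder's
    # "Cooked connection ... timed out" warnings are failing to do.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host; check each indexer endpoint from your outputs config.
print(port_reachable("inputs1.example.splunkcloud.com", 9997, timeout=3))
```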
The documentation does not say step 3 is optional.  That you can see your data confirms it is present, but that is not the same thing as fetching the ACK. Restarting the service clears the pending ACKs and re-enables reception of data.  Fetching the ACKs will also re-enable reception without a restart. If the client cannot fetch ACKs then I suggest turning off HEC ACK.
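For anyone wiring this up, the ACK fetch is just another authenticated POST against the collector. A hedged Python sketch of how such a request is assembled (the host, token, and channel GUID are placeholders, and 8088 is the default on-prem HEC port; Splunk Cloud HEC endpoints typically use port 443):

```python
import json

def build_ack_request(host: str, channel: str, token: str, ack_ids: list[int]):
    # Shape of the HEC indexer-acknowledgment poll: POST the pending
    # ackIds to /services/collector/ack on the same channel used to send.
    url = f"https://{host}:8088/services/collector/ack?channel={channel}"
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({"acks": ack_ids})
    return url, headers, body

# Placeholder values; substitute your HEC endpoint, channel GUID, and token.
url, headers, body = build_ack_request(
    "hec.example.com", "11111111-2222-3333-4444-555555555555", "MY-TOKEN", [0, 1, 2]
)
print(url)
print(body)  # {"acks": [0, 1, 2]}
```

The response maps each ackId to true or false; polling until true is what drains the pending ACKs without a restart.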