All Topics

I have two data sources or searches that each return a number. They are used to supply data to radial components. I've ticked the box so both are also available as tokens, Numerator and Denominator. I'd like a dashboard component that expresses the ratio of those numbers as a percent. How do I do this? I've tried creating a third search that returns the value, but that does not work:

| eval result=round("$Denominator$" / "$Numerator$" * 100)."%"
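A minimal sketch of one possible approach, assuming both tokens resolve to plain numbers and that the percentage should read Numerator over Denominator (swap them if the ratio runs the other way). Because a token inside quotes is substituted as text, tonumber() is used defensively here:

| makeresults
| eval result = round(tonumber("$Numerator$") / tonumber("$Denominator$") * 100) . "%"

The result field could then drive a single-value component on the dashboard.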
Register here. This thread is for the Community Office Hours session with the Splunk Threat Research Team on Generative AI on Wed, Mar 13, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to your specific Generative AI challenge or use case, including:

Understanding generative AI technologies and techniques
The application of AI techniques in cybersecurity
How to use Large Language Models (LLMs), Generative Adversarial Networks (GANs), Diffusion Models, and Autoencoders
The particular strengths of different generative AI techniques
Real-world security scenarios that these techniques can support
Practical tips for implementing these techniques to enhance threat detection
Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hi, I have created a custom metric to monitor tablespace usage for Oracle databases that selects two columns, the tablespace name and used percent: "select tablespace_name,used_percent from dba_tablespace_usage_metrics". In the metrics browser it shows me a list of items, which are the tablespaces. On the health rule I try to specify the relative metric path, but it is not being evaluated. I don't want to use the first option because new tablespaces are constantly created, and I would like this to work in a dynamic way. My intention is to send an alert when the used_percent column is above a certain threshold for any of the tablespaces.
Hello, The description is not very descriptive. Hopefully, the example and data will be. I have a list of 1500 numbers. I need to calculate the sum in increments of 5 numbers. However, the numbers will overlap (be used more than once). Here is the code, using only 10 values:

| makeresults
| fields - _time
| eval nums="1,2,3,4,5,6,7,8,9,10"
| makemv nums delim=","
| eval cnt=0
| foreach nums [| eval nums_set_of_3 = mvindex(nums,cnt,+2) | eval sum_nums_{cnt} = sum(mvindex(nums_set_of_3,cnt,+2)) | eval cnt = cnt + 1]

The first sum (1st value + 2nd value + 3rd value, or 1 + 2 + 3) = 6. The second sum (2nd value + 3rd value + 4th value, or 2 + 3 + 4) = 9. The third sum would be (3rd value + 4th value + 5th value, or 3 + 4 + 5) = 12. And so on. The above code only makes it through one pass, the first sum. Thanks and God bless, Genesius
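A possible alternative, sketched here rather than as a fix of the foreach loop above: streamstats can compute an overlapping sliding-window sum directly. The window is set to 3 to match the worked example (it would be 5 for the increments of 5 mentioned earlier), and the field names window_sum and window_size are illustrative:

| makeresults
| eval nums=split("1,2,3,4,5,6,7,8,9,10", ",")
| mvexpand nums
| eval nums=tonumber(nums)
| streamstats window=3 current=true sum(nums) as window_sum count as window_size
| where window_size=3
| table nums window_sum

Each row's window_sum covers the current value and the two before it (6, 9, 12, ...), and the where clause drops the partial windows at the start of the list.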
Hi everyone. I am generating a cluster map that makes a count by log_subtype, and the map itself shows me the count and the latitude and longitude data. The question here is whether I can replace the latitude and longitude data with the name of the country. I have the query as follows:

| iplocation client_ip
| geostats count by log_subtype
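A minimal sketch of one way to key the map by country name instead, assuming the goal is a choropleth of counts per country rather than a cluster map (geo_countries is the country-polygon lookup that ships with Splunk):

| iplocation client_ip
| stats count by Country
| geom geo_countries featureIdField="Country"

This loses the split by log_subtype on the map itself; keeping that split would mean one map per log_subtype, or a table such as | stats count by Country, log_subtype.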
Hi, I am trying to forward logs from a heavy forwarder to a GCP bucket using outputs.conf, but it has been unsuccessful (no logs seen in the bucket). Not sure if that has to do with my config file or something else. Can anyone help me with an example? This is my outputs.conf and I don't know what is wrong.

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers
forceTimebasedAutoLB = true

[tcpout:bucket_index]
indexAndForward = true
forwardedindex.0.whitelist = my_index

[bucket]
compressed = false
json_escaping = auto
google_storage_key = "12345abcde"
google_storage_bucket = my-gcp-bucket
path = /path/my-gcp-bucket
route = bucket_index
We have both the Microsoft 365 App for Splunk and Microsoft Teams Add-on for Splunk installed in our Splunk cloud instance. However, we do not have the Teams Call QoS dashboard option seen in the screenshots here: https://splunkbase.splunk.com/app/4994. Has that feature been removed? Are we missing something?
Does anyone know if version 7.x of Threat Defense Manager (f.k.a. Firepower Management Center)  is compatible with the latest version of Cisco's eStreamer add-on? https://splunkbase.splunk.com/app/3662
How do I change the backslashes in a dashboard text input so the value can be used in a subsequent search?
Hello, I need some help. Manipulating time is something I have struggled with. Below is the code I have:

((index="desktop_os") (sourcetype="itsm_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP IN ("Desktop_Support_1", "Remote_Support")
``` Convert REPORTED_DATE to epoch form ```
| eval REPORTED_DATE2=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S")
``` Keep events reported more than 12 hours ago so are due in < 12 hours ```
| where REPORTED_DATE2 <= relative_time(now(), "-12h")
| eval MTTRSET = round((now()-REPORTED_DATE2)/3600)
| dedup INCIDENT_NUMBER
| stats values(REPORTED_DATE) AS Reported, values(DESCRIPTION) AS Title, values(ASSIGNED_GROUP) AS Group, values(ASSIGNEE) AS Assignee, LAST(STATUS_TXT) as Status, values(MTTRSET) as MTTRHours, values(STATUS_REASON_TXT) as PendStatus by INCIDENT_NUMBER
| search Status IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| sort Assignee
| table Assignee MTTRHours INCIDENT_NUMBER Reported Title Title Status PendStatus

This code runs and gives us the results we need, but the issue is that the REPORTED_DATE field is off by 5 hours due to a time zone issue. That is a custom field from our ticketing system that is stuck on GMT, and the output looks like 2024-01-08 09:22:49.0. I need to get that field to produce the correct time zone for EST. I am struggling with making it work. I looked at this thread but that is not working for us: Solved: How to convert date and time in UTC to EST? - Splunk Community. Any help is appreciated. Thanks
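One possible approach, sketched under the assumption that REPORTED_DATE is always written in GMT and that the search head's user time zone is set to Eastern: tell strptime the offset explicitly by appending +0000, so the resulting epoch is correct, and any strftime display then falls back to the user's local time zone. The field names REPORTED_DATE_clean and REPORTED_DATE_EST are just illustrative:

| eval REPORTED_DATE_clean = replace(REPORTED_DATE, "\.\d+$", "")
| eval REPORTED_DATE2 = strptime(REPORTED_DATE_clean . " +0000", "%Y-%m-%d %H:%M:%S %z")
| eval REPORTED_DATE_EST = strftime(REPORTED_DATE2, "%Y-%m-%d %H:%M:%S")

The replace strips the trailing .0 fraction so the format string stays simple; with REPORTED_DATE2 holding a correct epoch, the MTTRSET math in the original search should also come out right.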
Hi, I have a log with several transactions, and each one has some events. All events in one transaction share the same ID. The events each contain some information, for example execution time, transact type, URL, login URL, etc. These fields can be in one or several of the events. I want to obtain the total transactions of each type over a time span, for example every 5m. I need to group the events of each transaction to extract the info for it:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID

This is OK. If I want to count transactType, then I do:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | stats count by transactType

The problem is when I want to obtain that over a time span: I can't do it directly because within one transaction only some of the events carry the transactType field:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | timechart span=5m count by transactType

And the following query doesn't give me any result:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | timechart span=5m count by transactType

I tried this too (but I don't get results):

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | bucket Fecha span=5m | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | stats count by transactType

Or:

index=prueba source="*blablabla*" | eval Date=strftime(_time,"%Y/%m/%d") | eval Time=strftime(_time,"%H:%M:%S") | eval Fecha=strftime(_time,"%Y/%m/%d %H:%M:%S") | rex "^.+transactType:\s(?P<transactType>(.\w+)+)" | stats values(Fecha) as Fecha, values(transactType) as transactType by ID | bucket Fecha span=5m | stats count by transactType

How can I obtain what I want?
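One possible approach, sketched on the assumption that each transaction should be counted once and bucketed by the time of its earliest event: carry _time through the per-ID stats so that timechart still has a timestamp to work with.

index=prueba source="*blablabla*"
| rex "^.+transactType:\s(?P<transactType>(.\w+)+)"
| stats earliest(_time) as _time, values(transactType) as transactType by ID
| timechart span=5m count by transactType

Because values(transactType) can be multivalue when a transaction logs more than one type, a transaction may fall into several series; if one type per transaction is expected, first(transactType) (or similar) keeps it single-valued.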
I have configured the Splunk Add-on for JMX and added the JMX server, and I was able to get JMX server data. Then I deleted and reinstalled a new Splunk Enterprise and copied the Splunk Add-on for JMX app from the previous Splunk to the /etc/app folder. But now I am getting an "internal server cannot be reached" error on the configuration page, even though the input configuration is clear. Is there any option to add the JMX server other than the web interface? When I copy the app, why is the same JMX server configuration not applied?
Hi all, We are trying to deploy pre-trained Deep Learning models for ESCU. DSDL has been installed and the containers are loaded successfully. The connection with Docker is also in good shape. But when running the ESCU search, I am getting the following error messages:

MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing'
MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}

From search.log:

01-09-2024 09:56:44.725 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC endpoint: https://docker_host:32802
01-09-2024 09:56:44.850 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] called with payload (2298991 bytes)
01-09-2024 09:56:45.166 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] returned with payload (134 bytes) with status 200
01-09-2024 09:56:45.166 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing'
01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}
01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: apply ended with options {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}

Has anyone run into this before?
We have Golden Image CPU running. The following shows up in the container logs. Thanks
How do I create a Firewall Summary Report that has Inbound Allow and Inbound Deny traffic?
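A minimal sketch of one way to start such a report, assuming the firewall data is CIM-mapped to the Network_Traffic data model (index, field, and label names would need adjusting otherwise):

| tstats count from datamodel=Network_Traffic where All_Traffic.direction="inbound" by All_Traffic.action
| rename All_Traffic.action as action
| eval traffic=case(action="allowed", "Inbound Allow", action="blocked", "Inbound Deny", true(), action)
| table traffic count

The same search could be scheduled as a report or dropped into a dashboard panel.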
Hi! We have been installing Splunk Universal Forwarder on different servers in the on-prem environment of the company where I work, to bring the logs to an index in our Splunk Cloud. We managed to do it on almost all servers running Ubuntu, CentOS and Windows. Occasionally, we are having problems on a server with Ubuntu. For the installation, we did the following, as we did for every other Ubuntu server:

dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
cd /opt/splunkforwarder/bin
./splunk start
Insert user and password
Download splunkclouduf.spl
/opt/splunkforwarder/bin/splunk install app splunkclouduf.spl
./splunk add forward-server http-inputs-klar.splunkcloud.com:443
cd /opt/splunkforwarder/etc/system/local

Define input.conf as:

# Monitor system logs for authentication and authorization events
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = linux_secure

# fix bug in ubuntu related to: "Events from tracker.log have not been seen for the last 90 seconds, which is more than the yellow threshold (45 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked."
[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1

# Monitor system logs for general security events
[monitor:///var/log/syslog]
disabled = false
index = spei_servers
sourcetype = linux_syslog

# Monitor Apache access and error logs
[monitor:///var/log/apache2/access.log]
disabled = false
index = spei_servers
sourcetype = apache_access

[monitor:///var/log/apache2/error.log]
disabled = false
index = spei_servers
sourcetype = apache_error

# Monitor SSH logs for login attempts
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = sshd

# Monitor sudo commands executed by users
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = sudo

# Monitor UFW firewall logs (assuming UFW is used)
[monitor:///var/log/ufw.log]
disabled = false
index = spei_servers
sourcetype = ufw

# Monitor audit logs (if available)
[monitor:///var/log/audit/audit.log]
disabled = false
index = spei_servers
sourcetype = linux_audit

# Monitor file integrity using auditd (if available)
[monitor:///var/log/audit/auditd.log]
disabled = false
index = spei_servers
sourcetype = auditd

# Monitor for changes to critical system files
[monitor:///etc/passwd]
disabled = false
index = spei_servers
sourcetype = linux_config

# Monitor for changes to critical system binaries
[monitor:///bin]
disabled = false
index = spei_servers
sourcetype = linux_config

# Monitor for changes to critical system configuration files
[monitor:///etc]
disabled = false
index = spei_servers
sourcetype = linux_config

After that:

echo "[httpout]
httpEventCollectorToken = <our index token>
uri = https:// <our subdomain>.splunkcloud.com:443" > outputs.conf
cd /opt/splunkforwarder/bin
export SPLUNK_HOME=/opt/splunkforwarder
./splunk restart

When going to Splunk Cloud, we don't see the logs coming from this specific server.
So we looked at our logs, and we saw this in health.log:

root@coas:/opt/splunkforwarder/var/log/splunk# tail health.log
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Forwarder Ingestion Latency" color=green due_to_stanza="feature:ingestion_latency_reported" node_type=feature node_path=splunkd.file_monitor_input.forwarder_ingestion_latency
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red due_to_stanza="feature:ingestion_latency" due_to_indicator="ingestion_latency_gap_multiplier" node_type=feature node_path=splunkd.file_monitor_input.ingestion_latency
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=1755 reason="Events from tracker.log have not been seen for the last 1755 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Large and Archive File Reader-0" color=green due_to_stanza="feature:batchreader" node_type=feature node_path=splunkd.file_monitor_input.large_and_archive_file_reader-0
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red due_to_stanza="feature:tailreader" due_to_indicator="data_out_rate" node_type=feature node_path=splunkd.file_monitor_input.real-time_reader-0
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red indicator="data_out_rate" due_to_threshold_value=2 measured_value=352 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." node_type=indicator node_path=splunkd.file_monitor_input.real-time_reader-0.data_out_rate
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Workload Management" color=green node_type=category node_path=splunkd.workload_management
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Admission Rules Check" color=green due_to_stanza="feature:admission_rules_check" node_type=feature node_path=splunkd.workload_management.admission_rules_check
01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="Configuration Check" color=green due_to_stanza="feature:wlm_configuration_check" node_type=feature node_path=splunkd.workload_management.configuration_check
01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="System Check" color=green due_to_stanza="feature:wlm_system_check" node_type=feature node_path=splunkd.workload_management.system_check

And this in splunkd.log:

root@coas:/opt/splunkforwarder/var/log/splunk# tail splunkd.log
01-09-2024 08:33:01.227 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:33:21.135 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:33:41.034 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:34:00.942 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:34:20.841 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:34:40.750 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:35:00.637 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:35:20.544 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:35:40.443 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:36:00.352 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out

Do you have any thoughts, or have you faced this issue in the past?
Hello everyone, and Happy New Year. I'm a newbie with Splunk, and I'm trying to make a fully dynamic dashboard with the Search & Reporting app. I work on Talend's logs. I'm looking to create a search bar for searching for a job directly, rather than using the drop-down menus. Is there a solution to put a search bar at the top of the dashboard with a "search" button? Thanks for reading.
Hi, Has anyone else encountered a situation where the 'orig_time' field isn't showing up in the Windows event logs with Eventcode=7040?
Hello, As I want to get my email events CIM compliant, I am having trouble parsing a "disposition" key-value pair. Example: given an event:

date=2024-01-09 time=11:59:43.258 device_id=XXXXXXXXXXXXXX log_id=0200012329 type=statistics pri=information session_id="4XXXXXXXXXXX-4XXXXXXXXXXXXX" client_name="example.com" disposition="Modify Subject;Insert Disclaimer;Defer Disposition" classifier="Data Loss Prevention" message_length="94756" subject="Test subject" message_id="xxxxxxxxxxxxxxxxxxxx@example.com" recv_time="" notif_delay="0" scan_time="0.186489" xfer_time="0.002166" srcfolder="" read_status="

I have the disposition field extracted at search time with the value "Modify Subject;Insert Disclaimer;Defer Disposition". What I need to do is separate the values into a multivalue field, and then use a lookup to determine the action. Lookup file:

vendor_action,action
Accept,delivered
Reject,blocked
Add Header,delivered
Modify Subject,
Quarantine,quarantined
Discard,blocked
Replace,
Delay,
Rewrite,
Insert Disclaimer,
Defer Disposition,delivered
Disclaimer Body,delivered
Disclaimer Header,delivered
Defer,
Quarantine to Review,quarantined
Content Filter as Spam,
Encrypt,
Decrypt,
Alternate Host,
BCC,
Archive,
Customized repackage,
Repackage,
Notification,

In the end, the event should have a field named action, and for this example the value should be delivered. My props.conf:

[fortimail]
...
...
LOOKUP-action = fortimail_action_lookup.csv vendor_action as disposition OUTPUT action
REPORT-disposition = disposition_extraction

My transforms.conf:

[disposition_extraction]
SOURCE_KEY = disposition
DELIMS = ";"
MV_ADD = true

But eventually I just end up with the original value ("Modify Subject;Insert Disclaimer;Defer Disposition") and it doesn't get separated. What am I doing wrong?
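As a quick way to sanity-check the intended logic in search before wiring it into props.conf and transforms.conf, here is a minimal sketch; it assumes a lookup definition named fortimail_action_lookup exists for the CSV above (that definition name is an assumption):

| makemv delim=";" disposition
| lookup fortimail_action_lookup vendor_action as disposition OUTPUT action

With disposition as a multivalue field, the lookup is applied per value, so action should come back containing delivered for this example.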
Hi, For the past 90 days, we haven't detected any alerts triggered by the GitHub secret scanning rule in my Splunk ES. Consequently, we're unable to even query an index. Tq
Hi, here is the default SPL from the app Splunk App for Data Science and Deep Learning (Time Series Anomalies with STUMPY - Time Series Anomaly Detection with Matrix Profiles):

| inputlookup cyclical_business_process.csv
| eval _time=strptime(_time, "%Y-%m-%dT%H:%M:%S")
| timechart span=15m avg(logons) as logons
| fit MLTKContainer algo=stumpy m=96 logons from _time into app:stumpy_anomalies
| table _time logons matrix_profile
| eventstats p95(matrix_profile) as p95_matrix_profile
| eval anomaly=if(matrix_profile>p95_matrix_profile,1,0)
| fields - p95_matrix_profile

Now I want to run this search on my own data. Here is the sample log:

2022-11-30 23:59:00,122,124
2022-11-30 23:58:00,113,112
2022-11-30 23:57:00,144,143
2022-11-30 23:56:00,137,138
2022-11-30 23:55:00,119,120
2022-11-30 23:54:00,103,102
2022-11-30 23:53:00,104,105
2022-11-30 23:52:00,143,142
2022-11-30 23:51:00,138,139
2022-11-30 23:50:00,155,153
2022-11-30 23:49:00,100,102

timestamp: 2022-11-30 23:59:00
logons: 122

Here is the SPL that I run:

| rex field=_raw "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),(?<logons>\d+)"
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S")
| timechart span=15m avg(logons) as logons
| fit MLTKContainer algo=stumpy m=96 logons from _time into app:stumpy_anomalies
| table _time logons matrix_profile
| eventstats p95(matrix_profile) as p95_matrix_profile
| eval anomaly=if(matrix_profile>p95_matrix_profile,1,0)
| fields - p95_matrix_profile

Before the fit command, _time shows correctly, but after the fit command it's empty! FYI: logons, matrix_profile, and anomaly return correctly, but _time is empty.

Any idea?
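One possible workaround, sketched on the assumption that fields not consumed by the container pass through fit unchanged (the field name time_backup is purely illustrative): keep a copy of _time before fit and restore it afterwards.

| timechart span=15m avg(logons) as logons
| eval time_backup=_time
| fit MLTKContainer algo=stumpy m=96 logons from _time into app:stumpy_anomalies
| eval _time=if(isnull(_time) OR _time="", time_backup, _time)
| table _time logons matrix_profile

If the container returns _time as a string rather than an epoch, wrapping it with strptime before (or instead of) the if() may also be needed.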