All Topics

Hi all,

We are trying to deploy pre-trained deep learning models for ESCU. DSDL has been installed and the containers are loaded successfully. The connection with Docker is also in good shape. But when running the ESCU search, I am getting the following error messages:

MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing'
MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}

From search.log:

01-09-2024 09:56:44.725 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC endpoint: https://docker_host:32802
01-09-2024 09:56:44.850 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] called with payload (2298991 bytes)
01-09-2024 09:56:45.166 INFO ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: POST endpoint [https://docker_host:32802/apply] returned with payload (134 bytes) with status 200
01-09-2024 09:56:45.166 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC error: /apply: ERROR: unable to initialize module. Ended with exception: No module named 'keras_preprocessing'
01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: MLTKC parameters: {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}
01-09-2024 09:56:45.167 ERROR ChunkedExternProcessor [47063 ChunkedExternProcessorStderrLogger] - stderr: apply ended with options {'params': {'mode': 'stage', 'algo': 'pretrained_dga_model_dsdl'}, 'args': ['is_dga', 'domain'], 'target_variable': ['is_dga'], 'feature_variables': ['domain'], 'model_name': 'pretrained_dga_model_dsdl', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None, 'dispatch_dir': '/opt/splunk/var/run/splunk/dispatch/1704812182.86156_AC9C076F-2C37-4E94-9DD0-0AE04AEB7952'}

Has anyone run into this before? We are running the Golden Image CPU, and the above also shows up in the container logs. Thanks
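A workaround that has helped in similar DSDL setups is to bake the missing Python package into the container image. The base image name and tag below are assumptions (check `docker images` for the golden image you actually pulled); treat this as a sketch, not an official fix:

```dockerfile
# Hypothetical Dockerfile extending the DSDL "Golden Image CPU" container.
# The base image name/tag is an assumption -- substitute the image you pulled.
FROM phdrieger/mltk-container-golden-image-cpu:latest

# keras_preprocessing was split out of Keras/TensorFlow; newer images may omit it.
RUN pip install --no-cache-dir keras_preprocessing
```

For a quick, non-persistent test you could also run `docker exec <container_id> pip install keras_preprocessing` and re-run the ESCU search; that change is lost when the container is recreated, so the image-level fix above is the durable option.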
How can I create a Firewall Summary Report that shows Inbound Allow and Inbound Deny traffic?
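If the firewall data is CIM-mapped, a sketch along these lines could be a starting point. The data model and the `direction`/`action` values are assumptions -- adjust them to what your firewall add-on actually produces:

```
| tstats count from datamodel=Network_Traffic
    where All_Traffic.direction="inbound"
    by All_Traffic.action _time span=1h
| rename All_Traffic.action as action
| timechart span=1h sum(count) by action
```

This yields one column per action (e.g. allowed vs. blocked) over time, which can be saved as a report or dashboard panel.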
Hi!

We have been installing Splunk Universal Forwarder on different servers in our company's on-prem environment, to bring the logs into an index in our Splunk Cloud. We managed to do it on almost all servers running Ubuntu, CentOS and Windows. However, we are having problems on one Ubuntu server. For the installation, we did the following, just as we did for every other Ubuntu server:

dpkg -i splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
cd /opt/splunkforwarder/bin
./splunk start   (insert user and password)
Download splunkclouduf.spl
/opt/splunkforwarder/bin/splunk install app splunkclouduf.spl
./splunk add forward-server http-inputs-klar.splunkcloud.com:443
cd /opt/splunkforwarder/etc/system/local

Then we defined inputs.conf as:

# Monitor system logs for authentication and authorization events
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = linux_secure

# Fix bug in Ubuntu related to: "Events from tracker.log have not been seen for the last 90 seconds, which is more than the yellow threshold (45 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked."
[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1

# Monitor system logs for general security events
[monitor:///var/log/syslog]
disabled = false
index = spei_servers
sourcetype = linux_syslog

# Monitor Apache access and error logs
[monitor:///var/log/apache2/access.log]
disabled = false
index = spei_servers
sourcetype = apache_access

[monitor:///var/log/apache2/error.log]
disabled = false
index = spei_servers
sourcetype = apache_error

# Monitor SSH logs for login attempts
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = sshd

# Monitor sudo commands executed by users
[monitor:///var/log/auth.log]
disabled = false
index = spei_servers
sourcetype = sudo

# Monitor UFW firewall logs (assuming UFW is used)
[monitor:///var/log/ufw.log]
disabled = false
index = spei_servers
sourcetype = ufw

# Monitor audit logs (if available)
[monitor:///var/log/audit/audit.log]
disabled = false
index = spei_servers
sourcetype = linux_audit

# Monitor file integrity using auditd (if available)
[monitor:///var/log/audit/auditd.log]
disabled = false
index = spei_servers
sourcetype = auditd

# Monitor for changes to critical system files
[monitor:///etc/passwd]
disabled = false
index = spei_servers
sourcetype = linux_config

# Monitor for changes to critical system binaries
[monitor:///bin]
disabled = false
index = spei_servers
sourcetype = linux_config

# Monitor for changes to critical system configuration files
[monitor:///etc]
disabled = false
index = spei_servers
sourcetype = linux_config

Then:

echo "[httpout]
httpEventCollectorToken = <our index token>
uri = https:// <our subdomain>.splunkcloud.com:443" > outputs.conf
cd /opt/splunkforwarder/bin
export SPLUNK_HOME=/opt/splunkforwarder
./splunk restart

When going to Splunk Cloud, we don't see the logs coming from this specific server.
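One thing worth double-checking in the steps above: `add forward-server` configures classic tcpout (Splunk-to-Splunk) forwarding, while the outputs.conf written by `echo` contains an `[httpout]` (HEC) stanza, so the forwarder ends up with two competing output paths. Also, the `uri` must be one unbroken URL. A sketch of the HEC-only stanza, with placeholder values kept as placeholders:

```
[httpout]
httpEventCollectorToken = <our index token>
uri = https://<our subdomain>.splunkcloud.com:443
```

If the space after `https://` in the echoed command exists in the real file (and is not just redaction), splunkd will not parse the URI; and if you intend to use httpout, the tcpout config created by `add forward-server` should be removed, or vice versa.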
So we looked at our logs, and we saw this in health.log:

root@coas:/opt/splunkforwarder/var/log/splunk# tail health.log
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Forwarder Ingestion Latency" color=green due_to_stanza="feature:ingestion_latency_reported" node_type=feature node_path=splunkd.file_monitor_input.forwarder_ingestion_latency
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red due_to_stanza="feature:ingestion_latency" due_to_indicator="ingestion_latency_gap_multiplier" node_type=feature node_path=splunkd.file_monitor_input.ingestion_latency
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Ingestion Latency" color=red indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=1755 reason="Events from tracker.log have not been seen for the last 1755 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Large and Archive File Reader-0" color=green due_to_stanza="feature:batchreader" node_type=feature node_path=splunkd.file_monitor_input.large_and_archive_file_reader-0
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red due_to_stanza="feature:tailreader" due_to_indicator="data_out_rate" node_type=feature node_path=splunkd.file_monitor_input.real-time_reader-0
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Real-time Reader-0" color=red indicator="data_out_rate" due_to_threshold_value=2 measured_value=352 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." node_type=indicator node_path=splunkd.file_monitor_input.real-time_reader-0.data_out_rate
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Workload Management" color=green node_type=category node_path=splunkd.workload_management
01-09-2024 08:21:30.197 -0600 INFO PeriodicHealthReporter - feature="Admission Rules Check" color=green due_to_stanza="feature:admission_rules_check" node_type=feature node_path=splunkd.workload_management.admission_rules_check
01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="Configuration Check" color=green due_to_stanza="feature:wlm_configuration_check" node_type=feature node_path=splunkd.workload_management.configuration_check
01-09-2024 08:21:30.198 -0600 INFO PeriodicHealthReporter - feature="System Check" color=green due_to_stanza="feature:wlm_system_check" node_type=feature node_path=splunkd.workload_management.system_check

and this in splunkd.log:

root@coas:/opt/splunkforwarder/var/log/splunk# tail splunkd.log
01-09-2024 08:33:01.227 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:33:21.135 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:33:41.034 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:34:00.942 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:34:20.841 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:34:40.750 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:35:00.637 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out
01-09-2024 08:35:20.544 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.160.213.9:9997 timed out
01-09-2024 08:35:40.443 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=18.214.192.43:9997 timed out
01-09-2024 08:36:00.352 -0600 WARN AutoLoadBalancedConnectionStrategy [3273664 TcpOutEloop] - Cooked connection to ip=54.87.146.250:9997 timed out

Do you have any thoughts, or have you faced this issue in the past?
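The repeated "Cooked connection ... timed out" warnings on port 9997 usually point to network-level blocking between this one server and the Splunk Cloud indexers (a host firewall, proxy, or security-group rule that the working servers don't have). As a first check from the affected host, a TCP reachability probe can be sketched like this; the host is a placeholder -- use the inputs-*.splunkcloud.com endpoints from your credentials package:

```shell
HOST="127.0.0.1"   # placeholder; use your inputs-<stack>.splunkcloud.com endpoint
PORT=9997
# /dev/tcp is a bash built-in: the redirection succeeds only if the TCP handshake does
if timeout 3 bash -c "</dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
    echo "port ${PORT} on ${HOST} is reachable"
else
    echo "port ${PORT} on ${HOST} is blocked or filtered"
fi
```

If the probe fails only on this server, compare its firewall/proxy configuration (ufw, iptables, egress rules) with one of the working Ubuntu servers.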
Hello everyone, and Happy New Year!

I'm a newbie with Splunk, and I'm trying to build a fully dynamic dashboard with the Search & Reporting app. I work with Talend's logs. I'm looking to create a search bar for searching jobs directly, instead of using drop-down menus. Is there a way to put a search bar with a "search" button at the top of the dashboard? Thanks for reading.
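In Simple XML this is a text input inside a `<fieldset>` with `submitButton="true"`; the token it sets is then used in the panel search. A minimal sketch -- the index and field names here are hypothetical, so substitute whatever your Talend logs actually use:

```xml
<form>
  <label>Talend Jobs</label>
  <fieldset submitButton="true">
    <!-- searchWhenChanged="false" means the search runs only on Submit -->
    <input type="text" token="job_tok" searchWhenChanged="false">
      <label>Job name</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=talend job_name=$job_tok$ | stats count by job_name</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Because the token supports wildcards via the `*` default, users can type a partial job name like `load_*` and click Submit.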
Hi, has anyone else encountered a situation where the 'orig_time' field isn't showing up in Windows event logs with EventCode=7040?
Hello,

As I want to get my email events CIM compliant, I'm having trouble parsing a "disposition" key-value pair. Example event:

date=2024-01-09 time=11:59:43.258 device_id=XXXXXXXXXXXXXX log_id=0200012329 type=statistics pri=information session_id="4XXXXXXXXXXX-4XXXXXXXXXXXXX" client_name="example.com" disposition="Modify Subject;Insert Disclaimer;Defer Disposition" classifier="Data Loss Prevention" message_length="94756" subject="Test subject" message_id="xxxxxxxxxxxxxxxxxxxx@example.com" recv_time="" notif_delay="0" scan_time="0.186489" xfer_time="0.002166" srcfolder="" read_status="

I have the disposition field extracted at search time with the value "Modify Subject;Insert Disclaimer;Defer Disposition". What I need to do is separate the values into a multivalue field, and then use a lookup to determine the action.

Lookup file:

vendor_action,action
Accept,delivered
Reject,blocked
Add Header,delivered
Modify Subject,
Quarantine,quarantined
Discard,blocked
Replace,
Delay,
Rewrite,
Insert Disclaimer,
Defer Disposition,delivered
Disclaimer Body,delivered
Disclaimer Header,delivered
Defer,
Quarantine to Review,quarantined
Content Filter as Spam,
Encrypt,
Decrypt,
Alternate Host,
BCC,
Archive,
Customized repackage,
Repackage,
Notification,

In the end, the event should have a field named action; for this example, its value should be delivered.

My props.conf:

[fortimail]
...
LOOKUP-action = fortimail_action_lookup.csv vendor_action as disposition OUTPUT action
REPORT-disposition = disposition_extraction

My transforms.conf:

[disposition_extraction]
SOURCE_KEY = disposition
DELIMS = ";"
MV_ADD = true

But I just end up with the original value ("Modify Subject;Insert Disclaimer;Defer Disposition") and it doesn't get separated. What am I doing wrong?
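One likely culprit: with a single DELIMS value and no FIELDS list, the transform tries to split the source key into key=value pairs rather than into multiple values of one field. A regex-based transform is the usual way to build a multivalue field from a delimited string. A sketch, untested, reusing your stanza names (note the new target field name to avoid clobbering the source):

```
# transforms.conf
[disposition_extraction]
SOURCE_KEY = disposition
REGEX = ([^;]+)
FORMAT = disposition_mv::$1
MV_ADD = true

# props.conf -- LOOKUP- should reference a lookup *definition*, not the .csv filename
[fortimail]
REPORT-disposition = disposition_extraction
LOOKUP-action = fortimail_action_lookup vendor_action AS disposition_mv OUTPUT action
```

For a quick in-search test before touching .conf files, something like `... | makemv delim=";" disposition | lookup fortimail_action_lookup vendor_action AS disposition OUTPUT action` should show whether the lookup side behaves as expected once the field is multivalue.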
Hi, for the past 90 days we haven't detected any alerts triggered by the GitHub secret scanning rule in our Splunk ES. Consequently, we're unable to even query an index. Thank you.
Hi, here is the default SPL from the Splunk App for Data Science and Deep Learning (Time Series Anomalies with STUMPY - Time Series Anomaly Detection with Matrix Profiles):

| inputlookup cyclical_business_process.csv
| eval _time=strptime(_time, "%Y-%m-%dT%H:%M:%S")
| timechart span=15m avg(logons) as logons
| fit MLTKContainer algo=stumpy m=96 logons from _time into app:stumpy_anomalies
| table _time logons matrix_profile
| eventstats p95(matrix_profile) as p95_matrix_profile
| eval anomaly=if(matrix_profile>p95_matrix_profile,1,0)
| fields - p95_matrix_profile

Now I want to run this on my own data. Here is a sample of the log:

2022-11-30 23:59:00,122,124
2022-11-30 23:58:00,113,112
2022-11-30 23:57:00,144,143
2022-11-30 23:56:00,137,138
2022-11-30 23:55:00,119,120
2022-11-30 23:54:00,103,102
2022-11-30 23:53:00,104,105
2022-11-30 23:52:00,143,142
2022-11-30 23:51:00,138,139
2022-11-30 23:50:00,155,153
2022-11-30 23:49:00,100,102

timestamp: 2022-11-30 23:59:00
logons: 122

Here is the SPL that I run:

| rex field=_raw "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),(?<logons>\d+)"
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S")
| timechart span=15m avg(logons) as logons
| fit MLTKContainer algo=stumpy m=96 logons from _time into app:stumpy_anomalies
| table _time logons matrix_profile
| eventstats p95(matrix_profile) as p95_matrix_profile
| eval anomaly=if(matrix_profile>p95_matrix_profile,1,0)
| fields - p95_matrix_profile

Before the fit command, _time shows correctly, but after the fit command it's empty! FYI: logons, matrix_profile and anomaly return correctly, but _time is empty. Any idea?
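fit MLTKContainer only returns the columns the container hands back, and _time is easily lost in that round trip. A common workaround is to copy _time into an ordinary field before fit and restore it afterwards. A sketch based on the search above, assuming the container echoes its feature columns back unchanged:

```
| rex field=_raw "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),(?<logons>\d+)"
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S")
| timechart span=15m avg(logons) as logons
| eval ts=_time
| fit MLTKContainer algo=stumpy m=96 logons from ts into app:stumpy_anomalies
| eval _time=ts
| table _time logons matrix_profile
| eventstats p95(matrix_profile) as p95_matrix_profile
| eval anomaly=if(matrix_profile>p95_matrix_profile,1,0)
| fields - p95_matrix_profile
```

If ts also comes back empty, that would point at the notebook code in the container dropping non-feature columns, which is worth checking in the model's apply() function.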
Hi Splunkers,

I have a lookup, country_categorization, which holds a keyword and its equivalent country. In the main asset search, when the country field from the index is "not available" or "Unknown", we need to fall back to this lookup: each keyword is a prefix of the asset name (with multiple entries), and it should map to the equivalent country.

Index (Asset, country):
braiskdidi001, Britain
breliudusfidf002, Unknown
bruliwhdcjn001, not available

Lookup (keyword, country):
bru, Britain
bre, Britain

The output should be:
braiskdidi001, Britain
breliudusfidf002, Britain
bruliwhdcjn001, Britain

Thanks in advance!
Manoj Kumar S
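One way to express a prefix match in Splunk is a wildcard lookup: store the keywords with a trailing `*` (e.g. `bru*`) and declare the match type in the lookup definition. A sketch; the index name and lookup definition name are assumptions, and the CSV must be edited to hold wildcarded keywords:

```
# transforms.conf (lookup definition)
[country_categorization]
filename = country_categorization.csv
match_type = WILDCARD(keyword)

# search: fall back to the prefix lookup only when country is missing
index=assets
| lookup country_categorization keyword AS Asset OUTPUT country AS prefix_country
| eval country=if(country IN ("Unknown", "not available"),
                  coalesce(prefix_country, country), country)
```

With `keyword` values like `bru*` and `bre*` in the CSV, the lookup matches `bruliwhdcjn001` and `breliudusfidf002` by prefix, and `coalesce` keeps the original value when no prefix matches.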
Hello, I'd like to know how to locate the correlation searches that XSOAR is monitoring, rather than going through the Incident Review panel in ES. Could you please check whether there's a REST API search available for this? Thanks!
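In ES, correlation searches are ordinary saved searches flagged with `action.correlationsearch.enabled = 1`, so the standard saved-searches REST endpoint can list them. A sketch, to be run from the ES search head:

```
| rest /servicesNS/-/-/saved/searches count=0
| where 'action.correlationsearch.enabled' = "1"
| table title eai:acl.app disabled search
```

The same endpoint is reachable externally at `https://<search-head>:8089/servicesNS/-/-/saved/searches` if XSOAR needs to query it directly over the management port.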
I have an alert configured in Splunk, and the alert's search query is generating events, but I am not receiving any email alerts. Other alerts are working fine in my environment, and I have selected "Send email" as the alert action in Splunk.
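The sendemail action logs to the _internal index, so that is usually the quickest place to see why one specific alert's mail fails while others succeed. A triage sketch (the exact sourcetypes can vary by Splunk version, and the alert name is a placeholder):

```
index=_internal sendemail (ERROR OR WARN)

index=_internal sourcetype=splunk_python "sendemail" "<your alert name>"
```

Common findings there include per-alert recipient typos, a result set too large for the mail server, or the alert's trigger conditions suppressing the action even though the search itself returns events.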
Hello Splunkers!

When accessing the advanced search settings, the macro page is not visible anymore. We have a macros folder present under the app's default folder, but it is not visible in the UI.
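A common reason macros exist on disk but vanish from the UI is permissions/sharing: if the app's metadata (default.meta or local.meta) does not export them, or the viewing role lacks read access, the Settings page hides them. To confirm the macros still exist and inspect their ACLs, the configs endpoint can be queried. A sketch:

```
| rest /servicesNS/-/-/configs/conf-macros count=0
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read
```

If the macros show up here with `sharing=app` or restrictive read permissions, adjusting the `[macros]` stanza in the app's metadata (e.g. `export = system`) should make them reappear.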
Hi, we want to automate the creation of some common Health Rules & Policies for multiple applications at a time. Could you please suggest how we can implement this without creating them manually for each application individually?
Hi Splunkers,

I'm performing some tests in my test environment and I'm curious about an observed behavior. I want to add some network inputs, TCP and UDP ones, to my env. I easily found in the docs how to achieve this (Monitor network ports), and it works fine, with no issues. Inputs are correctly added to my Splunk; I can confirm this with no problem on both the web GUI and from the CLI using btool. My question is: if I use the command from the above link, inputs are added to the inputs.conf located in SPLUNK_HOME\etc\apps\search\local. For example, if I use:

splunk add tcp 3514 -index network -sourcetype checkpoint

and then run

splunk btool inputs list --debug | findstr 3514

the output is:

C:\Program Files\Splunk\etc\apps\search\local\inputs.conf [tcp://3514]

And, checking the file manually, the settings from my add command are indeed in it. So I assume that search is the default app if no additional parameter is provided. Now, I know that if I want to edit another inputs.conf file I can simply edit it manually. But what if I want to edit another inputs.conf from the CLI? In other words: can I use the splunk add command and specify which inputs.conf file to modify? Is it possible?
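I'm not aware of a documented flag on `splunk add tcp` for targeting an arbitrary app's inputs.conf; the CLI writes to its default app context (search). The reliable route is to create the stanza in the target app yourself and restart or reload. A sketch, with an assumed app name:

```
# %SPLUNK_HOME%\etc\apps\my_network_inputs\local\inputs.conf  (app name assumed)
[tcp://3514]
index = network
sourcetype = checkpoint
```

After a restart (or reloading inputs), `splunk btool inputs list --debug | findstr 3514` should then show the stanza resolving from the new app's local directory instead of search.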
Hi! I have saved historical data for a metric, from even before the agent was installed. Is there any way to load it into the Controller, so I can see this metric even for the period when I didn't have an agent installed? I don't see any API for that...

Thanks!
-Dimitri
Subject: Issue with Splunk server not starting after configuring TLS

Description: I'm encountering an issue with my Splunk server after configuring TLS. Here's a summary of the steps I've taken:

Placed the certificate files (cert.pem, cacert.pem, key.pem) in the directory /opt/splunk/etc/auth/mycerts/.

Modified the /opt/splunk/etc/system/local/server.conf file with the following configuration:

[sslConfig]
enableSplunkdSSL = true
sslVersions = tls1.2,tls1.3
serverCert = /opt/splunk/etc/auth/mycerts/cert.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cacert.pem
sslKeysfile = /opt/splunk/etc/auth/mycerts/key.pem

After restarting the Splunk server using ./splunk restart, the following messages were displayed:

Starting splunk server daemon (splunkd)...
Done
Waiting for web server at http://127.0.0.1:8000 to be available....
WARNING: web interface does not seem to be available!

Additionally, checking the status with ./splunk status reports: splunkd is not running.

Could someone assist me in troubleshooting this issue? I'm unsure why the Splunk server is not starting properly after enabling TLS. Thank you for your help!
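The first place to look is /opt/splunk/var/log/splunk/splunkd.log, which usually names the exact SSL failure. Two things also worth checking: in recent Splunk versions, `serverCert` is expected to be a combined PEM (server certificate + private key + chain), and `sslKeysfile` is deprecated, so verify that against your version's server.conf docs. Independently, the certificate files themselves can be validated with openssl. The block below generates a throwaway self-signed cert purely so the commands have something to run on; run checks 1-3 against your real cert.pem/key.pem/cacert.pem:

```shell
# Throwaway self-signed cert, only so the commands below have something to run on
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=splunk-tls-test" 2>/dev/null

# 1. The key must match the certificate: both modulus digests must be identical
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa  -noout -modulus -in key.pem  | openssl md5

# 2. The certificate must verify against the CA bundle
#    (self-signed here, so the cert acts as its own CA; use cacert.pem for real)
openssl verify -CAfile cert.pem cert.pem

# 3. The validity window must cover "now"
openssl x509 -noout -dates -in cert.pem
```

A mismatched key, a broken chain, or an encrypted key without `sslPassword` set are the three most common causes of splunkd refusing to start after enabling TLS.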
From Splunk, can I see the queries that have been executed in the database, like UPDATE, DELETE, INSERT, etc.?
Hello team, as we delve into Splunk Attack Range 3.0, we're interested in understanding which MITRE ATT&CK tactics and techniques can be simulated within this environment. If you have information on this, kindly share it with us. Thank you!
I have this query, which is working as expected. There are two different bodies, axs_event_txn_visa_req_parsedbody and axs_event_txn_visa_rsp_formatting, and the field common to both is F62_2:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]") OR eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp, values(F19) as F19, values(FCO) as FCO, values(VRC) as VRC by F62_2
| where F19!=036 AND FCO=01

Now let's say I want to rewrite this query using appendcols/substring: take TID from axs_event_txn_visa_req_parsedbody, and pass the resulting output to another query so I can find the corresponding log. For example:

Table 1:
Name, Emp-id
Jayesh, 12345

Table 2:
Designation, Emp-id
Engineer, 12345

Use Emp-id from Table 1 and get the Designation from Table 2. Similarly, TID is the common field between the two searches, and I want to fetch VRC using TID from Table 1:

index=au_axs_common_log source=*Visa* "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
| appendcols [ search index=au_axs_common_log source=*Visa* "FORMATTING:"
    | rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
    | rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
    | stats values(VRC) as VRC by TID ]
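A caution on the approach above: appendcols pairs rows purely by position, not by TID, so the two result sets can silently misalign. Since TID is a shared key, it is usually safer to bring both event types into one search and group by TID, the same pattern the original F62_2 query already uses. A sketch reusing the extractions above:

```
index=au_axs_common_log source=*Visa*
    ("++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]" OR "FORMATTING:")
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
| stats values(VRC) as VRC by TID
```

Because stats groups by the key itself, each TID row carries the VRC from whichever event contained it, with no reliance on row ordering.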
Hello, I have a dashboard where a drop-down list works for me, since I have Splunk admin access, whereas the same drop-down list is not populating for a user with user-level access. How do I troubleshoot this issue? Thanks