All Topics

I installed the app and could launch it and get to the search page. However, I am unable to execute any search that uses the exportalerts command. I get the error message: "Error in 'exportalerts' command: Cannot find program 'exportalerts' or script 'exportalerts'."
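For reference, one quick way to check whether the command is registered at all is a REST search against the commands endpoint (a sketch; it assumes your role is allowed to run | rest):

| rest /servicesNS/-/-/data/commands splunk_server=local
| search title=exportalerts
| table title eai:acl.app disabled

If nothing comes back, the app's commands.conf was never picked up; if the command shows up but is scoped to another app, it is a sharing/permissions problem rather than a missing script.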
Hi everyone, when I click on an area on the map I want to link to another dashboard. How do I set this up? For example, as in the picture: when I click on Beijing, link to dashboard A; when I click on Shanghai, link to dashboard B. How do I configure this?
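A sketch of one way to do this with conditional drilldown in Simple XML, assuming the dashboards are named dashboard_a and dashboard_b (hypothetical names) and the clicked city arrives in $click.value$ (the exact token can vary by map visualization, so verify it with a token debug panel first):

<drilldown>
  <condition match="$click.value$ == &quot;Beijing&quot;">
    <link target="_blank">/app/search/dashboard_a</link>
  </condition>
  <condition match="$click.value$ == &quot;Shanghai&quot;">
    <link target="_blank">/app/search/dashboard_b</link>
  </condition>
</drilldown>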
I am trying to make the first two columns of a table output sticky. I can do one by using

<html>
  <style>
    #myTable th:first-child, #myTable td:first-child {
      left: 0;
      z-index: 9999;
      position: sticky;
    }
  </style>
</html>

The above code works for one column on the left, but I want two to be sticky.
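A sketch of one way to extend this to a second column, added inside the same <style> block; the 120px offset is an assumption and must match the rendered width of your first column:

#myTable th:nth-child(2), #myTable td:nth-child(2) {
  left: 120px;      /* must equal the actual width of column 1 */
  z-index: 9999;
  position: sticky;
}

The second column needs its own left offset because sticky positioning pins each cell relative to the scroll container, so left: 0 on column 2 would make it slide underneath column 1.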
I'm trying to do a simple query to get a hostname from events in a different sourcetype. I have an event in sourcetype A, which doesn't have a field "host_name". This field is present in sourcetype B. The index is the same; let's call it X. Both events can be matched through the field "sensor_id". I want to retrieve the field "process_command_line" from sourcetype A and "host_name" from sourcetype B for the events that share the same "sensor_id". Here's a sample query that works:

index=X sourcetype=B [search index=X sourcetype=A | table sensor_id] | table sensor_id host_name

However, I also need to retrieve process_command_line, which is only present in sourcetype A. If I add that to the subsearch, it retrieves zero results:

index=X sourcetype=B [search index=X sourcetype=A | table sensor_id process_command_line] | table sensor_id host_name process_command_line

Any idea how I can retrieve all three fields?
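The likely cause of the zero results: a subsearch's output is rewritten into field=value terms, so adding process_command_line makes the outer search require sourcetype B events to contain that field, which they never do. A sketch of a subsearch-free alternative using stats (assuming sensor_id is a reasonable grouping key):

index=X (sourcetype=A OR sourcetype=B)
| stats values(process_command_line) as process_command_line values(host_name) as host_name by sensor_id
| where isnotnull(process_command_line) AND isnotnull(host_name)

The stats collapses both sourcetypes onto one row per sensor_id, and the where keeps only sensors that appeared in both.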
I am trying to define various functions at each component level. However, I have multiple Splunk environments and I want to split the indexer group by region. Say I have 2 indexers in region A, 5 in region B, and 1 in region C. Apart from splunk_server_group=dmc_group_indexers, can I call a custom group in my REST query to fetch only the indexers in a particular region? Or is it possible to call one via a custom macro? Please throw some light on this.
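A sketch of what I believe should work: define a custom distributed search group in distsearch.conf on the node running the searches (the group name region_a_indexers and the host names are hypothetical):

# distsearch.conf
[distributedSearch:region_a_indexers]
servers = idx-a-01:8089,idx-a-02:8089

then reference it the same way as the built-in dmc_ groups:

| rest splunk_server_group=region_a_indexers /services/server/info
| table splunk_server version

Wrapping the splunk_server_group=... argument in a macro should also work, since macros expand before the search is parsed.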
Hello everyone, I'm trying to use the new custom containers feature to create a container with the numpy, pandas, and feature-engine packages. The container is created successfully, but every time I try to send data to it the search returns the message "container unable to read JSON response from http://localhost:<api_port>/fit". It only happens with the custom container; the mltk-container-golden-image-cpu:5.0.0 continues to work well. I've tried almost all the solutions I could find for the same error, but none of them work. Can anyone help me, please?
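For anyone triaging the same thing, a minimal sketch of checks on the Docker host (container name and port are placeholders; a common culprit is the container's Python API dying on startup, e.g. an import error from a version conflict between the added packages):

docker ps -a                          # is the custom container running, or restart-looping?
docker logs <container_name>          # look for Python tracebacks when the image starts
curl -s http://localhost:<api_port>/  # does the container's API answer at all?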
I would like to sync my production search head to a development search head on a daily/weekly basis. I need the same apps/configs on the development server for testing before moving approved configs back into production. Any tips on building a development search head and pulling in the production configs/apps?
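A minimal sketch of one approach, assuming standalone Linux instances at /opt/splunk with SSH between them (the dev-sh hostname is a placeholder); instance-specific files such as server.conf, certificates, and passwd should not be copied across:

# On the production search head: archive apps and user-level knowledge objects
tar -czf /tmp/prod-etc.tgz -C /opt/splunk/etc apps users

# Push to the development search head and unpack
scp /tmp/prod-etc.tgz dev-sh:/tmp/
ssh dev-sh 'sudo tar -xzf /tmp/prod-etc.tgz -C /opt/splunk/etc && sudo -u splunk /opt/splunk/bin/splunk restart'

Run it from cron on whatever cadence you need; on a search head cluster you would pull from the deployer's shcluster/apps directory instead.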
I've been fighting all day trying to figure out what keeps causing the above error when starting Splunk. Here is some background:

OS: CentOS Stream 9
Kernel: Linux 5.14.0-295.el9.x86_64
Splunk: splunk-9.0.4.1-419ad9369127-Linux-x86_64.tgz

Earlier today I ran version 8.2.9, but as it kept failing I thought it could be the Splunk version plus some systemctl issues (I've read quite a bit about those), but after upgrading it's still the same. The service was set up with:

sudo /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk

sudo systemctl status Splunkd shows:

× Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
     Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Wed 2023-08-02 19:53:42 CEST; 1h 37min ago
   Duration: 983us
    Process: 402262 ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=8)
    Process: 402263 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/system.slice/Splunkd.service (code=exited, status=0/SUCCESS)
   Main PID: 402262 (code=exited, status=8)
        CPU: 7ms

aug 02 19:53:42 localhost.localdomain systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Converting job Splunkd.service/restart -> Splunkd.service/start
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Consumed 7ms CPU time.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Start request repeated too quickly.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed with result 'exit-code'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service restart not allowed.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed dead -> failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99417 Splunkd.service/start finished, result=failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Unit entered failed state.

If I run ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd directly from the command line, Splunk starts without any problems - I don't get it. I've edited /etc/systemd/system.conf and added:

LogLevel=debug

Running journalctl -xeu Splunkd.service writes:

░░
░░ The unit Splunkd.service completed and consumed the indicated resources.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Will spawn child (service_enter_start): /opt/splunk/bin/splunk
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: cgroup-compat: Applying [Startup]CPUShares=1024 as [Startup]CPUWeight=100 on /system.slice/Splunkd.service
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed to set 'io.weight' attribute on '/system.slice/Splunkd.service' to 'default 100': No such file or directory
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: cgroup-compat: Applying MemoryLimit=7922106368 as MemoryMax=
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Passing 0 fds to service
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: About to execute /opt/splunk/bin/splunk _internal_launch_under_systemd
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Forked /opt/splunk/bin/splunk as 402262
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Will spawn child (service_enter_start_post): /bin/bash
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: About to execute /bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/Splunkd.service"
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Forked /bin/bash as 402263
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed dead -> start-post
aug 02 19:53:42 localhost.localdomain systemd[1]: Starting Systemd service file for Splunk, generated by 'splunk enable boot-start'...
░░ Subject: A start job for unit Splunkd.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit Splunkd.service has begun execution.
░░
░░ The job identifier is 99280.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: User lookup succeeded: uid=1002 gid=1002
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: User lookup succeeded: uid=1002 gid=1002
aug 02 19:53:42 localhost.localdomain systemd[402263]: Splunkd.service: Executing: /bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/Splunkd.service"
aug 02 19:53:42 localhost.localdomain systemd[402262]: Splunkd.service: Executing: /opt/splunk/bin/splunk _internal_launch_under_systemd
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Child 402263 belongs to Splunkd.service.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Control process exited, code=exited, status=0/SUCCESS (success)
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ An ExecStartPost= process belonging to unit Splunkd.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 0.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Got final SIGCHLD for state start-post.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed start-post -> running
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99280 Splunkd.service/start finished, result=done
aug 02 19:53:42 localhost.localdomain systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
░░ Subject: A start job for unit Splunkd.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit Splunkd.service has finished successfully.
░░
░░ The job identifier is 99280.
aug 02 19:53:42 localhost.localdomain splunk[402262]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 19:53:42 localhost.localdomain splunk[402262]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 19:53:42 localhost.localdomain splunk[402262]: ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Child 402262 belongs to Splunkd.service.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Main process exited, code=exited, status=8/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ An ExecStart= process belonging to unit Splunkd.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 8.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service has entered the 'failed' state with result 'exit-code'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service will restart (restart setting)
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed running -> failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Unit entered failed state.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Consumed 7ms CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service completed and consumed the indicated resources.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed failed -> auto-restart
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service RestartSec=100ms expired, scheduling restart.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Trying to enqueue job Splunkd.service/restart/replace
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Installed new job Splunkd.service/restart as 99417
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Enqueued job Splunkd.service/restart as 99417
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 5.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ Automatic restarting of the unit Splunkd.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed auto-restart -> dead
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99417 Splunkd.service/restart finished, result=done
aug 02 19:53:42 localhost.localdomain systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
░░ Subject: A stop job for unit Splunkd.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit Splunkd.service has finished.
░░
░░ The job identifier is 99417 and the job result is done.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Converting job Splunkd.service/restart -> Splunkd.service/start
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Consumed 7ms CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service completed and consumed the indicated resources.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Start request repeated too quickly.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit Splunkd.service has entered the 'failed' state with result 'exit-code'.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Service restart not allowed.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Changed dead -> failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Job 99417 Splunkd.service/start finished, result=failed
aug 02 19:53:42 localhost.localdomain systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
░░ Subject: A start job for unit Splunkd.service has failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit Splunkd.service has finished with a failure.
░░
░░ The job identifier is 99417 and the job result is failed.
aug 02 19:53:42 localhost.localdomain systemd[1]: Splunkd.service: Unit entered failed state.

The key error in all of the above is:

aug 02 22:14:58 localhost.localdomain systemd[404267]: Splunkd.service: Executing: /opt/splunk/bin/splunk _internal_launch_under_systemd
aug 02 22:14:58 localhost.localdomain splunk[404267]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 22:14:58 localhost.localdomain splunk[404267]: Couldn't open "/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied
aug 02 22:14:58 localhost.localdomain splunk[404267]: ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment
aug 02 22:14:58 localhost.localdomain systemd[1]: Splunkd.service: Child 404267 belongs to Splunkd.service.
aug 02 22:14:58 localhost.localdomain systemd[1]: Splunkd.service: Main process exited, code=exited, status=8/n/a

The text

"/opt/splunk/bin/../etc/splunk-launch.conf": Permission denied

does not make any sense, since the file is set as:

-rwxrwxrwx. 1 splunk splunk 765 2 aug 19:31 splunk-launch.conf

Any help would be highly appreciated.
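One avenue worth checking (an assumption on my part, not something confirmed by the logs): on CentOS Stream 9 with SELinux enforcing, systemd-launched processes can be denied access to files whose SELinux context is wrong even when the POSIX permissions are wide open, which would also explain why the same command works fine from an interactive shell. A sketch of the checks:

getenforce                                  # is SELinux enforcing?
ls -Z /opt/splunk/etc/splunk-launch.conf    # inspect the file's SELinux context
sudo restorecon -Rv /opt/splunk             # reset contexts to the policy defaults
sudo ausearch -m avc -ts recent             # look for recent SELinux denials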
I'm trying to install the Gitlab Add-on on my distributed system with search head and indexer clustering. Where does the Gitlab add-on need to be installed? https://splunkbase.splunk.com/app/4381

PS: I installed the app on the SH cluster, but the Gitlab add-on UI just keeps loading, and I'm not able to configure the app with the Gitlab token.
I have a metric from AWS for the number of messages visible in an SQS queue, which gets computed every 5 minutes:

2023-08-02 11:50:00    13.3
2023-08-02 11:55:00    0.0
2023-08-02 12:00:00    33.8
2023-08-02 12:05:00    0.0

This means that there were 13 messages in the queue, and 5 minutes later they were gone (processed). Then there were 33, then they were gone (processed). If messages do not get processed, I'd expect this number to keep growing rather than drop back, and I need to set up an alert for when that happens. Is there some way to alert when a value grows, say, over 5 consecutive rows? Or is there a way to compare a value to itself at different points in time?
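A sketch using streamstats (the index/source and the field name value are placeholders for wherever the metric lands):

index=<aws_metrics> source=<sqs_visible_messages>
| sort 0 _time
| streamstats current=f window=1 last(value) as prev_value
| eval grew=if(value > prev_value, 1, 0)
| streamstats window=5 sum(grew) as consecutive_growth
| where consecutive_growth >= 5

streamstats with current=f window=1 carries the previous row's value forward, which is the usual way to compare a value to itself one step back; the second streamstats then counts how many of the last 5 steps were increases, so the where only fires on sustained growth.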
Is it possible to clone dashboards from the Enterprise Security app into a private custom app so that I can modify them for the users in my environment? I tried cloning the Identity Investigator dashboard, but it straight up won't load at all, so I'm wondering if this is even possible. Thanks!
When the page gets reloaded for my Dashboard Studio dashboard, all of the inputs get reset. I have to enter all the inputs again, which is disruptive to my workflow. This is particularly annoying in two cases:

1. When I reboot my machine, Chrome remembers my tabs but reloads the pages, so after a reboot I have to enter all the inputs again.
2. When SSO times out at my company, a login page loads and after auth it navigates back to the dashboard. This can happen at any time of the day.

Is there a solution for this? Can I encode the input values into the URL for the dashboard so it will automatically load with the correct values even if the page is reloaded?

Here is the source for my inputs:

{
    "type": "input.timerange",
    "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
    },
    "title": "Time Range"
}
{
    "options": {
        "items": [
            { "label": "All", "value": "US, CA, GB" },
            { "label": "US", "value": "US" },
            { "label": "CA", "value": "CA" },
            { "label": "GB", "value": "GB" }
        ],
        "defaultValue": "US, CA, GB",
        "token": "selectedRegion"
    },
    "title": "Region",
    "type": "input.dropdown"
}
{
    "options": {
        "items": [
            { "label": "Unique Companies", "value": "realms" },
            { "label": "Percentage of Total Traffic", "value": "percentage" }
        ],
        "defaultValue": "realms",
        "token": "selectedMode"
    },
    "title": "Mode",
    "type": "input.dropdown"
}
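For what it's worth, recent Dashboard Studio versions read input tokens from form.* URL parameters the same way classic dashboards do, so a bookmarked URL along these lines (host, app, and dashboard names are placeholders; verify against your Splunk version) should survive reloads with the token names above:

https://<splunk-host>/en-US/app/<app-name>/<dashboard-id>?form.selectedRegion=US&form.selectedMode=realms&form.global_time.earliest=-24h%40h&form.global_time.latest=now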
Hello, I have some array data located within a field in my data. It comes from DBConnect and isn't exactly JSON. I'm trying to figure out how to extract this data so I can make it useful, but I've gone through several iterations and have been unsuccessful so far. Here's a sample record (I picked a fairly long one as an example):

2023-08-02 08:23:28.000, CASE_ID="50031iIQAQ", STAGE="Initial", EVENT_TYPE="SS", SUBMISSION_DATE="2023-08-02 06:23:23.0", SUBMISSION_TYPE="Application INITIAL SUBMISSION", DESTINATION=""Application ", SUBMISSION_DATA="{ "submissionType": "Application INITIAL SUBMISSION", "submissionDate": "2023-08-02 06:23:23", "stage": "Initial", "repository": "central", "xxxxMessage": { "header": { "timeStamp": "2023-08-02 06:23:24", "source": "XXXXX", "messageType": "XXX CCC SSS", "domain": "APPLICATION/SUBMISSION" }, "body": { "submissionUnit": { "field": [ { "value": "500871iIQAQ", "name": "sourceId" }, { "value": "CCC VVVV", "name": "submission_unit_type" }, { "value": "1", "name": "submission_unit_number" }, { "value": "2023-08-02 06:23:23", "name": "submit_date" }, { "value": "2023-08-02", "name": "xxxx_received_date" } ] }, "submission": { "field": [ { "value": "5000871iIQAQ", "name": "sourceId" }, { "value": "00132546", "name": "submission_event_id" }, { "value": "XXXX", "name": "submission_type" }, { "value": "1", "name": "submission_number" }, { "value": "Pending", "name": "submission_status" }, { "value": "2023-08-02", "name": "submission_status_effective_date" }, { "value": "TTTT 1", "name": "xxxxx_xxxxxx" }, { "value": "String of data goes here", "name": "proposed_change" } ] }, "referencedMonographs": { "referencedMonograph": [ { "field": [ { "value": "a001YmysUAC", "name": "sourceId" }, { "value": "XXX2", "name": "ref_monograph_number" }, { "value": " String of data goes here", "name": "ref_monograph_description" } ] } ] }, "organizations": { "organization": [ { "field": [ { "value": "a07000nMzDQAU", "name": "sourceId" }, { "value": "String of data goes here", "name": "organization_name" }, { "value": "117185493", "name": "xxxx_number" }, { "value": "12071", "name": "company_global_id" }, { "value": "Requestor", "name": "contact_type" }, { "value": "String of data goes here", "name": "address_line1" }, { "value": "CityName", "name": "city" }, { "value": "US", "name": "country" }, { "value": "XX", "name": "state" }, { "value": "XXXXX-AAAA", "name": "postal_code" } ], "contact": { "field": [ { "value": "a07nMzDQAU", "name": "sourceId" }, { "value": "Requestor", "name": "contact_type" }, { "value": "XXXX", "name": "first_name" }, { "value": "CCCC", "name": "last_name" }, { "value": "2221112222", "name": "phone_number" }, { "value": "test@test.com", "name": "email_address" }, { "value": "1111 NW 1st St", "name": "address_line1" }, { "value": "CCCCC", "name": "city" }, { "value": "United States", "name": "country" }, { "value": "CD", "name": "state" }, { "value": "11111", "name": "postal_code" } ] } } ] }, "ingredients": { "ingredient": [ { "field": [ { "value": "a041R8IbQAK", "name": "sourceId" }, { "value": "XXXXXXXXX", "name": "sssss_aaaaaa" }, { "value": "U8LYN0Y118", "name": "CCCC" }, { "value": "311218848", "name": "wwwwww_global_id" }, { "value": "String of data goes here", "name": "XXXXX_strength" }, { "value": "XX", "name": "numerator_unit" }, { "value": "12.00", "name": "numerator_strength" }, { "value": "1", "name": "denominator_unit" }, { "value": "1.00000", "name": "denominator_strength" }, { "value": "40", "name": "CCCCCC" }, { "value": "Month", "name": 
"xxxxx_frequency" }, { "value": "String of data goes here", "name": "age_group" }, { "value": "String of data goes here", "name": "xxxxx_form" }, { "value": "xxxxx", "name": "xxx_xxxx_xxx" }, { "value": "xxxxxx/ccccc", "name": "xxxxx_class" }, { "value": "0117984AB", "name": "xxxxx_xxxxx" }, { "value": "U8LY118", "name": "xxxxx_value" }, { "value": "xx", "name": "xxxxx_type" } ] } ] }, "xxxx_event_id": "00132542", "contacts": { "contact": [ { "field": [ { "value": "a070nMzEQAU", "name": "sourceId" }, { "value": "String of data goes here", "name": "contact_type" }, { "value": "XXXXXXXX", "name": "first_name" }, { "value": "CCCCCCCC", "name": "last_name" }, { "value": "+12223334444", "name": "phone_number" }, { "value": "test@test.com", "name": "email_address" }, { "value": "321 Drive", "name": "address_line1" }, { "value": "XXXXXXXXX", "name": "city" }, { "value": "United States", "name": "country" }, { "value": "VVVVVVV", "name": "state" }, { "value": "11111", "name": "postal_code" } ] } ] }, "attachment_metadata": { "total_attachment_count": 1, "application_attachment": { "type": "application", "submission_attachment": { "sub_submission": [ { "name": "String of data goes here", "attachment_count": 1 } ], "name": "String of data goes here" }, "name": "CCCCC" } }, "application": { "referencedMonographs": { "referencedMonograph": [ { "field": [ { "value": "a0Z3S000001YmynUAC", "name": "sourceId" }, { "value": "M009", "name": "ref_monograph_number" }, { "value": " String of data goes here", "name": "ref_monograph_description" } ] } ] }, "field": [ { "value": "5003872puQAA", "name": "sourceId" }, { "value": "XXXXX", "name": "application_type" }, { "value": "12345678", "name": "application_number" }, { "value": "Pending", "name": "application_status" }, { "value": "2023-08-02", "name": "application_status_effective_date" }, { "value": "Requestor", "name": "requestor_role" }, { "value": "test1", "name": "application_justification" } ] } } }, "eventType": "WE", "documentDestination": "Applications", "caseID": "5003871iIQAQ", "attachments": [ { "type": "docx", "storageLocation": "/XXXX/Data Stored HEre", "processedInd": "N", "name": "Test 3.docx", "fileSize": "11885", "fileId": "0683S000001o43hQAA", "file_metadata": [ { "value": "String of data goes here", "key": "docCategory" }, { "value": "1.1", "key": "sectionNumber" }, { "value": "Table of Contents", "key": "sectionName" }, { "value": "5003iIQAQ", "key": "sourceId" } ], "contentVersionId": "xxx/cccc/dddddd/0693hQAM/0683001QAA/Test 3.docx" } ] }", XXXX_MESSAGE_ID="7eewrty6e9-00ea-4e58-981f-3cn56igb82f", CREATED_BY="XXXX_CCC_APP", CREATED_DATETIME="2023-08-02 09:23:25", MODIFIED_BY="XXXX_CCC_APP", MODIFIED_DATETIME="2023-08-02 09:23:28", PROCESSED_STATUS="success"         Anyone have any ideas how I might accomplish getting the SUBMISSION_DATA field extracted properly? Thanks for the help
A new entry appears every few days in the Forwarder Management area, and phone homes only work for the latest entry. Same host name, same IP address; only the client name is different. Any ideas?
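A sketch of what seems worth checking on the forwarder (paths assume a Linux Universal Forwarder at /opt/splunkforwarder): the deployment server keys clients by GUID and client name, so if either keeps changing, a "new" client appears each time.

cat /opt/splunkforwarder/etc/instance.cfg                              # the GUID should stay constant across restarts
/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug    # shows where clientName is being set from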
I am trying to create an alert or a report to track the number of deferred searches. We had an issue where the cluster captain deferred a massive number of searches, and it messed up a few things, so we want an alert to help mitigate that in the future. In addition to asking for the best way to create such an alert, I'd also like some clarification on how to find the deferred searches.

Through the monitoring console, either on the DMC or the Cluster Master, I thought I had seen a panel for deferred searches, but I cannot find one now. When I run

index=_internal earliest=-24h "status=skipped" sourcetype=scheduler
| stats count by host app
| sort - count

I get results, but if I change the status to deferred, which I assume is a valid status, I do not get anything. I was advised to run

| rest /services/search/jobs
| search status=deferred
| table id, search, app, owner, earliest_time, latest_time, status, sid

but I do not get any status; status is not a field there.

The main question I have is: how do I access the number of deferred searches? If I can find that, I can run stats count on it. Thank you.
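For reference, a sketch of what should work against the scheduler logs (assuming your version records deferred as a scheduler status, as recent 8.x/9.x releases do); note that | rest /services/search/jobs exposes dispatchState rather than status, which is likely why the suggested REST search returned nothing:

index=_internal sourcetype=scheduler status=deferred earliest=-24h
| stats count by host app savedsearch_name
| sort - count

Saving this as a scheduled alert with a count threshold is the usual pattern for catching a deferral storm early.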
I am trying to dig through some records and get the q (query) parameter from the raw data, but I keep getting data back that runs past the requested field, starting at a backslash (mostly as a Unicode escape, \u0026, which is an &). For example, I have this search to capture the page from which a search is being made (i.e., "location"):

index="xxxx-data" | regex query="location=([a-zA-Z0-9_]+)+[^&]+" | rex field=_raw "location=(?<location>[a-zA-Z0-9%-]+).*" | rex field=_raw "q=(?<q>[a-zA-Z0-9%-_&+/]+).*" | table location,q

This mostly works in the Statistics tab, except that it occasionally returns the next URL parameter as well:

location    q
home_page    hello+world    // this is ok
about_page    goodbye+cruel+world\u0026anotherparam=anotherval    // not ok

The second result should just be goodbye+cruel+world without the following parameter. I have tried adding variations on a negated class [^\\] for the backslash character, but everything I've tried has either produced an error about the final bracket being escaped, or ignored the backslash, like so (rex field=_raw ... attempt → result):

"q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\\]).*" → goodbye+cruel+world\u0026param=val
"q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\]).*" → Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*': Regex: missing terminating ] for character class.
"q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*" → Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*': Regex: missing terminating ] for character class.
"q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\u0026]).*" → Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\u0026]).*': Regex: PCRE does not support \L, \l, \N{name}, \U, or \u.
"q=(?<q>[a-zA-Z0-9%-_&+/]+[^u0026]).*" → goodbye+cruel+world\u0026param=val
"q=(?<q>[a-zA-Z0-9%-_&+/]+[^&]).*" → goodbye+cruel+world\u0026param=val
"q=(?<q>[a-zA-Z0-9%-_&+/]+).*" → goodbye+cruel+world\u0026param=val
"q=(?<q>[a-zA-Z0-9%-_&+/^\\\\]+)" → goodbye+cruel+world\u0026param=val

The Events tab data looks like:

apple: honeycrisp
ball: baseball
car: Ferrari
query: param1=val1&param2=val2&param3=val3&q=goodbye+cruel+world&param=val
status: 200
... etc ...

So, how can I get the q value to return just the first parameter, ignoring anything that has a \ or & before it, terminating right at the end of q's value? And please, if you would be so kind, include an explanation of why your suggestion works. Thanks
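For what it's worth, the behavior above is explained by the character class itself: in [a-zA-Z0-9%-_&+/], the %-_ portion is a range from % (0x25) to _ (0x5F), which happens to include both the backslash (0x5C) and the ampersand (0x26), and & is also listed explicitly, so the class keeps matching straight through \u0026 and into the next parameter. A sketch of a fix (assuming q values only ever contain word characters, %, and +):

index="xxxx-data"
| rex field=_raw "location=(?<location>[A-Za-z0-9%_-]+)"
| rex field=_raw "q=(?<q>[\w%+]+)"
| eval q=urldecode(q)
| table location, q

\w covers letters, digits, and underscore; the hyphen sits last in the class so it is literal rather than a range; and because neither \ nor & is in the class, the match stops exactly where the value ends. The urldecode() is optional, to decode any percent-escaped characters in the captured value.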
Hello, I'm trying to create a search to identify instances of bulk system deletions that took place within a one-minute time frame, and to find a way to consolidate all the results into a single search query. Thanks
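A sketch of the usual shape for this kind of detection (the index, sourcetype, action value, and threshold of 10 are all placeholders to adapt to your deletion events):

index=<your_index> sourcetype=<your_sourcetype> action=deleted
| bin _time span=1m
| stats count as deletions values(object) as objects by _time, user
| where deletions > 10

bin buckets events into one-minute windows, stats counts deletions per user per window, and where keeps only the bulk cases, so one query returns every offending window.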
I am populating a drop-down on a Dashboard Studio dashboard from a lookup table. I want to display one column as the selection label in the drop-down but use another column's value in the searches. I know it's possible in a classic dashboard, but I'm not sure about Dashboard Studio. Thanks for the help
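A sketch, assuming a lookup with columns display_name and id (hypothetical names). The input's data source search returns a label field and a value field:

| inputlookup my_lookup.csv
| rename display_name as label, id as value
| table label value

and, if your Studio version supports dynamic options, the dropdown definition maps them with the items selector (syntax per recent Dashboard Studio releases; verify against your version's documentation):

"options": {
    "items": "> frame(label, value) | prepend(formInputs.items) | objects()",
    "token": "my_token"
}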
I'm trying to create an outbound port on our Splunk Cloud instance without any luck:

curl -X POST 'https://admin.splunk.com/important-iguana-u5q/adminconfig/v2/access/outbound-ports' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0IiwiYWxnI...' \
--header 'Content-Type: application/json' \
--data-raw '{
    "outboundPorts": [{"subnets": ["34.226.34.80/32", "54.226.34.80/32"], "port": 8089}],
    "reason": "testing federated search connection"
}'

Following the documentation, I receive the error:

{
    "code": "404-stack-not-found",
    "message": "stack not found. Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}

I've also tried importing the curl into Postman, but I get the same answer there. Has anyone faced the same issue?

kr Sandro
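One low-risk way to narrow this down (a sketch, using the same stack name and token as above): the same ACS endpoint supports a read-only GET, so if this also returns 404-stack-not-found, the problem is the stack name or token rather than the POST payload:

curl 'https://admin.splunk.com/important-iguana-u5q/adminconfig/v2/access/outbound-ports' \
--header 'Authorization: Bearer <your token>'

The stack name in the URL has to match the Splunk Cloud stack name exactly as it appears in https://<stack>.splunkcloud.com, and as far as I know ACS is not available on trial stacks; both are common causes of this particular 404.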
I need to understand which event types each search result record belongs to. My search:

index="a" AND eventtype="*"

I want the results to contain a field with the list of matching event types; a table with columns _raw and eventtypes would be fine. We have 10k+ event types and thousands of events. Is this possible to achieve? Thanks.
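For what it's worth, eventtype is already a multivalue field that Splunk populates at search time with every matching event type, so a sketch as simple as this may do it:

index="a" eventtype=*
| eval eventtypes=mvjoin(eventtype, ", ")
| table _raw eventtypes

mvjoin flattens the multivalue list into one comma-separated string; with 10k+ event types the search-time tagging itself is the expensive part, not the table.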