All Topics

Hello Co-Splunkers, greetings. I have an issue to fix in a Splunk visualization (cluster map). I am plotting values based on LAT and LON fields, and the plotting works well, but is there any way to plot a continuous line instead of bubble marks? I need the bubble points connected by a line, the way Google Maps connects a start point to an end point. Please take a look at the attached snaps. Thanks in advance for your responses and for the time spent on my question. (First snap: my data. Second snap: the connected-line result I need.)
I am using Splunk Observability Cloud for Kubernetes monitoring and am trying to retrieve container CPU limits using the k8s.container.cpu_limit metric, but I'm not getting any data:

```
data('k8s.container.cpu_limit', rollup='average').sum(by=['k8s.container.name', 'k8s.pod.name', 'k8s.pod.uid', 'k8s.node.name', 'k8s.cluster.name'])
```

Thanks in advance!
Hello Experts,

I am trying to set up panels whose two different query outputs depend on a filter. I am using the change-on-condition option:

```xml
<input type="dropdown" token="spliterror_1" searchWhenChanged="true">
  <label>Splits</label>
  <choice value="*">All</choice>
  <choice value="false">Exclude</choice>
  <choice value="true">Splits Only</choice>
  <prefix>isSplit="</prefix>
  <suffix>"</suffix>
  <default>$spliterror_1$</default>
  <change>
    <condition label="All">
      <set token="ShowAll">*</set>
      <unset token="ShowTrue"></unset>
      <unset token="ShowFalse"></unset>
    </condition>
    <condition label="Exclude">
      <unset token="ShowAll"></unset>
      <set token="ShowFalse">false</set>
      <unset token="ShowTrue"></unset>
    </condition>
    <condition label="Splits Only">
      <unset token="ShowAll"></unset>
      <unset token="ShowFalse"></unset>
      <set token="ShowTrue">true</set>
    </condition>
  </change>
</input>
```

Setting/unsetting the tokens shows and hides the panels correctly, but in the backend all three queries still run simultaneously. Is there a way to make only the query for the selected condition run?

Nishant
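In Simple XML, a search whose query string references a token is not dispatched until that token is set, so one common pattern is to gate each panel's search on its own token rather than using `depends` only for visibility. A hedged sketch (the base query and panel content are placeholders, not from the post):

```xml
<panel depends="$ShowTrue$">
  <table>
    <search>
      <!-- $ShowTrue$ is unset for the other dropdown choices, so this
           search is not dispatched until "Splits Only" is selected -->
      <query>index=my_index isSplit="$ShowTrue$" | stats count by source</query>
    </search>
  </table>
</panel>
```

With one gating token per panel, only the panel whose token is set actually runs its search.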
# Version Information

Splunk Security Essentials version: 3.8.1
Splunk Security Essentials build: 1889
Splunk Enterprise version: 9.3.2
Current MITRE ATT&CK version: 16.1

# Issue Description

After an update to the MITRE ATT&CK framework, the Data Sources ID column breaks. It becomes vertically shifted by 4, leaving the first 4 rows without an ID, and the subsequent rows are off by 4. There are no additional IDs at the end of the lookup. The lookup is correctly formatted on a clean install, as demonstrated below (first and last 5 rows of the `mitre_data_sources.csv` lookup located at `$SPLUNK_HOME/etc/apps/Splunk_Security_Essentials/lookups/mitre_data_sources.csv`).

## Clean Install

- First 5

```
Id Name Data_Source Description Data_Component Data_Component_Description
DS0014 Pod Pod: Pod Creation A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Creation Initial construction of a new pod (ex: kubectl apply|run)
DS0014 Pod Pod: Pod Modification A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Modification Changes made to a pod, including its settings and/or control data (ex: kubectl set|patch|edit)
DS0014 Pod Pod: Pod Metadata A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Metadata Contextual data about a pod and activity around it such as name, ID, namespace, or status
DS0014 Pod Pod: Pod Enumeration A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Enumeration An extracted list of pods within a cluster (ex: kubectl get pods)
DS0032 Container Container: Container Creation A standard unit of virtualized software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another(Citation: Docker Docs Container) Container Creation Initial construction of a new container (ex: docker create <container_name>)
```

- Last 5

```
DS0018 Firewall Firewall: Firewall Metadata A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Metadata Contextual data about a firewall and activity around it such as name, policy, or status
DS0018 Firewall Firewall: Firewall Disable A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Disable Deactivation or stoppage of a cloud service (ex: Write/Delete entries within Azure Firewall Activity Logs)
DS0018 Firewall Firewall: Firewall Rule Modification A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Rule Modification Changes made to a firewall rule, typically to allow/block specific network traffic (ex: Windows EID 4950 or Write/Delete entries within Azure Firewall Rule Collection Activity Logs)
DS0018 Firewall Firewall: Firewall Enumeration A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Enumeration An extracted list of available firewalls and/or their associated settings/rules (ex: Azure Network Firewall CLI Show commands)
DS0011 Module Module: Module Load Executable files consisting of one or more shared classes and interfaces, such as portable executable (PE) format binaries/dynamic link libraries (DLL), executable and linkable format (ELF) binaries/shared libraries, and Mach-O format binaries/shared libraries(Citation: Microsoft LoadLibrary)(Citation: Microsoft Module Class) Module Load Attaching a module into the memory of a process/program, typically to access shared resources/features provided by the module (ex: Sysmon EID 7)
```

## After triggering a `Force Update` of Security Content

- First 5 (note the blank Id values on the first 4 rows)

```
Id Name Data_Source Description Data_Component Data_Component_Description
  Pod Pod: Pod Enumeration A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Enumeration An extracted list of pods within a cluster (ex: kubectl get pods)
  Pod Pod: Pod Metadata A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Metadata Contextual data about a pod and activity around it such as name, ID, namespace, or status
  Pod Pod: Pod Creation A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Creation Initial construction of a new pod (ex: kubectl apply|run)
  Pod Pod: Pod Modification A single unit of shared resources within a cluster, comprised of one or more containers(Citation: Kube Kubectl)(Citation: Kube Pod) Pod Modification Changes made to a pod, including its settings and/or control data (ex: kubectl set|patch|edit)
DS0014 Container Container: Container Metadata A standard unit of virtualized software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another(Citation: Docker Docs Container) Container Metadata Contextual data about a container and activity around it such as name, ID, image, or status
```

- Last 5

```
DS0009 Firewall Firewall: Firewall Rule Modification A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Rule Modification Changes made to a firewall rule, typically to allow/block specific network traffic (ex: Windows EID 4950 or Write/Delete entries within Azure Firewall Rule Collection Activity Logs)
DS0009 Firewall Firewall: Firewall Disable A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Disable Deactivation or stoppage of a cloud service (ex: Write/Delete entries within Azure Firewall Activity Logs)
DS0009 Firewall Firewall: Firewall Metadata A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Metadata Contextual data about a firewall and activity around it such as name, policy, or status
DS0009 Firewall Firewall: Firewall Enumeration A network security system, running locally on an endpoint or remotely as a service (ex: cloud environment), that monitors and controls incoming/outgoing network traffic based on predefined rules(Citation: AWS Sec Groups VPC) Firewall Enumeration An extracted list of available firewalls and/or their associated settings/rules (ex: Azure Network Firewall CLI Show commands)
DS0018 Module Module: Module Load Executable files consisting of one or more shared classes and interfaces, such as portable executable (PE) format binaries/dynamic link libraries (DLL), executable and linkable format (ELF) binaries/shared libraries, and Mach-O format binaries/shared libraries(Citation: Microsoft LoadLibrary)(Citation: Microsoft Module Class) Module Load Attaching a module into the memory of a process/program, typically to access shared resources/features provided by the module (ex: Sysmon EID 7)
```

---

This has occurred consistently on both existing and fresh Splunk installations. I suspect it's due to an update to MITRE, and the JSON parsers haven't been updated to handle the changes accordingly; this is purely conjecture. I have been reading the Python scripts located at `~/etc/apps/Splunk_Security_Essentials/bin`, but have come across nothing conclusive so far.

---

Please let me know if this is an issue that anyone else has been facing, and whether it also affects any of the other MITRE lookups that I haven't yet noticed. If this affected more important lookups such as Detections or Threat Groups, it would considerably affect the app's functionality. If anybody has any suggestions, or requires any more information, please let me know.

Thanks - Stanley
Hello, I have defined a frozenTimePeriodInSecs of 1 hour on my indexer for a certain index, so that the logs it contains are only kept for 1 hour. The frozenTimePeriodInSecs setting was made in indexes.conf in the system/local directory. The problem, however, is that frozenTimePeriodInSecs only takes effect when the indexer is restarted; otherwise, the logs remain in this index beyond the defined retention period. Has anyone already had the same problem and can help me with this? Thanks in advance.
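For context, retention is enforced per bucket: a bucket is only frozen once its newest event is older than frozenTimePeriodInSecs, and hot buckets are not eligible until they roll to warm, which can make a very short retention period appear to apply only on restart (a restart rolls hot buckets). A hedged indexes.conf sketch (the index name and roll interval are placeholders, not from the post):

```
[my_short_lived_index]
frozenTimePeriodInSecs = 3600
# hot buckets are not frozen; force them to roll to warm quickly so the
# 1-hour retention can take effect without a restart (assumes rolling
# every ~15 minutes is acceptable for this data volume)
maxHotSpanSecs = 900
```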
I have 2 indexers in a cluster. One is down and one is up. All buckets are present on the indexer that is up, but still not all indexes are searchable. Why is this, and what can I do?
We are trying to configure event monitoring for Security Event ID 4624 (successful login) and Event ID 4625 (unsuccessful login) for an account. We have created the app with the below stanza in the inputs.conf file:

```
[WinEventLog://Security]
index = wineventlog
sourcetype = Security:AD_Sec_entmon
disabled = 0
start_from = oldest
current_only = 1
evt_resolve_ad_obj = 1
checkpointInterval = 300
whitelist = EventCode="4624|4625"
#renderXml=false
```

However, there is no data even though the app has been successfully deployed. Please assist me with this issue.
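One thing worth checking is the whitelist syntax: for Windows event log inputs, inputs.conf expects either a plain comma-separated list of event codes or the advanced key=regex format with %-delimited regexes, not a quoted string. A hedged sketch of the same stanza (index and sourcetype kept from the post):

```
[WinEventLog://Security]
index = wineventlog
sourcetype = Security:AD_Sec_entmon
disabled = 0
start_from = oldest
current_only = 1
evt_resolve_ad_obj = 1
checkpointInterval = 300
# simple form: comma-separated event codes
whitelist = 4624,4625
# advanced form (alternative; do not combine both styles in one stanza):
# whitelist = EventCode=%^(4624|4625)$%
```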
Hi SMEs, I'd like to convert the following date format into epoch: yyyymmdd, e.g. 20220508. Any assistance would be appreciated!
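One common approach in SPL is strptime, which parses a string into epoch seconds; a minimal sketch using makeresults (the field name `date` is a placeholder):

```
| makeresults
| eval date="20220508"
| eval epoch=strptime(date, "%Y%m%d")
```

For 20220508 this yields the epoch time of midnight on 2022-05-08 in the search head's timezone.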
Hi folks, I'm preparing to present a large-scale Splunk design to stakeholders and want to make it interactive. To achieve this, I'm considering using Mermaid, which lets us start with code, iterate incrementally, and improve easily over time. Below is my initial draft of the design.

I'd appreciate your input on two points:
1. Are there any obvious mistakes in this draft? (It's been a while since I last worked on Splunk design, especially after transitioning to Splunk Cloud.)
2. Are you aware of any pre-existing Mermaid or Draw.io diagrams for large-scale Splunk clusters that we could adapt or reuse?

Thanks in advance for your feedback!
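As a starting-point sketch (not a reference architecture; the component names are generic placeholders), a typical distributed deployment in Mermaid might look like:

```mermaid
graph TD
  UF[Universal Forwarders] --> IDX1[Indexer 1]
  UF --> IDX2[Indexer 2]
  HF[Heavy Forwarder / Syslog] --> IDX1
  HF --> IDX2
  CM[Cluster Manager] -. manages .-> IDX1
  CM -. manages .-> IDX2
  SH1[Search Head Cluster] --> IDX1
  SH1 --> IDX2
  SHCD[SHC Deployer] -. pushes apps .-> SH1
  DS[Deployment Server] -. pushes apps .-> UF
  MC[Monitoring Console] --> IDX1
```

Mermaid renders this directly in many wikis and in tools like GitHub/GitLab, which makes iterating with stakeholders easy.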
I have an index with a list of transactions. Transactions in the system start as one process with a transaction number (TRANNO), and that transaction can start a number of sub-tasks; each sub-task has its own transaction number (TRANNO) but also carries its parent's transaction number (PHTRANNO). All the tasks (parents and children) have an amount of CPU consumed (USRCPUT_MICROSEC). All tasks have an id field (USER) which tells us what type of task it was. I want to create a report listing all the types of tasks (USER) with the average and max CPU consumed (USRCPUT_MICROSEC). I've managed to create the report with the sum of the parents' CPU, but most of the CPU is consumed by the children. Any suggestions on how to do this? I've been searching and trying things for hours and I'm not getting anywhere.
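One possible approach (a sketch under the assumption that PHTRANNO is null/absent on parent tasks; the index name is a placeholder): roll each parent and its children up into one CPU total keyed on the parent's transaction number, keep the parent's USER as the task type, then aggregate by type:

```
index=my_transactions
| eval root_tran=coalesce(PHTRANNO, TRANNO)
| eval parent_user=if(isnull(PHTRANNO), USER, null())
| stats sum(USRCPUT_MICROSEC) as total_cpu, values(parent_user) as USER by root_tran
| stats avg(total_cpu) as avg_cpu, max(total_cpu) as max_cpu by USER
```

If TRANNO values can be reused over time, the first stats may need _time bucketing added to the by clause to keep unrelated transactions apart.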
So, I have a timechart with multiple streams; call them X, Y, and Z. I run the panel over a 4h timeframe. I want to click a peak or valley on one of the lines, take the name of that line (got this part done) and the exact time that was clicked (I think this is click.value), and pass them to another panel in the same dashboard. click.value should be an epoch time, i.e. a number, so I should be able to add or subtract, say, 300 from it and use the results as the earliest and latest values for a search. Effectively I want (click.value - 300) for earliest and (click.value + 300) for latest on another panel, making it a 10-minute window with the clicked point as the mid-point.

I have tried in-line:

```xml
<set token="Drill_time_1">$click.value$ - 300</set>
<set token="Drill_time_2">$click.value$ + 300</set>
```

I have tried in-search:

```
earliest=$Drill_time_1$-300 latest=$Drill_time_2$+300
```

...and various combinations thereof. All to no avail. Anyone have an idea?
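In Simple XML, `<set>` does plain string substitution and performs no arithmetic, while `<eval>` inside a drilldown evaluates an expression. A hedged sketch of the drilldown block:

```xml
<drilldown>
  <!-- <eval> computes the expression; <set> would paste the literal text "… - 300" -->
  <eval token="Drill_time_1">$click.value$ - 300</eval>
  <eval token="Drill_time_2">$click.value$ + 300</eval>
</drilldown>
```

The target panel's search can then use `earliest=$Drill_time_1$ latest=$Drill_time_2$` directly, with no further arithmetic in the search string.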
I am trying to instrument a Java Spring Boot application for OpenTelemetry. I am following the instructions from here: Instrument your Java application for Splunk Observability Cloud — Splunk Observability Cloud documentation. But when I start the application, I get this error:

```
java -javaagent:./splunk-otel-javaagent.jar -jar my-app/target/my-app-0.0.1-SNAPSHOT.jar
Unexpected error (103) returned by AddToSystemClassLoaderSearch
Unable to add ./splunk-otel-javaagent.jar to system class path - the system class loader does not define the appendToClassPathForInstrumentation method or the method failed
FATAL ERROR in native method: processing of -javaagent failed, appending to system class path failed
```

From my `mvn -v`:

```
Java version: 21.0.2, vendor: Oracle Corporation, runtime: C:\Users\****\apps\openjdk21\current
Default locale: en_US, platform encoding: UTF-8
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
```

How do I correctly start the application with the javaagent?
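For what it's worth, error 103 from AddToSystemClassLoaderSearch commonly indicates the JVM rejected the agent JAR, for example a relative path resolved against an unexpected working directory or a truncated/corrupt download. A hedged sketch for Windows using an absolute path (paths are placeholders):

```
java -javaagent:C:\full\path\to\splunk-otel-javaagent.jar -jar my-app\target\my-app-0.0.1-SNAPSHOT.jar
```

It can also be worth re-downloading the agent JAR and checking its file size, since a corrupt JAR produces the same failure.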
Hi, I have the following data. I am looking to get one line per span so I can work with it more easily. If I use mvexpand I hit memory limits, as I need to do it on all the fields. Is there another way? Or perhaps I just need to increase the mvexpand memory limits!

```
host="PMC_Sample_Data" index="murex_logs" sourcetype="Market_Risk_DT"
| spath "resourceSpans{}.scopeSpans{}.spans{}.spanId"
| rename "resourceSpans{}.scopeSpans{}.spans{}.spanId" as spanId
| spath "resourceSpans{}.scopeSpans{}.spans{}.parentSpanId"
| rename "resourceSpans{}.scopeSpans{}.spans{}.parentSpanId" as parentSpanId
| spath "resourceSpans{}.scopeSpans{}.spans{}.startTimeUnixNano"
| rename "resourceSpans{}.scopeSpans{}.spans{}.startTimeUnixNano" as start
| spath "resourceSpans{}.scopeSpans{}.spans{}.endTimeUnixNano"
| rename "resourceSpans{}.scopeSpans{}.spans{}.endTimeUnixNano" as end
| spath "resourceSpans{}.scopeSpans{}.spans{}.traceId"
| rename "resourceSpans{}.scopeSpans{}.spans{}.traceId" as traceId
| table traceId spanId parentSpanId start end
```

Thanks in advance
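One way to reduce the memory pressure (a sketch, assuming the events are OTLP-style JSON where each span object carries its own traceId and timestamps): extract each span object as a single multivalue field, mvexpand that one field, then spath each small span object individually:

```
host="PMC_Sample_Data" index="murex_logs" sourcetype="Market_Risk_DT"
| spath path="resourceSpans{}.scopeSpans{}.spans{}" output=span
| fields span
| mvexpand span
| spath input=span
| table traceId spanId parentSpanId startTimeUnixNano endTimeUnixNano
```

The `| fields span` before mvexpand drops _raw and all other fields, which is usually what keeps a single mvexpand within the memory limit.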
When I run this search query in the Splunk Search & Reporting app, my output looks like this:

Search query:

```
index="dcn_b2b_use_case_analytics" sourcetype=lime_process_monitoring
```

Output:

```
Time: 3/19/25 2:32:15.000 PM
Event {
    BCD_AB_UY_01: 1
    BCD_AB_UY_02: 0
    BCD_BC_01: 1
    BCD_BC_02: 0
    BCD_CD_01: 1
    BCD_CD_02: 1
    BCD_CD_03: 0
    BCD_KPI_01: 1
    BCD_KPI_02: 1
    BCD_KPI_03: 0
    BCD_MY_01: 1
    BCD_MY_02: 1
    BCD_RMO_PZ_01: 1
    BCD_RMO_PZ_02: 1
    BCD_RMO_PZ_03: 0
    BCD_RMO_PZ_04: 0
    BCD_RSTA_01: 1
    BCD_RSTA_02: 1
    BCD_RSTA_03: 0
    BCD_SHY_01: 1
    BCD_SHY_02: 1
    BCD_UK_01: 1
    BCD_UK_02: 1
    BCD_UK_03: 1
    BCD_UK_04: 1
    BCD_UK_05: 1
    BCD_UK_06: 1
    BCD_UK_07: 1
    BCD_UK_08: 0
    BCD_UK_09: 0
    BCD_UK_10: 0
    BCD_UK_11: 0
    BCD_UK_12: 0
}
host = RSQWERTYASD04   index = dcn_b2b_use_case_analytics   source = DCNPassFolder   sourcetype = lime_process_monitoring
```

Please note: if a process value is 1, the process ran successfully; if it is 0, the process failed.

Now my question: I want to trigger an alert for the processes listed below, so that when any of these background processes fail I get an incident in my queue in SNOW:

```
BCD_AB_UY_01: 0
BCD_BC_01: 0
BCD_CD_01: 0
BCD_CD_02: 0
BCD_KPI_01: 0
BCD_KPI_02: 0
BCD_MY_01: 0
BCD_MY_02: 0
BCD_RMO_PZ_01: 0
BCD_RMO_PZ_02: 0
BCD_RSTA_01: 0
BCD_RSTA_02: 0
BCD_SHY_01: 0
BCD_SHY_02: 0
BCD_UK_01: 0
BCD_UK_02: 0
BCD_UK_03: 0
BCD_UK_04: 0
BCD_UK_05: 0
BCD_UK_06: 0
BCD_UK_07: 0
```

This is the alert search query I designed, but when I run this alert I get multiple tickets. Instead, I want a single ticket in which the service name (process name) and server name (hostname) are clearly mentioned, to uniquely identify which server the process is from. Please help me write and configure the Splunk alert properly.

Search query:

```
index="dcn_b2b_use_case_analytics" sourcetype=lime_process_monitoring
| where BGS_AR_UY_01=0 OR BGS_BR_01=0 OR BGS_BS_01=0 OR BGS_BS_02=0 OR BGS_KAU_01=0 OR BGS_KAU_02=0 OR BGS_MX_01=0 OR BGS_MX_02=0 OR BGS_RMH_PZ_01=0 OR BGS_RMH_PZ_02=0 OR BGS_RSTO_01=0 OR BGS_RSTO_02=0 OR BGS_SHA_01=0 OR BGS_SHA_02=0 OR BGS_US_01=0 OR BGS_US_02=0 OR BGS_US_03=0 OR BGS_US_04=0 OR BGS_US_05=0 OR BGS_US_06=0 OR BGS_US_07=0
| eval metricLabel="URGENT !! Labware - < ServiceName > has been stopped in Server"
| eval metricValue="Hello Application Support team, The below service has been stopped in the server, Service name : < ServiceName > Timestamp : < Timestamp > Server : <ServerName> Please take the required action to resume the service. Thank you. Regards, Background Service Check Automation Bot"
| eval querypattern="default"
| eval assignmentgroup="PTO ABC Lab - Operatives"
| eval business_service="LIME Business Service"
| eval serviceoffering="LIME"
| eval Interface="CLMTS"
| eval urgency=2
| eval impact=1
```

Cron expression: `* * * * *`
Trigger: For each result
Trigger actions: PTIX SNOWALERT
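One possible restructuring (a sketch; the index, sourcetype, and ticket evals are taken from the post, while the untable step is an assumption about the wide event shape): convert the event into one row per process, filter to failures, and let the "For each result" trigger raise one ticket per service/server pair with real field values instead of the literal `< ServiceName >` placeholders:

```
index="dcn_b2b_use_case_analytics" sourcetype=lime_process_monitoring
| head 1
| table host BGS_*
| untable host ServiceName status
| where status=0
| eval ServerName=host
| eval metricLabel="URGENT !! Labware - ".ServiceName." has been stopped in Server ".ServerName
| eval urgency=2, impact=1
```

`head 1` keeps only the most recent status event per run; if multiple servers report, a `stats latest(status) by host ServiceName` variant would be needed instead.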
count retail sales events for strategy games

I can't find the categoryId field by default in the Search Tutorial data. It is added by a lookup file, but I don't know where I can download it. Can anyone help with this? Thanks
Hello there. After updating from 9.3.1 to 9.4.1 my KV store stopped working. During a quick investigation I found the following errors:

```
2025-03-19T14:35:15.556Z I NETWORK [listener] connection accepted from 127.0.0.1:41888 #1188 (1 connection now open)
2025-03-19T14:35:15.566Z E NETWORK [conn1188] SSL peer certificate validation failed: unable to get issuer certificate
2025-03-19T14:35:15.566Z I NETWORK [conn1188] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unable to get issuer certificate. Ending connection from 127.0.0.1:41888 (connection id: 1188)
```

`openssl s_client -connect 127.0.0.1:8191 -showcerts` gives me valid certificate info. Any idea why I receive these errors? Thx
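"unable to get issuer certificate" suggests mongod cannot build the full chain from the server certificate to a trusted CA. Two things worth checking (a hedged sketch; the file paths are common Splunk defaults and may differ in your deployment) are whether the certificate verifies against the configured CA bundle, and whether the server PEM actually contains the intermediate certificates:

```
# verify the server cert against the CA bundle the KV store is configured with
openssl verify -CAfile $SPLUNK_HOME/etc/auth/cacert.pem $SPLUNK_HOME/etc/auth/server.pem

# count certificates in the server PEM; intermediates should be appended after the leaf
grep -c "BEGIN CERTIFICATE" $SPLUNK_HOME/etc/auth/server.pem
```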
I have a Classic Dashboard with an HTML panel. I am trying to link to another dashboard with tokens that the user can select via multiselects. However, it isn't working. This is my HTML panel:

```xml
<row>
  <panel>
    <html><div>
      <a target="_blank" href=/app/app/operating_system?form.case_token=$case_token$&amp;form.host_name=$host_name$">Operating System Artifacts</a>
    </div></html>
  </panel>
</row>
```
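One likely culprit is that the href value is missing its opening quote (`href=/app/...` instead of `href="/app/..."`), which makes the markup invalid. A corrected sketch (the target path and token names are taken from the post):

```xml
<row>
  <panel>
    <html>
      <div>
        <a target="_blank" href="/app/app/operating_system?form.case_token=$case_token$&amp;form.host_name=$host_name$">Operating System Artifacts</a>
      </div>
    </html>
  </panel>
</row>
```

Note that multiselect tokens can contain several values; depending on the target dashboard they may also need URL encoding before being placed in a link.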
Hi, I'm installing Splunk UBA and I get the following error:

```
waiting on impala containerized service to come up
Running CaspidaCleanup, resetting rules
Cleaning up node domain.com
checking if zookeeper is reachable at: domain.com:2181
zookeeper reachable at: domain.com:2181
checking if postgres is reachable at: domain.com:5432
postgres server reachable at: domain.com:5432
checking if impala is reachable at: jdbc:impala://domain.com:21050/;auth=noSasl
impala jdbc server at:jdbc:impala://domain.com:21050/;auth=noSasl not reachable, aborting
required services not up, aborting cleanup
CaspidaCleanup failed, exiting
```

There are no logs from Impala:

```
[caspida@ubasplunk ~]$ ls /var/log/impala/
[caspida@ubasplunk ~]$
```

The Docker container is running, the ports are mapped, and port 21050 is open:

```
[caspida@ubasplunk ~]$ sudo docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED         STATUS         PORTS   NAMES
7d198d890b13   domain.com:5000/impala:latest   "/bin/bash -c './imp…"   4 minutes ago   Up 4 minutes   0.0.0.0:21000->21000/tcp, :::21000->21000/tcp, 0.0.0.0:21050->21050/tcp, :::21050->21050/tcp, 0.0.0.0:24000->24000/tcp, :::24000->24000/tcp, 0.0.0.0:25000->25000/tcp, :::25000->25000/tcp, 0.0.0.0:25010->25010/tcp, :::25010->25010/tcp, 0.0.0.0:25020->25020/tcp, :::25020->25020/tcp, 0.0.0.0:26000->26000/tcp, :::26000->26000/tcp   impala
15d899b79ed2   07655ddf2eeb                    "/dashboard --insecu…"   6 minutes ago   Up 6 minutes
```

Can you help me resolve the issue?
Splunk Enterprise ships with a copy of PostgreSQL. The latest Splunk installer, v9.4.1, however, still ships with PostgreSQL 16.0, which has several security vulnerabilities. Is there a documented way to upgrade the version to 16.7? Information on the PostgreSQL CVEs: https://www.postgresql.org/about/news/postgresql-173-167-1511-1416-and-1319-released-3015/
Hello, I am running into an issue with multiple dropdowns. What I am trying to achieve is dynamic index selection via a hidden Splunk dropdown filter that gets auto-populated with the first result value from the data-source search on the hidden dropdown. The hidden filter's data-source query for populating the dropdown uses the token from the first dropdown.

What seems to be working:
- The hidden dropdown successfully lists the correct index based on the selection from the first dropdown.

What isn't working:
- The result from the hidden index data-source search isn't selected, despite it being the only result returned and "default selected values" being set to first value.

Any thoughts or recommendations for how to handle this problem?