All Topics

Hello, we are ingesting CSV files from an S3 bucket using the custom SQS-based S3 input. Although the data is pulled in correctly, the fields are not being extracted properly: the header line is ingested as a separate event and the header fields are not extracted. I have defined INDEXED_EXTRACTIONS = csv in props.conf. Is there any other way to extract a CSV file from the S3 bucket? Any workaround?
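For reference, a minimal props.conf sketch of the kind of stanza being described; the sourcetype name is a placeholder, and INDEXED_EXTRACTIONS only takes effect where the data is actually parsed:

# props.conf (placeholder sourcetype name)
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1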
Hello, I have a restricted rsyslog client. I can only specify a hostname or IP and a port as the target to send syslog to. Where can I find the hostname or IP for my Splunk Cloud deployment to receive the corresponding syslog? Thank you
Timezone issue: different data is visible to users in different locations when I select the previous month. Condition: | where abc>="-1mon@mon" and abc<"@mon". It uses the system time rather than a common time, so users are seeing inconsistent results. Is there a query to convert to a common UTC value?
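As a sketch of one alternative, assuming abc holds an epoch timestamp: compute the month boundaries with relative_time() and compare numerically rather than against the literal strings. Note that the @mon snapping still follows the timezone configured for the searching user, so for a truly common boundary every user would need the same timezone preference (for example UTC):

| eval month_start=relative_time(now(), "-1mon@mon"), month_end=relative_time(now(), "@mon")
| where abc>=month_start AND abc<month_end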
index=xxxx sourcetype="Script:InstalledApps" DisplayName="Carbon Black Cloud Sensor 64-bit" I am trying to get the list/name of host that doesnt have Carbon Black installed. Can someone help me with... See more...
index=xxxx sourcetype="Script:InstalledApps" DisplayName="Carbon Black Cloud Sensor 64-bit" I am trying to get the list/name of host that doesnt have Carbon Black installed. Can someone help me with a simple query for this.  If I do DisplayName!= and then table the host, it's not giving me the correct result.
Hi all, I am using the Splunk Add-on for GCP to pull logs from a log sink via Pub/Sub. I configured a Pub/Sub input inside the add-on and it is successfully pulling the logs from Pub/Sub. But I want to confirm: after receiving messages from Pub/Sub, does the GCP add-on send back an ACK (acknowledgement) to Pub/Sub so that the same message is not sent twice or duplicated? There is nothing mentioned about ACK messages in the GCP add-on documentation, so I am asking here. Please help me out.
Hello, I tried to set up a database input with the query below:

SELECT ..., txn_stamp as TXTIME, .... FROM mybd WHERE txn_stamp > ? ORDER BY TXTIME ASC

When I hit Execute Query, the result produces the error: ORA-01861: literal does not match format string. My txn_stamp is a timestamp column with the format YYYY-mm-dd HH:MM:SS (e.g. 2023-08-31 00:00:25). The curious thing is that it sometimes works and Execute Query shows data, but it stops at some point, and I suspect it is because of the above error. My thinking is to format either my DB timestamp or the rising-column timestamp so that they match and there is no mismatch, but I don't know how.
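Not tested against this schema, but assuming an Oracle database (given the ORA error) and keeping the table and column names from the posted query, one hedged approach is to make the bind value's format explicit so the rising-column checkpoint and the timestamp column cannot be interpreted with mismatched formats:

SELECT ..., txn_stamp AS TXTIME, ...
FROM mybd
WHERE txn_stamp > TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS')  -- parse the rising-column value explicitly
ORDER BY TXTIME ASC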
This article discusses some of the most common issues faced when using the Linux-based Private Synthetic Agent (PSA).

In this article:

What are the prerequisites for debugging Linux private Synthetic Agent issues?
How do I capture PSA logs to further troubleshoot issues?
What errors arise from unsupported Kubernetes versions?
How do I install PSA on a machine without an internet connection?
How do I resolve a recurring 'Test Agent Failed to Post Result' error?
How do I resolve the 'DNS resolution failed (ERROR)'?
How do I resolve the error thrown when cluster-level permissions are missing?
How do I resolve a Heimdall log error?
How do I resolve a Heimdall error on Docker-based PSA?

What are the prerequisites for debugging Linux private Synthetic Agent issues?

Make sure the deployment is done on an officially supported PSA platform and meets the prerequisites and hardware requirements: see Install the Private Synthetic Agent (Web and API Monitoring) in the documentation, under End User Monitoring > Synthetic Monitoring.
Currently, the only kernel architecture supported for installing PSA (Web Monitoring and API Monitoring) is x86-64, also referred to as x64, AMD64, and Intel 64.

Back to TOC

How do I capture PSA logs to further troubleshoot issues?

To properly capture PSA logs, capture the pod details in separate files as instructed in the notes:

kubectl get pods --namespace <namespace> > {YOUR_PREFERRED_PATH}/pods-status.txt
kubectl get pods -o wide --all-namespaces > {YOUR_PREFERRED_PATH}/pods-status_wide.txt
kubectl describe pod -n <namespace> <pod-name> > {YOUR_PREFERRED_PATH}/describe-pod-<pod-name>.txt
kubectl logs <pod-name> --namespace <namespace> > {YOUR_PREFERRED_PATH}/logs-pod-<pod-name>.txt

Notes:
Replace <pod-name> and <namespace> with your existing values.
By default, <namespace> may have the value measurement.
To get all the <pod-name> values, the first command will list them for you.
Make sure to capture the output of commands 3 and 4 for every <pod-name> listed by command 1, in separate files, to avoid overwriting the same file.

Back to TOC

What errors arise from unsupported Kubernetes versions?

Below are some of the errors reported when an unsupported K8s version is used: Kubectl version | Insufficient resources for K8s | CrashLoopBackOff error | Low resource allocation to Chrome API/Agent

Kubectl version

You can check the installed kubectl version using "kubectl version":

INFO 1 --- [or-http-epoll-1] c.a.s.heimdall.client.ReactiveWebClient : [34927359] Response: Status: 500 Cache-Control:no-store Pragma:no-cache Content-Type:application/json X-Content-Type-Options:nosniff X-Frame-Options:DENY X-XSS-Protection:1 ; mode=block Referrer-Policy:no-referrer content-length:226
ERROR 1 --- [or-http-epoll-1] c.a.s.h.service.MeasurementService: Failed to submit measurement with id : 8b71c4f4-7541-41f8-9f6a-8e762502d117~02b75cbc-5aaf-43f6-9d1d-30e20a634977
[SEVERE][main][TcpDiscoverySpi] Failed to get registered addresses from IP finder (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries) [maxTimeout=0] class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
The error below (and in the attached txt file) is also seen:

Warning Unhealthy 23m (x4 over 25m) kubelet Readiness probe failed: Get "http://10.244.0.3:8080/ignite?cmd=probe": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Normal Killing 23m (x2 over 25m) kubelet Container ignite failed liveness probe, will be restarted
Warning Unhealthy 23m (x2 over 25m) kubelet Readiness probe failed: Get "http://10.244.0.3:8080/ignite?cmd=probe": EOF
Warning Unhealthy 23m (x3 over 25m) kubelet Readiness probe failed: Get "http://10.244.0.3:8080/ignite?cmd=probe": dial tcp 10.244.0.3:8080: connect: connection refused
Normal Pulled 23m (x2 over 25m) kubelet Container image "apacheignite/ignite:2.14.0-jdk11" already present on machine
Warning Unhealthy 5m43s (x25 over 25m) kubelet Liveness probe failed: Get "http://10.244.0.3:8080/ignite?cmd=version": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning BackOff 92s (x52 over 16m) kubelet Back-off restarting failed container ignite in pod synth-ignite-psa-0_ignite(1cba5f54-7723-4be4-a7ba-ce48fc6eacaf)

Back to Errors from Unsupported K8s versions | Back to TOC

Insufficient resources provided to Kubernetes

This occurs when not enough resources (CPU and memory, defined in values.yaml) are provided to the K8s environment (for example, when starting minikube).

Events:
Type      Reason            Age                   From               Message
Warning   FailedScheduling  5m (x863 over 3d3h)   default-scheduler  0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod

To quickly check the current resources that minikube is running with, use the following:

cat ~/.minikube/config.json | grep "Memory\|CPUs"

NOTE | If there is no output, make sure to use the config.json under the profile with which minikube was started.

Back to Errors from Unsupported K8s versions | Back to TOC

CrashLoopBackOff image pulling error

When you see a CrashLoopBackOff or Back-off pulling image error for a minikube-based PSA, update heimdall > pullPolicy in values.yaml to Never and re-deploy the PSA. This fixes the error.
For other platforms, please refer to our documentation for platform-specific instructions: Deploy the Web Monitoring PSA and API Monitoring PSA.

Events:
Type     Reason   Age                       From     Message
Normal   BackOff  3m7s (x18915 over 3d3h)   kubelet  Back-off pulling image "sum-heimdall:<heimdall-tag>"

Back to Errors from Unsupported K8s versions | Back to TOC

Low resource allocation to the Chrome/API Agent

Slower execution of jobs, or a high session duration to complete jobs, is mainly caused by low resources allocated to the PSA, specifically to the Chrome/API agent.
Try increasing the resources (CPU and memory) for the Chrome/API agent in values.yaml and re-deploy the PSA:

chromeAgentResources:
  min_cpu: "1"
  max_cpu: "2"
  min_mem: 1024Mi
  max_mem: 8192Mi

Back to Errors from Unsupported K8s versions | Back to TOC

How do I install PSA on a machine without an internet connection?

Use the attached document "Install PSA with minikube on an offline machine.pdf."
You can use any machine with an active internet connection as your temporary machine. Build the PSA components on that temporary machine and then export them to your target server machine, which does not have an active internet connection.
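The attached PDF describes the exact procedure. As a rough, untested illustration of the general idea only (image names and tags below are placeholders, not the actual PSA image names), container images can be exported on the connected machine and imported on the offline one with docker save and docker load:

# On the temporary machine with internet access: export the built/pulled images to tar archives
docker save -o sum-heimdall.tar <registry>/sum-heimdall:<tag>
docker save -o sum-chrome-agent.tar <registry>/sum-chrome-agent:<tag>

# Copy the tar files to the offline target machine (USB drive, internal file share, etc.), then load them
docker load -i sum-heimdall.tar
docker load -i sum-chrome-agent.tar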
PLEASE NOTE | The steps in the provided PDF have not been tested in-house by Cisco AppDynamics Support.
NOTE | Linux PSA version >= v22.9 does not need a Postgres DB. Please refer to EUM > Synthetics > Install the Private Synthetic Agent (Web and API Monitoring) in our documentation.

Back to TOC

How do I resolve a recurring 'Test Agent Failed to Post Result' error?

If you are periodically or intermittently facing a 'Test Agent Failed to Post Result' error, redeploy the PSA after updating values.yaml for the Heimdall and Chrome agent resources (recommended):

heimdallResources:
  min_cpu: "3"
  max_cpu: "3"
  min_mem: 5Gi
  max_mem: 5Gi
chromeAgentResources:
  min_cpu: "1"
  max_cpu: "2"
  min_mem: 2048Mi
  max_mem: 3072Mi

Back to TOC

How do I resolve the 'DNS resolution failed (ERROR)'?

If a job is failing with the error below:

DNS resolution failed [ERROR] WebDriverException: unknown error: net::ERR_NAME_NOT_RESOLVED

Then:
Log in to the Heimdall pod with the command below and see if you can ping the <url>:

kubectl exec -it <heimdall-pod-name> -n <namespace> -- /bin/bash

After logging in to the Heimdall pod, run the command below to check whether the pod is able to connect:

curl <url>

NOTE | The curl command is available only on the Heimdall pod.

Log in to the Chrome agent pod using the command below to check or debug anything related to that pod:

kubectl exec -it <chrome-pod-name> -n <namespace> -- /bin/sh

NOTE | In order to use any tool available for Alpine (the Chrome agent pod), make sure to either remove the USER block or add the particular install command in the Chrome Agent DOCKERFILE, rebuild the image, and redeploy the PSA. If you remove the USER block in the Chrome Agent DOCKERFILE, the pod will be created with root permissions, and you can install any tool after logging in to the Chrome Agent pod.

Back to TOC

How do I resolve the error thrown when cluster-level permissions are missing?

The error below is thrown when cluster-level permissions are missing, since PSA needs cluster-level permissions to function properly:

io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list] for kind: [Pod] with name: [null] in namespace: [measurement] failed. ... Caused by: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_362]

The PSA service accounts and roles are configured for cluster-level permissions in order to perform certain operations at the Helm level. Having cluster-level permissions implies that the Agent requires access to different namespaces in the cluster.
Refer to Create the Kubernetes Cluster in the documentation. The page includes creating a cluster in the instructions, so the assumption is that you have access to create a cluster. With only namespace-level permissions, an individual will not be able to create a cluster.

Apply the steps below to fix the issue:

TIP | If you want to permit only namespace-level permissions instead of cluster-level permissions, we suggest you use the role.yaml file attached below.

Unpack the Helm chart:

cd <Unzipped-PSA-directory>
tar xf sum-psa-heimdall.tgz

Replace sum-psa-heimdall/templates/role.yaml with the attached role.yaml.
Repack using the following:

helm package sum-psa-heimdall

Finally, redeploy the PSA using the newly packed sum-psa-heimdall.tgz.

Back to TOC
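For orientation only, here is a generic sketch of the kind of namespace-scoped Role and RoleBinding that a file like the attached role.yaml provides. The names, namespace, and resource list below are illustrative assumptions, not the exact contents of the attached file, which remains the authoritative version:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heimdall-role          # illustrative name, not necessarily what the chart uses
  namespace: measurement       # default PSA namespace; adjust to yours
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]   # illustrative resource list
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heimdall-rolebinding
  namespace: measurement
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: heimdall-role
subjects:
  - kind: ServiceAccount
    name: heimdall             # illustrative service account name
    namespace: measurement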
How do I resolve a Heimdall log error?

If you see the error below in your Heimdall logs, try increasing the RAM on the PSA host machine, or decrease the memory assigned to minikube and in values.yaml:

2023-05-30 20:43:38.768 WARN 1 --- [ main] org.apache.ignite.internal.IgniteKernal: Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=2262MB, available=5120MB]
[20:43:38] Nodes started on local machine require more than 80% of physical RAM that can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=2262MB,

Back to TOC

How do I resolve a Heimdall error on Docker-based PSA?

For a Docker-based PSA, make sure the "docker ps" command outputs both the Heimdall and Ignite containers.
To capture Heimdall logs, use the below:

// Capture the Heimdall container logs to a heimdall-<CONTAINER-ID>.txt file; to get <HEIMDALL_CONTAINER-ID>, run "docker ps"
docker logs -n <last-n-lines> <HEIMDALL_CONTAINER-ID> > heimdall-<CONTAINER-ID>.txt

Back to TOC
Good afternoon everyone. I'm trying to change the sender when I configure a new SMTP asset; more precisely, I want to change the sender domain when I configure the asset, but I have not been able to do it. The only domains I can use are splunkcloud.com and splunk.com. Does anyone know how I can use another domain without using a username and password to authenticate?
Hello everyone, first off, thanks in advance to everyone who takes the time to contribute to this post! I've got custom HTML code in Simple XML and was able to grab data from a text area and parse it into a JavaScript variable named captured using the code below. I'm trying to use that captured variable in the search query in the SearchManager. So far I've only been able to set static values such as eval test = "Working", but I've had no luck passing in a JavaScript variable.

require([
    "underscore",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function(_, SearchManager) {
    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: '| makeresults | eval test = captured | collect index = "test_index"'
    });

    $("#btn-submit").on("click", function () {
        // Capture value of the Text Area
        var captured = $("textarea#outcome").val();
        mysearch.startSearch();
    });
});
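One pattern that may help here, sketched under the assumption that the dashboard can use the default token model from splunkjs/mvc (element IDs follow the snippet above; this has not been tested against this particular dashboard): push the textarea value into a token and let the SearchManager resolve it with tokens enabled.

require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function($, mvc, SearchManager) {
    // Default token model shared by the dashboard
    var tokens = mvc.Components.getInstance("default");

    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: false,
        // $captured$ is substituted from the token model when the search runs
        search: '| makeresults | eval test="$captured$" | collect index="test_index"'
    }, { tokens: true });

    $("#btn-submit").on("click", function () {
        // Push the textarea value into the token, then kick off the search
        tokens.set("captured", $("textarea#outcome").val());
        mysearch.startSearch();
    });
});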
Hello, how can I perform a lookup when the IPv6 format in the CSV file is inconsistent with the format in the index? For example, the index has the collapsed form of an IPv6 address: 2001:db8:3333:4444:5555:6666::2101, while the CSV has the expanded form: 2001:db8:3333:4444:5555:6666:0:2101. The following lookup can NOT find an IPv6 address written in a different notation; it only finds exact matches:

index=vulnerability_index | lookup company.csv ip_address as ip OUTPUTNEW ip_address, company, location

In IPv6, "::" (double colon) represents consecutive groups of zeroes (:0:, :0:0:, or :0:0:0:), and ":0:" represents 0000. I think this is what I am looking for, but I am not sure how to implement it: https://splunkbase.splunk.com/app/4912
Thank you for your help
Hello, we are working on setting up some health rules in AppDynamics to monitor slow-running queries in the database. After going through the documentation on the website, we configured a health rule as seen below. Our problem is that there is a 'Group Replication module' (screenshot below) that is always running on the DB side; it is needed, but it causes constant violations. Is there a way to add an exception for specific queries so that similar items in the database do not trigger violations? Is there another approach you can suggest that will give us a more accurate result?
Hello all, I am using Maps+ with some success. I have one question: is there a way to zoom back to a set zoom level (like 3 or 4) after the default zoom-in on a cluster? I am using Maps+ to show up or down network devices. The cluster shows, say, 2 devices at a given lat/long and zooms in quite a lot. I am using a map of the US, more or less centered in the window, but after the zoom-in I have to back out using the "-" icon. It would be nice if Maps+ had the zoom-back feature of the legacy map using geostats, etc. Thanks, eholz1
With SOAR 6.1's addition of the "Run automatically when" field, it would be great to be able to run a playbook on container resolution that can read the closure comment. Bonus points if you can expla... See more...
With SOAR 6.1's addition of the "Run automatically when" field, it would be great to be able to run a playbook on container resolution that can read the closure comment. Bonus points if you can explain why Comment data is separate from Event data in the export while notes aren't.
Good day. I am trying to use the sendalert command in Splunk to send a set of results to Splunk SOAR (Phantom). Each result appears in Phantom as a new event; is there a way to receive only one event containing all the results? I'll appreciate your answer.
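As a rough illustration only (the field names and the alert action name below are placeholders, not actual Phantom parameters), one way to end up with a single event is to collapse the result set into one row before calling sendalert:

... your base search ...
| stats list(host) as host list(signature) as signature list(description) as description
| sendalert <your_soar_alert_action>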
Is it possible to add some parameters to a Splunk URL so that, after clicking the URL, the viewer sees a well-formatted SPL search and does not need to format it manually?
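For what it's worth, a sketch of the kind of link being described, assuming the standard Search app's q parameter (the host name is a placeholder). The search text has to be URL-encoded, and encoding line breaks as %0A is what keeps the SPL formatted when the link opens:

https://<your-splunk-host>/en-US/app/search/search?q=search%20index%3Dmain%20sourcetype%3Daccess_combined%0A%7C%20stats%20count%20by%20status&earliest=-24h%40h&latest=now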
Hello, I set up several hosts in the Forwarding and Receiving section (different servers and ports) to forward logs. I can see that the Automatic Load Balancing option is ENABLED. I want to have it DISABLED but do not know how. Can anybody help me, please? Thanks, Pawel
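As a hedged sketch only (group names, hosts, and ports below are placeholders): on a forwarder, automatic load balancing happens across the servers listed within a single tcpout group in outputs.conf, so one way to avoid balancing across destinations is to give each destination its own group and route data to a specific group:

# outputs.conf (placeholder names)
[tcpout]
defaultGroup = group_a

[tcpout:group_a]
server = hostA.example.com:9997

[tcpout:group_b]
server = hostB.example.com:9997

Data then goes only to the group(s) named in defaultGroup (or selected per input with _TCP_ROUTING) rather than being balanced across one combined server list.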
We've updated the layout of the Settings section of the navigation menu in Splunk Observability Cloud. Settings will now show your organization name, and we've made a little extra space to show your user profile.   Before this change, when you clicked Settings (or the gear icon), you'd see a link to your user profile at the top of the navigation menu, instead of the Splunk logo and organization name that you would see outside of Settings.   We heard from customers that this made it a little hard to open and close the menu, and (from our customers with multiple accounts) that it could be hard to tell which organization's settings they were looking at. To address this, we changed the layout of the Settings section to be consistent with the rest of the navigation menu.     We hope this little change makes your Observability Cloud experience better!
I have an idea and am looking for some input on how to approach it and where to start. As mentioned in the subject, I do not want an alert to be triggered if, let's say, it is Sunday between 1 and 2 AM. I cannot do this via cron, so I am looking for an alternative solution. Questions/Thoughts: (1) What is the best/simplest way to get the day and hour from Splunk? (2) Once I get the day and hour, how should I incorporate that into my existing alert query? Should I create a variable to indicate whether there is an outage (0/1)? (3) Once I determine that I am in an outage (1), is there an easy way to force the alert's results to 0? I know there are going to be many questions, so fire away and I will try to explain or answer as best I can; there are many alerts I am trying to make this work for, and they are all slightly different in their implementation.
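A minimal sketch of the pattern described in (1) through (3), appended to the end of an existing alert search; the outage window below is the Sunday 1-2 AM example from the question:

<your existing alert search>
| eval wday=strftime(now(), "%A"), hour=tonumber(strftime(now(), "%H"))
| eval in_outage=if(wday="Sunday" AND hour>=1 AND hour<2, 1, 0)
| where in_outage=0

When in_outage is 1, the where clause drops every row, so the alert returns zero results and the trigger condition is never met.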
Are there pre-configured or default dashboards associated with this add-on? Is the add-on supposed to show up under Apps when it's installed?
Goal: to alert off the latest event if the event is more than 300 seconds and is not blank or "non-productive". Here is my current search and the results: every incident is either an opening or a closing of an event. If the incident is blank, that signifies a closing of the previous event. If the incident has a string, that is the currently open event. In my ideal scenario, I would alert on any incident where the incident field has a string value, the current duration has surpassed 300 seconds, and there is no value in the total duration field. However, when I try to add a filter such as | where totalduration = "", no results are returned at all, which confuses me, since the latest totalduration event is blank because streamstats is false... Any help or tips greatly appreciated!
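Not knowing the exact field names (the ones below are assumptions based on the description), here is a hedged sketch of the final filter being described. The key detail is that a field that streamstats never assigned is null rather than an empty string, so the check needs isnull() in addition to the empty-string comparison:

... base search with streamstats ...
| where isnotnull(incident) AND incident!="" AND incident!="non-productive"
| where currentduration > 300 AND (isnull(totalduration) OR totalduration="")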