All Posts


Bonus question - are your timestamps being parsed from the events at all? The event shows just hours/minutes/seconds, whereas the _time field in Splunk shows thousandths of a second.
AIM: Integrate AppDynamics with a Kubernetes cluster using the provided documentation.

Issue: I've set up a Kubernetes cluster and aimed to integrate it with AppDynamics for monitoring. Following the provided documentation, I successfully created the cluster agent. However, I encountered errors in the logs and found that the cluster data isn't showing up in the AppDynamics interface.

Reference: Install the Cluster Agent with the Kubernetes CLI

Logs and Findings:

PS C:\Users\SajoSam> kubectl logs k8s-cluster-agent-5f8977b869-bpf5v
CA_PROPERTIES= -appdynamics.agent.accountName=myaccount -appdynamics.controller.hostName=mycontroller.saas.appdynamics.com -appdynamics.controller.port=8080 -appdynamics.controller.ssl.enabled=false -appdynamics.agent.monitoredNamespaces=default -appdynamics.agent.event.upload.interval=10 -appdynamics.docker.container.registration.interval=120 -appdynamics.agent.httpClient.timeout.interval=30
APPDYNAMICS_AGENT_CLUSTER_NAME=onepane-cluster
[ERROR]: 2024-03-26 09:55:04 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory
[INFO]: 2024-03-26 09:55:04 - main.go:57 - check env variables and enable profiling if needed
[INFO]: 2024-03-26 09:55:04 - agentprofiler.go:22 - Cluster Agent Profiling not enabled!
[INFO]: 2024-03-26 09:55:04 - main.go:60 - Starting APPDYNAMICS CLUSTER AGENT version 24.2.0-317
[INFO]: 2024-03-26 09:55:04 - main.go:61 - Go lang version: go1.22.0
W0326 09:55:04.910967 7 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
[INFO]: 2024-03-26 09:55:04 - main.go:78 - Kubernetes version: v1.29.0
[INFO]: 2024-03-26 09:55:04 - main.go:233 - Registering cluster agent with controller host : mycontroller.saas.appdynamics.com controller port : 8080 account name : xxxxx
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:356 - Established connection to Kubernetes API
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:68 - Cluster name: onepane-cluster
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:56:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:57:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": dial tcp 35.84.229.250:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:58:04 - agentregistrationmodule.go:119 - Initial Agent registration

Questions:
1. What could be the root cause of the failure to access the secret file /opt/appdynamics/cluster-agent/secret-volume/api-user?
2. What could be causing the timeout error during the registration request to the AppDynamics controller?

Could you help me with this? Thank you

^ Post edited by @Ryan.Paredez to redact account name and controller name. For privacy and security reasons, please do not share your Account name or Controller URL.
1. If you can, don't receive syslog traffic directly on a Splunk component, especially if you have a lot of traffic. There are better ways to do that. But it has nothing to do with the timezone problem.
2. Since the timestamp in the event does not contain timezone information, the timezone is inferred from other sources: either defined statically in props.conf for the sourcetype, source, or host, or taken from the timezone your forwarder is running in. There are several possible ways to tackle this:
a) Best solution - make the source send TZ info along with the timestamp. I'm not sure, however, if your Palo Alto can do that.
b) Not that bad a solution - make your source log in UTC and configure Splunk to interpret your events as UTC.
c) Worst solution from the maintenance point of view - set the props for this source in Splunk (on your HF) to the timezone of the source. This can cause issues with daylight saving time.
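For option (c), a minimal props.conf sketch on the HF might look like the following; the sourcetype name and timezone here are placeholders to adjust to your environment:

```
# props.conf on the heavy forwarder (the sourcetype name is hypothetical -
# use whatever your Palo Alto events are actually tagged with)
[pan:traffic]
# Interpret timestamps that carry no TZ info as the source's local time
TZ = America/Mexico_City
```

With this in place, Splunk applies the given zone only when the event timestamp itself carries no timezone, which is exactly the situation described above.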
Assuming that the fields only exist in their respective sourcetypes, you could try something like this (note that field names containing dots need single quotes inside eval):

sourcetype=source1 OR sourcetype=source2 OR sourcetype=source3
| eval userlist=coalesce(userlist, 'line.userlist', 'line.subject')
| stats dc(userlist)
There is insufficient information to be able to determine what might be amiss. For example, if your events have multi-value fields, this can give unexpected counts. Please share some representative anonymised examples of your events.
Hi Splunk team,

We have been using the Splunk query below (with minor variations) across 15+ Splunk alerts, but the count mentioned in the email shows 4 times the actual number of failure occurrences.

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)
| stats count by Object, Failure_Message
| sort count

The Splunk query below is returning the correct failure events:

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)

Can you please help update the first query to show the correct count instead of the wrong one?
Good morning, I have started to ingest Palo Alto FW events and they are coming in with a wrong timestamp; the timestamp is 2 hours less than the real time. Here is an example. This is an event in my SCP: my SCP is in Spain time (UTC+1), 11:06 right now. The events are coming in with a timestamp of 9:06, although they are being ingested at 11:06. The PA server is in Mexico and the timestamp in the raw event is 4:06, 5 hours less. And the heavy forwarder is also in Mexico, but its clock is on EDT time. If I have explained myself properly, how can I fix it?
Which Splunk version are you running? Have you checked out "Solved: How to use iframe in Splunk 8.x?" on Splunk Community?
I have 3 different sources of the same field. I want to aggregate all 3 sources and get the distinct count of the field, e.g.

sourcetype=source1 | stats dc(userlist)
sourcetype=source2 | stats dc(line.userlist)
sourcetype=source3 | stats dc(line.subject)

Here userlist, line.userlist, and line.subject are all the same attribute but are logged differently. Now I want to get the dc of userlist+line.userlist+line.subject. Any help is appreciated.
Hi, I have a couple of questions related to Splunk AI Assistant:
1. Do we need to install Splunk AI Assistant on each Splunk server that we are using?
2. Can Splunk AI Assistant be called using API calls?
3. Does Splunk AI Assistant provide the SPL query or the result of the SPL query?
4. Based on the user's query, if there are multiple matches, will Splunk AI Assistant return all available SPL queries or the best match?
Good morning fellow Splunkthiasts!

I have an index with 100k+ events per minute (all of them having the same sourcetype); approximately 100 fields are known in this dataset. Some of these events are duplicated, while others are unique. My aim is to understand the duplication and be able to explain exactly which events get duplicated. I am detecting duplicates using this SPL:

index="myindex" sourcetype="mysourcetype"
| eventstats count AS duplicates BY _time, _raw

Now I need to identify which fields, or combination of fields, make the difference - under what circumstances an event is ingested twice. I tried to use the predict command; however, it produces new values for the "duplicates" field but does not disclose the rule by which it makes its decisions. In other words, I am not interested in the prediction itself; I want to know the predictors. Is something like that possible in SPL?
Hello everyone, I'm coming to you for advice. I am currently working with Splunk to monitor WSO2-APIM instances. According to the WSO2-APIM documentation, logs are generated as follows:
Hello everyone,  I'm coming to you for advice. I am currently working with splunk to create monitor WSO2-APIM instances.  According to the WSO2-APIM documentation, logs are generated as follows :  [2019-12-12 17:30:08,091] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]" [2019-12-12 17:30:08,093] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Host: localhost:8243[\r][\n]" [2019-12-12 17:30:08,094] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "User-Agent: curl/7.54.0[\r][\n]" [2019-12-12 17:30:08,095] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "accept: */*[\r][\n]" [2019-12-12 17:30:08,096] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]" [2019-12-12 17:30:08,097] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "[\r][\n]" [2019-12-12 17:30:08,105] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]" [2019-12-12 17:30:08,106] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "accept: */*[\r][\n]" [2019-12-12 17:30:08,107] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Host: www.mocky.io[\r][\n]" [2019-12-12 17:30:08,108] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Connection: Keep-Alive[\r][\n]" [2019-12-12 17:30:08,109] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" [2019-12-12 17:30:08,110] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "[\r][\n]" [2019-12-12 17:30:08,266] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "HTTP/1.1 200 OK[\r][\n]" [2019-12-12 17:30:08,268] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Server: Cowboy[\r][\n]" [2019-12-12 17:30:08,269] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Connection: keep-alive[\r][\n]" [2019-12-12 17:30:08,271] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" [2019-12-12 17:30:08,272] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Type: application/json[\r][\n]" 
[2019-12-12 17:30:08,273] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Length: 20[\r][\n]" [2019-12-12 17:30:08,274] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Via: 1.1 vegur[\r][\n]" [2019-12-12 17:30:08,275] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "[\r][\n]" [2019-12-12 17:30:08,276] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "{ "hello": "world" }" [2019-12-12 17:30:08,282] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "HTTP/1.1 200 OK[\r][\n]" [2019-12-12 17:30:08,283] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Expose-Headers: [\r][\n]" [2019-12-12 17:30:08,284] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Origin: *[\r][\n]" [2019-12-12 17:30:08,285] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Methods: GET[\r][\n]" [2019-12-12 17:30:08,286] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]" [2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Content-Type: application/json[\r][\n]" [2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Via: 1.1 vegur[\r][\n]" [2019-12-12 17:30:08,288] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" [2019-12-12 17:30:08,289] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Transfer-Encoding: chunked[\r][\n]" [2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]" [2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "14[\r][\n]" [2019-12-12 17:30:08,291] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "{ "hello": "world" }[\r][\n]" [2019-12-12 17:30:08,292] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "0[\r][\n]" [2019-12-12 17:30:08,293] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]" And also according to the doc :  DEBUG - wire >> Represents the message coming into the API 
Gateway from the wire. DEBUG - wire << Represents the message that goes to the wire from the API Gateway.   I use AWS Lambda to retrieve the WSO2-APIM logs, which are stored in AWS CloudWatch. I've just started using Splunk so I'm not very good at SPL. I would like Splunk to process events with SPL and then output something like this : Date, loglevel, action_https, correlationID, message, duration [2019-12-12 17:30:08,091], DEBUG, HTTPS-Listener, dispatcher-5, "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]" "Host: localhost:8243[\r][\n]" "User-Agent: curl/7.54.0[\r][\n]" "accept: */*[\r][\n]" "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]" "[\r][\n]", 006 [2019-12-12 17:30:08,105], DEBUG, HTTPS-Listener, dispatcher-1, "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]" "accept: */*[\r][\n]" "Host: www.mocky.io[\r][\n]" "Connection: Keep-Alive[\r][\n]" "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" "[\r][\n]", 005 [2019-12-12 17:30:08,266], DEBUG, HTTPS-Sender, dispatcher-1, "HTTP/1.1 200 OK[\r][\n]" "Server: Cowboy[\r][\n]" "Connection: keep-alive[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Content-Type: application/json[\r][\n]" "Content-Length: 20[\r][\n]" "Via: 1.1 vegur[\r][\n]" "[\r][\n]" "{ "hello": "world" }", 010 [2019-12-12 17:30:08,282], DEBUG, HTTPS-Listener, dispatcher-5, "HTTP/1.1 200 OK[\r][\n]" "Access-Control-Expose-Headers: [\r][\n]" "Access-Control-Allow-Origin: *[\r][\n]" "Access-Control-Allow-Methods: GET[\r][\n]" "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]" "Content-Type: application/json[\r][\n]" "Via: 1.1 vegur[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "[\r][\n]" "14[\r][\n]" "{ "hello": "world" }[\r][\n]" "0[\r][\n]" "[\r][\n]", 011 Do you have any ideas on how to do this with SPL in the Search App? Thank you for those who took the time to read and reply to me.
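One rough starting point in SPL, assuming each wire log line is ingested as its own event; the sourcetype name is a placeholder, the regex and maxpause value would need tuning against the real data, and transaction's duration field is in seconds, so it is converted to milliseconds here:

```
sourcetype="wso2:apim:wire"
| rex "^\[(?<date>[^\]]+)\]\s+(?<loglevel>\w+)\s+-\s+wire\s+(?<action_https>[\w-]+)\s+I/O\s+(?<correlationID>dispatcher-\d+)\s+(?<direction>>>|<<)\s+(?<payload>\".*\")"
| transaction action_https, correlationID, direction maxpause=1s
| eval message=mvjoin(payload, " "), date=mvindex(date, 0), duration=round(duration*1000)
| table date, loglevel, action_https, correlationID, message, duration
```

This groups consecutive lines that share the same listener/sender, dispatcher, and direction into one row, which is roughly the shape of the desired output; a stats-based grouping would be cheaper than transaction at higher volumes.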
@BRFZ If you have no cluster, the data are not replicated. So if one indexer goes down, your searches cannot access all of the data.
@PaulPanther Thank you for your response, and does it not have any impact given that the indexers are not in a cluster?
1. Create the necessary indexes on your indexers.
2. Configure forwarding as described in "Best practice: Forward search head data to the indexer layer" in the Splunk Documentation.
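For step 2, that best practice boils down to an outputs.conf on the search head along these lines; the hostnames and port are placeholders for your two indexers' receiving endpoints:

```
# outputs.conf on the search head - hypothetical hostnames/port,
# replace with your indexers' actual receiving endpoints
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

With index = false and a default tcpout group, data collected on the search head (including the add-on's inputs) is forwarded to the indexers instead of being written to local indexes.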
As mentioned by @Atyuha.Pal, we can make use of the Metric Browser to create one. Here's a simple workaround: if you open the Transaction Scorecard and double-click on any of the bar charts, it will take you to the Metric Browser, where you can see the metrics used, e.g. Number of Slow Calls, Stall Count, etc. Using those, you can create a dashboard by inputting those metrics. We have Slow, Very Slow, Error, and Stall, so the tricky part is to create the "Normal". We can use a metric expression to derive Normal = Calls per Minute - Slow - Very Slow - Stall - Errors. I did a quick check and the value looks correct. Thanks, Terence
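As a schematic of that metric expression (this is not literal widget syntax - in the dashboard editor each metric you pick from the Metric Browser becomes an operand, and the exact metric names depend on the tier/business transaction in question):

```
# Arithmetic the expression encodes, with illustrative operand names
Normal = CallsPerMinute
       - NumberOfSlowCalls
       - NumberOfVerySlowCalls
       - StallCount
       - ErrorsPerMinute
```

A quick sanity check is to confirm the five series sum back to Calls per Minute for a few time ranges.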
Hello, I have an architecture with a single SH and two indexers. I've installed the Splunk for Microsoft 365 add-on on the search head, so the collected logs are stored in the search head's index, but I want them to be stored on the indexers. Can you help me? Thank you.
I see that it is quite some time since you posted this question. Just wanted to "second it", as I am working with hardening a Splunk platform myself at the moment and am wondering about the same thing. By chance, have you found any answers?
You should find all necessary information about Splunk TAs and the KV store in your "_internal" index. As a second step, you could check the "source" field for the TAs that you want to monitor. Most of the available TAs write logs to their own logfile under $SPLUNK_HOME/var/log/splunk. For the KV store, check mongod.log. More information: What Splunk software logs about itself - Splunk Documentation
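A minimal sketch of that kind of search; the ERROR filter and the grouping fields are just one way of slicing _internal:

```
index=_internal sourcetype=splunkd log_level=ERROR
| stats count by source, component
| sort - count
```

The source column then shows which TA logfiles the errors come from. KV store messages from mongod.log also land in _internal, so a filter such as index=_internal source=*mongod.log* narrows the search to the KV store specifically.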
Thanks - even if the query consumes a lot of resources, it works.