All Posts


Hi. I'm trying with these field transformations: [...] and adding them to the sourcetype: [...] but it does not work. Is there anything wrong? Thank you all!! BR
The "rest" in my answer is an SPL command. The same REST endpoint can be accessed via port 8089 after the port is enabled. ACS will not get you all of the knowledge objects (KOs) owned by a user.
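As an illustrative sketch (the specific endpoint and owner name below are assumptions, not from this thread), the rest SPL command queries the same REST API from within a search, for example to list one type of knowledge object (saved searches) filtered by owner:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="some_user"
| table title eai:acl.app eai:acl.owner
```

Other knowledge object types (dashboards, lookups, etc.) live under different endpoints, which is why a single ACS call will not return all of them.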
last one worked!
@ppal Have you added both of these processors under service --> pipelines --> <your log-pipeline> --> processors as well?
Hi @richgalloway Got it. What is the "rest" mentioned in your answer? Is it https://<deployment-name>.splunkcloud.com:8089? If yes, then we have not opened port 8089 for our Splunk Cloud instance. Is it necessary to open this port to be able to use these APIs? I have access to Splunk Cloud ACS and am able to get the users list using https://admin.splunk.com/{stack}/adminconfig/v2/ P.S. I am new to APIs and Splunk, so apologies in case these are basic Splunk knowledge. Also, thanks for the quick reply!
Hi Gene, I am facing a similar issue. Do you mind sharing what exactly you did to resolve this? Thank you!
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Splunk Fundamentals 1 was split into separate courses. You will find them here: Course Catalog | Splunk
This works well @ITWhisperer 
Try adding a table command to your base search:

<search id="base_srch">
  <query>index=prod sourcetype=auth_logs | table ip</query>
</search>
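For context (a general Simple XML behavior, not specific wording from this thread): a non-transforming base search streams raw events, and post-process searches can only see fields the base search explicitly retains, so keeping the field with | table (or | fields) is what lets the panel's | stats count by ip find anything. A minimal sketch, reusing the names and tokens from the dashboard in this thread:

```
<search id="base_srch">
  <query>index=prod sourcetype=auth_logs | table ip</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
</search>
<!-- the panel's post-process search then operates on the retained field -->
<search base="base_srch">
  <query>| stats count by ip</query>
</search>
```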
Hi, I want to go through Splunk Fundamentals 1. Where can I get this link?
First of all, thank you for a good(-ish) description of your issue. Try something like this:

| rex "(?<Date>\[[^\]]+\])\s(?<loglevel>\w+)\s-\swire\s(?<action_https>\S+)\sI\/O\s(?<correlationID>\S+)\s(?<direction>\S+)\s(?<message>.*)"
| eval grouping=correlationID.direction
| stats first(Date) as start last(Date) as end list(message) as message by grouping action_https correlationID loglevel
| eval Date=start
| eval duration=round(1000*(strptime(end,"[%F %T,%3N]")-strptime(start,"[%F %T,%3N]")),0)
| sort 0 Date
| table Date, loglevel, action_https, correlationID, message, duration

Note that your example shows unique combinations of correlationID and direction. If these are reused in your actual log, you may not get the results you expect. If so, please share a more representative version of your logs.
Hello All, I have created a dashboard and it always shows "No results found". But when I click "Open in Search" or run the search query directly, it shows results. Can anyone help please?

<form version="1.1" theme="light">
  <label>Successful connections by an IP range dashboard</label>
  <search id="base_srch">
    <query>index=prod sourcetype=auth_logs</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="time" searchWhenChanged="true">
      <label>test</label>
      <default>
        <earliest>-4h@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search base="base_srch">
          <query>|stats count by ip</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</form>
Statement: You install 1Password Events Reporting for Splunk from https://splunkbase.splunk.com/app/5632

Problem: After configuring it correctly, you get error messages in the _internal index like:

03-26-2024 11:37:30.974 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/audit_events" 2024/03/26 11:37:30 [DEBUG] POST https://events.1password.com/api/v1/auditevents
03-26-2024 11:37:27.672 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/signin_attempts" 2024/03/26 11:37:27 [DEBUG] POST https://events.1password.com/api/v1/signinattempts
03-26-2024 11:37:23.259 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/item_usages" 2024/03/26 11:37:23 [DEBUG] POST https://events.1password.com/api/v1/itemusages
03-26-2024 11:37:20.561 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/audit_events" 2024/03/26 11:37:20 [DEBUG] POST https://events.1password.com/api/v1/auditevents
03-26-2024 11:37:17.440 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/signin_attempts" 2024/03/26 11:37:17 [DEBUG] POST https://events.1password.com/api/v1/signinattempts

How do you resolve this? The app was configured with a token, the macros had indexes defined, and the interval for the scripted input was set to a cron schedule. Splunk 9.0.3 core, standalone dev environment.
Bonus question: are your timestamps parsed at all from the events? The event shows just hours/minutes/seconds, whereas the _time field in Splunk shows thousandths of a second.
AIM: Integrate AppDynamics with a Kubernetes cluster using the provided documentation.

Issue: I've set up a Kubernetes cluster and aimed to integrate it with AppDynamics for monitoring. Following the provided documentation, I successfully created the cluster agent. However, I encountered errors in the logs and found that the cluster data isn't showing up in the AppDynamics interface.

Reference: Install the Cluster Agent with the Kubernetes CLI

Logs and findings:

PS C:\Users\SajoSam> kubectl logs k8s-cluster-agent-5f8977b869-bpf5v
CA_PROPERTIES= -appdynamics.agent.accountName=myaccount -appdynamics.controller.hostName=mycontroller.saas.appdynamics.com -appdynamics.controller.port=8080 -appdynamics.controller.ssl.enabled=false -appdynamics.agent.monitoredNamespaces=default -appdynamics.agent.event.upload.interval=10 -appdynamics.docker.container.registration.interval=120 -appdynamics.agent.httpClient.timeout.interval=30
APPDYNAMICS_AGENT_CLUSTER_NAME=onepane-cluster
[ERROR]: 2024-03-26 09:55:04 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory
[INFO]: 2024-03-26 09:55:04 - main.go:57 - check env variables and enable profiling if needed
[INFO]: 2024-03-26 09:55:04 - agentprofiler.go:22 - Cluster Agent Profiling not enabled!
[INFO]: 2024-03-26 09:55:04 - main.go:60 - Starting APPDYNAMICS CLUSTER AGENT version 24.2.0-317
[INFO]: 2024-03-26 09:55:04 - main.go:61 - Go lang version: go1.22.0
W0326 09:55:04.910967 7 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
[INFO]: 2024-03-26 09:55:04 - main.go:78 - Kubernetes version: v1.29.0
[INFO]: 2024-03-26 09:55:04 - main.go:233 - Registering cluster agent with controller host : mycontroller.saas.appdynamics.com controller port : 8080 account name : xxxxx
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:356 - Established connection to Kubernetes API
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:68 - Cluster name: onepane-cluster
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:56:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:57:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": dial tcp 35.84.229.250:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:58:04 - agentregistrationmodule.go:119 - Initial Agent registration

Questions:
1. What could be the root cause of the failure to access the secret file /opt/appdynamics/cluster-agent/secret-volume/api-user?
2. What could be causing the timeout error during the registration request to the AppDynamics controller?

Could you help me with this? Thank you.

^ Post edited by @Ryan.Paredez to redact account name and controller name. For privacy and security reasons, please do not share your Account name or Controller URL.
1. If you can, don't receive syslog traffic directly on a Splunk component, especially if you have lots of traffic. There are better ways to do that. But it has nothing to do with the timezone problem.
2. Since the timestamp in the event does not contain timezone information, the timezone is inferred from other sources: either defined statically in props.conf for the sourcetype, source, or host, or taken from the timezone your forwarder is running in. There are several possible ways to tackle this:
a) Best solution: make the source send TZ info along with the timestamp. I'm not sure, however, whether your Palo Alto device can do that.
b) Not-that-bad solution: make your source log in UTC and configure Splunk to interpret your events as UTC.
c) Worst solution from the maintenance point of view: set the props for this source in Splunk (on your HF) to the timezone of the source. This can cause issues with daylight saving time.
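A minimal props.conf sketch for options (b) or (c); the sourcetype stanza name and the example timezone are placeholders, not from this thread:

```
# props.conf on the HF (the first full Splunk instance that parses the data)
[pan:traffic]
# Option (b): interpret the events' timestamps as UTC
TZ = UTC
# Option (c): or pin the source's local timezone instead (use one or the other)
# TZ = America/New_York
```

TZ only affects how timestamps without timezone information are interpreted at index time, which is why it must be set where parsing happens, not on the indexers downstream of an HF.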
Assuming that the fields only exist in their respective sourcetypes, you could try something like this (note that field names containing dots need single quotes in eval, otherwise the dot is treated as the concatenation operator):

sourcetype=source1 OR sourcetype=source2 OR sourcetype=source3
| eval userlist=coalesce(userlist, 'line.userlist', 'line.subject')
| stats dc(userlist)
There is insufficient information to be able to determine what might be amiss. For example, if your events have multi-value fields, this can give unexpected counts. Please share some representative anonymised examples of your events.
Hi Splunk team, we have been using a Splunk query similar to the one below across 15+ Splunk alerts, but the count mentioned in the email shows 4 times the actual number of failure occurrences.

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)
| stats count by Object, Failure_Message
| sort count

The Splunk query below returns the correct failure events:

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)

Can you please help update the first Splunk query so it shows the correct count instead of the wrong one?