All Posts

Splunk advises AGAINST sending syslog directly to a Splunk instance. The preferred practice is to send to a dedicated syslog server (rsyslog or syslog-ng) and forward to Splunk from there. Alternatively, you can use Splunk Connect for Syslog (SC4S). You can use any amount of resources you wish; if there is a problem, however, Splunk Support may require that you meet the recommended hardware specifications before providing further support.
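To make the "dedicated syslog server" pattern concrete, here is a minimal rsyslog sketch (paths, the port, and the file layout are illustrative assumptions, not from the original post): rsyslog receives syslog on UDP 514 and writes per-host files, which a Universal Forwarder then monitors and forwards to the indexers.

```
# /etc/rsyslog.d/remote.conf -- minimal sketch; port and paths are examples
module(load="imudp")
input(type="imudp" port="514")

# Write each sender's events to its own file under /var/log/remote/;
# a Universal Forwarder monitors that directory and forwards to Splunk.
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHost")
```

This decouples syslog reception from Splunk restarts, so senders don't drop events while Splunk is down.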
Hello Splunkers, I need some help understanding the minimum specs required for a Splunk Enterprise installation used as a Heavy Forwarder, where it will only receive logs from one source over syslog and forward them to the indexers. Can I just use 2 CPUs, 8 GB RAM, and storage sized from an estimate of the log file sizes? I'm asking because the official guide says the minimum is 12 GB RAM and a 4-core CPU. Please advise if you can. Thanking you in advance,   Moh....
Awesome!
https://ideas.splunk.com
Did.... did you..... did you use the waterfall, slow response times + errors, or call graph views? Those are all 100% intended to provide the information you asked for. They're just one click away in the default navigation tabs, which should make them fairly easy to check.....?
IDK if this will help you, but speaking as someone monitoring 468 Oracle RAC clusters, here are the three steps I'd try, possibly putting the password validation first if I were to reorder anything: 1) Disable the collector for a few minutes, then re-enable the DB collector. This is my go-to solution for messages like this. 2) Validate that the DBMonitor agent, the middleman in all this, is updated and compatible with your upgraded controller. 3) While you're there, verify that you can establish a basic JDBC connection to the DB from the instance the DBMon collector runs on, and that you can run basic topology queries with the AppD DB user the collector uses. I'm just speaking as another user who's been through similar issues, so hopefully it helps!
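For step 3, before testing a full JDBC connection, it can help to first confirm basic TCP reachability from the collector host to the DB listener. A small sketch (this helper, and the example host/port, are my own illustration, not from the original post):

```python
import socket

def db_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: an Oracle listener commonly sits on port 1521.
# db_port_reachable("db-host.example.com", 1521)
```

If this returns False, the JDBC failure is a network/firewall problem rather than a credentials or driver issue, which narrows the search considerably.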
9.3.2 does.
It might not be what you were thinking, but you can split your BTs by thread name in the instrumentation settings for business transactions. It works for any existing BT, and I've used it for several apps where I needed a per-thread confirmation or identification of the "flow" to correlate with individual processes, or to use alongside other tools' insights where my AppD transaction ID isn't tag-and-traced through the third-party tools I have to use in unique cases AppD can't cover. I also use this method split by "Thread ID" at times, which can be very valuable when trying to determine whether a third party's app is ACTUALLY spreading load across its threads or operating in a single-threaded fashion. Used this way, the feature takes hours or days versus the months it would take to find the same answer through log analysis alone. FYI: the above is just a "catch all for anything, split all the things" example. I would highly recommend you don't use it in prod, as it will likely impact your instances quite seriously. Instead, try to find your starting point by a given class + method if possible, and then split by thread ID or thread name. IDK if that's exactly what you were looking for, but I hope it helps!
Thank you, nothing useful in the logs, and I've already opened a ticket. I'll report back.
@dbray_sd  Alright, did you find anything in the internal logs? Are none of the inputs functioning? Did you take a backup of the DB Connect add-on before upgrading to the latest version? If you have a backup, please restore it and test again. I haven't encountered this issue before, but checking the internal logs might provide some insight. If not, it's best to raise a support ticket. https://docs.splunk.com/Documentation/DBX/3.18.1/DeployDBX/Troubleshooting  I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
Thank you @kiran_panchavat, that at least gives me something to investigate further, but it's also confusing. Health Check is complaining: One or more defined connections require the corresponding JDBC driver. However, those JDBC drivers come from the Splunk_JDBC_mysql add-on app, which I checked and it's running the latest version. Confusing.
@dbray_sd  Did you perform the health check after upgrading to the latest version of DB Connect? https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/CheckInstallationHealth   
I tried the same approach, but nothing shows up under "Statistics". When I don't check any condition, I get the record below. If you relate my question to it, you can see that for the 5-KEY inboundSsoType a deep link comes back in the response, so I just want to replace the "5-KEY" string with that deep link. Below is the JSON against which I'm trying to check the condition (collapsed nodes omitted):

message: {
    backendCalls: [ ... ]
    deviceInfo: { ... }
    elapsedTime: 210
    exceptionList: [ ... ]
    incomingRequest: {
        deepLink: https://member.uhc.com
        hsidSSOParameters: { ... }
        inboundSsoType: 5-KEY
+1 on that question. The Splunk architectural component is called the Deployment Server, not a deployment manager. And it doesn't quarantine anything. Quarantine can happen in various other situations, but those have nothing to do with the DS. So what, and where, is quarantined in your setup?
I am using StatsD to send metrics to a receiver, but I am encountering an issue where timing metrics (|ms) are not being captured, even though counter metrics (|c) work fine in Splunk Observability Cloud.

Example of a working metric. The following command works and is processed correctly by the StatsD receiver:

echo "test_Latency:42|c|#key:val" | nc -u -w1 localhost 8127

Example of a non-working metric. However, this command does not result in any output or processing:

echo "test_Latency:0.082231|ms" | nc -u -w1 localhost 8127

Current StatsD configuration. Here is the configuration I am using for the receiver, following the doc: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver

receivers:
  statsd:
    endpoint: "localhost:8127"
    aggregation_interval: 30s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

Why are timing metrics (|ms) not being captured while counters (|c) are working? Can you please help me check this? The statsdreceiver GitHub document says it supports "timer" metrics: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/statsdreceiver/README.md#timer Any help or suggestions would be greatly appreciated. Thank You.
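One diagnostic that may be worth trying (a sketch, not a confirmed fix): temporarily map the timing type to a plain gauge observer, so each |ms sample is emitted as-is. If gauges then appear, the receiver is parsing |ms correctly and the problem lies in histogram aggregation or in how histogram-typed metrics are handled downstream; if nothing appears, the samples are being dropped before observation.

```yaml
# Diagnostic only: emit each timing sample as a gauge to confirm the
# receiver is parsing "|ms" at all, before tuning histogram settings.
timer_histogram_mapping:
  - statsd_type: "timing"
    observer_type: "gauge"
```

Revert to the histogram mapping once you've confirmed where the samples are lost.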
After upgrading Splunk to 9.4.0 and Splunk DB Connect to 3.18.1, all INPUTS show the error: Checkpoint not found. The input in rising mode is expected to contain a checkpoint. None of them are pulling in data. Looking over the logs, I see:

2025-01-10 12:16:00.298 +0000 Trace-Id=1d3654ac-86c1-445f-97c6-6919b3f6eb8c [Scheduled-Job-Executor-116] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
com.splunk.dbx.server.exception.ReadCheckpointFailException: Error(s) occur when reading checkpoint.
    at com.splunk.dbx.server.dbinput.task.DbInputCheckpointManager.load(DbInputCheckpointManager.java:71)
    at com.splunk.dbx.server.dbinput.task.DbInputTask.loadCheckpoint(DbInputTask.java:133)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.executeQuery(DbInputRecordReader.java:82)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:55)
    at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:97)
    at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.runTask(InputServiceImpl.java:321)
    at com.splunk.dbx.server.api.resource.InputResource.lambda$runInput$1(InputResource.java:183)
    at com.splunk.dbx.logging.MdcTaskDecorator.run(MdcTaskDecorator.java:23)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)

I'm unable to edit the config and update the checkpoint value. Even though Execute Query works, when I try to save the update it gives: Error(s) occur when reading checkpoint. Has anybody else successfully upgraded to 9.4.0 and 3.18.1?
I might have an update for this one, as I was after the same thing as the original question suggests and I did not want to use REST for it. You might want to try the following and see if it works for you.

index=_audit sourcetype=audittrail (action=edit_roles_grantable OR action=edit_role) (TERM(object) OR TERM(role)) (operation=create OR operation=edit OR action=edit_role) info=granted

Basically, this search finds two types of logs within the _audit index. The first is "edit_roles_grantable", which should be logged any time someone edits a role (creating counts as editing too). The second is "edit_role", which also shows what was changed (this part isn't perfect: I could see which capability was changed, but I could not find changes to which indexes the role can search). Anyway, you can play around with the search and get what you need in some cases.
What do you mean by "Linux UF will get quarantined by the deployment manager:8089"?
It seems that your target is an SCP environment. Are you using the Universal Forwarder package from SCP? Based on those server names, you have something other than the AWS Victoria Experience in use, or otherwise you have the wrong outputs.conf in use.
How about those configuration files? Was this a connection to the management port (8089)? Are you trying to use self-signed certificates for all the needed ports (web, mgmt, S2S, etc.)?