
All Posts

I've got to be close, but I'm having issues figuring out how to get a distinct count of user sessions to show up in a bar chart with a trendline. I'd like to see a distinct count of users for the last year, by month, with a trendline added.

<My Search>
| stats dc(userSesnId) as moving_avg
| timechart span=30d dc(userSesnId) as count_of_user_sessions
| trendline sma4(moving_avg) as "Moving Average"
| rename count_of_user_sessions AS "Distinct Count of User Sessions"
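Not the poster's query, but a minimal sketch of one way a monthly distinct count with a simple moving average is commonly built: timechart needs to run against the raw events (a preceding stats collapses them to a single row), and trendline can only reference a field that still exists after timechart. The aliases session_count and moving_average are illustrative.

<My Search>
| timechart span=1mon dc(userSesnId) AS session_count ``` monthly distinct count of sessions ```
| trendline sma4(session_count) AS moving_average ``` 4-month simple moving average ```
| rename session_count AS "Distinct Count of User Sessions", moving_average AS "Moving Average"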
I know this is an old question, but your idea actually works. The first part of the subsearch, up to "fields - ...", simply builds a table that I use for field renaming, so that users only need to edit a lookup to rename fields:

| makeresults
| eval field1="some value", field2="another value"
| rename
    [| makeresults
     | eval mapping="field1:field_one field2:field_two"
     | makemv delim=" " mapping
     | mvexpand mapping
     | rex field=mapping "(?<orig>[^:]+):(?<new>.*)"
     | fields - _time, mapping
     | eval rename_phrase=orig + " as " + "\"" + new + "\""
     | stats values(rename_phrase) as rename_phrases
     | eval search=mvjoin(rename_phrases, ", ")
     | fields search]

But the subsearch can only build the arguments; the rename command itself must stay in the base search. Maybe it's of use to somebody out there.
I'm working on a dashboard in Dashboard Studio to display data in two different tables using a single dropdown. The issue is that all my data is keyed by the "username" field, but I want the dropdown to display the user as "Lastname, Firstname" for better readability.

The first table pulls records from a lookup table with user demographics and such. The second table pulls the respective Windows log data tracking various user activity.

In my dropdown, I am currently using the lookup table and an eval to join "user_last" and "user_first" into a "fullname" variable, displayed as "Lastname, Firstname". I then use "fullname" as the pass-on token for my first table. However, for my second table I need "username" as the token, because the data I am querying only has "username" in the logs, not the user's first or last name as in my first table.

My question is: can I set my dropdown to display "user_last, user_first" but set the token value to "username"? Or can I assign multiple tokens in an SPL query in Dashboard Studio to use in the respective tables? Or can I do both, for the sake of knowledge? Here is what I am working with, and I appreciate any assistance.

Lookup table:
    Name: system_users.csv
    Fields: username, name_last, name_first....

Dashboard Dropdown Field Values:
    Data Source Name: lookup_users

SPL Query:
| inputlookup bpn_system_users.csv
| eval fullname= name_last.", ".name_first
| table fullname
| sort fullname

Source Code:
{
    "type": "ds.search",
    "options": {
        "queryParameters": {
            "earliest": "$SearchTimeLine.earliest$",
            "latest": "$SearchTimeLine.latest$"
        },
        "query": " | inputlookup system_users.csv\n | eval fullname= name_last.\", \".name_first\n | table fullname\n | sort fullname"
    },
    "name": "lookup_users"
}
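Not from the post, but a sketch of one commonly used approach, reusing the poster's field names: a Dashboard Studio dropdown backed by a search data source can use one field as the displayed label and a different field as the token value (configured in the input's dynamic options; the exact option names vary by version, so worth confirming in the docs). The search just needs to return both fields:

| inputlookup system_users.csv
| eval fullname = name_last.", ".name_first ``` displayed label ```
| table fullname username ``` username becomes the token value ```
| sort fullname

With the token carrying the plain username, both tables can consume it directly, and the first table can join back to the lookup if it needs to show the formatted name.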
Hello everyone, I am hoping someone can help me out, as I have exhausted everything I can think of and cannot seem to get anything to work. Essentially, I am looking to pull results to get a total based on an ID. The issue is that each ID will have between 1 and 4 events associated with it, and these events are related to the status. I only want results for IDs that are Open or Escalated, but the search is pulling all of the events, even for IDs whose status has since changed to Closed or another status. I want to exclude all events for IDs whose status has changed to anything other than Open or Escalated. The other trouble is that this "status" event occurs in the metadata of the whole transaction. I have the majority of my query built out, but where I am struggling is removing the initial Open and Escalated events for the alerts whose status was later changed. The field the status changes in is under "logs", specifically "logs{}.action".
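Not the poster's query, but a minimal sketch of the usual pattern for a "keep only IDs whose latest status is still X" problem. The field logs{}.action comes from the post; id and status_latest are illustrative names that would need to match the actual data:

<base search>
| eventstats latest(logs{}.action) AS status_latest BY id ``` most recent status per ID ```
| search status_latest IN ("Open", "Escalated") ``` drop every event for IDs that moved to any other status ```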
Splunk advises AGAINST sending syslog directly to a Splunk instance. The preferred practice is to send to a dedicated syslog server (rsyslog or syslog-ng) and forward to Splunk from there. Alternatively, you can use Splunk Connect for Syslog (SC4S). You can use any amount of resources you wish. If there is a problem, however, Splunk Support may require that you meet the recommended hardware specifications before they provide further support.
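To make the forwarding step concrete, here is a minimal, hypothetical sketch (the path, index, and sourcetype are placeholders, not from the post): the syslog daemon writes incoming messages to files, and a Universal Forwarder on the same host monitors those files.

# inputs.conf on the syslog host's Universal Forwarder -- illustrative values only
[monitor:///var/log/remote/*/*.log]
sourcetype = syslog
index = network
host_segment = 4

host_segment = 4 picks the originating hostname out of the directory path, assuming rsyslog/syslog-ng is configured to write one directory per sending host; adjust it to match the actual directory layout.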
Hello Splunkers, I need some help understanding the minimum specs required for a Splunk Enterprise installation that will act as a Heavy Forwarder, where it will only receive logs from one source over syslog and forward them to the indexers. Can I just use 2 CPUs, 8 GB RAM, and storage sized from an estimate of the log file sizes? I'm asking because the official guide says the minimum is 12 GB RAM and a 4-core CPU. If someone can advise on this, it would be appreciated. Thanking you in advance,   Moh....
Awesome!
https://ideas.splunk.com
Did.... did you..... did you use the waterfall, slow response times + errors, or call graph views? Those are all 100% intended to provide the information you asked for. They're just one click away in the default navigation tabs, which should make them fairly easy to check.....?
IDK if this will help you, but speaking as someone monitoring 468 Oracle RAC clusters, if it were me I'd try the following three steps, possibly putting the password validation above the others if I were to change anything as I wrote it below.

1) Disable the collector for a few minutes, then re-enable the DB collector. This is my go-to solution for messages like this.

2) Validate that the DB monitor agent that's the middleman in all this is updated and compatible with your updated controller.

3) While you're there, validate that you can establish a basic JDBC connection to the DB from the instance the DB monitoring collector is running on, and that you can run basic topology queries with the AppD DB user the collector is using.

I'm just speaking to you as another user who's been through similar issues, so hopefully it helps!
9.3.2 does.
It might not be what you were thinking, but you can split your BTs by thread name in the instrumentation settings for business transactions. It works for any existing BT, and I've used it for several apps where I needed a by-thread confirmation or identification of the "flow" to correlate with single procs, or to use in conjunction with other tools' additional insights in the unique cases where my AppD transaction ID isn't tag-and-traced through the third-party tools I have to use because AppD can't cover those apps/tools.

I also use this method with "Thread ID" at times, which can be very valuable when attempting to determine whether a third party's app is ACTUALLY spreading load across the threads or operating in a single-threaded fashion. Used this way, it takes hours or days to find what would have taken months in log analysis alone.

FYI - the above is just a "catch-all for anything, split all the things" example. I would highly recommend you don't use this in prod, as it will likely impact your instances quite seriously. Instead, try to find your starting point by the given class + method if possible, and then split by thread ID or thread name.

IDK if that's exactly what you were looking for, but I hope it helps!
Thank you, nothing useful in the logs, and I've already opened a ticket. I'll report back.
@dbray_sd  Alright, did you find anything in the internal logs? Are none of the inputs functioning? Did you take a backup of the DB Connect add-on before upgrading to the latest version? If you have a backup, please restore it and test again. I haven't encountered this issue before, but checking the internal logs might provide some insights. If not, it's best to raise a support ticket. https://docs.splunk.com/Documentation/DBX/3.18.1/DeployDBX/Troubleshooting  I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply. Thanks.
Thank you @kiran_panchavat, that at least gives me something to investigate further, but it's also confusing. Health Check is complaining about: "One or more defined connections require the corresponding JDBC driver." However, those JDBC drivers come from the Splunk_JDBC_mysql add-on app, which I checked, and it's running the latest version. Confusing.
@dbray_sd  Did you perform the health check after upgrading to the latest version of DB Connect? https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/CheckInstallationHealth   
I tried the same approach, but nothing comes up under "Statistics". When I don't check any condition, I get the record below. If you relate my question to it, you can see that for inboundSsoType 5-KEY, a deep link comes back in the response, so I just want to replace the 5-KEY string with that deep link. Below is the JSON where I am trying to check the condition:

message: {
    backendCalls: [ ... ]
    deviceInfo: { ... }
    elapsedTime: 210
    exceptionList: [ ... ]
    incomingRequest: {
        deepLink: https://member.uhc.com
        hsidSSOParameters: { ... }
        inboundSsoType: 5-KEY
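Not from the post, but one way this kind of substitution is often done, with the field paths taken from the pasted event (adjust them if the JSON nesting differs):

<base search>
| spath output=inboundSsoType path=message.incomingRequest.inboundSsoType
| spath output=deepLink path=message.incomingRequest.deepLink
| eval inboundSsoType = if(inboundSsoType=="5-KEY", deepLink, inboundSsoType) ``` replace 5-KEY with the deep link ```

After the eval, any 5-KEY value is replaced by the deep link, and the field can go into a table or stats as usual.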
+1 on that question. The Splunk architectural component is called a Deployment Server, not a deployment manager, and it doesn't quarantine anything. Quarantine can happen in various other situations, but they have nothing to do with the DS. So what, and where, is quarantined in your setup?
I am using StatsD to send metrics to a receiver, but I am encountering an issue where timing metrics (|ms) are not being captured, even though counter metrics (|c) work fine in Splunk Observability Cloud.

Example of Working Metric: The following command works and is processed correctly by the StatsD receiver:

echo "test_Latency:42|c|#key:val" | nc -u -w1 localhost 8127

Example of Non-Working Metric: However, this command does not result in any output or processing:

echo "test_Latency:0.082231|ms" | nc -u -w1 localhost 8127

Current StatsD Configuration: Here is the configuration I am using for the receiver, following the doc at https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver:

receivers:
  statsd:
    endpoint: "localhost:8127"
    aggregation_interval: 30s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

Why are timing metrics (|ms) not being captured while counters (|c) are working? Can you please help check this? The statsdreceiver GitHub documentation says it supports timer-related metrics: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/statsdreceiver/README.md#timer Any help or suggestions would be greatly appreciated. Thank you.
After upgrading Splunk to 9.4.0 and Splunk DB Connect to 3.18.1, all inputs show the error: "Checkpoint not found. The input in rising mode is expected to contain a checkpoint." None of them are pulling in data. Looking over the logs, I see:

2025-01-10 12:16:00.298 +0000 Trace-Id=1d3654ac-86c1-445f-97c6-6919b3f6eb8c [Scheduled-Job-Executor-116] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
com.splunk.dbx.server.exception.ReadCheckpointFailException: Error(s) occur when reading checkpoint.
    at com.splunk.dbx.server.dbinput.task.DbInputCheckpointManager.load(DbInputCheckpointManager.java:71)
    at com.splunk.dbx.server.dbinput.task.DbInputTask.loadCheckpoint(DbInputTask.java:133)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.executeQuery(DbInputRecordReader.java:82)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:55)
    at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:97)
    at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.runTask(InputServiceImpl.java:321)
    at com.splunk.dbx.server.api.resource.InputResource.lambda$runInput$1(InputResource.java:183)
    at com.splunk.dbx.logging.MdcTaskDecorator.run(MdcTaskDecorator.java:23)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)

I'm unable to edit the config and update the checkpoint value. Even though Execute Query works, when I try to save the update it gives: "Error(s) occur when reading checkpoint." Has anybody else successfully upgraded to 9.4.0 and 3.18.1?