All Posts


DS is not supposed to serve apps to clustered search heads directly; that's what the deployer is for. If by any chance you managed to get clustered SHs to pull apps directly from the DS, you're in for a whole load of possible problems.
We have a deployment server which deploys apps (which contain configs) to a search head cluster (3 SHs). I am not sure whether the DS distributes apps directly to the SH members, or whether they are sent to the deployer and the deployer then distributes the apps to the SH members? Please clarify. We have created a role in a DS app which restricts access to a specific index. When we try to push it, that role is not reflected on the SH members. But when we check the deployer, the app is present under shcluster/apps and that role is updated. It is just not showing in the SH UI. What is the problem?

Do we need to manually push the config from the deployer to the SH members every time? We have deployer_push_mode=merge_to_default configured on the deployer. Does that mean distribution is automated? If not, how do we push config from the deployer to the SH members through Splunk Web? We don't have access to the backend server to run CLI commands.
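For reference: as far as I know, the deployer push is a CLI-only operation; there is no Splunk Web control for it, and the DS does not hand apps to the deployer automatically. A minimal sketch of the push, run on the deployer (host name and credentials here are placeholders):

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

With deployer_push_mode = merge_to_default set for the app (in app.conf under [shclustering] on the deployer), the app's local and default contents are merged into default on the members, but the push itself still has to be triggered as above; it is not automated.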
Yes, of course. Here is my example data: the first block is the alert when it first comes in, and the second is after it has been reviewed and closed.

Alert Received:

alert_type: search query
asset: { ... }
asset_term: null
content_created_at: 2017-01-10T11:00:00+00:00
escalated: false
id: XXXXXX112
last_modified: 2025-01-09T16:33:37Z
logs: [
    { action: open, detail:, id:, subject:, timestamp: 2025-01-09T16:33:37+00:00 }
    { action: modify tags, detail:, id:, subject:, timestamp: 2025-01-09T16:33:37+00:00 }
]
metadata: { ... }
network: domains
severity: 4
status: Open
timestamp: 2025-01-09T16:33:37+00:00
timestamp_modify_tags: 2025-01-09T16:33:37+00:00

Alert Closed:

alert_type: search query
asset: { ... }
asset_term: null
content_created_at: 2017-01-10T11:00:00+00:00
escalated: false
id: XXXXXX112
last_modified: 2025-01-09T17:10:52Z
logs: [
    { action: close, detail:, id:, subject:, timestamp: 2025-01-09T17:10:52+00:00 }
    { action: modify notes, detail:, id:, subject:, timestamp: 2025-01-09T17:10:48+00:00 }
    { action: assign, detail:, id:, timestamp: 2025-01-09T17:09:25+00:00 }
    { action: open, actor:, detail:, id:, subject:, timestamp: 2025-01-09T16:33:37+00:00 }
    { ... }
]
metadata: { ... }
network: domain
severity: 4
status: Closed
timestamp: 2025-01-09T16:33:37+00:00
timestamp_modify_notes: 2025-01-09T17:10:48+00:00
timestamp_modify_tags: 2025-01-09T16:33:37+00:00

I initially tried to just dedup, but that was before I knew it was pulling in multiple events. Since then I have tried the following:

1. I tried doing an mvindex on the status events, but it was still pulling in all of the events.
2. I then tried latest(status), but realized that was only going to pull in the actual latest status of the ID and would still include all of the events.
3. I also tried a subsearch, per some guidance from a colleague, that ended up looking like the following:

| dedup id
| where
    [ search index=source
    | stats latest(status) as latest_status by id
    | where latest_status="closed"
    | return $id ]

4. Lastly, I tried going at the metadata, which looked like the following:

| dedup id
| fields id
| format
| rex mode=sed field=search "s/ / OR /g"
| eval search="NOT (id IN (" + search + "))"
| fields search
| format "" "" "" "" "" "" "search"

and turned into this:

[ | search index="source" NOT (status="Open" OR status="Escalated")
    | stats count by id
    | fields id
    | format "" "" "" "" "," "OR" "id!="
    | rex mode=sed field=search "s/^(.*)$/NOT (id IN (\1))/" ]
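For anyone hitting the same wall: a common pattern for "only IDs whose latest status is still Open or Escalated" is to compute the latest status per ID first and filter on it, rather than deduping. A minimal sketch, assuming index=source and the field names shown above:

index=source
| eventstats latest(status) as latest_status by id
| where latest_status IN ("Open", "Escalated")

eventstats keeps the raw events (unlike stats), so every event for an ID whose latest status is Closed drops out in one pass, with no subsearch or format gymnastics.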
As @richgalloway said, don't use Splunk to terminate a syslog feed. When you are using a real syslog server, it's better to use a UF instead of an HF to forward those logs. Or use SC4S, especially if you don't have experience running a syslog server.
Can you show some sample events and the SPL you have tried?
Hi, when you use stats it removes all other fields. Basically you have two options here: use timechart together with trendline, or use streamstats with a window parameter. r. Ismo
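A minimal sketch of that first option, assuming the userSesnId field from the question (the span and window size are arbitrary):

<My Search>
| timechart span=1mon dc(userSesnId) as user_sessions
| trendline sma4(user_sessions) as moving_average
| rename user_sessions as "Distinct Count of User Sessions", moving_average as "Moving Average"

The key point is that timechart does the bucketing and the distinct count in one step; trendline then runs over timechart's output, so no separate stats is needed before it.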
Hi all... I got this fixed with the simple logic of the set diff command. Thanks, everyone!
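For anyone landing here later, a minimal sketch of the set diff pattern (index and field names are placeholders):

| set diff
    [ search index=index_a | table host ]
    [ search index=index_b | table host ]

set diff returns the results that appear in one of the two result sets but not both, which is handy for spotting hosts reporting to one index and missing from the other.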
I've got to be close, but I'm having issues trying to figure out how to get a distinct count of user sessions to show up in a bar chart with a trendline. I'd like to see a distinct count of users for the last year by month and have a trendline added.

<My Search>
| stats dc(userSesnId) as moving_avg
| timechart span=30d dc(userSesnId) as count_of_user_sessions
| trendline sma4(moving_avg) as "Moving Average"
| rename count_of_user_sessions AS "Distinct Count of User Sessions"
I know this is an old question, but your idea actually works. The first part of the subsearch, up to "fields - ...", simply builds a table I use for field renaming, so that users only need to edit a lookup to rename fields:

| makeresults
| eval field1="some value", field2="another value"
| rename
    [| makeresults
    | eval mapping="field1:field_one field2:field_two"
    | makemv delim=" " mapping
    | mvexpand mapping
    | rex field=mapping "(?<orig>[^:]+):(?<new>.*)"
    | fields - _time, mapping
    | eval rename_phrase=orig + " as " + "\"" + new + "\""
    | stats values(rename_phrase) as rename_phrases
    | eval search=mvjoin(rename_phrases, ", ")
    | fields search]

But it can only build the arguments; as you can see, rename itself must be in the base search. Maybe of use for somebody out there.
Working on a dashboard in Dashboard Studio to display data in two different tables using a single dropdown. The issue I have is that all my data is keyed by the "username" field, but I want the dropdown to display the user as Lastname, Firstname for better readability.

The first table pulls records from a lookup table with user demographics and such. The second table pulls the respective Windows log data tracking various user activity.

In my dropdown, I am currently using the lookup table and an eval to join "user_last" and "user_first" into a "fullname" variable, displaying the user as "Lastname, Firstname". I then used "fullname" as the pass-on token for my first table. However, for my second table I need "username" as the token, because the data I am querying only has the "username" in the logs, not the user's first or last name as in my first table.

My question is: can I set my dropdown to display "user_last, user_first" but set the token value to "username"? Or can I assign multiple tokens in an SPL query in Dashboard Studio to use in the respective tables? Or can I do both, for the sake of knowledge?

Here is what I am working with, and I appreciate any assistance with this.

Lookup table:
    Name: system_users.csv
    Fields: username, name_last, name_first...

Dashboard Dropdown Field Values:
    Data Source Name: lookup_users

SPL Query:
| inputlookup bpn_system_users.csv
| eval fullname= name_last.", ".name_first
| table fullname
| sort fullname

Source Code:
{
    "type": "ds.search",
    "options": {
        "queryParameters": {
            "earliest": "$SearchTimeLine.earliest$",
            "latest": "$SearchTimeLine.latest$"
        },
        "query": " | inputlookup system_users.csv\n | eval fullname= name_last.\", \".name_first\n | table fullname\n | sort fullname"
    },
    "name": "lookup_users"
}
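One note that may help, hedged since I can only go from the snippets above: a Dashboard Studio dropdown can take its display label and its token value from two different fields when the data source returns label and value columns, so a single dropdown can show "Lastname, Firstname" while the token carries the username. A sketch of the data source query (field names taken from your lookup):

| inputlookup system_users.csv
| eval label = name_last.", ".name_first
| eval value = username
| fields label value
| sort label

and of the input definition (input_user and username_tok are hypothetical names):

"input_user": {
    "type": "input.dropdown",
    "options": {
        "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
        "token": "username_tok"
    },
    "dataSources": { "primary": "lookup_users" },
    "title": "User"
}

The first table can then look the full name back up from $username_tok$, so both tables key off the same token.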
Hello everyone, I am hoping someone can help me out, as I have exhausted everything I can think of and cannot seem to get anything to work. Essentially, what I am looking to do is pull results to get a total based on an ID. The issue I am running into is that each ID will have between 1 and 4 events associated with it. These events are related to the status.

I only want to get the results for any ID that is Open or Escalated, but the query pulls all of the events, even for IDs whose status has since been changed to Closed or another status. I want to exclude all of the events for IDs whose status has changed to anything other than Open or Escalated. The other trouble I am running into is that this "status" event occurs in the metadata of the whole transaction. I have the majority of my query built out, but where I am struggling is removing the initial Open and Escalated events for the alerts whose status was changed. The field the status changes in is under "logs" and then "logs{}.action".
Splunk advises AGAINST sending syslog directly to a Splunk instance. The preferred practice is to send to a dedicated syslog server (rsyslog or syslog-ng) and forward to Splunk from there. Alternatively, you can use Splunk Connect for Syslog (SC4S).

You can use any amount of resources you wish. If there is a problem, however, Splunk Support may require that you meet the recommended hardware specifications before they provide further support.
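If you go the dedicated-syslog-server route, the forwarder side is just a file monitor. A minimal inputs.conf sketch for the UF, assuming rsyslog/syslog-ng writes per-host files under /var/log/syslog-feeds/<host>/ (the path, index, and sourcetype are placeholders):

[monitor:///var/log/syslog-feeds]
index = network_syslog
sourcetype = syslog
host_segment = 4

host_segment tells Splunk to take the host name from the fourth path segment instead of from the forwarder's own name, which is usually what you want when one syslog server collects for many devices.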
Hello Splunkers, I need some help understanding the minimum specs required for a Splunk Enterprise installation used as a heavy forwarder, where it will only receive logs from one source over syslog and forward them to the indexers.

Can I just use 2 CPUs, 8 GB RAM, and storage based on an estimate of the log file sizes? I'm asking because the official guide says the minimum is 12 GB RAM and a 4-core CPU. Please, if someone can advise on this. Thanking you in advance,

Moh....
Awesome!
https://ideas.splunk.com
Did.... did you..... did you use the waterfall, slow response times + errors, or call graph views? Those are all 100% intended to provide the information you asked for. They're just one click away in the default navigation tabs, which should make them fairly easy to check.....?
IDK if this will help you, but speaking as someone monitoring 468 Oracle RAC clusters: if it were me, I'd try the following three steps, possibly moving the password validation ahead of the others.

1) Disable the collector for a few minutes, then re-enable the DB collector. This is my go-to solution for messages like this.
2) Validate that the DB Monitoring agent acting as the middleman in all this is updated and compatible with your updated Controller.
3) While you're there, validate that you can establish a basic JDBC connection to the DB from the instance the DB Monitoring collector is running on, and that you can run basic topology queries with the AppD DB user the collector is using.

I'm just speaking as another user who's been through similar issues, so hopefully it helps!
9.3.2 does.
It might not be what you were thinking, but you can split your BTs by thread name in the instrumentation settings for business transactions.

This works for any existing BT. I've used it for several apps where I needed a per-thread confirmation or identification of the "flow" to correlate with individual processes, or in conjunction with other tools' additional insights where my AppD transaction ID isn't tag-n-traced through the third-party tools I have to use in unique cases where AppD can't cover those apps/tools.

I also use this method by "Thread ID" at times, which can be very valuable when attempting to determine whether a third party's app is ACTUALLY spreading load across its threads or operating in a single-threaded fashion. Used this way, this takes hours/days versus the months it would have taken to find in log analysis alone.

FYI: the above is just a "catch all for anything, split all the things" example. I would highly recommend you don't use this in prod, as it will likely impact your instances quite seriously. Instead, try to find your starting point by a given class + method if possible, and then split by thread ID or thread name.

IDK if that's exactly what you were looking for, but I hope it helps!
Thank you. Nothing useful in the logs, and I've already opened a ticket. I'll report back.