All Posts

Hi @splunklearner,
I usually avoid using the DS to manage the SHC Deployer and the Cluster Manager, even though it's possible to deploy apps to them. You have to create a deploymentclient.conf file specialized for the CM or the SHC Deployer, adding:

[deployment-client]
# NOTE: Because of a bug in the way the client works when installing apps
# outside of $SPLUNK_HOME/etc/apps, these apps aren't listed as "installed"
# by the deployment client, meaning that taking an app away from the cluster
# manager's serverclass won't remove it from the manager-apps directory. This
# would have to be done by hand. Updates to existing apps will transfer
# from the deployment server just fine, however.
repositoryLocation = $SPLUNK_HOME/etc/manager-apps
serverRepositoryLocationPolicy = rejectAlways

This way the DS deploys apps not into the $SPLUNK_HOME/etc/apps folder but into the folders of the CM (as in the example) or of the SHC Deployer. The real problem is how to run the push command: for the CM it's possible from the GUI, but it isn't for the SHC Deployer, so it's easier to use a script. And anyway, as @PickleRick also said, I'd avoid creating problems for myself!
Ciao.
Giuseppe
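The push step mentioned above can be scripted with the standard CLI commands; a minimal sketch (hostnames and credentials are placeholders):

```
# On the Cluster Manager, push the bundle the DS delivered to manager-apps:
splunk apply cluster-bundle --answer-yes -auth admin:changeme

# On the SHC Deployer, push the bundle the DS delivered to shcluster/apps,
# targeting any one search head cluster member:
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```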
We are facing the same challenge. Did you get a solution to this issue? I have upgraded to 9.2.3 recently.
Can the DS push apps to the Deployer directly, so that the Deployer then pushes them to the clustered SHs? And can you please explain how to push apps from the Deployer to the SHs in Splunk Web?
Ok. So these actually were different ids? Because of your anonymization they looked as if they were the same id. So you have only one event per id? And you want to exclude those that have action=closed or some other value? As far as I can see, your JSONs should parse so that you get a multivalued field some.path{}.log.action, am I right? If so, you can use normal field=value conditions. Just remember that with multivalued fields, key!="myvalue" matches an event where there is at least one value in the field key not equal to myvalue, whereas NOT key="myvalue" requires that none of the values in the field key match myvalue (or that the field is empty).
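The difference is easy to demonstrate with a self-contained search (the field and value names here are illustrative):

```
| makeresults
| eval key=split("open,close", ",")
| search key!="close"
```

This returns the event, because one of the two values ("open") is not equal to "close". Replacing the last line with | search NOT key="close" returns nothing, because one of the values does equal "close".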
Hi @avi123, I'm not 100% sure I understood the requirements, but I'm giving it a shot here. Let me know if this works for you:

| inputlookup Expiry_details_list.csv
| lookup SupportTeamEmails.csv Application_name OUTPUT Owner_Email_address Ops_Leads_Email_address Escalation_Contacts_Email_address
| eval Expiry_Date = strptime(Expiry_date, "%m/%d/%Y")
| eval Current_Time = now()
| eval Expiry_Date_Timestamp = strftime(Expiry_Date, "%Y/%m/%d %H:%M:%S")
| eval Days_until_expiry = round((Expiry_Date - Current_Time) / 86400, 0)
| eval alert_type = case(
    Days_until_expiry < 1, "Expired",
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    Days_until_expiry > 15, "Others",
    true(), "None")
| search alert_type != "None"
| eval email_list = case(
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    alert_type == "Expired", Escalation_Contacts_Email_address,
    true(), "None")
| eval cc_email_list = case(
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address),
    true(), "None")
| eval email_list = split(mvjoin(email_list, ","), ",")
| eval cc_email_list = split(mvjoin(cc_email_list, ","), ",")
| dedup Application_name Environment email_list
| eval email_recipient = mvdedup(email_list)
| eval email_recipient = mvjoin(email_recipient, ",")
| eval email_cc = mvdedup(cc_email_list)
| eval email_cc = mvjoin(email_cc, ",")
| table Application_name, Environment, Type, Sub_Type, Expiry_Date_Timestamp, Days_until_expiry, email_recipient, email_cc
| fields - alert_type, Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address

Note that the "Expired" branch has to come first in the case(), otherwise Days_until_expiry <= 7 would match the expired entries too.
Based on the docs, I can't tell if there's a functional difference between this:

[clustering]
multisite = true

[clustermanager:prod]
multisite = true

[clustermanager:dev]
multisite = false

and this:

[clustering]

[clustermanager:prod]
multisite = true

[clustermanager:dev]

in server.conf on the search heads.
Collect is very time-sensitive, as @gcusello pointed out. My search writing to an index with collect was working. I changed the _time=now() assignment to use a "now" value computed 14 eval statements earlier in the search, and it stopped writing to the index. After viewing this thread, I changed it back to these final three lines of the search, and now it successfully writes the results to the index every time:

| eval now=now()
| eval _time=now
| collect index=index output_format=raw spool=true source=yourSource sourcetype=stash
I don't need those, really. I only need the ones that have not been updated, so the status is still Open or Escalated, as I am trying to get a number for the volume of what is still outstanding. So yes, you are correct: these events are associated with the same ID, but there are different IDs. What I want is to exclude all of the IDs where the status has been updated and closed (I don't want them to show the open or escalated event).
Dedup is rarely the way to go. OK. So you have some JSON events (for future reference: it's better to copy-paste the _raw_ event into a code block or a preformatted-styled paragraph, not the rendered form from the Splunk UI). I assume all your events for a single alert share the id field, right? I'm not sure, however, what the relation between those two events is, since one seems to contain a subset of the data contained in the other. It looks a bit wasteful if you're not just logging changes in your alert's state but repeating the growing "history" with each subsequent event. What are you trying to get from those events, then?
The DS is not supposed to serve apps to clustered search heads directly. That's what the deployer is for. If by any chance you managed to get clustered SHs to pull apps directly from the DS, you're in for a possible load of problems.
We have a deployment server which deploys apps (containing configs) to a search head cluster (3 SHs). I am not sure whether the DS distributes apps directly to the SH members, or whether they are sent to the deployer and from there the deployer distributes them to the SH members? Please clarify. We have created a role in a DS app which restricts access to a specific index. When we try to push it, that role is not reflected on the SH members. But when we check the Deployer, the app is present under shcluster/apps and the role is updated there; it is just not showing in the SH UI. What is the problem? Do we need to manually push the config from the Deployer to the SH members every time? We have deployer_push_mode=merge_to_default configured on the Deployer. Does that mean distribution is automated? If not, how do we push config from the Deployer to the SH members through Splunk Web? We don't have access to the backend server to run CLI commands.
Yes, of course. Here is my example data. The first event is the alert when it first comes in, and the second is after it has been reviewed and closed.

Alert Received:

  alert_type: search query
  asset: { [+] }
  asset_term: null
  content_created_at: 2017-01-10T11:00:00+00:00
  escalated: false
  id: XXXXXX112
  last_modified: 2025-01-09T16:33:37Z
  logs: [
    { action: open, detail: , id: , subject: , timestamp: 2025-01-09T16:33:37+00:00 }
    { action: modify tags, detail: , id: , subject: , timestamp: 2025-01-09T16:33:37+00:00 }
  ]
  metadata: { [+] }
  network: domains
  severity: 4
  status: Open
  timestamp: 2025-01-09T16:33:37+00:00
  timestamp_modify_tags: 2025-01-09T16:33:37+00:00

Alert Closed:

  alert_type: search query
  asset: { [+] }
  asset_term: null
  content_created_at: 2017-01-10T11:00:00+00:00
  escalated: false
  id: XXXXXX112
  last_modified: 2025-01-09T17:10:52Z
  logs: [
    { action: close, detail: , id: , subject: , timestamp: 2025-01-09T17:10:52+00:00 }
    { action: modify notes, detail: , id: , subject: , timestamp: 2025-01-09T17:10:48+00:00 }
    { action: assign, detail: , id: , timestamp: 2025-01-09T17:09:25+00:00 }
    { action: open, actor: , detail: , id: , subject: , timestamp: 2025-01-09T16:33:37+00:00 }
    { [+] }
  ]
  metadata: { [+] }
  network: domain
  severity: 4
  status: Closed
  timestamp: 2025-01-09T16:33:37+00:00
  timestamp_modify_notes: 2025-01-09T17:10:48+00:00
  timestamp_modify_tags: 2025-01-09T16:33:37+00:00

I tried initially to just dedup, but that was before I knew it was pulling in multiple events. Since then I have tried the following:

1. I tried doing an mvindex on the status events, but it was still pulling in all of the events.

2. I then tried doing latest(status), but realized that was only going to pull in what the actual latest status of the ID was and would still include all of the events.

3. I also tried doing a subsearch per some guidance from a colleague that ended up looking like the following:

| dedup id
| where [ search index=source | stats latest(status) as latest_status by id | where latest_status="closed" | return $id ]

4. Lastly, I tried going at the metadata, which looked like the following:

| dedup id
| fields id
| format
| rex mode=sed field=search "s/ / OR /g"
| eval search="NOT (id IN (" + search + "))"
| fields search
| format "" "" "" "" "" "" "search"

and turned into this:

[ | search index="source" NOT (status="Open" OR status="Escalated")
  | stats count by id
  | fields id
  | format "" "" "" "" "," "OR" "id!="
  | rex mode=sed field=search "s/^(.*)$/NOT (id IN (\1))/" ]
As @richgalloway said, don't use Splunk to terminate a syslog feed. When you use a real syslog server, it's better to use a UF instead of an HF to forward the events. Or use SC4S, especially if you don't have experience running a syslog server.
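A rough sketch of the UF approach (the paths, index, and sourcetype are examples, not from the thread): let a syslog daemon such as rsyslog write per-host files, then have the UF monitor them.

```
# inputs.conf on the UF; assumes the syslog daemon writes
# /var/log/remote-syslog/<host>/messages.log
[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
index = network
host_segment = 4
```

Here host_segment = 4 tells the UF to take the fourth path segment (the per-host directory name) as the host field.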
Can you show some sample events and SPL what you have tried?
Hi, when you are using stats it removes all other fields. Basically you have two options to do this: use timechart together with trendline, or use streamstats with the window parameter. r. Ismo
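For example, a minimal sketch of the timechart + trendline option, using the field name from the question:

```
<your base search>
| timechart span=1mon dc(userSesnId) as count_of_user_sessions
| trendline sma4(count_of_user_sessions) as "Moving Average"
| rename count_of_user_sessions as "Distinct Count of User Sessions"
```

Note that trendline has to reference a field that still exists after timechart, which is why the moving average is computed from count_of_user_sessions rather than from an earlier stats result.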
Hi all, I got this fixed with simple logic using the set diff command. Thanks, everyone!
I've got to be close, but I'm having issues trying to figure out how to get a distinct count of user sessions to show up in a bar chart with a trendline. I'd like to see a distinct count of users for the last year by month and have a trendline added.

<My Search>
| stats dc(userSesnId) as moving_avg
| timechart span=30d dc(userSesnId) as count_of_user_sessions
| trendline sma4(moving_avg) as "Moving Average"
| rename count_of_user_sessions AS "Distinct Count of User Sessions"
I know, an old question, but your idea actually works. The first part of the subsearch, up to "fields - ...", simply builds a table I use for field renaming, so that users only need to edit a lookup to rename fields:

| makeresults
| eval field1="some value", field2="another value"
| rename
    [| makeresults
     | eval mapping="field1:field_one field2:field_two"
     | makemv delim=" " mapping
     | mvexpand mapping
     | rex field=mapping "(?<orig>[^:]+):(?<new>.*)"
     | fields - _time, mapping
     | eval rename_phrase=orig + " as " + "\"" + new + "\""
     | stats values(rename_phrase) as rename_phrases
     | eval search=mvjoin(rename_phrases, ", ")
     | fields search]

But the subsearch can only build the arguments; the rename command itself must be in the base search. Maybe of use for somebody out there.
Working on a dashboard in Dashboard Studio to display data in two different tables using a single dropdown. The issue I have is that all my data is keyed by the "username" field, but I want the dropdown to display the user as "Lastname, Firstname" for better readability.

The first table pulls records from a lookup table with user demographics and such. The second table pulls the respective Windows log data tracking various user activity.

In my dropdown, I am currently using the lookup table and an eval to join "name_last" and "name_first" into a "fullname" variable and display the user as "Lastname, Firstname". I then used "fullname" as the pass-on token for my first table. However, for my second table I need "username" as the token, because the data I am querying only has the "username" in the logs, not the user's first or last name as in my first table.

My question is: can I set my dropdown to display "name_last, name_first" but set the token value to "username"? Or can I assign multiple tokens in an SPL query in Dashboard Studio to use in the respective tables? Or can I do both, for the sake of knowledge? Here is what I am working with; I appreciate any assistance.

Lookup table:

Name: system_users.csv
Fields: username, name_last, name_first, ...

Dashboard dropdown field values:

Data source name: lookup_users

SPL query:

| inputlookup bpn_system_users.csv
| eval fullname = name_last.", ".name_first
| table fullname
| sort fullname

Source code:

{
  "type": "ds.search",
  "options": {
    "queryParameters": {
      "earliest": "$SearchTimeLine.earliest$",
      "latest": "$SearchTimeLine.latest$"
    },
    "query": " | inputlookup system_users.csv\n | eval fullname= name_last.\", \".name_first\n | table fullname\n | sort fullname"
  },
  "name": "lookup_users"
}
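One way to get both behaviors is to have the dropdown's data source return a display field and a value field, so the input shows the name while the token carries the username. A sketch of the data-source SPL (recent Dashboard Studio versions can map label/value columns to a dropdown via dynamic options such as "items": ">frame(label, value) | prepareItems()"; check the docs for your version):

```
| inputlookup system_users.csv
| eval label = name_last.", ".name_first
| rename username as value
| table label value
| sort label
```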
Hello everyone, I am hoping someone can help me out, as I have exhausted everything I can think of and cannot seem to get anything to work. Essentially, what I am looking to do is pull results to get a total based off an ID. The issue I am running into is that each ID will have between 1 and 4 events associated with it. These events are related to the status. I only want results for IDs that are Open or Escalated, but the query is pulling all of the events, even for IDs whose status has since changed to Closed or another status. I want to exclude all of the events for IDs that have had their status changed to anything other than Open or Escalated. The other trouble I am running into is that this "status" event occurs in the metadata of the whole transaction. I have the majority of my query built out, but where I am struggling is removing the initial Open and Escalated events for the alerts whose status was changed. The field the status changes in is under "logs", and then "logs{}.action".
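One common pattern for this kind of "exclude ids that were ever closed" requirement is a NOT [subsearch]. A hedged sketch (the index, field, and value names are taken from this thread and may need adjusting to the real data):

```
index=source status IN ("Open", "Escalated")
    NOT [ search index=source "logs{}.action"=close
          | stats count by id
          | fields id ]
| stats latest(status) as status by id
```

The subsearch returns the list of ids that ever had a close action, and the outer NOT drops every event for those ids before the totals are computed.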