All Topics

Hi, I have a clustered Splunk environment. How can I find all reports and alerts that have an email address configured? I need to correct the email domains, and I haven't found a reliable way to list every report with its email recipients. Is there a search query or another specific method to find this out?
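One common approach is to query the saved-searches REST endpoint and filter on the email action; a minimal sketch (action.email and action.email.to are standard saved-search attributes, but verify the output in your environment):

| rest splunk_server=local /servicesNS/-/-/saved/searches
| search action.email=1
| table title eai:acl.app action.email.to
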
Hi, I have configured an alert to send an email when my query returns more than 0 search results. The alert works, but if the search returns 3 results I get 3 separate emails, and the screenshot shows all 3 being triggered at the same time. I want all of these results in one email alert. Can someone please help me get all the search results of a single alert run into one email? Also, my alert trigger conditions include an Expires setting (see screenshot); does that mean my Splunk alert expires after 24 hours? If so, how can I change the settings so the alert keeps working indefinitely? If I need to stop the alert, I will disable it. Thanks in advance, Swetha. G
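The per-result emails usually mean the trigger is set to "For each result"; switching it to "Once" (digest mode) sends a single email containing all results. The Expires setting only controls how long a triggered-alert record stays on the Triggered Alerts page, not how long the alert itself remains active. A savedsearches.conf sketch with a hypothetical stanza name:

# hypothetical stanza name; digest mode puts all results in one notification
[my_alert]
alert.digest_mode = 1
counttype = number of events
relation = greater than
quantity = 0
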
Can someone help me check whether there is any way to stop the scheduled excess-bucket removal, or anything that could speed up the task? I am trying to perform a data rebalance on the cluster, and it seems to be blocked by the ongoing excess-bucket removal.
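For reference, the rebalance itself can be checked and stopped from the cluster manager CLI; a sketch, assuming a standard indexer cluster (verify against the docs for your version before running):

# on the cluster manager
splunk rebalance cluster-data -action status
splunk rebalance cluster-data -action stop
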
I have 2 search queries to get the Windows shutdown list from a lookup file, but when I run them I get different host lists for the same time period. Can you please suggest the best query to get the shutdown hosts from the lookup file?

1.
index=* host=* sourcetype=XmlWinEventLog* (EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008)
| join type=inner host [| inputlookup Windows.csv ]
| stats count by host
| dedup host

2.
index=* sourcetype=XmlWinEventLog* EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008 [| inputlookup Windows.csv | return 1000 host]
| stats count by host
| where count > 1
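One likely source of the difference: in query 2 the EventCodes are not parenthesized, so the implicit AND/OR grouping differs from query 1, and query 1's join is subject to subsearch result limits. A join-free sketch that keeps only hosts from the lookup (assuming the lookup's host column matches the event host field):

index=* sourcetype=XmlWinEventLog* (EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008)
    [| inputlookup Windows.csv | fields host ]
| stats count by host
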
Hi, as mentioned in the subject, I want to perform a simple subtraction on the individual values/elements within a multivalue field. For example, I have an MV field:

23 79 45 23 38

I will iterate over the items and perform the subtraction to get the following:

Iteration 1: diff = 0 (since it's the first element)
Iteration 2: diff = abs(79 - 23)
Iteration 3: diff = abs(45 - 79)
Iteration 4: diff = abs(23 - 45)
Iteration 5: diff = abs(38 - 23)

So far, here's what I did (dummy query):

| makeresults
| eval a1="23,79,45,23,29"
| makemv delim="," a1
| mvexpand a1
| eval start=0
| eval i=0
| foreach a1
    [ eval tmp='<<FIELD>>'
    | eval diff=if(start==0, 0, "subtract_element[i+1]_with_subtract_element[i]")
    | eval i=i+1
    | eval start=1 ]

I haven't implemented the subtraction logic yet since, obviously, I am having a challenging time doing it. I've spent almost 3 hours experimenting, but no luck. Another weird thing (for me), which I can't explain, is why the variable "i" is not incrementing even though I'm updating it within the foreach block. Hope my question makes sense, and as usual, thank you very much in advance. Any ideas are greatly appreciated.
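In SPL, eval runs once per event and foreach iterates over fields within a single event, so a counter like i cannot persist across rows; streamstats is the row-aware tool for "previous value" logic. A minimal sketch that produces the diffs described above:

| makeresults
| eval a1="23,79,45,23,38"
| makemv delim="," a1
| mvexpand a1
| streamstats current=f window=1 last(a1) as prev
| eval diff=if(isnull(prev), 0, abs(a1 - prev))
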
We have a number of saved searches (configured as alerts) that use custom search commands we wrote. Those custom commands can fail to execute, which causes the saved search to fail with an error message that you can see if you run the search in the Splunk UI. We would like an alert to be triggered when this happens to a saved search, but this type of trigger is not an option in the alert configuration. Is there any way to get an alert triggered when one of our saved searches fails with an error?

PS: we are running our app on a multi-tenant platform, so we do not have access to the internal logs and thus cannot run a search like:

index=_internal sourcetype=scheduler status!=success | table _time search_type status user app savedsearch_name
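If your role can read the search-jobs REST endpoint, one hedged workaround is to poll recent jobs for failures (field names come from the /search/jobs endpoint; whether it is reachable on a multi-tenant platform is an open question):

| rest /servicesNS/-/-/search/jobs
| search isFailed=1
| table label title dispatchState isFailed
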
Hi all, I am trying to highlight a particular column if the values in that column are not all the same. For instance, say I have a table such as the following:

User    Place     Version
Mark    London    1
Sally   US        2
Will    Africa    2

I would specifically like to highlight the Version column red because the values in that column are not all the same (they are not all 2). I understand the first step is to turn the Version column into a multivalue field and add "RED" to each cell along that column, but I am unsure how to do that. I'd imagine the logic is something like: if the count of unique Version values is greater than 1, then add RED. But again, I am not sure how to implement something like that. Any help would be hugely appreciated!
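On the SPL side, eventstats can compute the distinct count of Version across all rows, and mvappend can add the "RED" flag for the colour-formatting JS/CSS to key on; a minimal sketch (your base search goes first):

| eventstats dc(Version) as version_dc
| eval Version=if(version_dc > 1, mvappend(Version, "RED"), Version)
| fields - version_dc
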
Hi Splunkers. I'm looking for a way to delete a correlation search that was created with the wrong name (ES doesn't let you rename them). The correlation search is currently disabled, but I don't see a way to actually delete it. I see some previous answers here, but they involve direct access to the .conf file, which isn't an option in this particular environment, so I'm looking for a way to do this from the search head. Thanks.
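Since correlation searches are stored as saved searches, one hedged option is to delete the underlying saved search over the search head's REST API; a sketch with hypothetical names (the owning app is often SplunkEnterpriseSecuritySuite or a DA-ESS app, and the search name must be URL-encoded):

# hypothetical app and search name; URL-encode the name
curl -k -u admin https://localhost:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Wrong%20Name -X DELETE
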
Splunking legends... how good is Splunk: we are on Splunk Cloud and I want to create a dashboard that looks at the status of inputs and reports on any that are disabled... shouldn't be hard, but after hours of research... meh. Help me Splunky-won-knobi, you're my only hope.
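A hedged starting point is the inputs REST endpoints, if your Splunk Cloud role can reach them (each input type has its own endpoint, and REST access is restricted on some Cloud stacks); for example, for monitor inputs:

| rest /servicesNS/-/-/data/inputs/monitor
| search disabled=1
| table title eai:acl.app disabled
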
Hello All, Nessus keeps throwing the error that "/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json" exposes critical information to unauthenticated scans, but the test is flawed: it actually runs an authenticated scan, so it fails because the data is returned when authenticated. We need a clean Nessus scan result, and I managed to make the following changes to restmap.conf:

[admin:server-info]
requireAuthentication = true
acceptFrom = "127.0.0.1"

[admin:server-info-alias]
requireAuthentication = true
acceptFrom = "127.0.0.1"

This basically means that even if you are authenticated, you get Forbidden when you visit "/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json". This works great, but a side effect is that I can no longer view some UI pages, for example the Users page. I have to remove the 127.0.0.1 line to view those UI elements. Does anyone know how I can specifically block "/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json" without blocking other pages like Users? This is just to get the Nessus scan to pass.
Hi, I have the below output from my search:

base search | stats count by User, action

User    action      count
Alex    install     3
Alex    uninstall   5

But I would like the table to display as below:

User    install    uninstall    Total
Alex    3          5            8

Please let me know if this is possible.
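This is a pivot plus a row total: chart does the pivot and addtotals appends the Total column. A minimal sketch:

base search
| chart count over User by action
| addtotals fieldname=Total
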
Splunk IMAP is indexing multiple emails with different subjects and senders, but the same timestamp, into one event. What should I check? I'm at a total loss.
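Identically timestamped messages being merged into one event is usually a line-merging issue, so the event-breaking settings for the sourcetype are worth checking first; a hedged props.conf sketch (the sourcetype name and the message-boundary regex are assumptions about your IMAP input):

# props.conf sketch only; adjust the stanza and boundary to your data
[imap]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=From:\s)
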
Hello fellow Splunkers, my team has recently implemented the MLTK to track outliers and deviations in network events across several devices. Although I didn't set up the MLTK myself, it is running a query over 5-minute intervals to allow analysts to quickly scope deviations from the baseline (upperbound, etc.). All of this is completely fine; however, when we invoke a notable event in ES, we are left with 24 iterations of the notable event (the MLTK requires a 2-hour interval to create a new baseline), each notable representing a 5-minute interval. I was wondering if there is any method to group or cluster these notables into a single notable event. We are currently throttling the notable to 1 invocation per hour, but this is obviously not a permanent solution, as it can cause us to miss alerts that fire within an hour of the previous iteration. Any insight into this would be extremely helpful. Thanks!
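One hedged option is field-based suppression on the correlation search, so throttling applies per entity rather than globally; a savedsearches.conf sketch (the stanza and field names are hypothetical, and the 2h window mirrors the baseline interval):

# hypothetical stanza; pick fields that identify the entity being baselined
[my_mltk_correlation]
alert.suppress = 1
alert.suppress.fields = src,signature
alert.suppress.period = 2h
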
Hello friends, I am facing issues that may be caused by inputs.conf, but I am not able to get to the bottom of the problem.

When I search index=network "*f5*", two sourcetypes come back, each with a certain event count:
f5:bigip:syslog
f5:bigip:asm:syslog

When I search index=network sourcetype="*f5*", I get these sourcetypes instead, each with a certain event count:
f5:bigip:ltm:tcl:error
f5:bigip:syslog

But when I search index=network sourcetype="f5:bigip:asm:syslog", no events are found.

I am collecting the data from the F5 on a heavy forwarder and collecting it on the indexer from the HF. The add-on is installed, but I am not sure whether it's configured. inputs.conf contains an entry for f5:bigip:syslog. Can someone please help? Regards, Gaurav @prakash007 @renjith_nair @jbsplunk
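Note that index=network "*f5*" matches the raw event text, not the sourcetype, which alone can explain the first discrepancy. A quick way to see which sourcetypes are actually indexed, independent of search-time extractions, is tstats:

| tstats count where index=network sourcetype=f5* by sourcetype
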
Would someone be able to help me understand how to do this? I would like to modify a built-in dashboard in the InfoSec app to exclude a specific source IP address. The default search the dashboard uses is below:

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks where * IDS_Attacks.severity="*" by IDS_Attacks.signature, IDS_Attacks.severity
| rename "IDS_Attacks.*" as "*"
| sort severity

Currently, that dashboard visual is full of events from my vulnerability scanner running its scans.
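A hedged tweak is to add an exclusion on the data model's source field inside the where clause; a sketch (10.0.0.5 is a placeholder for your scanner's address, and IDS_Attacks.src is the source field in the Intrusion Detection data model):

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks where IDS_Attacks.src!="10.0.0.5" IDS_Attacks.severity="*" by IDS_Attacks.signature, IDS_Attacks.severity
| rename "IDS_Attacks.*" as "*"
| sort severity
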
Every month when software updates go out, my Enterprise deployment exceeds its license: I get overloaded with Event Code 4663. After the first time, I just added it to the blacklist in inputs.conf, and problem solved. I'd like to leave that EventCode active and only disable it while the majority of systems are updating. I know I can do this manually, but I'm trying to find out whether there is a way to automatically enable the blacklist based on the date, or to set a trigger based on a specific series of event codes that indicate software updates. If anyone has tried this before, I'm very curious whether there's a solution.
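There is no native scheduled toggle for inputs.conf, but one hedged pattern is a cron job on the deployment server that swaps in a patch-week inputs.conf and reloads; a sketch with hypothetical paths and app names:

#!/bin/bash
# swap in the 4663 blacklist for patch week, then push it to clients
APP=/opt/splunk/etc/deployment-apps/wineventlog_inputs   # hypothetical app
cp "$APP/local/inputs.conf.patchweek" "$APP/local/inputs.conf"
/opt/splunk/bin/splunk reload deploy-server
# a mirror cron entry would restore the normal inputs.conf after patch week
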
So, here is an issue: I can't find some services (e.g., service x, service y, service z) under the field service_name in the Splunk itsi_summary index, but the corresponding service_ids are there in itsi_summary. However, when I look for those services in the lookup service_kpi_lookup, I do find them under the title field.

When I do a simple search,
index=itsi_summary | stats count by serviceid
I get 1029 rows; but when I do
index=itsi_summary | stats count by service_name
I get 1024 rows. Furthermore, if I do
| inputlookup service_kpi_lookup | stats count by title
I get 1029 rows.

So, there seems to be something broken in whatever populates the service_name field in itsi_summary. Can anyone help me with this? I need to understand how this service_name field is getting populated.
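To narrow down which services are affected, a minimal sketch that lists serviceids in itsi_summary that never carry a service_name:

index=itsi_summary
| stats values(service_name) as service_name by serviceid
| where isnull(service_name)
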
Hello, I'm working on a new script to install Splunk via bash. Before accepting the license and starting Splunk (no prompt, answering yes), I'm creating the user-seed.conf file in system/local:

# create admin account
cd /opt/splunk/etc/system/local/
touch user-seed.conf
echo "[user_info]" >> user-seed.conf
echo "USERNAME = admin" >> user-seed.conf
echo "HASHED_PASSWORD = <hashed pass>" >> user-seed.conf

However, after

/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt

I go back to look for user-seed.conf and it no longer exists. I'm also removing any etc/passwd file before starting. When Splunk starts with the hashed password in user-seed.conf, does that file get deleted or moved? Maybe I'm going about this the wrong way; is there a better way to do this? Thanks for the thoughts! Todd
Hello All. I am trying to use a lookup to perform a tstats search against a data model, where I want multiple search terms for the same field. However, I cannot get this to work as desired. I have an example below to show what is happening and what I'm trying to achieve.

I have a lookup file named search_terms.csv:

process_exec    process     process
someexe.exe     *param1*    *param2*

Given this lookup file, here is the expanded search I am trying to achieve:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND Processes.process=*param1* AND Processes.process=*param2*) by Processes.dest

Here is the first search I tried:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where [ | inputlookup search_terms.csv | fields process_exec process | rename process_exec AS Processes.process_exec | rename process AS Processes.process ] by Processes.dest

However, expanding this search leads to the second process column being ignored:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND Processes.process=*param1*) by Processes.dest

Since this did not work, I tried editing the lookup file to look like this:

process_exec    process
someexe.exe     *param1*|||*param2*

Then I used makemv to make the process field multivalue:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where [ | inputlookup search_terms.csv | fields process_exec process | makemv delim="|||" process | rename process_exec AS Processes.process_exec | rename process AS Processes.process ] by Processes.dest

However, expanding this search leads to an "OR" being used for the two process values instead of an "AND":

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND (Processes.process=*param1* OR Processes.process=*param2*)) by Processes.dest

Does anyone know of a method to create a search using a lookup that would lead to my desired search of:

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where (Processes.process_exec=someexe.exe AND Processes.process=*param1* AND Processes.process=*param2*) by Processes.dest
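One hedged workaround, building on your "|||" delimiter variant: have the subsearch return a field literally named search, whose value is inserted verbatim into the outer search, and build the AND clause with replace(); a sketch (untested against your data model):

| tstats summariesonly=false allow_old_summaries=true count from datamodel=Endpoint.Processes where
    [| inputlookup search_terms.csv
    | eval search="(Processes.process_exec=" . process_exec . " AND Processes.process=" . replace(process, "\|\|\|", " AND Processes.process=") . ")"
    | fields search ]
by Processes.dest
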
Hi. I have a problem with strptime. I try to convert a date with datee1=strptime('datee', "%d-%b-%y"), but it doesn't work with some dates. Example:

datee           datee1
31-ago-16
13-feb-19       1550026800.000000

When I overwrite 31-ago-16 with 13-feb-19, it works! I don't understand. My source is a lookup file.
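The likely cause is that %b only matches the month abbreviations your system locale knows, typically the English ones: "feb" parses, but "ago" (agosto) does not. A minimal sketch that maps the Spanish abbreviations that differ from English before parsing:

| eval datee_en=replace(datee, "ene", "jan")
| eval datee_en=replace(datee_en, "abr", "apr")
| eval datee_en=replace(datee_en, "ago", "aug")
| eval datee_en=replace(datee_en, "dic", "dec")
| eval datee1=strptime(datee_en, "%d-%b-%y")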