All Topics

Hello all, I want to list my search results by "A" only when the total count of "B" is higher than 3. "A" and "B" can hold variable values; that doesn't matter. My search: search | stats count(B) by A,B |sort -A |where B>3
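A possible fix (a sketch, using the field names from the question): `stats count(B)` produces a field literally named `count(B)`, and `where B>3` compares the value of B rather than the count. Renaming the count and filtering on it may give the intended result:

```
search
| stats count as B_count by A, B
| where B_count > 3
| sort - A
```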
So we have a task to find all the hosts in our Splunk Enterprise deployment, take that list, and determine what type of logs we are getting from those hosts. How can we do that easily?
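One common approach (a sketch) is to query the indexed metadata with tstats, which lists every host along with the sourcetypes it sends, without scanning raw events:

```
| tstats count where index=* by host, sourcetype
| sort host
```

Restricting `index=*` to specific indexes, or adding `_time` to the by-clause, narrows the picture further.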
Hi, we are trying to upgrade Splunk Enterprise from 7.3.1 to 8.1.5. What should the first activities be?
1. Which major files do we need to back up?
2. Should we upgrade the least impacted items first?
3. Of the search heads, monitoring console, indexers, and deployment server, what needs to be updated first?
4. Can we stop all indexers at the same time during the upgrade, or would that cause any impact?
5. What about upgrading the forwarders?
I would like to ask about the line of code we put in the message field of the Splunk Alert Action for Slack Notification:

$result.users$$result.message$

Here is a screenshot of the Send Message plugin details that we set up in a test channel. [screenshot omitted]

Beginning last week, all of a sudden, the alert began displaying unexpected output in Slack instead of the usual results, which would read like @yoshilog "Good day.. <Blah, blah>". What we did was update the code to add a whitespace between the two result tokens:

$result.users$ $result.message$

Doing so fixed the results and led to the expected output in our Slack test channel: @yoshilog "Good day.. <Blah, blah>". However, within the team there were some questions about what had changed in the past week that suddenly caused the alert to stop posting the expected output in Slack, since no one had changed or touched the alert for a long time. I have also gotten in touch with the plugin developer, but he has not responded, so I resorted to posting here, since some Splunkers might have had experience with this issue. Would appreciate your ideas re: what happened. Thank you in advance!
I have learned this is very important in making sure you can recover in case of a big disaster. It is a safety net for your saved searches, event types, tags, lookups, reports, and all your customizations. I work in a large environment including Splunk Enterprise and ES. Any planning advice / SPL is much appreciated. Thanks a ton!
I know this is a niche and rookie question, but maybe someone out there can provide some guidance. I'm quite new to Splunk. I have practiced inputting data and working with it in Fundamentals 1, but I believe inputting other types of data and working with them will help me learn. I'm enjoying learning Splunk, but I lack a lot of experience in data analytics and don't know where to start looking for good practice data. I don't expect many people to have practice data readily available; even so, thank you for hearing me out.
I am getting an "Indexes missing" error from the cluster master under Messages. I need to learn how many indexes are missing and what happened to them, and, if they were deleted, by whom. Thanks a million in advance.
Hi, we are using Splunk version 8.1.0 in cluster mode. My environment has these components:
- Nginx load balancer: load-balances requests to the search heads
- 3 search heads and 1 deployer: in cluster mode
- 3 indexers and 1 master node: in cluster mode
- 2 heavy forwarders: standalone, forwarding data with load balancing between the indexers
- 1 syslog server: receives syslog from 100 servers and sends it via ipvsadm (port 514/udp) to the heavy forwarders

All Splunk servers run CentOS 7, all are in the same network zone, and we ingest almost 300 GB of data per day.

Server specifications:
- Search heads: 32 GB RAM, 32-core CPU
- Indexers: 32 GB RAM, 16-core CPU
- Heavy forwarders: 12 GB RAM, 12-core CPU
- Syslog server: 12 GB RAM, 12-core CPU

We have a problem with real-time search. We have a lot of dashboards with multiple searches in them; after a random interval with a dashboard open (about 1 to 120 seconds) we get an error. Here is the description of the error:

[<indexer hostname>] Timed out waiting for peer <indexer hostname>:ingest_pipe=1. Search results might be incomplete! If this occurs frequently, receiveTimeout in distsearch.conf might need to be increased

We don't have any resource problems such as high CPU utilization, and no lack of memory either. This error happens even though we have another instance, with one indexer and one search head in a non-clustered environment handling the same traffic, that has no problems at all; that old instance runs Splunk 6.6.1.

What I have done so far:
- Increased the receiveTimeout parameter on the search heads (though I suspect this is not the real problem)
- Increased parallelIngestionPipelines on the indexers to 2
- Tuned the OS as recommended on the Splunk site
- Increased max_searches_per_cpu to 15
- and more

But the problem is not solved.
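For reference, the timeout that the error message points at lives in distsearch.conf on the search heads. A sketch of the stanza (the value of 600 seconds is an illustration, not a recommendation):

```
# distsearch.conf on each search head (value is in seconds)
[distributedSearch]
receiveTimeout = 600
```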
Hi, I have 2 queries:

index="bar_*" sourcetype=foo crm="ser" | dedup uid | stats count as TotalCount

and

index="bar_*" sourcetype=foo crm="ser" jet="fas" | dedup uid | stats count as TotalFalseCount

I need both of these queries merged, and then to take "TotalCount" and "TotalFalseCount" and compute: ActualPercent = (TotalFalseCount/TotalCount)*100. I created one query as below:

index="bar_*" sourcetype=foo crm="ser"
| dedup uid
| stats count as TotalCount by zerocode SubType
| appendcols
    [search index="bar_*" sourcetype=foo crm="ser" jet="fas"
    | dedup uid
    | stats count as TotalFalseCount by zerocode SubType]
| eval Percent=(TotalFalseCount/TotalCount)*100
| stats count by zerocode SubType Percent

but the value of "Percent" is completely wrong. Can anybody help me understand how to get the proper value of "Percent" in the above case?
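A sketch of one way to avoid appendcols entirely (field names taken from the question): appendcols pairs rows by position, so the zerocode/SubType groups from the two searches can misalign. Computing both counts in a single stats pass with an eval'd count keeps each row consistent:

```
index="bar_*" sourcetype=foo crm="ser"
| dedup uid
| stats count as TotalCount, count(eval(jet="fas")) as TotalFalseCount by zerocode, SubType
| eval Percent=round(TotalFalseCount/TotalCount*100, 2)
```

One caveat: with dedup before stats, if uid values repeat with different jet values, the counts reflect only the first event kept per uid.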
I have this SPL:

| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta by source, Notable_Events_Meta.rule_id
| `drop_dm_object_name("Notable_Events_Meta")`
| `get_correlations`
| join rule_id
    [| from inputlookup:incident_review_lookup
    | eval _time=time
    | stats earliest(_time) as review_time by rule_id]
| eval ttt=review_time-_time
| stats count, avg(ttt) as avg_ttt, max(ttt) as max_ttt by rule_name
| sort - avg_ttt
| `uptime2string(avg_ttt, avg_ttt)`
| `uptime2string(max_ttt, max_ttt)`
| rename *_ttt* as *(time_to_triage)*
| fields - *_dec

It should display the mean time to triage for 14 days, but it doesn't work for a 14-day range while it does work for 30 days. Any advice?
Hi, we had a user named user1. After some days the user became hidden in Settings > Users, but the user can still log in to Splunk Web, and we can't find the user in the user list.
Hi, I have two values in a field named "Online Booking" and want to display them in tabular format. I tried the eval and stats commands, but they are not giving the expected result.

Question: I need the total count to find out the option to book a table. [screenshot omitted] And the output that I am expecting is: [screenshot omitted]

Please let me know which command needs to be used, and also reference books for learning Splunk development; I am completely new to Splunk.
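A guess at what may be wanted (a sketch; without the screenshots, the exact layout is an assumption): counting events per value of a field usually takes a single stats, with the field name quoted because it contains a space:

```
... base search ...
| stats count by "Online Booking"
```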
Hi all! I've populated a table dynamically, based on a drop-down a user can manipulate, using:

| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check]

Additionally, by varying order = #, I can turn each column into a drop-down using:

| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check] | eval value= [| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check] | transpose | fields column | streamstats count(column) as order | where order = 2 | return $column] | fields value | table value | stats values(value) as value | mvexpand value

The issue I'm running into now is pairing the drop-down, which has a set token of "filter1", run within:

| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check] | search [| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check] | head 1 | table * | transpose | head 1 | return $column] = $filter1$

The token is not functioning and will not populate in the query, while other tokens will. I've also confirmed the query works, just not when used with tokens within a dashboard.

To summarize, the overall goal is to provide 4-6 drop-downs and a table. Based on the initial user request, the table would populate with data, and each drop-down would then be able to further filter the data, with the field names and values all varying. If something isn't clear, please ask and I'd be glad to clarify; I've been banging my head against this one for a while, lol.
Hello Team, hope you are doing well. I really need your support with an issue I have experienced: logs are not being received from syslog sender devices into our Splunk instance. Logs were received before, but today no logs are coming in.

I have checked the Splunk forwarder and found it is running, and splunkd is also running. But I also found an error, although I don't know whether it is the root cause of this matter. Below is what I found when I checked the status. Even when I do systemctl restart splunk-suf.service it doesn't work; it still gives me a failed status:

bash-4.2$ systemctl status splunk-suf.service
* splunk-suf.service - splunk Universal Forwarder service
Loaded: loaded (/etc/systemd/system/splunk-suf.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sat 2021-09-25 11:28:14 CAT; 3min 3s ago
Process: 58723 ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd --accept-license --no-prompt --answer-yes (code=exited, status=1/FAILURE)
Main PID: 58723 (code=exited, status=1/FAILURE)

Kindly help me with how I may solve this issue, and share the troubleshooting CLI commands to check why the receiving Splunk instance is not receiving logs. I also want to check whether the firewall is blocking anything; which commands should I use for that? Any other advice that may help me resolve this is welcome.

My OS is CentOS, and the receiver is Splunk Enterprise. Thank you in advance.
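One place to start (a sketch, assuming the forwarder's internal logs still reach the indexers; replace the placeholder hostname): search the _internal index from the receiving side for errors reported by that forwarder, and see which components are failing:

```
index=_internal host=<forwarder_hostname> log_level=ERROR
| stats count by component
```

If nothing at all comes back for that host, the forwarder's own log at /opt/splunkforwarder/var/log/splunk/splunkd.log is the next place to look, since the systemd status above shows the process exiting before it can forward anything.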
I have a macro that adds a backslash to an existing backslash:

[backslash(1)]
args = arg
definition = replace("$arg$", "(\\\\)", "\\\\\\\\")
iseval = 1

This works:

index=perfmon counter=`backslash(\processor)`

This fails when the arg has spaces:

index=perfmon counter=`backslash("\processor time")`

The expanded search string is:

(counter=\\processor index=perfmon time)

How do I get index=perfmon counter="\\processor time"? Oh please show me my stupidity, as I have been banging my head on the desk for hours...
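One possible direction (a sketch, not tested): the surrounding quotes are consumed when the macro expands, which is why the term splits on the space. Since the macro is eval-based, the definition can concatenate quotes back onto the result with the eval string operator:

```
[backslash(1)]
args = arg
definition = "\"" . replace("$arg$", "(\\\\)", "\\\\\\\\") . "\""
iseval = 1
```

With the argument re-quoted, the expansion should stay a single quoted term, e.g. counter="\\processor time", though the bare-argument form `backslash(\processor)` would then also come out quoted.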
We upgraded some apps and add-ons the other day, and ever since then we're running into issues with searches breaking and getting error messages stating: "Problem replicating config (bundle) to search peer '<X.X.X.X:8089>', Upload bundle="E:\Splunk\var\run\<bundle_id>.bundle" to peer name=<indexer> uri=https://X.X.X.X:8089 failed; http_status=400 http_description="Failed to untar the bundle="E:\Splunk\var\run\searchpeers\<bundle_id>.bundle". This could be due Search Head attempting to upload the same bundle again after a timeout. Check for sendRcvTimeout message in splunkd.log, consider increasing it." We increased sendRcvTimeout in distsearch.conf on our search heads from the default of 60 to 300, but we're still getting this, and most of our searches come back with 0 results.

Has anyone come across this before? I haven't really seen other posts with that "failed to untar" error message. I looked through some of the add-ons and apps that are included in the bundles, but I didn't see any really large lookup .csv files or files in the /bin directory. (I did see one add-on mentioned in splunkd.log that said "File length is greater than 260, File creation may fail", followed by the untar immediately failing; I am investigating this further.) Our architecture includes a search head deployer with 4 SHs, an index cluster master with 4 indexers, and a deployment server. Thank you for any advice!
Hi Splunkers, can you please share the procedure to upgrade the Microsoft Azure Add-on for Splunk from 2.x to 3.x? We have upgraded our Splunk version to 8.1. We tried to find the procedure on the Splunk site but could not find it. Please assist us. Thanks & Regards, Abhijeet B.
index=test sourcetype=test_access tag=prod server_name!="www.test.com" earliest=-4h latest=now
| timechart eval(avg(request_time)*1000) as "Today"
| appendcols
    [search index=test sourcetype=test_access tag=prod server_name!="www.test.com" earliest=-7d-4h latest=-7d
    | timechart eval(avg(request_time)*1000) as "LW1"]
| appendcols
    [search index=test sourcetype=test_access tag=prod server_name!="www.test.com" earliest=-14d-4h latest=-14d
    | timechart eval(avg(request_time)*1000) as "LW2"]
Hi folks, I am creating a Splunk dashboard and have some questions regarding the multiselect input.

1. I want to add a special option `all`; when the user selects it, it means all options are selected. So I added a static option `all`, but the user can select both `all` and other options at the same time, which looks odd. My first question is: how do I make the `all` option either exclusive of the other options, or make selecting `all` automatically select all the other options (except `all` itself)?

2. I am currently using the multiselect input in a `WHERE` clause, `| where $multiselect_roles$`; with the current configuration of the multiselect [screenshot omitted], the interpolated clause looks like `| where role_name="value1" OR role_name="value2"`. My second question is: when `all` is selected, how can I either omit the whole `WHERE` clause, or make it trivial, meaning the `WHERE` clause is there but doesn't actually filter anything? I tried giving the `all` option an empty string, and a `*`, but neither works.

3. When populating the dynamic options of the multiselect from a query, I want to reference other inputs as query parameters. For example, I already have an input whose token name is `environment` and another time range input, and I want to get only the distinct values of a column for the given environment and time range, like this:

`from_index_distapps` sourcetype="xyz" "request=" earliest=$time_range$
| rex field=message "request=\"(?<arjson>[^}]+})"
| eval arjson=replace(arjson, "\\\\\"", "\"")
| spath input=arjson
| where environment=$environment$
| table role_name
| dedup role_name

How do I correctly reference other inputs here?
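For question 2, one commonly used workaround (a sketch; token and field names taken from the question): give the All choice the value `*` and switch the clause from `| where` to `| search`, since search, unlike where, treats `*` as a wildcard. Selecting All then expands to `| search role_name="*"`, which matches every event that has a role_name:

```xml
<input type="multiselect" token="multiselect_roles">
  <label>Roles</label>
  <choice value="*">All</choice>
  <valuePrefix>role_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <!-- dynamic choices from the populating search go here -->
</input>
```

The prefix/suffix/delimiter wrap each selected value, so the other selections still expand to `role_name="value1" OR role_name="value2"` as before.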
Hello, I have the query:

hostalias=$hostname$ AND actor AND total | timechart span=1s count by actor | stats

This returns the stats for all the actors in a single row, but I wanted a table where each row indicates a specific actor and the resulting max/avg/p50/p99 statistics for that actor. Something like below:

Actor      | max | avg | p50 | p99
actorName1 |     |     |     |
actorName2 |     |     |     |
actorName3 |     |     |     |

I tried the following query, but nothing was returned:

hostalias=$hostname$ AND actor AND total
| timechart span=1s count as TPS
| stats max(TPS) as maxTPS avg(TPS) as avgTPS p50(TPS) as p50TPS p99(TPS) as p99TPS by actor

I had something similar working before, but there was no timechart involved. Is this possible to do with timechart? Thanks for any insights.
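A sketch of one way to get per-actor rows (field and token names from the question): timechart pivots actors into columns, so after it runs there is no actor field left for the follow-on `stats ... by actor` to group on. Binning _time manually keeps actor as a regular field:

```
hostalias=$hostname$ AND actor AND total
| bin _time span=1s
| stats count as TPS by _time, actor
| stats max(TPS) as max, avg(TPS) as avg, perc50(TPS) as p50, perc99(TPS) as p99 by actor
```

The first stats computes per-second counts per actor; the second collapses those into one row per actor with the max/avg/p50/p99 columns.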