
All Posts

Hello All, I have just upgraded Splunk from ver 9.0.1 to 9.2.1. I have one question: the "Apps" panel on the left side of the window has a "white" background. Version 9.0.1 and older had a "dark" or "black" background (my preferred view). Is there a way to set the background for the Apps panel to dark or black? Thanks, Eric W.
In my environment, unless something catastrophic happens, there will always be data from critical and non-critical pods being ingested into this index/sourcetype. I don't have an environment to test the addition you suggested to account for a period of time where no "non-critical" pods are reporting, but as soon as I do I will test your updated query. Removing the dedup from your original suggestion seems to have cleared up the weird issue I was seeing, where I was getting anomalous results in the timechart. This seems to be working now, which is great. Huge thanks!!

index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| append [inputlookup pod_list where importance = non-critical | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| where sourcetype == "kubectl"
| timechart span=1h@h values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| eval missing = mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all)))
| where isnotnull(missing)
| timechart span=1h@h count by missing

Is there an easy way, instead of having an individual line for each "missing" pod, to either have a single line with the total count of missing "non-critical" pods, and possibly also have two lines for "critical" and "non-critical"? I guess I'm also looking for a timechart summary of the total count of missing non-critical and critical pods. Hope that makes sense.
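In case it helps while you test: one possible way to collapse the per-pod lines into a single total, building on the query above, is to count the values in the multivalue missing field per hourly row instead of splitting by pod name. A sketch, not tested against your data (total_missing is just an assumed name):

index=abc sourcetype=kubectl importance=non-critical
```...same pipeline as above, through | where isnotnull(missing)...```
| eval total_missing = mvcount(missing) ```number of missing pods in this hourly bucket```
| timechart span=1h@h sum(total_missing) as total_missing

Getting separate critical and non-critical lines the same way would need the importance value carried through the lookup/append steps so the final timechart can group by it.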
Currently working on deploying Splunk on AWS to work in conjunction with our current on-prem solution, and I have 2 questions. Can I configure our AWS search heads to function as normal search heads AND as search peers for our on-prem solution? Or would I need dedicated search peers? And would I be able to place the search peers behind an NLB and point the on-prem distsearch.conf file to that NLB? Or would I have to hardcode the instances in the distsearch.conf file?
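For reference, the hardcoding option looks roughly like this in distsearch.conf (host names here are hypothetical; whether an NLB in front of search peers works is a separate question, since search heads track each peer individually):

# distsearch.conf on the on-prem search head (hypothetical host names)
[distributedSearch]
servers = https://aws-peer1.example.com:8089,https://aws-peer2.example.com:8089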
Actually, the further I review this, the more confused I get.   In your example, why did you split makes and models?  Is it necessary to append data from one sourcetype to the other?  I assume so, otherwise the where command would be invalid. You're right, though.  The last three commands are key to the search.  As powerful as Splunk is, I'd sure think there's a much simpler process to search multiple sourcetypes with conditions applied.  (There truly is no comparison between the two, but I could create this query using Access in about 30 seconds.  However, the amount of data I'm searching is far too large for Access...) Thanks for any feedback you can provide.
Try this rex command. It extracts the individual fields directly.

| makeresults
| eval test="ton-o-mete_r v4.pdf"
| rex field=test max_match=0 "(?<temp>[^\-_\.\(\),;\s]+)"
Amazing! Appreciate the straight-up regex, too. This was the way I thought I'd have to do it, but didn't realize it 'exact matched' all three keys (all 3 have to match to be accepted). Thank you!!!
What is your full current search?
| eval test_data_index=mvfind(split("bl01,bl02,bl03,0_Ref_res", ","), Test_Data)
| eventstats max(test_data_index) as max_test_data_index by Identity
| where test_data_index = max_test_data_index
It is not clear what your use case is - how can you have events which don't have fields called PROD* and also don't have fields not called PROD*? Please clarify what you are trying to do.
Hello, I think I'm not that far off; unfortunately, I still cannot figure out how to extract the field PS_command from the inputlookup, pass it into the main search, and eventually map it to the Message field from the index. Could you please build a little more on the answers? Thanks again!
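In case it helps to see the shape of it: one common pattern is a subsearch over the lookup that renames PS_command to the field you want to match (the index and lookup names here are placeholders, and this assumes Message contains the command exactly rather than as a substring):

index=your_index [| inputlookup your_lookup | fields PS_command | rename PS_command as Message]

The subsearch expands into (Message="..." OR Message="...") built from the lookup values.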
@ITWhisperer , Yes occasionally the list changes. So I thought of saving the list as a macro or something. How can I achieve this?
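For what it's worth, a search macro would do this; a minimal sketch with a hypothetical macro name, defined under Settings > Advanced search > Search macros (or in macros.conf):

# macros.conf (hypothetical stanza; note the quotes are part of the definition,
# so the macro expands to a quoted string)
[test_data_list]
definition = "bl01,bl02,bl03,0_Ref_res"

Then in the search:

| eval test_data_index=mvfind(split(`test_data_list`, ","), Test_Data)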
Hi, We are using Splunk Cloud, so we can't access the conf files. In one of our custom source types, we need to create multiple new fields. Those fields are calculated recursively, meaning Eval2 calls the result of Eval1, then Eval3 calls the result of Eval2... Here are some examples of our Eval fields:

EVAL-url_primaire_apache=if(match(url, "/"), mvindex(split(url, "/"), 0), url) ```if there is a (/) character, we only keep the first part before the first (/); if not, we use the full url field```
EVAL-url_primaire_apache_sans_ports=if(match(url_primaire_apache, ":"), mvindex(split(url_primaire_apache, ":"), 0), url_primaire_apache) ```We use the result from the previous Eval to extract only the first part before ":" or the full previous result```

Now the issue is that only the first field is generated. I think that might be expected, since Evals are done in parallel. I tried to create an alias on the result of the first Eval and then call it in the second Eval like this:

FIELDALIAS-url_primaire_apache_alias1=url_primaire_apache AS url_p_a
EVAL-url_primaire_apache_sans_ports=if(match(url_p_a, ":"), mvindex(split(url_p_a, ":"), 0), url_p_a)

However, this still doesn't work, since only the first Eval field is created. Neither the alias nor the second Eval is created. What am I missing? How can we create Eval fields recursively?
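Calculated fields (EVAL-) in props.conf are evaluated independently, so one cannot reference another; the usual workaround is to nest the logic into a single EVAL. A sketch based on the two expressions above (the match() guards become unnecessary, because split() on a string that does not contain the delimiter simply returns the whole string):

EVAL-url_primaire_apache = mvindex(split(url, "/"), 0)
EVAL-url_primaire_apache_sans_ports = mvindex(split(mvindex(split(url, "/"), 0), ":"), 0)

Both now derive directly from url, so they no longer depend on each other's evaluation order.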
Perhaps start here, there's a lot of good information here on certs.  https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS 
Could we get some additional information in our Google Chat Splunk alert? For now I am only able to find a way to put $name$ in the message text, but is there a way to add additional information so we can display some of the search query details, like the sample below?

Splunk Alert: "Splunk Alert name"
Status: <status code>
Resource: <resource>
logs: https://...
Splunk results: https://...
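Not specific to the Google Chat action, but Splunk alert actions generally support tokens such as $result.fieldname$ (field values from the first result row) and $results_link$ alongside $name$; whether this particular integration passes them all through is an assumption to verify. A sketch of a message template under that assumption (status and resource assume your search returns fields with those names):

Splunk Alert: $name$
Status: $result.status$
Resource: $result.resource$
Splunk results: $results_link$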
Does anyone have a thorough explanation of the certs in Splunk? And why they are all different yet the same? Can I use the same cert for all situations? Here's a table: https://docs.splunk.com/Documentation/Splunk/9.2.1/CommonCriteria/Commoncriteriainstallationandconfigurationoverview#List_of_certificates_and_keys   These tables aren't very specific, and Splunk generated different certs for each one. I need to use company-specific certs, and am a bit confused on which ones can be the same and which ones can't...
Verify the IP field does not have any null values, because events where a group-by field is null are dropped from the results.
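If nulls are expected, one sketch of a workaround is to fill them before grouping (the placeholder value is arbitrary):

| fillnull value="unknown" ip ```give null ip values a placeholder so those events survive the split-by```
| timechart count by ip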
Sorry about that, I should have been more clear. We are using Splunk Cloud, so we would be looking for index deletions via the Web GUI (Settings-->Indexes-->Actions-->Delete).  I can see the removeIndex action being taken in the _internal index - ideally there would be a log linking the index deletion to the user account.  Do we really need to pull in browser request data in order to audit actions that occur within Splunk?  Sorry if I'm missing something here.
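One place to look for the user link, as a sketch under the assumption that the deletion goes through the UI and is recorded in the splunkd UI access logs (the sourcetype is real, but the method/uri filters and field names here are assumptions to adapt):

index=_internal sourcetype=splunkd_ui_access method=DELETE uri=*data/indexes*
| table _time user uri status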
Hi, here are the instructions for what you MUST fulfil when you are installing (aka uploading) apps to Splunk Cloud.
https://dev.splunk.com/enterprise/docs/releaseapps/manageprivatecloud/
https://dev.splunk.com/enterprise/docs/releaseapps/cloudvetting/
r. Ismo
I suggest doing it in just the opposite order. First mount your old Splunk installation on /opt/splunk, and after that install Splunk from the rpm to update the rpm registry and ensure that the correct files are in place. You should also use the same Splunk rpm version as you have on the old node. After you have verified that it works, then it's time to upgrade.
Thanks, it does help, but when I try to put it in a column chart it does not display anything except the field names _time and ip. Am I doing something wrong? Thanks!