All Posts


By "weekly data" data, do you mean daily data for the week? If so, you need to timechart by day and set your time period to be the week | timechart span=1d sum(abc) by xyz  
You can't "add case-insensitivity". Field names are case-sensitive in Splunk by design (so Dev and DEV are two different fields and you can have both of them in your event). You can try some ugly ha... See more...
You can't "add case-insensitivity". Field names are case-sensitive in Splunk by design (so Dev and DEV are two different fields and you can have both of them in your event). You can try some ugly hacks to "normalize" field case like | foreach * [ | eval field=lower("<<FIELD>>") | eval {field}=<<FIELD>> | eval <<FIELD>>=null() | eval field=null() ] But this is really, really ugly. And if you have two similar but differently-cased field names, only one of them will be retained, other one(s) will be overwritten. Of course you can "limit" this to just some pattern by doing conditional evals (but it gets even uglier than this because you have to add an if matching the field name to most of those evals so it's getting really spaghetti).
Hi @jacknguyen, what do you mean by backup log? On an indexer you have hot/warm and cold buckets, which you have to store on a filesystem (not NFS and not shared), each in a different location. If you want to have a copy of your data, you can do it, but what would be the purpose? You can have a backup, but usually only warm and cold buckets are under a backup policy, because to back up hot buckets (which continuously change) you need to stop Splunk. Ciao. Giuseppe
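For reference, a minimal indexes.conf sketch of how hot/warm and cold bucket locations are typically separated (the index name and paths here are placeholders, not something from this thread):
[my_logs]
# hot and warm buckets
homePath = $SPLUNK_DB/my_logs/db
# cold buckets - typically what a backup policy targets
coldPath = $SPLUNK_DB/my_logs/colddb
# restored (thawed) buckets
thawedPath = $SPLUNK_DB/my_logs/thaweddb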
@yuanliu
1. My task is to calculate the number of events with the value "FAILED" in the "RESULT" key. It looks like this and it works (thanks to you guys!): `index="myIndex" sourcetype="mySourceType" | foreach "*DEV*" "UAT*" [| eval keep=if(isnotnull('<<FIELD>>'), 1, keep)] | where keep==1 | stats count(eval('RESULT'=="FAILED")) as FAILS | stats values(FAILS)` It would be superb to add case-insensitivity, so that both `DEV` and `Develop` are included in the result.
2. Many thanks, `foreach "*DEV*" "UAT*"` works like a charm!
What if I save the backup log from only one indexer? Will it work for both indexers?
Hi expert, My SPL looks something like:
index=<> sourcetype::<> | <do some usual data manipulation> | timechart min(free) AS min_free span=1d limit=bottom1 usenull=f BY hostname | filldown
What I want to achieve is displaying the outcome as a Single Value visualisation with a sparkline. My expectation is to see the very last and smallest min_free value for the selected time span, with the hostname that has the smallest min_free shown in the same visual. However, I get a different outcome. The BY split appears to group data by hostname first and then applies the min_free value as a secondary sort criterion. The following is what I get:
When I modify the SPL timechart to limit=bottom2, I get the following.
What I want, with a slightly modified SPL (limit=bottom1 useother=f), is to only display the circled middle one, with the Single Value showing both the latest smallest min_free and hostname values. How can I achieve this? Thanks, MCW
I want to show weekly data in a trend; it should not add up to a single total. Right now I am using the below query, but it is showing the overall count for the week: | timechart span=1w@w7 sum(abc) by xyz @splunk
1. how can I make foreach condition case-insensitive, so events with both `Production` and `PRODUCTION` fields are found?
2. how can I make foreach search with an OR condition, e.g. `foreach "*DEV*" OR "*UAT*"`?
Back to my previous comment. My exact words are "depending on what you want to do with the keys and values." If you don't tell us what your end goal is, it is not really an answerable question. Like I exemplified, you can do a lot without having to resort to cumbersome foreach subsearches if you have some simple goals. For example,
| stats values(PROD*) as PROD* values(Prod*) as Prod*
| timechart dc(PROD*) as PROD* dc(Prod*) as Prod* by somekey
and so on. Like @PickleRick points out, you could be giving yourself a harder time than necessary. Because Splunk is optimized for time series, there are more row-oriented (value-oriented) manipulations than operations on column names or keys. Instead of using foreach as a simple filter - which is quite wasteful and offers none of the performance benefit of a simple filter in the index search - if there are only a handful of variants of these field names, it is perhaps more profitable to simply enumerate them in the command line.
index=myindex sourcetype=mysourcetype ("PROD deploy"=* OR "PROD update"=* OR "PROD error"=* OR "Production warning"=*)
This is important for performance because it reduces the number of events. Additionally, is this separation between Prod and Dev going somewhere, or are they really just used in two different outputs? The answer to the question about OR in foreach is that you don't need any. Simply do, for example, foreach DEV* Dev* UAT*. Again, is there a need to put a wildcard in front of Prod and Dev?
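Putting those points together with the earlier foreach filter, a rough sketch (index, sourcetype, and field patterns are only illustrative placeholders):
index=myIndex sourcetype=mySourceType
``` no OR needed in foreach: just list the patterns, including the case variants you expect ```
| foreach "*DEV*" "*Dev*" "*UAT*" "*Uat*"
    [ eval keep=if(isnotnull('<<FIELD>>'), 1, keep) ]
| where keep==1
| stats count(eval('RESULT'=="FAILED")) as FAILS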
Hi @emily12234 , open a case with Splunk Support, or, otherwise, submit a new add-on, indicating in its information that it replaces the old archived add-on. Ciao. Giuseppe
Hi @splunky_diamond , there's one general answer to all your questions: it depends on your internal procedures (or playbooks); in other words, it depends on how you work. Answering your questions:
1) taking charge of a notable is usually the first action, so I have always seen investigations started after a SOC analyst took charge of one or more Notables (often several Notables are taken in charge and associated with an investigation as a block).
2) usually I have seen SOC Analysts change the status on their Notables by themselves.
3) as I said, it depends on your internal procedures; anyway, the closing is tracked.
Ciao. Giuseppe
We have an app input config monitor containing wildcards, with a whitelist configured to pick up only .log and .out files. There are about 120 log files matching the whitelist regex. All the logfiles are ingesting fine except for 1 specific logfile that seems unable to continue ingestion after log rotation. crcSalt and initCrcLength are already defined as below:
initCrcLength = 1048576
crcSalt = <SOURCE>
In splunkd.log, the below event can be found:
05-15-2024 00:32:57.332 -0400 INFO WatchedFile [16425 tailreader0] - Logfile truncated while open, original pathname file='/xxx/catalina-.out', will begin reading from start.
Is 120 log files on 1 input too many for Splunk to handle? How can we resolve this issue?
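For context, a sketch of what such a monitor stanza might look like (the path, index, and sourcetype are placeholders, not the actual app config):
[monitor:///xxx/*]
# pick up only .log and .out files
whitelist = \.(log|out)$
# hash a larger initial block and include the source path in the CRC
initCrcLength = 1048576
crcSalt = <SOURCE>
index = my_index
sourcetype = my_sourcetype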
Hi @LearningGuy , yes, both solutions are internal lookups. They are almost equally fast: if you have few rows (hundreds up to a few thousand), you can use a CSV; if you have more rows, KV Store is better. In addition, KV Store is preferable if you need a key in your lookup, e.g. for tracking. Ciao. Giuseppe
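As a rough illustration of the two options (all names here are placeholders):
# transforms.conf - CSV-backed lookup
[my_csv_lookup]
filename = my_lookup.csv
# collections.conf - KV Store collection
[my_collection]
field.host = string
field.owner = string
# transforms.conf - KV Store-backed lookup
[my_kv_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, host, owner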
Hi @dm1 , I'd follow the first solution, in three steps:
install a new Splunk instance on AWS, possibly the same version that you have on-premises, not a newer one,
configure the new Splunk as an SH connected to your Indexers,
copy the above folders to the AWS Splunk,
then, if needed, update your Splunk and app versions.
You don't need to copy the binaries or the libraries, which are always the same; you only need to copy the configurations that you made in your on-premises installation. Ciao. Giuseppe
Removing the dedup from your original suggestion seems to have cleared up the weird issue i was I should have noticed that dedup counters your goal. (I copied from your original illustration without considering implications in time interval.) You are correct, this is one more reason you don't want to throw dedup around.
Is there an easy way to instead of having an individual line for each "missing" pod, to either have a single line with the total count of "non-critical" pods and possibly also have two lines for "critical" and "non-critical"?
First, let's clarify that your goal is to count the number of missing pod groups by importance. Something like this should do:
index=abc sourcetype=kubectl
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| where sourcetype == "kubectl"
| bin span=1h@h _time
| stats values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all by importance _time
| append
    [ inputlookup pod_list
    | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all by importance
| eval missing = if(isnull(pod_name_all), pod_name_all, mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all))))
| where isnotnull(missing)
| timechart span=1m@m dc(missing) by importance
Here is an emulation.
| makeresults format=csv data="_time, pod_name, importance
10,apache-12, critical
22,apache-2, critical
34,kakfa-8, critical
80,superapp-13, critical
88,someapp-6
160,grafana-backup-11
166,apache-4, critical
168,kafka-6, critical
566,apache-4, critical
568,kafka-6, critical
174,someapp-2
250,grafana-backup-6
374,anotherapp-10"
| fillnull importance value=non-critical
| eval _time = now() - _time
| eval sourcetype = "kubectl"
| eval pod_name_lookup = replace(pod_name, "\d+", "*")
``` the above emulates
index=abc sourcetype=kubectl
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| dedup pod_name ```
| where sourcetype == "kubectl"
| bin span=1m@m _time
| stats values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all by importance _time
| append
    [makeresults format=csv data="namespace, pod_name_lookup, importance
ns1, kafka-*, critical
ns1, apache-*, critical
ns2, grafana-backup-*, non-critical
ns2, someapp-*, non-critical"
``` subsearch thus far emulates
| inputlookup pod_list ```
| rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all by importance
| eval missing = if(isnull(pod_name_all), pod_name_all, mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all))))
| where isnotnull(missing)
| timechart span=1m@m dc(missing) by importance
Hello dm1, Were you able to migrate the on-premises Search Head to AWS? If so, can you please share the steps/process you followed for the migration. Thanks
I am in a bit of a fix right now and am getting the below error when I try to add a new input to Splunk using this document: https://docs.splunk.com/Documentation/AddOns/released/AWS/Setuptheadd-on
Note: The Splunk instance is in a different account from the S3 bucket.
Error response received from the server: Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied". See splunkd.log/python.log for more details.
I have created an AWS role to allow the user residing in the account where my S3 bucket is, and the permissions are like below.
Trust relationship:
The user has S3 full access and an AssumeRole policy attached to it.
Splunk config: the IAM role still shows as undiscovered.
Are there any changes required at the Splunk instance level in the other account so that it could access the policy? TIA for your help!
Hi @gcusello , Thank you for your feedback. I got it working as expected. BR
Hello Splunkers! I am learning Splunk, but I've never deployed or worked with Splunk ES in a production environment, especially in a SOC.
As you know, we have notables and investigations in ES, and for both of them we can change the status to indicate whether the investigation is in progress or not, but I am not quite sure how a SOC actually uses these features. That's why I have a couple of questions about that.
1) Do analysts always start an investigation when they are about to handle a notable in the incident review tab? Probably the first thing analysts do is change the status from new to "in progress" and assign the event to themselves, to indicate that they are handling the notable, but do they also start a new investigation or add it to an existing one, or can an analyst handle the notable without adding it to an existing investigation or starting a new one?
2) When a notable has been added to an investigation, what do analysts do once they figure out the disposition (complete their investigation)? Do they merely change the status by editing the investigation and the notable in their associated tabs? Do they always put their conclusions about an incident in the comment section, as described in this article: The Five Step SOC Analyst Method. This 5-step security analysis… | by Tyler Wall | Medium?
3) Does a first-level SOC analyst directly set the status to "closed" when the notable/investigation is completed, or do they always have to set it to "resolved" for confirmation by their more experienced colleagues?
I hope my questions are clear. Thanks for taking the time to read my post and reply to it.
Please tell me how to make the output replace some characters in the field definitions. Specifically, the problem is that the following two formats of MAC address are mixed across the multiple logs imported into Splunk:
AA:BB:CC:00:11:22
AA-BB-CC-00-11-22
I would like to unify the MacAddress field in the logs into the form "AA:BB:CC:00:11:22" in advance, because I would like to link the host name from the MAC address in the automatic lookup table definition. I can put the following in the search bar and output the modified value as "MacAddr":
index="Log" | rex "^.+?\scli\s\}?(?<CL_MacAddr>.+?(.+?))\)" | eval MacAddr = replace(CL_MacAddr,"-",":")
Alternatively, we could replace the existing field "CL_MacAddr" with a modified version as follows:
index="Log" | rex mode=sed field="CL_MacAddr" "s/-/:/g"
I am trying to set this up in the GUI's field extraction and field transformation so that the modified value is always present, but it does not work. Or can it be set directly in transforms.conf? In that case, what values can be set, and where? I know this is basic, but I would appreciate your help. Thank you in advance.
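If search-time normalization is acceptable, one possible approach (a sketch only; the sourcetype name is a placeholder, and this assumes CL_MacAddr is already being extracted) is a calculated field in props.conf:
# props.conf on the search head - sourcetype name is a placeholder
[my_sourcetype]
# calculated field: normalized copy of CL_MacAddr with "-" replaced by ":"
EVAL-MacAddr = replace(CL_MacAddr, "-", ":")
The automatic lookup could then be defined against MacAddr instead of CL_MacAddr.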
Update: it actually did work! I just opened the dashboard in a search and the time-picker is indeed applied.