All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


It didn't work. I want to make sure that only the IDs found by search 1 are searched in search 2.

1. index=A title=AA
2. | append [| search index=B]
Hi @AMAN0113, on an HF an upgrade made via the GUI doesn't require a restart; otherwise you have to restart the forwarder. During a restart you don't lose any data, because logs are written by Linux to files that are read by the forwarder when it restarts; you'll only have a delay in indexing. Obviously scripts aren't executed during the restart, but they will be executed at the next scheduled time. Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi, Splunk Assist is producing a lot of execution errors in my search head cluster and on my intermediate forwarder. Since the first is part of a cluster, I thought of deploying the app.conf via the SHC deployer, but that would not be enough, since the docs say one must execute ./splunk disable app splunk_assist. The same applies to heavy forwarders connected to a deployment server. Sadly, the Splunk documentation on Splunk Assist doesn't seem to acknowledge that clustered environments exist. Kind Regards th
Why does SSL status show as "false" despite configuring SSL? In our environment we have enabled TLS between forwarders and receivers. The connection is established and we can see data coming into Splunk through the secure TLS channel. I have also validated it manually using the openssl client, and verification was successful with status ok. Yet for these hosts SSL shows as false, and it keeps flipping at random times between True and False; the connection type is cookedSSL for the False host. I have checked all the TcpOutputProc and TcpInputProc entries in the splunkd logs and cannot find any SSL-related errors, but I did find the WARN messages below on one of the forwarders. Is this causing the problem? Any leads on this?
Excellent, thanks for the diagnosis!  So, the ASCII order of the * enumeration ruined the conditions.  In my emulations, source series begin with a slash (/), which precedes T, but sourcetype series all begin with a lower-case letter, which succeeds T.  This explains why the two groupbys behave differently.

But I still need to null out Total.  So, a better (and yet simpler) approach is to place "Total" at the end of the enumeration, taking advantage of Splunk's globbing rule:

| addtotals
| delta "Total" as delta
| foreach * Total delta [eval <<FIELD>> = if(-delta > Total OR Total < 5000, null(), '<<FIELD>>')]
| fields - delta

Update: In the real world, it is often undesirable to use arbitrary thresholds like Total < 5000.  For this technique to work, I also need to make sure all other fields are nulled before delta.  So, I must expressly specify the order of these two fields in foreach.

Update 2: In addition to nullifying Total, I also need to remove delta.  So, my best approach is a hybrid of hacking the field name and ensuring order:

| addtotals
| delta "Total" as _delta
| foreach * Total [eval <<FIELD>> = if(-_delta > Total OR Total < 5000, null(), '<<FIELD>>')]

This way, field deletion is also unnecessary, because fields whose names begin with an underscore are hidden by default.  Thanks again, @bowesmana for the inspiration!
This solution is the one I have been looking for. Thank you.
@ITWhisperer You are a genius.
You can use a subsearch for title AA in index A to restrict to the desired ids, like

index=B [search index=A title=AA | dedup id | fields id]
| stats count

Hope this helps.
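Conceptually, the subsearch first collects the distinct ids matching title AA in index A, then the outer search counts events in index B with those ids. A minimal Python sketch of that logic, using synthetic in-memory events in place of real indexes (all data and field names here are hypothetical, mirroring the question's example):

```python
# Sketch of the SPL subsearch logic with synthetic data.
index_a = [
    {"title": "AA", "id": "111"},
    {"title": "CC", "id": "111"},
]
index_b = [
    {"title": "BB", "id": "111"},
]

# Inner subsearch: the distinct ids where title == "AA" in index A
# (equivalent to `search index=A title=AA | dedup id | fields id`).
wanted_ids = {e["id"] for e in index_a if e["title"] == "AA"}

# Outer search: count index B events whose id is in that set.
count = sum(1 for e in index_b if e["id"] in wanted_ids)
print(count)  # 1, matching the expected answer in the question
```

The key point is that the id from the CC row never enters the result: only rows matching both conditions of the inner search contribute ids to the outer lookup.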
Hi, In ITSI > Notable Event Aggregation Policies > Action Rules, "Run a script" can no longer be executed.

The work that triggered the event to occur:
- Splunk Core version upgrade (8.2.7 > 9.0.5.1)

Environment before the work:
- Splunk Core 8.2.7
- ITSI 4.11.6
- Configure Run a Script [File name] "patlite.sh RED" > Running enabled

Post-work environment:
- Splunk Core 9.0.5.1
- ITSI 4.11.6
- Configure Run a Script [File name] "patlite.sh RED" > Not working

Script deployment location: /opt/splunk/etc/apps/SA-ITOA/bin/scripts/patlite.sh

The ITSI version has not been changed, only the Splunk Core version, but is there some configuration change that needs to be made?
That generally means X is not X, i.e. if you put

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") domain="*X*"

do you get results?

If you run this as your first search:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
| eval trimmed_domain=trim(domain)
| eval bounded_domain=":".domain.":"
| stats count by domain trimmed_domain bounded_domain
| eval trimmed_equal_domain=if(trimmed_domain=domain, "YES", "NO")

you will see whether you have leading or trailing spaces around X: if trimmed_equal_domain is NO, then there are leading/trailing spaces. The bounded_domain makes it easier to see what's what by adding : before and after.
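The diagnostic above can be illustrated outside Splunk: an exact match fails while a grouped count succeeds whenever the stored value carries invisible whitespace. A small Python sketch (the trailing space in the sample value is hypothetical):

```python
# Demo of why domain="X" matches nothing while `stats count by domain`
# shows X: the stored value has invisible padding.
raw_domain = "X "  # hypothetical indexed value with a trailing space

trimmed = raw_domain.strip()          # analogue of SPL trim(domain)
bounded = ":" + raw_domain + ":"      # analogue of ":".domain.":"

print(repr(bounded))          # ':X :' -> the trailing space is now visible
print(trimmed == raw_domain)  # False  -> an exact match on "X" fails
```

Bounding the value with sentinel characters is the same trick as the SPL bounded_domain eval: it turns otherwise invisible whitespace into something you can see in the results table.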
Hi Splunkers, I have a question regarding the Splunk Observability heatmap chart. I'm wondering if it's possible to exclude or rename the n/a values on my panel. I think those are the stateless pods that are no longer sending namespace o

[screenshot of the plot and chart options]

Thanks
index  title  id
A      AA     111
A      CC     111
B      BB     111

If the index is A and the title is AA, I'm trying to find the id in index B and count how many there are. In the above example, the second row has title CC, so even though the id value is the same, it is not counted. There is one id 111 in index B, so the answer I want is 1. How do I write this query?
The first search query returns a count of 26 for domain X:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
| stats count by domain

But when I run the query below to see just the events corresponding to domain=X, I get zero events:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") domain="X"

Any clue why this might be happening?
Updated query. The time difference is coming out as "12/31/23 19:00:30:295":

index=web* "Message sent to Kafka" OR "Response received from Kafka"
| stats earliest(_time) as Msg_received, latest(_time) as Response_Kafka by Unique_ID
| eval difference=Response_Kafka-Msg_received
| eval difference=strftime(difference,"%d-%m-%Y %H:%M:%S")
| eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
| eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")
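The symptom in the updated query is a known pitfall: subtracting two epoch timestamps yields a duration in seconds, and passing a duration to a calendar formatter renders it as a date near the Unix epoch, which is exactly where values like "12/31/23 19:00:30" come from. A minimal Python sketch with hypothetical send/ack timestamps:

```python
from datetime import datetime

# Hypothetical send and ack times for one message ID.
msg_received   = datetime(2024, 1, 5, 10, 0, 0).timestamp()
response_kafka = datetime(2024, 1, 5, 10, 0, 30).timestamp()

# The difference is a duration in seconds, NOT a point in time.
difference = response_kafka - msg_received
print(difference)  # 30.0

# Formatting a 30-second duration as a calendar date interprets it as
# "30 seconds after the Unix epoch", rendered in the local timezone --
# the same effect as strftime(difference, ...) in the SPL query.
# A sensible duration format instead:
minutes, seconds = divmod(int(difference), 60)
print(f"{minutes:02d}:{seconds:02d}")  # 00:30
```

In SPL the analogous fix is to keep the difference as a number of seconds (or format it with a duration-oriented conversion such as tostring(difference, "duration")) rather than strftime.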
I am looking at logs for asynchronous calls (sending a msg and receiving an ack from Kafka). So we have 2 events: the first is when we receive the message, start processing, and send it to Kafka; the second is when we receive the response back from Kafka. I have a unique message ID to track both events. I want to capture the average processing time across all unique IDs. In the query below I have not yet added the condition for the unique ID, and I am not getting the "difference" value. Can you please help!

index=web* "Message sent to Kafka" OR "Response received from Kafka"
| stats earliest(_time) as Msg_received, latest(_time) as Response_Kafka
| eval difference=Response_Kafka-Msg_received
| eval difference=strftime(difference,"%d-%m-%Y %H:%M:%S")
| eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
| eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")
Currently, our company successfully collects most Microsoft 365 logs, but we are facing challenges gathering the security logs. We aim to comprehensively collect all security logs for Microsoft 365, encompassing elements such as Intune, Defender, and more. Could you please advise on how to effectively obtain all the Microsoft 365 security logs?
Your foreach eval statement is wrong; it should test for the Total and delta fields:

[eval <<FIELD>> = if("<<MATCHSTR>>"!="Total" AND "<<MATCHSTR>>"!="delta" AND -delta > Total OR Total < 5000, null(), '<<FIELD>>')]

You are not excluding the 'delta' and 'Total' fields from the eval, so Total is set to null() before you process the other fields, which breaks the eval for subsequent passes.
Hi @gcusello, Thanks for your inputs. A follow-up question: should I expect any data loss during the upgrade, or is the add-on capable of backfilling the data missed during the upgrade window? Also, do I need to restart Splunk on my server for the changes to take effect?
If the problem is solved, please select the answer and close.  Karma for all that helped advance the solution is also appreciated.
A JSON array must first be converted to a multivalue field before you can use mv-functions.  The classic method to do this is mvexpand together with spath.

| spath input=spec path=spec.containers{}
| mvexpand spec.containers{}
| spath input=spec.containers{}
| where privileged == "true"

With your sample data, the output looks like

name  privileged  spec.containers{}                      spec.field1  spec.field2  spec.field3
A     true        { "name": "A", "privileged": "true" }  X            Y            Z
C     true        { "name": "C", "privileged": "true" }  X            Y            Z

If the array is big and there are many events, mvexpand risks running out of memory.  So, Splunk 8 introduced a group of JSON functions.  The following is more memory efficient (and likely more efficient in general), but the output is multivalued.

| spath input=spec path=spec.containers{}
| fields - spec.containers{}.*
| eval privileged_containers = mvfilter(json_extract('spec.containers{}', "privileged") == "true")

Your sample data would give something like a single row with these multivalued fields:

privileged_containers =
  { "name": "A", "privileged": "true" }
  { "name": "C", "privileged": "true" }
spec.containers{} =
  { "name": "A", "privileged": "true" }
  { "name": "B" }
  { "name": "C", "privileged": "true" }
spec.field1 = X, spec.field2 = Y, spec.field3 = Z

BTW, please post JSON as raw text and make sure the format is compliant, with proper quotes etc., so volunteers don't have to waste time reconstructing data or worrying about noncompliant data.  Here is a compliant emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "{\"spec\": { \"field1\": \"X\", \"field2\": \"Y\", \"field3\": \"Z\", \"containers\": [ { \"name\": \"A\", \"privileged\": \"true\" }, { \"name\": \"B\" }, { \"name\": \"C\", \"privileged\": \"true\" } ] }}"
| spath
``` data emulation above ```

Hope this helps.
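For comparison, the mvfilter/json_extract approach corresponds to a plain list filter over the parsed array. A minimal Python sketch using the same emulated event as above (the structure is taken from the emulation; only the variable names are invented here):

```python
import json

# The emulated event from the answer above, as raw JSON.
raw = """{"spec": {"field1": "X", "field2": "Y", "field3": "Z",
          "containers": [{"name": "A", "privileged": "true"},
                         {"name": "B"},
                         {"name": "C", "privileged": "true"}]}}"""

event = json.loads(raw)

# Analogue of mvfilter(json_extract(..., "privileged") == "true"):
# keep only containers whose "privileged" key is the string "true".
# .get() handles containers missing the key, like {"name": "B"}.
privileged = [c for c in event["spec"]["containers"]
              if c.get("privileged") == "true"]

print([c["name"] for c in privileged])  # ['A', 'C']
```

Note that both here and in the SPL, the comparison is against the string "true", because that is how the flag is stored in the sample data; a boolean true in the JSON would need a different test.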