All Posts

How to convert GMT to JKT time in Splunk events using a query
@yuanliu  Or you can use the _ field prefix to hide it from the foreach, i.e.

| addtotals fieldname=_T
| delta "_T" as _delta
| foreach * [eval <<FIELD>> = if(-'_delta' > '_T' OR '_T' < 5000, null(), '<<FIELD>>')]
``` To show that the _ fields are present ```
| eval y=_T, x=_delta
Hi, I'm trying to integrate the Tanium module with Splunk over HTTP, but I don't see exactly what we need to add in the URL or in the headers on the Splunk side. Can anyone help me, with the data masked?
@smanojkumar  It looks like you have a custom dashboard, so you can apply CSS by adding it inside an HTML panel. Sample code (here I'm changing the background):

<dashboard version="1.1">
  <label>Demo</label>
  <row>
    <panel>
      <html depends="$alwaysHideCSSStyle$">
        <style>
          .dashboard-body {
            background: #1E93C6 !important;
          }
          .dashboard-header h2 {
            color: #ffffff !important;
          }
        </style>
      </html>
    </panel>
  </row>
</dashboard>

To identify the element, you can use your browser's inspector tool. Check this link for instructions: https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-update-panel-color-in-Splunk-using-CSS/m-p/364590/highlight/true#M23796

For further help with the CSS, share sample code so we can replicate it in our instance.

I hope this will help you.
Thanks
KV
If any of my replies help you to solve the problem or gain knowledge, an upvote would be appreciated.
Hi @yuvrajsharma_13,
for the difference you have to use the tostring function (https://docs.splunk.com/Documentation/SCS/current/SearchReference/ConversionFunctions#tostring.28.26lt.3Bvalue.26gt.3B.2C_.26lt.3Bformat.26gt.3B.29) and not strftime, which is used to format dates, so please try this:

index=web* "Message sent to Kafka" OR "Response received from Kafka"
| stats earliest(_time) as Msg_received, latest(_time) as Response_Kafka by Unique_ID
| eval difference=tostring(Response_Kafka-Msg_received,"duration")
| eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
| eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")

Ciao.
Giuseppe
@splunknoob_rip  Can you please try this?

require([
    'jquery',
    "underscore",
    'splunkjs/mvc',
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function ($, _, mvc, SearchManager) {
    var submittedTokens = mvc.Components.get('submitted');
    var defaultTokens = mvc.Components.get("default");
    console.log("Hiee 1");

    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: '| makeresults | eval test = "$capturedValue$" | collect index = "test_index"',
        preview: false
    }, {
        tokens: true,
        tokenNamespace: "submitted"
    });

    $("#btn-submit").on("click", function () {
        // Capture the value of the text area and push it into the token models
        var captured = $("textarea#outcome").val();
        defaultTokens.set("capturedValue", captured);
        submittedTokens.set(defaultTokens.toJSON());
        mysearch.startSearch();
    });
});

I hope this will help you.
Thanks
KV
If any of my replies help you to solve the problem or gain knowledge, an upvote would be appreciated.
It failed. I want to make sure that only the IDs found by search 1 are searched in search 2.

1.   index=A title=AA
2.   | append [| search index=B]
Hi @AMAN0113,
on a HF an upgrade made via the GUI doesn't require a restart; otherwise you have to restart the forwarder.
During the restart you don't lose any data, because logs are written by Linux to files that are read by the forwarder when it restarts; you'll only have a delay in indexing.
Obviously, scripts aren't executed during the restart, but they will be executed at the next scheduled time.
Let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
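If you want to confirm that the forwarder catches up after the restart, a quick sketch using only default fields is to chart the indexing lag for that host; the host filter and time range below are placeholders to adapt to your environment:

index=* host=<your_forwarder_host> earliest=-4h
``` _indextime minus _time approximates the indexing delay in seconds ```
| eval lag_seconds=_indextime-_time
| timechart span=5m avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag

A spike in max_lag around the restart window that then falls back to normal would confirm the expected catch-up behaviour rather than data loss.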
Hi,
Splunk Assist is producing a lot of execution errors in my search head cluster and on my intermediate forwarder. Since the former is part of a cluster, I thought of deploying the app.conf via the SHC deployer, but that would not be enough, since the docs say one must execute ./splunk disable app splunk_assist. The same applies to heavy forwarders connected to a deployment server. Sadly, the Splunk documentation on Splunk Assist doesn't seem to take into account that clustered environments exist.
Kind Regards
th
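For the deployed configuration itself, a minimal sketch of what the pushed app.conf could contain is below, assuming the standard [install] stanza is what keeps the app disabled on SHC members and deployment clients; the splunk_assist app name comes from the post, and the exact path under shcluster/apps or deployment-apps is an assumption to verify against your environment and the Splunk Assist docs:

# Sketch only: e.g. $SPLUNK_HOME/etc/shcluster/apps/splunk_assist/local/app.conf on the deployer,
# or <deployment-app>/local/app.conf overlaying splunk_assist on deployment clients
[install]
# intended to match what "./splunk disable app splunk_assist" writes locally
state = disabled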
Why does the SSL status show as "false" despite configuring SSL? In our environment we have enabled TLS between forwarders and receivers. The connection is established and we can see data coming into Splunk through the secure TLS channel. I have also validated manually using the openssl s_client tool, and verification was successful with status ok. Still, for some hosts we see SSL as false, and it keeps changing at random times between True and False; the connection type is cookedSSL for the "false" host.

I have checked all the TcpOutputProc and TcpInputProc entries in the splunkd logs and cannot find any errors related to SSL.

But I found the WARN messages below on one of the forwarders. Is this causing the problem? Any leads on this?
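To narrow down where the "false" flag comes from, one diagnostic sketch is to watch how the reported connection type flips over time on the receiving side. The search below assumes the tcpin_connections metrics in _internal carry hostname, connectionType, and ssl fields (which is what the monitoring console's forwarder views usually rely on), so verify the field names against your own data:

index=_internal source=*metrics.log* group=tcpin_connections
``` hostname, connectionType and ssl are assumed field names in these metrics events ```
| bin _time span=5m
| stats latest(connectionType) as connectionType latest(ssl) as ssl by _time, hostname
| sort 0 hostname, _time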
Excellent, thanks for the diagnosis!  So, the ASCII order in * enumeration ruined the conditions.  In my emulations, source series begin with a slash (/) that precedes T, but sourcetype series all begin with a lower-case letter that succeeds T.  This explains why the two groupby's behave differently.

But I still need to null out Total.  So, a better (and yet simpler) approach is to place "Total" at the end of the enumeration, taking advantage of Splunk's globber rule:

| addtotals
| delta "Total" as delta
| foreach * Total delta [eval <<FIELD>> = if(-delta > Total OR Total < 5000, null(), '<<FIELD>>')]
| fields - delta

Update: In the real world, it is often undesirable to use arbitrary thresholds like Total < 5000.  For this technique to work, I also need to make sure all other fields are nulled before delta.  So, I must expressly specify the order of these two fields in foreach.

Update 2: In addition to wanting to nullify Total, I also need to remove delta.  So, my best approach would be a hybrid of hacking the field name and ensuring order:

| addtotals
| delta "Total" as _delta
| foreach * Total [eval <<FIELD>> = if(-_delta > Total OR Total < 5000, null(), '<<FIELD>>')]

This way, field deletion is also unnecessary.  Thanks again, @bowesmana for the inspiration!
This solution is the one I have been looking for. Thank you.
@ITWhisperer You are a genius.
You can use a subsearch for title AA in index A to restrict to the desired id, like

index=B [search index=A title=AA | dedup id | fields id]
| stats count

Hope this helps.
Hi,
In ITSI > Notable Event Aggregation Policies > Action Rules, "Run a script" can no longer be executed.

The change that triggered the issue:
- Splunk Core version upgrade (8.2.7 > 9.0.5.1)

Environment before the upgrade:
- Splunk Core 8.2.7
- ITSI 4.11.6
- Configure Run a Script [File name] "patlite.sh RED" > running (enabled)

Environment after the upgrade:
- Splunk Core 9.0.5.1
- ITSI 4.11.6
- Configure Run a Script [File name] "patlite.sh RED" > not working

Script deployment location: /opt/splunk/etc/apps/SA-ITOA/bin/scripts/patlite.sh

The ITSI version has not changed, only the Splunk Core version, but is there some configuration change that needs to be made?
That generally means X is not X, i.e. if you put

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") domain="*X*"

do you get results?

If you do this in your first search

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
| eval trimmed_domain=trim(domain)
| eval bounded_domain=":".domain.":"
| stats count by domain trimmed_domain bounded_domain
| eval trimmed_equal_domain=if(trimmed_domain=domain, "YES", "NO")

you can see whether you have leading or trailing spaces around X: if trimmed_equal_domain is NO, then there are leading/trailing spaces. The bounded_domain makes it easier to see what's what by adding : before and after.
Hi Splunkers, I have a question regarding the Splunk Observability heatmap chart. I'm wondering if it's possible to exclude or rename the n/a values on my panel. I think those are the stateless pods that are no longer sending the namespace.

[screenshot of the plot and chart options]

Thanks
index   title   id
A       AA      111
A       CC      111
B       BB      111

If the index is A and the title is AA, I'm trying to find the id in index B and count how many there are. In the above example, the second row has title CC, so even though its id value is the same, it is not counted. There is one id 111 in index B, so the answer I want is 1. How do I write this query?
The first search query returns a count of 26 for domain X:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
| stats count by domain

But when I run the query below to see just the events corresponding to domain=X, I get zero events:

index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") domain="X"

Any clue why this might be happening?
Updated query: the time difference is coming out as "12/31/23 19:00:30:295"

index=web* "Message sent to Kafka" OR "Response received from Kafka"
| stats earliest(_time) as Msg_received, latest(_time) as Response_Kafka by Unique_ID
| eval difference=Response_Kafka-Msg_received
| eval difference=strftime(difference,"%d-%m-%Y %H:%M:%S")
| eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
| eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")