We have a requirement where event logs need to be indexed into a metrics index. For this we are using the mcollect command to convert them at search time. For the automatic conversion we have scheduled the search to run every hour, but if a scheduled run is missed, the data for that window is lost. What is the best way to store the last indexed time and run the schedule based on that last indexed time? Note: in the long run we will be using the LogToMetrics sourcetype for conversion; until then we need to use the mcollect command.
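A minimal sketch of one checkpointing pattern, assuming a CSV lookup named last_mcollect_time.csv and placeholder index and metric names (seed the lookup once with a starting epoch before the first run). The subsearch turns the stored checkpoint into an earliest= time modifier for the conversion search:

    index=my_event_index
        [ | inputlookup last_mcollect_time.csv
          | stats max(last_time) AS earliest
          | return earliest ]
    | eval metric_name="app.event_count", _value=1
    | mcollect index=my_metrics_index

then a follow-up step (or a second scheduled search) advances the checkpoint:

    | makeresults
    | eval last_time=now()
    | outputlookup last_mcollect_time.csv

With this pattern a missed run simply widens the next run's window instead of dropping the data for that schedule.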
Hi All, I have a lookup file (.csv) with Timestamp, Name and USEDGB values. I have been trying to run a timechart to see the total USEDGB per day. Both the lookup definition and the lookup table file have app permissions.

    |inputlookup ABC_DISK_UTILIZATION.csv
    |eval _time=Timestamp
    |timechart span=24h sum(USEDGB)

The result only shows time but no values of USEDGB. Can you please help?

    Timestamp         NAME    USEDGB
    12/08/2023 22:04  RECO_A  48.61
    12/08/2023 13:04  RECO_B  46.21
    12/08/2023 03:04  RECO_C  46133.89
    11/08/2023 20:01  RECO_A  164.11
    11/08/2023 18:01  RECO_B  48.61
    11/08/2023 16:01  RECO_C  46.21
    10/08/2023 22:00  RECO_A  45327.22
    10/08/2023 17:00  RECO_B  193.4
    10/08/2023 08:00  RECO_C  48.61
    09/08/2023 21:00  RECO_A  46.21
    09/08/2023 13:00  RECO_B  45205.72
    09/08/2023 06:00  RECO_C  132.57
    08/08/2023 19:00  RECO_A  48.61
    08/08/2023 12:00  RECO_B  46.21
    08/08/2023 10:00  RECO_C  45203.77
    07/08/2023 22:00  RECO_A  132.56
    07/08/2023 14:00  RECO_B  48.61
    07/08/2023 07:00  RECO_C  46.21
    06/08/2023 22:04  RECO_A  45199.08
    06/08/2023 13:04  RECO_B  123.85
    06/08/2023 03:04  RECO_C  48.61
    05/08/2023 20:01  RECO_A  46.21
    05/08/2023 18:01  RECO_B  45196.12
    05/08/2023 16:01  RECO_C  117.4
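The likely cause: Timestamp is a string, so eval _time=Timestamp does not produce a valid epoch time and timechart has nothing to bucket. A sketch of the fix, assuming the time format matches the sample data (%d/%m/%Y %H:%M):

    |inputlookup ABC_DISK_UTILIZATION.csv
    |eval _time=strptime(Timestamp, "%d/%m/%Y %H:%M")
    |timechart span=1d sum(USEDGB)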
Hi all, what file would I change if I wanted to customize the global banner? I'd like a dark color to match the Splunk header, as well as an unbolded font. I am thinking this is a CSS/HTML solution -- possibly somewhere in the mrsparkle directory? I went into global-banner.conf and changed global_banner.backgrond_color = black, but as expected that didn't work. Thank you!
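For reference, a minimal sketch of the stanza as I recall it from global-banner.conf.spec; note the setting is spelled background_color, and to my recollection it only accepts a small fixed palette (something like blue, green, yellow, orange, red), so treat both the value list and any support for arbitrary colors like black as assumptions to verify:

    [BANNER_MESSAGE_SINGLETON]
    global_banner.visible = true
    global_banner.message = Maintenance window tonight
    global_banner.background_color = blue

Anything beyond that palette, including font weight, would indeed mean overriding CSS under the web UI (mrsparkle) tree, which is not upgrade-safe.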
Hi @trashyroadz, I have opened a new thread for the issue I am facing. Current Splunk version: 8.2.3.3. While running a query on the search page, I get the error: "Unable to distribute to peer named <idx name> at uri <splunk idx uri> because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_RETRIEVAL_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available." I did not suspect a connectivity issue, because the message box said the bundle size exceeds the limit. On checking, I could see all apps in $SPLUNK_HOME/var/run/splunk/deploy on the deployer even though we had changed a single file in a single app. As per my understanding, only modified apps should be pushed to the SHs, and from the SH captain to the search peers. Please help with this. Let me know if any other detail is needed.
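If the bundle really is over the size limit, the usual levers live in distsearch.conf on the search heads: raise the cap or, better, keep large artifacts (typically big lookup CSVs) out of the replicated bundle. A sketch, with stanza and setting names from memory, to be verified against distsearch.conf.spec for 8.2:

    [replicationSettings]
    maxBundleSize = 2048

    [replicationBlacklist]
    big_lookups = apps/*/lookups/*.csv

maxBundleSize is in megabytes; the blacklist pattern excludes matching files from the knowledge bundle, so make sure any excluded lookups are not needed at search time on the peers.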
Hi Team, I help manage the Terraform Cloud for Splunk app, and after v2.37.0 of the AppInspect CLI was released we're seeing this test failure:

    FAILURE: check_for_updates property found in [package] stanza is set to True for private app not uploaded to Splunkbase. It should be set to False for private apps not uploaded to Splunkbase.
    File: default/app.conf
    Line Number: 15

I find this message confusing, as the app is published on Splunkbase. I don't want to disable checks for updates unnecessarily. Could I please get some help understanding how to address this error? Or help me figure out if I'm misunderstanding something? Thanks!
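For context, the property the check inspects is the standard app.conf flag, which looks like this (false is what the check wants, but only for apps that are not on Splunkbase):

    # default/app.conf
    [package]
    check_for_updates = false

Since the app is on Splunkbase, leaving it set to true should be legitimate; if the CLI still flags it, that looks like the check misclassifying the app as private. It may be worth confirming which validation mode (private-app vs. Splunkbase submission) the CLI is being run in, since the private-app checks assume no Splunkbase listing.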
In the Monitoring Console of Splunk Cloud, if you click on the Forwarders tab, then the "Forwarders Deployment" option, and scroll down a bit, you see "Status and Configuration - As of <timestamp>". The timestamp I see is outdated by 6 months from today's date: 19/03/2023 13:09:03+1:00. I am confused, because if I type an instance into the box, I get live data from when the instance last connected to the indexer, with timestamps within the last 15 minutes for active forwarders. What does it mean if this timestamp is wrong? What sort of fixes can I take to try to solve this?
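A sketch of one way to see where that panel's data comes from, assuming Splunk Cloud mirrors the on-prem Monitoring Console: there, the panel reads a forwarder asset table (a lookup named dmc_forwarder_assets) rebuilt by a scheduled search called something like "DMC Forwarder - Build Asset Table", and the "As of" stamp reflects the last rebuild. Lookup and field names here are assumptions carried over from the on-prem app:

    | inputlookup dmc_forwarder_assets
    | eval last_seen = strftime(last_connected, "%d/%m/%Y %H:%M:%S")
    | sort - last_connected

If the lookup is months stale, the rebuild search has likely been disabled or failing; on Splunk Cloud that is usually something to raise with support, since the scheduled search is not always user-editable.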
Hi All, I'm looking for a way to share a non-expiring search with other users. If we use the "Share Job" option or just use the URL from the address bar, it will expire once the job expires. But I want to share a link that will not expire. Of course, in such a case the search needs to run again on the user's end, but the timestamps and search query are the things I want to share with the link. Is there a way to do this?
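One way, with host and query as placeholders: build a link that embeds the SPL and the time range as URL parameters instead of a job ID. Opening it re-runs the search, so it never expires:

    https://<your-splunk-host>/app/search/search?q=search%20index%3Dweb%20loglevel%3DERROR&earliest=-24h%40h&latest=now

The q parameter must be URL-encoded and begin with the word "search"; earliest/latest pin the time range so everyone sees the same window definition (relative values like -24h@h re-evaluate at open time, while fixed epoch values freeze the window).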
Hello, I'm trying to add new/existing key indicator searches to my dashboard in ES, but the edit toolbar does not have the "Add Key Indicator" button. (Screenshots: my custom dashboard, and the default dashboard with key indicators.) I also tried to clone the default "Email Activity" dashboard (which has existing key indicators in it), but the cloned dashboard cannot be loaded. What should I do? If this is a bug, which log files do I need to check? Thank you.
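For the "which log files" part, a starting sketch: UI-side failures when a dashboard will not load usually surface in web_service.log and splunkd.log, which you can read from the _internal index around the time you tried to open the clone:

    index=_internal (sourcetype=splunk_web_service OR sourcetype=splunkd) log_level=ERROR
    | table _time sourcetype component _raw

This only narrows down where the error is logged; whether the missing "Add Key Indicator" button is a permissions issue, an unsupported dashboard type, or an ES bug would come from what those errors say.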
Hi, how can we find out the HEC URL for my Splunk Cloud instance?
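For most Splunk Cloud stacks the documented pattern derives the HEC endpoint from your stack name (the host part of your Splunk Cloud URL). For example, for https://mystack.splunkcloud.com:

    curl https://http-inputs-mystack.splunkcloud.com:443/services/collector/event \
      -H "Authorization: Splunk <your-HEC-token>" \
      -d '{"event": "hello world", "sourcetype": "manual"}'

mystack is a placeholder; note that trial stacks and some regions use a different prefix (e.g. an inputs. host), so it is worth confirming the exact pattern for your stack in the Splunk Cloud HEC docs.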
Hi, I'm trying to configure a "Custom Data Type" > SQS input in the Splunk Add-on for AWS app to onboard data from an AWS account. Is it possible to create the SQS input using an IAM role instead of an account (for which I would need the KeyId and SecretKey of the account)?
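A heavily hedged sketch: the add-on generally supports role-based access in two forms, discovering credentials from an EC2 instance profile when the forwarder runs in AWS, and assuming a role ARN registered under Configuration > IAM Roles for cross-account access. Whether the plain SQS input (as opposed to SQS-based S3) exposes the role option varies by version, and the stanza and parameter names below are assumptions to verify against the add-on's inputs.conf.spec:

    [aws_sqs://my_sqs_input]
    aws_iam_role = splunk-sqs-reader
    sqs_queues = my-queue
    interval = 300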
How do I convert GMT to Jakarta (JKT) time in Splunk events using a query?
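Since _time is stored as epoch seconds, a simple sketch is to add the Jakarta offset when rendering, assuming the events' _time is correct and that JKT means Western Indonesia Time (WIB, UTC+7, no daylight saving):

    ... | eval jkt_time = strftime(_time + 7*3600, "%Y-%m-%d %H:%M:%S")

This only changes the displayed string; _time itself stays in epoch, so sorting and timechart behavior are unaffected.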
Hi, I'm trying to integrate the Tanium module with Splunk over HTTP, but I don't see what exactly we need to add in the URL, and also what we need to add in the headers for Splunk. Can anyone help me with the masked data?
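A sketch, assuming the Tanium side is posting to Splunk's HTTP Event Collector (the usual HTTP integration target): the URL is the HEC event endpoint and the one required header is the Authorization header carrying the HEC token:

    URL:     https://<splunk-host>:8088/services/collector/event
    Header:  Authorization: Splunk <your-HEC-token>
    Header:  Content-Type: application/json
    Body:    {"event": {"source": "tanium"}, "sourcetype": "tanium:events"}

Host, port (8088 is the default for on-prem HEC), token, and the tanium:events sourcetype are all placeholders to adjust to your environment.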
Hi, Splunk Assist is producing a lot of execution errors in my search head cluster and on my intermediate forwarder. Since the former is part of a cluster, I thought of deploying the app.conf via the SHC deployer, but that would not be enough, since the docs say one must execute ./splunk disable app splunk_assist. Similarly for heavy forwarders connected to a deployment server. Sadly, the Splunk docs on Splunk Assist don't seem to acknowledge that clustered environments exist. Kind regards, th
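One workaround sketch, under the assumption that ./splunk disable app does nothing more exotic than writing the disabled state into the app's local app.conf: ship that same setting from the deployer (and analogously from the deployment server for the forwarders):

    # On the deployer: etc/shcluster/apps/splunk_assist/local/app.conf
    [install]
    state = disabled

The deployer merges local into default when pushing, so the members receive the app disabled; verify on one member with ./splunk display app splunk_assist before rolling it everywhere.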
Why does SSL status show as "false" despite configuring SSL? In our environment we have enabled TLS between forwarders and receivers. The connection is established and we can see data coming into Splunk through the secure TLS channel. I have also validated manually using the openssl client module, and verification was successful with status OK. Yet for these hosts we see SSL as false, and it keeps changing at random times to true or false; the connection type is cookedSSL for the false host. I have checked all the TcpOutputProc and TcpInputProc entries in the splunkd logs and cannot find any errors related to SSL, but found the below WARN messages on one of the forwarders. Is this causing the problem? Any leads on this?
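A sketch for cross-checking what the indexers themselves report per connection, assuming the ssl field in metrics.log's tcpin_connections group is what the console derives its status from (field names to verify in your version):

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(ssl) AS ssl latest(connectionType) AS conn_type BY hostname sourceIp

If a forwarder flips between ssl=true and ssl=false here, that usually points at it having two outputs (one TLS, one plain), or at intermediate forwarders in the path reconnecting over a non-TLS output stanza, rather than at a certificate problem.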
Hi, in ITSI > Notable Event Aggregation Policies > Action Rules, "Run a script" can no longer be executed. The work that triggered the issue: a Splunk Core version upgrade (8.2.7 > 9.0.5.1).

Environment before the work:
- Splunk Core 8.2.7
- ITSI 4.11.6
- Configure "Run a Script" [File name] "patlite.sh RED" > running, enabled

Post-work environment:
- Splunk Core 9.0.5.1
- ITSI 4.11.6
- Configure "Run a Script" [File name] "patlite.sh RED" > not working

Script deployment location: /opt/splunk/etc/apps/SA-ITOA/bin/scripts/patlite.sh

The ITSI version has not been changed, only the Splunk Core version, but is there some configuration change that needs to be made?
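A first diagnostic sketch: search the _internal index for the script name around a triggering event, to see whether the action fires and fails, or never fires at all:

    index=_internal (sourcetype=splunkd OR sourcetype=splunkd_stderr) "patlite.sh"
    | table _time log_level component _raw

Worth checking alongside: Splunk 9 tightened handling of scripted and legacy alert actions, so file permissions, the executing user, and whether the script path is still permitted after the upgrade are the usual suspects. That framing is an assumption to confirm against the 9.0 release notes rather than a known ITSI fix.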
Hi Splunkers, I have a question regarding the Splunk Observability (Olly) heatmap chart. I'm wondering if it's possible to exclude or rename the "n/a" on my panel. I think those are the stateless pods that are no longer sending a namespace. (Screenshot of the plot and chart options omitted.) Thanks!
    index  title  id
    A      AA     111
    A      CC     111
    B      BB     111

If the index is A and the title is AA, I'm trying to find the id in index B and count how many there are. In the above example, the second row's title is CC, so even though its id value is the same, it is not counted. There is one id 111 in index B, so the answer I want is 1. How do I write the query?
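One sketch, using the example's index and title values literally: pull both indexes in one search, mark which side each event came from, and only count the B-side events whose id also appeared in A with title AA:

    (index="A" title="AA") OR index="B"
    | eval in_A = if(index="A", 1, 0), in_B = if(index="B", 1, 0)
    | stats sum(in_A) AS matches_in_A, sum(in_B) AS matches_in_B BY id
    | where matches_in_A > 0
    | stats sum(matches_in_B) AS total

For the sample data this yields total=1: id 111 appears in index A with title AA and once in index B, while the A event with title CC never enters the search, so it cannot inflate the count.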
The first search query returns a count of 26 for domain X:

    index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") | stats count by domain

But when I run the query below to see just the events corresponding to domain=X, I get zero events:

    index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency") domain="X"

Any clue why this might be happening?
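Two common causes, with a sketch for each: the domain value may contain invisible characters (whitespace or case differences) that the stats output hides, or the field may only be populated by search-time processing that the bare term filter cannot use (e.g. a lookup or calculated field). To inspect the exact values:

    index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
    | stats count by domain
    | eval domain_length = len(domain)

and to filter after all field processing has run:

    index="web" sourcetype="weblogic_stdout" loglevel IN ("Emergency")
    | where domain="X"

If the where-based filter finds the 26 events while the base-search filter finds none, the field-timing explanation fits; if domain_length is longer than expected, stray characters are the culprit.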
I am looking at logs for asynchronous calls (sending a message and receiving an ack from Kafka). We have two events: the first when we receive the message, start processing, and send it to Kafka; the second when we receive the response back from Kafka. I have a unique message ID to track both events, and I want to capture the average processing time across all unique IDs. In the query below I have not yet added a condition for the unique ID, and I am not getting the "difference" value. Can you please help!

    index=web* "Message sent to Kafka" OR "Response received from Kafka"
    | stats earlies(_time) as Msg_received, latest(_time) as Response_Kafka
    | eval difference=Response_Kafka-Msg_received
    | eval difference=strftime(difference,"%d-%m-%Y %H:%M:%S")
    | eval Msg_received=strftime(Msg_received,"%d-%m-%Y %H:%M:%S")
    | eval Response_Kafka=strftime(Response_Kafka,"%d-%m-%Y %H:%M:%S")
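A corrected sketch, assuming a field named message_id is already extracted (rename it to your actual field): fix the earlies > earliest typo, parenthesize the OR so both event types stay inside index=web*, group the two events per ID, and keep the difference numeric (strftime on a duration turns it into a 1970 date string, which is why the value appears missing):

    index=web* ("Message sent to Kafka" OR "Response received from Kafka")
    | stats earliest(_time) AS msg_received, latest(_time) AS response_kafka BY message_id
    | eval difference_sec = response_kafka - msg_received
    | stats avg(difference_sec) AS avg_processing_sec

To display a duration human-readably, use tostring(avg_processing_sec, "duration") rather than strftime, which formats points in time, not durations.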