Activity Feed
- Karma Re: How to resolve "ERROR KVStorageProvider - An error occurred during the last operation" in indexers for richgalloway. 06-12-2023 06:57 AM
- Posted How to resolve "ERROR KVStorageProvider - An error occurred during the last operation" in indexers on Deployment Architecture. 05-28-2023 10:40 PM
- Posted Re: How to get the latest host value which is sending logs by comparing 2 hosts? on Splunk Search. 07-22-2022 01:37 AM
- Posted Re: How to get the latest host value which is sending logs by comparing 2 hosts on Splunk Search. 07-21-2022 10:46 AM
- Posted Re: How to get the latest host value which is sending logs by comparing 2 hosts on Splunk Search. 07-21-2022 05:49 AM
- Posted How to get the latest host value which is sending logs by comparing 2 hosts? on Splunk Search. 07-21-2022 04:47 AM
- Posted Lookup based dashboard in another SHC where the Lookup file is not there on Dashboards & Visualizations. 03-23-2021 08:37 PM
- Posted O365 logs stopped indexing on All Apps and Add-ons. 06-11-2020 03:02 AM
- Got Karma for Re: How do you get the value from a tabular event for alerting?. 06-05-2020 12:50 AM
- Got Karma for Re: How do you get the value from a tabular event for alerting?. 06-05-2020 12:50 AM
- Posted Re: Table time field using transaction on Splunk Search. 10-18-2019 02:27 AM
- Posted Re: Splunk DB Connect: How to send database search output to Splunk and build a dashboard with the result set? on All Apps and Add-ons. 07-08-2019 05:25 AM
- Posted Re: Splunk DB Connect: How to send database search output to Splunk and build a dashboard with the result set? on All Apps and Add-ons. 06-25-2019 10:15 PM
- Posted Re: Splunk DB Connect: How to send database search output to Splunk and build a dashboard with the result set? on All Apps and Add-ons. 06-25-2019 05:26 AM
- Posted Splunk DB Connect: How to send database search output to Splunk and build a dashboard with the result set? on All Apps and Add-ons. 06-24-2019 10:16 PM
- Tagged Splunk DB Connect: How to send database search output to Splunk and build a dashboard with the result set? on All Apps and Add-ons. 06-24-2019 10:16 PM
- Posted splunk alert subject - [EXTERNAL] splunk alert failed on Reporting. 05-16-2019 09:45 PM
- Tagged splunk alert subject - [EXTERNAL] splunk alert failed on Reporting. 05-16-2019 09:45 PM
- Posted How to place splunk lookup table data output to a remote server? on Splunk Search. 04-29-2019 02:58 AM
Topics I've Started
05-28-2023
10:40 PM
Hi Team, we are seeing the internal error below on the majority of our indexers: "timestamp ERROR KVStorageProvider - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling ismaster on '[::1]:8191']" Can someone explain what this error message indicates?
- Labels:
- indexer
07-22-2022
01:37 AM
Hi @somesoni2, thanks for your reply. I tried the SPL you gave, but its condition always looks for a silent-host count of exactly 2 (`where silent_hosts=2`), which ignores the case where a single host has been silent for more than 20 hours (for other index and sourcetype combinations, logs have been coming from a single host continuously for the past month), so the query discards those feeds. I also tried `where silent_hosts>=1`, but then it displays the old stopped host1, which it should not, because we are getting the logs to the same index and sourcetype from host2.
07-21-2022
10:46 AM
1) At any point in time, one of the two hosts will be active and sending logs, so the silent-hours condition (silent_in_hours>20) will always be false while we are receiving logs, and the alert should not trigger. 2) If both hosts are silent for more than 20 hours, the condition becomes true and the alert should trigger. I hope the requirement is clear.
07-21-2022
05:49 AM
Hi @richgalloway, if host1 is silent, then per the above logic the where condition will report host1 as silent. I should not get that result, because host2 will be sending the logs. We want logic that checks whether either host is sending logs; if either one is, the alert should not trigger. Logs come interchangeably from host1 and host2, switching roughly every 15 days.
07-21-2022
04:47 AM
Hi,
I have a search like the one below, where logs come into the fig1, fig4, fig5, and fig6 indexes from either of two hosts, say host1 and host2. The two hosts never send logs at the same time; only one of them is actively sending logs to the fig1 index with sourcetype abc.
| tstats latest(_time) as latest_time WHERE (index=fig*) (NOT index IN (fig2,fig3)) sourcetype="abc" by host index sourcetype | eval silent_in_hours=round((now() - latest_time)/3600,2) | where silent_in_hours>20 | eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")
I want logic so that if either host1 or host2 is sending logs, the query above gives no output (it should not display the silent host, because we are getting logs from the other host).
Thanks in advance
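One way to express "alert only when every host for an index/sourcetype pair is silent" is to compare the count of silent hosts against the total number of hosts seen per group, instead of filtering host by host. A sketch along those lines (the index names and the 20-hour threshold come from the question; the intermediate field names like total_hosts are illustrative):

```
| tstats latest(_time) as latest_time WHERE (index=fig*) (NOT index IN (fig2,fig3)) sourcetype="abc" by host index sourcetype
| eval silent_in_hours=round((now() - latest_time)/3600,2)
| eventstats count as total_hosts, sum(eval(if(silent_in_hours>20,1,0))) as silent_hosts by index sourcetype
| where silent_hosts=total_hosts
| eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")
```

One caveat: tstats only returns hosts that sent at least one event inside the search time range, so a host silent for longer than the range disappears from the results entirely. Run the search over a window comfortably wider than the silence threshold, or compare against a lookup of expected hosts, to avoid missing it.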
03-23-2021
08:37 PM
Hi All, I have created a dashboard built entirely on dynamic lookup files in a clustered environment. It shows statistics and selected values from the lookup file in different panels, per the requirement. Now the requirement is to create the same dashboard in another cluster environment where the lookup files do not exist (we are not authorized to upload the necessary lookup files in that SHC). How can this be accomplished? Thanks in advance 🙂
- Labels:
- panel
06-11-2020
03:02 AM
Hi there, I can see the following error message in the internal logs: 2020-06-06 09:00:23,441 ERROR pid=2 tid=MainThread file=base_modinput.py:log_error:307 | HTTP Request error: 500 Server Error: Internal Server Error for url: https: *********** Can someone guide me in fixing the issue so that the logs resume? Thanks in advance
- Tags:
- o365
10-18-2019
02:27 AM
The time of the first event in the transaction is assigned to _time for the entire transaction, and the transaction command automatically assigns a duration field to each transaction, so you can eval the end time as _time + duration.
... your search | transaction ... | eval First_Event_Time=_time | eval Last_Event_Time=_time+duration | table ........
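As a hedged illustration of the pipeline above (the sourcetype and the session_id grouping field are made up for the example, not taken from the thread):

```
sourcetype=web_access | transaction session_id
| eval First_Event_Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval Last_Event_Time=strftime(_time + duration, "%Y-%m-%d %H:%M:%S")
| table session_id First_Event_Time Last_Event_Time duration
```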
07-08-2019
05:25 AM
I'll go with a scripted input for now. Could you please help me with the configuration files?
I have the query with me now. If I place that query in a script, then after a successful run of the script the output will be redirected to a separate file.
How do I set up my config files on the forwarder so that it both executes the script and reads the output file?
I'm new to scripted inputs. Thanks
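For reference, a scripted input is wired up on a forwarder with an inputs.conf stanza pointing at the script. The app name, paths, index, and sourcetype below are placeholders, not values from this thread:

```ini
# $SPLUNK_HOME/etc/apps/my_db_app/local/inputs.conf  (app name is hypothetical)
[script://$SPLUNK_HOME/etc/apps/my_db_app/bin/run_query.sh]
interval = 1200
index = my_db_index
sourcetype = db:query_output
disabled = 0
```

Note that whatever the script writes to stdout is indexed directly, so there is usually no need for a separate output file plus a monitor stanza; having the script print its results is the simpler design. The interval is in seconds, so 1200 corresponds to the 20-minute cadence mentioned later in the thread.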
06-25-2019
10:15 PM
Yes @amitm05, search queries using | dbquery have to be processed by Splunk, but the data being processed lives in the database.
I am thinking of developing a script that runs the query against the database and keeping it as a scripted input with something like a 20-minute interval. Will that work?
06-25-2019
05:26 AM
Hi @amitm05,
Could you elaborate on procedure number 3? Also, I have scheduled some of the database search queries using the | dbquery command in Splunk. So any database-related search query is ultimately processed by Splunk, right?
Please correct me if I'm wrong.
Thanks 🙂
06-24-2019
10:16 PM
Hi all,
My requirement is to build a dashboard from the output of a database search.
I have a complex SQL query with 100+ lines of logic in it.
When I tried to run it through Splunk, which is on another network, I faced problems with the Java Bridge server (status shown as loading or stopped), and the search job status stayed at parsing.
Because of this, other database cron jobs were stopped.
After restarting the search head, the Java Bridge server status shows as running again.
Is there any way to build a dashboard from the output of this complex SQL search without running it through Splunk?
We are using the Splunk DB Connect app, version 1.1.4.
Thanks 🙂
05-16-2019
09:45 PM
Hi All,
We used to get Splunk alerts with the subject line defined as "splunk alert : $name$".
For the past two days, the subject line of every Splunk alert has included the string [EXTERNAL], e.g. [EXTERNAL] Splunk Alert : Failure Alert. This unwanted string was added to all the subject lines.
Why is this [EXTERNAL] string being added to the Splunk alerts, and how can we avoid it?
Thanks
04-29-2019
02:58 AM
Hi All,
I have configured an alert whose trigger action is "Output results to lookup" with the replace option.
Since the alert runs every hour, a .csv gets generated with the results.
Is there any way to send that lookup file to an external location, i.e. a remote server?
Thanks in advance.
04-26-2019
02:49 AM
Ok @FrankVl, thanks for your quick response.
Thank you 🙂
04-26-2019
02:42 AM
Will that work? Will the missing log from a particular source start indexing again if I restart the Splunk UF as the splunk user?
And what about the thing called the fishbucket in this scenario?
04-26-2019
02:34 AM
Hi All,
So, what happens when I restart a universal forwarder as the root user on Linux? And if that has been done previously, what needs to be done if anything goes wrong?
I am missing one of the log files on a particular host, but the remaining logs from different sources on the same host are working fine.
So I restarted the UF as the root user, but it didn't work.
Any help?
Thanks
04-26-2019
02:29 AM
What happens if we restart the Splunk forwarder as the root user?
04-25-2019
02:03 AM
Yes, other sources are also sending data to the same index:
[monitor:///user/sysem.log]
index=bal
sourcetype=mri
Different logs from different sources are coming into the same index.
04-24-2019
10:26 PM
Hi All,
On the server with the UF installed, we have a monitor stanza to read a .log file from a particular source, with its own sourcetype.
I was getting the log feed for about 7 days, but it suddenly stopped, and I can no longer see any log feed from that particular sourcetype only.
I am still getting the other types of log files, from roughly 8 sources, from the same UF server to the indexer.
I have restarted the UF with no luck. Running the splunk btool command shows the monitor stanza for the missing sourcetype in inputs.conf along with the others.
Please guide me on this.
Thanks
04-18-2019
02:34 AM
Hi all,
Splunk is not sending alert emails to one of the group distribution lists, but at the same time it is sending emails to the other group distribution list and to all the individual email IDs in that alert's DL.
What could be the problem here?
Thanks
- Tags:
- splunkalert
04-11-2019
10:30 PM
@akarunkumar321, you can try the query below and let me know (the rex capture-group names were restored from the field names used later in the pipeline):
index=ccp source=service1.log earliest=-4h latest=now() | rex field=_raw "trackingId\":\s\"(?<ProducerTrackingID>[\w-]+)\"" | table ProducerTrackingID | join type=outer ProducerTrackingID [search index=ccp source=service2.log earliest=-4h latest=now() | rex field=_raw "trackingId\":\s\"(?<ConsumerTrackingID>[\w-]+)\"" | rename ConsumerTrackingID as ProducerTrackingID] | search NOT source=service1.log
04-11-2019
10:18 PM
@akarunkumar321 can you try splunk joins here
04-09-2019
12:18 AM
... | rex "(?<STATUS>good|bad)" will help you
03-25-2019
10:07 PM
Hi All, I found the solution for this and will take the chance to post the answer here.
First, I wrote the regexes to extract the multiple values of MIDs and TIDs from the raw logs:
sourcetype=mysourcetype TID MID | rex max_match=50 "<MID>(?P<mid_extracted>[^\<]+)" | rex max_match=50 "TID\=\"(?P<tid_extracted>[^\"]+)" | table mid_extracted, tid_extracted
Then, to split the multiple values in a single event, I used mvexpand, and finally performed a join with the externally uploaded lookup file. The lookup table contains 2 columns: one is MID Values/TID Values and the other is Status. The MID Values/TID Values column holds all the values to be checked, and in the Status column every value is written as MATCHED.
The final query with the join condition is:
sourcetype=mysourcetype TID MID | rex max_match=50 "TID\=\"(?P<tid_extracted>[^\"]+)" | mvexpand tid_extracted | table tid_extracted | join type=left tid_extracted [| inputlookup tid_test.csv]
Now if any value in the Splunk-extracted output matches a value in the lookup file, the Status field displays MATCHED; otherwise it is empty.
Note: in the lookup file, the column heading must exactly match the Splunk output field name, i.e. it should be tid_extracted.
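As a side note, the same matching can usually be done without join by using the lookup command, which avoids join's subsearch limits. A sketch, assuming a lookup definition named tid_test exists over the uploaded CSV with the columns tid_extracted and Status described above:

```
sourcetype=mysourcetype TID MID
| rex max_match=50 "TID\=\"(?P<tid_extracted>[^\"]+)"
| mvexpand tid_extracted
| lookup tid_test tid_extracted OUTPUT Status
| eval Status=coalesce(Status, "NOT MATCHED")
| table tid_extracted Status
```

The coalesce line just makes the non-match case explicit instead of leaving Status empty; drop it if an empty value is preferred.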