Hi, I am trying to bring back two interesting fields from multiple hosts. My search looks like this.
index=IIS (host=Host1 OR host=Host2 OR host=Host3 OR host=Host4) c_ip=Range OR Client_IP=Range
This search only brings back c_ip results, not Client_IP results. It should bring back both.
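One likely cause, assuming default SPL boolean precedence: AND binds tighter than OR, so without parentheses the trailing OR splits the whole search rather than just the two field tests. A grouped version to try:

```
index=IIS (host=Host1 OR host=Host2 OR host=Host3 OR host=Host4)
    (c_ip=Range OR Client_IP=Range)
```

If both field names carry the same kind of value, another common approach is to coalesce them first, e.g. `| eval client=coalesce(c_ip, Client_IP)` and filter on the merged field.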
Hi,
1) I want to move my hot/warm buckets to cold after 90 days. Is it possible to roll buckets based on a time duration, or only based on volume? I want to keep hot and warm for 90 days, since I am using SSD for them, and move the data to cold on slower disk after that.
Can this setting be applied?
maxHotSpanSecs = [90 days]
2) Also, I intend to keep hot/warm in one path and cold in another; is the config below right for that? Do I need to mention volume: in homePath too (my hot/warm buckets should be in /opt/splunk/var/lib/splunk)?
3) Also, where should accelerated (tstats) summaries ideally be stored?
[default]
homePath = $SPLUNK_DB/$_index_name/db
coldPath = volume:[cold]/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
Thanks in advance!
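For reference, a hedged sketch of the settings in question (volume names, paths, and sizes below are assumptions). Note that maxHotSpanSecs takes a number of seconds, so 90 days would be 7776000, and that it only limits the time span of hot buckets; warm buckets roll to cold based on maxWarmDBCount or on the home volume filling up, not directly on age:

```
# indexes.conf sketch; volume names, paths, and sizes are assumptions
[volume:hot_warm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /slow_disk/splunk/cold

[default]
homePath       = volume:hot_warm/$_index_name/db
coldPath       = volume:cold/$_index_name/colddb
thawedPath     = $SPLUNK_DB/$_index_name/thaweddb
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
```

thawedPath cannot reference a volume, so it keeps the $SPLUNK_DB form. Datamodel/tstats summaries are read frequently, so they are usually kept on the fast (hot/warm) storage tier.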
Hi,
We have a sourcetype called "WinHostMon" and many hosts report into it. Does anyone have any SPL lying around that would let me query the last time a host checked in with that particular sourcetype?
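One common pattern for this, sketched with an assumed index name:

```
| tstats max(_time) as last_seen where index=wineventlog sourcetype=WinHostMon by host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - last_seen
```

`| metadata type=hosts` is faster but cannot be restricted to a single sourcetype, which is why tstats is used here.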
Hi all,
We have a Splunk infrastructure with ESS using SmartStore over S3 on AWS. We moved from Splunk 7.3.0 to 7.3.4 and no errors appeared during the update procedure.
Once we disabled maintenance mode, the Replication Factor and the Search Factor were never met, due to 2 buckets inside two different indexes. We are trying to fix up those buckets, but the fixup never completes and looks stuck in target_wait_time status.
After having looked inside S3 itself, we noticed that the folders which are being referenced inside Splunk are not present in there.
Is there any way to solve this issue?
Thanks in advance.
Regards.
The following query displays user logon events for the last 10 days.
index=main sourcetype=WinEventLog (EventCode=4624 OR EventCode=4634) user=pratapa.ln earliest=-12mon
| eval day=strftime(_time,"%d/%m/%Y") | stats earliest(_time) AS earliest latest(_time) AS latest by user host day
| eval earliest=strftime(earliest,"%d/%m/%Y %H.%M.%S"), latest=strftime(latest,"%d/%m/%Y %H.%M.%S")
But the user wants the data to be retained for 12 months.
To achieve this, we have created a new index named "retention" with the following parameters.
[retention]
coldPath = $SPLUNK_DB/retention/colddb
homePath = $SPLUNK_DB/retention/db
thawedPath = $SPLUNK_DB/retention/thaweddb
maxDataSize = 150
maxHotSpanSecs = 86400
maxTotalDataSizeMB = 54000
frozenTimePeriodInSecs = 31104000
What are the next steps that we need to follow?
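As a sanity check, frozenTimePeriodInSecs = 31104000 is 360 days (31104000 / 86400), so the index as configured retains data for roughly, not exactly, 12 months. The remaining step is to route the relevant events into the new index and point the search at it; a sketch, with the input stanza name as an assumption:

```
# inputs.conf on the forwarder; the stanza name is an assumption
[WinEventLog://Security]
index = retention
```

After that, the report would query index=retention instead of index=main. Note that events already indexed in main are not moved by this change; only newly arriving data lands in retention.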
A user reported that she is unable to log in to Splunk with her credentials, whereas I am able to log in with mine. What could be the problem?
Hi All,
I have a PowerShell script that generates a daily CSV file. I manually upload that CSV file as a lookup table file using Settings > Lookups > Lookup table files > Add new, so I can use it in my Splunk search: "|lookup file.csv number as From OUTPUT User as FromUser".
My question is: how do I get the lookup table to update automatically whenever the CSV file is updated on the local filesystem?
Thanks in advance,
Nouha
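One common pattern for the setup described above, sketched with assumed names (path, index, sourcetype, and the assumption that the CSV columns get extracted, e.g. via INDEXED_EXTRACTIONS = csv in props.conf): index the CSV with a monitored input,

```
# inputs.conf on the machine with the CSV; path and sourcetype are assumptions
[monitor://C:\scripts\file.csv]
index = main
sourcetype = daily_csv
```

then have a scheduled search rebuild the lookup from the indexed rows:

```
index=main sourcetype=daily_csv
| stats latest(User) as User by number
| outputlookup file.csv
```

Alternatively, if the script can write to the Splunk server directly, dropping the CSV into the app's lookups directory (where uploaded lookup table files are stored) also works, since the file is read at search time.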
Hello,
I have a search which outputs the fields a and b. I am saving those results to a CSV, let's say output.csv. I would like to update it with the latest values of a and b, which means I do not want old/duplicate values for a and b, and I want values that don't yet exist to be appended to the CSV file. I tried to do it with a left join but was not successful.
Example: the search output below is populated and written to the CSV.
name - age
bob 23
joey 33
and from another search fields below output is populated
name - age
joey 43
So I want my output.csv files to become below format.
name - age
bob 23
joey 43
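The merge described above is often done without join: append the existing CSV to the fresh results and deduplicate, keeping the newest row per name. A sketch, assuming the base search yields name and age:

```
<your search> | table name age
| inputlookup append=true output.csv
| dedup name
| outputlookup output.csv
```

dedup keeps the first occurrence of each name; because the fresh search results come before the appended lookup rows, the new value wins (joey 43), while rows only present in the CSV (bob 23) are preserved.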
Any help?
Bests!
I have a requirement to extract only the exception-related substring from the Splunk log.
My log will be in the following format:
fetching records from AAA table
creating event to send to sqs
Publishing to SQS
Large-payload support enabled.
Exception occurred while processing rules for Feed name AAA. Skipping Exception
com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: xxxx)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
.....
Now I want to get only the exception part of the log above, like:
Exception occurred while processing rules for Feed name AAA. Skipping Exception com.amazonaws.services.sqs.model.QueueDoesNotExistException
I have tried the query below:
index=*** source=*** *Exception* | rex field=_raw "\(Exception occurred while processing rules for Feed name (?<myField>[^\)]:*)\)\("
| table myField
But it returns an empty result. Can anyone please suggest the right solution?
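For comparison, a hedged sketch: the original pattern expects literal parentheses around the message, which the raw text does not contain, and `[^\)]:*` matches at most one character. Assuming the message line and the exception class line land in the same event, something like this may work:

```
index=*** source=*** "Exception occurred while processing rules"
| rex field=_raw "(?<msg>Exception occurred while processing rules for Feed name \S+\. Skipping Exception)\s+(?<exc>[\w.]+Exception)"
| eval myField=msg." ".exc
| table myField
```

Here `\S+` captures the feed name (AAA in the sample), `\s+` crosses the line break, and `[\w.]+Exception` picks up the fully qualified class name before the colon.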
I have Docker running with the image "mlkt-container-tf-cpu" in the Deep Learning Toolkit, and I have access to the Jupyter notebook in the toolkit, but when I try to run a use case, for instance "neural network classifier", I get this error: "Error in fit command. Error while initializing algorithm 'MLTKContainer': local variable 'url' referenced before assigned"
Hi,
I created my own custom adaptive response action. The ad hoc action works.
But I don't use the cim_action.py lib in my script.py. The script works, but the invocation result is not shown.
Also, my adaptive response action result is not written.
How can I do this without using cim_action.py? Or can I do it at all?
For example: I run my ad hoc adaptive response and …
Hi All,
Can someone help me with a query that shows a macro's dependencies (e.g. dashboards, reports, etc.)? When I run that query in the search window, I need to see all the dashboards, reports, etc. in which that macro is used.
Thanks in advance...
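One common approach, sketched with assumptions (my_macro stands in for the real macro name): search the dashboard definitions via REST for the macro name, since a macro appears by name in a view's SPL:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*my_macro*"
| table title eai:acl.app
```

Repeating the pattern against /servicesNS/-/-/saved/searches (matching on the search field) covers reports and alerts. Matching on the bare name can also hit comments or similarly named macros, so treat the results as candidates.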
Hi
Once my indexer crashed with the error below:
kernel: splunkd[] general protection ip:xyz error:0 in splunkd[]
And after restarting the indexer, my parsing, merging, and typing queues are always full, with data not making it to the indexing queue.
How to resolve this issue ?
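To see which queues are blocking after the crash, a standard internal-metrics search (assuming _internal is searchable on this indexer):

```
index=_internal source=*metrics.log* group=queue blocked=true
| timechart count by name
```

Full parsing/merging/typing queues with a starved indexing queue usually point at the indexing layer itself, often disk I/O problems or a damaged bucket being replayed after the crash, rather than at parsing-time configuration.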
Hi! I have different events, and for every event I have a list of reasons. I want to display only the three latest (by time) reasons for each event. I use this code:
| stats values(Reason) as Reason, values(_time) AS Time by event
| sort Reason by Time
| eval Reason=mvindex(Reason,0,2)
| table event, Time, Reason
This doesn't work. What should I do to make it work?
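A possible reason the code above fails: values() returns a lexicographically sorted, deduplicated multivalue, so the pairing with the times is lost, and `sort Reason by Time` is not valid SPL. A sketch that keeps only the three newest reasons per event:

```
| sort 0 event -_time
| streamstats count as rank by event
| where rank <= 3
| stats max(_time) as Time list(Reason) as Reason by event
| eval Time=strftime(Time, "%d/%m/%Y %H.%M.%S")
| table event Time Reason
```

sort 0 avoids the default 10,000-result truncation, and streamstats numbers each event's rows in descending time order so the where clause keeps the three most recent.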
Hello!
When I add the capability edit_dist_peer to roles, two of the capabilities change their status to disabled (schedule_rtsearch and dispatch_rest_to_indexers).
So i have config file like this:
dispatch_rest_to_indexers = disabled
edit_dist_peer = enabled
schedule_rtsearch = disabled
I can't find any information about this in documentation.
Hi everyone,
I need to know about the last run of the command splunk restart. Is there a way I can find the username of the person who ran the command?
Hi All,
I have created 2 certificates, one for forwarders and one for indexers, and enabled an SSL port as the receiving port on the cluster peers. (Forwarder-to-indexer communication is working fine.)
Now:
How can I forward search head cluster data to the indexers? Do I need to use the forwarder certificate on the SHC?
Or can we have 2 receiving ports (one SSL and one plain)?
Please advise.
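For forwarding SHC data to the indexers over the existing SSL receiving port, each search head typically gets an outputs.conf along the lines of the sketch below (server names, paths, and group name are assumptions; the certificate can be the forwarder one, as long as the indexers trust its CA):

```
# outputs.conf on each search head; names and paths are assumptions
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1:9997, idx2:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslPassword = <key password>
sslVerifyServerCert = false
```

On the receiving side it is also possible to run two ports in parallel, e.g. a [splunktcp-ssl:...] stanza for SSL senders and a plain [splunktcp:...] stanza on a different port, so a mixed setup is an option.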
Hi guys,
I am having an issue receiving mail for triggered alerts on our mail server. We have at least 20+ alerts configured, but none of the alert emails are arriving, and this is hampering analysis of our project work. Can you please help troubleshoot the issue? We have 3 search heads, 2 indexers, a deployer, a CM and deployment server, and a license master. We have the default configuration of localhost as the SMTP mail server.
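A first troubleshooting step, assuming the internal index is searchable: check the sendemail logs on the search heads for SMTP errors:

```
index=_internal source=*python.log* sendemail (ERROR OR WARNING)
```

Also note that the default localhost SMTP setting only works if a mail transfer agent is actually listening on port 25 of each search head; if not, Settings > Server settings > Email settings needs to point at a real mail relay.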