All Posts


Assuming these are numeric (not strings), you can use streamstats:

| streamstats window=2 range(USAGE) as difference
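For example, a minimal sketch (the index name my_usage_index and the field USAGE are placeholders for your own data; the sort ensures events are in time order before streamstats runs):

index=my_usage_index | sort 0 _time | streamstats window=2 range(USAGE) as difference | table _time USAGE difference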
Hello, how can I ensure the data being sent to cool_index is rolled to cold when the data is 120 days old? The config I'll use:

[cool_index]
homePath = volume:hotwarm/cool_index/db
coldPath = volume:cold/cool_index/colddb
thawedPath = $SPLUNK_DB/cool_index/thaweddb
frozenTimePeriodInSecs = 10368000 # 120 day retention
maxTotalDataSizeMB = 60000
maxDataSize = auto
repFactor = auto

Am I missing something?
Where are you applying the Event Hubs Data Receiver role? I usually apply it at the Subscription level so that any other namespaces created in the same subscription inherit the necessary permissions. There is a walkthrough here (Step 4) => https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data
The SSL error you are getting may be caused by a private certificate in the certificate chain. I have also seen similar issues when a network device injects a private cert into outbound traffic.
Actually, it may be that something is wrong with your CIM Validator. Even if I try to search a non-existent index, it still populates the counters at the top and shows rows of "no extracted values found". Which version of CIM Validator are you using? Perhaps you could try backing up the current CIM Validator app, then re-installing it.
Sure thing. For testing I am using this SPL (time range set to "Last 30 Days"):

index=_internal | table _time sourcetype | head 5 | eval othertestfield="test1" | eval _time = now() + 3600 | collect index=summary testmode=true addtime=true

It produces the following output:

_time | sourcetype | _raw | othertestfield
2024-03-12T22:50:05.000+01:00 | splunkd | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1
2024-03-12T22:50:05.000+01:00 | splunkd_access | 03/12/2024 22:50:05 +0100, info_min_time=1707606000.000, info_max_time=1710276605.000, info_search_time=1710276605.390, othertestfield=test1, orig_sourcetype=splunkd_access | test1

I ran the search at 21:50 CET, and the _time field shows the current time plus 3600 seconds.
Per the docs, it must be in system/local: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Web-featuresconf

# To use one or more of these configurations, copy the configuration block into
# the web-features.conf file located in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk software after you make changes to this setting to enable configurations.

BTW, btool isn't always the best way to check settings, as it just reads the files on disk and parses the data there; the configuration files seen by btool may or may not be what the running instance has actually loaded.
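If you want to see what the running instance has loaded (rather than what is on disk), a minimal sketch using the | rest command, assuming the endpoint for web-features.conf follows the usual /services/configs/conf-<file> naming:

| rest /services/configs/conf-web-features splunk_server=local | table title eai:acl.app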
Glad to help
Can you search the internal index, specifically splunkd, to be sure that your configuration stanza is being parsed and that the TailingProcessor is adding a watch on the file path? E.g. index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" <logfilename>
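To narrow it to TailingProcessor messages specifically, a sketch (the file path below is just a placeholder for your monitored log):

index=_internal sourcetype=splunkd component=TailingProcessor "/var/log/myapp/app.log"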
I have a single-instance Splunk environment where the license is 100 GB. There is another single instance using the same license, and we get around 6 GB of data per day combined. Instance A is very fast but instance B is very slow (both have the same resources). All searches and dashboards are really slow. For instance, if I run a simple stats search over 24 hrs, it takes 25 seconds compared to the other instance, which takes 2 seconds. I checked the job inspector, which was showing:
dispatch.evaluate.search = 12.84
dispatch.fetch.rcp.phase_0 = 7.78
I want to know where I should start checking on the host and what steps should be taken.
Thank you @marnall.  You are the master!
You can indeed do this with sed and rex: | rex mode=sed field=<yourfield> "s/(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2}).*/\1\/\2\/\3 \4:\5:\6/"   Every captured group in the first part of the ... See more...
You can indeed do this with sed and rex:

| rex mode=sed field=<yourfield> "s/(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2}).*/\1\/\2\/\3 \4:\5:\6/"

Every captured group in the first part of the sed expression can be referenced with a backslash plus the group number, e.g. "\1" for group 1, "\2" for group 2. Everything not captured can be discarded. Forward slashes in the replacement need to be escaped.
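To try it out without touching real data, here is a self-contained sketch using makeresults with the sample timestamp from your question:

| makeresults
| eval ts="20240307105530.358753-360"
| rex mode=sed field=ts "s/(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2}).*/\1\/\2\/\3 \4:\5:\6/"
| table ts

This should return ts as 2024/03/07 10:55:30.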
This is an odd acceleration behavior that has us stumped... If some of you have worked with the Qualys Technology Add-on before, Qualys dumps its knowledge base into a CSV file, which we converted to a KV store with the following collections.conf acceleration enabled. The knowledge base has approx. 137,000 rows of about 20 columns.

[qualys_kb_kvstore]
accelerated_fields.QID_accel = {"QID": 1}
replicate = true

Then we ran the following query with lookup local=true and local=false (default). According to the Job Inspector there was no real difference between running the lookup on the search head vs. the indexers. Without the lookup command, the query takes 3 seconds to complete over 17 million events. With lookup added, it takes an extra 165 seconds for some reason, even with the acceleration turned on.

index=<removed> (sourcetype="qualys:hostDetection" OR sourcetype="qualys_vm_detection") "HOSTVULN"
| fields _time HOST_ID QID
| stats count by HOST_ID, QID
| lookup qualys_kb_kvstore QID AS QID OUTPUTNEW PATCHABLE
| where PATCHABLE="YES"
| stats dc(HOST_ID) ```Number of patchable hosts!```

An idea I am going to try is to add PATCHABLE as another accelerated field and see if that changes anything. This change will require me to wait until tomorrow.

accelerated_fields.QID_accel = {"QID": 1, "PATCHABLE": 1}

Is there something we're missing to help avoid the lookup taking an extra 2-3 minutes?
I know this is an old dead question ... but the issue still exists! The problem is that the "alert_actions_base.py" wrapper file that is put in TA/bin/ta_name/alert_actions_base.py has get_user_credential defined to wrap get_credential_by_username, and does not provide a wrapper for by_account_id. Adding the definition below for get_user_credential_by_account_id from ./aob_py3/splunktaucclib/alert_actions_base.py into TA/bin/ta_name/alert_actions_base.py solves the issue and allows an alert action to request credentials by account id.

def get_user_credential_by_account_id(self, account_id):
    """
    if the account_id exists, return
        {
            "username": username,
            "password": credential
        }
    """
    return self.setup_util.get_credential_by_id(account_id)

Would love to see this change integrated into the next release of the Add-on Builder!
I have a weird date/time value: 20240307105530.358753-360
I would like to make it more user friendly, 2024/03/07 10:55:30 (%Y/%m/%d %H:%M:%S), and drop the rest.
I know you can use sed for this, however, I am not familiar with sed syntax. For example:
| rex mode=sed field=_raw "s//g"
Any sed gurus out there?
I'm collecting all other logs, e.g. wineventlogs and splunkd logs.
The inputs.conf is accurate.
The splunk user has full access to the file.

What are some non-Splunk reasons that would prevent a file from being monitored?
Thank you @ITWhisperer. Quick follow-up question: instead of index=test, I have a data model that has limited fields. How would it work then?

| tstats dc(Changeset) as count, values(Changeset) as name_versions from datamodel=abc by source, name
| where count>1
| stats min(name_versions) by source, name

I now want to delete the events. How could I?
It is a report. I did try the format option in the UI, but this is lost when the report runs again with different dates.
If I had to write a document for myself on basic learning of Splunk: to create a dashboard I can either use inputs like index, source, and source fields, or I can give a data set. Is that right? Can I write it like this, or am I wrong with the side headings?

Understanding of input data: Explore different methods of data input into Splunk, such as ingesting data from files, network ports, or APIs.
Understanding of data domains: Discover how to efficiently structure your data in Splunk using data models to drive analysis.
If it is in a dashboard, you could try the thousands separator option
Delete is a non-recoverable command, so great care must be taken when using it. To identify the events to delete, try something like this:

index=test | eventstats min(change_set) as min_change_set by source | where change_set = min_change_set
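Once you have reviewed those results and are sure they are the events you want gone, a hedged sketch of the final step (this assumes your user has been granted the can_delete role, which no user has by default; also note that delete only hides events from search, it does not reclaim disk space):

index=test | eventstats min(change_set) as min_change_set by source | where change_set = min_change_set | delete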