All Topics

I have a few error messages in ES about searches being delayed. I used the solution below, but it only surfaces a few of them, not all. On the main ES page under Messages I see numerous errors about delayed searches that do not show up using the dashboard at https://github.com/dpaper-splunk/public/blob/master/dashboards/extended_search_reporting.xml (a helpful dashboard, by the way). Please advise. Thank you, and happy holidays.
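Not a full answer, but one place to look is the scheduler log itself. A minimal sketch, assuming the standard scheduler log fields in index=_internal (status values and field names can differ by version):

  index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
  | fillnull value="-" reason
  | stats count by savedsearch_name app status reason
  | sort - count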
I'm looking to convert the results of these fields to the PST time zone, so that I can fetch events based on these fields rather than on the event timestamp. The two fields below are in GMT format. Please help. FIRST_FOUND_DATETIME="2021-12-12T08:30:29Z", LAST_FOUND_DATETIME="2021-12-20T16:12:28Z"
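A rough sketch of one approach: parse the GMT strings with strptime and shift by a fixed offset. Note this assumes PST is always UTC-8, which ignores daylight saving (UTC-7 part of the year):

  | eval first_found=strptime(FIRST_FOUND_DATETIME, "%Y-%m-%dT%H:%M:%SZ")
  | eval last_found=strptime(LAST_FOUND_DATETIME, "%Y-%m-%dT%H:%M:%SZ")
  | eval first_found_pst=strftime(first_found - 8*3600, "%Y-%m-%d %H:%M:%S")
  | where first_found >= relative_time(now(), "-24h")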
Hi Team, I have 10 events: the start event is at 10:00 AM, the next events are at 10:08 AM, 10:15 AM, 10:18 AM, and so on, and the end event is at 10:56 AM. I am able to find the start and end event times using min(_time) and max(_time), but I also need to find the first modified time, i.e. the event that occurred at 10:08 AM. Please assist.
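One way, assuming you want the second-earliest event overall: sort by time and pick the second row with streamstats (index=your_index is a placeholder):

  index=your_index
  | sort 0 _time
  | streamstats count as row
  | where row=2
  | eval first_modified_time=strftime(_time, "%Y-%m-%d %H:%M:%S")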
We followed the steps provided at https://www.splunk.com/en_us/blog/bulletins/splunk-security-advisory-for-apache-log4j-cve-2021-44228.html, but it seems that the files are being recreated. Can anyone please help with that? I also wanted to know whether replacing just the Apache Log4j version, rather than upgrading Splunk, could help mitigate the issue, and what the steps would be if I replace it.
How do I view the details and find the causes? Thanks, and happy holidays!
We have the CEF logs below coming in from a device, where a few fields don't have any value, like cs2 below:

  CEF:0|vendor|product|1.1.0.15361|6099|DirectoryAssetSyncSucceeded|1|cn1label=EventUserId cn1=-3 cs1label=EventUserDisplayName cs1=Automated System cs2label=EventUserDomainName cs2= cn2label=AssetId cn2=16699 cs3label=AssetName cs3=ABC.LOCAL AD cn3label=AssetPartitionId cn3=7 cs4label=AssetPartitionName cs4=XYZ.LOCAL partition cs5label=TaskId cs5=9ec9aa87-61b9-11ec-926f-3123456edt

How can we assign a 'NULL' value to such fields using SEDCMD, or in any other possible way?
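A hedged sketch of the SEDCMD approach in props.conf on the parsing tier. The [cef] stanza name is a placeholder for your actual sourcetype, and the regexes assume an empty value is always an = sign followed immediately by whitespace or end of event:

  [cef]
  SEDCMD-null_empty_mid = s/=(\s)/=NULL\1/g
  SEDCMD-null_empty_end = s/=$/=NULL/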
We need to capture field values for the CEF log pattern below:

  CEF:0|vendor|product|1.1.0.15361|6099|DirectoryAssetSyncSucceeded|1|cn1label=EventUserId cn1=-3 cs1label=EventUserDisplayName cs1=Automated System cs2label=EventUserDomainName cs2= cn2label=AssetId cn2=16699 cs3label=AssetName cs3=ABC.LOCAL AD cn3label=AssetPartitionId cn3=7 cs4label=AssetPartitionName cs4=XYZ.LOCAL partition cs5label=TaskId cs5=9ec9aa87-61b9-11ec-926f-3123456edt

I am using the regex below:

  (?:([\d\w]+)label=(?<_KEY_1>\S+))(?=.*\1=(?<_VAL_1>[^=]+)(?=$|\s+[\w\d]+=))

Unfortunately, it does not capture the blank one, like cs2=, which doesn't contain anything, so EventUserDomainName should be extracted as blank. Kindly suggest.
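The + quantifier in (?<_VAL_1>[^=]+) requires at least one character, so an empty value can never match. One possible adjustment, untested against every CEF edge case, is simply allowing zero characters with *:

  (?:([\d\w]+)label=(?<_KEY_1>\S+))(?=.*\1=(?<_VAL_1>[^=]*)(?=$|\s+[\w\d]+=))

For cs2= the value group then matches empty, because the trailing lookahead is satisfied by the following " cn2label=" key.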
Hello, during these crazy Log4j times I would like to ask you for advice. I am new to Splunk and security, but I managed to create a dashboard for Log4j attempts. I can see some Log4j scanning activity; the response codes are 404, 400, etc., which I am not really worried about. But sometimes I see code 200, which as far as I can tell means the request has succeeded: for GET, the resource has been fetched and is transmitted in the message body. I added a screenshot. I am wondering how to investigate this. Should I check for outbound traffic? What is the best query? So far I just have index=firewall 170.210.45.163 AND 31.131.16.127. This is my university lab environment, so I just want to develop myself and learn: what would you do if you saw such a string? Many thanks.
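For the outbound check, one hedged starting point: look for the targeted server initiating connections back out to the scanner or to typical JNDI callback ports shortly after the 200. The field names, placeholder IPs, and port list here are assumptions to adapt to your firewall data:

  index=firewall src_ip="<your_server_ip>" (dest_ip="<scanner_ip>" OR dest_port=389 OR dest_port=636 OR dest_port=1099 OR dest_port=1389)
  | stats count by src_ip dest_ip dest_port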
I set up an alert that runs every 15 minutes and triggers if the count > 0, but I want the email action to fire only on the second consecutive trigger. For example: the alert runs every 15 minutes, but the email should only be sent when it has triggered for two consecutive 15-minute windows. Is that possible? Please help me out. Thanks, Veeru
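One hedged pattern: keep the 15-minute schedule but have the search look back over the last two 15-minute windows and return a result only when both had hits, then trigger on number of results > 0. A sketch, with index=your_index as a placeholder:

  index=your_index earliest=-30m@m latest=@m
  | bin _time span=15m
  | stats count by _time
  | stats count as nonempty_windows
  | where nonempty_windows >= 2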
While the add-on is trying to fetch data from Salesforce, it fails with the error below: reason=HTTP Error The read operation timed out Traceback: cloudconnectlib.core.exceptions.HTTPError: HTTP Error The read operation timed out
Hi, I have events that contain three fields: "StartDate", "Value_per_month", and "Nr_of_Month". They basically describe a monthly financial flow that begins at "StartDate" and ends after "Nr_of_Month" months. The goal is to show the sum of "Value_per_month" for each month across all events. In most cases the dates are in the future, so it will be a bit tricky to get this to work. However, at least a table view would be great, with some basic visualization on top. I thought I could create a field for each month, for example "value_yyyy-mm", assign the value to each, and then sum up the values in each field across all events. However, I have not found a way to do this dynamically in a loop X times, based on the variable "Nr_of_Month". I have checked combinations of eval, makeresults, foreach, gentimes, etc. Any basic idea on how to approach this would be welcome. Many thanks in advance.
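A rough sketch of one approach: use mvrange and mvexpand to fan each event out into one row per month, then aggregate. The StartDate format string is an assumption, and the month arithmetic below avoids relying on calendar-aware time math:

  | eval start_epoch=strptime(StartDate, "%Y-%m-%d")
  | eval month_offset=mvrange(0, tonumber(Nr_of_Month))
  | mvexpand month_offset
  | eval m0=tonumber(strftime(start_epoch, "%Y")) * 12 + tonumber(strftime(start_epoch, "%m")) - 1 + month_offset
  | eval month=printf("%04d-%02d", floor(m0 / 12), (m0 % 12) + 1)
  | stats sum(Value_per_month) as monthly_total by month
  | sort 0 month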
Hello all, can I generate a scheduled PDF delivery from the command line? I want to export a dashboard as a PDF file and send it via email. Is that possible? If so, may I know the solution?
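Not scheduled delivery as such, but the pdfgen REST endpoint can render a dashboard to PDF on demand, which you could then cron and mail yourself. A sketch, where the credentials, host, dashboard name, and app namespace are placeholders and the exact parameters may vary by version:

  curl -k -u admin:changeme "https://localhost:8089/services/pdfgen/render?input-dashboard=my_dashboard&namespace=search" -o my_dashboard.pdf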
I have an index B, which has job_name and job_status details, and another index A, which has ticket_number and job_name. Index A needs to be joined with index B to get ticket_number, job_name, and job_status. I tried join type=inner on job_name, but I get empty results for job_status. What might be going wrong?
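join can silently drop rows when it hits subsearch limits or when the join key values don't match exactly (case, padding). A stats-based alternative to sketch from, with the index names as placeholders:

  (index=A) OR (index=B)
  | stats values(ticket_number) as ticket_number, values(job_status) as job_status by job_name
  | where isnotnull(ticket_number) AND isnotnull(job_status)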
Hi everyone, currently we're dealing with an odd one on the Enterprise search head (we're using 8.2.3). We have multiple roles that grant access to certain indexes, and sometimes, seemingly at random, people in the company start complaining that their searches are forbidden. After some quick investigation I've seen that in the roles these users have, out of their (for example's sake) 5 allowed indexes, one had been unchecked (i.e. removed). Do you know why this behavior is happening? Cheers, Darius
Hi, I have a table like this:

  part_of_url  count
  /test1       1
  /test2       2
  /test3       3

I want a drilldown link that opens a new tab when clicking a cell in the "part_of_url" column. I tried this code:

  <drilldown>
    <condition field="part_of_url">
      <link target="_blank">
        <![CDATA[https://url.net$click.value$-/results/ ]]>
      </link>
    </condition>
  </drilldown>

But it did not work. Can you help me?
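A hedged variant to try: in Simple XML tables, $click.value$ holds the first column of the clicked row, while $row.part_of_url$ holds that column's value in the clicked row, which is usually what this drilldown wants. Also note the trailing space inside the CDATA ends up in the URL:

  <drilldown>
    <condition field="part_of_url">
      <link target="_blank">https://url.net$row.part_of_url$-/results/</link>
    </condition>
  </drilldown>

With a cell value of /test1 this would open https://url.net/test1-/results/ in a new tab.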
Hi, I have a table like this:

  test     count
  test AA  1
  test AB  2
  test C   3

I want to merge "test AA" and "test AB", which together give me a count of 3. What I want:

  test    count
  test A  3
  test C  3

How can I do that?
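One sketch, assuming anything starting with "test A" should collapse into "test A": normalize the value with eval, then re-aggregate:

  | eval test=if(match(test, "^test A"), "test A", test)
  | stats sum(count) as count by test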
We are transferring logs using log drains and a token created with the HTTP Event Collector. We need to filter the data entering Splunk Cloud: there are a few keywords we want to eliminate altogether to reduce the size of our ingestion. Around 50% of the data being ingested is not required, and it comes from a third party whose log levels we cannot control. How can we drop events containing these keywords and prevent them from being ingested into Splunk? Or is there a way to filter the data after it arrives in Splunk to reduce the ingestion size? Thanks, Dee
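If the data passes through a parsing tier you control (for example a heavy forwarder in front of Splunk Cloud), the classic pattern is routing matching events to nullQueue. A sketch where the sourcetype and keywords are placeholders; note that the HEC /event endpoint can bypass some parsing stages, so test against your actual ingestion path:

  props.conf:
  [your_sourcetype]
  TRANSFORMS-drop_noise = drop_noise_keywords

  transforms.conf:
  [drop_noise_keywords]
  REGEX = keyword1|keyword2|keyword3
  DEST_KEY = queue
  FORMAT = nullQueue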
I have data coming in with a field "Log_date" in DD/MM/YYYY format. I need to show the last 7 days of data from today in the dashboard. I used the condition below to filter the last 7 days, but it returns no results:

  where Log_date>=relative_time(Log_date, "-6d@d")

I need to use only the date from the field "Log_date". Can someone please guide me?
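Two issues stand out: the comparison computes the relative time from Log_date itself rather than from now(), and Log_date is a string, not an epoch, so the >= comparison is not numeric. A sketch of the usual fix:

  | eval log_epoch=strptime(Log_date, "%d/%m/%Y")
  | where log_epoch >= relative_time(now(), "-6d@d")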
Hi, I am new to Splunk and I'm running both Splunk Enterprise and a Universal Forwarder in Docker containers (on the same host for now). My forwarder keeps shutting down, and I am not quite sure why. Probably a configuration issue. This is the last error I find in the logs before the shutdown:

  TASK [splunk_common : Test basic https endpoint] *******************************
  fatal: [localhost]: FAILED! => {
      "attempts": 60,
      "changed": false,
      "elapsed": 10,
      "failed_when_result": true,
      "redirected": false,
      "status": -1,
      "url": "https://127.0.0.1:8089"
  }
  MSG:
  Status code was -1 and not [200, 404]: Request failed: <urlopen error _ssl.c:1074: The handshake operation timed out>
  ...ignoring

After that there is a bit more in the logs before it finally stops working:

  Monday 20 December 2021 09:57:02 +0000 (0:17:30.719) 0:18:04.687 *******
  TASK [splunk_common : Set url prefix for future REST calls] ********************
  ok: [localhost]
  Monday 20 December 2021 09:57:02 +0000 (0:00:00.088) 0:18:04.775 *******
  included: /opt/ansible/roles/splunk_common/tasks/clean_user_seed.yml for localhost
  Monday 20 December 2021 09:57:02 +0000 (0:00:00.214) 0:18:04.989 *******
  [WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
  TASK [splunk_common : Remove user-seed.conf] ***********************************
  ok: [localhost]
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.653) 0:18:05.643 *******
  included: /opt/ansible/roles/splunk_common/tasks/add_splunk_license.yml for localhost
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.228) 0:18:05.871 *******
  TASK [splunk_common : Initialize licenses array] *******************************
  ok: [localhost]
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.082) 0:18:05.954 *******
  TASK [splunk_common : Determine available licenses] ****************************
  ok: [localhost] => (item=splunk.lic)
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.117) 0:18:06.072 *******
  included: /opt/ansible/roles/splunk_common/tasks/apply_licenses.yml for localhost => (item=splunk.lic)
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.162) 0:18:06.235 *******
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.074) 0:18:06.309 *******
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.078) 0:18:06.387 *******
  Monday 20 December 2021 09:57:03 +0000 (0:00:00.077) 0:18:06.465 *******
  included: /opt/ansible/roles/splunk_common/tasks/licenses/add_license.yml for localhost
  Monday 20 December 2021 09:57:04 +0000 (0:00:00.141) 0:18:06.606 *******
  Monday 20 December 2021 09:57:04 +0000 (0:00:00.079) 0:18:06.686 *******
  [WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
  TASK [splunk_common : Ensure license path] *************************************
  ok: [localhost]
  Monday 20 December 2021 09:57:04 +0000 (0:00:00.622) 0:18:07.308 *******
  Monday 20 December 2021 09:57:04 +0000 (0:00:00.074) 0:18:07.382 *******
  Monday 20 December 2021 09:57:04 +0000 (0:00:00.108) 0:18:07.491 *******
  Monday 20 December 2021 09:57:05 +0000 (0:00:00.076) 0:18:07.568 *******
  included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/set_as_hec_receiver.yml for localhost
  Monday 20 December 2021 09:57:05 +0000 (0:00:00.118) 0:18:07.686 *******
  [WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user

outputs.conf:

  [indexAndForward]
  index = false

  [tcpout]
  defaultGroup = default-autolb-group

  [tcpout:default-autolb-group]
  server = splunk:9997
  disabled = false

inputs.conf:

  [monitor:///docker/py-sandbox/log/*.json]
  disabled = 0

  [monitor:///docker/fa/log/*.json]
  disabled = 0

server.conf (I removed the keys/SSL password; I never created those myself, though):

  [general]
  serverName = synoUniFW
  pass4SymmKey = <somekey>

  [sslConfig]
  sslPassword = <some pw>

  [lmpool:auto_generated_pool_forwarder]
  description = auto_generated_pool_forwarder
  quota = MAX
  slaves = *
  stack_id = forwarder

  [lmpool:auto_generated_pool_free]
  description = auto_generated_pool_free
  quota = MAX
  slaves = *
  stack_id = free

In Splunk Enterprise I have opened port 9997, and it actually receives logs, until it doesn't... What am I doing wrong? Thanks!
Hello all, I would like to schedule a PDF delivery of a dashboard bi-weekly (every two weeks) on Friday. But if I use "0 0 */14 * *", the schedule doesn't run exactly every 14 days: it runs only on the 14th and 29th days of the month. And I want it to run only on a Friday every two weeks, so I tried the cron notation "0 0 */14 * 5". But then the schedule runs not only on the 14th and 29th of the month, but also on every Friday. What notation would meet both conditions, every 14 days and on Friday? And also, is it possible to run a cron schedule at the end of every month?
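Standard cron cannot express "every 14 days on a Friday": when both day-of-month and day-of-week are restricted, cron runs on either match rather than intersecting them. A common hedged workaround for a scheduled report (it does not help a pure dashboard PDF schedule) is a weekly Friday cron, 0 0 * * 5, plus a guard on the ISO week number inside the search so every other run returns nothing:

  ... | where tonumber(strftime(now(), "%V")) % 2 = 0

For end of month, a similar trick: schedule on days 28-31 with 0 0 28-31 * * and keep only the run where tomorrow is the 1st:

  ... | where strftime(relative_time(now(), "+1d@d"), "%d") == "01"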