All Posts

How do I update test_MID_IP.csv with the output IPs, so that the next run uses the updated list?
index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP AS IP OUTPUT test_IP
| eval match=if('IP' == test_IP, "yes", "no")
| search match=no
| stats count by IP
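One possible approach, sketched below under the assumption that test_MID_IP.csv has a single test_IP column (as the lookup call above implies), is to append the unmatched IPs back to the lookup with outputlookup:
index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP AS IP OUTPUT test_IP
| eval match=if('IP' == test_IP, "yes", "no")
| search match=no
| stats count by IP
| rename IP AS test_IP
| fields test_IP
| outputlookup append=true test_MID_IP.csv
With append=true the existing rows are kept and only the new results are added; without it, outputlookup overwrites the file with just the results of this search.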
Hi, I guess the question I still need an answer to is, how can I apply a time restriction to the START event, but not the END event? Cheers, David
The transforms that set sourcetypes have a bug. The regex uses a capture group that is not referenced in the FORMAT statement, and in that case Splunk does not return a match on the regex. To get this to work it is necessary to change the regex to a non-capturing group, e.g. for:
[auditdclasses2]
REGEX = type\=(ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux:audit:ocsf:finding
the REGEX line must be changed to
REGEX = type\=(?:ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)
Then it works. The same applies to the other stanzas, auditdclasses1 - 6.
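Put together, the corrected stanza (using the values from the excerpt above) would read:
[auditdclasses2]
REGEX = type\=(?:ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux:audit:ocsf:finding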
Hi, thank you so much for the suggestion. Is it possible to achieve this with a Splunk search, since it is expected to be a simple alert configuration due to access limitations? Please share if you have any suggestions for a Splunk query, which would greatly help!
Even after removing the escape characters I am still getting an error, now "Error in 'EvalCommand': The expression is malformed." Updated definition:
strftime($field$ - (strptime(strftime($field$,"%Y-%m-%dT%H:%M:%SZ"),"%Y-%m-%dT%H:%M:%SZ") - strptime(strftime($field$,"%Y-%m-%dT%H:%M:%S"),"%Y-%m-%dT%H:%M:%S")),"$format$")
Also, in the "validation expression" while creating the macro, I wrote iseval=1.
Hi, how can I collect server logs without installing the Splunk Universal Forwarder? The team that owns the server is not interested in installing a UF. Please let me know if there is any other way to collect the data, and how. Thanks, Karthi
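For illustration only, one common UF-less pattern is to have the server send its logs over syslog to a Splunk instance listening on a network port; a minimal inputs.conf sketch for the receiving side, where the port, index, and sourcetype are placeholder assumptions:
[udp://514]
index = server_logs
sourcetype = syslog
connection_host = ip
HTTP Event Collector is another agentless option if the server can POST its logs over HTTPS.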
I can successfully send data to Splunk Cloud using the HEC webhook via a Curl command. However, when attempting to send events from Splunk Observability to Splunk Cloud using the Generic Webhook method, it doesn't seem to function properly.
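For reference, the working curl test mentioned above typically looks something like the sketch below, where the endpoint and token are placeholders rather than the actual values:
curl -k "https://<your-hec-endpoint>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"event": "hello from curl", "sourcetype": "manual"}'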
Hi All, from the Splunk articles I know it supports Docker / Portainer hosting. I would like to check whether Splunk Enterprise officially supports hosting in Kubernetes.
Does Splunk DBConnect support gMSA accounts? If so, when configuring the Splunk Identity, do I leave the password field empty?
For CIM compliance I am trying to fill the action field from some logs using a case(). This works in search but not in a calculated field; I see some others had similar issues but there has not been an answer on here. I am on Cloud so I cannot directly change the confs, but calculated fields have been working fine so far. Simple case statements that do not involve multivalue fields with objects (e.g. category instead of entities{}.remediationStatus) work as expected in calculated fields. The events have a setup similar to this:
{"entities": [{"name": "somename"}, {"name": "other naem", "remediationStatus": "Prevented"}]}
Search (WORKS):
eval action=case('entities{}.remediationStatus'=="Prevented", "blocked", 'entities{}.deliveryAction'=="Blocked", "blocked", 'entities{}.deliveryAction'=="DeliveredAsSpam", "blocked", true(), "allowed")
Calculated field (doesn't work):
action=case('entities{}.remediationStatus'=="Prevented", "blocked", 'entities{}.deliveryAction'=="Blocked", "blocked", 'entities{}.deliveryAction'=="DeliveredAsSpam", "blocked", true(), "allowed")
In the GUI > Data > Data Availability, click the green Base Line Search button; that will generate the lookup. You can then go back to Data Availability and it should display results.
I believe you don't have to escape the double quotes. Check the examples in the docs: https://docs.splunk.com/Documentation/Splunk/9.2.1/admin/macrosconf
Could this contribute to the slow performance when searching for knowledge objects in our deployment? We have over 2000 user directories in $SPLUNK_HOME/etc/users on our SHs, representing every user who has ever existed since we started with Splunk. When we open Settings -> Searches, Reports, and Alerts it can take over a minute to find a search.
Hi, do you have a distributed architecture or just a single instance? Did you set the volume settings in indexes.conf?
[volume:primary]
path = /path/to/storage/partition
maxVolumeDataSizeMB = 5000000
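For the volume to take effect, each index then references it in its paths; a minimal sketch with a placeholder index name and size:
[my_index]
homePath = volume:primary/my_index/db
coldPath = volume:primary/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
maxTotalDataSizeMB = 500000
Note that thawedPath cannot reference a volume, so it gets a plain path.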
Splunk has to pull data from somewhere, logs or an API; if you missed the scan, the data should still reside somewhere in the Nessus system. You would need to look at the inputs configuration and see if there is an option to collect historical data from the Nessus system. I would start by referring to the documentation and seeing if that can help you: https://docs.tenable.com/integrations/Splunk/Content/Splunk2/CreateInput.htm
Hello team, I am trying to create a macro and then use it in my Splunk dashboard. The purpose is to get the time of the entered input in the dashboard in UTC only, irrespective of the user's time setting in Splunk. My macro is:
[strftime_utc(2)]
args = field, format
definition = strftime($field$ - (strptime(strftime($field$, \"%Y-%m-%dT%H:%M:%SZ\"), \"%Y-%m-%dT%H:%M:%S%Z\") - strptime(strftime($field$, \"%Y-%m-%dT%H:%M:%S\"), \"%Y-%m-%dT%H:%M:%S\")), \"$format$\")
and my search looks like:
*My query* | eval utc_time=`strftime_utc(_time, "%Y-%m-%dT%H:%M:%SZ")`
so that the output is always in UTC. But I am getting the error below:
Error in 'eval' command: The expression is malformed. An unexpected character is reached at '\"%Y-%m-%dT%H:%M:%SZ\"), \"%Y-%m-%dT%H:%M:%SZ\") - strptime(strftime(_time, \"%Y-%m-%dT%H:%M:%S\"), \"%Y-%m-%dT%H:%M:%S\")), \"%Y-%m-%dT%H:%M:%SZ\"))'.
How can I resolve this? Any help is appreciated. Thanks
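Following the suggestion elsewhere in this thread about not escaping the double quotes, the macros.conf stanza might look like the sketch below (same expression, quotes unescaped; this is illustrative rather than a verified fix):
[strftime_utc(2)]
args = field, format
definition = strftime($field$ - (strptime(strftime($field$, "%Y-%m-%dT%H:%M:%SZ"), "%Y-%m-%dT%H:%M:%S%Z") - strptime(strftime($field$, "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%dT%H:%M:%S")), "$format$")
Since the definition is substituted into the search as-is rather than evaluated, iseval would normally be left unset (it defaults to 0).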
I suspect that they have made some changes to the TA add-on code and Python scripts (universal_session.py). I would contact them directly and see if you can get any further information. Disabling comes with security risks, and it is most likely done within the Python code. But I understand you have self-signed ones and should have options, so seeking their advice might be the best course of action; hopefully they can get the TA developer to give you further help: support@nozominetworks.com
Hi, can you paste your confs here? Usually the proper way of doing it would look something like this:
transforms.conf
[filter_some_events]
REGEX = <regex_that_matches_the_events_you_want>
DEST_KEY = _MetaData:Index
FORMAT = <your_index>
props.conf
[<sourcetype_stanza>]
...other_props_configs...
TRANSFORMS-filter_name = filter_some_events
When navigating to "ESS" -> "Data" -> "Data Availability", I get the following error:
>>> Error in 'lookup' command: Could not construct lookup 'SSE-data_availability_latency_status.csv, productId'. See search.log for more details. <<<
I can find the definition of SSE-data_availability_latency_status in "Lookups" -> "Lookup definitions". However, it looks like SSE-data_availability_latency_status.csv doesn't exist:
>>> | inputlookup SSE-data_availability_latency_status.csv --> The lookup table 'SSE-data_availability_latency_status.csv' requires a .csv or KV store lookup definition. <<<
I'm using Splunk Cloud 9.1.2312.102 and ESS 3.8.0. Thanks for your reply in advance!
Hi @dmitch, thank you for answering. I had already tested that in Staging and it works. However, we need the integration with Splunk Cloud Platform in PROD, so we cannot skip TLS verification as it could be a security risk. Is it possible to fix this issue on Splunk's side, i.e. sign the Trial version "prd-p-e7xnh.splunkcloud.com:8088" with the same certificate as the Paid version "prd-p-e7xnh.splunkcloud.com:443"? We would really appreciate this fix from Splunk. The other observability backends that we have tested have a public CA certificate on the target endpoint for the Trial account. Thank you in advance. Antonio