All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I want to ask where I can find the indexed data stored. As per the below, I found that the bucket consists of the raw data, index files, and some metadata:
I've created the HF and set up the IP allow list. The Azure connection troubleshooter reports the test as successful, the NSG has been created and allows all connections to the internet, and Windows Firewall is disabled in the VM, but I still get this error:

06-16-2024 22:59:24.253 +0000 WARN AutoLoadBalancedConnectionStrategy [8760 TcpOutEloop] - Cooked connection to ip=1.2.3.4:9997 timed out
06-16-2024 22:59:24.563 +0000 ERROR TcpOutputFd [8760 TcpOutEloop] - Read error. An existing connection was forcibly closed by the remote host.
06-16-2024 22:59:24.876 +0000 ERROR TcpOutputFd [8760 TcpOutEloop] - Read error. An existing connection was forcibly closed by the remote host.

Running the command netstat -anob to check the connections, they are stuck in the SYN_SENT state, but the messages say the HF has been blocked for blocked_seconds=10. Any ideas for fixing this issue?
Have you set up the Prefix field to match_type WILDCARD? See Share a lookup table file with apps.
Hello, I have a lookup table where a list of MAC addresses is listed with the associated vendors; basically an identifier. However, the MAC address in this lookup table (column name is 'prefix') only has the first three octets: xx:xx:xx. What I'm trying to do is write a query to find devices that were assigned/renewed an IP address from the DHCP server and, based on the MAC address information in the result, identify the vendor. I was able to filter the first three octets from the result, but when adding the lookup table to enrich the result with the vendor information, I'm getting zero results. What am I doing wrong here? Thanks in advance!

index=some_dhcp description=renew
| eval d_mac=dest_mac
| rex field=d_mac "(?P<d_mac>([0-9-Fa-f]{2}[:-]){3})"
| lookup vendor.csv Prefix as d_mac OUTPUT Prefix Vendor_Name
| search Prefix=*
| table date dest_mac Vendor_Name description
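As an editorial aside (an observation about the rex pattern, separate from the accepted answer about lookup match_type): the character class [0-9-Fa-f] may not behave as intended. A small Python sketch illustrates why, using an invented MAC address:

```python
import re

# [0-9-Fa-f] parses as the range 0-9, a literal "-", the single
# letter F, and the range a-f, so the uppercase hex digits A-E are
# excluded. [0-9A-Fa-f] covers every hex digit. Note also that the
# captured prefix keeps the trailing separator ("AB:CD:EF:").
broken = re.compile(r"(?P<d_mac>([0-9-Fa-f]{2}[:-]){3})")
fixed = re.compile(r"(?P<d_mac>([0-9A-Fa-f]{2}[:-]){3})")

mac = "AB:CD:EF:12:34:56"
print(broken.match(mac))                # None: A through E never match
print(fixed.match(mac).group("d_mac"))  # AB:CD:EF:
```

The same correction applies directly inside the SPL rex expression.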
Hi @whitecat001, alerts (scheduled searches with alert actions enabled) can fail to run for many reasons. For example, searches can fail because of SPL syntax errors, searches can be skipped because of scheduling contention, actions can fail, or splunkd may not be running. What is your definition of "failed to run"?
Hi Paul, this join looks to be working. Thank you very much.
Hi @sarit_s6, if you haven't already, enable secure access to your instance's REST API by following the guidance at https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud. The full list of supported REST API endpoints is at https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTprolog. To move a saved search, use the saved/searches/{name}/move endpoint:

$ curl https://{instance}:8089/servicesNS/{user}/{app}/saved/searches/{name}/move -d app={dest_app} -d user={dest_user}

The move endpoint itself isn't documented; however, you can get a list of supported endpoints from the object:

$ curl 'https://{instance}:8089/servicesNS/{user}/{app}/saved/searches/{name}?output_mode=json' | jq '.entry[].links'
{
  "alternate": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "list": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "_reload": "/servicesNS/{user}/{app}/saved/searches/{name}/_reload",
  "edit": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "remove": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "move": "/servicesNS/{user}/{app}/saved/searches/{name}/move",
  "disable": "/servicesNS/{user}/{app}/saved/searches/{name}/disable",
  "dispatch": "/servicesNS/{user}/{app}/saved/searches/{name}/dispatch",
  "embed": "/servicesNS/{user}/{app}/saved/searches/{name}/embed",
  "history": "/servicesNS/{user}/{app}/saved/searches/{name}/history"
}

The form data parameters for the move endpoint are app and user, as noted above. Unofficially, you can find all of the above by moving an object in Splunk Web while observing the /{locale}/splunkd/__raw/servicesNS REST API calls in your browser's dev tools. Those calls can be converted directly to /servicesNS REST API calls on the management port.
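Since the original question involves moving many saved searches from a mapping list, a bulk move can be scripted against that endpoint. A minimal Python sketch follows; the host, the nobody/search namespaces, and the name-to-app mapping are placeholders, and it assumes the move endpoint accepts the app and user form parameters as described above:

```python
import urllib.parse
import urllib.request

# Placeholder management-port URL; substitute your own instance.
HOST = "https://your-instance:8089"

def build_move_request(name, src_app, dest_app, user="nobody"):
    """Return (url, form_data) for the saved/searches/{name}/move endpoint."""
    path = (f"/servicesNS/{user}/{src_app}/saved/searches/"
            f"{urllib.parse.quote(name)}/move")
    data = urllib.parse.urlencode({"app": dest_app, "user": user}).encode()
    return HOST + path, data

def move_saved_search(opener, name, src_app, dest_app, user="nobody"):
    """POST one move request; the caller's opener supplies auth and TLS."""
    url, data = build_move_request(name, src_app, dest_app, user)
    req = urllib.request.Request(url, data=data, method="POST")
    with opener.open(req) as resp:
        return resp.status

# Hypothetical mapping: saved-search name -> destination app.
moves = {"My Alert": "dest_app", "Daily Report": "other_app"}
```

In practice you would build an opener (or a requests session) that sends an authentication token header, then loop over the mapping calling move_saved_search once per entry.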
Hi @DarkMSTie, identifying the correct sourcetype is the first (and most important) categorization you can do to recognize your data flows, so don't leave the choice of sourcetype to Splunk: it will probably use a standard sourcetype (e.g. csv) that could also be shared with other data flows, and then you can't be sure you are identifying only these logs. So define the sourcetype (e.g. "bro") in inputs.conf, possibly cloning an existing one (e.g. csv), so you are sure to identify your logs. In addition, if this data flow needs a different configuration, you can apply it without affecting other data flows. In other words, the most important field for identifying a data flow isn't index but sourcetype, also because all the field extractions, etc. are associated with the sourcetype. Ciao. Giuseppe
Hi @sivaranjani, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @sivaranjani, in the search query itself you can include the earliest and latest times. Or, as said in the other reply, in the time picker you can use "Advanced".
How did you resolve this error? I am also facing the same issue, but before granting access.
Hello, I have been having issues when requesting a developer license for two weeks. I have already sent a couple of emails to devinfo with no answer received yet. Is there any issue with this functionality? Thank you.
Use the Advanced settings in the time picker, with @d+8h as the earliest time and now as the latest.
Use your search in an alert and add the following:

| where Avg > 1000

Then set the timeframe for the search to the last 15 minutes and the alert trigger to fire when there are greater than zero results.
Hello, I'm using Splunk Cloud and I have a lot of saved searches (alerts, dashboards, reports) that I need to move from one app to another. I have lists that map each saved search to the relevant app. Is there a way to do it with the API, or any other way that isn't manual, one by one? Thanks
Sample data of the original log:

[{"PhoneNumber":"+1 450555338","AlternativePhoneNumber":null,"Email":null,"VoiceOnlyPhoneNumber":null}]
[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]"}
[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]

Do you mean to say that some logs contain valid JSON and some contain quote-escaped JSON? Or was the first entry a misprint, and all logs are in fact quote-escaped JSON, like the following?

log
[{\"PhoneNumber\":\"+1 450555338\",\"AlternativePhoneNumber\":null,\"Email\":null,\"VoiceOnlyPhoneNumber\":null}]
[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]
[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]

In this illustration, I assume that the "original log" contains some additional elements and that only one field (named log) contains the escaped JSON, because it would be very unreasonable to escape quotation marks if it were the complete log. If, as I speculated, all log values are escaped, you should aim at reconstructing the JSON rather than using rex to treat it as text. So, I recommend:

| rex field=log mode=sed "s/\\\\\"/\"/g"
| spath input=log path={}
| mvexpand {}
| spath input={}

Using Splunk's built-in JSON handling is more robust than any regex you can craft.
From the mock data, the above will give you:

AlternativePhoneNumber  Email            PhoneNumber    VoiceOnlyPhoneNumber
null                    null             +1 450555338   null
+1 455255697            Dam@test.com.us  +20 425554005  null
+1 6155555533           null             +1 459551561   +1 455556868

This is the emulation for the data:

| makeresults
| eval log = mvappend("[{\\\"PhoneNumber\\\":\\\"+1 450555338\\\",\\\"AlternativePhoneNumber\\\":null,\\\"Email\\\":null,\\\"VoiceOnlyPhoneNumber\\\":null}]", "[{\\\"PhoneNumber\\\":\\\"+20 425554005\\\",\\\"AlternativePhoneNumber\\\":\\\"+1 455255697\\\",\\\"Email\\\":\\\"Dam@test.com.us\\\",\\\"VoiceOnlyPhoneNumber\\\":null}]", "[{\\\"PhoneNumber\\\":\\\"+1 459551561\\\",\\\"AlternativePhoneNumber\\\":\\\"+1 6155555533\\\",\\\"Email\\\":null,\\\"VoiceOnlyPhoneNumber\\\":\\\"+1 455556868\\\"}]")
| mvexpand log
``` data emulation above ```
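Outside Splunk, the same unescape-then-parse idea can be sketched in Python; the sample value below is invented to mirror the mock data:

```python
import json

# The raw log value literally contains \" sequences (backslash + quote).
escaped = '[{\\"PhoneNumber\\":\\"+1 450555338\\",\\"Email\\":null}]'

# Same idea as the sed expression above: turn \" back into a plain
# quote, then parse the result as real JSON instead of regexing text.
restored = escaped.replace('\\"', '"')
records = json.loads(restored)
print(records[0]["PhoneNumber"])  # +1 450555338
```

As in the SPL version, once the escaping is removed the payload is ordinary JSON and every field becomes addressable by name.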
I have a query that displays average duration. How do I modify the query to alert if avg(duration) is greater than 1000 in the last 15 minutes?

index=tra cf_space_name="pr" "cf_app_name":"Sch" "msg"."Logging Duration" AND NOT "DistributedLockProcessor"
| rename msg.DurationMs as TimeT
| table _time TimeT msg.Service
| bucket _time span=1m
| stats avg(TimeT) as "Avg" by msg.Service
This removes null and UID values from the target group.

| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", "properties.targetResources{}.modifiedProperties{}.newValue", operationName, _time
``` removes uid ```
| regex properties.targetResources{}.modifiedProperties{}.newValue!=".{8}-.{4}-.{4}-.{4}-.{12}"
``` removes null value ```
| search NOT properties.targetResources{}.modifiedProperties{}.newValue="null"
| rename "properties.initiatedBy.user.userPrincipalName" as initiated_user, "properties.targetResources{}.userPrincipalName" as target_user, "properties.targetResources{}.modifiedProperties{}.newValue" as group_name
| eval group = replace(group_name, "\"", "")
| eval initiated_user = lower(initiated_user), target_user = lower(target_user)
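For illustration, the two filters above (drop GUID-shaped values and the literal "null") can be mirrored in Python with the same 8-4-4-4-12 pattern; the sample values are invented:

```python
import re

# Same pattern as the | regex line: anything shaped like a GUID.
uuid_like = re.compile(r".{8}-.{4}-.{4}-.{4}-.{12}")

values = ["Security Team", "3f2b8a1c-9d4e-4f6a-b2c3-1a2b3c4d5e6f", "null"]

# Keep only human-readable group names.
groups = [v for v in values if not uuid_like.search(v) and v != "null"]
print(groups)  # ['Security Team']
```

Note that `.{8}` matches any eight characters, not just hex digits, which is loose but sufficient here since real group names rarely follow that dash layout.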
In my example, I use 3 backslashes when creating the sample data. To get \" in a quoted string, you need to escape the backslash (\\) and the quote (\"), resulting in \\\". In the regex, I avoided the need to match on backslashes, so any backslash is just the escape character. However, in my alternative method, you'll notice that there are 5 backslashes in a row. The processing of the escape characters happens once for the string itself, taking \\\\\" down to \\", and then once for the regex, taking \\" down to \".
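The two rounds of escape processing described above can be demonstrated in Python, whose string literals behave the same way (an illustration of the principle, not of Splunk itself):

```python
import re

# Round 1 - string-literal parsing: the six source characters \\\\\"
# collapse to the three characters \\" .
pattern_text = "\\\\\""
print(len(pattern_text))  # 3

# Round 2 - regex parsing: \\" means a literal backslash followed by
# a quote, so the pattern finds the \" sequences in the raw text.
log_line = 'value=\\"escaped\\"'
print(re.search(pattern_text, log_line) is not None)  # True
```

Using a raw string (r'\\"') would skip round 1 and need only the three characters, which is why raw strings are the usual way to sidestep doubled escaping.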