All Posts


Hi @KendallW, apologies for the late reply. I tried it, but it still doesn't return the expected outcome.
I cannot renew my developer license according to https://dev.splunk.com/enterprise/dev_license/. I get the same error code 400 every time, and I get no response when I send an email to devinfo@splunk.com. What should I do?
I understand, but what you want to do isn't possible and is unlikely to be supported. By all means, though, create a new idea at the ideas link I posted.
Yes, you can just say | eval behindfirewall=max(behindfirewall). However, I'm not sure that will totally work: if something in my example is attached to LoadBalancer-B, it will be assumed to be behind the firewall, so I'm not totally sure my suggestion is valid.
Correct, the match statement will break things because all events will match the match key.
Does your lookup definition contain nnn* or just nnn? To use a wildcard, the lookup itself should have an asterisk.
The page has moved; I recommend you Google it. Today it's found here: https://www.splunk.com/en_us/resources/personalized-dev-test-licenses.html?301=/dev-test&locale=en_us
Yes, I've created a lookup definition and set the Match type to 'WILDCARD(Prefix)'. However, I'm still not getting results. When I comment out the lookup, I get results.
Hi, I want to ask where I can find the indexed data stored, as per the below. I found that the bucket consists of the raw data, the index files, and some metadata:
I've created the HF and set up the IP allow list. From the Azure connection troubleshooter, the test is successful, the NSG has been created and allows all connections to the internet, and Windows Firewall is disabled in the VM, but I still get this error:

06-16-2024 22:59:24.253 +0000 WARN AutoLoadBalancedConnectionStrategy [8760 TcpOutEloop] - Cooked connection to ip=1.2.3.4:9997 timed out
06-16-2024 22:59:24.563 +0000 ERROR TcpOutputFd [8760 TcpOutEloop] - Read error. An existing connection was forcibly closed by the remote host.
06-16-2024 22:59:24.876 +0000 ERROR TcpOutputFd [8760 TcpOutEloop] - Read error. An existing connection was forcibly closed by the remote host.

Running the command netstat -anob to check the connections shows them stuck in SYN_SENT status, but the messages say the HF has been blocked for blocked_seconds=10. Any ideas for fixing this issue?
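A connection stuck in SYN_SENT means the TCP handshake never completes, which usually points to something between the HF and the indexer (cloud NSG, on-prem firewall, or the indexer not listening on 9997) silently dropping the SYN. As a quick check independent of Splunk, here is a minimal Python sketch to test whether the receiving port is reachable from the HF host; the host and port below are placeholders, not values from this thread.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: replace with your indexer's address and receiving port.
# port_reachable("1.2.3.4", 9997)
```

If this returns False while the Azure troubleshooter says the path is open, the drop is likely happening on the receiving side (indexer host firewall, or splunkd not listening on that port).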
Have you set the Prefix field's match_type to WILDCARD? See "Share a lookup table file with apps."
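To make the wildcard behavior concrete: with a WILDCARD match type, the lookup column itself carries the wildcard pattern (e.g. aa:bb:cc*) and the incoming value is matched against it, not compared for equality. This is a rough Python simulation of that matching, not Splunk's actual implementation; the vendor names and patterns are made up for illustration.

```python
from fnmatch import fnmatch
from typing import Optional

# Hypothetical lookup rows: with match_type WILDCARD, the pattern (including
# the asterisk) lives in the lookup column itself.
lookup = {
    "aa:bb:cc*": "VendorA",
    "11:22:33*": "VendorB",
}

def match_vendor(mac: str) -> Optional[str]:
    """Return the vendor whose wildcard pattern matches the given MAC."""
    for pattern, vendor in lookup.items():
        if fnmatch(mac.lower(), pattern):
            return vendor
    return None
```

The point is that a row containing only aa:bb:cc (no asterisk) would never match a full MAC address, which is why the lookup table needs the asterisk when you want prefix matching.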
Hello, I have a lookup table where a list of MAC addresses is listed with the associated vendors; basically an identifier. However, the MAC address in this lookup table (column name is 'prefix') only has the first three octets: xx:xx:xx. What I'm trying to do is write a query to find devices that were assigned/renewed an IP address from the DHCP server and, based on the MAC address information in the result, identify the vendor. I was able to filter the first three octets from the result, but when adding the lookup table to enrich the result with the vendor information, I'm getting zero results. What am I doing wrong here? Thanks in advance!

index=some_dhcp description=renew
| eval d_mac=dest_mac
| rex field=d_mac "(?P<d_mac>([0-9-Fa-f]{2}[:-]){3})"
| lookup vendor.csv Prefix as d_mac OUTPUT Prefix Vendor_Name
| search Prefix=*
| table date dest_mac Vendor_Name description
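Two details in that rex pattern are worth checking; this is a speculative diagnosis, illustrated in Python regex rather than SPL. First, the character class [0-9-Fa-f] contains a stray hyphen where the A presumably belongs, so uppercase A–E hex digits never match. Second, the capture group ends on the separator, so the extracted value is aa:bb:cc: (with a trailing colon) while the lookup holds aa:bb:cc, which alone is enough to produce zero matches.

```python
import re

# Character class as written in the question: [0-9-Fa-f] means "0-9, a
# literal '-', 'F', and a-f" -- it does NOT include uppercase A-E.
asked = re.compile(r"(?P<d_mac>([0-9-Fa-f]{2}[:-]){3})")
print(asked.match("AB:CD:EF:01:02:03"))            # None: uppercase hex fails
print(asked.match("aa:bb:cc:dd:ee:ff")["d_mac"])   # 'aa:bb:cc:' trailing ':'

# One possible fix: full hex class, and the capture ends after the third
# octet instead of after the third separator.
fixed = re.compile(r"(?P<d_mac>[0-9A-Fa-f]{2}(?:[:-][0-9A-Fa-f]{2}){2})")
print(fixed.match("AB:CD:EF:01:02:03")["d_mac"])   # 'AB:CD:EF'
```

Even with the capture fixed, case still has to agree between the extracted prefix and the lookup column (or the lookup definition must be case-insensitive).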
Hi @whitecat001, Alerts (scheduled searches with alert actions enabled) can fail to run for many reasons. For example, searches can fail because of SPL syntax errors, searches can be skipped because of scheduling contention, actions can fail, or splunkd may not be running. What is your definition of "failed to run"?
Hi Paul, this join looks to be working. Thank you very much.
Hi @sarit_s6, If you haven't already, enable secure access to your instance's REST API by following the guidance at https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud. The full list of supported REST API endpoints is at https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTprolog.

To move a saved search, use the saved/searches/{name}/move endpoint:

$ curl https://{instance}:8089/servicesNS/{user}/{app}/saved/searches/{name}/move -d app={dest_app} -d user={dest_user}

The move endpoint itself isn't documented; however, you can get a list of supported endpoints from the object:

$ curl 'https://{instance}:8089/servicesNS/{user}/{app}/saved/searches/{name}?output_mode=json' | jq '.entry[].links'
{
  "alternate": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "list": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "_reload": "/servicesNS/{user}/{app}/saved/searches/{name}/_reload",
  "edit": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "remove": "/servicesNS/{user}/{app}/saved/searches/{name}",
  "move": "/servicesNS/{user}/{app}/saved/searches/{name}/move",
  "disable": "/servicesNS/{user}/{app}/saved/searches/{name}/disable",
  "dispatch": "/servicesNS/{user}/{app}/saved/searches/{name}/dispatch",
  "embed": "/servicesNS/{user}/{app}/saved/searches/{name}/embed",
  "history": "/servicesNS/{user}/{app}/saved/searches/{name}/history"
}

The form data parameters for the move endpoint are app and user, as noted above. Unofficially, you can find all of the above by moving an object in Splunk Web while observing the /{locale}/splunkd/__raw/servicesNS REST API calls in your browser's dev tools. Those calls can be converted directly to /servicesNS REST API calls on the management port.
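If you're scripting this rather than using curl, the only fiddly parts are URL-escaping the saved-search name and form-encoding the two body parameters. A small Python sketch of just that construction (instance, user, app, and name values here are placeholders; the request itself would be sent with your preferred HTTP client plus authentication):

```python
from urllib.parse import quote, urlencode

def move_url(instance: str, user: str, app: str, name: str) -> str:
    """Build the (undocumented) saved-search move endpoint URL."""
    return (
        f"https://{instance}:8089/servicesNS/"
        f"{quote(user)}/{quote(app)}/saved/searches/{quote(name, safe='')}/move"
    )

def move_body(dest_app: str, dest_user: str) -> str:
    """Form-encode the move endpoint's two form-data parameters."""
    return urlencode({"app": dest_app, "user": dest_user})

# move_url("sh1.example.com", "admin", "search", "my search")
#   -> .../servicesNS/admin/search/saved/searches/my%20search/move
# move_body("target_app", "nobody") -> "app=target_app&user=nobody"
```

Escaping the name with safe='' matters because saved-search names often contain spaces or slashes that would otherwise break the path.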
Hi @DarkMSTie, identifying the correct sourcetype is the first (and most important) categorization you can do to recognize your data flows, so don't leave the choice of sourcetype to Splunk: it will probably pick a generic sourcetype (e.g. csv) that could be shared with other data flows, and you can't be sure you're identifying only these logs. So set the sourcetype (e.g. "bro") in inputs.conf, possibly cloning an existing one (e.g. csv), so you can be sure to identify your logs. In addition, if this data flow needs a different configuration, you can apply it without affecting other data flows. In other words, the most important field for identifying a data flow isn't index but sourcetype, also because field extractions and so on are associated with the sourcetype. Ciao. Giuseppe
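As a minimal sketch of the advice above, an explicit sourcetype on the input stanza looks like this; the monitored path, index, and sourcetype names here are illustrative placeholders, not values from this thread.

```ini
# inputs.conf -- path and names are illustrative placeholders
[monitor:///var/log/bro/conn.log]
index = network
sourcetype = bro
disabled = false
```

With the sourcetype pinned here, the matching props.conf/transforms.conf stanzas (line breaking, timestamp extraction, field extractions) apply only to this data flow.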
Hi @sivaranjani, good for you, see you next time! Ciao and happy Splunking, Giuseppe. P.S.: Karma points are appreciated by all the contributors.
Hi @sivaranjani, on the search query itself you can include the earliest and latest times. Or, as said in the other reply, on the time picker you can use "Advanced".
How did you resolve this error? I am also facing the same issue, but before granting access.