All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm trying to display the city and country name for all the IP addresses I extracted from my Windows log file after uploading it into Splunk. However, I do not get the results that I want. Is there a fix for this?
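One approach that often works for this: Splunk's built-in `iplocation` command enriches events with City and Country fields from the bundled GeoIP database. A minimal sketch, assuming the extracted field is named `src_ip` (the actual field name isn't given in the question):

```
index=wineventlog
| iplocation src_ip
| stats count by src_ip, City, Country
```

Note that `iplocation` only populates City/Country for public IP addresses; private RFC 1918 addresses (10.x, 172.16-31.x, 192.168.x) come back empty, which is a common reason for "no results."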
Hi, I'm new. When the information is in transit, for example from a forwarder to the indexer, is there any type of encryption? Another question: when the information is stored on the indexer, how is it stored, with or without encryption? Thanks, and sorry if my questions are basic.
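On the in-transit question: forwarder-to-indexer traffic is plain TCP unless TLS is configured on both ends. A minimal sketch of the two sides (cert paths and the group name are placeholders, not from the question):

```
# outputs.conf (on the forwarder)
[tcpout:primary]
server = indexer.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslVerifyServerCert = true

# inputs.conf (on the indexer)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```

For the at-rest question: Splunk does not encrypt index buckets itself, so at-rest encryption is typically handled at the disk or filesystem layer.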
Hi, can someone help me with this? I have fields with the values SP=3390510 and TP=3394992, and I am trying to get a success percentage with | eval Success=(SP/TP)*100. The expected value is 99.8679%, but I am getting 100.0000%. I also set the precision to 0.0000, but I am still not getting the expected result.
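For reference, 3390510 / 3394992 * 100 ≈ 99.868, so the eval expression itself is correct; getting 100.0000% usually means eval is not seeing the numbers you expect (for example, the fields are strings with thousands separators, or rounding is applied before the division). A hedged sketch with explicit numeric conversion before computing and rounding:

```
| eval SP=tonumber(replace(SP, ",", "")), TP=tonumber(replace(TP, ",", ""))
| eval Success=round((SP/TP)*100, 4)
```

The `replace` is only needed if the raw values contain commas; `tonumber` returns null for anything non-numeric, which makes a bad extraction easy to spot.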
I am new to Splunk. I have logs in the following format for our servers:

Host, CPU, %Usage
Host, Memory, %Usage
Host, Load Average, %Usage
Host, Swapping, %Usage

I need to create a query that displays the results in the following format:

Host, CPU Avg Usage, Memory Avg Usage, Load Avg Usage, Swapping Avg Usage

My query below prints the same value for each of the fields, e.g. it prints the same CPU value for all the rows. Any suggestions on the query?

index=index1 sourcetype=... source=...
| eval cpu_usage = [search index=... sourcetype=... source=* | search metric_name=CPU_Utilization | stats avg(Usage) as "CPU_Usage" by host_name | return $CPU_Usage]
| eval memory_usage = [search index=... sourcetype=... source=* | search metric_name=Memory_Utilization | stats avg(Usage) as "Memory_Usage" by host_name | return $Memory_Usage]
| eval load_usage = [search index=... sourcetype=... source=* | search metric_name=Load_Utilization | stats avg(Usage) as "Load_Usage" by host_name | return $Load_Usage]
| eval swapping_usage = [search index=... sourcetype=... source=* | search metric_name=Swapping_Utilization | stats avg(Usage) as "Swapping_Usage" by host_name | return $Swapping_Usage]
| stats values(cpu_usage) as "CPU Utilization", values(memory_usage) as "Memory Utilization", values(load_usage) as "Load Utilization", values(swapping_usage) as "Swapping Utilization" by host_name
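Each subsearch above runs once and `return` hands back a single scalar, which is why every row shows the same value. One way to avoid the subsearches entirely is a single pass with `chart`, which pivots one column per metric (field names below follow the ones in the question):

```
index=index1 sourcetype=... source=...
| chart avg(Usage) over host_name by metric_name
| rename CPU_Utilization as "CPU Avg Usage", Memory_Utilization as "Memory Avg Usage",
         Load_Utilization as "Load Avg Usage", Swapping_Utilization as "Swapping Avg Usage"
```

This produces one row per host_name with one averaged column per metric_name value, with no repeated scalar.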
Hi All, we have summary indexing configured on a Splunk search head (one of the search heads in a cluster, which forwards data to the indexer layer). All the summary indexing jobs work perfectly fine until I restart the search head. A rolling restart of the search head cluster (Begin rolling restart from the UI) does not impact the summary indexing jobs; they are only impacted when I restart the search head from the command line (splunk restart). As a workaround, I refresh the summary indexing jobs from the GUI after the restart, but doing this manually is not a good approach. I hope the problem statement is clear. Please let me know how to fix this issue permanently. Thanks, Ravi
All, I am trying to send data from an external forwarder to a DMZ heavy forwarder that is behind a firewall with port 9997 open. When I attempt to send data, I get "TCP output processor has paused the data flow" and am unable to get results to the DMZ forwarder. I have a similar setup with HEC and syslog that works just fine, but for traffic on 9997, Splunk refuses to send. The box is pingable and 9997 is open. The firewall is set up to route to the DMZ host as well. Does anyone have ideas on what I may be doing wrong, or things to consider when setting up a Splunk HF in a DMZ?
I am trying to filter a set of data from a single file using the conditions below and send the filtered data to different indexes. The events (in file.txt) look like:

<85>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
<25>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Events starting with <85> should go to index A, and events starting with <25> should go to index B.
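Index routing by event content is typically done at parse time (on an indexer or heavy forwarder) with a props/transforms pair. A sketch, assuming a sourcetype of `file_txt` (a placeholder name) and that the priority value appears at the start of each event:

```
# props.conf
[file_txt]
TRANSFORMS-route_by_pri = route_85, route_25

# transforms.conf
[route_85]
REGEX = ^<85>
DEST_KEY = _MetaData:Index
FORMAT = indexA

[route_25]
REGEX = ^<25>
DEST_KEY = _MetaData:Index
FORMAT = indexB
```

Both target indexes must already exist on the indexers, or the routed events are dropped.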
Hi experts, Bluecoat ProxySG logs are not parsing properly. We are sending logs from the Bluecoat proxy to a syslog-ng server in W3C format, and we noticed that the Bluecoat proxy itself does not send the log headers. Does the TA require headers in the logs? Please help. Thanks in advance.
Hello,

Running Splunk Universal Forwarder 7.3.6 (build 47d8552a4d84) on CentOS 7. I am sending two logs, suricata and bro, to indexers in AWS. The default Splunk group for these two is lbssl. I want to split the two up like so: suricata goes to lbssl (as it always has) and bro goes to NAD.

Based on this thread: https://community.splunk.com/t5/Getting-Data-In/How-can-we-send-data-to-2-different-groups-of-indexers/td-p/280318 I have set up my outputs.conf file:

#ESG_072114_03
[tcpout]
defaultGroup = lbssl

[tcpout:lbssl]
compressed = true
server = old-url.com:443
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = long-encrypted-password-goes-here
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_forwarder/cert/ca_chain.pem
sslVerifyServerCert = false

[tcpout:NAD]
compressed = true
server = new-url-for-bro-NAD-flow:443
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = another-long-encrypted-password-goes-here
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_forwarder/cert/ca_chain.pem
sslVerifyServerCert = false

and in inputs.conf for the bro app I added the routing option:

[default]
_TCP_ROUTING = NAD
host = server-name-goes-here-01

I never get any data for old-url, which is the suricata flow that reached Splunk before the changes. new-url-for-bro-NAD-flow does appear to get data. Any thoughts on what is incorrect/misconfigured, or additional configs needed, would be helpful.
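One thing worth checking with a setup like this: a `[default]` stanza in inputs.conf applies its settings to every input on the forwarder, so `_TCP_ROUTING = NAD` would also pull the suricata input away from lbssl, which matches the symptom of old-url receiving nothing. A sketch of per-input routing instead (the monitor paths are placeholders, not taken from the question):

```
# inputs.conf
[monitor:///var/log/suricata/eve.json]
_TCP_ROUTING = lbssl

[monitor:///opt/bro/logs/current]
_TCP_ROUTING = NAD
```

With `_TCP_ROUTING` set per input stanza, anything without an explicit route falls back to `defaultGroup` in outputs.conf.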
Hello, and thanks for reading this. We are having issues with securing the web site for our on-prem Splunk 8 Enterprise instance. This is a vanilla install at this point, so little customization has taken place. The Splunk web site works fine without a cert. At the moment, we are simply trying to restrict the web site to TLS 1.2 traffic only using a third-party certificate; in the future, we will look at other facets of this. I tried following the steps found in the "Securing the Splunk Platform" document (https://docs.splunk.com/Documentation/Splunk/8.0.5/Security/WhatyoucansecurewithSplunk).

Work log:

1. Requested and downloaded a cert from a third-party organization, trusted in our domain. Imported the cert into the server (Windows Server 2016, if it matters) to complete the enrollment process.

2. Exported the cert as a PFX file, including the private key. Exported the intermediate CA cert as a CER file. Exported the root CA cert as a CER file.

3. Opened an admin PowerShell window, navigated to "$SPLUNK_HOME/bin/", and ran the following (filenames and paths are placeholders):

.\splunk.exe cmd openssl pkcs12 -in C:\certs\SSL.pfx -nocerts -out C:\certs\SSL_key.pem -nodes
.\splunk.exe cmd openssl pkcs12 -in C:\certs\SSL.pfx -nokeys -out C:\certs\SSL_cert.pem -nodes
.\splunk.exe cmd openssl x509 -in C:\certs\Int_CA.cer -out C:\certs\Int_CA_cert.pem
.\splunk.exe cmd openssl x509 -in C:\certs\Root_CA.cer -out C:\certs\Root_CA_cert.pem

4. Using Notepad, I opened the SSL_cert.pem, Int_CA_cert.pem, and Root_CA_cert.pem files, copied the contents from the BEGIN CERTIFICATE line to the END CERTIFICATE line, and combined them into a single PEM file (let's call it SSL_combined.pem) like so:

-----BEGIN CERTIFICATE-----
<SSL Certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<Intermediate CA Certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<Root CA Certificate>
-----END CERTIFICATE-----

5.
Now that the combined certs and key were in PEM format, I created a folder for them at "$SPLUNK_HOME/etc/mycerts/" and copied them there.

6. I edited the "$SPLUNK_HOME/etc/system/local/web.conf" file as follows: under the [settings] section, I changed the value of enableSplunkWebSSL from false to true, added a line reading privKeyPath = /home/etc/auth/mycerts/SSL_key.pem, added a line reading serverCert = /home/etc/auth/mycerts/SSL_combined.pem, and changed the value of sslVersions from tls to tls1.2.

7. Finally, I restarted the Splunk services by running ".\splunk.exe restart splunkd", which completes with no errors.

However, when we try to open the Splunk web page, the browser hangs at "Performing TLS Handshake" in Firefox. In Chrome, it fails with an ERR_TIMED_OUT message. In IE 11, the browser simply hangs with no error. I captured a log of the connection attempt in Firefox, but I never see a connection get established; there is an attempt to connect, which times out. Any idea which direction to go from here?
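For comparison, a minimal web.conf sketch for step 6. One detail worth noting: relative values for `privKeyPath` and `serverCert` are resolved against $SPLUNK_HOME, so certs placed in "$SPLUNK_HOME/etc/mycerts/" would normally be referenced as below; the "/home/etc/auth/mycerts/" paths in the work log would only resolve if the files actually live at that absolute location, and a key Splunk Web cannot read produces exactly this kind of hang:

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
enableSplunkWebSSL = true
privKeyPath = etc/mycerts/SSL_key.pem
serverCert = etc/mycerts/SSL_combined.pem
sslVersions = tls1.2
```

Errors from a bad path or unreadable key typically appear in $SPLUNK_HOME/var/log/splunk/web_service.log after a restart.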
Hi, we are planning to create alerts based on a given search pattern. We are very new to this and need your suggestions. We want to create an alert for any job failure, and for that we used index="3977" "Exit status 1". Done this way, we get the alert email as expected. What we are trying to do is include the job name in the email, since we have 20 jobs and are not sure which alert is being triggered for which job. If we create a separate alert such as index="3977" "job_name", we get results, but we can't track the job status from the same chunk of log in Splunk; the two are in separate chunks. How can we achieve this?
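If the job name appears anywhere in the failure event itself, one option is to extract it with `rex` and include it in the alert results, so a single alert covers all 20 jobs. The capture pattern below is purely an assumption about the log format and would need adjusting:

```
index="3977" "Exit status 1"
| rex field=_raw "job[_ ]?name[=:]\s*(?<job_name>\S+)"
| stats latest(_time) as last_failure by job_name
```

If the name and the exit status really are in separate events, a shared ID (run ID, PID, etc.) plus `stats ... by that_id` is the usual way to join them into one result row.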
Hi everyone, I don't know how to compute the average of "Moy" for all Debit=5, per month using DateJour, and then do exactly the same for Debit=25. Does anybody have an idea? I have tried many approaches, but they don't work. Thank you in advance.
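A minimal sketch, assuming the events carry numeric `Moy` and `Debit` fields and that `_time` is already derived from `DateJour` (both assumptions, since the data layout isn't shown):

```
index=... (Debit=5 OR Debit=25)
| timechart span=1mon avg(Moy) by Debit
```

This gives one monthly row with one averaged column per Debit value. If `_time` does not reflect `DateJour`, an `eval _time=strptime(DateJour, "<format>")` step would be needed first, with the real date format filled in.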
Hi everyone,

I am supposed to configure the VMware add-on in my environment to collect data from vCenters. I have configured the DCN as one of the HFs in the environment, and the vCenter connections are working. I started the scheduler on the search head, but am still getting the error below:

2020-08-05 11:31:25,661 ERROR [ta_vmware_collection_scheduler://puff] Problem with hydra scheduler ta_vmware_collection_scheduler://puff: establishCollectionManifest() got an unexpected keyword argument 'is_timediff_lt_4hr'
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 2130, in run
    total_heads=head_count, is_timediff_lt_4hr=is_timediff_lt_4hr, old_token_list=old_token_list)
TypeError: establishCollectionManifest() got an unexpected keyword argument 'is_timediff_lt_4hr'

Please let me know if you have resolved this error for the Hydra TA scheduler.
Hi,

I get the following error message when trying to select the split-by field option. I chose fewer than 5 features but am still getting the error. I have a total of 7195 rows in my dataset and 17 features excluding time. I referred to the documentation but could not work out why this error occurs.

Error when 1 feature was selected:
Error in 'fit' command: Error while fitting "DensityFunction" model: The number of groups can not exceed 1024; the current number of groups is 7195. Please find detailed information about the number of groups in docs.

Error when 2 features were selected:
Error in 'fit' command: Error while fitting "DensityFunction" model: The number of groups can not exceed 1024; the current number of groups is 5452. Please find detailed information about the number of groups in docs.
We are planning to use Splunk Free by following these steps: https://www.logbinder.com/Solutions/ActiveDirectory. What are the dangers of using the free version in our system? Can we get a quote for the licensed version? We only need the AD changes report for about 150 users. Thank you.
Hi,

I have only just started using the API, but I have been unable to track down documentation on how to exclude fields from the job details return values. I am querying /services/search/jobs with the header 'search=sid=<xxxxx.xxxxx>', which works fine but returns too much information under content, in particular content/dict/phase0 and /remotesearch. Small searches are OK, but for a big search these can be >10 KB in size and take several seconds to return. I tried defining fields in the header (summarize=true, f=author), but none of this seems to reduce the response size. I am only really after 6-8 fields, so I don't need most of the data that is returned.

https://docs.splunk.com/Documentation/Splunk/8.0.5/RESTREF/RESTsearch#search.2Fjobs

Thanks
Does Splunk deal with the following for vibration analysis?
• Operational Deflection Shape (ODS)
• Nyquist plot
• Waterfall plot
• Cascade plot
• Shaft centre plot
Could you please help me with the stanza below? I need an interval that captures data at sub-second (microsecond) granularity.

[WMI: Services]
interval =
wql = SELECT Name, State, Status FROM Win32_Service WHERE (Name = '*')
disabled = 0

I'm unable to capture service restart data from Windows servers when the restart takes less than 1 second. Could you please suggest an interval that captures sub-second data.
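For reference, the `interval` setting for WMI inputs takes whole seconds, so a sub-second polling interval cannot be expressed there; polling will always miss a restart that completes between samples. A sketch of the stanza at the minimum interval (note that WQL equality does not support a '*' wildcard, so the WHERE clause is dropped here):

```
[WMI: Services]
interval = 1
wql = SELECT Name, State, Status FROM Win32_Service
disabled = 0
```

For restarts shorter than a second, an event-based input is usually the better fit: the Windows System event log records service start/stop transitions (Service Control Manager event 7036) as they happen, regardless of how brief the restart is.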
Hi, I have rsyslog configured and receive different syslog data on different ports, but is there a log file where this information gets captured? Or somewhere we can see which log is coming from which port? How can I confirm this?
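One common pattern is to bind each listening port to its own ruleset that writes to its own file, which both captures the data and makes the port-to-log mapping explicit. A sketch (ports and paths are placeholders):

```
# /etc/rsyslog.d/ports.conf
module(load="imudp")

input(type="imudp" port="514" ruleset="port514")
input(type="imudp" port="515" ruleset="port515")

ruleset(name="port514") {
    action(type="omfile" file="/var/log/remote/port514.log")
}
ruleset(name="port515") {
    action(type="omfile" file="/var/log/remote/port515.log")
}
```

For confirmation without changing the outputs, rsyslog's impstats module reports per-input message counts, which shows at a glance which port is actually receiving traffic.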