All Posts

It's hard to tell you how to fix your setup when we don't know the details of your configuration and your certs. Just one important thing: if you want to enable TLS, get yourself a CA and issue proper certificates. Using self-signed certs everywhere will not help you much security-wise, and you'll run into trouble when trying to validate them properly (which might be your case).
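If it helps, here is a minimal sketch of standing up a private CA with openssl and signing a server cert with it; all file names, subjects, and host names below are illustrative, so adjust them to your environment:

```
# Create a private CA (file names and subjects are illustrative)
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -key myCA.key -sha256 -days 3650 -out myCA.pem -subj "/CN=My Private CA"

# Create a key + CSR for the Splunk server and sign it with the CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=splunk.example.local"
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial \
    -sha256 -days 825 -out server.pem

# Splunk typically expects the server cert, its key, and the CA chain in one PEM
cat server.pem server.key myCA.pem > server-full.pem
```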
When I try to fetch values using the query below, no results show up in statistics. Specifically, I want to also fetch message.backendCalls.responseCode, but when I add that field at the end of the query, the statistics tab shows nothing.

Query:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound"
| spath "message.incomingRequest.partner"
| rename message.incomingRequest.partner as "SSO_Partner"
| search "SSO_Partner"=*
| stats distinct_count("UUID") as Count by SSO_Partner, Membership_LOB, message.backendCalls.responseCode

When I don't add that field, results do show. Below is the whole JSON event from which I am trying to fetch the response code (masked values as posted):

{
  "@timestamp": "2024-12-25T08:10:57.764Z",
  "Membership_Category": "*******",
  "Membership_LOB": "****",
  "UUID": "********",
  "adminId": "*************",
  "adminLevel": "*************",
  "correlation-id": "*************",
  "dd.env": "*************",
  "dd.service": "*************",
  "dd.span_id": "*************",
  "dd.trace_id": "*************",
  "dd.version": "*************",
  "logger": "*************",
  "message": {
    "backendCalls": [
      {
        "elapsedTime": "****",
        "endPoint": "*************",
        "requestObject": { ... },
        "responseCode": 200,
        "responseObject": { ... }
      }
    ]
  }
}
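One thing worth checking, as a hedged sketch rather than a definitive fix: because backendCalls is a JSON array, Splunk's auto-extraction names the field message.backendCalls{}.responseCode (with the {}), so grouping by message.backendCalls.responseCode finds no such field. Extracting it explicitly to a scalar field may work (Backend_Response_Code is just an illustrative name):

```
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound"
| spath "message.incomingRequest.partner" output=SSO_Partner
| spath "message.backendCalls{}.responseCode" output=Backend_Response_Code
| search SSO_Partner=*
| stats distinct_count("UUID") as Count by SSO_Partner, Membership_LOB, Backend_Response_Code
```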
Hi @mouniprathi,
probably the issue is in the format of the time token (it must be in epoch time!) that you are passing to the second panel. Could you share the code of your dashboard, with attention to the two searches you're using in the two panels and the earliest and latest tags in the second panel?
Anyway, you have to insert the token inside the search, not in the time picker, something like this:

<your_search>
    [ | makeresults
    | eval earliest=$excep_time$, latest=relative_time($excep_time$, "+120s")
    | fields earliest latest ]
| ...

Ciao.
Giuseppe
After some investigation, the answer is:

1) The OS is Linux Red Hat 8, Splunk UF version 9.1.1. We have 2 Splunk deployments, Splunk Enterprise and Splunk Security. On my end (Splunk Enterprise) there are only 2 inputs, but on the Security end there are a lot, with 2 apps, HG_TA_Splunk_Nix and TA_nmon (roughly 40 inputs each) over 4 hosts.

2.1) There are some ERROR entries, but nothing noteworthy:

+700 ERROR TcpoutputQ [11073 TcpOutEloop] - Unexpected event id=<eventid>  -> benign ERROR as per Splunk dev
+700 ERROR ExecProcessor [32056 ExecProcessor] - message from "$SPLUNK_HOME/HG_TA_Splunk_Nix/bin/update.sh" https://repo.napas.local/centos/7/updates/x84_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to repo.napas.local:80; No route to host"

2.2) HealthReporter shows:

+700 INFO PeriodicHealthReporter - feature="Ingestion latency" color=red/yellow indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=26684 reason="Events from tracker.log have not been seen for the last 26684 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier

2.3) Searching _internal with | stats count by destIP shows:

idx1: 14248
idx2: 8014
idx3: 7963
idx4: 7809

which is more concerning than I thought it would be.

2.4) Another find: the log is now lagging 1 hour behind and is still being pulled/ingested, but the internal log has stopped. The time now is 9:08, yet the last internal log is from 8:19, with no error, which is:

+700 Metrics - group=thruput, name=uncooked_output, instantaneous_kbps=0.000, instantaneous_eps=0.000, average_kbps=0.000, total_k_processed=0.000, kb=0.000, ev=0, interval_sec=60
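Given the ingestion-latency warning in 2.2, one more thing worth checking is whether queues on those forwarders are blocking. A minimal sketch (substitute your real UF hostname for the placeholder):

```
index=_internal source=*metrics.log* group=queue host=<your_uf_host> blocked=true
| stats count by host, name
```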
After upgrading Splunk Enterprise from 9.2.2 to 9.2.4, the following error message started appearing on Splunk Web. Log collection and searching are still possible. A-Server acts as the indexer; one search head and one indexer are used.

Search peer A-Server has the following message: Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:34:12
Search peer A-Server has the following message: KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:34:11
Search peer A-Server has the following message: KV Store process terminated abnormally (exit code 14, status PID 29873 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:34:11
Search peer A-Server has the following message: Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:34:11
Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56
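If mongod.log doesn't show an obvious cause, one commonly used recovery path after an upgrade is to resync the local KV Store; this is a rough sketch, not a definitive fix, and the backup path is illustrative, so back up the kvstore directory first:

```
# Back up the KV Store data first (path may differ in your install)
cp -r $SPLUNK_HOME/var/lib/splunk/kvstore $SPLUNK_HOME/var/lib/splunk/kvstore.bak

$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean kvstore --local
$SPLUNK_HOME/bin/splunk start
```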
Hi All, I have a dashboard which shows a table of exceptions that happened within the last 24 hours. My table has columns App_Name and time of exception. I tokenized these into $app_name$ and $excep_time$ so that I can pass them to another panel in the same dashboard. Once I click the first panel, the second panel should take $app_name$ and $excep_time$ as inputs and show logs for that particular app within a time window around $excep_time$. I have no problem adding +120 seconds, but when I use earliest and latest it throws an invalid input message, and when I use _time>= or _time<= it still takes the time from the UI time picker, not from the search. How do I pass the time from one row to another panel and search in that passed time window?
You don't have to make up your own process for reading historical data - Splunk has documentation for that.  See https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Restorearchiveddata
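For reference, the documented flow boils down to thawing the bucket into the index's thaweddb directory and rebuilding it; a rough sketch with placeholder bucket names (check the docs above for your exact version, and note that thawed buckets are not replicated in an indexer cluster, so thaw them on one peer):

```
# Copy the archived bucket into thaweddb, not db (names illustrative)
cp -r /backups/db_<newest_time>_<oldest_time>_<id> $SPLUNK_DB/mydb2/thaweddb/

# Rebuild the bucket's indexes and metadata, then restart
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_DB/mydb2/thaweddb/db_<newest_time>_<oldest_time>_<id>
$SPLUNK_HOME/bin/splunk restart
```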
1) Share which OS version, which UF version, and roughly how many inputs are on those hosts.
2) Search _internal for your hostname (IP) for error codes:
2.1) Is the UF generating errors?
2.2) Does the UF get indexing paused/congested reports back from the IDX tier?
2.3) Does the UF show round robin to all IDX elements, or is there a discrepancy in outputs.conf? (See the sketch below.)
Let's start with these.
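For 2.3, a minimal sketch of checking the forwarder's output distribution (the host name is a placeholder):

```
index=_internal source=*metrics.log* group=tcpout_connections host=<your_uf_host>
| stats count by destIp
```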
If your indexers and other devices are no longer indexing data, then you need to check the individual servers' splunkd.log files. Tail and grep for details around connections. Any error codes will help you and us in determining the issues.
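If search is still working, the same logs can also be checked from Splunk itself; a minimal sketch:

```
index=_internal sourcetype=splunkd log_level=ERROR
| stats count by host, component
| sort - count
```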
Hello everyone, I currently have a cluster of 2 indexers and 1 search head mounted on Linux, and everything is going well with it. What I need these days is to restore indexed data from 1 year ago, which I have on a disk mounted on the server. I am trying to view that data from my search head, but I can't do it. I have run tests like the following:

- I created a new index called mydb2, so as not to alter my original index (mydb), and copied several of the directories with names like "db_1711654894_1711568541_1281_6C91679A-EBBC-4F09-A710-1CC8C8CA8FDC" to the $SPLUNK_DB/mydb2/db/ directory; this was not successful.
- From the cluster I restarted the 2 indexers, and that didn't work either. After 2 days, data began to appear, but only data corresponding to 4 days, even though the directories I copied to $SPLUNK_DB/mydb2/db/ are several and correspond to 5 months. More days have passed, I have restarted again, and no more data has appeared.

Does anyone in the community have knowledge of this? I want to know how to view historical data restored from a backup.
It's nice to see that you can now post a URL in ES - thank you!
I have followed the configuration steps as outlined, but unfortunately I have lost the connection between the components. I have applied the certificate configurations and other related settings in server.conf, including modifying the search peers to use HTTPS in distsearch.conf. I also modified the license slaves to use HTTPS, but I did not make any changes to the license master itself. Could you confirm whether there are any specific configurations required on the license master? After applying the changes, the Server Manager has become extremely slow, and I can no longer access its web interface. Additionally, I lost connectivity between the components. Is there someone who could help me resolve this issue, please?
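For comparison, a minimal sketch of the stanzas usually involved, with illustrative paths and host names; the license master itself mainly needs a valid cert on its management port, while each license slave points at it over HTTPS:

```
# server.conf on each instance (values are illustrative)
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server-full.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslPassword = <your_key_password>

# server.conf on each license slave
[license]
master_uri = https://license-manager.example.local:8089
```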
Hi @BRFZ , KV Store is usually enabled only on Search Heads and disabled on the other roles (Indexers, Heavy Forwarders, etc.). Ciao. Giuseppe
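If it helps, disabling it where it isn't needed is a one-line setting; a minimal sketch:

```
# server.conf on indexers / heavy forwarders (illustrative)
[kvstore]
disabled = true
```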
Hello Splunk Community, I am working on the configuration of a distributed Splunk deployment, and I need clarification regarding the KV Store. Could you please confirm where the KV Store should be configured in a distributed environment? Should it be enabled on the Search Heads, Indexers, or another component of the deployment? Any guidance on best practices would be greatly appreciated. Thank you for your help! Best regards,
No, I do not see that menu. More information: I am using Trial Observability (requested through the Splunk website) to clear an accreditation course. When I click new token, I only see "Create access token", which has "Name", "capability scope", and "description", followed by "Permission" and "Expiration" when I click next. Once the above steps are complete, I only get "New token has been created". I have now moved to using "add implementation" to get the token ID to be used.
Hello, I have a case where the logs from 4 hosts are lagging behind. The reason I call it inconsistent is that the lag varies from 5 to 30 minutes, and sometimes there is no lag at all. When the logs don't show up for 30 minutes or more, I go to forwarder management, disable/enable the apps, and restart splunkd; then the logs continue with a 1-2 second lag. The other hosts also lag behind at peak hours, but only by 1 or 2 minutes (5 minutes maximum for sources with a large amount of logs). I admit that our indexer cluster is not up to par with the IOPS requirements, but for 4 particular hosts to be visibly underperforming is quite concerning. Can someone show me steps to debug and solve this problem?
Oh my, you are right, I made a stupid mistake! I'm so used to working with props.conf in most (personal) cases that I automatically tried to use a sourcetype in the stanza name. But what I need is the input for the Splunk UF's own introspection log, which generates the 'splunkd' sourcetype, and that monitor is named 'monitor://$SPLUNK_HOME\var\log\splunk\splunkd.log'.
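For the record, a minimal inputs.conf sketch of that naming convention (the monitored path goes in the stanza name, while the sourcetype is an attribute; this particular input ships with the UF, so the snippet is purely illustrative):

```
# inputs.conf
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
sourcetype = splunkd
```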
Ok, finally I figured it out by myself. Here is the correct code:

```
transform/istio-proxy:
  error_mode: ignore
  log_statements:
    - context: log
      statements:
        - keep_keys(resource.attributes, ["_time", "cluster_codename", "host.name", "com.splunk.index", "splunk_server", "com.splunk.source", "com.splunk.sourcetype"]) where resource.attributes["com.splunk.sourcetype"] == "kube:container:istio-proxy"
        - delete_key(attributes, "logtag") where resource.attributes["com.splunk.sourcetype"] == "kube:container:istio-proxy"
```

The point is that you should use resource.attributes["com.splunk.sourcetype"] instead of attributes["sourcetype"].
Wait. What is that [splunkd] stanza? You have an input called splunkd? What is it ingesting?
I am trying to use the Splunk Add-on for Tomcat for the first time. When I try Add Account, it results in the error message below. I think the add-on expects Java to be somewhere. Java is installed on my all-in-one Splunk server, where the add-on is installed. How do I make Java available to this add-on?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/handler.py", line 142, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/handler.py", line 107, in wrapper
    self.endpoint.validate(
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 85, in validate
    self._loop_fields("validate", name, data, existing=existing)
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in _loop_fields
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in <listcomp>
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/field.py", line 56, in validate
    res = self.validator.validate(value, data)
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/bin/Splunk_TA_tomcat_account_validator.py", line 85, in validate
    self._process = subprocess.Popen(  # nosemgrep false-positive : The value java_args is
  File "/opt/splunk/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/splunk/lib/python3.9/subprocess.py", line 1837, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'java'
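The last line of the traceback says the validator simply cannot find a java executable on the PATH that splunkd runs with. A hedged sketch of two common checks/fixes (the JDK location below is an assumption; use your actual one):

```
# Check what the splunk user actually sees on its PATH
sudo -u splunk which java

# One common fix: symlink java into a directory that is on splunkd's PATH
# (the JDK path is illustrative; find yours with: readlink -f $(which java))
sudo ln -s /usr/lib/jvm/java-11-openjdk/bin/java /usr/local/bin/java
```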