All Topics

What does this error mean? We usually observe the log message below in the application startup logs when the agent is unable to connect to the Controller to retrieve the node name (in the case of using reuse.nodeName):

    [Thread-0] Tue Apr 02 09:46:04 UTC 2019[INFO]: JavaAgent - Started AppDynamics Java Agent Successfully.
    2019-04-02 09:46:09,545 ERROR Recursive call to appender Buffer
    2019-04-02 09:46:09,547 ERROR Recursive call to appender Buffer

Next steps: Could you please check whether any logs are generated under the /opt/appdynamics-java/ver.xxx.xx/logs/ directory and share them if available? If there are no logs, please add the configuration line below under the instrumentationRules applied to the problematic pod:

    customAgentConfig: -Dappdynamics.agent.reuse.nodeName=false -Dappdynamics.agent.nodeName=test

If you are using Cluster Agent version >= 23.11.0, to force re-instrumentation you need to use an additional parameter in the default auto-instrumentation properties: enableForceReInstrumentation: true

    apiVersion: cluster.appdynamics.com/v1alpha1
    kind: Clusteragent
    metadata:
      name: k8s-cluster-agent
      namespace: appdynamics
    spec:
      # cluster agent properties
      # ...
      # required to enable auto-instrumentation
      instrumentationMethod: Env
      # default auto-instrumentation properties
      # may be overridden in an instrumentationRule
      containerAppCorrelationMethod: proxy
      nsToInstrumentRegex: default
      defaultAppName: ""
      enableForceReInstrumentation: true # ADDED
      # ...
      # one or more instrumentationRules
      instrumentationRules:
        - namespaceRegex: default
          customAgentConfig: -Dappdynamics.agent.reuse.nodeName=false -Dappdynamics.agent.nodeName=test # ADDED
          imageInfo:
            image: "docker.io/appdynamics/java-agent:24.8.1"
            agentMountPath: /opt/appdynamics
            imagePullPolicy: Always

Afterward, please apply the changes and wait for the Cluster Agent to roll out the new instrumentation.
Then, collect the agent logs from the /opt/appdynamics-java/ver.xxx.xx/logs/ directory and attach them to the ticket.

How do you collect logs from a Kubernetes pod?

1. Enter the container and pack the agent logs into a tar file:

    kubectl exec -it <pod_name> -- bash
    cd /opt/appdynamics-java/ver24.x.x.x/logs/
    tar -cvf /java-agent-logs.tar test

2. Copy the created tar file out of the pod:

    kubectl cp <some-namespace>/<some-pod>:/java-agent-logs.tar ./java-agent-logs.tar

I hope this article was helpful.
Łukasz Kociuba
In the TA documentation at https://splunk.github.io/splunk-add-on-for-amazon-web-services/S3/ it is stated: "Ensure that all files have a carriage return at the end of each file. Otherwise, the last line of the CSV file will not be indexed." But the CSV standard (https://www.ietf.org/rfc/rfc4180.txt) does not require a CRLF at the end of the last row. Can you please remedy this so that a standard-compliant CSV file without a final CRLF still works and the final row is ingested? Some source systems only output CSV files this way (without a final CRLF).
Hi Splunk Community, I recently upgraded my Splunk environment from version 9.1.1 to the latest version. After the upgrade, I'm encountering errors when trying to restart Splunk across different components. For example, I get the following error:

    Invalid key in stanza [default] in /opt/splunk/etc/system/local/indexes.conf, line 132: archiver.enableDataArchive (value: false)

It seems that some configuration keys are no longer valid in the updated version. Has anyone faced similar issues, and how can I resolve these errors to ensure my configurations are compatible with the latest Splunk version? Thanks in advance for your help! Best regards,
How can we concatenate the values of one field into a new field, separated by commas? For example, if I run a search I get a number of hosts in the host field, and I want to concatenate them all into one field separated by commas.
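One common approach, as a sketch (the index is assumed; adjust to your base search), is to gather the values with stats and join them with mvjoin:

```
index=your_index
| stats values(host) AS hosts
| eval host_list=mvjoin(hosts, ",")
```

Note that stats values() deduplicates and sorts; use list(host) instead if you want to keep duplicates and original order.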
Hi, our Linux machine has reached end of support, so we are moving the cluster master from one machine to another. I set up the cluster master on the new hardware and it was working well, but when I changed the master node URL on an indexer, the indexer stopped working. It does not start by itself, and even when I start it manually it only stays running for some time; during that time the indexer's web UI does not work, and after a while the indexer stops on its own. The same happened on another indexer as well. When I revert to the old cluster master, all the issues resolve themselves: the indexer keeps running and the web UI is available. Any idea why the indexer keeps shutting down? I am on Splunk version 9.0.4. Regards, Pravin
When I edit a correlation search, I want to configure the time range of the drill-down search. If I put "1h" in the "Earliest Offset" form field, it inserts the Unix timestamp in milliseconds, but Splunk expects the Unix timestamp in seconds. Is there a workaround for this issue?
Hello, I have a Splunk Connect for Syslog (SC4S) server that retrieves logs from a source and transmits them to Splunk indexers. In order to reduce the number of events, I want to filter the logs at the SC4S level. Note that SC4S uses syslog-ng for filtering and parsing. The use case is as follows: when an event arriving on the SC4S server contains the IP address 10.9.40.245, the event should be dropped. Does anyone have any idea how to create this filter in SC4S? Thank you.
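In plain syslog-ng terms, the drop idiom is a filter feeding a log path with flags(final) and no destination, so matching messages are consumed and go nowhere. A sketch is below; the source name and where SC4S expects local configuration to be dropped in (a file under /opt/sc4s/local/config/ in recent versions) are assumptions that should be checked against the SC4S documentation for your release:

```
# Assumed location: a .conf file under /opt/sc4s/local/config/ --
# verify the exact local-config hook for your SC4S version.
filter f_drop_test_ip {
    host("10.9.40.245")
    or match("10\\.9\\.40\\.245" value("MESSAGE"));
};

log {
    source(s_default);        # assumed SC4S source name
    filter(f_drop_test_ip);
    flags(final);             # matching events stop here (dropped)
};
```

Matching on both the sending host and the message body is deliberate: depending on your topology the address may appear in either place.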
Hello, I am following the steps for updating an unprivileged Splunk SOAR installation from the site documentation, but after copying the new package into the splunk-soar folder, when I try to start the phantom service I encounter the error: Phantom Startup failed: postgresql-11
Hi! I'm very new to Splunk and just want some advice. I accidentally installed the 32-bit version of the universal forwarder on my test Linux machine. Is it fine to install the 64-bit version on top without removing the 32-bit version, or will this cause issues later? I'm also running Splunk Web on the same Linux machine. Any advice or suggestions, please. Amit
Hello, we have a Splunk indexer cluster with two search heads and would like to use this add-on in the cluster: https://splunkbase.splunk.com/app/4055 We installed the add-on on the search head without ES and on all indexers via the Cluster Manager app. Then we set up all the inputs for the add-on on the search head, but could not select the index "M365" and could only enter it manually. The problem now is that this index is not being filled by the indexers. What are we doing wrong here?
Hi Experts, has anyone achieved SNMP polling of a network device from a Red Hat-based Splunk HF? I am trying to follow the documentation below but end up getting errors related to databases and connections: Collect data with input plugins | Telegraf Documentation
I have a SHC of three search heads. I changed some fields in a data model on one search head. The change replicated to the second search head, but the third does not have the same fields, even though that member was the captain. I ran the resync command but still have the same issue.
I have an application on Splunkbase and want to rename it along with its commands and custom action. I have updated the app name by renaming the folder and updating the app ID, and I've also updated the commands and custom action with the new name. While testing on my local Splunk instance, I observed that the existing application isn't replaced by the new one, since the folder name and app name/ID differ from the older version. I believe that is fine, as I can ask users to remove it from their instances, but I want the saved searches and local data of the older app to be available in the renamed (newer) app, and I'm unable to find an appropriate way to do so. There was a post in the community where the solution was to clone the local data from the older app to the newer app, but that isn't feasible for me because I don't have access to the instances on which users have the older app installed. Can someone please help me with this? I also had a few other questions about older applications: What is the procedure for deleting an already existing application on Splunkbase? Is emailing Splunk support the only way? I tried app archiving, but it doesn't restrict users from installing the app. Is there a way to transfer an old Splunk application or account to a new account, or is emailing the Splunk support team the only option? TL;DR: How can I replace the already installed application on the user's end with the newly renamed application in Splunk? Since the names of the applications differ, Splunk installs a separate app for the new name instead of updating the existing one. And if users are already using the existing application, with its saved configurations and searches, how can we migrate those to the newly renamed application?
Hi, I ran into a very odd and specific issue. I try to regex-filter a field, let's call it "parent". The field has the following structure (not the actual field I want to regex, but it's easier to show the issue, so options like "use .*" won't work): C:\\Windows\\System32\\test\\ I try to match this field with: "C:\\\\Windows\\\\System32\\\\test\\\\" This does not work. But as soon as I replace the second folder, "C:\\\\Windows\\\\.*\\\\test\\\\", it works. This happens across all fields: no matter which field with a path I take, as soon as I spell out the second folder, the match immediately stops working. I also tried different special characters, all numbers and letters, space, tab, etc., tried changing the "\\\\", and tried adding ".*System32.*", but nothing works. Has anyone else run into this issue and found a solution?
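One way to narrow this down (a sketch; the sample value is my assumption) is to rebuild the field with makeresults and test the pattern in isolation, because backslashes are consumed twice: once by the SPL string parser and once by the regex engine, so each literal backslash in the data needs four backslashes in a quoted match() pattern. Comparing a synthetic field against your pattern tells you whether the problem is the escaping or the actual data (e.g. hidden characters, or more backslashes in the raw event than the UI displays):

```
| makeresults
| eval parent="C:\\Windows\\System32\\test\\"
| eval result=if(match(parent, "C:\\\\Windows\\\\System32\\\\test\\\\"), "match", "no match")
| table parent result
```

If this toy version matches but your real events don't, inspect the raw field with `| eval n=len(parent)` or urlencode-style dumps; the stored value likely differs from what is rendered.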
How do I pass earliest and latest values to a data model search? For example, if I select last 30 minutes in the time range picker but still specify earliest and latest for the last 24 hours in a normal search, the earliest and latest parameters take precedence and work. How can I implement the same behavior with a data model query?
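With tstats, one option (a sketch; the data model name is assumed) is to put the time bounds in the WHERE clause, which overrides the picker much like earliest/latest do in a normal search:

```
| tstats count FROM datamodel=Your_DataModel WHERE earliest=-24h latest=now
```

For a `| datamodel ... search` pipeline, appending `| search earliest=-24h latest=now` should behave similarly, though tstats is usually the faster route.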
Is it impossible to apply SSL to HEC in the Splunk trial version?  
I have configured Splunk with SAML (ADFS), but we are facing an issue during logout, with the following error message: "Failed to validate SAML logout response received from IdP". I have entered the URL below as the logout URL in the SAML configuration: https://my_sh:8000/saml/logout How can I overcome this issue?
I am trying to get all 500 reports into a CSV so I can use it as a lookup, so that the rules we create have better uniformity, scalability, and control. I am currently looking into subsearches and automatic lookups. Do you know what would be the best way to move a query like the following into that model?

    `indextime` `sysmon` <SEARCH>
    | eval hash_sha256=lower(hash_sha256), hunting_trigger="", mitre_category="Defense_Evasion", mitre_technique="Obfuscated Files or Information", mitre_technique_id="T1027", mitre_subtechnique="", mitre_subtechnique_id="", apt="", mitre_link="https://attack.mitre.org/techniques/T1027/", creator="", upload_date="FIRSTDATE", last_modify_date="CURRENTDATE", mitre_version="v16", priority=""
    | `process_create_whitelist`
    | eval indextime = _indextime
    | convert ctime(indextime)
    | table _time indextime event_description hash_sha256 host_fqdn user_name original_file_name process_path process_guid process_parent_path process_id process_parent_id process_command_line process_parent_command_line process_parent_guid mitre_category mitre_technique mitre_technique_id hunting_trigger mitre_subtechnique mitre_subtechnique_id apt mitre_link creator upload_date last_modify_date mitre_version priority
    | collect `the_new_index`

I'm trying to have a CSV with all of the eval fields as columns, so that when a value hits in the search field, it populates the same data as all of our reports do, but from a single lookup.
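If the goal is one CSV carrying the per-report metadata, a sketch of the lookup-based shape (the file name hunting_metadata.csv and the key column search_field are my assumptions): keep one row per report in the CSV, with the key plus every former eval field as columns, and each report's block of evals collapses to a single lookup call:

```
`indextime` `sysmon` <SEARCH>
| `process_create_whitelist`
| lookup hunting_metadata.csv search_field OUTPUT mitre_category mitre_technique mitre_technique_id hunting_trigger mitre_subtechnique mitre_subtechnique_id apt mitre_link creator upload_date last_modify_date mitre_version priority
| collect `the_new_index`
```

Defining the same lookup as an automatic lookup in props.conf would remove even that line from each report, at the cost of applying it to every search against that sourcetype.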
I had used Splunk Enterprise (free trial version) and the Universal Forwarder on my PC (Windows 11), but I uninstalled them because of trouble with my PC. I want to reinstall Splunk Enterprise and the UF, but the installers output an error: "This version of Splunk Enterprise has already been installed on this PC." I tried deleting the Splunk and Universal Forwarder registry entries and program files, and running the command "sc delete Splunk" in cmd, but the installers' output is the same. If you know how to troubleshoot this, please tell me.
I need a replacement for the command wc -l, because I want to show on a dashboard the total number of messages from a source.
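In SPL, the line-count role of wc -l is typically filled by stats count. A sketch (index and source names assumed):

```
index=your_index source="your_source"
| stats count AS total_messages
```

For a dashboard panel showing the count over time, `| timechart count` works in place of the stats line.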