All Topics


Hi, I have a list column with different values and I want to count the number of occurrences of a specific value. For example, I have a column temp which contains a list of values like below:

temp
/review-basket
/review-basket
/review-basket
/check-your-details

Now I want another column with a count of the occurrences of /review-basket, which in the above case is 3. Any help is appreciated.
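One way to do this, assuming temp is a multivalue field within a single event, is to combine mvfilter and mvcount. A minimal sketch (the makeresults line just fabricates test data; the field and value names are taken from the question):

```
| makeresults
| eval temp=split("/review-basket,/review-basket,/review-basket,/check-your-details", ",")
| eval review_basket_count=mvcount(mvfilter(match(temp, "^/review-basket$")))
```

If the values are instead spread across separate events, a conditional count such as | stats count(eval(temp="/review-basket")) AS review_basket_count may be closer to what is needed.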
Hello all, as far as I know Splunk merges all props.conf files (from all TAs, apps, and add-ons) into one single effective props.conf, like the other .conf files. Values can be overwritten depending on placement in the folder structure (default/local, system/default, etc/default, etc.). As far as I understand, props.conf is consulted several times during input processing (4x in the data pipeline).

I have an input from a UDP port:

[udp://1516]
connection_host = dns
index = net
sourcetype = syslog

Is it right that the props.conf from Splunk_TA_juniper (see below) does not apply to this input, because the stanza header [<spec>] = [juniper] means the stanza only applies to inputs with sourcetype=juniper? (See the props.conf documentation, search for: "<sourcetype>, the source type of an event".)

This is the global part of props.conf from the Juniper TA:

###### Globals ######
## Apply the following properties to juniper data
[juniper]
SHOULD_LINEMERGE = false
# For load balancing on UF
EVENT_BREAKER_ENABLE = true
TRANSFORMS-force_info_for_juniper = force_host_for_netscreen_firewall,force_sourcetype_for_netscreen_firewall,force_sourcetype_for_juniper_nsm,force_sourcetype_for_juniper_nsm_idp,force_sourcetype_for_juniper_sslvpn,force_sourcetype_for_junos_firewall,force_sourcetype_for_juniper_idp,force_sourcetype_for_junos_idp,force_sourcetype_for_junos_aamw,force_sourcetype_for_junos_secintel

If I understand correctly, the first stanza [juniper] means the settings in this part of props.conf only apply if the input stream has sourcetype=juniper; if that is not the case, the stanza does nothing. So if I mess up my input sourcetypes, it is possible that a Splunk_TA_* does nothing. Further down the props.conf the sources are also relevant, but for this specific part, is it mandatory that the sourcetype equals juniper?

I ask this very specifically because I plan not to use the default UDP input ports. Instead I want to use syslog-ng, which could mess up sourcetypes AND sources. That would mean I have to look into every props.conf for this kind of input to verify which input source or sourcetype it reacts to, to make sure the config applies to my data input. If I'm wrong on something, please let me know. My explanation got a bit longer than it should have; I spent a long time in the docs and existing questions extracting this much information, so if there is no false claim here it could help someone else too. Best regards, Michele E.
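If that reading is right, the usual fix is to assign, at input time, the sourcetype the TA keys on. A minimal sketch, assuming the TA's global stanza only matches sourcetype = juniper:

```
# inputs.conf on the receiving instance -- give the stream the
# sourcetype that the TA's [juniper] stanza matches:
[udp://1516]
connection_host = dns
index = net
sourcetype = juniper
```

Note that props.conf stanzas can also match on [source::...] and [host::...] patterns, so a TA may still apply even when the sourcetype differs; running splunk cmd btool props list --debug shows which stanzas a given input actually resolves to.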
Hi, does anyone have a package of a SharePoint add-on to monitor on-prem SharePoint? Or any suggestion for monitoring SharePoint on-premises? Why is a SharePoint add-on not available on Splunkbase? Regards, Ansif
I added a custom object as one of the inputs, but I am not able to see the records in Splunk. It is not visible under sources either. The login and authentication to the Salesforce org are successful, as I am able to see logs/login history.

inputs.conf:

[sfdc_object://Expense_c]
account = san_eng24
interval = 40
limit = 1000
object = Expensec
object_fields = Amountc,CreatedById,LastModifiedById,Reimbursedc,Name,Date_c
order_by = Name

Note: in the above inputs.conf, all objects and custom fields are appended with underscore underscore 'c'. This forum does not accept the special characters, so they display incorrectly here. Can anyone help with this?
I have a Python script configured on the HF. The script output is enclosed with the Unicode marker u', so Splunk is unable to parse the fields properly.

Script output:

{u'rows': [{u'timestamp': 1585924500001L, u'RAM_': 1000, u'Allocated': 4000.78, u'queue': 0, u'Details': u'Connected': 2, u'Queue': 0, u'EventsQueue': 0}]

I included a CHARSET setting in props.conf, but it still does not parse properly.

props.conf:

[sourcetype::GetStreamData]
CHARSET=utf-8
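The u'...' prefixes (and the trailing L on the long) are Python 2's repr of the object: the script is printing the Python dict itself rather than serialized output, and CHARSET cannot fix that because the bytes are already valid UTF-8. A minimal sketch of the usual fix, serializing to JSON before printing (the field names below are copied from the question and otherwise illustrative):

```python
import json

# Printing a dict directly emits Python repr (u'...' prefixes, trailing L
# on longs), which Splunk cannot parse as key-value pairs or JSON.
rows = {"rows": [{"timestamp": 1585924500001,
                  "RAM_": 1000,
                  "Allocated": 4000.78,
                  "queue": 0}]}

# json.dumps produces clean JSON that Splunk's JSON parsing handles natively.
print(json.dumps(rows))
```

Separately, note that a sourcetype stanza in props.conf is written as [GetStreamData], not [sourcetype::GetStreamData]; the sourcetype:: prefix is not valid stanza syntax there.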
We are looking for the Zscaler admin portal "ZPA Current Connected Users" count to display on the zscalersplunkapp dashboard. So far there is only a "Private Access Overview" with top users, top policies, top connectors, top app groups, and top applications available. Is it possible for zscalersplunkapp to display "ZPA Current Connected Users" with the same result as the Zscaler admin portal?
We migrated Splunk ES from an old Windows server to a new Linux server. Everything is good to go except we want to copy the old data from the incident_review KV store. It seemed simple to |inputlookup incident_review on the old search head and download that to a .csv (old_kv.csv), which could be uploaded to the new search head, where |inputlookup old_kv.csv | outputlookup incident_review append=t would merge the old data into the new KV store. Seems pretty straightforward, but I don't know how the notable index is joined to the incident_review KV store in ES. Does anyone know if this would work?
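For reference, the merge described above would look like this on the new search head, assuming old_kv.csv has been uploaded as a lookup there (lookup and collection names taken from the question):

```
| inputlookup old_kv.csv
| outputlookup incident_review append=t
```

One thing worth verifying before trusting the result: notable events in the notable index are typically joined to incident_review by the rule_id field, so the migrated statuses will only line up if the notables themselves came over with their rule_id values intact. That is an assumption to check against your ES version's documentation, not a guarantee.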
I have a requirement to duplicate a default Splunk sourcetype. The duplicate sourcetype is based on the JSON sourcetype. Effectively, my new sourcetype is configured as follows:

[json:myapp]
INDEXED_EXTRACTIONS = JSON
TIMESTAMP_FIELDS = date
TIME_FORMAT = %Y%m%d
TZ = ****** [redacted]
detect_trailing_nulls = auto
SHOULD_LINEMERGE = false
KV_MODE = none
AUTO_KV_JSON = false

My question is around deploying this to my distributed environment. On a standalone instance I would create a local props.conf under etc/system/local with this inserted. However, in a distributed environment (with cluster master, indexer cluster, search heads, etc.), should this be deployed as a typical app, or must I manually adjust each instance in my environment to pick this up? The data is not being handled differently from standard JSON, so it uses the same transforms that JSON would use, and the indexed extractions are no different. The only item being added here is the time zone. I am not changing the whole JSON sourcetype, as this one particular device needs to be handled differently from everything else, hence the need. The question is more around duplicating existing default sourcetypes.
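For a clustered environment, the usual pattern is to ship the stanza as a small app rather than editing etc/system/local on each node. A sketch, assuming a master-apps deployment for the indexer tier (the app name my_json_tz is hypothetical):

```
# On the cluster master, place the stanza in:
#   $SPLUNK_HOME/etc/master-apps/my_json_tz/default/props.conf
[json:myapp]
INDEXED_EXTRACTIONS = JSON
TIMESTAMP_FIELDS = date
TIME_FORMAT = %Y%m%d
TZ = <your timezone>
SHOULD_LINEMERGE = false
KV_MODE = none
AUTO_KV_JSON = false

# Then push the bundle to the peers:
#   splunk apply cluster-bundle
```

Index-time settings such as INDEXED_EXTRACTIONS need to live where parsing happens (for structured data that can be the forwarder itself), while search-time settings such as KV_MODE belong on the search heads, so the same small app is often deployed to both tiers (and to forwarders via a deployment server).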
As an O365 admin I have a TLS email connector set up for my Splunk Cloud instance, but I am unable to find the corresponding configurations on the Splunk Cloud side. It is not within Settings > Email settings in Splunk Cloud. Where else might I find the configurations?
Hi, I have Splunk 8.0.0 on AWS with a clustered indexer setup (1 master and 4 indexers), and I have deployed custom test apps (with basic monitoring for Windows/Linux logs) on the servers that have forwarders installed. I have enabled the indexer discovery feature in the outputs.conf file (local folder) for these apps and in the server.conf file of the cluster master (etc/system/local), but I see the following error in the forwarder logs:

04-05-2020 16:57:53.752 +1000 ERROR IndexerDiscoveryHeartbeatThread - Error in Indexer Discovery communication. Verify that the pass4SymmKey set under [indexer_discovery:target1] in 'outputs.conf' matches the same setting under [indexer_discovery] in 'server.conf' on the Cluster Master. [uri=https://clustermaster:8089/services/indexer_discovery http_code=502 http_response="Error connecting: Connect Timeout"]

I have ensured that the pass4SymmKey attribute is the same in outputs.conf on the forwarders and in server.conf on the cluster master (in their respective indexer discovery sections), but I still see this error. Any pointers that would help me resolve this?
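For reference, a minimal matched pair of configurations looks like this (the target group name is taken from the error message; the key value and tcpout group name are placeholders):

```
# outputs.conf on the forwarder
[indexer_discovery:target1]
pass4SymmKey = <shared_key>
master_uri = https://clustermaster:8089

[tcpout:group1]
indexerDiscovery = target1

# server.conf on the cluster master
[indexer_discovery]
pass4SymmKey = <shared_key>
```

That said, the error body shows http_code=502 with "Error connecting: Connect Timeout", which points at network reachability rather than a key mismatch: it may be worth confirming the forwarder can actually reach clustermaster on port 8089 (AWS security group and firewall rules are a common culprit).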
Hi, Has anyone been able to successfully automate Splunk UF for Windows upgrades using Powershell? Thanks, AKN
We are using Splunk 7.3.3. We have multiple indexes. Could someone please tell me how to check the daily data ingestion volume for each index?
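One common approach is to query license_usage.log in the _internal index (this assumes the search runs on, or can reach, the license master; the idx field holds the index name):

```
index=_internal source=*license_usage.log type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_GB by idx
```

Alternatively, | dbinspect index=* reports on-disk bucket sizes per index, which answers a slightly different question (storage used rather than volume ingested).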
any suggestions?

04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_cyberark_pta_cef_extract_field_6'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_cyberark_pta_cef_extract_field_3'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_cyberark_pta_cef_extract_field_0'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_cyberark_pta_cef_extract_field_6'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_cyberark_pta_cef_extract_field_3'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_cyberark_pta_cef_extract_field_0'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_extract_field_15'.
04-05-2020 19:07:03.869 -0500 WARN SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cyberark_epv_cef_extract_field_13'.
I'm running the query below to find out when an index last checked in. However, the output reflects a time in epoch format. I'd like to convert it to a standard month/day/year format. Any help is appreciated. Thank you.

| tstats latest(_time) WHERE index=* BY index
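One way to do this, sticking with the query from the question, is to name the tstats result and format it with strftime (the output field name latest_time is arbitrary):

```
| tstats latest(_time) AS latest_time WHERE index=* BY index
| eval latest_time = strftime(latest_time, "%m/%d/%Y %H:%M:%S")
```

The AS clause matters: without it the column is literally named latest(_time), which is awkward to reference in eval.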
I am trying to save on space and licensing with my IIS logs. Currently the vast majority of my logs are just constant health checks from our load balancers or security tools. I would like to filter these out by their user agent strings before they are indexed. Currently I have user agent strings from KEMP, Cloudflare, Nessus, and Tenable that I would like to filter out.

On the indexer I went into \SPLUNKHOME\etc\apps\Splunk_TA_microsoft-iis\local, modified the props.conf file, and added this line to the sourcetype stanza I use for the logs:

TRANSFORMS-null= setnull

In the same folder I modified the transforms.conf file and added this stanza:

[setnull]
REGEX = cs_User_Agent_="(?i)(\S*kemp*[^\s]+|\S*Cloudflare*[^\s]+|\S*Nessus*[^\s]+|\S*tenable*[^\s]+)"
DEST_KEY = queue
FORMAT = nullQueue

Should the filtering happen at the indexer, or should I move the settings to the props.conf and transforms.conf files in the app I deploy to the UF? Maybe my regex is just not right; I could not find a good example and guessed at how to reference the field to parse. Hopefully someone can let me know if I am even close to getting it right.
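One likely issue: index-time transforms run against the raw event text (_raw) before any fields exist, so a REGEX written as cs_User_Agent_="..." will never match because that field name does not appear in the raw data. A hedged sketch of an alternative (the sourcetype stanza name below is illustrative; use whatever your IIS data actually gets):

```
# props.conf on the instance that parses the data
[ms:iis:auto]
TRANSFORMS-null = setnull

# transforms.conf -- match the agent strings directly in the raw text
[setnull]
REGEX = (?i)(kemp|cloudflare|nessus|tenable)
DEST_KEY = queue
FORMAT = nullQueue
```

On placement: a universal forwarder normally does not run index-time transforms (it forwards unparsed data), so filtering generally happens on the indexer or an intermediate heavy forwarder. One caveat to check: if the IIS TA uses INDEXED_EXTRACTIONS, events may arrive at the indexer already parsed, in which case the pair above may need to live on the forwarder side instead. The broad regex here drops any event containing those substrings anywhere, so tighten it if that is too aggressive.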
Running Enterprise 8.0.2.1. Data is coming in from a universal forwarder with index=syslog sourcetype=syslog, and I'm trying to filter out unwanted messages. Here's a sample of the data:

2020-04-05T20:06:41.435487+00:00 HOST123 2020-04-05 20:06:41,424 Level="INFO" Name="support.bfcp" Message="Received BFCP message" Dst-address="x.x.x.x" Dst-port="41890" Src-address="y.y.y.y" Src-port="28888" Call-id="00000000-1111-2222-3333-444444444444" Primitive="Hello" Transaction-id="1014"
2020-04-05T20:06:37.552312+00:00 HOST123 2020-04-05 20:06:37,551 Level="INFO" Name="support.ice" Message="ICE new-local-candidate event" Media-type="h224" Stream-id="4" Component-id="RTCP" Local-candidate-type="host" Local-candidate-address="x.x.x.x" Local-candidate-port="41659" Local-candidate-transport="udp" Call-id="None"
2020-04-05T20:09:08.286431+00:00 HOST123 2020-04-05 20:09:08,269 Level="INFO" Name="support.participant" Message="Media Stream created" Participant="Patient" Call-id="00000000-1111-2222-3333-444444444444" Conversation-id="00000000-1111-2222-3333-444444444444" Detail="Stream 1 (video)"

I want to send certain events to nullQueue based on the Name="blah" field, so I naively did the following on the indexer:

/opt/splunk/etc/system/local/props.conf:

[syslog]
TRANSFORMS-mysystem = mysystem-nullqueue

/opt/splunk/etc/system/local/transforms.conf:

[mysystem-nullqueue]
DEST_KEY = queue
REGEX = Name=\"support\.(ice|bfcp|sip|rest|h323|dns)
FORMAT = nullQueue

Output of splunk cmd btool XXX list --debug for XXX=transforms/props:

/opt/splunk/etc/system/local/transforms.conf   [mysystem-nullqueue]
/opt/splunk/etc/system/default/transforms.conf CAN_OPTIMIZE = True
/opt/splunk/etc/system/default/transforms.conf CLEAN_KEYS = True
/opt/splunk/etc/system/default/transforms.conf DEFAULT_VALUE =
/opt/splunk/etc/system/default/transforms.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/local/transforms.conf   DEST_KEY = queue
/opt/splunk/etc/system/local/transforms.conf   FORMAT = nullQueue
/opt/splunk/etc/system/default/transforms.conf KEEP_EMPTY_VALS = False
/opt/splunk/etc/system/default/transforms.conf LOOKAHEAD = 4096
/opt/splunk/etc/system/default/transforms.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/transforms.conf MV_ADD = False
/opt/splunk/etc/system/local/transforms.conf   REGEX = Name=\"support\.(ice|bfcp|sip|rest|h323|dns)
/opt/splunk/etc/system/default/transforms.conf SOURCE_KEY = _raw
/opt/splunk/etc/system/default/transforms.conf WRITE_META = False

/opt/splunk/etc/apps/search/local/props.conf   [syslog]
/opt/splunk/etc/system/default/props.conf      ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf      ANNOTATE_PUNCT = True
/opt/splunk/etc/system/default/props.conf      AUTO_KV_JSON = true
/opt/splunk/etc/system/default/props.conf      BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf      BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf      CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf      DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf      DEPTH_LIMIT = 1000
/opt/splunk/etc/apps/search/local/props.conf   EXTRACT-mysystem-syslog-apache = apache2.\d+.: (?<srcip>\S+).*?\"(?<method>\S+) (?<url>[^ ?]+)\?*(?<query>\S*) \S+\" \S+ (?<respcode>\d+) (?<respbytes>\S+) (?<resptime>\d+)
/opt/splunk/etc/apps/search/local/props.conf   FIELDALIAS-syslog_dst_address = Dst_address ASNEW dest Dst_port ASNEW dest_port Src_address ASNEW src Src_port ASNEW src_port
/opt/splunk/etc/system/default/props.conf      HEADER_MODE =
/opt/splunk/etc/system/default/props.conf      LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf      LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf      LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf      MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf      MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf      MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf      MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf      MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf      MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf      MAX_TIMESTAMP_LOOKAHEAD = 32
/opt/splunk/etc/system/default/props.conf      MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf      MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf      MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf      REPORT-syslog = syslog-extractions
/opt/splunk/etc/system/default/props.conf      SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf      SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf      SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf      SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf      SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf      SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf      SHOULD_LINEMERGE = False
/opt/splunk/etc/system/default/props.conf      TIME_FORMAT = %b %d %H:%M:%S
/opt/splunk/etc/system/default/props.conf      TRANSFORMS = syslog-host
/opt/splunk/etc/system/local/props.conf        TRANSFORMS-mysystem = mysystem-nullqueue
/opt/splunk/etc/system/default/props.conf      TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf      category = Operating System
/opt/splunk/etc/system/default/props.conf      description = Output produced by many syslog daemons, as described in RFC3164 by the IETF
/opt/splunk/etc/system/default/props.conf      detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf      maxDist = 3
/opt/splunk/etc/system/default/props.conf      priority =
/opt/splunk/etc/system/default/props.conf      pulldown_type = true
/opt/splunk/etc/system/default/props.conf      sourcetype =

After a config refresh or a restart of Splunk, the syslog index is still adding new entries containing Name="support.rest" or Name="support.ice". How do I further debug nullQueue not working?
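Given that btool shows the transform resolving correctly, one way to narrow this down is to test the pieces separately (the searches below are illustrative):

```
# 1. Confirm the regex itself matches real events, at search time:
index=syslog sourcetype=syslog
| regex _raw="Name=\"support\.(ice|bfcp|sip|rest|h323|dns)"

# 2. Confirm this indexer is actually parsing the data, e.g. by checking
#    its parsing queue activity:
index=_internal host=<this_indexer> source=*metrics.log group=queue name=parsingqueue
```

A common cause of exactly this symptom: each event is parsed only once in the pipeline. If the events pass through an intermediate heavy forwarder before reaching this indexer, the nullQueue props/transforms must be installed on that heavy forwarder, because the indexer receives already-cooked data and skips parsing for it.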
During installation I am receiving two error messages, one immediately after the other. The first states:

Internal error: Could not read from D:\applications\appdynamics\Platform\installer\jre\bin\server\classes.jsa

This message is immediately followed by:

Failed to copy jre into platform-admin directory

I am installing on a Windows system. This sounds like either a file path issue, where a copy of Java cannot be found on my system, or a file permissions issue. I have tried many of my usual solutions to these types of problems, but none of them have worked.
I am trying to pull Histogram metrics into Splunk 8.0 (local), and http_event_collector_metrics.log seems to say that I am enabled and working, but I never get any data in the reads ("datetime":"04-01-2020 12:00:09.795 -0400") - status code 0400 all the time. Prometheus is up and running and serving metrics (confirmed through the Prometheus client and http://localhost:9090/metrics). Any ideas what is wrong here?
Once compressed, can Splunk data no longer be changed? Can someone guide me to the proper reference article?
Hi, I'm new to AppDynamics and I'm getting error messages regarding the proxy when using the Python agent. I would appreciate any help resolving this issue.

If I follow the steps in the documentation and just create an /etc/appdynamics.cfg file:

[agent]
app = Test App
tier = Test Tier
node = node 0adb

[controller]
host = XXXX.saas.appdynamics.com
port = 443
ssl = (on)
account = XXX
accesskey = XXX

I get this error:

19:21:00,008  WARN [AD Thread-Metric Reporter0] MetricHandler - Metric Reporter Queue full. Dropping metrics.
19:21:20,171  INFO [AD Thread Pool-Global1] ControllerTimeSkewHandler - Controller Time Skew Handler Run Aborted - Skew Check is Disabled
19:21:23,040  INFO [AD Thread Pool-Global1] ConfigurationChannel - Detected node meta info: [Name:ProcessID, Value:4416, Name:appdynamics.ip.addresses, Value:fe80:0:0:0:d453:dc4:33ad:b620%enp0s3,192.168.1.15]
19:21:23,040  INFO [AD Thread Pool-Global1] ConfigurationChannel - Sending Registration request with: Application Name [Test App], Tier Name [Test Tier], Node Name [node 0adb], Host Name [ChrisUbuntu-VM] Node Unique Local ID [node 0adb], Version [Python Agent v20.3.0.0 (proxy v4.5.16.28134, agent-api v4.3.5.0)]
19:21:23,559 ERROR [AD Thread Pool-Global1] ConfigurationChannel - Fatal transport error while connecting to URL [/controller/instance/0/applicationConfiguration]: org.apache.http.NoHttpResponseException: XXXX.saas.appdynamics.com:443 failed to respond
19:21:23,559  WARN [AD Thread Pool-Global1] ConfigurationChannel - Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [XXX.saas.appdynamics.com], port[443], exception [Fatal transport error while connecting to URL [/controller/instance/0/applicationConfiguration]]
19:22:00,008  WARN [AD Thread-Metric Reporter0] MetricHandler - Metric Reporter Queue full. Dropping metrics.
19:22:19,066  WARN [AD Thread Pool-Global1] EventGenerationService - The retention queue is at full capacity [5]. Dropping events for timeslice [Sun Apr 05 19:17:00 AEST 2020] to accomodate events for timeslice [Sun Apr 05 19:22:00 AEST 2020]

If I instead edit controller-info.xml in /usr/local/lib/python3.6/dist-packages/appdynamics_bindeps/proxy/conf/controller-info.xml with:

<controller-host>EDITED.saas.appdynamics.com</controller-host>
<controller-port>443</controller-port>
<application-name>Test App</application-name>
<tier-name>Test Tier</tier-name>
<node-name>node 0adb</node-name>
<account-name>EDITED</account-name>
<account-access-key>EDITED</account-access-key>
<use-ssl-client-auth>true</use-ssl-client-auth>

then I get these errors (log file /tmp/appd/logs/proxyCore.2020_04_05__20_12_05.0.log):

[Thread-1] 05 Apr 2020 20:12:41,174  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Creating new node in proxy for node appName:Test App tierName:Test Tier nodeName:node 0adb
[Thread-1] 05 Apr 2020 20:12:41,175  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Comm address for the start node request: [app.name=Test App,tier.name=Test Tier,node.name=node 0adb,controller.host.name=XXXXX.saas.appdynamics.com,account.name=XXXX,account.key=XXXX,controller.port=443] is: /tmp/appd/run/comm/proxy-7826785417410579370/n15
[AD Thread Pool-ProxyControlReq0] 05 Apr 2020 20:12:41,379  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Removed lock for start node request for key [app.name=Test App1,tier.name=Test Tier1,node.name=node 0adb,controller.host.name=XXX.saas.appdynamics.com,account.name=XXX,account.key=01sbc6q69823,controller.port=443]
[AD Thread Pool-ProxyControlReq0] 05 Apr 2020 20:12:41,381  INFO com.singularity.proxyControl.ProxyMultiNodeManager - Proxy for node [app.name=Test App,tier.name=Test Tier,node.name=node 0adb,controller.host.name=XXXX.saas.appdynamics.com,account.name=XXX,account.key=XXXX,controller.port=443] has been started
[Thread-1] 05 Apr 2020 20:12:42,396 ERROR com.singularity.proxyControl.ProxyMultiNodeManager - Error while starting a new proxy node
java.lang.NullPointerException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyMultiNodeManager.startProxy(ProxyMultiNodeManager.java:190)
        at com.appdynamics.ee.agent.proxy.bootstrap.ZeroMQControlServer$ReqHandler.run(ZeroMQControlServer.java:312)
        at java.lang.Thread.run(Thread.java:748)
[Thread-1] 05 Apr 2020 20:12:42,397 ERROR com.singularity.proxyControl.ProxyNode - Error while shutting down proxy node
java.lang.NullPointerException
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyNode.shutdown(ProxyNode.java:152)
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyMultiNodeManager.stopProxyNode(ProxyMultiNodeManager.java:222)
        at com.appdynamics.ee.agent.proxy.bootstrap.multiagent.ProxyMultiNodeManager.startProxy(ProxyMultiNodeManager.java:207)
        at com.appdynamics.ee.agent.proxy.bootstrap.ZeroMQControlServer$ReqHandler.run(ZeroMQControlServer.java:312)
        at java.lang.Thread.run(Thread.java:748)