All Posts



Hi @karthi2809, if currStatus has the values enabled or disabled, please try something like this:

"content.currStatus"="*" content.currStatus!="Interface has no entry found in object Store"
| rename content.currStatus AS currStatus
| stats count(eval(currStatus="enabled")) AS enabled_count count(eval(currStatus="disabled")) AS disabled_count last(currStatus) AS last_currStatus BY correlationId

One additional hint: always specify the index containing these events; the search will be faster and you can be sure it finds the events.

Ciao.
Giuseppe
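The stats logic above can be sketched in plain Python; the sample events and correlationId values below are invented for illustration, and last() is approximated by the last value seen in iteration order:

```python
from collections import defaultdict

# Invented sample events: each has a correlationId and a currStatus value.
events = [
    {"correlationId": "abc-1", "currStatus": "enabled"},
    {"correlationId": "abc-1", "currStatus": "disabled"},
    {"correlationId": "abc-1", "currStatus": "enabled"},
    {"correlationId": "xyz-2", "currStatus": "disabled"},
]

# Mirror of: stats count(eval(...)) AS ... last(currStatus) AS ... BY correlationId
stats = defaultdict(lambda: {"enabled_count": 0, "disabled_count": 0, "last_currStatus": None})
for e in events:
    row = stats[e["correlationId"]]
    if e["currStatus"] == "enabled":
        row["enabled_count"] += 1
    elif e["currStatus"] == "disabled":
        row["disabled_count"] += 1
    # Approximation of SPL last(): keep the value of the last event processed.
    row["last_currStatus"] = e["currStatus"]
```

With these sample events, correlationId abc-1 ends up with enabled_count 2, disabled_count 1, and last_currStatus "enabled".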
Use the below as an example, using both props and transforms. Change to the sourcetype you are using and, if it works, change the group names as desired.

props.conf:
[my_sourcetype]
REPORT-my_service = extract_service

transforms.conf:
[extract_service]
SOURCE_KEY = source
REGEX = [^:]+:(?<my_service>.+)$
FORMAT = my_service::$1
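As a quick way to check the transforms regex outside Splunk, here is the same pattern in Python (Python writes named groups as ?P<...>; the source value below is a made-up example):

```python
import re

# The transforms.conf REGEX, translated to Python named-group syntax.
pattern = re.compile(r"[^:]+:(?P<my_service>.+)$")

# Hypothetical source value for illustration.
source = "prefix:billing-service"
m = pattern.search(source)
service = m.group("my_service") if m else None
```

Everything after the first colon is captured into the named group, mirroring what the FORMAT line writes into my_service.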
Thank you very much for your reply, @gcusello! I have some questions about your post: where can I configure how long the UF stores the logs when the connection is interrupted? Also, how can I find the location where the UF stores these logs; is it a file within the add-on? And finally, what is the capacity of the file(s) where the logs are stored in this scenario before the connection to the Splunk machine is re-established?
Try something like this:

"content.currStatus"="*" [search <your index> | stats latest(correlationId) as correlationId | table correlationId | format]
| where currStatus!="Interface has no entry found in object Store"
| stats count by currStatus
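The logic of this search (a subsearch to find the latest correlationId, then filter and count) can be sketched in plain Python; the sample events below are invented for illustration:

```python
from collections import Counter

# Invented sample events with a timestamp, correlationId and currStatus.
events = [
    {"_time": 1, "correlationId": "abc-1", "currStatus": "enabled"},
    {"_time": 2, "correlationId": "abc-1", "currStatus": "disabled"},
    {"_time": 3, "correlationId": "xyz-2", "currStatus": "enabled"},
    {"_time": 3, "correlationId": "xyz-2", "currStatus": "enabled"},
]

# Step 1 (the subsearch): the correlationId of the most recent event.
latest_id = max(events, key=lambda e: e["_time"])["correlationId"]

# Step 2 (the where clause): keep that correlationId, drop the error text.
kept = [e for e in events
        if e["correlationId"] == latest_id
        and e["currStatus"] != "Interface has no entry found in object Store"]

# Step 3 (stats count by currStatus).
counts = Counter(e["currStatus"] for e in kept)
```

With the sample data, only the two xyz-2 events survive, giving a count of 2 for "enabled".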
It looks like the app was not re-packaged properly on the Linux box.  Perhaps an extra directory level was added.
Hi All, the Java app agent is not showing under Tier and Node, and in the app agent logs I can see the following messages:

[AD Thread Pool-Global1] 02 May 2024 21:21:31,149 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global0] 02 May 2024 21:21:55,817 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global1] 02 May 2024 21:21:55,832 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread Pool-Global1] 02 May 2024 21:22:55,833 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global0] 02 May 2024 21:22:55,848 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread Pool-Global1] 02 May 2024 21:23:55,849 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global0] 02 May 2024 21:23:55,866 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread Pool-Global1] 02 May 2024 21:25:53,942 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global1] 02 May 2024 21:25:58,948 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global1] 02 May 2024 21:30:53,887 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global0] 02 May 2024 21:35:53,895 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global0] 02 May 2024 21:40:53,906 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global1] 02 May 2024 21:45:53,910 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global0] 02 May 2024 21:49:56,265 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global1] 02 May 2024 21:49:56,279 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread Pool-Global0] 02 May 2024 21:50:53,910 INFO DynamicRulesManager - The config directory /apps/dynamics/ver23.2.0.34668/conf/outbound--2 is not initialized, not writing /apps/dynamics/ver23.2.0.34668/conf/outbound--2/bcirules.xml
[AD Thread Pool-Global1] 02 May 2024 21:50:56,280 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global0] 02 May 2024 21:50:56,295 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread Pool-Global0] 02 May 2024 21:51:56,296 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global1] 02 May 2024 21:51:56,309 WARN NetVizConfigurationChannel - NetViz: Number of communication failures with netviz agent exceeded maximum allowed [3]. Disabling config requests.
[AD Thread Pool-Global0] 02 May 2024 21:52:56,310 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=3.2.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)

Regards,
Mandar Kadam
09-14-2017 10:43:30.132 -0400 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 10994 - data_source="json.txt", data_host="n00bserver.n00blab.local", data_sourcetype="someSourcetype"

How did you go about solving this?
Hi @Poojitha, please try this:

| rex field=source ":(?<your_field>\w+)$"

or in props.conf:

EXTRACT-service = source([^:]+:[^:]+:(?<service>.+)$) in source

You can test the regex at https://regex101.com/r/NBjX8h/1

Ciao.
Giuseppe
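For reference, the two-colon regex can also be checked in Python against the sample source value from the question (Python writes named groups as ?P<...>):

```python
import re

# The EXTRACT regex translated to Python named-group syntax.
pattern = re.compile(r"[^:]+:[^:]+:(?P<service>.+)$")

# Sample source value from the question.
source = "234234324234:us-west-2:firehose_list_tags_for_resource"
m = pattern.search(source)
service = m.group("service") if m else None
```

The first two colon-delimited segments are consumed by the two [^:]+ parts, so the named group captures everything after the second colon.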
Hi All, I am trying to get a count of enabled and disabled values from a field, and then show the field values based on the latest correlationId. The currStatus field is updated every 10 minutes.

"content.currStatus"="*"
| stats values(content.currStatus) as currStatus by latest(correlationId)
| where currStatus!="Interface has no entry found in object Store"
| stats count by currStatus
Here is the message logged on the peer side during startup, after adding register_replication_address:

05-02-2024 11:26:38.046 +0200 WARN CMSlave [1355571 indexerPipe] - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json manager=CM:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=NEW-IP mgmtport=8089 (reason: Peer with guid=F00C07FD-F8A6-4C15-91D3-8A6CDCF28C96 is already registered and UP). [ event=addPeer status=retrying AddPeerRequest: { active_bundle_id=054CB061CFCD038A4B98FCDB01CE3F2F add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=26 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 guid=F00C07FD-F8A6-4C15-91D3-8A6CDCF28C96 last_complete_generation_id=0 latest_bundle_id=054CB061CFCD038A4B98FCDB01CE3F2F mgmt_port=8089 register_forwarder_address= register_replication_address=NEW-IP register_search_address=NEW-IP replication_port=8087 replication_use_ssl=0 replications= server_name=PEER-NAME site=site1 splunk_version=9.0.7 splunkd_build_number=b985591d12fd status=Up } Batch 1/26 ].
Hi All, I am trying to extract a value from an indexed field, i.e. the source field. I have added the regex in props.conf.

Example:

source = 234234324234:us-west-2:firehose_list_tags_for_resource

I want everything after the second colon as service, i.e. firehose_list_tags_for_resource. I have added the following in props.conf:

EXTRACT-service = source([^:]+:[^:]+:(?<service>.+)$)

This created the field service, but it is fetching the wrong value: it fetches the last part of the raw data. Can anyone help me understand how to extract a field value from indexed data? Should I add something in transforms.conf as well? Any guidance would help me a lot.

Regards,
PNV
I'm not sure I fully understand what's going on here. When you run the following command on your forwarder, do you get results or an error message?

|ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch>

If you get results, then you should be able to simply search against the index on your search head. If you don't get results, then something else is going on: is the app configured correctly? Is the query correct?
I have an input created in the DB Connect app to fetch the necessary rows from a DB2 table. The job is scheduled to run on a daily basis and to fetch only the previous day's data. I have left "Max rows to retrieve" and "Fetch size" at the default settings. Whenever my job runs, it logs the same records twice. I am not sure what is causing this issue. I have attached a screenshot of the entries belonging to a primary key field, where two events are indexed for each record. Could anyone help me troubleshoot the issue?
Hi @splunky_diamond, to my knowledge, the only requirement is that logs must reach Splunk and be searchable! If you cannot be sure of this, there isn't any internal Splunk solution.

If the network connection between the UF and Splunk is interrupted, the UF stores logs locally for some time and, when the connection is available again, it sends all the logs to Splunk. But an attacker could also delete Splunk's temporary files, so there is nothing you can do about that. You can be alerted that there may be an attack when the data flow is interrupted and when, after the network connection is available again, Windows logs are missing.

In other words, Splunk saves your data for a while to avoid data loss during network issues, but these logs could be deleted by an attacker. The only option is to be notified that there may be an attack.

Ciao.
Giuseppe
Hey Folks, we are trying to deploy the machine agent on EKS 1.27. The version of the machine agent is v22.3.0. The pod gets stuck with the below error:

Error in custom provider, javax.xml.ws.WebServiceException: Failed to get a response from /info using a GET request. The error encountered is: java.net.SocketException: Connection refused [machineagent.jar:Machine Agent v22.3.0-3296 GA compatible with 4.4.1.0 Build Date 2022-03-18 19:50:59]
Could not start up the machine agent due to: Failed to get a response from /info using a GET request. The error encountered is: java.net.SocketException: Connection refused
Please see startup.log in the current working directory for details
Please, can someone help me with this solution? I need to provide it to my client. Thank you very much!
Hello @rameshkommu, I don't think there's a way to add placeholder text for any of the inputs. The value that you define in DefaultValue actually gets passed as the value in the panels wherever the input is referenced. You may want to raise an idea on ideas.splunk.com to have the feature included as part of a new release.

Thanks,
Tejas

---
If the above solution helps, an upvote is appreciated.
Hello Splunkers! Imagine a scenario: there is a test environment with Splunk deployed in an ubuntu-server 20.04 virtual machine as an all-in-one deployment. There is a Windows Server 2019 that is sending Windows Event Logs from the Application and Security channels using the Splunk Universal Forwarder along with the Splunk Add-on for Microsoft Windows.

In the normal situation, where there is a stable network connection between the Windows Server 2019 and the Ubuntu machine with Splunk, the logs are delivered to Splunk with no problems. However, imagine there is an adversary who executed a script to disable the network connection on the Windows Server 2019, performed some malicious actions on that machine, and then went to the Event Viewer application and cleared the Security logs so that they never reach Splunk.

My question is: how can we make the Security logs that were deleted by the adversary on Windows Server 2019 through Event Viewer still reach Splunk? To clarify, let's say that after the adversary disabled network access to Splunk, they deleted some users in the domain controller and then cleared the Security logs in Event Viewer. What can we do so that we still get these logs of the adversary's activity on the Windows Server into Splunk for further investigation?

Feel free to ask any additional questions in case this scenario is unclear in some parts. Thanks in advance for taking your time reading and replying to this post!
Forwarder's operation:
- After the forwarder sends a data block, it maintains a copy of the data in its wait queue until it receives an acknowledgment.
- Meanwhile, it continues to send additional blocks.
- If the forwarder doesn't get an acknowledgment for a block within 300 seconds (by default), it closes the connection.

Possibility of data duplication:
- It is possible for the indexer to index the same data block twice.
- This can happen if there is a network problem that prevents an acknowledgment from reaching the forwarder.

Network issues:
- When the network goes down, the forwarder never receives the acknowledgment.
- When the network comes back up, the forwarder resends the data block, which the indexer parses and writes as if it were new data.

Duplicate warning example:

10-18-2010 17:32:36.941 WARN TcpOutputProc - Possible duplication of events with channel=source::/home/jkerai/splunk/current-install/etc/apps/sample_app/logs/maillog.1|host::MrT|sendmail|, streamId=5941229245963076846, offset=131072 subOffset=219 on host=10.1.42.2:9992

Note on useACK:
- When useACK is enabled in outputs.conf on forwarders, and there is a network issue, indexer saturation (for example, pipeline blocks) or a replication problem, the indexers cannot send acknowledgments back to the forwarders.
- Depending on your deployment environment, data duplication can occur.

https://docs.splunk.com/Documentation/Forwarder/8.2.5/Forwarder/Protectagainstthelossofin-flightdata
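The wait-queue behaviour described above can be sketched as a toy Python model. The class and its method names are invented for illustration and are not part of any Splunk API; it only mimics the keep-until-acked and resend-on-timeout logic:

```python
class ForwarderWaitQueue:
    """Toy sketch of the useACK wait queue: a sent block stays queued until it
    is acknowledged, and unacked blocks are flagged for resend after a timeout
    (300 seconds by default)."""

    def __init__(self, ack_timeout=300.0):
        self.ack_timeout = ack_timeout
        self.pending = {}  # block_id -> time the block was sent

    def send(self, block_id, now):
        # Keep a copy until the indexer acknowledges it.
        self.pending[block_id] = now

    def ack(self, block_id):
        # Acknowledged blocks can be dropped from the wait queue.
        self.pending.pop(block_id, None)

    def blocks_to_resend(self, now):
        # Blocks unacked past the timeout get resent; this is how duplicates
        # can appear when the ack (not the data) was lost in transit.
        return [b for b, sent in self.pending.items()
                if now - sent >= self.ack_timeout]
```

For example, a block sent at t=0 whose acknowledgment is lost would be reported for resend at t=300, even if the indexer already wrote the data.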
It looks like there may be something else going on in your search. Please share the full search (in a code block </>). It would also be helpful (and quicker) if you could share some sample anonymised representative events.