All Posts


Hi All, I am trying to get a count of enabled and disabled values from a field, and then show the field values based on the latest correlation ID. The currStatus field is updated every 10 minutes.

"content.currStatus"="*" | stats values(content.currStatus) as currStatus by latest(correlationId) | where currStatus!="Interface has no entry found in object Store" | stats count by currStatus
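A possible rework (a sketch, assuming the intent is one status per correlation ID): aggregate functions such as latest() are not valid inside a stats by clause, so take the latest status per correlationId first and count afterwards:

"content.currStatus"="*"
| stats latest(content.currStatus) as currStatus by correlationId
| where currStatus!="Interface has no entry found in object Store"
| stats count by currStatus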
Here is the message logged on the peer side during startup, after adding register_replication_address:

05-02-2024 11:26:38.046 +0200 WARN CMSlave [1355571 indexerPipe] - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json manager=CM:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=NEW-IP mgmtport=8089 (reason: Peer with guid=F00C07FD-F8A6-4C15-91D3-8A6CDCF28C96 is already registered and UP). [ event=addPeer status=retrying AddPeerRequest: { active_bundle_id=054CB061CFCD038A4B98FCDB01CE3F2F add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=26 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 guid=F00C07FD-F8A6-4C15-91D3-8A6CDCF28C96 last_complete_generation_id=0 latest_bundle_id=054CB061CFCD038A4B98FCDB01CE3F2F mgmt_port=8089 register_forwarder_address= register_replication_address=NEW-IP register_search_address=NEW-IP replication_port=8087 replication_use_ssl=0 replications= server_name=PEER-NAME site=site1 splunk_version=9.0.7 splunkd_build_number=b985591d12fd status=Up } Batch 1/26 ].
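For reference, a minimal sketch of where that setting lives on the peer (values here are placeholders taken from the log above, not a verified fix):

# $SPLUNK_HOME/etc/system/local/server.conf on the indexer peer
[clustering]
mode = slave
master_uri = https://CM:8089
register_replication_address = NEW-IP

The "already registered and UP" remote_error suggests the manager still holds the peer's previous registration under the same GUID, so the new address is being rejected while that entry is alive.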
Hi All, I am trying to extract a value from an indexed field, i.e. from the source field. I have added a regex in props.conf.

Example: source = 234234324234:us-west-2:firehose_list_tags_for_resource

I want everything after the second colon as service, i.e. firehose_list_tags_for_resource. I have added the following in props.conf:

EXTRACT-service = source([^:]+:[^:]+:(?<service>.+)$)

This has created the field service, but it is fetching the wrong value: it is fetching the last part of the raw data. Can anyone help me understand how to extract a field value from indexed data? Should I add something in transforms.conf as well? Please can anyone guide me; it would help me a lot. Regards, PNV
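One thing that may explain this (a sketch, with a placeholder stanza name): EXTRACT runs against _raw by default, which is why the regex is matching the end of the raw event. To run it against another field, props.conf supports appending "in <fieldname>" after the regex:

[your_sourcetype]
EXTRACT-service = [^:]+:[^:]+:(?<service>.+)$ in source

With that form the capture group should pick up everything after the second colon of source, and no transforms.conf entry is needed for a search-time EXTRACT.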
I'm not sure I fully understand what's going on here. When you run the |ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch> command on your forwarder, do you get results or an error message? If you get results, then you should be able to simply search against the index on your search head. If you don't get results, then there's something else going on: is the app configured correctly? is the query correct?
I have an input created in the DB Connect app to fetch the necessary rows from a DB2 table. The job is scheduled to run on a daily basis and to fetch only the previous day's data. I have left the "Max rows to retrieve" and "Fetch size" at their default settings. Whenever my job runs, it is logging the same records twice. I am not sure what is causing this issue. I have attached a screenshot of the entries belonging to a primary key field where two events are indexed for each record. Could anyone help me in troubleshooting the issue?
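A quick check that may help narrow this down (a sketch; the index, sourcetype, and key field names are placeholders):

index=your_db_index sourcetype=your_db_sourcetype
| stats count dc(_time) as distinct_times by primary_key
| where count > 1

If the duplicates share an identical _time, the same execution is likely being indexed twice (for example, two enabled copies of the input); if the times differ, consecutive runs are probably re-selecting overlapping rows.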
Hi @splunky_diamond , to my knowledge, the only requirement is that logs must reach Splunk and be searchable! If you cannot be sure about this, there isn't any internal Splunk solution. If the network connection between the UF and Splunk is interrupted, the UF stores logs locally for some time and, when the connection is available again, it sends all the logs to Splunk; but an attacker could also delete Splunk's temp files, so there is nothing you can do about that. You can be alerted that there may be an attack when the data flow is interrupted and when, after the network connection is available again, Windows logs are missing. In other words, Splunk saves your data for a while, to avoid data loss during network issues, but these logs could be deleted by an attacker. The only way is to be notified that there could be an attack. Ciao. Giuseppe
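As a sketch of the "be notified" part (index name and threshold are placeholders), an alert along these lines can flag hosts whose data flow has stopped:

| metadata type=hosts index=wineventlog
| eval minutes_since_last_event=round((now() - recentTime) / 60)
| where minutes_since_last_event > 30

Scheduled regularly, this surfaces any forwarder that has gone quiet, whether from a network outage or from tampering.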
Hey Folks, we are trying to deploy the machine agent on EKS 1.27. The version of the machine agent is v22.3.0. The pod gets stuck with the below error:

Error in custom provider, javax.xml.ws.WebServiceException: Failed to get a response from /info using a GET request. The error encountered is: java.net.SocketException: Connection refused [machineagent.jar:Machine Agent v22.3.0-3296 GA compatible with 4.4.1.0 Build Date 2022-03-18 19:50:59]

Could not start up the machine agent due to: Failed to get a response from /info using a GET request. The error encountered is: java.net.SocketException: Connection refused. Please see startup.log in the current working directory for details.
Please, can someone help me with this solution? I need to provide it to my client. Thank you very much!
Hello @rameshkommu , I don't think there's a possibility to add placeholder text for any of the inputs. The value that you define in the DefaultValue actually gets passed as the value in the panels wherever the input is referenced. You may want to raise an idea on ideas.splunk.com to have the feature included as part of a new release.

Thanks, Tejas.

--- If the above solution helps, an upvote is appreciated.
Hello Splunkers! Imagine a scenario: there is a test environment with Splunk deployed on an ubuntu-server 20.04 virtual machine as an all-in-one deployment. There is a Windows Server 2019 that is sending Windows event logs from the Application and Security channels using the Splunk Universal Forwarder along with the Splunk Add-on for Microsoft Windows. In the normal situation, where there is a stable network connection between the Windows Server 2019 and the ubuntu machine with Splunk, the logs are delivered to Splunk with no problems. However, imagine there is an adversary who executed a script to disable the network connection on the Windows Server 2019, performed some malicious actions on that machine, and then went to the Event Viewer application and cleared the Security logs so that they never reach Splunk. My question is: how can we make the Security logs that were deleted by the adversary on the Windows Server 2019 through Event Viewer still reach Splunk? To clarify, let's say that after the adversary disabled network access to Splunk, they deleted some users in the domain controller and then cleared the Security logs in Event Viewer. What can we do so that we still get the logs of the adversary's activity on the Windows Server into Splunk for further investigation? Feel free to ask any additional questions in case this scenario is unclear in some parts. Thanks in advance for taking your time reading and replying to this post!
Forwarder's Operation:
- After the forwarder sends a data block, it maintains a copy of the data in its wait queue until it receives an acknowledgment.
- Meanwhile, it continues to send additional blocks.
- If the forwarder doesn't get an acknowledgment for a block within **300 seconds** (by default), it closes the connection.

Possibility of Data Duplication:
- It is possible for the indexer to index the same data block twice. This can happen if there is a network problem that prevents an acknowledgment from reaching the forwarder.

Network Issues:
- When the network goes down, the forwarder never receives the acknowledgment. When the network comes back up, the forwarder resends the data block, which the indexer parses and writes as if it were new data.

Duplicate Warning Example:
10-18-2010 17:32:36.941 WARN TcpOutputProc - Possible duplication of events with channel=source::/home/jkerai/splunk/current-install/etc/apps/sample_app/logs/maillog.1|host::MrT|sendmail|, streamId=5941229245963076846, offset=131072 subOffset=219 on host=10.1.42.2:9992

Note on useACK:
- When `useACK` is enabled in the `outputs.conf` on forwarders, and there is either a network issue, indexer saturation (for example, pipeline blocks), or a replication problem, your deployment's indexers cannot respond to your forwarders' acknowledgment requests. Depending on your deployment environment, data duplication can occur.

https://docs.splunk.com/Documentation/Forwarder/8.2.5/Forwarder/Protectagainstthelossofin-flightdata
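For reference, a minimal sketch of enabling indexer acknowledgment on a forwarder (the group name and server addresses are placeholders):

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
useACK = true

With this set, the forwarder keeps each block in its wait queue until the indexer confirms the data has been written, trading the occasional duplicate described above for protection against in-flight data loss.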
It looks like there may be something else going on in your search. Please share the full search (in a code block </>). It would also be helpful (and quicker) if you could share some sample anonymised representative events.
I have written a Splunk query and used the streamstats command to make my output look like this:

Query Used:
... | streamstats current=f last(History) as Status by Ticket Id | ...

Current Output:

Ticket ID          Priority   Status
1234, 4321, 5678   P1         Closed, In Progress
8765               P2         Closed

However, I want to remove the record 4321 and look at all the closed tickets for priorities P1 and P2; but since 4321 is also of P1 priority, the entire record is getting removed for P1 when I use this query:

... | streamstats current=f last(History) as Status by Ticket Id | where NOT Status IN ("In Progress") | ...

Output:

Ticket ID   Priority   Status
8765        P2         Closed

How do I remove only 4321, as it is in "In Progress" status? Please help.

Expected Output:

Ticket ID    Priority   Status
1234, 5678   P1         Closed
8765         P2         Closed
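One approach that might work (a sketch, assuming the displayed table is built by a later stats grouped by Priority): apply the status filter while the results are still one row per ticket, then rebuild the grouping from what remains:

...
| streamstats current=f last(History) as Status by "Ticket Id"
| where NOT Status IN ("In Progress")
| stats values("Ticket Id") as "Ticket ID" values(Status) as Status by Priority

This way the where clause drops only ticket 4321's rows, and P1 keeps 1234 and 5678 in the regrouped output.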
The 6.x versions are no longer supported, hence why they are most likely no longer listed. I have seen older versions still working, but they may not support new features/configs/settings etc. that you may want to use. You can check the tables listed here for supported versions: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html
Maybe look at installing Wireshark if you can and check that way as well. 
Hello, I need to upgrade the Splunk indexer version to version 9.2. But we have a few instances which have Splunk forwarders of versions 6.6.3, 7.0.4 and 7.2.1. I have referred to the link below, but there is no mention of compatibility with versions prior to 7.3. Compatibility between forwarders and Splunk Enterprise indexers - Splunk Documentation
One more thing: how do I get this query: "| ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch>" into my SH? I am following the tutorial. The following steps are the same for saving new alerts or editing existing alerts. From the Add Actions menu, select Log event. Add the following event information to configure the alert action. Use plain text or tokens for search, job, or server metadata:
- Event text
- Source and sourcetype
- Host
- Destination index for the log event. The main index is the default destination. You can specify a different existing index.

How do I configure the event text to get the data in the SH? All I get is some event like:

5/3/24 10:00:01.000 AM | ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch> host = SBOXAD01| SBOXAD02 source = alert: sourcetype = AD
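If it helps, the Log event action writes the literal event text, so putting the search string there just logs that string. To log data from the triggering results, the event text would reference alert tokens instead; a sketch (the field name group_name is hypothetical):

Event text: group=$result.group_name$ alert=$name$ fired at $trigger_time$

$result.<fieldname>$ expands to that field's value from the first triggering result, and $name$ / $trigger_time$ are the alert's name and trigger time.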
Additionally, you can also try rolling the bucket manually, as mentioned in the reason: the SF isn't met because it needs the bucket to be rolled. Click on the Actions drop-down and roll the bucket. This should also help you fix the SF/RF not met issue without any downtime.

Thanks, Tejas.

--- If the above solution helps, an upvote is appreciated.
Hello @karthi2809, I'm not sure whether the searchmatch function can work within a case condition; I've read that it must be used inside an if condition. Can you try running the following search:

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(BucketFolder like "%inbound%epm%","EPM", BucketFolder like "%inbound%KPIs%","APEX_File_Upload", BucketFolder like "%inbound%concur%","ConcurFile_Upload", true(),"Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId

Thanks, Tejas.

--- If the above solution helps, an upvote is appreciated.
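For completeness, a minimal sketch of searchmatch() inside if(), which is the pattern the docs show (using the renamed field from the search above):

| eval InterfaceName=if(searchmatch("BucketFolder=*inbound*epm*"), "EPM", "Other")

searchmatch() takes a whole search string and tests it against the event, which is why it pairs with if() rather than with case()'s comparison list.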
Hello @deepakc @gcusello , I have checked. Netstat shows the only process listening on 514 is splunkd. Normally on Windows I use Kiwi Syslog. A while ago I changed the regex again to . so that all events are moved to the null queue. I injected syslog messages with Python over TCP (SOCK_STREAM) and, surprisingly, no data appears for my test messages, but the firewall messages with source = tcp:514 are still relentlessly appearing. Can a source appear as tcp:514 when that is not actually the case? I used btool to check for TCP modifications but can't find any. Oh my god, it's so weird. Seems like ghost TCP 514 messages. Next I will block that port with the firewall and see if it still comes in.

Update once more: we removed all TCP inputs from the data inputs and did a restart. Funnily, the tcp:514 source still comes into the SIEM. I tried with my Python syslog message generator and got an error because the Splunk server rejects the connection. Feels like some kind of corruption somewhere, where the messages think they are tcp:514?
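For anyone retracing these checks, they were along these lines (a sketch; paths assume a default install, and the Windows variant of netstat differs):

# list every effective input stanza mentioning 514 and the app it comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i 514

# confirm which process owns the listener (on Windows: netstat -ano | findstr :514)
netstat -tulpn | grep :514

If btool shows no tcp:514 stanza yet the source keeps arriving, it may be worth checking whether an upstream heavy forwarder or another host is the one actually listening on 514 and forwarding with that source already stamped.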