All Posts
Please, can someone help me with this solution? I need to provide it to my client. Thank you very much!
Hello @rameshkommu , I don't think it's possible to add placeholder text for any of the inputs. The value that you define in defaultValue actually gets passed as the value to the panels wherever the input is referenced. You may want to raise an idea on ideas.splunk.com to have the feature included in a future release.   Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated.
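For reference, a minimal sketch of what a text input definition looks like, assuming this is a Dashboard Studio dashboard; the input name, token, and default value here are hypothetical, and defaultValue pre-fills the token rather than acting as placeholder text:

"inputs": {
    "input_ticket": {
        "type": "input.text",
        "title": "Ticket ID",
        "options": {
            "token": "ticket_tok",
            "defaultValue": "1234"
        }
    }
}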
Hello Splunkers! Imagine a scenario: there is a test environment with Splunk deployed in an ubuntu-server 20.04 virtual machine as an all-in-one deployment. There is a Windows Server 2019 that is sending Windows Event Logs from the Application and Security channels using the Splunk Universal Forwarder along with the Splunk Add-on for Microsoft Windows.

In the normal situation, where there is a stable network connection between the Windows Server 2019 and the ubuntu machine with Splunk, the logs are delivered to Splunk with no problems. However, imagine there is an adversary who executed a script to disable the network connection on the Windows Server 2019, performed some malicious actions on that machine, and then went to the Event Viewer application and cleared the Security logs so that they never reach Splunk.

My question is: how can we make the Security logs that were deleted by the adversary on the Windows Server 2019 through Event Viewer still reach Splunk? To clarify, let's say the adversary disabled the network access to Splunk, then deleted some users in the domain controller, then cleared the Security logs in Event Viewer. What can we do so that we still get these logs of the adversary's activity on the Windows Server in Splunk for further investigation? Feel free to ask any additional questions in case this scenario is unclear in some parts. Thanks in advance for taking your time reading and replying to this post!
Forwarder's operation:
- After the forwarder sends a data block, it maintains a copy of the data in its wait queue until it receives an acknowledgment.
- Meanwhile, it continues to send additional blocks.
- If the forwarder doesn't get an acknowledgment for a block within 300 seconds (by default), it closes the connection.

Possibility of data duplication:
- It is possible for the indexer to index the same data block twice.
- This can happen if there is a network problem that prevents an acknowledgment from reaching the forwarder.

Network issues:
- When the network goes down, the forwarder never receives the acknowledgment.
- When the network comes back up, the forwarder resends the data block, which the indexer parses and writes as if it were new data.

Duplicate warning example:
10-18-2010 17:32:36.941 WARN TcpOutputProc - Possible duplication of events with channel=source::/home/jkerai/splunk/current-install/etc/apps/sample_app/logs/maillog.1|host::MrT|sendmail|, streamId=5941229245963076846, offset=131072 subOffset=219 on host=10.1.42.2:9992

Note on useACK:
- When useACK is enabled in outputs.conf on the forwarders, and there is either a network issue, indexer saturation (for example, pipeline blocks), or a replication problem, the indexers cannot send acknowledgments back to the forwarders.
- Depending on your deployment environment, data duplication can occur.

https://docs.splunk.com/Documentation/Forwarder/8.2.5/Forwarder/Protectagainstthelossofin-flightdata
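For context, indexer acknowledgment is enabled per output group in outputs.conf on the forwarder. A minimal sketch, where the group name is hypothetical and the server address is borrowed from the warning example above:

[tcpout:primary_indexers]
server = 10.1.42.2:9997
useACK = true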
It looks like there may be something else going on in your search. Please share the full search (in a code block </>). It would also be helpful (and quicker) if you could share some sample anonymised representative events.
I have written a Splunk query and used the streamstats command to make my output look like this:

Query used:
... | streamstats current=f last(History) as Status by Ticket Id | ...

Current output:

Ticket ID      | Priority | Status
1234 4321 5678 | P1       | Closed In Progress
8765           | P2       | Closed

However, I want to remove the record 4321 and look at all the closed tickets for priority P1 and P2. But since 4321 is also of P1 priority, the entire P1 record is getting removed when I use this query:

... | streamstats current=f last(History) as Status by Ticket Id | where NOT Status IN ("In Progress") | ...

Output:

Ticket ID | Priority | Status
8765      | P2       | Closed

How do I remove only 4321, as it is in "In Progress" status? Please help.

Expected output:

Ticket ID | Priority | Status
1234 5678 | P1       | Closed
8765      | P2       | Closed
The 6.x versions are no longer supported, which is most likely why they are no longer listed. I have seen older versions still working, but they may not support new features/configs/settings that you may want to use. You can check the tables listed here for supported versions: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html
Maybe look at installing Wireshark if you can and check that way as well. 
Hello, I need to upgrade the Splunk indexer to version 9.2. But we have a few instances which have Splunk forwarders of versions 6.6.3, 7.0.4, and 7.2.1. I have referred to the link below, but there is no mention of compatibility with versions prior to 7.3: Compatibility between forwarders and Splunk Enterprise indexers - Splunk Documentation
One more thing, how do I get this query: | ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch> in my SH? I am following the tutorial. The following steps are the same for saving new alerts or editing existing alerts:

1. From the Add Actions menu, select Log event.
2. Add the following event information to configure the alert action. Use plain text or tokens for search, job, or server metadata:
   - Event text
   - Source and sourcetype
   - Host
   - Destination index for the log event. The main index is the default destination. You can specify a different existing index.

How do I configure the event text to get the data in the SH? All I get is some event like:

5/3/24 10:00:01.000 AM   | ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch> host = SBOXAD01| SBOXAD02 source = alert: sourcetype = AD
Additionally, you can try rolling the bucket manually, as mentioned in the reason. The SF isn't met because the bucket needs to be rolled. Click on the Actions drop-down and roll the bucket. This should also help you fix the SF/RF not met issue without any downtime.   Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
Hello @karthi2809, I'm not sure whether the searchmatch function can work within a case condition; I've read that it must be used inside an if condition. Can you try running the following search:

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(BucketFolder like "%inbound%epm%","EPM", BucketFolder like "%inbound%KPIs%","APEX_File_Upload", BucketFolder like "%inbound%concur%","ConcurFile_Upload", true(),"Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId

Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
Hello @deepakc @gcusello

I have checked. Netstat shows the only process listening on 514 is splunkd. Normally on Windows I use Kiwi Syslog.

A while ago I changed the regex again to "." so that all events are moved to the null queue. I injected syslog messages with Python over TCP (SOCK_STREAM) and, surprisingly, no data appears for my test messages. But the firewall messages with source = tcp:514 are still relentlessly appearing. Can a source appear as tcp:514 when that is not actually the case? I used btool to check for TCP modifications but can't find any. Oh my god, it's so weird. They seem like ghost tcp:514 messages. Next I will block that port with the firewall and see if the data still comes in.

Update once more: we removed all TCP inputs from Data Inputs and did a restart. Funnily, the tcp:514 source still comes into the SIEM. I tried with my Python syslog message generator and got an error because the Splunk server rejects the connection. It feels like some kind of corruption somewhere, where the messages think they are tcp:514?
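For anyone reproducing this test, a minimal sketch of the kind of Python TCP sender described above; the host address and message content are hypothetical:

import socket

SPLUNK_HOST = "10.0.0.10"  # hypothetical address of the Splunk server
SPLUNK_PORT = 514

# RFC 3164-style test message; <13> = facility user, severity notice
message = "<13>May  3 10:00:01 testhost test: hello from the generator\n"

# SOCK_STREAM = TCP, matching the tcp:514 input under investigation
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((SPLUNK_HOST, SPLUNK_PORT))
    s.sendall(message.encode("utf-8"))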
Hi @yh , if the receiver is on Windows you can use neither rsyslog nor SC4S; you should find another syslog receiver. Ciao. Giuseppe
Hello @SReopelle, Additionally, you can try checking the list of KOs in the All Configurations page from the Settings menu. You'll be able to find all the knowledge objects there.   Thanks, Tejas.
Do you mean to obtain something like this?

field name         | field value
Action Name        | Read Input Files GDrive Start
Record Id Type     | Invoice
Run Reference      | xx-0645-11ef-xx-xx
Task Name          | Cash Apps PAPI
applicationName    | ct-fin-abc-apps-papi-v1-uw2-ut
applicationType    | PAPI
applicationVersion | 1.0.7
batchSize          | 0
businessRecordId   |
businessRecordType |
detailText         | Start Reading Input Files from G drive
domain             | CR C4E
endpointSystem     |
environment        | ct-app-UAT
region             | us-ne-2
remainingRetries   | 0
stage              | MILESTONE
status             | SUCCESS
threadName         | [MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333
timestamp          | 2024-04-29 16:30:08.455
totalRecords       | 0
tx.fileName        | implementation.xml
tx.flow            | read-input-files-sub-flow
txlineNumber       | 71
worker             | 0
x-transaction-id   | xxxx-e691-xx-91bf-xxx

As @gcusello suggested, you should use the built-in JSON capability to do the job, not regex. Use rex only to extract the JSON part, like this:

| rex "eilog.EILog:\s*(?<eilog>{.+})"
| spath input=eilog
| spath input=jsonRecord

This is an emulation of your sample data. Play with it and compare with real data:

| makeresults
| eval _raw = "INFO 2024-04-29 16:30:08,456 [[MuleRuntime].uber.05: [ct-fin-abc-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333] com.sfdc.it.ei.mule4.eilog.EILog: {\"worker\":\"0\",\"region\":\"us-ne-2\",\"applicationName\":\"ct-fin-abc-apps-papi-v1-uw2-ut\",\"applicationVersion\":\"1.0.7\",\"applicationType\":\"PAPI\",\"environment\":\"ct-app-UAT\",\"domain\":\"CR C4E\",\"x-transaction-id\":\"xxxx-e691-xx-91bf-xxx\",\"tx.flow\":\"read-input-files-sub-flow\",\"tx.fileName\":\"implementation.xml\",\"txlineNumber\":\"71\",\"stage\":\"MILESTONE\",\"status\":\"SUCCESS\",\"endpointSystem\":\"\",\"jsonRecord\":\"{\\n \\\"Task Name\\\": \\\"Cash Apps PAPI\\\",\\n \\\"Action Name\\\": \\\"Read Input Files GDrive Start\\\",\\n \\\"Run Reference\\\": \\\"xx-0645-11ef-xx-xx\\\",\\n \\\"Record Id Type\\\": \\\"Invoice\\\"\\n}\",\"detailText\":\"Start Reading Input Files from G drive\",\"businessRecordId\":\"\",\"businessRecordType\":\"\",\"batchSize\":\"0\",\"totalRecords\":\"0\",\"remainingRetries\":\"0\",\"timestamp\":\"2024-04-29 16:30:08.455\",\"threadName\":\"[MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333\"}" ``` data emulation above ```
I have seen this behaviour in the past. It may not be an issue with private objects. The IDs must be authenticated via LDAP or Active Directory, and since those people left the organisation, the IDs would have been permanently removed. Try creating a local Splunk user account with the same ID and you should be able to see all the KOs assigned to that ID. Once you reassign/delete those objects, you can delete the local user account.
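A minimal sketch of creating that local account from the CLI, assuming the departed user's ID was jdoe (hypothetical) and that you authenticate as an admin:

splunk add user jdoe -password <temporary password> -role user -auth admin:<admin password>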
Also, is the "/my_search" in the example mentioned below the title of the Knowledge Object?

Not quite. @deepakc only gave saved searches (aka reports) as an example. "my_search" is a URL-encoded string of the title. In the example, "https://<MY_CLOUD_STACK>splunkcloud.com:8089/servicesNS/nobody/YOU_APP/saved/searches/my_search" is one property internally known as id.

Is there any way to reassign all the Knowledge Objects owned by a specific user, instead of transferring one Knowledge Object at a time?

Yes. To continue the example with saved searches, you can use this search to find all ids owned by the old user "old_user":

| rest /servicesNS/-/-/saved/searches/
| search eai:acl.owner = "old_user"
| fields id

Example output could be (taken from owner nobody on a standard deployment):

id
https://127.0.0.1:8089/servicesNS/nobody/search/saved/searches/Bucket%20Merge%20Retrieve%20Conf%20Settings
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerAppEvents
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClients
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClientsBasic
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClientsByMachineType
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerEventsWithError
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerGetClientsUri

Then, program a script using these values to update these saved searches to the new user; a sketch follows below. To update other knowledge objects, consult the REST API Reference Manual, especially the Knowledge endpoint descriptions, to find out how to retrieve their ids by owner. (Note that saved searches are described in the Search endpoint descriptions instead.) Hope this helps.
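For illustration, a minimal sketch of such a script in Python, assuming the ids come from the search above, that the objects should stay app-shared, and hypothetical credentials and new owner; posting owner to an object's /acl endpoint reassigns it, and sharing must be sent in the same request:

import requests

# Hypothetical admin credentials and new owner
AUTH = ("admin", "changeme")
NEW_OWNER = "new_user"

# ids collected from the | rest search above (hypothetical example)
ids = [
    "https://127.0.0.1:8089/servicesNS/nobody/search/saved/searches/my_search",
]

for obj_id in ids:
    # Reassign ownership via the object's ACL endpoint
    resp = requests.post(
        obj_id + "/acl",
        data={"owner": NEW_OWNER, "sharing": "app"},
        auth=AUTH,
        verify=False,  # self-signed certs are common on port 8089
    )
    resp.raise_for_status()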
So if it's Splunk on Windows, then rsyslog shouldn't be there (I assumed you were running on Linux).
Yes, it's better to send data to files and use a UF to pick the logs up that way. We tend to use Splunk ports for general testing, POCs, etc., but not really for production. Also, you can look at SC4S; it is the better option for syslog network data collection in general, and it supports most common syslog formats. SC4S - from the link, go to the documentation: https://splunkbase.splunk.com/app/4740