All Posts

I have written a Splunk query and used the streamstats command to make my output look like this:

Query used:

... | streamstats current=f last(History) as Status by Ticket Id | ...

Current output:

Ticket ID         Priority   Status
1234 4321 5678    P1         Closed In Progress
8765              P2         Closed

I want to remove the record 4321 and look at all the closed tickets for priorities P1 and P2. But since 4321 is also of P1 priority, the entire P1 record gets removed when I use this query:

... | streamstats current=f last(History) as Status by Ticket Id | where NOT Status IN ("In Progress") | ...

Output:

Ticket ID   Priority   Status
8765        P2         Closed

How do I remove only 4321, as it has "In Progress" status? Please help.

Expected output:

Ticket ID    Priority   Status
1234 5678    P1         Closed
8765         P2         Closed
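One hedged sketch of a fix, assuming the grouped rows above come from a final stats and that the field is literally named "Ticket Id" (quote it in by clauses, otherwise SPL parses it as two fields): collapse to one row per ticket first, drop the In Progress ticket, then regroup by priority.

...
| streamstats current=f last(History) as Status by "Ticket Id"
``` one row per ticket, keeping its final status and priority ```
| stats latest(Status) as Status latest(Priority) as Priority by "Ticket Id"
``` drop only tickets whose final status is In Progress ```
| where Status!="In Progress"
``` rebuild the grouped Priority view ```
| stats values("Ticket Id") as "Ticket ID" values(Status) as Status by Priority

Filtering per ticket before regrouping is what keeps 1234 and 5678: once Status has become a multivalue like "Closed In Progress", NOT Status IN ("In Progress") matches the whole row, which is presumably why all of P1 vanishes.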
The 6.x versions are no longer supported, which is most likely why they are no longer listed. I have seen older versions still working, but they may not support new features/configs/settings that you may want to use. You can check the tables listed here for supported versions: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html
Maybe look at installing Wireshark if you can and check that way as well. 
Hello, I need to upgrade the Splunk indexer to version 9.2, but we have a few instances with Splunk forwarders of versions 6.6.3, 7.0.4, and 7.2.1. I have referred to the link below, but there is no mention of compatibility with versions prior to 7.3: Compatibility between forwarders and Splunk Enterprise indexers - Splunk Documentation
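If it helps to verify what is actually connecting before you commit to the upgrade, here is a hedged sketch that inventories forwarder versions from the indexers' _internal metrics data (assuming your _internal retention covers the window you care about):

index=_internal source=*metrics.log* group=tcpin_connections
``` one row per connecting forwarder, with the version it reports on connect ```
| stats latest(version) as version latest(fwdType) as fwdType by hostname
| sort version

Anything reporting 6.x or early 7.x in that list is what you would want to upgrade, or at least test against 9.2, first.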
One more thing, how do I get this query: "| ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch>" running on my SH? I am following the tutorial. The following steps are the same for saving new alerts or editing existing alerts:

1. From the Add Actions menu, select Log event.
2. Add the following event information to configure the alert action. Use plain text or tokens for search, job, or server metadata:
   - Event text
   - Source and sourcetype
   - Host
   - Destination index for the log event. The main index is the default destination; you can specify a different existing index.

How do I configure the event text to get the data in the SH? All I get is an event like:

5/3/24 10:00:01.000 AM   | ldapsearch search="(objectClass=group)" attrs="*" | collect index=<ldapsearch> host = SBOXAD01| SBOXAD02 source = alert: sourcetype = AD
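For what it's worth, a hedged sketch of an alternative, assuming the goal is to make the ldapsearch results themselves searchable on the SH (the index name ldap_summary below is a made-up placeholder for an index that must already exist): schedule the search itself as a report, letting collect write its results into a summary index, instead of going through the Log event action.

| ldapsearch search="(objectClass=group)" attrs="*"
``` write the results as events into a summary index ```
| collect index=ldap_summary

You could then read the data back with index=ldap_summary. The Log event action, by contrast, logs the literal text you type into the Event text box, which is why the event you get back is the search string itself rather than its results.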
Additionally, you can try rolling the bucket manually, as mentioned in the reason: the SF isn't met because it needs the bucket to be rolled. Click on the Actions drop-down and roll the bucket. This should also help you fix the SF/RF not met issue without any downtime.   Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
Hello @karthi2809, I'm not sure whether the searchmatch function can work inside a case condition; I've read that it must be used inside an if condition. Can you try running the following search:

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(BucketFolder like "%inbound%epm%", "EPM",
    BucketFolder like "%inbound%KPIs%", "APEX_File_Upload",
    BucketFolder like "%inbound%concur%", "ConcurFile_Upload",
    true(), "Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId

Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
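If you do want to try searchmatch, a hedged illustration of the if-based usage (the field name and pattern are assumed from the search above):

| eval InterfaceName=if(searchmatch("bucketFolder=*inbound*epm*"), "EPM", "Other")

Covering all three folder patterns this way would mean nesting several if calls, which is why case with like, as above, tends to read more cleanly.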
Hello @deepakc @gcusello, I have checked. Netstat shows the only process listening on 514 is splunkd. Normally on Windows I use Kiwi Syslog. A while ago I changed the regex to "." so that all events are moved to the null queue. I injected syslog messages with Python over TCP (SOCK_STREAM) and, surprisingly, no data appears for my test messages, yet the firewall messages with source = tcp:514 keep relentlessly appearing. Can a source appear as tcp:514 when that is not actually the case? I used btool to check for TCP modifications but can't find any. It's so weird; they seem like ghost TCP 514 messages. Next I will block that port with the firewall and see if the data still comes in.

Update once more: we removed all TCP inputs from Data inputs and did a restart. Funnily, the tcp:514 source still comes into the SIEM. I tried my Python syslog message generator and got an error because the Splunk server rejects the connection. It feels like some kind of corruption somewhere, where the messages think they are tcp:514.
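One hedged way to corner this, assuming you can name the index the firewall events land in (your_index below is a placeholder): profile what actually arrives as tcp:514 and, in particular, which Splunk instance ingests it.

index=your_index source="tcp:514"
``` which hosts, sourcetypes, and ingesting instance carry this source ```
| stats count latest(_time) as latest_event by host, sourcetype, splunk_server

If splunk_server turns out not to be the box whose inputs you just removed, the events are entering through another instance (for example a forwarder somewhere on the firewall's path) and only look local.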
Hi @yh, if the receiver is on Windows you can use neither rsyslog nor SC4S; you should find another syslog receiver server. Ciao. Giuseppe
Hello @SReopelle, Additionally, you can try checking the list of KOs on the All Configurations page in the Settings menu. You'll be able to find all the knowledge objects there.   Thanks, Tejas.
Do you mean to obtain something like this?

field name          field value
Action Name         Read Input Files GDrive Start
Record Id Type      Invoice
Run Reference       xx-0645-11ef-xx-xx
Task Name           Cash Apps PAPI
applicationName     ct-fin-abc-apps-papi-v1-uw2-ut
applicationType     PAPI
applicationVersion  1.0.7
batchSize           0
businessRecordId
businessRecordType
detailText          Start Reading Input Files from G drive
domain              CR C4E
endpointSystem
environment         ct-app-UAT
region              us-ne-2
remainingRetries    0
stage               MILESTONE
status              SUCCESS
threadName          [MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333
timestamp           2024-04-29 16:30:08.455
totalRecords        0
tx.fileName         implementation.xml
tx.flow             read-input-files-sub-flow
txlineNumber        71
worker              0
x-transaction-id    xxxx-e691-xx-91bf-xxx

As @gcusello suggested, you should use the built-in JSON capability to do the job, not regex. Use rex only to extract the JSON part, like this:

| rex "eilog.EILog:\s*(?<eilog>{.+})"
| spath input=eilog
| spath input=jsonRecord

This is an emulation of your sample data. Play with it and compare with real data:

| makeresults
| eval _raw = "INFO 2024-04-29 16:30:08,456 [[MuleRuntime].uber.05: [ct-fin-abc-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333] com.sfdc.it.ei.mule4.eilog.EILog: {\"worker\":\"0\",\"region\":\"us-ne-2\",\"applicationName\":\"ct-fin-abc-apps-papi-v1-uw2-ut\",\"applicationVersion\":\"1.0.7\",\"applicationType\":\"PAPI\",\"environment\":\"ct-app-UAT\",\"domain\":\"CR C4E\",\"x-transaction-id\":\"xxxx-e691-xx-91bf-xxx\",\"tx.flow\":\"read-input-files-sub-flow\",\"tx.fileName\":\"implementation.xml\",\"txlineNumber\":\"71\",\"stage\":\"MILESTONE\",\"status\":\"SUCCESS\",\"endpointSystem\":\"\",\"jsonRecord\":\"{\\n \\\"Task Name\\\": \\\"Cash Apps PAPI\\\",\\n \\\"Action Name\\\": \\\"Read Input Files GDrive Start\\\",\\n \\\"Run Reference\\\": \\\"xx-0645-11ef-xx-xx\\\",\\n \\\"Record Id Type\\\": \\\"Invoice\\\"\\n}\",\"detailText\":\"Start Reading Input Files from G drive\",\"businessRecordId\":\"\",\"businessRecordType\":\"\",\"batchSize\":\"0\",\"totalRecords\":\"0\",\"remainingRetries\":\"0\",\"timestamp\":\"2024-04-29 16:30:08.455\",\"threadName\":\"[MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333\"}"
``` data emulation above ```
I have seen this behaviour in the past. It may not be an issue with private objects. The IDs must have been authenticated via LDAP or Active Directory, and since those people left the organisation the IDs would have been permanently removed. Try creating a local Splunk user account with the same ID, and you should be able to see all the KOs assigned to that ID. Once you reassign/delete those objects, you can delete the local user account.
Also, is the "/my_search" in the example mentioned below the title of the Knowledge Object? Not quite. @deepakc only gave saved searches (aka reports) as an example. "my_search" is a URL-encoded string of the title. In the example, "https://<MY_CLOUD_STACK>splunkcloud.com:8089/servicesNS/nobody/YOU_APP/saved/searches/my_search" is one property internally known as id.

Is there any way to reassign all the Knowledge Objects owned by a specific user, instead of transferring one Knowledge Object at a time? Yes. To continue the example with saved searches, you can use this search to find all ids owned by the old user "old_user":

| rest /servicesNS/-/-/saved/searches/
| search eai:acl.owner = "old_user"
| fields id

Example output could be (taken from owner nobody on a standard deployment):

id
https://127.0.0.1:8089/servicesNS/nobody/search/saved/searches/Bucket%20Merge%20Retrieve%20Conf%20Settings
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerAppEvents
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClients
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClientsBasic
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClientsByMachineType
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerEventsWithError
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerGetClientsUri

Then, program a script using these values to update these saved searches to the new user. To update other knowledge objects, consult the REST API Reference Manual, especially the Knowledge endpoint descriptions, to find out how to retrieve their ids by owner. (Note that saved searches are described in the Search endpoint descriptions instead.) Hope this helps.
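To sweep more than saved searches in one pass, there is also a directory-style listing of knowledge objects; a hedged sketch follows, with the caveat that the endpoint path and fields here are from memory, so verify them against the REST API Reference for your version:

| rest /servicesNS/-/-/directory count=0
``` list knowledge objects of many types, filtered to the departed owner ```
| search eai:acl.owner="old_user"
| table id eai:type title

The eai:type column then tells you which per-type endpoint your script should update for each object.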
So if it's Splunk on Windows, then rsyslog shouldn't be there (I assumed you were running on Linux).
Yes, it's better to send data to files and use a UF to pick the logs up that way. We tend to use Splunk ports for general testing, POCs, etc., but not really for production. Also, you can look at SC4S; this is better for syslog network data collection in general and it supports most common syslog formats. SC4S - from the link, go to the documentation: https://splunkbase.splunk.com/app/4740
Hi @gcusello @deepakc, I will check if rsyslog is installed. As far as I am aware it shouldn't be, and the server is running on Windows. 20+ GB of license is used per day. Is it recommended to have rsyslog / Kiwi Syslog running on the server and then use a file monitor to pick up the events?
It is better to use rsyslog, but in this case I think the issue may be related to the fact that rsyslog is running and using TCP port 514 - that may be the cause of the current issue. Longer term, it's better to use rsyslog or syslog-ng, or even better SC4S, but that's another topic.
Hi @yh, as @deepakc said, it's preferable to use an rsyslog server to receive syslog instead of Splunk; this way you're sure to save logs even if your Splunk is down or overloaded. Are you receiving many logs? Ciao. Giuseppe
Hi @yh, I asked because filtering should be applied on the first full Splunk instance that the data passes through. If you don't have an intermediate HF (or another full Splunk instance) and your data arrives directly at your standalone Splunk server, the conf files are correctly located. Ciao. Giuseppe
I'm wondering if you have rsyslog running on the AIO - see if you can turn it off if it's running. Check whether it's running, and if so stop it, as it defaults to TCP 514: sudo systemctl status rsyslog