All Posts


Hello @karthi2809, I'm not sure whether the searchmatch function works inside a case condition; I've read that it must be used inside an if condition. Can you try running the following search instead:

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(BucketFolder like "%inbound%epm%","EPM", BucketFolder like "%inbound%KPIs%","APEX_File_Upload", BucketFolder like "%inbound%concur%","ConcurFile_Upload", true(),"Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId

Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
Hello @deepakc @gcusello, I have checked. Netstat shows the only process listening on 514 is splunkd. Normally on Windows I use Kiwi Syslog. A while ago I changed the regex again to "." so that all events are routed to the null queue, then injected syslog messages with Python over TCP (SOCK_STREAM), and surprisingly no data appears for my test messages. But the firewall messages with "source = tcp:514" are still relentlessly appearing. Can a source appear as tcp:514 when that is not actually the case? I used btool to check for TCP input modifications but couldn't find any. It's so weird, like ghost tcp:514 messages. Next I will block that port with the firewall and see if the data still comes in. One more update: we removed all TCP inputs from Data Inputs and restarted. Funnily enough, the tcp:514 source still comes into the SIEM. I tried my Python syslog message generator and got an error because the Splunk server rejected the connection. It feels like some kind of corruption somewhere, where the messages think they are tcp:514?
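For reference, my generator is essentially this minimal sketch (the host name below is a placeholder, not my real server):

import socket

# Placeholder target -- replace with the Splunk receiver's address.
SPLUNK_HOST = "splunk-host.example.com"
SPLUNK_PORT = 514

# A syslog-style line; <189> is the PRI header (facility 23 local7, severity 5 notice).
message = "<189>May 10 12:00:00 testhost testapp: hello from the generator\n"

# SOCK_STREAM (TCP) connection, matching the tcp:514 input under investigation.
with socket.create_connection((SPLUNK_HOST, SPLUNK_PORT), timeout=5) as sock:
    sock.sendall(message.encode("utf-8"))

If this connection is refused while tcp:514 events keep arriving, that would further suggest the events are not actually entering over that port.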
Hi @yh, if the receiver is on Windows you can use neither rsyslog nor SC4S; you should find another syslog receiver. Ciao. Giuseppe
Hello @SReopelle, additionally you can try checking the list of KOs on the All Configurations page, under the Settings menu. You'll be able to find all the knowledge objects there.   Thanks, Tejas.
Do you mean to obtain something like this?

Action Name: Read Input Files GDrive Start
Record Id Type: Invoice
Run Reference: xx-0645-11ef-xx-xx
Task Name: Cash Apps PAPI
applicationName: ct-fin-abc-apps-papi-v1-uw2-ut
applicationType: PAPI
applicationVersion: 1.0.7
batchSize: 0
businessRecordId: (empty)
businessRecordType: (empty)
detailText: Start Reading Input Files from G drive
domain: CR C4E
endpointSystem: (empty)
environment: ct-app-UAT
region: us-ne-2
remainingRetries: 0
stage: MILESTONE
status: SUCCESS
threadName: [MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333
timestamp: 2024-04-29 16:30:08.455
totalRecords: 0
tx.fileName: implementation.xml
tx.flow: read-input-files-sub-flow
txlineNumber: 71
worker: 0
x-transaction-id: xxxx-e691-xx-91bf-xxx

As @gcusello suggested, you should use the built-in JSON capability to do the job, not regex.  Use rex only to extract the JSON part.  Like this:

| rex "eilog.EILog:\s*(?<eilog>{.+})"
| spath input=eilog
| spath input=jsonRecord

This is an emulation of your sample data.  Play with it and compare with real data.

| makeresults
| eval _raw = "INFO 2024-04-29 16:30:08,456 [[MuleRuntime].uber.05: [ct-fin-abc-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333] com.sfdc.it.ei.mule4.eilog.EILog: {\"worker\":\"0\",\"region\":\"us-ne-2\",\"applicationName\":\"ct-fin-abc-apps-papi-v1-uw2-ut\",\"applicationVersion\":\"1.0.7\",\"applicationType\":\"PAPI\",\"environment\":\"ct-app-UAT\",\"domain\":\"CR C4E\",\"x-transaction-id\":\"xxxx-e691-xx-91bf-xxx\",\"tx.flow\":\"read-input-files-sub-flow\",\"tx.fileName\":\"implementation.xml\",\"txlineNumber\":\"71\",\"stage\":\"MILESTONE\",\"status\":\"SUCCESS\",\"endpointSystem\":\"\",\"jsonRecord\":\"{\\n \\\"Task Name\\\": \\\"Cash Apps PAPI\\\",\\n \\\"Action Name\\\": \\\"Read Input Files GDrive Start\\\",\\n \\\"Run Reference\\\": \\\"xx-0645-11ef-xx-xx\\\",\\n \\\"Record Id Type\\\": \\\"Invoice\\\"\\n}\",\"detailText\":\"Start Reading Input Files from G drive\",\"businessRecordId\":\"\",\"businessRecordType\":\"\",\"batchSize\":\"0\",\"totalRecords\":\"0\",\"remainingRetries\":\"0\",\"timestamp\":\"2024-04-29 16:30:08.455\",\"threadName\":\"[MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333\"}" ``` data emulation above ```
I have seen this behaviour in the past. It may not be an issue with private objects. The IDs must have been authenticated via LDAP or Active Directory, and since those users left the organisation the IDs would have been permanently removed. Try creating a local Splunk user account with the same ID, and you should be able to see all the KOs assigned to that ID. Once you reassign or delete those objects, you can delete the local user account.
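If you prefer to script that step, here is a rough sketch of creating such a local account via the management REST API in Python (the host, credentials, password, and role below are placeholder assumptions, not values from this thread):

import requests

# Placeholder connection details -- adjust to your environment.
BASE = "https://localhost:8089"
AUTH = ("admin", "changeme")

# POST to authentication/users creates a local (Splunk-native) account.
resp = requests.post(
    f"{BASE}/services/authentication/users",
    auth=AUTH,
    data={"name": "old_user", "password": "TempPassw0rd!", "roles": "user"},
    verify=False,  # lab use only; keep TLS verification on in production
)
resp.raise_for_status()

Once the knowledge objects are reassigned, the account can be removed again with a DELETE request to the same endpoint plus the user name.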
"Also, is the "/my_search" in the example mentioned below the title of the Knowledge Object?"

Not quite.  @deepakc only gave saved searches (aka reports) as an example.  "my_search" is a URL-encoded string of the title.  In the example, "https://<MY_CLOUD_STACK>splunkcloud.com:8089/servicesNS/nobody/YOU_APP/saved/searches/my_search" is one property, internally known as id.

"Is there any way to reassign all the Knowledge Objects owned by a specific user, instead of transferring one Knowledge Object at a time?"

Yes.  To continue the example with saved searches, you can use this search to find all ids owned by the old user "old_user":

| rest /servicesNS/-/-/saved/searches/
| search eai:acl.owner = "old_user"
| fields id

Example output could be (taken from owner nobody on a standard deployment):

https://127.0.0.1:8089/servicesNS/nobody/search/saved/searches/Bucket%20Merge%20Retrieve%20Conf%20Settings
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerAppEvents
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClients
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClientsBasic
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerClientsByMachineType
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerEventsWithError
https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentServerConfig/saved/searches/DeploymentServerGetClientsUri

Then, write a script that uses these values to update the saved searches to the new user (see the sketch below). To update other knowledge objects, consult the REST API Reference Manual, especially the Knowledge endpoint descriptions, to find out how to retrieve their ids by owner. (Note that saved searches are described in the Search endpoint descriptions instead.) Hope this helps.
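One possible shape for that script, sketched in Python with the requests library (the base URL, credentials, and new owner are assumptions to adapt, and this is only one way to do it):

import requests

BASE = "https://localhost:8089"    # placeholder management port
AUTH = ("admin", "changeme")       # placeholder credentials
OLD_USER, NEW_USER = "old_user", "new_user"

# Step 1: list saved searches owned by the old user, as in the | rest example.
resp = requests.get(
    f"{BASE}/servicesNS/-/-/saved/searches",
    auth=AUTH,
    params={"output_mode": "json", "count": 0,
            "search": f"eai:acl.owner={OLD_USER}"},
    verify=False,
)
resp.raise_for_status()

# Step 2: POST to each object's acl endpoint to hand it to the new owner.
# Changing the owner requires re-stating the sharing level in the same call.
for entry in resp.json()["entry"]:
    requests.post(
        entry["id"] + "/acl",
        auth=AUTH,
        data={"owner": NEW_USER,
              "sharing": entry["acl"].get("sharing", "app")},
        verify=False,
    ).raise_for_status()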
So if it's Splunk on Windows, then rsyslog shouldn't be there (I assumed you were running on Linux).
Yes, it's better to send the data to files and use a UF to pick the logs up that way. We tend to use Splunk ports for general testing, POCs, etc., but not really for production. You can also look at SC4S; it is the better option for syslog network data collection in general, and it supports most common syslog formats. SC4S - from the link, go to the documentation: https://splunkbase.splunk.com/app/4740
Hi @gcusello @deepakc, I will check whether rsyslog is installed. As far as I am aware it shouldn't be, and it is running on Windows. We use 20+ GB of license per day. Is it recommended to have rsyslog / Kiwi Syslog running on the server and then use a file monitor to ingest the events?
It is better to use rsyslog, but in this case I think the issue may be that rsyslog is running and occupying the TCP:514 port - that may be the cause of the current problem. Longer term, it's better to use rsyslog or syslog-ng, or better still SC4S, but that's another topic.
Hi @yh, as @deepakc said, it's preferable to use rsyslog to receive syslog instead of Splunk; this way you're sure to keep logs even if your Splunk instance is down or overloaded. Are you receiving many logs? Ciao. Giuseppe
Hi @yh, I asked because filtering should be applied in the first full Splunk instance that the data passes through. If you don't have an intermediate HF (or another full Splunk instance) and your data arrives directly at your standalone Splunk server, the conf files are correctly located. Ciao. Giuseppe
I'm wondering if you have rsyslog running on the AIO - see if you can turn it off if it's running. Check whether it's running, and if so stop it, as it defaults to TCP 514: sudo systemctl status rsyslog
Hi, in my case this is not a heavy forwarder; it is an indexer and search head. The UDP and TCP flows go directly to the AIO indexer / search head. I understand from the documentation that a HF is more flexible, where we can filter using sourcetype, host, etc.; for the indexer it is still possible, but the example given is based on the source. We did try adding a backslash to the equals sign, and even tried discarding all events with "." as the regex, but somehow no filtering takes effect on the TCP source. Is there something unique about filtering directly on indexers? It shows clearly that "source = tcp:514". I don't suppose it has been renamed; or should I try renaming this port? Not sure if that would help. In inputs.conf, I think there are some TCP settings that assign the sourcetype based on the host address, but I don't suppose that has any impact?
"How do I solve the problem with automatic report collection and sending?" Maybe you can use the below this to check, using the metadata command this example shows if a host has not sent any data to... See more...
"How do I solve the problem with automatic report collection and sending?" Maybe you can use the below this to check, using the metadata command this example shows if a host has not sent any data to the _internal index, this can be change to another index where you are expecting regular data to come to, and you can also change the period -5m to say 10 mins etc, you can then save this as an alert, or dashboard table  to inform you when there is no data and look as to why etc. | metadata type=hosts index=_internal | table host, firstTime, lastTime, recentTime | rename totalCount as Count firstTime as "First_Event" lastTime as "Last_Event" recentTime as "Last_Update" | fieldformat Count=tostring(Count, "commas") | fieldformat "First_Event"=strftime('First_Event', "%c") | fieldformat "Last_Event"=strftime('Last_Event', "%c") | fieldformat "Last_Update"=strftime('Last_Update', "%c") | where Last_Update <= relative_time(now(),"-5m") | table host, Last_Update   
Hi, I've seen in the server.conf specification, under the clustering stanza, several parameters (https://docs.splunk.com/Documentation/Splunk/9.0.7/Admin/Serverconf#High_availability_clustering_configuration):

register_search_address: "This is the address that advertises the peer to search heads. This is useful in the cases where a splunk host machine has multiple interfaces and only one of them can be reached by another splunkd instance."

register_forwarder_address: "This is the address on which a peer is available for accepting data from forwarder. This is useful in the cases where a splunk host machine has multiple interfaces and only one of them can be reached by another splunkd instance."

register_replication_address: "This is the address on which a peer is available for accepting replication data. This is useful in the cases where a peer host machine has multiple interfaces and only one of them can be reached by another splunkd instance."

So it seems to be possible to use multiple IPs, but I'm wondering how to convert a peer with a single IP into a peer with multiple IPs using these parameters. Frédéric
Hi @ArianeSantos, let me understand: your ingestion worked correctly until the 30th of April and stopped from the 1st of May, is that correct? In this case, check the date format of your data and check whether the events of the 1st of May were indexed with timestamp 2024-01-05. If you have a European date format (dd/mm/yyyy) and you didn't force the format (TIME_FORMAT = %d/%m/%Y), Splunk by default uses the American format (mm/dd/yyyy), so for the first 12 days of each month you get an error. You can solve the issue by forcing TIME_FORMAT. Ciao. Giuseppe
Hi @yh, the <189> string isn't relevant; only the regex you are using matters. Anyway, given these logs, you should be able to filter both of the events containing "dstip\=8\.8\.8\.8". Try adding a backslash before "=", even if it shouldn't be the issue. Are you sure that the unfiltered events arrive directly at the HF, and that there isn't another HF in the middle? Ciao. Giuseppe
Hi @fde, I'm not sure it's possible to use two IP addresses for each server: in Splunk, every hostname has one IP address. Ciao. Giuseppe