All Posts

A very old post, but still relevant if the log timestamp format cannot be changed. If the exact timestamp is not needed, I would set this to current or none for the specific sourcetype in props.conf. It is a very quick fix.

DATETIME_CONFIG = [CURRENT | NONE]

Alternatively, just extract the time and not the date. This works fine as long as the events are indexed the same day they are written.

TIME_FORMAT = %H:%M:%S

As a last alternative, datetime.xml is possible, but maybe not so easy. In this case it might be enough to modify the "litmonth" attributes. Just remember to copy the file, rename it to something else, and use that modified file for this specific sourcetype only. Modifying the original datetime.xml would impact all timestamp extractions, so do not do that.
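For reference, a minimal props.conf sketch of those two alternatives; the sourcetype name and the TIME_PREFIX value are placeholders, and the settings belong on the instance that does the parsing (indexer or heavy forwarder):

# Alternative 1: ignore the log timestamp and stamp events with the index-time clock
[my_sourcetype]
DATETIME_CONFIG = CURRENT

# Alternative 2: parse only the time of day; the date falls back to the current date
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S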
You can use the first search as a subsearch to filter the second search - something like this <search source2> [search <search source1> | stats latest(date) as date by name]
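With made-up index and source names, that could look something like this; the subsearch rows come back as name/date pairs that are ORed into the outer search as filters, so the date values must match as literal strings between the two sources:

index=main source=source2
    [ search index=main source=source1
      | stats latest(date) AS date BY name
      | fields name date ]
| table name date field1 field2 field3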
Hi @Haleb, exactly: use a password for your certificate! Ciao. Giuseppe
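For context, a rough sketch of where sslPassword sits on both ends, with placeholder paths, group names, and hostnames; it is only relevant if the private key inside the PEM file is encrypted:

On the forwarder, outputs.conf:

[tcpout:primary_indexers]
server = idx1.example.local:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
# passphrase of the encrypted private key inside forwarder.pem
sslPassword = <key passphrase>
# requires the signing CA to be configured in server.conf
sslVerifyServerCert = true

On the indexer, inputs.conf:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
# passphrase of the encrypted private key inside indexer.pem
sslPassword = <key passphrase>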
Can you clarify which password you are talking about? The link you sent me only has the sslPassword field, which should be used only if I use a password for my certificate.
Did you ever find a solution to this? We're having the same problem two years later. I just sat down with the IPQS team and demonstrated the issue. They took the same data set and it ran flawlessly in their environment, so it's not the content of the field. We're currently trying to determine whether our Splunk config is different from theirs and is somehow causing this issue.
Hi @karthi2809, first rename your field before the stats command, then don't use append but the lookup command (https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Lookup).

index="mulesoft" environment=PRD
| rename content.payload.Status AS Status content.payload.InterfaceName AS payLoadInterface content.payload.ErrorMessage AS ErrorMsg
| lookup link.csv InterfaceName AS payLoadInterface OUTPUT Link
| stats values(payLoadInterface) AS payLoadInterface values(ErrorMsg) AS ErrorMsg earliest(timestamp) AS Timestamp values(priority) AS Priority values(tracePoint) AS Tracepoint values(Link) AS Link BY correlationId
| eval names=if(isnull(mvfind(message,"DISABLED")),null,message), Response=coalesce(SuccessResponse,Successresponse,msg,names,ErrorMsg), payLoadInterface=coalesce(Interface,payLoadInterface)
| table Status Timestamp InterfaceName Link Response correlationId message Priority Tracepoint
| search payLoadInterface="*"
| sort -Timestamp

Then, the condition Status LIKE (,"%") is wrong: what do you want to check? Ciao. Giuseppe
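For reference, the general form of the lookup command used above is:

| lookup <lookup table or CSV file> <lookup_field> AS <field_in_your_events> OUTPUT <field_to_return>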
I have two sources that I'd like to combine/join, or search on one based on the other. Source 1 has two fields: name & date. Source 2 has several fields including name & date, field1, field2, field3, etc. I'd like to get the most recent date for a specific name from source 1, and show only the events in source 2 with that name & date.
Hi @Haleb, not all of them, e.g. the password, which must be the same on both Indexers and Forwarders. Follow the configuration in the URL. Ciao. Giuseppe
Hi @gcusello, yes, I have another field, InterfaceName, given below, and in my lookup file I have the same name, Interfacename. But I don't know how to map the values using append. values(content.payload.InterfaceName) as InterfaceName
@gcusello  As I can see, some of them are optional.
Right.  The app context would make a difference.
Hi @Haleb, it seems to be different from yours: some options are missing. Ciao. Giuseppe
Hello Tejas,  Thank you! We will give this a shot first in our dev environment and see how it goes.
A tcpout persistent queue will solve the issue. If the parsing queue is full because the tcpout queue was full (due to connection issues), splunktcpin shuts its input port, as the splunktcpin queue is also full. HEC clients will start receiving "server is busy" because the parsing queue is full. A tcpout persistent queue can support all types of inputs and prevent back-pressure on the parsing queue. https://community.splunk.com/t5/Knowledge-Management/Splunk-Persistent-Queue/m-p/688223#M10063
Hi @splunky_diamond, which ES version did you install? This was a known bug, solved in ES 7.3. Ciao. Giuseppe
Hi @gcusello, yes, I did.
Hi @Haleb, did you follow all the instructions at https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/ConfigureSplunkforwardingtousesignedcertificates#:~:text=You%20can%20use%20transport%20layer,create%20and%20sign%20them%20yourself. ? Ciao. Giuseppe
Thank you for the swift response. It looks to be working as expected.
Hi @karthi2809, do you have another field in the search with values to correlate with InterfaceName? The field values must be the same. If yes, you can use this field to join the lookup. Ciao. Giuseppe
Hello splunkers! Has anyone else experienced slow performance with Splunk Enterprise Security? For me, when I open "Content Management" under "Configure" and, say, try to filter for enabled correlation searches, it might take up to 5 minutes to load just 5 or 6 correlation searches. However, if I run a search in Search & Reporting (within Enterprise Security), the searches run quite fast, returning hundreds of thousands of events. Other cases where I see huge lags: creating a new investigation, updating the status of a notable, deleting an investigation, opening the Incident Review settings, and adding a new note to an investigation. If anyone has had a similar experience, could you please share how to improve the performance of the Enterprise Security app? Some notes to give more info about my case:
- The health circle is green.
- The deployment is all-in-one (Splunk Enterprise, ES, and all the apps and add-ons), everything running on an Ubuntu Server 20.04 virtual machine with 42 GB RAM, a 200 GB hard disk (thin provisioned), and 32 vCPU.
- My Splunk deployment has around 4-5 sources from which it receives logs; the average data load is around 500-700 MB/day.
Thanks for taking the time to read and reply to my post.