All Posts

I also understand that apps can do similar extractions, but there are no apps related to the sourcetypes we are talking about. As for an external syslog receiver: maybe in the future. At present we ingest and index literally everything, because we don't know which information we will actually need to resolve a problem.

Can you tell me a little more about a "not-syslog-aware" LB? What do you mean? Our LB does the following:
- monitors the indexers via the health API endpoint of each indexer
- if one or more is down for some reason, the LB selects another healthy instance
- spreads syslog messages across all IDXC members to avoid data imbalance (our approach is debatable, but it works)
- for some systems, it also performs source port and protocol overrides (some systems do not support UDP, so we change the protocol to UDP to avoid return TCP traffic)
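The health-check setup described above could be sketched roughly as follows, assuming HAProxy as the LB. The hostnames, ports, and backend names are placeholders; the check targets Splunk's REST health endpoint on the management port (8089). Note that depending on the deployment, that endpoint may require authentication, so the expected status may need adjusting.

```
# haproxy.cfg -- minimal sketch, not a tested configuration
frontend syslog_in
    mode tcp
    bind *:514
    default_backend splunk_indexers

backend splunk_indexers
    mode tcp
    balance roundrobin                  # spread syslog across IDXC members
    # health checks go to the management port (8089); traffic goes to 514
    option httpchk GET /services/server/health/splunkd
    server idx1 idx1.example.com:514 check port 8089 check-ssl verify none
    server idx2 idx2.example.com:514 check port 8089 check-ssl verify none
```

If a server fails the health check, HAProxy stops sending it traffic and the remaining members absorb the load, which matches the failover behavior described above.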
Make sure the ScheduledView objects have the right permissions too.
Hello, I have found the solution to this. We first need to create results and initialize the count to 0; this creates a table with 4 rows. We then join that with the other lookup files. Below is the query that I used:

| makeresults
| eval threat_key="p_default_domain_risklist_hrly"
| eval count=0
| append [| makeresults | eval threat_key="p_default_hash_risklist_hrly" | eval count=0]
| append [| makeresults | eval threat_key="p_default_ip_risklist_hrly" | eval count=0]
| append [| makeresults | eval threat_key="p_default_url_risklist_hrly" | eval count=0]
| fields - _time
| append [| inputlookup ip_intel | search threat_key=*risklist_hrly* | stats count by threat_key]
| append [| inputlookup file_intel | search threat_key=*risklist_hrly* | stats count by threat_key]
| append [| inputlookup http_intel | search threat_key=*risklist_hrly* | stats count by threat_key]
| stats sum(count) as count by threat_key
| search count=0
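For what it's worth, the zero-count seeding at the start of that query could also be written more compactly (a sketch using the same four threat_key values, collapsing the four makeresults/append stanzas into one):

```
| makeresults
| eval threat_key=split("p_default_domain_risklist_hrly,p_default_hash_risklist_hrly,p_default_ip_risklist_hrly,p_default_url_risklist_hrly", ",")
| mvexpand threat_key
| eval count=0
| fields - _time
```

The rest of the query (the inputlookup appends and the final stats/search) would stay unchanged.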
Yes, the default syslog sourcetype calls the transform you mention, but as far as I remember there are more apps that bring similar extractions with them. And I still advocate an external syslog receiver. That way you can easily (compared to doing it with transforms) manipulate what you're indexing from which source, and so on. Also, "fault tolerance" in the case of a not-syslog-aware LB is... debatable. But hey, it's your environment.
I checked the DNS records many times. Also, thank you for your advice, but it is not a solution, just a workaround.
I agree with you and also suspect that Splunk has an internal resolver or cache, but I can't find any docs or Q&A that would help me find out more.

1. I understand that, but we need to see hostnames instead of IPs because we are using Splunk as a log collector for different parts of our internal infrastructure. Hostnames are more convenient because they are human-readable.
2. If I understand Splunk correctly, it has a pre-defined [syslog] stanza in props.conf and a related [syslog-host] stanza in transforms.conf. But in my particular situation, none of the sourcetypes match the syslog pattern, because they all have names like *_syslog. My transforms.conf also doesn't have any records related to the hostname override.
3 and 4. I know, but we decided against using a dedicated syslog server for various reasons, such as fault tolerance and the desire to make the log-ingestion system less complicated.

Thank you for your advice.
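As a sketch of the override being discussed: the built-in [syslog-host] transform that ships in Splunk's system/default/transforms.conf can, in principle, be attached to a custom sourcetype via props.conf. The sourcetype name below is a hypothetical placeholder for one of the *_syslog sourcetypes mentioned above:

```
# props.conf -- minimal sketch; "firewall_syslog" is a hypothetical
# sourcetype standing in for one of the *_syslog sourcetypes
[firewall_syslog]
# reuse the host-extraction transform that Splunk ships by default
TRANSFORMS-sysloghost = syslog-host
```

This only works if the events actually carry a hostname where the syslog-host regex expects one, so it is a starting point to test, not a guaranteed fix.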
Hello All, I enabled the indicators feature with "/opt/phantom/bin/phenv set_preference --indicators yes".

I have two problems that might be connected:
1. I only enabled three fields in the Indicators tab under Administration, but SOAR still created many indicators on fields that are configured as disabled.
2. I see that enabling the indicators feature consumes all my free RAM, and I have a lot of RAM, so I understand there is a problem here.

Can anyone say why, and how to solve it?
Hi @kamlesh_vaghela , tried this but the dashboard width isn't changing.
Have more UFs and/or HFs been added lately that send logs to your indexer?
Hi

You need to go through the basic data source onboarding process. There are lots of different instructions on how to do it. Here are some links:

https://lantern.splunk.com/Splunk_Success_Framework/Data_Management/Data_onboarding_workflow
https://conf.splunk.com/files/2017/slides/data-onboarding-where-do-i-begin.pdf
https://data-findings.com/wp-content/uploads/2024/04/Data-OnBoarding-2024-04-03.pdf

There are many more presentations which you can easily find.

r. Ismo
I want to separate the events by date, and I want to isolate the red-highlighted parts, which have a similar format, but I don't know how. I would appreciate it if you could tell me how.
Please paste the raw event data, preferably in a code block </> to preserve original formatting.
An update: the problem was not the configuration of Splunk (so the mix of new and old versions seems to be OK in this case). The root cause was in the source data. Thanks for your help anyway, PurpleRick.
Thank you for your reply. First, let me talk a little about my setup. I used regex101 to check the line-breaking regex in my config. As for the timestamp, it matched all of the events. I just tried your settings and they did not work; of course, I put props.conf in /system/local and restarted Splunk. Any other ideas, sir?
Thanks for the response, but unfortunately it doesn't work.

YOUR_SEARCH | eval tlogParameters = replace(tlogParameters, "'", "\"")

This doesn't change anything; tlogParameters is still displayed in _raw with single quotes, surrounded by double quotes, as in the original.

YOUR_SEARCH | eval tlogParameters = replace(tlogParameters, "'", "\"") | eval _raw = tlogParameters

This produces an empty result (the same as the full query you proposed).
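One guess as to why the second variant returns empty results: if tlogParameters is null in some events, eval _raw = tlogParameters blanks out _raw for those events. A hedged sketch that keeps the original _raw whenever the field is missing:

```
YOUR_SEARCH
| eval _raw = coalesce(replace(tlogParameters, "'", "\""), _raw)
```

This is only a diagnostic starting point; if the field is present everywhere, the cause lies elsewhere.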
Your dashboard could contain a search which (re)sets the tokens to their default values in its done clause.
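In Simple XML, such a token-resetting search could look like the following sketch; the token names and default values are placeholders:

```
<!-- minimal sketch: a base search whose done handler resets tokens -->
<search>
  <query>| makeresults</query>
  <done>
    <set token="tok_status">*</set>
    <set token="tok_host">*</set>
  </done>
</search>
```

Because the search runs on every dashboard load, the done handler fires after a refresh and puts the tokens back to their defaults.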
Made some changes at the source; now we are getting the logs in JSON format, and the Add-on Builder option worked fine.
It depends on what it is you are trying to achieve and what you would accept as a "solution". For example, you could try adding spaces to the end of field names so the column width increases. Basically, it is a lot of trial and error to get something close to what you want, and you might not get there, so perhaps you should ask yourself, is it worth the effort?
Sorry, try this: | chart values(duration) over _time I have also edited my previous comment with this.
Hi @Siddharthnegi, if you refresh a page, the tokens keep the values you set. Use the solution to reset the tokens. Ciao. Giuseppe