Well, obviously it is possible! The "issue" is that the total emails are counted by user, subject, and action, whereas the other two counts are by just user and subject. You could change the eventstats to correct this (note the comma before the last sum, which was missing):

| eventstats sum(eval(if(action="quarantined", 1, 0))) as quarantined_count_peruser,
             sum(eval(if(action="delivered", 1, 0))) as delivered_count_peruser,
             sum(total_emails) as total_emails by src_user, subject
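For context, a minimal end-to-end sketch of how that eventstats could sit in a search so that delivered + quarantined can never exceed the total (the index name email_logs is an assumption, not from the thread; this counts at the event level, assuming one event per email):

| eventstats sum(eval(if(action="quarantined", 1, 0))) as quarantined_count_peruser,
             sum(eval(if(action="delivered", 1, 0))) as delivered_count_peruser,
             count as total_emails by src_user, subject

Because all three aggregations here run over the same by-clause (src_user, subject), the per-action counts are guaranteed to partition the total, which is exactly the consistency the thread is after.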
Hello @ITWhisperer, the result should be the total email count, plus the specific counts for the delivered and quarantined ones. In my screenshot there are, for example, 6 total emails (first row) and 12 delivered, which is not possible. So a possible expectation would be:
Case 1: 6 total emails, 6 delivered, 0 quarantined
Case 2: 6 total emails, 3 delivered, 3 quarantined
Case 3: 6 total emails, 1 delivered, 5 quarantined
I installed the Snort 3 JSON Alerts add-on. I made changes in inputs.conf (/opt/splunk/etc/apps/TA_Snort3_json/local) like this:

[monitor:///var/log/snort/*alert_json.txt*]
sourcetype = snort3:alert:json

When I search for events like below (sourcetype="snort3:alert:json") there is NOTHING. But Splunk knows there is something in that path, and how much, like below. What more I can tell is what Splunk reports when starting:

Value in stanza [eventtype=snort3:alert:json] in /…/TA_Snort3_json/default/tags.conf, line 1 is not URL encoded: eventtype = snort3:alert:json
Your indexes and inputs configurations are not internally consistent. For more info, run 'splunk btool check --debug'

Please help.
When we built our (Splunk-related) environment, I checked the Splunk docs for information that could say something about the proper functioning of a single indexer. I may be mistaken, but in this case I went by the indexer colour status. The API endpoint is "bla bla bla/services/server/info/health_info". If an indexer has green or yellow status, the LB decides that node is OK; if an indexer has red status, the LB decides that node is not OK and selects another one.
Check splunkd.log for replication errors. Verify that the AWS security groups allow communication among all indexers on ports 8080 and 9887, and to the Cluster Manager's port 8089.
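A quick way to surface those replication errors from the search head is to query the _internal index; a minimal sketch (the component wildcard is an assumption, since exact component names vary by Splunk version):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) component=*Replicat*
| stats count by host, component, log_level
| sort - count

Hosts that show up heavily here are usually the ones that cannot reach their replication peers, which points back to the security-group rules mentioned above.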
Try it like this

index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" (("TargetID":"abc" "Sender":"SenderID":"abc") OR ("status": "SUCCESS"))
| rex "CORRELATION ID :: (?<correlation_id>\S+)"
| eval success_id = if(searchmatch("COMPLETED"), correlation_id, null())
| eventstats values(success_id) as success_id by correlation_id
| where correlation_id = success_id
In what way is it not what you expected? Please share what you had expected.
Hi yuanliu, thank you for your reply. I have tried the search you shared, but it doesn't work. Here we have two different searches:

1) Request payload:
index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "TargetID":"abc" "Sender":"SenderID":"abc"

2) Success payload:
index="OS" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "status": "SUCCESS"

I need to query (only for the success payload) in such a way that the correlation ID present in the success payload matches the correlation ID present in the request payload. Could you please help me out? NOTE: different payloads have different correlation IDs.
Hello @marysan, it seems that the result is not as expected:
Hello all, our current environment is: a three-site cluster, two sites on-premises (14 indexers, 7 in each site) and one site (7 indexers) hosted on AWS. The AWS indexers were clustered recently. It has been almost 15 days, but the replication factor and search factor are still not met. What might be the reason, and what are the possible ways I can resolve this? There are around 300 fixup tasks pending, and the number has remained the same for the past 2 weeks. I've manually rolled the buckets, but still no use.
The first sin is "monitor by health API" - it doesn't tell you anything about the availability of the syslog input. But from your description it seems that your LB is at least a bit syslog-aware (if you're able to extract the payload and resend it as UDP, that's something). What is it, if you can share this information?
Hi @493600, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
I also understand that apps can do similar extractions, but there are no apps related to the sourcetypes we are talking about. If you mean an external syslog receiver, maybe in the future. At present we ingest and index literally everything, just because we don't know what information we will really need to resolve a problem. Can you tell me a little more about a "not-syslog-aware" LB? What do you mean? Our LB does the following:
- monitors the indexers via the health API endpoint of each indexer
- if one or more is down, for some reason, the LB selects another healthy instance
- spreads syslog messages across all IDXC members to avoid "data imbalance" - our approach is debatable, but it works
- for some reasons, we also make source port and protocol overrides (some systems do not support UDP, so we change the protocol to avoid return TCP traffic)
Make sure the Scheduledview objects have the right permissions too.
Hello, I have got the solution to this. We need to first create results and initialize the count as 0; this creates one table with 4 rows. Then combine that with the other lookup files. Below is the query that I have used:

| makeresults
| eval threat_key="p_default_domain_risklist_hrly"
| eval count=0
| append [| makeresults | eval threat_key="p_default_hash_risklist_hrly" | eval count=0 ]
| append [| makeresults | eval threat_key="p_default_ip_risklist_hrly" | eval count=0 ]
| append [| makeresults | eval threat_key="p_default_url_risklist_hrly" | eval count=0 ]
| fields - _time
| append [| inputlookup ip_intel | search threat_key=*risklist_hrly* | stats count by threat_key ]
| append [| inputlookup file_intel | search threat_key=*risklist_hrly* | stats count by threat_key ]
| append [| inputlookup http_intel | search threat_key=*risklist_hrly* | stats count by threat_key ]
| stats sum(count) as count by threat_key
| search count=0
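For what it's worth, a slightly more compact sketch of the same idea, seeding all four zero-count rows from a single makeresults (this assumes a Splunk version recent enough to support makeresults format=csv, and it has not been tested against the original lookups):

| makeresults format=csv data="threat_key
p_default_domain_risklist_hrly
p_default_hash_risklist_hrly
p_default_ip_risklist_hrly
p_default_url_risklist_hrly"
| eval count=0
| append [| inputlookup ip_intel | search threat_key=*risklist_hrly* | stats count by threat_key ]
| append [| inputlookup file_intel | search threat_key=*risklist_hrly* | stats count by threat_key ]
| append [| inputlookup http_intel | search threat_key=*risklist_hrly* | stats count by threat_key ]
| stats sum(count) as count by threat_key
| search count=0

The seeded zero rows guarantee every threat_key appears in the final stats, so a key with no lookup matches surfaces as count=0 instead of silently disappearing.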
Yes, the default syslog sourcetype calls the transform you mention, but as far as I remember there are more apps that bring similar extractions with them. And I still advocate for an external syslog receiver. This way you can easily (compared to doing it with transforms) manipulate what you're indexing from which source, and so on. Also, "fault tolerance" in the case of a non-syslog-aware LB is... debatable. But hey, it's your environment.
I checked the DNS records many times. Also, thank you for your advice, but it is not a solution, just a workaround.
I agree with you and also suspect that Splunk has an internal resolver or cache, but I can't find any docs or Q&A that can help me find out more.
1. I understand it, but we need to see hostnames instead of IPs because we are using Splunk as a log collector for different parts of our internal infrastructure. Using hostnames is more convenient because they are human-readable.
2. If I understand Splunk correctly, it has a pre-defined [syslog] stanza in props.conf and a related [syslog-host] stanza in transforms.conf. But in my particular situation, none of the sourcetypes match the syslog pattern, because they all have names like *_syslog. My transforms.conf also doesn't have any records related to hostname override.
3 and 4. I know, but we decided to abandon using a dedicated syslog server for different reasons, such as fault tolerance and the desire to make the "log ingestion" system less complicated.
Thank you for your advice.
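For reference, a minimal sketch of what a host-override pair for one such sourcetype could look like, modeled on the built-in syslog-host transform (the sourcetype name myapp_syslog is a placeholder, and the REGEX assumes a standard syslog timestamp followed by the hostname - the capture group must match your actual event layout):

# props.conf
[myapp_syslog]
TRANSFORMS-sethost = myapp_syslog_host

# transforms.conf
[myapp_syslog_host]
DEST_KEY = MetaData:Host
REGEX = \w{3}\s+\d+\s[\d:]+\s(\S+)
FORMAT = host::$1

Since this is an index-time transform, it has to live on the first "heavy" instance in the pipeline (indexer or heavy forwarder), and only applies to data ingested after the restart.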
Hello all, I enabled the indicators feature with "/opt/phantom/bin/phenv set_preference --indicators yes". I have two problems that might be connected:
1. I only enabled three fields in the Indicators tab under Administration, but SOAR still created many indicators on fields that are configured as disabled.
2. I see that enabling the indicators feature consumes all my free RAM, and I have a lot of RAM, so I understand there is a problem with this.
Can anyone say why, and how to solve it?
Hi @kamlesh_vaghela, I tried this but the dashboard width isn't changing.