
Unable to see all events indexed by Splunk via eStreamer eNcore.

att35
Builder

Hi,

We recently installed the new eStreamer eNcore app for Sourcefire. It has connected successfully with the FMC, and we can see data being received in the encore.***.log files under /opt/splunk/etc/apps/TA-eStreamer/data.

The data summary page also shows the number of events that have been indexed, but when we try to search this data, only a handful of events are returned.

Also, the few events we do see in the search appear to be metadata rather than the actual Sourcefire connection events.


Why can't we see all the events being indexed, as shown in the data summary?

Thanks,

~ Abhi

1 Solution

sastrach
Path Finder

It appears that encore is receiving data from the FMC with timestamps (event_sec) in the past. There are various facets to this.

  1. In terms of seeing all the data which is in Splunk, you need to change the time frame, e.g. search sourcetype="cisco:estreamer:data" and then select Presets > All time - that will at least show you what's there (see the example search after this list). This of course is not really your issue.
  2. encore might just be playing catch-up (I note you are also having a separate issue on this).
  3. If you are seeing data arriving out of sync - i.e. old events mixed in with new - then this can be expected behaviour. It is entirely natural for events to be sent with timestamps out of sequence:
     a. The timestamp is created when the event occurs at the sensor (IDS; most events, but not those created at the FMC).
     b. eStreamer (the server) writes those events to the unified file in the order they are received at the FMC, regardless of timestamp, and this is the order in which they are sent to a client (encore).
     c. Consider a case where SensorA loses connectivity with its FMC while SensorB does not. SensorB events will continue, and SensorA events will get written once it reconnects - but with older timestamps mixed in with the newer.
  4. I note that you are writing out metadata. This is purely optional - encore will always request the metadata (even if it doesn't write it out) so that it can decorate the other events. Metadata does not have a timestamp, so Splunk just indexes it "now".
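
As a quick sanity check - a sketch assuming the TA's default sourcetype name - a search like the following over all time will show how the indexed events are spread across days:

sourcetype="cisco:estreamer:data" earliest=0
| timechart span=1d count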

I accept that these points may or may not apply to your specific case, but encore (the client) will only receive the events which the eStreamer / FMC server sends it - and in the same order. So if you know there is a further problem, then that is something that TAC and the engineering team may have to look at.


att35
Builder

Issue got resolved after deploying the patch for CSCve44987.

Thanks,

~ Abhi


sastrach
Path Finder

I believe the behaviour you are seeing is also related to the eStreamer bug outlined here: https://answers.splunk.com/answers/560948/is-there-any-way-to-speed-up-estreamer-encore-data.html#an...


att35
Builder

Thanks Sam.

We are working with TAC on deploying this patch.


paulstark
Path Finder

I have not personally used eNcore yet, nor can I confirm whether it reports SFIM or IPS events, but you can start with this SPL search.

index=* source=encore*


att35
Builder

Getting closer, I guess...

Each time a new log file is generated and read by Splunk, only the first 20 or so lines get the correct current timestamp, and everything below that gets tagged under July 27th. Here's the raw data.

Please see the line starting with rec_type=71 (highlighted in bold; it's also all one long line, so I had to split it across newlines for this post). For that and other similar events, Splunk assigns incorrect timestamps. Could it be because of the field event_sec, which is not present for the other events?

rec_type=90 id=0 name=Detector rec_type_desc="Source Type" rec_type_simple="SOURCE TYPE"
rec_type=96 id=0 name=RNA rec_type_desc="Source Detector" rec_type_simple="SOURCE DETECTOR"
rec_type=63 id=617 name=DNS rec_type_desc="Server Metadata" rec_type_simple=SERVICE
rec_type=115 name=X0-Internal rec_type_desc="Security Zone Name Metadata" rec_type_simple=ZONE uuid=7b134026-4550-11e7-83bd-330b9c433d6f
rec_type=116 name=s1p1 rec_type_desc="Interface Name Metadata" rec_type_simple=INTERFACE uuid=98ca9420-7c0c-11e5-adaf-b08e8512dca7
rec_type=116 name=s1p2 rec_type_desc="Interface Name Metadata" rec_type_simple=INTERFACE uuid=a32b4ba4-bebe-11e5-82d7-3df2199708a4
rec_type=145 name="Access Control Policy" rec_type_desc="Access Control Policy Metadata" rec_type_simple="ACCESS CONTROL POLICY" sensor=abc uuid=00000000-0000-0000-0000-000059789626
rec_type=119 id=268434497 name="Log Everything" rec_type_desc="Access Control Rule ID Metadata" rec_type_simple="FIREWALL RULE" uuid=00000000-0000-0000-0000-000059789626
rec_type=120 id=2 name=Allow rec_type_desc="Access Control Rule Action Metadata" rec_type_simple="FIREWALL RULE ACTION"
rec_type=122 id=0 name="Risk unknown" rec_type_desc="URL Reputation Metadata" rec_type_simple="FIREWALL URL REPUTATION"
rec_type=98 id=9999999 ip_proto=Unknown name=Unknown rec_type_desc=User rec_type_simple="RUA USER"
rec_type=55 id=617 name=DNS rec_type_desc="Client Application Metadata" rec_type_simple="CLIENT APP"
rec_type=520 id=840 name="united states" rec_type_desc="Geolocation Data" rec_type_simple=GEOLOCATION
rec_type=602 id=0 name=TLS_NULL_WITH_NULL_NULL rec_type_desc="SSL Cipher Suite" rec_type_simple="SSL CIPHER SUITE"
rec_type=604 id=0 name=Unknown rec_type_desc="SSL Version" rec_type_simple="SSL VERSION"
rec_type=605 description="Not Checked" id=0 rec_type_desc="SSL Server Certificate Status" rec_type_simple="SSL SERVER CERT STATUS"
rec_type=606 description=Unknown id=0 rec_type_desc="SSL Actual Action" rec_type_simple="SSL ACTION"
rec_type=608 description=Unknown id=0 rec_type_desc="SSL Flow Status" rec_type_simple="SSL FLOW STATUS"
rec_type=320 description="text strings" id=16 name=TXT rec_type_desc="DNS Record Type Metadata" rec_type_simple=DNS
rec_type=123 id=2 name=abc rec_type_desc="Managed Device Metadata" rec_type_simple=SENSOR
**rec_type=71 app_proto=DNS client_app=DNS client_version="" connection_id=40188 dest_autonomous_system=0 dest_bytes=434 dest_ip=x.x.x.x dest_ip_country=840 dest_mask=0 dest_pkts=1 dest_port=53 dest_tos=0 dns_query=3.1o19xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxsxl.net dns_rec_id=16 dns_record_desc="text strings" dns_record_name=TXT dns_resp_id=0 dns_ttl=300 event_desc="Flow Statistics" event_sec=1501145147 event_subtype=1 event_type=1003 event_usec=0 file_count=0 first_pkt_sec=1501142463 fw_policy="Access Control Policy" fw_rule="Log Everything" fw_rule_action=Allow fw_rule_reason=N/A has_ipv6=1 http_referrer="" http_response=0 iface_egress=xxxx iface_ingress=yyyy instance_id=18 ip_layer=0 ip_proto=UDP ips_count=0 last_pkt_sec=1501142463 legacy_ip_address=0.0.0.0 mac_address=0:0:0:0:0:0 monitor_rule_1=N/A monitor_rule_2=N/A monitor_rule_3=N/A monitor_rule_4=N/A monitor_rule_5=N/A monitor_rule_6=N/A monitor_rule_7=N/A monitor_rule_8=0 netbios_domain="" netflow_src=00000000-0000-0000-0000-000000000000 num_ioc=0 rec_type_desc="Connection Statistics" rec_type_simple=RNA referenced_host="" sec_intel_event=No sec_intel_ip=N/A sec_zone_egress=X0-Internal sec_zone_ingress=X0-Internal security_context=00000000000000000000000000000000 
sensor=abc sinkhole_uuid=00000000-0000-0000-0000-000000000000 snmp_in=0 snmp_out=0 src_autonomous_system=0 src_bytes=316 src_ip=y.y.y.y src_ip_country=unknown src_mask=0 src_pkts=1 src_port=52122 src_tos=0 ssl_actual_action=Unknown ssl_cert_fingerprint=0000000000000000000000000000000000000000 ssl_cipher_suite=TLS_NULL_WITH_NULL_NULL ssl_expected_action=Unknown ssl_flow_error=0 ssl_flow_flags=0 ssl_flow_messages=0 ssl_flow_status=Unknown ssl_policy_id=00000000000000000000000000000000 ssl_rule_id=0 ssl_server_cert_status="Not Checked" ssl_server_name="" ssl_session_id=0000000000000000000000000000000000000000000000000000000000000000 
ssl_ticket_id=0000000000000000000000000000000000000000 ssl_url_category=0 ssl_version=Unknown tcp_flags=0 url="" url_category=Unknown url_reputation="Risk unknown" user=Unknown user_agent="" vlan_id=0 web_app=Unknown**
rec_type=63 id=1122 name=HTTPS rec_type_desc="Server Metadata" rec_type_simple=SERVICE
rec_type=55 id=1296 name="SSL client" rec_type_desc="Client Application Metadata" rec_type_simple="CLIENT APP"

Here's the props.conf stanza for this sourcetype:

# Sourcetype
[cisco:estreamer:data]
SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = event_sec=
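
As a rough check - assuming event_sec is extracted as a search-time field from the key=value pairs, which may not hold in every environment - a search like this would show whether the July 27th _time values are coming straight from event_sec on those rec_type=71 events:

sourcetype="cisco:estreamer:data" rec_type=71 earliest=0
| eval indexed_time=strftime(_time, "%Y-%m-%d %H:%M:%S"), event_sec_time=strftime(event_sec, "%Y-%m-%d %H:%M:%S")
| table indexed_time event_sec event_sec_time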

att35
Builder

It turns out the event_sec timestamp recorded in the raw logs is incorrect. I converted one of the recent ones, 1501145150, and it does translate to July 27, 2017 8:45:50 AM, which is exactly the timestamp under which this event was indexed.

Not sure if it's the eNcore app or the FMC itself that is sending the incorrect timestamp. Will update once I have more details.
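
For anyone following along, the conversion is easy to reproduce in Splunk itself (a throwaway example, not tied to the indexed data):

| makeresults
| eval event_sec=1501145150, human=strftime(event_sec, "%Y-%m-%d %H:%M:%S")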


paulstark
Path Finder

FMC will set event_sec, I believe. If that field (or another timestamp) is not there, it should get the file's timestamp. I have also seen issues with flow start/end times vs. event time for the connection events, which I don't see in your sample. Best of luck! I am anxious to know if eNcore can keep up with high-volume flows.


pj
Contributor

Even though the props settings in the TA do work, I modified them to add a few extra best-practice settings. I'm not a massive fan of using TRUNCATE = 0, as it isn't generally good practice.

[cisco:estreamer:data]
SHOULD_LINEMERGE = false
TIME_PREFIX = event_sec=
LINE_BREAKER = ([\n\r]+)rec_type=
MAX_TIMESTAMP_LOOKAHEAD = 10
TRUNCATE = 999999
TIME_FORMAT = %s
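
Worth noting (general Splunk behaviour, not specific to this TA): these are index-time settings, so they need to live on the first full Splunk instance that parses the data (indexer or heavy forwarder) and only affect events indexed after the change. A rough way to see whether newly arriving events still carry old timestamps is to compare _time with _indextime - a sketch, not part of the TA:

sourcetype="cisco:estreamer:data" earliest=0
| eval lag_seconds=_indextime - _time
| timechart span=1d max(lag_seconds) avg(lag_seconds)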

att35
Builder

Hi Paul,

I did try that, but with no success. I am logged in as admin and made sure the role has permission to search across all indexes.

index=* sourcetype="cisco:estreamer:data"
index=* source=encore*

The data summary continues to show a high count of indexed events, but we can see only a small number of results when searching.


att35
Builder

Finally found the data by doing a search over all time.

Even though this app was configured just yesterday, for some reason the data is getting indexed under past timestamps.

_time     count
2017-07-19  1289
2017-07-20  0
2017-07-21  0
2017-07-22  0
2017-07-23  0
2017-07-24  0
2017-07-25  0
2017-07-26  0
2017-07-27  40242
2017-07-28  0
2017-07-29  0
2017-07-30  0
2017-07-31  0
2017-08-01  0
2017-08-02  0
2017-08-03  723
2017-08-04  38

Not sure what would make it index data with the wrong timestamp, especially when it's not doing it for all events, because we do see a few events indexed for the last two days.

New events are actively getting indexed, but under a July 27th timestamp for some reason.
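
If it helps narrow this down, a per-day breakdown split by source file (same assumptions as the earlier all-time searches) would show whether the July 27th bucket is being fed by one particular encore log file or by all of them:

sourcetype="cisco:estreamer:data" earliest=0
| bucket _time span=1d
| stats count by _time source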
