<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Is there an efficient way to do field summary on a large set of source types in Splunk Enterprise</title>
    <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623254#M14697</link>
    <description>&lt;P&gt;I don't really want to debate the value of the CIM. I think it's great, but not enough for me or my SOC on its own.&lt;BR /&gt;CIM is great for quickly finding data. And actually you reminded me that I should include the CIM and its fields in my data model. Those records are just as pivotable as the other sources.&lt;/P&gt;&lt;P&gt;That said, access to the underlying logs is still critical:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;For the integrity-checking portion that I mentioned. An altered log format or an updated TA can often result in data that doesn't get included in the data model, or that is missed by a search of the logs that assumes the data would exist.&lt;UL&gt;&lt;LI&gt;For example, if src_ip didn't get parsed out of a network traffic sourcetype (firewall logs, flow logs), the higher-level datamodel search would simply not find that record and the SOC analyst might be none the wiser.&lt;/LI&gt;&lt;LI&gt;The integrity checking is what will clue me into a problem with the source.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;Also, CIM data models often do not contain enough information to fully understand an alert. I find that they are great for alerting, but we often have to drill down to the sources.&lt;UL&gt;&lt;LI&gt;For example, the Authentication datamodel does not account for why authentication failed. A locked account or an expired password is going to result in lots of authentication failures.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;CIM does not automatically account for all newly onboarded sources. An automated data dictionary does.&lt;BR /&gt;&lt;BR /&gt;In my experience, robust IR will still require that we work with the logs.&lt;/LI&gt;&lt;/UL&gt;</description>
    <pubDate>Mon, 05 Dec 2022 15:06:33 GMT</pubDate>
    <dc:creator>MonkeyK</dc:creator>
    <dc:date>2022-12-05T15:06:33Z</dc:date>
    <item>
      <title>Is there an efficient way to do field summary on a large set of source types?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623079#M14675</link>
      <description>&lt;P&gt;I've been wanting to build some integrity checking and other functionality based on knowing the fields in a sourcetype for a while now.&lt;/P&gt;
&lt;P&gt;At my company we've built a data dictionary of indexes and sourcetype of interest to the SOC. They can search the dictionary to help them remember the important data sources. I'd like to augment/use this info in a couple of new ways:&lt;/P&gt;
&lt;P&gt;1) give them a field list for all of these sourcetypes so they could search for which sourcetypes have a relevant field (like src_ip)&lt;/P&gt;
&lt;P&gt;2) I'd like to note the fields that appear in 100% of records for a sourcetype and then, every day, find out if it is missing any of those fields. This would quickly clue me into data issues related to the events sent, parsing, or knowledge objects.&lt;/P&gt;
&lt;P&gt;I know how to get a list of fields for one sourcetype and store that info. And I know how to compare a sourcetype's past set of fields to its current set.&lt;/P&gt;
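&lt;P&gt;(A minimal sketch of how that single-sourcetype check could look, assuming a hypothetical baseline lookup named stFieldBaseline.csv with columns sourcetype and field; the sourcetype and lookup names here are illustrative, not my production search.)&lt;/P&gt;
&lt;PRE&gt;```today's fields for one sourcetype```
index=* sourcetype=pan:traffic earliest=-1d@d latest=@d
| fieldsummary
| eval present="today"
| table field present
```append the baseline field list; baseline fields with no "today" row are missing```
| inputlookup append=true stFieldBaseline.csv where sourcetype="pan:traffic"
| stats values(present) as present by field
| where isnull(present)&lt;/PRE&gt;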
&lt;P&gt;My challenge now is how to get the list of fields for the 100 sourcetypes of interest.&lt;BR /&gt;&lt;BR /&gt;So far my best idea is to create 100 jobs, one to handle each sourcetype. Something like:&lt;/P&gt;
&lt;PRE&gt;```1-get the sourcetypes of interest and pull back data for them```
[| inputlookup dataDictionary.csv where imf_critical=true | eval yesterday=relative_time(now(),"-1d@d") | where evalTS&amp;gt;yesterday
| dedup sourcetype | sort sourcetype | head 5 | tail 1 | table sourcetype]
earliest=-2d@d latest=-1d@d
```2-get samples for all indexes in which the sourcetype appears```
| dedup 10 index sourcetype
| fieldsummary
```3-determine field coverage so we can pick the hallmark fields```
| eventstats max(count) as maxCount
| eval pctCov=round(count/maxCount,2)*100
| table field pctCov
```4-add back in the sourcetype name```
| append
    [| inputlookup dataDictionary.csv where imf_critical=true | eval yesterday=relative_time(now(),"-1d@d") | where evalTS&amp;gt;yesterday
    | dedup sourcetype | sort sourcetype | head 5 | tail 1 | table sourcetype]
| eventstats first(sourcetype) as sourcetype
| eval evalTS=now()
| table sourcetype evalTS field pctCov
```5-collect the fields to a summary index daily```
| collect index=soc_summary marker="sumType=dataInfo, sumSubtype=stFields"&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;If I ran 100 jobs like this, the number after head would increment to give me the next sourcetype.&lt;/P&gt;
&lt;P&gt;But I feel like there has to be a better way to do fieldsummary on a lot of sourcetypes.&lt;/P&gt;
&lt;P&gt;Any ideas?&lt;/P&gt;</description>
      <pubDate>Mon, 05 Dec 2022 19:50:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623079#M14675</guid>
      <dc:creator>MonkeyK</dc:creator>
      <dc:date>2022-12-05T19:50:35Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623100#M14680</link>
      <description>&lt;P&gt;I'm reading this question on my tablet while walking my dog so the circumstances are not very good for analysing your search &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;&lt;P&gt;But let me ask you a different question, especially since you're talking about a SOC team, which is usually interested in a relatively small set of "types of data" (not sourcetypes!). Instead of making them look through a dictionary of fields for multiple sourcetypes and indexes, why not use datamodels to normalize data and let the analysts query the data in a unified way regardless of where the specific events come from?&lt;/P&gt;&lt;P&gt;CIM is a great starting point for that. You can also create your own datamodels if needed.&lt;/P&gt;&lt;P&gt;The possible additional upside is that datamodels can be accelerated (but that introduces some problems with managing access, I know).&lt;/P&gt;</description>
      <pubDate>Sat, 03 Dec 2022 12:02:16 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623100#M14680</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2022-12-03T12:02:16Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623254#M14697</link>
      <description>&lt;P&gt;I don't really want to debate the value of the CIM. I think it's great, but not enough for me or my SOC on its own.&lt;BR /&gt;CIM is great for quickly finding data. And actually you reminded me that I should include the CIM and its fields in my data model. Those records are just as pivotable as the other sources.&lt;/P&gt;&lt;P&gt;That said, access to the underlying logs is still critical:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;For the integrity-checking portion that I mentioned. An altered log format or an updated TA can often result in data that doesn't get included in the data model, or that is missed by a search of the logs that assumes the data would exist.&lt;UL&gt;&lt;LI&gt;For example, if src_ip didn't get parsed out of a network traffic sourcetype (firewall logs, flow logs), the higher-level datamodel search would simply not find that record and the SOC analyst might be none the wiser.&lt;/LI&gt;&lt;LI&gt;The integrity checking is what will clue me into a problem with the source.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;Also, CIM data models often do not contain enough information to fully understand an alert. I find that they are great for alerting, but we often have to drill down to the sources.&lt;UL&gt;&lt;LI&gt;For example, the Authentication datamodel does not account for why authentication failed. A locked account or an expired password is going to result in lots of authentication failures.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;LI&gt;CIM does not automatically account for all newly onboarded sources. An automated data dictionary does.&lt;BR /&gt;&lt;BR /&gt;In my experience, robust IR will still require that we work with the logs.&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Mon, 05 Dec 2022 15:06:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623254#M14697</guid>
      <dc:creator>MonkeyK</dc:creator>
      <dc:date>2022-12-05T15:06:33Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623318#M14703</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/164485"&gt;@MonkeyK&lt;/a&gt;&lt;BR /&gt;All the possible fields for all the selected sourcetypes? With an assumption that the sourcetypes all have the same fields every time, so you can create a list of "supposed_to_be_there_fields" and then reference that list every time, to find when a field is missing. Is that right?&lt;BR /&gt;That's a few questions rolled into one. They probably won't all be answered here. Solve for the first part and then create another question for the second part (referencing the first part, for those who come along later).&lt;BR /&gt;&lt;BR /&gt;* Create a list. Use what you have (then determine how often it's going to be updated). An alternative could be something like&lt;/P&gt;&lt;LI-CODE lang="python"&gt;| fieldsummary&lt;/LI-CODE&gt;&lt;P&gt;I suspect you already knew this command. Here's the link to the docs for those finding this later: &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Fieldsummary" target="_blank" rel="noopener"&gt;https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Fieldsummary&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;* Export as csv, and use as a lookup. Compare the lookup list against what's available to find what's missing. In the Splunk community, we often kick this link around (credit: Duane Waddle): &lt;A href="https://www.duanewaddle.com/proving-a-negative/" target="_blank" rel="noopener"&gt;https://www.duanewaddle.com/proving-a-negative/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;As a one-off, this might be okay. As an ongoing solution against hundreds of sourcetypes, it sounds a little fragile. YMMV.&lt;BR /&gt;&lt;BR /&gt;Best of luck! Maybe this helped some.&lt;/P&gt;</description>
      <pubDate>Mon, 05 Dec 2022 19:49:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623318#M14703</guid>
      <dc:creator>efavreau</dc:creator>
      <dc:date>2022-12-05T19:49:03Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623321#M14704</link>
      <description>&lt;P&gt;OK. You do have a valid point. I just wanted to point out the existence of CIM because some people go to great lengths to reinvent the wheel while not noticing things already done by others.&lt;/P&gt;&lt;P&gt;I must say that I don't understand a thing or two about your search.&lt;/P&gt;&lt;P&gt;Firstly, I wouldn't search through all my data. How much are you ingesting? That must be a performance hit on your environment. I'd stick to sampling. And I mean heavy sampling.&lt;/P&gt;&lt;P&gt;Secondly, I don't understand the "head 5 | tail 1" - why would you want a fifth result?&lt;/P&gt;&lt;P&gt;Thirdly, instead of doing fieldsummary, I think I'd simply do something like&lt;/P&gt;&lt;PRE&gt;| stats count count(*) AS *&lt;/PRE&gt;&lt;P&gt;to find percent coverage. (With this you can easily add a BY sourcetype clause.)&lt;/P&gt;</description>
      <pubDate>Mon, 05 Dec 2022 19:58:22 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623321#M14704</guid>
      <dc:creator>PickleRick</dc:creator>
      <dc:date>2022-12-05T19:58:22Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623340#M14709</link>
      <description>&lt;P&gt;Right, I probably could have explained better.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;| inputlookup dataDictionary.csv&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;is my lookup of all indexes and sourcetypes in the environment. Of the 1400 we have, there are about 180 that the SOC is specifically interested in, and of those there are 70 sourcetypes that the SOC considers critical to their work.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;I used the first clause as a subsearch to get one sourcetype to act on. In this case, the 5th sourcetype when critical sourcetypes are sorted ascending.&lt;/P&gt;&lt;P&gt;I do this because I need to pick a sourcetype to do fieldsummary on. I don't know how to do fieldsummary on more than one sourcetype and have the result tie back to the sourcetype of interest, hence the request for help.&lt;/P&gt;&lt;P&gt;I limit the amount of data to evaluate with dedup:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;| dedup 10 index sourcetype&lt;BR /&gt;says to get 10 records of the sourcetype for each index that uses that sourcetype.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-------------------------------------------&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;BR /&gt;I like your stats count(*) as count by sourcetype technique. That would certainly allow me to summarize by sourcetype without needing to do them one at a time. Feels like it should be workable, but I don't know how to use the results.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;(would have done the next bit in a table, but the markdown is failing me in the most frustrating way)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;With it I can get a table with a row for each sourcetype, where I'd wind up with columns for all fields across sourcetypes: some would be the same as the total, some less than the total, and some zero (because they are part of a different sourcetype).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Given I don't know the field names, I don't know how I'd determine pctCoverage by sourcetype.&lt;/P&gt;&lt;P&gt;So how would I go from that to fields with 100% coverage by sourcetype?&lt;/P&gt;&lt;P&gt;traffic_logs: src_ip, dest_ip&lt;/P&gt;&lt;P&gt;dns: src_ip, request_type&lt;/P&gt;&lt;P&gt;dhcp: src_ip, dest_ip, mac, lease&lt;/P&gt;&lt;P&gt;or even fields by coverage?&lt;/P&gt;&lt;P&gt;sourcetype, field, pctCov&lt;/P&gt;&lt;P&gt;traffic, src_ip, 100&lt;/P&gt;&lt;P&gt;traffic, dest_ip, 100&lt;/P&gt;&lt;P&gt;etc&lt;/P&gt;</description>
      <pubDate>Mon, 05 Dec 2022 21:43:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623340#M14709</guid>
      <dc:creator>MonkeyK</dc:creator>
      <dc:date>2022-12-05T21:43:05Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623951#M14768</link>
      <description>&lt;P&gt;I got it. And it definitely starts with PickleRick's answer.&lt;/P&gt;&lt;PRE&gt;&amp;lt;base search&amp;gt;
| bin span=3h _time
| dedup 10 index sourcetype _time host ```get a good cross section of the sourcetype in all of its use cases```
| stats count count(*) as * by sourcetype
| untable sourcetype field fieldCount
| eval stCount=if(field=="count", fieldCount, null())
| eventstats max(stCount) as stCount by sourcetype
| eval pctCov=fieldCount/stCount
| search pctCov&amp;gt;=.5
| table sourcetype field pctCov&lt;/PRE&gt;</description>
      <pubDate>Mon, 12 Dec 2022 03:54:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/623951#M14768</guid>
      <dc:creator>MonkeyK</dc:creator>
      <dc:date>2022-12-12T03:54:17Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/635972#M15815</link>
      <description>&lt;P&gt;I was about to start writing a post trying to figure out what you seem to have figured out. I think I get the gist of your original search, though it seems like you have to repeat it per index? If that's the case, it wouldn't work, as I'd imagine you've got quite a few indexes to search over. I guess my question is: what is the &amp;lt;base search&amp;gt; that you have referenced up there?&lt;/P&gt;</description>
      <pubDate>Fri, 24 Mar 2023 19:55:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/635972#M15815</guid>
      <dc:creator>manderson7</dc:creator>
      <dc:date>2023-03-24T19:55:17Z</dc:date>
    </item>
    <item>
      <title>Re: Is there an efficient way to do field summary on a large set of source types?</title>
      <link>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/635974#M15816</link>
      <description>&lt;P&gt;Fair question.&lt;BR /&gt;&lt;BR /&gt;When I want to focus on a particular part of my SPL, I often start with&lt;BR /&gt;"&amp;lt;base search&amp;gt; | "&lt;BR /&gt;so that I keep the focus on what I am interested in.&lt;BR /&gt;&lt;BR /&gt;In this case, "&amp;lt;base search&amp;gt;" is how I restrict the results to the sourcetypes of interest to my SOC. I actually pull that from a list of critical data sources which is maintained by the SOC.&lt;BR /&gt;&lt;BR /&gt;For my purposes, I am only trying to understand fields by sourcetype, so I am using index in the dedup command:&lt;/P&gt;&lt;PRE&gt;| bin span=3h _time
| dedup 10 index sourcetype _time host&lt;/PRE&gt;&lt;P&gt;This allows me to consider 10 records from every index in every 3-hour period. I do that because I worry that different workloads (which get ingested into different indexes) may decide to change the format of what they send. This way I only consider common fields in a sourcetype across all indexes.&lt;BR /&gt;If I didn't account for different indexes, I might evaluate different indexes each time and therefore conclude that there was a change when the common fields actually did not change.&lt;BR /&gt;&lt;BR /&gt;If I wanted the field listing per index, I'd just modify the "stats" and "eventstats" to include index in the "by" clause.&lt;/P&gt;</description>
      <pubDate>Fri, 24 Mar 2023 20:24:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Enterprise/Is-there-an-efficient-way-to-do-field-summary-on-a-large-set-of/m-p/635974#M15816</guid>
      <dc:creator>MonkeyK</dc:creator>
      <dc:date>2023-03-24T20:24:55Z</dc:date>
    </item>
  </channel>
</rss>

