It seems that simply adding a props.conf on the search head cluster (shcluster) tier, in conjunction with my limits.conf changes, is allowing all fields to be automatically extracted at search time as I expect. I will need to test removing individual limits.conf stanza values to see whether any of those had an impact as well.

[<sourcetype>]
KV_MODE = json

But, yes, JSON is very "verbose" logging since it calls out field names and such. This team, though, is using HEC, which in general prefers JSON (if you use the /event endpoint, and we don't want them using /raw and needing extractions there). HEC/JSON has allowed us to give the users some flexibility in choosing how and what they log. These events are also pre-processed, which cuts the event count down greatly; they come from another system which ingests metrics from many, many, many sources, and we are using these datasets for machine learning. So we don't have much of an option for bringing down event count or size. This is also the only dataset like this we work with; every other one is ~20 fields from similar use cases. But more than anything we needed to show that we could technically do this, even if it's not ideal.
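For anyone following along, these are the kinds of limits.conf settings on the search head tier that typically gate search-time JSON extraction. The values below are illustrative rather than the exact ones I changed, with the stock defaults noted in the comments:

[kv]
limit = 200                 # max fields auto-extracted per event (default 100)
maxchars = 20480            # how far into _raw the kv extractor reads (default 10240)
maxcols = 1024              # max columns the kv extractor will create (default 512)

[spath]
extraction_cutoff = 10000   # chars considered in extract-all spath mode (default 5000)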
So you've installed the newest version of the Azure Add-on (2.0.2) on your ES Search Head (if you have one), your ad-hoc search heads, and on a Heavy Forwarder, correct?
On your Heavy Forwarder, you aren't running version 8 of Splunk, correct? (Ideally 7.3.3.) In the Azure Add-on on the Heavy Forwarder, you created two new inputs, one for Azure AD Sign-ins and one for Azure AD Directory Audit, right? You can also make a third one for Monitor Metrics if you want to bring those in as well. For each of those inputs, the client ID and secret must be entered on the configuration tabs. I assume you've entered those and saved?
What is your interval set to? I would recommend something like 300 or 180 seconds (usually 300).
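On disk, the saved inputs end up in the add-on's local inputs.conf, roughly like the sketch below. This assumes the stanza names mirror the input module names; the input names themselves are made up:

[MS_AAD_signins://azure_ad_signins]
interval = 300

[MS_AAD_audit://azure_ad_audit]
interval = 300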
I would also edit both input_module_MS_AAD_audit.py (audit.py) and input_module_MS_AAD_signins.py (signins.py) to correct the query times (change the value to 5 minutes), as in the snippet below.
# Window start comes from the saved checkpoint
query_date = get_start_date(helper, check_point_key)
# Window end: 5 minutes before now, formatted for the Graph API $filter
query_date_end = (datetime.datetime.utcnow() - datetime.timedelta(minutes=5)).strftime('%Y-%m-%dT%H:%M:%S.%fZ')
There shouldn't be a need to restart Splunk after the change, but you can if you'd like.
Looking at the log you posted, your search window was very wide: it spans March 13 to March 20!
"GET /beta/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2020-03-13T19%3a29%3a45.102703Z+and+activityDateTime+le+2020-03-20T19%3a22%3a45.501341Z&$skiptoken=f207127ca72cc8e1dca1f7873280c23e_326040 HTTP/1.1"
You would expect to see something more like this:
"GET /beta/auditLogs/directoryAudits?$orderby=activityDateTime&$filter=activityDateTime+gt+2020-04-01T02%3a05%3a16.3181023Z+and+activityDateTime+le+2020-04-01T02%3a15%3a16.703416Z&$skiptoken=bc772c3a4143ceed5b8eb9acb6288b56_1045 HTTP/1.1" 200
I have recently deployed the Splunk OVA for VMware, which acts as the Data Collection Node to use with the Splunk App and Add-ons for VMware. I followed the install and configuration instructions from the documentation, which were fairly straightforward. Now, however, when I go to the Collection Configuration page within the VMware app, I can make a successful connection, but I am left with "Username/Password are good but apps are not there" in Add-on Validation.
Since this was deployed from an OVA, it should have all the apps pre-installed. I have also confirmed that all the add-ons listed as required in the link below are present on the Data Collection Node in /opt/splunk/etc/apps. (The link might not show up because of low karma, but it is the Installation Overview for the VMware add-ons.)
Has anyone seen something like this before or have any suggestions on what to try next?
Edit: I noticed that my OVA, indexers, and search heads were running different versions of the apps and add-ons, so I upgraded everything to the latest 3.4.1 package, but I am unfortunately still seeing the same error. I had hoped the version mismatch would explain it not finding the apps, but no such luck today.
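For anyone comparing app versions across tiers, here is a quick Python sketch that dumps each installed app's declared version, assuming the usual [launcher] version entry in app.conf:

import configparser
from pathlib import Path

APPS = Path("/opt/splunk/etc/apps")

# Print each installed app and its declared version so the OVA,
# indexers, and search heads can be compared side by side.
for app_conf in sorted(APPS.glob("*/default/app.conf")):
    cp = configparser.ConfigParser(strict=False, interpolation=None)
    cp.read(app_conf)
    print(f"{app_conf.parent.parent.name}: {cp.get('launcher', 'version', fallback='unknown')}")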