
Building AD Lookups in MS Windows AD Objects

zward
Path Finder

Hello,

I have successfully installed the MS Windows AD Objects app. When I go to build the KV stores/lookups (users, groups, etc.), the search sits and never completes. After looking into the log files and the Job Manager, I found that the jobs do run, but they quickly balloon to massive sizes (1 to 15+ GB depending on the search), to the point that they never finish (often stuck at a percentage or on finalizing), and I am getting out-of-memory errors for searches in the Job Manager. We have 24 GB of RAM on this server, and we keep seeing the warning "search auto-finalized after disk usage limit (12500 MB) reached".

When I run Verify All Baseline Data, it comes back with no problems. However, when I select Build AD Lookup Lists, it takes hours to locate the admon baseline data, even though a normal search finds the event within 2 seconds. The AD object counts never populate at the top of the dashboard, so I cannot move on to the next step. After this failed many times, I tried Build or Rebuild All Lookups, which never resolves even after sitting and running all day. I then tried the individual build lookups, which worked for AD groups and objects but crashed mongod when I tried to build the user database (see the log snippet below). I increased the system paging file from 10 GB to 100 GB and the tables still do not build. I have also reinstalled the application from scratch, and that has not done anything to alleviate the issue.

I have also tried building the lookups individually, one by one, but I continue to see searches stuck in finalizing that never complete: the search sits for hours, frozen at a particular run time that never updates. I have increased our disk usage limit in authorize.conf from 10000 MB to 12500 MB, but I am not sure that is going to do the trick.

I take it our server is out of memory, but why are these searches so massive? We have around 220k AD users. There has to be a way to break this down into smaller searches so I can complete the build lookup step and continue the process. Alternatively, how much RAM do we need to run this app? The app looks really splendid from what I have seen, but being unable to build the key data lookup files prevents us from using it at all.
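For reference, the per-job disk usage can be checked from the search head with something like the following (a sketch using the search jobs REST endpoint; converting diskUsage from bytes to MB is my own assumption):

| rest /services/search/jobs splunk_server=local
| eval diskUsageMB=round(diskUsage/1024/1024,1)
| table sid, label, dispatchState, runDuration, diskUsageMB
| sort - diskUsageMB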

What can I do to get these lookups to build? Please help!


Mongod error:

 2018-04-24T21:35:41.448Z I NETWORK  [conn9973] end connection 127.0.0.1:54457 (10 connections now open)
 2018-04-24T21:36:19.780Z I COMMAND  [conn9810] command s_splunkiTR2RCAYp7Go4kZlq1TnMAm9_tSessiV6ysHGNENqOVEZL92qEHLjZQ.$cmd command: insert { insert: "sched@XbEOwXGEIGOSYrnChJNIFYp", writeConcern: { w: "majority", j: true, wtimeout: 1800000 }, ordered: true, documents: 1000 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:80 locks:{ Global: { acquireCount: { r: 1025, w: 1025 } }, MMAPV1Journal: { acquireCount: { w: 1026 }, acquireWaitCount: { w: 7 }, timeAcquiringMicros: { w: 913396 } }, Database: { acquireCount: { w: 1025 } }, Collection: { acquireCount: { W: 25 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 26 } }, oplog: { acquireCount: { w: 1000 } } } 1452ms
 2018-04-24T21:36:42.151Z I COMMAND  [conn9810] command s_splunkiTR2RCAYp7Go4kZlq1TnMAm9_tSessiV6ysHGNENqOVEZL92qEHLjZQ.$cmd command: insert { insert: "sched@XbEOwXGEIGOSYrnChJNIFYp", writeConcern: { w: "majority", j: true, wtimeout: 1800000 }, ordered: true, documents: 1000 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:80 locks:{ Global: { acquireCount: { r: 1025, w: 1025 } }, MMAPV1Journal: { acquireCount: { w: 1025 }, acquireWaitCount: { w: 6 }, timeAcquiringMicros: { w: 60953 } }, Database: { acquireCount: { w: 1025 } }, Collection: { acquireCount: { W: 25 } }, oplog: { acquireCount: { w: 1000 } } } 1048ms
 2018-04-24T21:36:53.119Z I COMMAND  [conn9810] command s_splunkiTR2RCAYp7Go4kZlq1TnMAm9_tSessiV6ysHGNENqOVEZL92qEHLjZQ.$cmd command: insert { insert: "sched@XbEOwXGEIGOSYrnChJNIFYp", writeConcern: { w: "majority", j: true, wtimeout: 1800000 }, ordered: true, documents: 1000 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:80 locks:{ Global: { acquireCount: { r: 1029, w: 1029 } }, MMAPV1Journal: { acquireCount: { w: 1030 }, acquireWaitCount: { w: 10 }, timeAcquiringMicros: { w: 59927 } }, Database: { acquireCount: { w: 1029 } }, Collection: { acquireCount: { W: 29 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 27 } }, oplog: { acquireCount: { w: 1000 } } } 1601ms
 2018-04-24T21:36:59.934Z F STORAGE  [conn9810] MongoDB has exhausted the system memory capacity.
 2018-04-24T21:36:59.934Z F STORAGE  [conn9810] Current Memory Status: { page_faults: -1503484610, usagePageFileMB: 674, totalPageFileMB: 84695, availPageFileMB: 68, ramMB: 24575 }
 2018-04-24T21:37:00.034Z F STORAGE  [conn9810] VirtualProtect for E:/Splunk/datastore/kvstore/mongo/local.1 chunk 4104 failed with errno:1455 The paging file is too small for this operation to complete. (chunk size is 67108864, address is 4020000000) in mongo::makeChunkWritable, terminating
 2018-04-24T21:37:00.036Z I -        [conn9810] Fatal Assertion 16362
 2018-04-24T21:37:00.493Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0x146b13
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0xfe14f
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0xf0847
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      ???
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0x450e38
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0x10a6f3
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0x1670f1
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0x47fa0b
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] mongod.exe      index_collator_extension+0x47fbb2
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] KERNEL32.DLL    BaseThreadInitThunk+0x22
 2018-04-24T21:37:00.497Z I CONTROL  [conn9810] 
 2018-04-24T21:37:00.497Z I -        [conn9810] 
1 Solution

shogan_splunk
Splunk Employee

First, I am sorry that you are running into this issue; that is definitely not the goal of the application. You shouldn't see any mongodb errors, since the lookups being created are currently CSVs, not KV store collections. I am looking at moving them to the KV store in the near future, mainly for working with large environments like yours. The reason for preferring the KV store over CSV lookups is that I can update individual object details without rebuilding the whole lookup, and it replicates more efficiently in distributed Splunk environments. Please let me know if the steps below do not help.

A couple of questions that will help me troubleshoot if the steps below don't work:

  1. How many indexers do you have in your environment, and are they clustered?
  2. How long does the Verify Baseline search take to complete?

Some initial steps to try:
- Temporarily disable the Splunk scheduled searches whose names start with "ms_ad_obj_sched_sync_", e.g. ms_ad_obj_sched_sync_user
- Run the search below and let me know how long it takes to complete. (Note: I added a head 50000 to pull back only the first 50,000 events, and I am only looking for the "Sync" events.)

eventtype=ms_ad_obj_msad_data (admonEventType=Sync) (objectClass="top|person|organizationalPerson|user") NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format])
| head 50000
  • If this completes within a reasonable time, then try the following steps:
  • Clone the macro "ms_ad_obj_admon_user_base_list" and rename the clone to "ms_ad_obj_admon_user_base_temp"
  • Update the original "ms_ad_obj_admon_user_base_list" macro by adding the following after the (objectClass="top|person|organizationalPerson|user") text (a sketch of the resulting definition follows below):

NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format]) | head 50000

Also, remove the search text OR admonEventType=Update OR admonEventType=Deleted so that only the Sync data is loaded initially.
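Roughly, the temporarily edited "ms_ad_obj_admon_user_base_list" definition should then look something like the following (a sketch only; the exact text of the shipped macro may differ slightly from the eventtype search above):

eventtype=ms_ad_obj_msad_data (admonEventType=Sync) (objectClass="top|person|organizationalPerson|user")
NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format])
| head 50000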

  1. Save the changes, then run the following search from the Search view in the MS Windows AD Objects application, selecting an appropriate time window for your Active Directory "Sync" data (you can try All time first):

    |`ms_ad_obj_sched_sync_objects_base("User","user")`

  2. You will need to run this multiple times, probably about 5 times for your environment (each run pulls in at most 50,000 users because of the head command).

  3. You can check the count of objects in AD_User_LDAP_list by running | inputlookup AD_User_LDAP_list | stats count (a quick verification sketch follows after this list).

  4. After you have the table built, add the text OR admonEventType=Update OR admonEventType=Deleted back into the "ms_ad_obj_admon_user_base_list" macro, then rerun the step 1 search to capture the Update and Deleted events.

  5. After you have the table built, remove the NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format]) | head 50000 text from the "ms_ad_obj_admon_user_base_list" macro.

  6. Lastly, re-enable the scheduled searches you previously disabled.
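Between runs, one quick way to confirm that each pass is appending new users rather than rewriting the same ones is to compare row and distinct-GUID counts (a sketch; it assumes objectGUID is the unique key, as in the NOT filter above):

| inputlookup AD_User_LDAP_list
| stats count AS total_rows, dc(objectGUID) AS unique_users

If unique_users stops growing while the step 1 search still returns events, the filter or macro edit likely needs another look.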



zward
Path Finder

Hi Shogan,

Thank you for getting back to me and offering your help. I do appreciate it.

EDIT: As of 7:30pm CST your solution is working as you described; it just takes a very long time for each search to complete. After running | inputlookup AD_User_LDAP_list | stats count, I am now sitting at 65k users, so it is indeed calculating and adding new users as designed. While slow, it is definitely a way to ensure all of my users are brought into Splunk. THANK YOU!!!

To answer your questions:
1. We have one indexer, no clustering.
2. Verify Baseline Data comes back in 44-60 seconds and accurately reflects our environment; see the screenshot below.
[screenshot: Verify Baseline Data results]

I temporarily disabled the scheduled searches as you mentioned, and then ran:

eventtype=ms_ad_obj_msad_data (admonEventType=Sync) (objectClass="top|person|organizationalPerson|user") NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format])
 | head 50000

which completed in around 44 seconds.

After this I ran the search in the MS Windows AD Objects app:
|`ms_ad_obj_sched_sync_objects_base("User","user")`

This proceeds, starting with "Parsing", followed by "Finalizing Job". The search window then sits on finalizing and nothing happens. When I check the job menu, I see the following text:
[subsearch]: No matching fields exist
[subsearch]: No results. Created empty file 'AD_Objects_Queue_Main'

I closed out and deleted this job, then started a new one over "Last 30 Days". This also sits on finalizing for quite a while and nothing happens; it simply stays on finalizing with no indication of any output.

When I run | inputlookup AD_User_LDAP_list | stats count, I get the same count every time (15410). This has been the value since the initial run of the add-on and, to my knowledge, has not changed. Maybe the search is simply supposed to take this long? Do you have any thoughts?
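In case it is useful, here is a comparison I can run between the distinct users in the indexed admon data and in the lookup (a rough sketch; it assumes objectGUID is the unique key the lookup is built on):

eventtype=ms_ad_obj_msad_data (admonEventType=Sync) (objectClass="top|person|organizationalPerson|user")
| stats dc(objectGUID) AS guids_in_index
| appendcols [| inputlookup AD_User_LDAP_list | stats dc(objectGUID) AS guids_in_lookup]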

Here is the job log:

This search is still running and is approximately 0% complete.

(SID: 1525904118.152) search.log
Execution costs
Duration (seconds)          Component       Invocations     Input count     Output count    
    0.00        dispatch.check_disk_usage   1   -   -
    0.00        dispatch.createdSearchResultInfrastructure  1   -   -
    451.23      dispatch.evaluate   1   -   -
    450.61      dispatch.evaluate.append    2   -   -
    0.55        dispatch.evaluate.join  2   -   -
    0.05        dispatch.evaluate.inputlookup   1   -   -
    0.00        dispatch.evaluate.eval  14  -   -
    0.00        dispatch.evaluate.search    1   -   -
    0.00        dispatch.evaluate.fields    3   -   -
    0.00        dispatch.evaluate.fillnull  2   -   -
    0.00        dispatch.evaluate.makemv    1   -   -
    0.00        dispatch.evaluate.noop  1   -   -
    0.00        dispatch.evaluate.outputlookup  2   -   -
    0.00        dispatch.evaluate.rename    1   -   -
    0.00        dispatch.evaluate.sort  1   -   -
    0.00        dispatch.evaluate.stats     2   -   -
    0.07        dispatch.writeStatus    5   -   -
    0.21        startup.configuration   1   -   -
    0.50        startup.handoff     1   -   -
Search job properties
canSummarize        None
createTime      2018-05-09T17:15:19.000-05:00
cursorTime      2038-01-18T21:14:07.000-06:00
custom      
{   [-] 
   dispatch.earliest_time: -30d@d   
   dispatch.latest_time: now    
   dispatch.sample_ratio: 1 
   display.general.type: statistics 
   display.page.search.mode: verbose    
   display.page.search.tab: statistics  
   search: |`ms_ad_obj_sched_sync_objects_base("User","user")`  
}   
defaultSaveTTL      604800
defaultTTL      600
delegate        None
diskUsage       3842048
dispatchState       FINALIZING
doneProgress        None
dropCount       None
eai:acl     
{   [-] 
   app: ms_windows_ad_objects   
   can_write: true  
   modifiable: true 
   owner: zward 
   perms: { [+] 
   }    
   sharing: global  
   ttl: 600 
}   
earliestTime        2018-04-09T00:00:00.000-05:00
eventAvailableCount     None
eventCount      None
eventFieldCount     None
eventIsStreaming        true
eventIsTruncated        true
eventSearch     None
eventSorting        desc
isBatchModeSearch       None
isDone      None
isEventsPreviewEnabled      None
isFailed        None
isFinalized     None
isPaused        None
isPreviewEnabled        true
isRealTimeSearch        None
isRemoteTimeline        None
isSaved     None
isSavedSearch       None
isTimeCursored      None
isZombie        None
keywords        sync_dn_chg::1
label       None
latestTime      2018-05-09T17:15:18.000-05:00
modifiedTime        2018-05-09T18:31:23.630-05:00
normalizedSearch        None
numPreviews     None
optimizedSearch     None
pid     332
priority        5
provenance      UI:Search
remoteSearch        None
reportSearch        inputlookup AD_User_LDAP_list append=true | rename dn_hist AS dn_hist_hold | eval dn_hist_hold=case(dn_hist_hold="",distinguishedName,NOT dn_hist_hold="",dn_hist_hold."####".distinguishedName) | makemv delim="####" dn_hist_hold | append [search eventtype=ms_ad_obj_msad_data (admonEventType=Sync) (objectClass="top|person|organizationalPerson|user") NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format]) | head 50000 | fields DomainDNSName,OU,accountExpires,adminCount,badPasswordTime,badPwdCount,c,cn,orig_cn,codePage,countryCode,dSCorePropagationData,dcName,deletedDate,department,description,displayName,distinguishedName,dn,dn_path,domain,givenName,guid_lookup,initials,instanceType,isCriticalSystemObject,isDeleted,isRecycled,l,lastKnownParent,lastLogon,lastLogonTimestamp,last_evt_flg,lockoutTime,logonCount,logonHours,managedBy,memberOf,msDS-SupportedEncryptionTypes,name,objectCategory,objectClass,objectGUID,objectSid,physicalDeliveryOfficeName,postalCode,primaryGroupID,pwdLastSet,sAMAccountName,sAMAccountType,servicePrincipalName,showInAdvancedViewOnly,sid_lookup,sn,st,streetAddress,title,uSNChanged,uSNCreated,userAccountControl,userPrincipalName,userWorkstations,whenChanged,whenCreated | fillnull value="FALSE" isRecycled,isDeleted,isCriticalSystemObject,showInAdvancedViewOnly | fillnull value="" | stats earliest(distinguishedName) AS orig_evt_dn,values(distinguishedName) AS dn_hist_hold,latest(*) AS * by objectGUID | eval deletedDate=if(match(lower(last_evt_flg), "deleted") OR match(lower(isDeleted), "true"), strptime(whenChanged, "%I:%M.%S %p, %a %m/%d/%Y"), 0) | lookup AD_UAC_Details userAccountControl OUTPUT uac_bin_map, uac_details | join type=left DomainDNSName [|inputlookup AD_Domain_Selector | stats count by DomainDNSName,DomainNetBIOSName | rename DomainNetBIOSName as domain_lkp | table DomainDNSName,domain_lkp] | eval domain=if(isnull(domain_lkp),domain,mvdedup(domain_lkp)) | eval src_nt_domain=domain, q_link_id=domain."##".objectGUID | join primaryGroupID [|inputlookup AD_Groups_LDAP_list | fields distinguishedName, primaryGroupToken | rename primaryGroupToken AS primaryGroupID, distinguishedName AS prm_grp | eval prm_grp_id=PrimaryGroupID." 
- ".prm_grp] | eval memberOf=if(memberOf="",prm_grp,if(match(memberOf,prm_grp),memberOf,prm_grp."####".memberOf)) | table DomainDNSName,OU,accountExpires,adminCount,badPasswordTime,badPwdCount,c,cn,orig_cn,codePage,countryCode,dSCorePropagationData,dcName,deletedDate,department,description,displayName,distinguishedName,dn,dn_hist_hold,dn_path,domain,givenName,guid_lookup,initials,instanceType,isCriticalSystemObject,isDeleted,isRecycled,l,lastKnownParent,lastLogon,lastLogonTimestamp,last_evt_flg,lockoutTime,logonCount,logonHours,managedBy,memberOf,msDS-SupportedEncryptionTypes,name,objectCategory,objectClass,objectGUID,objectSid,orig_evt_dn,physicalDeliveryOfficeName,postalCode,primaryGroupID,pwdLastSet,q_link_id,sAMAccountName,sAMAccountType,servicePrincipalName,showInAdvancedViewOnly,sid_lookup,sn,src_nt_domain,st,streetAddress,title,uSNChanged,uSNCreated,uac_bin_map,uac_details,userAccountControl,userPrincipalName,userWorkstations,whenChanged,whenCreated] | stats first(distinguishedName) AS current_dn,first(dn_path) AS current_dn_path,values(dn_hist_hold) AS dn_hist,last(*) AS * by objectGUID | eval current_dn=if(isnull(current_dn),orig_evt_dn,current_dn) | join type=left current_dn_path [|inputlookup AD_Objects_Queue_Main WHERE sync_complete=0 append=true | eval sync_ou_user=if(sync_ou==1,2,0) | eval sync_complete=if(sync_ou==1 AND sync_ou_user==2 AND sync_ou_group==2 AND sync_ou_computer==2,2,sync_complete) | table q_link_id,dn,dn_cnt,dn_hist,dn_path,domain,member,memberOf,objectGUID,objectClass,uSNChanged,whenChanged,sync_complete,sync_member,sync_ou,sync_ou_user,sync_ou_group,sync_ou_computer,sync_memberOf,sync_memberOf_user,sync_memberOf_computer,sync_memberOf_group | outputlookup AD_Objects_Queue_Main | search sync_ou=1 sync_ou_user=2 | table dn, dn_hist | makemv delim="####" dn_hist | mvexpand dn_hist | eval current_dn_path=dn_hist | rename dn AS new_ou_dn, dn_hist AS old_ou_dn | table current_dn_path, new_ou_dn, old_ou_dn] | eval sync_dn_chg=case((objectClass="top|person|organizationalPerson|user" OR objectClass="top|person|organizationalPerson|user|computer" OR objectClass="top|group") AND memberOf="",0,objectClass="top|group" AND member="",0,isnotnull(old_ou_dn) OR current_dn!=dn,1,isnull(old_ou_dn) AND current_dn=dn,0) | eval dn=if(isnull(old_ou_dn),dn,replace(dn,old_ou_dn,new_ou_dn)) | eval distinguishedName=if(isnull(old_ou_dn),dn,replace(dn,old_ou_dn,new_ou_dn)) | eval dn_path=if(isnull(old_ou_dn),dn_path,replace(dn_path,old_ou_dn,new_ou_dn)) | eval dn_hist=replace(replace(mvjoin(dn_hist,"####"),distinguishedName,""),"^####|####$","") | join type=left dn [ |inputlookup AD_Objects_Queue_Main WHERE sync_complete=0 | eval sync_memberOf_user=if(sync_memberOf==1,2,0) | eval sync_complete=if(sync_memberOf_user==2 AND sync_memberOf_group==2 AND sync_memberOf_computer==2,2,sync_complete) | table q_link_id,dn,dn_cnt,dn_hist,dn_path,domain,member,memberOf,objectGUID,objectClass,uSNChanged,whenChanged,sync_complete,sync_member,sync_ou,sync_ou_user,sync_ou_group,sync_ou_computer,sync_memberOf,sync_memberOf_user,sync_memberOf_computer,sync_memberOf_group | outputlookup AD_Objects_Queue_Main | search sync_memberOf=1 sync_memberOf_user=2 | table dn, dn_hist, member | makemv delim="####" member | mvexpand member | rename dn AS new_group_dn | rename member AS dn, dn_hist AS old_group_dn | stats values(old_group_dn) as old_group_dn, values(new_group_dn) as new_group_dn by dn | eval old_group_dn=mvjoin(old_group_dn,"|") | eval new_group_dn=mvjoin(new_group_dn,"####") | table dn, new_group_dn, 
old_group_dn] | eval memberOf=if(isnull(old_group_dn),memberOf, replace(memberOf,old_group_dn,"")) | eval memberOf=if(isnull(new_group_dn),memberOf, if(memberOf=="",new_group_dn,memberOf."####".new_group_dn)) | fields - old_ou_dn, - new_ou_dn, - current_dn, - dn_hist_hold, - orig_evt_dn, - current_dn_path, - new_member_dn, - old_group_dn, - new_group_dn, - old_member_dn, - new_memberOf_dn, - old_memberOf_dn | outputlookup AD_User_LDAP_list | fields q_link_id,dn,dn_cnt,dn_hist,dn_path,domain,member,memberOf,objectGUID,objectClass,uSNChanged,whenChanged,sync_dn_chg | search sync_dn_chg=1 | fillnull value="" member,memberOf | eval sync_member=case(objectClass="top|person|organizationalPerson|user",1,objectClass="top|person|organizationalPerson|user|computer",1,objectClass="top|group",1,objectClass="top|organizationalUnit" OR objectClass="top|container",0) | eval sync_memberOf=case(objectClass="top|person|organizationalPerson|user",0,objectClass="top|person|organizationalPerson|user|computer",0,objectClass="top|group",1,objectClass="top|organizationalUnit" OR objectClass="top|container",0) | eval sync_memberOf_user=if(sync_memberOf==1,1,0),sync_memberOf_group=if(sync_memberOf==1,1,0),sync_memberOf_computer=if(sync_memberOf==1,1,0) | eval sync_ou=case(objectClass="top|person|organizationalPerson|user",0,objectClass="top|person|organizationalPerson|user|computer",0,objectClass="top|group",0,objectClass="top|organizationalUnit" OR objectClass="top|container",1) | eval sync_complete=0,sync_ou_user=if(sync_ou==1,1,0),sync_ou_group=if(sync_ou==1,1,0),sync_ou_computer=if(sync_ou==1,1,0) | fillnull value=0 sync_complete,sync_member,sync_memberOf,sync_memberOf_user,sync_memberOf_computer,sync_memberOf_group,sync_ou,sync_ou_user,sync_ou_group,sync_ou_computer | append [|inputlookup AD_Objects_Queue_Main WHERE sync_complete=0 append=true | table q_link_id,dn,dn_cnt,dn_hist,dn_path,domain,member,memberOf,objectGUID,objectClass,uSNChanged,whenChanged,sync_complete,sync_member,sync_ou,sync_ou_user,sync_ou_group,sync_ou_computer,sync_memberOf,sync_memberOf_user,sync_memberOf_computer,sync_memberOf_group] | fields q_link_id,filter,dn,dn_cnt,dn_hist,dn_path,domain,member,memberOf,objectGUID,objectClass,uSNChanged,whenChanged,sync_complete,sync_member,sync_ou,sync_ou_user,sync_ou_group,sync_ou_computer,sync_memberOf,sync_memberOf_user,sync_memberOf_computer,sync_memberOf_group | sort 0 -uSNChanged | stats last(*) AS * by q_link_id | outputlookup AD_Objects_Queue_Main create_empty=true
request     
{   [-] 
   adhoc_search_level: verbose  
   auto_cancel: 30  
   check_risky_command: false   
   custom.dispatch.earliest_time: -30d@d    
   custom.dispatch.latest_time: now 
   custom.dispatch.sample_ratio: 1  
   custom.display.general.type: statistics  
   custom.display.page.search.mode: verbose 
   custom.display.page.search.tab: statistics   
   custom.search: |`ms_ad_obj_sched_sync_objects_base("User","user")`   
   earliest_time: -30d@d    
   indexedRealtime:
   latest_time: now 
   preview: 1   
   provenance: UI:Search    
   rf: *    
   sample_ratio: 1  
   search: |`ms_ad_obj_sched_sync_objects_base("User","user")`  
   status_buckets: 300  
   ui_dispatch_app: ms_windows_ad_objects   
}   
resultCount     None
resultIsStreaming       None
resultPreviewCount      None
runDuration     451.281
runtime     
{   [-] 
   auto_cancel: 30  
   auto_pause: 0    
}   
sampleRatio     1
sampleSeed      0
scanCount       None
search      |`ms_ad_obj_sched_sync_objects_base("User","user")`
searchCanBeEventType        None
searchEarliestTime      1523250000
searchLatestTime        1525904118
searchProviders     
[   
]   
searchTotalBucketsCount     None
searchTotalEliminatedBucketsCount       None
sid     1525904118.152
statusBuckets       None
ttl     600
Additional info     search.log 

schandrasekar
Loves-to-Learn

Hi Shogan,

I am getting the same errors whenever I run the report ms_ad_obj_sched_sync_user:

  • [subsearch]: No matching fields exist.
  • [subsearch]: No results. Created empty file 'AD_Objects_Queue_Main'.

Also, the field sync_dn_chg is always 0. Please help.

The issue occurs for all of the searches that use the macro |`ms_ad_obj_sched_sync_objects_base("","")`.
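Would something like the following be the right way to check what is currently sitting in the queue lookup (assuming AD_Objects_Queue_Main carries the objectClass and sync_complete fields that the macro's report search writes out)?

| inputlookup AD_Objects_Queue_Main
| stats count by objectClass, sync_complete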



zward
Path Finder

Hi Shogan,

Posting back again, as I am running into the same issue. When I run "| inputlookup AD_User_LDAP_list | stats count" per step 3 of your directions, I get back the same count of 65410 over and over. The first run of |`ms_ad_obj_sched_sync_objects_base("User","user")` produced another 50k users on top of the initial 15410, giving us 65410 users total, but after that no new users are being added. Repeating the searches from steps 1-3 (you mention running them about 5 times to gather all users) does not pull any additional results after that first run. Over the past two weeks I have tried 30 days, 60 days, 90 days, and All time, and after each run the user count never goes up when running | inputlookup AD_User_LDAP_list | stats count. Is there a step I am missing, or should I try raising the head command to 100000 or 150000?

It's almost as if the lookup is not being checked for existing users and the search keeps finding the same 50k users every time. Is there a way I can look into this in more detail, or do you have any ideas on how to correct this issue?
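One thing I would like to check is whether the NOT filter is actually excluding users that are already in the lookup, or whether its subsearch is being truncated now that the lookup is larger. A rough sketch (it assumes AD_User_LDAP_list is also usable with the lookup command):

eventtype=ms_ad_obj_msad_data (admonEventType=Sync) (objectClass="top|person|organizationalPerson|user") NOT ([| inputlookup AD_User_LDAP_list| fields objectGUID| table objectGUID| format])
| head 50000
| lookup AD_User_LDAP_list objectGUID OUTPUT sAMAccountName AS sam_from_lookup
| stats count AS events_returned, count(sam_from_lookup) AS already_in_lookup

If already_in_lookup comes back high, the exclusion filter is not doing its job (for example because a subsearch result limit has been hit), which could explain why the same users keep being returned.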
