All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everyone, I am relatively new to Splunk and trying to create a line-graph visualization of each HTTP status code vs. all traffic traversing through the device. I am able to extract all status codes for a specific path and count each status code over a specified time range, as below:

index=infra_device_sec sourcetype=device:cloudmonitor:json "message.reqPath"="/test/alpha/beta/delta" | stats count by message.status

message.status  count
0               30
200             3129
302             56321
403             10439
408             25

I am trying to create a graph of each status code vs. all traffic as below:

index=infra_device_sec sourcetype=device:cloudmonitor:json "message.reqPath"="/test/alpha/beta/delta" | stats count by message.status | eval x=if('message.status'=503,"ServerDenied","All-Traffic") | timechart span=20m count by x useother=f

But the output shows only all traffic on the line graph. Could someone please guide me on two things:
1. How can I create a line graph of each status code vs. all traffic?
2. How can I create a line graph which includes all of the above status codes vs. all traffic?

Please let me know if any clarification is needed. Thank you
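A minimal SPL sketch of one way to put every status code and the total on the same timechart (index and field names taken from the question; addtotals simply sums the per-status columns into an extra "All-Traffic" series):

```
index=infra_device_sec sourcetype=device:cloudmonitor:json "message.reqPath"="/test/alpha/beta/delta"
| eval status='message.status'
| timechart span=20m count by status useother=f
| addtotals fieldname="All-Traffic"
```

One likely reason the posted search shows only a single flat series: it pipes stats into timechart, and stats discards _time, leaving timechart nothing to bucket by.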
Please, where can I get updated sample data for practicing searches using SPL? Thanks in advance.
This is my Splunk query, and it doesn't seem to be working because I can't get any results from it. I need help getting the search to work. Any help would be appreciated, thanks.

index=aws-cloudtrail "userIdentity.type"=Root AND NOT eventType="AwsServiceEvent" |eval nullParentProcess=if(isnull("userIdentity.invokedBy"),true,false) |search nullParentProcess=true |convert ctime(_time) as _time |stats values(dest) values(eventType) values(eventName) values(userName) latest(_time) by src |rename values as * |head 51
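One likely culprit is the quoting in the eval: double quotes make "userIdentity.invokedBy" a string literal, and isnull() of a literal is never true, so the nullParentProcess=true filter drops every event. Field names inside eval need single quotes. A sketch of the repaired search (everything else is taken from the question; the rename idiom is swapped for explicit AS clauses):

```
index=aws-cloudtrail "userIdentity.type"=Root NOT eventType="AwsServiceEvent"
| where isnull('userIdentity.invokedBy')
| stats values(dest) AS dest values(eventType) AS eventType values(eventName) AS eventName values(userName) AS userName latest(_time) AS _time by src
| convert ctime(_time) AS _time
| head 51
```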
Hi All, I have asked about this problem in older threads but could not get a working answer, so I created a new thread to get wider visibility and responses.

Resources in hand: I have a lookup table with many fields; the two fields to consider are index and host. I have a list of indexes for which results need to be fetched.

Requirement: for each index value, I need to fetch the list of hosts whose records appear in the index but not in the lookup table. To fetch the events from the index, I need to get the list of index values from the lookup table. I tried the below; however, I am getting hosts which appear in both the index and the lookup table:

|tstats fillnull_value="unknown" count AS event_count WHERE [ |inputlookup table1 |stats count BY index |foreach index [eval <<FIELD>>=replace(replace(lower(trim(<<FIELD>>)),"\s+",""),"\t+","")] |eval search_str="(index=".index.")" |stats values(search_str) AS search_str |eval to_return=mvjoin(search_str," OR ") |return $to_return ] BY index, host |search NOT ( [ |inputlookup table1 |stats count BY index, host ] )

Thus, I need your help to resolve the issue. Thank you
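One possible pitfall in the final NOT subsearch: stats count BY index, host also returns the count field, so the generated NOT clause requires matching count values too, and nothing gets excluded. Dropping the count field may be enough (table name from the question; the tstats portion is unchanged):

```
... BY index, host
| search NOT ( [ |inputlookup table1 |stats count BY index, host |fields index, host ] )
```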
An app was updated via the GUI on a SHC member. What exactly does the Splunkbase install do/check? What needs to be done to undo any changes made? Is it best to just uninstall the app and redeploy from the SHC deployer? default/app.conf shows the old version number, which makes me think the files weren't actually updated everywhere. Wondering about the best route to fix the mistake.
Hello All, Thanks for a great resource for Splunk and searches. I am using the linux_secure sourcetype. I have a search that returns a value if a field (src) is longer than 1 character; if src is longer than 1, a user has logged in to the host from a "remote" host, i.e. a host without a Splunk universal forwarder installed. When the user logs off the host with a forwarder, I want my base search to return 0 results, or make the table disappear (using Dashboard Studio). I detect the ssh_open or ssh_close in this search. Here is the search I am working with:

sourcetype=linux_secure user=* | eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S") | rex "(?P<Status>(?<=session)\s\w+)" | eval Action=case(Status=" opened","Online",Status=" closed","Off") | eval Action=if(len(src)>1,"Login from Remote",Action) | eval Action=if(len(src)=0,"Logged Off",Action)| sort - Date | table Date, host,src,Action

My time range is 15 minutes. In a nutshell, I want "Remote" to show when src is present, and zero results when the Action is "Off" or the src length is 0, etc. Any suggestions will help. Thank you very much, eholz1
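If the requirement is simply an empty table whenever there is no active remote login, a final where filter on the computed Action may be all that's needed (a sketch reusing the search from the question):

```
sourcetype=linux_secure user=*
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rex "(?P<Status>(?<=session)\s\w+)"
| eval Action=case(Status=" opened","Online",Status=" closed","Off")
| eval Action=if(len(src)>1,"Login from Remote",Action)
| where Action="Login from Remote"
| sort - Date
| table Date, host, src, Action
```

With the where clause, the search returns zero rows whenever no event carries a non-empty src, which makes the Dashboard Studio table empty on its own.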
Hi Team, I have an environment with 2 indexers, 1 search head, 1 heavy forwarder, 1 deployment server, and 1 cluster master. My DS is connected to the HF, and from there the data is pushed to the indexers. I would like to use the bots_v3 dataset in my environment: https://github.com/splunk/botsv3. Kindly help me understand how to ingest the data in this distributed deployment.
Hi, If anyone can help me with this it would be truly helpful. I'm currently practicing to become a Splunk architect, and I'm having an issue with file ownership on Ubuntu Linux. I changed the ownership of the Splunk directory to the dedicated splunk account I created, but it only changes that specific folder and not the contents inside it. Can anyone give me a command to make everything under the Splunk folder owned by the splunk account I created for Splunk management? Regards,
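The -R (recursive) flag of chown applies the ownership change to every file and subdirectory. In practice that would be something like `sudo chown -R splunk:splunk /opt/splunk` (assuming a /opt/splunk install path and a dedicated account and group both named splunk). The sketch below demonstrates the recursive behavior on a scratch tree using the current user, since changing ownership to another account requires root:

```shell
# Real-world form (path and account name are assumptions):
#   sudo chown -R splunk:splunk /opt/splunk
# Demonstration of -R on a scratch directory, using the current user:
mkdir -p /tmp/splunk_demo/etc/apps /tmp/splunk_demo/var/log
touch /tmp/splunk_demo/etc/apps/app.conf
chown -R "$(id -u):$(id -g)" /tmp/splunk_demo
stat -c '%u' /tmp/splunk_demo/etc/apps/app.conf
```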
I have a simple XML dashboard that I am doing some custom JavaScript with. I would like to get the earliest and latest times from the time picker. However, if the time picker is set to Today, I get "@d" for the earliest and "now" for the latest. Are there any helper functions to convert relative time to epoch? You can see this in my simplified code example below.

...
var defaultTokens = mvc.Components.get("default");
var earliest = defaultTokens.get('timePicker.earliest'); //when time picker is today this returns @d
var latest = defaultTokens.get('timePicker.latest'); //when time picker is today this returns now
...
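I'm not aware of a client-side mvc helper for this; one documented option is to ask splunkd itself via its REST time parser (the search/timeparser endpoint), which resolves any relative-time spec to an absolute time. Purely as an illustration of what such a conversion does, here is a minimal sketch handling only the two specs from the example, computed in UTC for simplicity (a real dashboard should call the REST endpoint instead, since the full relative-time syntax is much richer):

```javascript
// Resolve a tiny subset of Splunk relative-time specs to epoch seconds.
// "now" -> the current time; "@d" -> the current time snapped to midnight (UTC).
function relativeToEpoch(spec, nowMs) {
  const ms = nowMs !== undefined ? nowMs : Date.now();
  if (spec === "now") return Math.floor(ms / 1000);
  if (spec === "@d") {
    const d = new Date(ms);
    // Snap back to 00:00:00 of the same (UTC) day.
    return Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()) / 1000;
  }
  throw new Error("unsupported relative-time spec: " + spec);
}
```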
All, When running a metric-data query, for one application I'm getting this response: Invalid application id xxx is specified. Here is the query: https://mycompany.saas.appdynamics.com/controller/rest/applications/xxx/metric-data?time-range-type=BEFORE_NOW&metric-path=Application Infrastructure Performance|*|JVM|Process CPU Burnt (ms/min)&duration-in-mins=1440&rollup=true Of course the app name is not actually xxx. It is fairly long. Perhaps the API has a length limit on the app name? I don't know. Any ideas about the real problem? Thanks
Is there a way of limiting the search load on an indexer cluster using configuration on the cluster itself? E.g. setting a maximum number of concurrent searches allowed to run simultaneously. Lowering the parameters in the limits.conf "Concurrency limits" section and in savedsearches.conf has no effect on the indexers; these only seem to have an effect on the search head: https://docs.splunk.com/Documentation/Splunk/9.0.2/Admin/Limitsconf One environment I use has multiple standalone search heads that all execute searches against the same indexer cluster. The measured median number of concurrent searches on an indexer peer goes well above the configured maximum (twice the limit). That indexer cluster has a default limits.conf.
I am getting logs in Splunk, but the logs are in an improper format, so I want to make changes so that all my logs are indexed in a proper format. Below is the format of the logs. Please help me with the regex for props.conf & transforms.conf.

2022-12-15T16:02:11+05:30 gd9017 msgtra.imss[26879]: NormalTransac#0112022 Dec 15 16:01:30 +05:30#0112022/12/15 16:01:31 +05:30#0112022 Dec 15 16:01:31 +05:30#01136082476.4647.1671100216806.JavaMail.jwsuser@communication-api-9-xrc8m#0118B3D3323-EFDB-5B05-A5EA-9077D10C03DD#011288C06408D#0111#011donotreply@test.com#011uat08@test.org.in#011Invoices not transmitted to ICEGATE because of Negative ledger balance.#011103.83.79.99#011[172.18.201.13]:25#011250 2.0.0 Ok: queued as 619AE341807#011sent#01100100000000000000#0110#011#0112022 Dec 15 16:01:31 +05:30#0112022 Dec 15 16:01:31 +05:30#011#0113#011

Fields in the logs are time, computer, from, to, subjectline, attachment name.
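A hedged sketch of one way to split these events into fields, assuming the #011 sequences are syslog-escaped tabs (the sourcetype name and the field list/order are placeholders that must be matched against the actual data):

```
# props.conf  (sourcetype name is a placeholder)
[device:msgtra]
# Rewrite the literal "#011" escapes back into real tabs at index time
SEDCMD-untab = s/#011/\t/g
# Delimiter-based field extraction at search time
REPORT-msgtra_fields = msgtra_fields

# transforms.conf
[msgtra_fields]
DELIMS = "\t"
FIELDS = transac_type, received_time, processed_time, sent_time, message_id, internal_id, queue_id, count, from, to, subjectline, src_ip, relay, smtp_response, status
```

Note that SEDCMD rewrites _raw at index time and therefore only affects newly indexed data; the REPORT extraction then runs at search time on the cleaned events.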
I've got 3 single values and I'd like to put them into one row within a panel. The problem is that the last single value jumps to the row below. Is there any way to reduce the size or width so that all 3 single values always stay in the same row, no matter what size the browser window is?
Hi All, I'm facing an issue while appending the results of 2 searches using the append command. I have 2 searches which I'm using to get results, and both queries use a lookup command to get ip_address details from a lookup.

Search query 1: index=abc filter1=A | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..

Query 2: index=abc filter2=B | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..

Both searches are almost the same except for the "filter" field and the eval commands, and I'm using the append command to append the results as below:

index=abc filter1=A | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..| append [search index=abc filter2=B | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..]

I'm getting an error ([subsearch]: Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup) when running the above query, but each search runs fine on its own. Please let me know what I am doing wrong.
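Since the two searches differ only in the filter, one way to sidestep append (and the subsearch lookup error) altogether is a single pass over both filters, tagging each event with its originating set (index, lookup, and field names are taken from the question; the stats/eval logic is elided just as in the original):

```
index=abc (filter1=A OR filter2=B)
| lookup def host AS host OUTPUT ipaddress
| eval result_set=if(filter1=="A", "query1", "query2")
| stats count by result_set
```

Replace the final stats with your own stats/eval pipeline, split by result_set where the two queries need to differ.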
IPs in my lookup table:
3.124.56/32
64.37.99.0/24
55.63.24.7/16
How do I edit my search to exclude IPs that fall inside the subnet ranges in the lookup file?
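One hedged sketch: define the lookup with match_type = CIDR(cidr) in the lookup definition, so that a /24 or /16 entry matches any address inside the range, then keep only events whose IP matches nothing (the lookup and field names below are placeholders):

```
<your base search>
| lookup subnet_lookup cidr AS src_ip OUTPUT cidr AS matched_range
| where isnull(matched_range)
```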
I am facing a socket issue in Splunk Cloud. How can we set the following parameters on Splunk Cloud?
DefaultLimitFSIZE=-1
DefaultLimitNOFILE=64000
DefaultLimitNPROC=8192
Hello, I have a CSV file with some summary stats from an index, but the requirement is to show a sample event with all the info from that index. The CSV file has a hash number (of an account number) and some calculated stats. For example:

PAN             total_trans  total_amount  HASHPAN
1234******5678  15           15000         ABC123

The index has all the transactions and their detail. We need to take a sample from the index for each row in the CSV and output a new CSV that includes the sample detail, something like this:

PAN             total_trans  total_amount  HASHPAN  _time    TRACE   TRANSACTIONID
1234******5678  15           15000         ABC123   xxxxxxx  xxxxxx  xxxxxxxxx

I don't really want to use join, because the index has a lot of events (around 32 million). Is there an elegant way to get the data?
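One join-free sketch: restrict the index search to the hashes in the CSV, keep one sample event per hash, then enrich from the CSV with a lookup (the index name and CSV filename are placeholders; this assumes the indexed events carry a HASHPAN field that the subsearch can filter on):

```
index=transactions [ | inputlookup summary.csv | fields HASHPAN ]
| dedup HASHPAN
| lookup summary.csv HASHPAN OUTPUT PAN total_trans total_amount
| table PAN total_trans total_amount HASHPAN _time TRACE TRANSACTIONID
| outputlookup sample_detail.csv
```

The subsearch turns the CSV into an OR of HASHPAN=... terms, so only matching events are scanned, and dedup keeps a single sample per hash instead of all 32 million events.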
Hi. I'm looking to make a table/stats of all fields in a search, displaying all values inside each field. Similar to stats count, but instead of counting the values, I want to display all the values themselves.
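A short sketch of one way to do this (the base search is a placeholder): stats can collapse every field to its distinct values in a single row:

```
<your base search> | stats values(*) AS *
```

Alternatively, `| fieldsummary | fields field values` produces one row per field with a summary of its values and their counts, which can be easier to read when there are many fields.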
Hi, I am facing a strange issue on a SIEM installation (Splunk 9.0.2 / ES 7.0.1) in regard to multisearch, which is used inside the Threat Intel framework for threatmatch src. The customer complains that he does not get notables for Threat Activity coming from IP intel matches. I was able to find out that the threat match is not running properly anymore. Running the search manually I get: Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command) Expanding all the macros, I don't see what's wrong with the multisearch; this all looks proper to me. | multisearch [| tstats prestats=true summariesonly=true values("sourcetype"),values("DNS.dest") from datamodel="Network_Resolution"."DNS" by "DNS.query" | eval SEGMENTS=split(ltrim('DNS.query', "."), "."),SEG1=mvindex(SEGMENTS, -1),SEG2=mvjoin(mvindex(SEGMENTS, -2, -1), "."),SEG3=mvjoin(mvindex(SEGMENTS, -3, -1), "."),SEG4=mvjoin(mvindex(SEGMENTS, -4, -1), ".") | lookup mozilla_public_suffix_lookup domain AS SEG1 OUTPUTNEW length AS SEG1_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG2 OUTPUTNEW length AS SEG2_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG3 OUTPUTNEW length AS SEG3_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG4 OUTPUTNEW length AS SEG4_LENGTH | lookup cim_http_tld_lookup tld AS SEG1 OUTPUT tld AS TLD | eval TGT_LENGTH=coalesce(SEG4_LENGTH,SEG3_LENGTH,SEG2_LENGTH,SEG1_LENGTH, if(cidrmatch("0.0.0.0/0", 'DNS.query'), if(1==0, 4, 0), if(isnull(TLD), 2, 0))),SRC_LENGTH=mvcount(SEGMENTS),"DNS.query_truncated"=if(TGT_LENGTH==0, null, if(SRC_LENGTH>=TGT_LENGTH, mvjoin(mvindex(SEGMENTS, -TGT_LENGTH, -1), "."), null)) | fields - TLD SEG1 SEG2 SEG3 SEG4 SEG1_LENGTH SEG2_LENGTH SEG3_LENGTH SEG4_LENGTH TGT_LENGTH SRC_LENGTH SEGMENTS | eval "DNS.query_truncated"=if("DNS.query_truncated"="DNS.query" OR 'DNS.query_truncated'='DNS.query', null(), 'DNS.query_truncated') | lookup
"threatintel_by_cidr" value as "DNS.query" OUTPUT threat_collection as tc0,threat_collection_key as tck0 | lookup "threatintel_by_domain" value as "DNS.query" OUTPUT threat_collection as tc1,threat_collection_key as tck1 | lookup "threatintel_by_domain" value as "DNS.query_truncated" OUTPUT threat_collection as tc2,threat_collection_key as tck2 | lookup "threatintel_by_system" value as "DNS.query" OUTPUT threat_collection as tc3,threat_collection_key as tck3 | where isnotnull('tck0') OR isnotnull('tck1') OR isnotnull('tck2') OR isnotnull('tck3') | eval intelzip0=mvzip('tc0','tck0',"@@") | eval intelzip1=mvzip('tc1','tck1',"@@") | eval intelzip2=mvzip('tc2','tck2',"@@") | eval intelzip3=mvzip('tc3','tck3',"@@") | eval threat_collection_key=mvappend(intelzip0,intelzip1,intelzip2,intelzip3) | eval "psrsvd_ct_sourcetype"=if(isnull('psrsvd_ct_sourcetype'),'psrsvd_ct_sourcetype','psrsvd_ct_sourcetype') | eval "psrsvd_nc_sourcetype"=if(isnull('psrsvd_nc_sourcetype'),'psrsvd_nc_sourcetype','psrsvd_nc_sourcetype') | eval "psrsvd_vm_sourcetype"=if(isnull('psrsvd_vm_sourcetype'),'psrsvd_vm_sourcetype','psrsvd_vm_sourcetype') | eval "psrsvd_ct_dest"=if(isnull('psrsvd_ct_dest'),'psrsvd_ct_DNS.dest','psrsvd_ct_dest') | eval "psrsvd_nc_dest"=if(isnull('psrsvd_nc_dest'),'psrsvd_nc_DNS.dest','psrsvd_nc_dest') | eval "psrsvd_vm_dest"=if(isnull('psrsvd_vm_dest'),'psrsvd_vm_DNS.dest','psrsvd_vm_dest') | eval "threat_match_field"=if(isnull('threat_match_field'),"src",'threat_match_field') | eval "threat_match_value"=if(isnull('threat_match_value'),'DNS.query','threat_match_value') ] [| tstats prestats=true summariesonly=true values("sourcetype"),values("All_Traffic.dest"),values("All_Traffic.user") from datamodel="Network_Traffic"."All_Traffic" where "All_Traffic.action"="allowed" by "All_Traffic.src" | eval SEGMENTS=split(ltrim('All_Traffic.src', "."), "."),SEG1=mvindex(SEGMENTS, -1),SEG2=mvjoin(mvindex(SEGMENTS, -2, -1), "."),SEG3=mvjoin(mvindex(SEGMENTS, -3, -1), 
"."),SEG4=mvjoin(mvindex(SEGMENTS, -4, -1), ".") | lookup mozilla_public_suffix_lookup domain AS SEG1 OUTPUTNEW length AS SEG1_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG2 OUTPUTNEW length AS SEG2_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG3 OUTPUTNEW length AS SEG3_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG4 OUTPUTNEW length AS SEG4_LENGTH | lookup cim_http_tld_lookup tld AS SEG1 OUTPUT tld AS TLD | eval TGT_LENGTH=coalesce(SEG4_LENGTH,SEG3_LENGTH,SEG2_LENGTH,SEG1_LENGTH, if(cidrmatch("0.0.0.0/0", 'All_Traffic.src'), if(1==0, 4, 0), if(isnull(TLD), 2, 0))),SRC_LENGTH=mvcount(SEGMENTS),"All_Traffic.src_truncated"=if(TGT_LENGTH==0, null, if(SRC_LENGTH>=TGT_LENGTH, mvjoin(mvindex(SEGMENTS, -TGT_LENGTH, -1), "."), null)) | fields - TLD SEG1 SEG2 SEG3 SEG4 SEG1_LENGTH SEG2_LENGTH SEG3_LENGTH SEG4_LENGTH TGT_LENGTH SRC_LENGTH SEGMENTS | eval "All_Traffic.src_truncated"=if("All_Traffic.src_truncated"="All_Traffic.src" OR 'All_Traffic.src_truncated'='All_Traffic.src', null(), 'All_Traffic.src_truncated') | lookup "threatintel_by_cidr" value as "All_Traffic.src" OUTPUT threat_collection as tc0,threat_collection_key as tck0 | lookup "threatintel_by_domain" value as "All_Traffic.src" OUTPUT threat_collection as tc1,threat_collection_key as tck1 | lookup "threatintel_by_domain" value as "All_Traffic.src_truncated" OUTPUT threat_collection as tc2,threat_collection_key as tck2 | lookup "threatintel_by_system" value as "All_Traffic.src" OUTPUT threat_collection as tc3,threat_collection_key as tck3 | where isnotnull('tck0') OR isnotnull('tck1') OR isnotnull('tck2') OR isnotnull('tck3') | eval intelzip0=mvzip('tc0','tck0',"@@") | eval intelzip1=mvzip('tc1','tck1',"@@") | eval intelzip2=mvzip('tc2','tck2',"@@") | eval intelzip3=mvzip('tc3','tck3',"@@") | eval threat_collection_key=mvappend(intelzip0,intelzip1,intelzip2,intelzip3) | eval 
"psrsvd_ct_sourcetype"=if(isnull('psrsvd_ct_sourcetype'),'psrsvd_ct_sourcetype','psrsvd_ct_sourcetype') | eval "psrsvd_nc_sourcetype"=if(isnull('psrsvd_nc_sourcetype'),'psrsvd_nc_sourcetype','psrsvd_nc_sourcetype') | eval "psrsvd_vm_sourcetype"=if(isnull('psrsvd_vm_sourcetype'),'psrsvd_vm_sourcetype','psrsvd_vm_sourcetype') | eval "psrsvd_ct_dest"=if(isnull('psrsvd_ct_dest'),'psrsvd_ct_All_Traffic.dest','psrsvd_ct_dest') | eval "psrsvd_nc_dest"=if(isnull('psrsvd_nc_dest'),'psrsvd_nc_All_Traffic.dest','psrsvd_nc_dest') | eval "psrsvd_vm_dest"=if(isnull('psrsvd_vm_dest'),'psrsvd_vm_All_Traffic.dest','psrsvd_vm_dest') | eval "psrsvd_ct_user"=if(isnull('psrsvd_ct_user'),'psrsvd_ct_All_Traffic.user','psrsvd_ct_user') | eval "psrsvd_nc_user"=if(isnull('psrsvd_nc_user'),'psrsvd_nc_All_Traffic.user','psrsvd_nc_user') | eval "psrsvd_vm_user"=if(isnull('psrsvd_vm_user'),'psrsvd_vm_All_Traffic.user','psrsvd_vm_user') | eval "threat_match_field"=if(isnull('threat_match_field'),"src",'threat_match_field') | eval "threat_match_value"=if(isnull('threat_match_value'),'All_Traffic.src','threat_match_value') ] | mvexpand threat_collection_key | stats values("dest") as "dest",values("sourcetype") as "sourcetype",values("user") as "user" by threat_match_field,threat_match_value,threat_collection_key | rex field=threat_collection_key "^(?<threat_collection>.*)@@(?<threat_collection_key>.*)$" | eval "dest"=mvindex('dest',0,10-1) | eval "sourcetype"=mvindex('sourcetype',0,10-1) | eval "user"=mvindex('user',0,10-1) | eval certificate_intel_key=if(threat_collection="certificate_intel",'threat_collection_key',null()) | eval email_intel_key=if(threat_collection="email_intel",'threat_collection_key',null()) | eval file_intel_key=if(threat_collection="file_intel",'threat_collection_key',null()) | eval http_intel_key=if(threat_collection="http_intel",'threat_collection_key',null()) | eval ip_intel_key=if(threat_collection="ip_intel",'threat_collection_key',null()) | eval 
process_intel_key=if(threat_collection="process_intel",'threat_collection_key',null()) | eval registry_intel_key=if(threat_collection="registry_intel",'threat_collection_key',null()) | eval service_intel_key=if(threat_collection="service_intel",'threat_collection_key',null()) | eval user_intel_key=if(threat_collection="user_intel",'threat_collection_key',null()) | lookup "certificate_intel" _key as "certificate_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "email_intel" _key as "email_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "file_intel" _key as "file_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "http_intel" _key as "http_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "ip_intel" _key as "ip_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "process_intel" _key as "process_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "registry_intel" _key as "registry_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "service_intel" _key as "service_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "user_intel" _key as "user_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup threat_group_intel _key as threat_key OUTPUTNEW description,weight | eval weight=if(isnum(weight),weight,60) | fields - intelzip*,"certificate_intel_key","email_intel_key","file_intel_key","http_intel_key","ip_intel_key","process_intel_key","registry_intel_key","service_intel_key","user_intel_key" | where NOT match(disabled, "1|[Tt]|[Tt][Rr][Uu][Ee]") | dedup threat_match_field,threat_match_value,threat_key Has anyone faced similar problems? BR, Markus
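For comparison against a minimal case: multisearch only accepts subsearches built from streaming commands (eval, where, lookup, rex, fields, and so on, with a generating command such as tstats at the front); a transforming command like stats inside either bracket triggers exactly this error. A trivial sketch with hypothetical indexes:

```
| multisearch
    [ search index=idx_a | eval src_set="a" ]
    [ search index=idx_b | eval src_set="b" ]
| stats count by src_set
```

The stats after the multisearch is fine; only the commands inside the brackets must be streaming.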
Hi, I was going through my Splunk setup and came across this warning when clicking on LDAP Groups under "Authentication Methods > LDAP strategies": "LDAP server warning: Size limit exceeded". I was just wondering what could be the cause and how to resolve this warning. Thank you in advance for any assistance. Mikhael