All Posts

Example 1 - you are using dedup src/dest - you can't do that, as I explained in my other post. Example 2 - dedup here is not useful - you have a multivalue src_ip and you will not have any duplicate src_ip in there relating to the dest_ip, so it's redundant. The best way to work out what's wrong here is to remove the last 2 lines and just let the stats work. If you work on a small time window where you KNOW what data you expect, then you can more easily validate what's wrong. As @isoutamo says, you can even remove the stats and just do table * - but work with a very small data set where the results are predictable. Then you can build back the detail again.
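To make that concrete, the debugging progression looks something like this (a minimal sketch with a hypothetical index, sourcetype and fields, since the original search isn't shown):

index=your_index sourcetype=your_sourcetype earliest=-15m
| stats values(dest_ip) AS dest_ip BY src_ip

and, stripping back one step further:

index=your_index sourcetype=your_sourcetype earliest=-15m
| table *

Run these over a short time window where you know the expected events, then add the removed lines back one at a time.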
It's quotes - you are using * without quotes, so it's an invalid eval. It is sometimes hard to debug eval token setters in dashboards, but generally, if you find that nothing appears to happen when you have an <eval> token setter, it's probably hitting an error. I often have a panel with depends="$debug_tokens$" containing an html element that reports token values, which I can turn on as needed. Also, in the Dashboard Examples app there is a showtokens.js which can help you uncover token issues.
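For illustration, the broken and fixed versions look something like this (a minimal Simple XML sketch; the token name is hypothetical):

<change>
  <!-- invalid: * is not a valid eval expression, so the token is silently never set -->
  <eval token="filter">*</eval>
  <!-- valid: "*" is a quoted string literal -->
  <eval token="filter">"*"</eval>
</change>

And a debug panel along the lines described above:

<row depends="$debug_tokens$">
  <panel>
    <html>
      <p>filter = $filter$</p>
    </html>
  </panel>
</row>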
https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 - this link contains the exact steps which are needed, including removing the old peers from the CM, as "splunk offline --enforce-counts" alone is not enough.
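Removing decommissioned peers from the cluster manager's list is done with something like the following, run on the CM (a sketch; the GUID is a placeholder for the peer's actual GUID as shown on the CM):

splunk remove cluster-peers -peers <peer_guid>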
Hi gcusello, Thanks for your response and suggestions.

1- As yuanliu said, please see the samples in text format (using the Insert/Edit Code Sample button).

2- I put all the search terms in the main search like you said, as below:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* clientHost failedAttempts userAgent userName authType message timestamp actionName status "actionName":"login" "message":"Authentication failed" "status":"FAILURE" "timestamp":"2025-01-08T*"

Sample results:

{"endpoint":"/requesttoken","clientHost":"localhost:36976","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"TEST6","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-30T11:27:02.881-06:00","actionName":"requestToken","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"localhost:53964","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"login succeed","timestamp":"2024-12-30T15:47:15.496-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"localhost:54556","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-30T15:47:33.226-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/authtoken","clientHost":"localhost:54552","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-30T15:47:35.36-06:00","actionName":"requestToken","status":"SUCCESS"},
{"endpoint":"/gsql/userdefinedtokenfunctions","clientHost":"localhost:54556","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"getUDFs succeeded","timestamp":"2024-12-30T15:47:38.872-06:00","actionName":"getUDFs","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"100.71.65.228","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-30T15:47:39.214-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/file","clientHost":"100.71.65.229","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"exportJob succeeded","timestamp":"2024-12-30T15:47:39.292-06:00","actionName":"exportJob","status":"SUCCESS"},
{"endpoint":"/gsql/authtoken","clientHost":"localhost:54552","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-30T15:47:40.63-06:00","actionName":"requestToken","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"localhost:54556","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-30T15:47:41.877-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST2","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T10:00:19.455-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST2","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T10:03:22.203-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"localhost:47404","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST3","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T10:18:22.9-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST3","authType":"LDAP","message":"Authentication failed!","timestamp":"2024-12-31T10:25:32.26-06:00","actionName":"login","status":"FAILURE"},
{"endpoint":"/requesttoken","clientHost":"localhost:35260","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"TEST6","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-31T11:00:05.35-06:00","actionName":"requestToken","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST3","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T11:24:31.435-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"localhost:47318","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"c089265","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-31T21:15:55.995-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"localhost:38336","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"c089265","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-02T11:36:45.844-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"100.82.128.85","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"c089265","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-03T03:59:10.235-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"localhost:38012","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T13:47:43.429-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST4","authType":"LDAP","message":"Authentication failed!","timestamp":"2025-01-06T13:48:27.717-06:00","actionName":"login","status":"FAILURE"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.228","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST4","authType":"LDAP","message":"Authentication failed!","timestamp":"2025-01-06T13:48:32.587-06:00","actionName":"login","status":"FAILURE"},
{"endpoint":"/gsql/simpleauth","clientHost":"/127.0.0.1:43520","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST4","authType":"LDAP","message":"Authentication failed!","timestamp":"2025-01-06T13:48:36.03-06:00","actionName":"login","status":"FAILURE"},
{"endpoint":"/gsql/simpleauth","clientHost":"localhost:60404","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST5","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T19:59:28.295-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST5","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T19:59:40.885-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/login","clientHost":"localhost:53886","clientOSUsername":"TEST4","failedAttempts":0,"userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"login succeeded","timestamp":"2025-01-06T20:45:36.492-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/file","clientHost":"10.138.170.165","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"showCatalog succeeded","timestamp":"2025-01-06T20:45:37.241-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/login","clientHost":"localhost:39154","clientOSUsername":"TEST4","failedAttempts":0,"userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"login succeeded","timestamp":"2025-01-06T20:46:48.666-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/file","clientHost":"10.138.170.165","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"showCatalog succeeded","timestamp":"2025-01-06T20:46:49.376-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/login","clientHost":"10.138.170.165","clientOSUsername":"TEST4","failedAttempts":0,"userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"login succeeded","timestamp":"2025-01-06T20:47:14.033-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/file","clientHost":"localhost:39154","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"Successfully used graph 'aml_risk_graph'.","timestamp":"2025-01-06T20:47:14.863-06:00","actionName":"useGraph","status":"SUCCESS"},
{"endpoint":"/gsql/file","clientHost":"localhost:39154","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"Successfully created query 'Del_orphan_edges_for_previous_primary'.","timestamp":"2025-01-06T20:47:17.079-06:00","actionName":"createQuery","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.228","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T20:55:21.895-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"localhost:43048","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-06T20:56:34.057-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/queries","clientHost":"localhost:43048","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"showQuery succeeded","timestamp":"2025-01-06T20:56:34.781-06:00","actionName":"showQuery","status":"SUCCESS"},
{"endpoint":"/gsql/file","clientHost":"localhost:39154","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"Successfully installed query [Del_orphan_edges_for_previous_primary].","timestamp":"2025-01-06T20:57:36.229-06:00","actionName":"installQuery","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"100.71.65.229","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-06T20:57:46.93-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/queries","clientHost":"localhost:60138","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"showQuery succeeded","timestamp":"2025-01-06T20:57:47.563-06:00","actionName":"showQuery","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"localhost:48050","failedAttempts":0,"userAgent":"GraphStudio","userName":"sxvedag","authType":"LDAP","message":"login succeed","timestamp":"2025-01-07T08:56:29.476-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/gsql/schema","clientHost":"localhost:48036","userAgent":"GraphStudio","userName":"sxvedag","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-07T08:56:38.165-06:00","actionName":"showCatalog","status":"SUCCESS"},
{"endpoint":"/gsql/queries","clientHost":"100.71.65.229","userAgent":"GraphStudio","userName":"sxvedag","authType":"LDAP","message":"showQuery succeeded","timestamp":"2025-01-07T08:56:38.823-06:00","actionName":"showQuery","status":"SUCCESS"},
{"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.228","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"login succeed","timestamp":"2025-01-07T10:16:11.689-06:00","actionName":"login","status":"SUCCESS"},
{"endpoint":"/requesttoken","clientHost":"localhost:45058","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"TEST6","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2025-01-08T09:20:02.381-06:00","actionName":"requestToken","status":"SUCCESS"} ]

But here I don't know why the filter is giving me events with "status":"SUCCESS" and "message":"login succeed", even though my login-failed condition is "message":"Authentication failed!". Maybe my condition in the search query is wrong. Also, I was trying to restrict the results to one timestamp, but it is showing results for other timestamps as well.

3- I am supposed to also check the condition for each host in our infrastructure and each account, using my search below:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* clientHost failedAttempts userAgent userName authType message timestamp actionName status "actionName":"login" "message":"Authentication failed" "status":"FAILURE" "timestamp":"2025-01-08T*"

4- I tried to use the stats command to aggregate results and the where command to filter them, something like this:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* clientHost failedAttempts userAgent userName authType message timestamp actionName status "actionName":"login" "message":"Authentication failed" "status":"FAILURE" "timestamp":"2025-01-08T*"
| stats count BY actionName host
| where count>3

Results: No results found. Thank you for your help.
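Since the events are JSON, an alternative worth trying is to filter on extracted fields rather than quoted raw strings, along these lines (a sketch; it assumes Splunk auto-extracts actionName, status etc. from the JSON - otherwise a spath step can be added first - and it uses a relative time range instead of a wildcard inside a quoted string, which does not match reliably):

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* actionName=login status=FAILURE earliest=-24h
| stats count BY userName host
| where count>3

Note that status=FAILURE (a field comparison) and "status":"FAILURE" (a raw-text phrase) are not equivalent filters.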
Maybe this can be a solution to your challenge: https://community.splunk.com/t5/Deployment-Architecture/KVStore-does-not-start-when-running-Splunk-9-4/m-p/708304/thread-id/29016/highlight/false#M29017
Have you checked mongod.log for why this is not starting? There is another case where the Windows OS version was not supported by Splunk 9.4.0: https://community.splunk.com/t5/Splunk-Enterprise/KVstore-unable-to-start-after-upgrade-to-Splunk-Enterprise-9-4/m-p/708264#M21264
I did some reading of the documentation and realized that the underlying MongoDB was upgraded to 7. I figured out that MongoDB 5+ requires the AVX instruction set. So, time to check if the CPU supports AVX - in my case the CPU model did support these instructions, but running the lscpu command didn't show the AVX flags. It turned out that AVX instructions were not available because the VM had processor compatibility mode enabled. In Hyper-V we had to clear the "Allow migration to a virtual machine host with a different processor version" checkbox. After the VM was restarted, AVX appeared in the CPU flags and the Splunk KV Store was operational. Lesson learned: before upgrading to 9.4 (or making a fresh install), check if the AVX flag is available. If it isn't, it is about time to upgrade your hardware - or stick to Splunk 9.3.
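A quick way to check is something like this (standard Linux commands, nothing Splunk-specific):

lscpu | grep -o 'avx[^ ]*' | sort -u

or:

grep -c avx /proc/cpuinfo

If neither prints anything (or the count is 0), AVX is not visible to the OS.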
I am posting this to maybe save you a few hours of troubleshooting like I had. I did a clean install of Splunk 9.4 in a small customer environment with a virtualized AIO instance. After the installation there was an error notifying that the KV Store could not start and that the mongo log should be checked. The following error was logged:

ERROR KVStoreConfigurationProvider [4755 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.

Mongod.log was completely empty, so there were no clues in the log files about what was wrong and what I could do to make the KV Store operational. Time to start Googling. The solution is posted in the next post.
How about ceil for diffTime? https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/MathematicalFunctions#ceiling.28.26lt.3Bnum.26gt.3B.29_or_ceil.28.26lt.3Bnum.26gt.3B.29
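Applied to the search in the question below, that would be something along these lines (a sketch reusing that search's field names; the times are in milliseconds, so 60000 ms = 1 minute):

| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval DuracionMin=ceil(diffTime/60000)

ceil rounds up, so 00:02 becomes 1, 01:53 becomes 2, and 09:20 becomes 10, matching the expected output.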
If you can, you could try to standardise your company log messages, for example:

Mandatory part

Here are some fields/information that every event must contain, regardless of the service. It doesn't matter if those fields are KV, JSON, or just placement formatted. More important is that these are always present and easily identified in the log event. A sample event following this scheme is sketched after the list.

- timestamp - RFC-3339 formatted timestamp (e.g. 2024-07-01T12:13:15.123+03:00) - When the event occurred in this service.
- log_type - audit/apps/trace/metric (e.g. audit) - What the event type is from a security content perspective.
- source_ip - IP of the logging system/service (e.g. 10.11.22.123) - Where the event occurred from an IP perspective.
- source_system - Source system of the log event (e.g. aa.bb.local) - Host or service where the event was created.
- process - Process/service which has processed the event (e.g. app_abc) - Which application/service processed the event.
- sessionId - Session the event belongs to (e.g. 82B98B54-9553-43CD-A5AB-E6F45656CD95) - e.g. a GUID to identify the entire session.
- requestId - Request the event belongs to (e.g. 9DF09DE7-4061-487B-953C-49B73C000E2C) - e.g. a GUID to identify an individual request within the session.
- userId - User's identification on the service (e.g. a12345) - Pseudonyms should be used instead of real user IDs to avoid exposing PII.
- outcome - Status of the action (e.g. Error) - Did the action succeed or fail?
- errorDetails - Details for the action result (e.g. Not authorized) - A more detailed error message, including the full message, could be part of the service-based payload.
- payload - Application/service specific parts (e.g. { "as":23, { "aa":"bb", "cc":12}}) - A separate payload based on the real audit trail needs.

In that way you could do some kind of DM based on this, but as I said, usually the payload is the interesting part and it is different for every subsystem/service etc. And there is a lot of equipment which has its own log format.
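For instance, a JSON-formatted event following the mandatory part might look like this (a sketch; the values are the illustrative ones from the list above):

{"timestamp":"2024-07-01T12:13:15.123+03:00","log_type":"audit","source_ip":"10.11.22.123","source_system":"aa.bb.local","process":"app_abc","sessionId":"82B98B54-9553-43CD-A5AB-E6F45656CD95","requestId":"9DF09DE7-4061-487B-953C-49B73C000E2C","userId":"a12345","outcome":"Error","errorDetails":"Not authorized","payload":{"aa":"bb","cc":12}}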
Thank you for the input, everyone. @isoutamo - you are correct in that each data source I'm looking at has vastly different data available... Some sources come from endpoint agents which have username, endpoint name, IP address (local/public), URL, URL IP, etc. Other sources are from network devices and might track users by local IP only, but also might have which FW the request goes through, etc. I have one source which only lists a single field to identify the user... the MAC address... really not helpful without an additional lookup. I ended up using a number of macros and lots of coalesces to make my field names consistent.
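The coalesce pattern referred to is along these lines (a sketch with hypothetical field names):

| eval user=coalesce(userName, user, src_user)
| eval src=coalesce(src_ip, local_ip, clientHost)

wrapped in a macro per data source, so the normalization stays in one place.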
I have this search, where I get the duration and I need to convert it to a whole number of minutes. Example (Min:Sec to whole):

00:02 to 1
00:16 to 1
01:53 to 2
09:20 to 10
...etc

Script:

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S %Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=strftime(diffTime/1000, "%M:%S")
| table Countrie, Duracion

Current results:

Spain 00:02
Spain 00:16
Argentina 00:53
Spain 09:20
Spain 02:54
Spain 28:30
Spain 01:18
Spain 00:28
Spain 16:40
Spain 00:03
Chile 00:25
Uruguay 01:54
Spain 01:54

Regards.
This is actually quite a simple question, but giving a good answer is extremely hard. My answer, or actually my thinking, is based on what I have learned e.g. in the finance sector. I suppose that in almost all enterprise-grade business a URL is just a start/execution point for the real application. This means that when this endpoint is called, it's just e.g. an API Gateway request which is forwarded to one (usually several) backend(s) which process the real request and then return the needed response to the client. From a technical point of view this means that there is a session (what the user is doing in a real-life transaction) and this contains several requests (those individual URLs) which process e.g. an individual dashboard or a step in the real process. Usually there should be a sessionId which is fixed for one real-life transaction, e.g. logging into the web bank and doing whatever you are doing in one login (e.g. check balance, pay some invoices, transfer money etc.). Then there is a requestId, which is the execution of one individual URL / process step (like see our account balance, check an invoice, modify an invoice, accept it for payment etc.). When you think about this workflow, and what kind of events all those tens of subsystems generate for a click on one entry-point URL, it's quite obvious that you cannot define any DM which can describe this bunch of events. I suppose that you can do some DM for the base audit data, but as the payloads of different requests to backend systems are totally different, it will be extremely hard to create a generic DM for this. If/when needed you can do it yourself, but quite probably it will be different for every customer, or at least for every entry point. Just some thoughts, not any real answer. r. Ismo.
Hi, I propose that you set up a test/lab system to test and document this change. There are some things which you need to check and test:

- Are you using the REST API for current users? This works differently with SAML users.
- How do you ensure that used user accounts / userIDs don't change - or, if they have changed, how do you migrate users' private KOs, users' schedules, and anything else which depends on the userID?
- Do you need CLI access with your old LDAP users? This doesn't work with a SAML account, or at least it needs some additional scripts or something else.
- Probably something else, depending on which SAML IdP you are using?

r. Ismo
Hi, I think this is a place for a subsearch, like:

index=lalala source=lalala EventID=4728 AND PrimaryGroupId IN (512,516,517,518,519) AND
    [ search index=lalala source=lalala EventID=4720
      | fields UserName
      | dedup UserName
      | format ]

This way it first looks up the UserNames which have been created, and then the outer base search filters on those (UserName="xxx" OR UserName="yy" ...). If you are looking at a long period then maybe there are better options too. r. Ismo
All fields should be there if they contain some values. You could debug it e.g.:

- comment the dedup away
- comment the stats away and replace it with table

Also, if/when you are using Verbose mode you can see what values you have in the Events tab. With Smart or Fast mode this tab is not available. r Ismo
One old post which does exactly what you are doing: https://community.splunk.com/t5/Getting-Data-In/sending-specific-events-to-nullqueue-using-props-amp-amp/m-p/660688
Here is the other one: https://conf.splunk.com/files/2021/slides/PLA1410C.pdf
Yes, I want to take all logs except events with envoy in them. I am only using a universal forwarder, which I believe can't parse any data like a heavy forwarder can. Am I mistaken? I made the transformation names unique and restarted Splunk via the GUI, but still no discards.
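For reference, the usual nullQueue setup looks something like this (a sketch; the sourcetype name is hypothetical, and the props/transforms must live on the indexers or a heavy forwarder, since a universal forwarder does not apply these index-time transforms to unstructured data):

props.conf:

[your_sourcetype]
TRANSFORMS-drop_envoy = drop_envoy

transforms.conf:

[drop_envoy]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue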
Please write all SPL etc. inside </> (code sample) tags. That way it is easier to take into use, and it also ensures that we see the same SPL that you have written in your example.