All Topics


Hi Folks, I have the following issue on my Cluster Master when trying to create an index and push the bundle:

[Critical] In index 'oxxion': Failed to create directory '/opt/indexes_frozen/oxxion' (File exists)

I have been unable to identify what the issue is. Does anyone have an idea on how to resolve this? Thanks in advance!
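For reference, a stanza of the kind being pushed would look roughly like this; the frozen path is the one from the error, and the other settings are a hypothetical illustration of a typical index definition:

[oxxion]
homePath = $SPLUNK_DB/oxxion/db
coldPath = $SPLUNK_DB/oxxion/colddb
thawedPath = $SPLUNK_DB/oxxion/thaweddb
coldToFrozenDir = /opt/indexes_frozen/oxxion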
After updating to unprivileged mode (because privileged mode is being deprecated), we are getting access-denied issues when running some commands, for example: "phenv ibackup --setup". So far, the issues seem to be with commands that relate to Postgres. How are we supposed to run unprivileged but still complete admin tasks? Note: if I run the commands as root, I get this error: "CommandError: This command must be executed by user phantom." Thanks!
Hello everybody! My standalone SH's KV store storage engine did not migrate to WiredTiger after upgrading to Splunk 9.0.2. I then followed these steps from https://docs.splunk.com/Documentation/Splunk/8.1.3/Admin/MigrateKVstore ("Migrate the KV store after an upgrade to Splunk Enterprise 8.1.* or 8.2.* in a single-instance deployment"):

1. Stop Splunk Enterprise. Do not use the -f option.
2. Open server.conf in the $SPLUNK_HOME/etc/system/local/ directory.
3. Edit the storageEngineMigration setting to match the following example:
   [kvstore]
   storageEngineMigration=true
4. Save the server.conf file.
5. To begin the migration, use the following command:
   splunk migrate kvstore-storage-engine --target-engine wiredTiger

The migration fails with:

Starting KV Store storage engine upgrade:
Phase 1 (dump) of 2: ..ERROR: Failed to migrate to storage engine wiredTiger, reason=

How can I troubleshoot this further? Thanks.
Good morning,

I am trying to create a filter to exclude events where the user is 3 letters followed by 4 non-zero digits (e.g. FSA4568), but only during the time these users start work. I have created the regex filter for the user, but I don't know how to combine it with the time condition. The goal is that no events appear when the user matches the 3-letters-plus-4-digits pattern and the time is between 7:30 and 9:30 a.m. How can I integrate it? This is the search:

(index="anb_andorra" OR index="anb_luxembourg" OR index="anb_monaco" OR index="anb_espana") source="XmlWinEventLog:Security" ((EventCode IN (4771,4768) Error_Code=0x6) OR (EventCode=4625 Error_Code="0xc000006d")) user!="*$" src!="::ffff:*" | regex user!="([A-Z]{3}[1-9]{4})" | eval timestamp = _time*1000, name = signature
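For illustration, this is roughly the combined condition I am after; it assumes the hour and minute can be taken from _time, and that an event is excluded only when both conditions (pattern match and 07:30-09:30) hold. Untested:

... base search ...
| eval hhmm = tonumber(strftime(_time, "%H%M"))
| where NOT (match(user, "^[A-Z]{3}[1-9]{4}$") AND hhmm >= 730 AND hhmm <= 930)
| eval timestamp = _time*1000, name = signature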
Hi everyone, and thanks for any tips. The question is: can the font be changed in Dashboard Studio? More specifically, I'm trying to change the font family of a text box (markdown text). I've found this material on the topic: 1) font in beta dashboards, https://community.splunk.com/t5/Dashboards-Visualizations/How-to-change-font-in-beta-dashboard/m-p/510029, which doesn't seem to work in Dashboard Studio, and 2) text boxes in Dashboard Studio, https://docs.splunk.com/Documentation/SplunkCloud/9.0.2209/DashStudio/chartsText. So I know it is possible to change size and color, but I don't see a way of changing the font family. Thanks.
Hi All Splunk Experts. I'd like to create an alert in a certain index when the word "Finished" doesn't appear within five minutes of the word "Starting". For context: we upload a file and see the string "Started"; when we then don't see the word "Finished" within 5 minutes, I'd like to have an alert. By the way, my regex knowledge is really poor. Can you help? Much appreciated, Sheldon.
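A rough sketch of the kind of alert search I have in mind; the index name and the exact start/finish strings are placeholders, ideally a grouping field (such as a file or job ID) would also be given to the transaction, and this is untested:

index=my_index ("Starting" OR "Finished")
| transaction startswith="Starting" endswith="Finished" maxspan=5m keepevicted=true
| where closed_txn=0

The idea is that any "Starting" that never sees a "Finished" within 5 minutes stays as an open transaction (closed_txn=0), and the alert would trigger whenever the result count is greater than 0.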
I want to filter on the Subject Account Name in the event log below, keeping only values other than Admin. In other words, I want to see the cases where this log appears for accounts other than Admin. How can I do it?

11/29/2022 12:23:16 PM
LogName=Security
EventCode=4738
EventType=0
ComputerName=dc.windomain.local
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=247213
Keywords=Audit Success
TaskCategory=User Account Management
OpCode=Info
Message=A user account was changed.

Subject:
    Security ID: S-1-5-21-4236582264-665789389-1555517817-1000
    Account Name: Admin
    Account Domain: WINDOMAIN
    Logon ID: 0x59B44

Target Account:
    Security ID: S-1-5-21-4236582264-665789389-1555517817-1324
    Account Name: aleda.billye
    Account Domain: WINDOMAIN
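For illustration, this is the sort of filter I am imagining; it assumes the Windows add-on extracts Account_Name as a multivalue field whose first value is the Subject account, and the index name is a placeholder (untested):

index=wineventlog EventCode=4738
| eval subject_account=mvindex(Account_Name, 0)
| where subject_account!="Admin"

If the Subject Account Name is extracted under a different field name, that field would be used instead of the mvindex step.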
Hi All, we have the data below extracted in Splunk. In the "Node" field we need to combine consecutive values in pairs (the first two values become one value, the next two become one value, and so on) and map each pair to the corresponding COUNT value. For example, from the first row's "Node" field we need to create three separate pairs from consecutive values and map each pair to its COUNT value.

Expected result:

COUNT    Node
682      gol************,ser****
622      gol************,ser****
606      gol************,ser****

Note: the asterisks are just for masking, not part of the requirement; only the format above is required. COUNT and Node are multivalue fields and we need single-value fields in the format above. Can someone please help me achieve this? I have spent 2 days on it and have not found a solution. Any help would be appreciated a lot.

Thanks,
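A sketch of the pairing logic I am trying to achieve; it assumes COUNT has exactly one value per pair of Node values, so Node values 0 and 1 go with the first COUNT, values 2 and 3 with the second, and so on (untested):

| eval n=mvrange(0, mvcount(COUNT))
| mvexpand n
| eval COUNT=mvindex(COUNT, n), Node=mvindex(Node, 2*n).",".mvindex(Node, 2*n+1)
| table COUNT Node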
Hello guys,  Can you help us with this case, thank you in advance. We received 300k events in 24 hours, we have to process on peak, about 15k in real-time, and this job takes 140 sec to process, is it possible to make it take less time ? The application it's already developed, the output should stay the same.  Savedsearches.conf:       [Preatreament - Opération Summary] action.email.show_password = 1 action.logevent = 1 action.logevent.param.event = _time=$result._time$|ABC123456Emetrice=$result.ABC123456Emetrice$|ABC123456Receptrice=$result.ABC123456Receptrice$|ABCaeiou=$result.ABCaeiou$|ABCdonneurbbbb=$result.ABCdonneurbbbb$|AAAAaeiou=$result.AAAAaeiou$|AAAADonneurbbbb=$result.AAAADonneurbbbb$|application=$result.application$|canal=$result.canal$|codeE=$result.codeE$|count=$result.count$|csv=$result.csv$|dateEmissionrrrr=$result.dateEmissionrrrr$|dateReglement=$result.dateReglement$|date_hour=$result.date_hour$|date_mday=$result.date_mday$|date_minute=$result.date_minute$|date_month=$result.date_month$|date_second=$result.date_second$|date_wday=$result.date_wday$|date_year=$result.date_year$|date_zone=$result.date_zone$|deviseOrigine=$result.deviseOrigine$|deviseReglement=$result.deviseReglement$|encryptedAAAAaeiou=$result.encryptedAAAAaeiou$|encryptedAAAADonneurbbbb=$result.encryptedAAAADonneurbbbb$|etat=$result.etat$|eventtype=$result.eventtype$|heureEmissionrrrr=$result.heureEmissionrrrr$|host=$result.host$|identifiantrrrr=$result.identifiantrrrr$|index=$result.index$|info_max_time=$result.info_max_time$|info_min_time=$result.info_min_time$|info_search_time=$result.info_search_time$|lastUpdate=$result.lastUpdate$|libelleRejet=$result.libelleRejet$|linecount=$result.linecount$|montantOrigine=$result.montantOrigine$|montantTransfere=$result.montantTransfere$|motifRejet=$result.motifRejet$|nomaeiou=$result.nomaeiou$|nomDonneurbbbb=$result.nomDonneurbbbb$|orig_index=$result.orig_index$|orig_sourcetype=$result.orig_sourcetype$|phase=$result.phase$|punct=$result.punct$|refEstampillage=$result.refEstampillage$|refFichier=$result.refFichier$|refbbbbClient=$result.refbbbbClient$|refTransaction=$result.refTransaction$|search_name=$result.search_name$|search_now=$result.search_now$|sens=$result.sens$|source=$result.source$|sourcetype=$result.sourcetype$|splunk_server=$result.splunk_server$|splunk_server_group=$result.splunk_server_group$|startDate=$result.startDate$|summaryDate=$result.summaryDate$|timeendpos=$result.timeendpos$|timestamp=$result.timestamp$|timestartpos=$result.timestartpos$|typeOperation=$result.typeOperation$|summaryDate_ms=$result.summaryDate_ms$|UUUUUETR=$result.UUUUUETR$|messageDefinitionIdentifier=$result.messageDefinitionIdentifier$|ssssssInstructionId=$result.ssssssInstructionId$|endToEndIdentification=$result.endToEndIdentification$| action.logevent.param.index = bam_xpto_summary action.logevent.param.sourcetype = Opération_summary action.lookup = 0 action.lookup.append = 1 action.lookup.filename = test.csv alert.digest_mode = 0 alert.severity = 1 alert.suppress = 0 alert.track = 0 counttype = number of events cron_schedule = */1 * * * * dispatch.earliest_time = -6h dispatch.latest_time = now enableSched = 1 quantity = 0 relation = greater than search = (index="bam_xpto" AND sourcetype="Opération") OR (index="bam_xpto_summary" sourcetype="Opération_summary" earliest=-15d latest=now)\ | search [ search index="bam_xpto" AND sourcetype="Opération" \ | streamstats count as id \ | eval splitter=round(id/500) \ | stats values(refEstampillage) as refEstampillage by splitter\ | 
fields refEstampillage]\ | sort 0 - _time indexTime str(sens)\ | fillnull application phase etat canal motifRejet libelleRejet identifiantrrrr dateReglement ABCdonneurbbbb nomDonneurbbbb ABCaeiou nomaeiou codeEtablissement refFichier messageDefinitionIdentifier UUUUUETR ssssssInstructionId endToEndIdentification value=" " \ | eval codeEtablissement=if(codeEtablissement=="", "N/R",codeEtablissement),\ identifiantrrrr=if(identifiantrrrr=="", "N/R",identifiantrrrr),\ dateReglement=if(dateReglement=="", "N/R",dateReglement),\ ABCdonneurbbbb=if(ABCdonneurbbbb=="", "N/R",ABCdonneurbbbb), \ nomDonneurbbbb=if(nomDonneurbbbb=="", "N/R",nomDonneurbbbb),\ ABCaeiou=if(ABCaeiou=="", "N/R",ABCaeiou),\ nomaeiou=if(nomaeiou=="", "N/R",nomaeiou),\ libelleRejet=if(libelleRejet=="", "N/R",libelleRejet),\ refFichier=if(refFichier=="", "N/R",refFichier),\ application=if(application=="", "N/R",application),\ canal=if(canal=="", "N/R",canal),\ motifRejet=if(motifRejet=="", "N/R",motifRejet),\ count=if(sourcetype="Opération", 1, count),\ startDate=if(isnull(startDate), _time, startDate),\ typeOperation = if(NOT (messageDefinitionIdentifier==" " AND endToEndIdentification== " " AND ssssssInstructionId == " " AND UUUUUETR== " ") , messageDefinitionIdentifier, typeOperation), \ refTransaction = if(NOT (messageDefinitionIdentifier==" " AND endToEndIdentification== " " AND ssssssInstructionId == " " AND UUUUUETR== " ") , ssssssInstructionId, refTransaction),\ relatedRef = if(NOT (messageDefinitionIdentifier==" " AND endToEndIdentification== " " AND ssssssInstructionId == " " AND UUUUUETR== " ") , endToEndIdentification, relatedRef)\ | foreach * \ [eval <<FIELD>>=replace(<<FIELD>>, "\"","'"), <<FIELD>>=replace(<<FIELD>>, "\\\\"," "), <<FIELD>>=replace(<<FIELD>>, ",",".")]\ | eval nomDonneurbbbb=replace(nomDonneurbbbb,"[^\p{L}\s]",""), nomaeiou=replace(nomaeiou,"[^\p{L}\s]","") \ | eval nomDonneurbbbb=replace(nomDonneurbbbb,"\s{2,99}"," "), nomaeiou=replace(nomaeiou,"\s{2,99}"," ") \ | stats latest(_time) as _time, latest(Actions_xpto) as Actions_xpto, list(sens) as sens, list(phase) as phase, list(etat) as etat, list(identifiantrrrr) as identifiantrrrr, list(dateReglement) as dateReglement, list(ABCdonneurbbbb) as ABCdonneurbbbb, list(nomDonneurbbbb) as nomDonneurbbbb, list(ABCaeiou) as ABCaeiou, list(nomaeiou) as nomaeiou, list(codeEtablissement) as codeEtablissement, list(index) as index, list(count) as count, list(typeOperation) as typeOperation, list(libelleRejet) as libelleRejet , list(application) as application,latest(dateEmissionrrrr) as dateEmissionrrrr, list(canal) as canal, earliest(deviseOrigine) as deviseOrigine, earliest(deviseReglement) as deviseReglement, earliest(refbbbbClient) as refbbbbClient, list(refFichier) as refFichier, earliest(montantOrigine) as montantOrigine, earliest(montantTransfere) as montantTransfere, last(AAAADonneurbbbb) as AAAADonneurbbbb, last(AAAAaeiou) as AAAAaeiou, list(motifRejet) as motifRejet, list(refTransaction) as refTransaction, earliest(encryptedAAAAaeiou) as encryptedAAAAaeiou, earliest(encryptedAAAADonneurbbbb) as encryptedAAAADonneurbbbb, first(heureEmissionrrrr) as heureEmissionrrrr, first(sourcetype) as sourcetype, last(ABC123456Receptrice) as ABC123456Receptrice, last(ABC123456Emetrice) as ABC123456Emetrice,latest(summaryDate) as summaryDate, list(startDate) as startDate, list(endToEndIdentification) as endToEndIdentification, list(messageDefinitionIdentifier) as messageDefinitionIdentifier, list(UUUUUETR) as UUUUUETR, list(ssssssInstructionId) as 
ssssssInstructionId, count(eval(sourcetype="Opération")) as nbOperation, min(startDate) as minStartDate by refEstampillage\ | eval lastSummaryIndex=mvfind(index, "bam_xpto_summary"), lastSummaryIndex=if(isnull(lastSummaryIndex), -1, lastSummaryIndex)\ | foreach * \ [eval <<FIELD>>=mvindex(<<FIELD>>,0, lastSummaryIndex)]\ | eval etat=mvjoin(etat,","), phase=mvjoin(phase,","), identifiantrrrr=mvjoin(identifiantrrrr,","), dateReglement=mvjoin(dateReglement,","), ABCdonneurbbbb=mvjoin(ABCdonneurbbbb,","), nomDonneurbbbb=mvjoin(nomDonneurbbbb,","), ABCaeiou=mvjoin(ABCaeiou,","), nomaeiou=mvjoin(nomaeiou,","), codeEtablissement=mvjoin(codeEtablissement,","),application=mvjoin(application,","),canal=mvjoin(canal,","),motifRejet=mvjoin(motifRejet,","),libelleRejet =mvjoin(libelleRejet ,","),dateReglement=mvjoin(dateReglement,","),refFichier=mvjoin(refFichier,","), sens=mvjoin(sens,","), startDate=mvjoin(startDate,","), count=mvjoin(count,","), oldSummary=summaryDate, endToEndIdentification = mvjoin (endToEndIdentification, ","), messageDefinitionIdentifier = mvjoin (messageDefinitionIdentifier, ","), UUUUUETR = mvjoin(UUUUUETR, ","), ssssssInstructionId = mvjoin(ssssssInstructionId, ","), typeOperation = mvjoin(typeOperation, ","), refTransaction = mvjoin(refTransaction, ",")\ | where _time >= summaryDate OR isnull(summaryDate)\ | majoperation\ | eval count=if(nbOperation > count, nbOperation, count)\ | eval startDate=if(minStartDate<startDate,minStartDate, startDate) \ | where !(mvcount(index)==1 AND index="bam_xpto_summary") \ | fillnull codeEtablissement value="N/R"\ | fillnull refFichier value="Aucun"\ | eval summaryDate=_time, lastUpdate=now(), codeE=codeEtablissement, summaryDate_ms=mvindex(split(_time,"."),1)\ | fields - codeEtablissement index           limits.conf max_per_result_alerts = 500 Inspector Thank you again, waiting anxiously for your answer, Best regards, Ricardo Alves
Hi everyone, I want to create a dashboard where the time filter (a custom one, not a preset provided by Splunk) will affect the result in the table. The data in the table comes from a database, so I use dbxquery. When I run the search below, I get an error:

Error in 'eval' command: The expression is malformed. Expected IN.

I have no idea what is wrong; does anyone have an idea, please?

| dbxquery connection="connection_name"
    [| makeresults
    | eval time1 = tostring("-2h@h")
    | eval time2 = tostring("@h")
    | eval time2 = if(time2=="", now(), time2)
    | eval time1 = if(time2=="now", relative_time(now(), time1), time1)
    | eval time2 = if(time2=="now", now(), time2)
    | eval time1 = if(match(time1,"[@|y|q|mon|w|d|h|m|s]"), relative_time(now(), time1), time1)
    | eval time2 = if(match(time2,"[@|y|q|mon|w|d|h|m|s]"), relative_time(now(), time2), time2)
    | eval time1 = strftime(time1, "%Y-%m-%d %H:%M:%S")
    | eval time2 = strftime(time2, "%Y-%m-%d %H:%M:%S")
    | eval query = "SELECT * FROM "catalog"."schema"."table" WHERE date_time BETWEEN '" . time1 . "' AND '" . time2 . "' "
    | return query]

Thanks a lot.
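For reference, the same eval with the inner double quotes escaped with backslashes, in case the unescaped quotes inside the SELECT string are what makes the expression malformed; this is just a guess and untested:

| eval query = "SELECT * FROM \"catalog\".\"schema\".\"table\" WHERE date_time BETWEEN '" . time1 . "' AND '" . time2 . "' "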
Hello Splunkers, I configured my HF to pull data from an Event Hub and all is good: I'm receiving logs, but too much (around 130 GB/day), and my HF often has trouble parsing and forwarding the logs during the peak of data. I want to use an additional HF in order to share the work, but I do not know how to proceed. If I configure the Add-on on this new HF the same way I did for the first, I will just end up with duplicated data... Would you have any idea?

Thanks, GaetanVP
Hello Splunkers, in a Splunk clustered environment, the "coldToFrozenDir" will be the same for each indexer since it's deployed by the master node to all indexers. So how will Splunk handle the fact that each indexer has replicated data? Will the data be duplicated on my archive storage? Regards, GaetanVP
Hello, we tried to enable SAML SSO on Splunk. We thought it would be simple since it is just a matter of exchanging the XML configuration data on both sides, but it's not working at all. When we log in, we are redirected to an unknown server. We have SSO on many other applications without encountering that problem and without that "sh-i-XX" URL. Could someone who has enabled SSO help me? Below is the conf created automatically from the XML files. Thanks a lot, Dimitri
Hi Splunkers,   Has anyone used a pdf file to load/open as part of a dashboard? (Not a link to the pdf file)   Thanks in advance. Kind Regards, Ariel
Hi, I was posed a question by my customer. Is it possible to forward syslog from a UF to Syslog-ng using the BSD/IETF syslog format? If so, how would one go about implementing it? Thank you in advance for any information provided. Regards, Mikhael
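For reference, this is the kind of outputs.conf syslog stanza I have seen referenced; whether it is supported on a Universal Forwarder (as opposed to a heavy forwarder) is exactly what I am unsure about, and the target host, port, protocol and timestamp format are placeholders:

[syslog:syslog_ng_group]
server = syslogng.example.com:514
type = udp
priority = <13>
timestampformat = %b %e %H:%M:%S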
Hi Team, we are planning to integrate Splunk monitoring into a React Native mobile app, but we haven't been able to find any package for it on NPM / Yarn. Is it possible to use Splunk in React Native mobile apps?
Hi, I'm trying to implement the Microsoft Graph Security Add-On for Splunk. I'm using Splunk Enterprise version 8. The input fails with the following error:

2022-11-29 14:19:07,357 ERROR pid=17546 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/ta_microsoft_graph_security_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/microsoft_graph_security.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/input_module_microsoft_graph_security.py", line 63, in collect_events
    access_token = _get_access_token(helper)
  File "/opt/splunk/etc/apps/TA-microsoft-graph-security-add-on-for-splunk/bin/input_module_microsoft_graph_security.py", line 39, in _get_access_token
    return access_token[ACCESS_TOKEN]
KeyError: 'access_token'
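To narrow it down, this is roughly the client-credentials token request I believe the add-on performs against Azure AD; calling it directly (tenant ID, client ID and secret are placeholders) would show whether an access_token comes back at all. This is just a diagnostic sketch, not something taken from the add-on's documentation:

curl -s -X POST "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<app_client_id>" \
  -d "client_secret=<app_client_secret>" \
  -d "scope=https://graph.microsoft.com/.default"

If the response contains an error description instead of an access_token, the KeyError above would just be the symptom of that.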
Hi All, we are trying to collect desktop-experience data in Splunk using uberAgent. We have installed a Splunk Enterprise trial on a Windows machine; that machine hosts uberAgent, and an additional machine also has uberAgent sending logs to this server. Prompted by the message popup, we created an index for the incoming uberAgent data and can see the events piling up in that index, but we have failed to get them populated in any dashboard or search. Please share a step-by-step setup link for this architecture, or suggest whether additional configuration is needed to support uberAgent logs.
Hello! DSDL (Deeplearning toolkit) is set up and the Golden Image CPU container is started. I ran the example "Entity Recognition and Extraction Example for Japanese using spaCy + Ginza Library" and an error occurred.     MLTKC error: /fit: ERROR: unable to initialize module. Ended with exception: No module named 'ja_ginza' MLTKC parameters: {'params': {'algo': 'spacy_ner_ja', 'epochs': '100'}, 'args': ['text'], 'feature_variables': ['text'], 'model_name': 'spacy_ginza_entity_extraction_model', 'output_name': 'extracted', 'algo_name': 'MLTKContainer', 'mlspl_limits': {'handle_new_cat': 'default', 'max_distinct_cat_values': '100', 'max_distinct_cat_values_for_classifiers': '100', 'max_distinct_cat_values_for_scoring': '100', 'max_fit_time': '600', 'max_inputs': '100000', 'max_memory_usage_mb': '4000', 'max_model_size_mb': '30', 'max_score_time': '600', 'use_sampling': 'true'}, 'kfold_cv': None}      Does the container "Golden Image CPU" support Japanese entity extraction? Any help would be much appreciated.Thank you!!
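In case the failure simply means the Japanese model is not installed in the image, this is roughly how I would try adding it inside the running container; the package names are the public ginza/ja_ginza PyPI packages and the container ID is a placeholder, so treat this as a guess:

docker exec -it <dsdl_container_id> pip install ginza ja_ginza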
Greetings. We recently turned on a HEC and have JSON data coming in, and I have noticed that multiple JSON blobs are embedded in _raw. I searched several solutions and found one that actually did parse _raw into a new column "split_raw", and then I went so far as to try | eval raw=split_raw, but when I do | table * it still shows all the data from the first entry only. I think my questions are: 1. The events with multiple JSON entries seem to occur when a bunch arrive at about the same time; is there a way to FORCE these to split at ingestion (to guarantee 1:1 JSON-to-event)? My guess is I may need to play with my sourcetype, but I'm looking for some guidance/thoughts. 2. If not, will I have to split them (like the link above) and then do some processing on the new split_raw field? Thank you so much for leads and thoughts on this!
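For question 1, this is the sort of props.conf I imagine on the indexing side to force a break between back-to-back JSON objects; the sourcetype name is a placeholder, and it assumes the data comes in through an endpoint where index-time line breaking applies (i.e. not the already-split /event endpoint). Untested:

[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*)\{
TRUNCATE = 0
KV_MODE = json

The capture group between the closing and opening braces is what gets discarded, so each event should end with its own } and the next event should start at its own {.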