All Topics

I have the following data and I am trying to create a timechart of the average duration by channel:

"_time",duration,CH
"2020-02-13 11:30:32.367",275,BOSRetail
"2020-02-13 12:47:59.334",202,LTSBRetail
"2020-02-13 11:02:54.025",216,BOSRetail
"2020-02-13 11:26:11.459",264,BOSRetail
"2020-02-13 11:53:03.636",179,BOSRetail
"2020-02-13 11:20:53.384",269,BOSRetail
"2020-02-13 10:58:52.428",264,BOSRetail
"2020-02-13 09:41:22.445",216,LTSBRetail
"2020-02-13 09:56:09.820",233,LTSBRetail
"2020-02-13 10:58:13.035",240,LTSBRetail
"2020-02-13 11:47:48.664",325,BOSRetail
"2020-02-13 12:21:27.147",274,LTSBRetail
"2020-02-13 11:18:59.352",235,BOSRetail
"2020-02-13 11:23:25.297",257,BOSRetail
"2020-02-13 11:03:32.007",274,HalifaxRetail
"2020-02-13 11:02:15.745",181,LTSBRetail
"2020-02-13 11:47:03.084",264,BOSRetail
"2020-02-13 15:28:01.956",260,HalifaxRetail
"2020-02-13 11:54:23.306",276,BOSRetail
"2020-02-13 11:55:58.454",215,LTSBRetail
"2020-02-13 11:00:05.081",240,HalifaxRetail
"2020-02-13 11:56:38.345",236,BOSRetail
"2020-02-13 11:49:52.787",226,BOSRetail
"2020-02-13 15:24:13.651",247,HalifaxRetail
"2020-02-13 09:31:26.887",194,LTSBRetail
"2020-02-13 11:51:59.928",262,BOSRetail
"2020-02-13 11:57:18.917",227,HalifaxRetail
"2020-02-13 09:42:04.574",171,LTSBRetail
"2020-02-13 15:25:51.943",334,HalifaxRetail

For some unknown reason the average duration values are not reflected on the timechart using the query below:

| timechart span=1h avg(duration) by CH
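A hedged guess at the cause: since the data came in as CSV, duration may have been extracted as a string (or the quoted timestamp may not have been parsed into _time), in which case avg() produces nothing to chart. A minimal sketch, assuming the field names shown above:

```
| eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval duration=tonumber(duration)
| timechart span=1h avg(duration) by CH
```

If the timechart still comes back empty, checking `| stats count by CH` and `| eval t=typeof(duration)` would confirm whether the fields are being extracted at all.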
I want to write some Scala that writes out to the Splunk logging API, so I went here to get started. It links here to get the JAR. The only JARs there are for the SDK and SimData. The only logging link there is to Github. The Github link includes links to source code, which is fine, but I'm new to Java, and I don't know how to build it. I assume it was a mistake to purport to link to a JAR where there is no JAR. So first: is there some missing link to an actual logging JAR? But I'm happy to build from source, if someone can point me to instructions on doing that. Can anyone help?
This search:

| makeresults | eval a="1" | append [| makeresults | eval b="2"] | fillnull value="" | stats list(a)

vs.

| makeresults | eval a="1" | append [| makeresults | eval b="2"] | fillnull value="0" | stats list(a)

... shows only one result, even though

| makeresults | eval a="1" | append [| makeresults | eval b="2"] | fillnull value="" | stats list(a) AS z | eval w=mvcount(z) | table w

... counts 2. Isn't this the whole point of using fillnull? What's the most elegant workaround? This seems super dangerous and misleading.
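One possible workaround (an assumption, not a verified explanation of the behavior): the empty string appears to be kept in the multivalue (hence mvcount=2) but renders invisibly, so filling with a visible sentinel instead of "" makes both values show:

```
| makeresults
| eval a="1"
| append [| makeresults | eval b="2"]
| fillnull value="N/A" a
| stats list(a)
```

Restricting fillnull to the specific field (as above) also avoids stamping the sentinel into unrelated columns.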
Azure Security Center Alerts and Tasks, Azure Resource Groups, Azure Virtual Networks, Azure Compute, Azure Billing Consumption, Azure Reservation Recommendation, and others all require a subscription ID to be specified. What I would like is the option to specify "all subscriptions" and/or a list of subscriptions defined in the configuration options. Is that something that can be achieved? If I pick "all subscriptions", the code would get the list of subscriptions first and then use the API for each one.
I am trying to search for a server which is named differently from all the others in our network. Commonly, servers are named with a location followed by 4 digits and then some string at the end (e.g. Flra2209php_ua). If one of the machines is not following this naming convention, how do I search for it? I was hoping there would be a "not like" function that might help with this.
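One hedged sketch: express the naming convention as a regex and negate the match with the regex command. The pattern below is an assumption about the convention (letters, exactly 4 digits, then a suffix) and will likely need tuning:

```
index=... sourcetype=...
| regex host!="(?i)^[a-z]+\d{4}\w*$"
```

Events surviving the pipe are those whose host does not fit the convention.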
I've been poking around the interwebs trying to figure out whether there is a benefit or downside to going with the new AMD Rome EPYC architecture for our Splunk servers, but I don't find anything specific. I "think" we would be good to go, but I was hoping for a more definitive answer. Cheers. Rich Hickey
Hi, I have a new HF that accepted logs for about a week, then stopped receiving almost all logs at the same time. I compared this HF with the old working one and I don't see rotated logs created on the new HF. For instance, in the log1 directory on the old HF I see log1.log and several rotated copies like log1.log-date1.gz, log1.log-date2.gz and so on, but on the new HF I only see log1.log. I think the missing rotated logs on the HF could be the issue, but I'm not sure, nor do I know how to get these rotated logs created. If anyone can help, I appreciate it. Thanks,
Indexers + SH set up on prem. What is the best way for Splunk to monitor a k8s cluster deployed as a one-box setup / 3-node setup (HA) / 6-node setup (HA DR)? Thanks in advance!
What would be a way to get data from an external machine which is not part of our environment? Correct me if I am wrong: I was assuming we would install a UF on the external machine, create an HTTP token on a HF in our environment, and give the external machine the token, URL and port details so it can send the data. Is this the way to get data from an external machine through an HTTP token? The data sits in a custom path and is in CSV format. Thanks in advance.
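One hedged note: a UF does not use HEC tokens at all; it forwards over splunktcp to a receiving port on the HF (9997 by default). HEC tokens are for clients sending over HTTPS without a UF. A sketch of the UF-based route, with hypothetical host and path:

```
# inputs.conf on the UF (external machine)
[monitor:///custom/path/*.csv]
sourcetype = csv

# outputs.conf on the UF, pointing at the HF's receiving port
[tcpout]
defaultGroup = hf_out

[tcpout:hf_out]
server = hf.example.com:9997
```

Either route works; the choice is mainly whether installing a UF on the external machine is permitted, and whether the HF's splunktcp or HEC port is reachable through the firewall.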
I have a search based on a lookup that is pulling names and totals over the course of a 24-hour period or a week. How can I sum each column without having to sum every field individually?

cdr_events duration>0 ((callingPartyGroup="00581" OR originalCalledPartyGroup="00581" OR finalCalledPartyGroup="00581"))
| calculate_all_internal_parties
| lookup groups number as number output name group subgroup
| search (group="00581")
| timechart dc(callId) by name

I could get it by running | sum("Tony Freeman") as "Tony Freeman" sum("Andrea Cook") as "Andrea Cook" etc. etc., but is there an easier way to do that?
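One approach that may avoid naming each field (a sketch, assuming the timechart output above) is addcoltotals, which appends a row totaling every numeric column:

```
cdr_events duration>0 ((callingPartyGroup="00581" OR originalCalledPartyGroup="00581" OR finalCalledPartyGroup="00581"))
| calculate_all_internal_parties
| lookup groups number as number output name group subgroup
| search (group="00581")
| timechart dc(callId) by name
| addcoltotals label="Total" labelfield=_time
```

The labelfield option marks the appended totals row; whether _time is the right place for that label depends on how the results are displayed.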
Splunk as a product: what percentage does Splunk assure for no data loss? Is there anything like 99% or 99.99%? Any document for reference would be helpful.
Hi all, I am trying to install an app on the search head, and even after installing it (Manage apps -> Install app from file -> upload) and restarting the search head, the app is not appearing. I have even checked the apps directory on the server and can't find it. Could you please suggest what might be going wrong in this case?
We have a simple CSV lookup like:

network,descr
192.168.0.0/24,network_name

Lookup definition in transforms.conf:

[networklist_allocs_all]
filename = networklist_allocs_all.csv
max_matches = 1
min_matches = 1
default_match = OK
match_type = CIDR(network)

Any search command like:

... | lookup networklist_allocs_all network AS src_ip ...

crashes Splunk on the indexers quite often, about 5-10 crash logs on each indexer per day. Part of a crash log is at the end of the question. How can we resolve this situation? It seems that Splunk began to crash after the update from version 7 to 8; we didn't make any changes to the lookup format or definition. Unfortunately we can't open a support case for some reasons, so we are asking the community for help.

crash-xx.log:

[build 6db836e2fb9e] 2020-02-13 17:00:56 Received fatal signal 11 (Segmentation fault).
Cause: No memory mapped at address [0x0000000000000058].
Crashing thread: BatchSearch
Registers:
 RIP: [0x0000564E2F44470F] _ZN14LookupMatchMap16mergeDestructiveERS_ + 31 (splunkd + 0x223670F)
 RDI: [0x00007FC9C01FCA80]  RSI: [0x0000000000000000]  RBP: [0x0000000000000000]  RSP: [0x00007FC9C01FC9A0]
 RAX: [0x00007FC9CDB3AE08]  RBX: [0x0000000000000000]  RCX: [0x0000000000000001]  RDX: [0x00007FC9BFB7F608]
 R8: [0x00007FC9C01FC9C0]   R9: [0x00007FC9C01FC9BF]   R10: [0x0000000000000010]  R11: [0x0000000000000080]
 R12: [0x00007FC9928E33C0]  R13: [0x00007FC9C01FCA80]  R14: [0xAAAAAAAAAAAAAAAB]  R15: [0x00007FC9BFB7F600]
 EFL: [0x0000000000010246]  TRAPNO: [0x000000000000000E]  ERR: [0x0000000000000004]
 CSGSFS: [0x0000000000000033]  OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64

Backtrace (PIC build):
 [0x0000564E2F44470F] _ZN14LookupMatchMap16mergeDestructiveERS_ + 31 (splunkd + 0x223670F)
 [0x0000564E2F444EE8] _ZN14UnpackedResult8finalizeEv + 168 (splunkd + 0x2236EE8)
 [0x0000564E2F445E26] _ZN18LookupDataProvider6lookupERSt6vectorIP15SearchResultMemSaIS2_EERK17SearchResultsInfoR16LookupDefinitionPK22LookupProcessorOptions + 2118 (splunkd + 0x2237E26)
 [0x0000564E2F451B7F] _ZN12LookupDriver13executeLookupEP29IFieldAwareLookupDataProviderP15SearchResultMemR17SearchResultsInfoPK22LookupProcessorOptions + 367 (splunkd + 0x2243B7F)
 [0x0000564E2F451C22] _ZN18SingleLookupDriver7executeER18SearchResultsFilesR17SearchResultsInfoPK22LookupProcessorOptions + 98 (splunkd + 0x2243C22)
 [0x0000564E2F43BDFF] _ZN15LookupProcessor7executeER18SearchResultsFilesR17SearchResultsInfo + 79 (splunkd + 0x222DDFF)
 [0x0000564E2F08FC7D] _ZN15SearchProcessor16execute_dispatchER18SearchResultsFilesR17SearchResultsInfoRK3Str + 749 (splunkd + 0x1E81C7D)
 [0x0000564E2F07F528] _ZN14SearchPipeline7executeER18SearchResultsFilesR17SearchResultsInfo + 344 (splunkd + 0x1E71528)
 [0x0000564E2F1931D9] _ZN16MapPhaseExecutor15executePipelineER18SearchResultsFilesb + 153 (splunkd + 0x1F851D9)
 [0x0000564E2F193901] _ZN25BatchSearchExecutorThread13executeSearchEv + 385 (splunkd + 0x1F85901)
 [0x0000564E2DEDCB8F] _ZN20SearchExecutorThread4mainEv + 47 (splunkd + 0xCCEB8F)
 [0x0000564E2EC41AC8] _ZN6Thread8callMainEPv + 120 (splunkd + 0x1A33AC8)
 [0x00007FC9CC7B76BA] ? (libpthread.so.0 + 0x76BA)
 [0x00007FC9CC4EC41D] clone + 109 (libc.so.6 + 0x10741D)

Linux / c-6.index.splunk / 4.4.0-21-generic / #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 / x86_64
/etc/debian_version: stretch/sid
...
Last errno: 0
Threads running: 13
Runtime: 436566.363455s
argv: [splunkd -p 8089 start]
Process renamed: [splunkd pid=8371] splunkd -p 8089 start [process-runner]
Process renamed: [splunkd pid=8371] search --id=remote_d-0.search.splunk_scheduler__gots_eWFuZGV4LWFsZXJ0cw__RMD5d361bb2fcae608a1_at_1581602460_31361_374F9CE8-43DB-48C4-9F22-7982EC4B6AD5 --maxbuckets=0 --ttl=60 --maxout=0 --maxtime=0 --lookups=1 --streaming --sidtype=normal --outCsv=true --acceptSrsLevel=1 --user=gots --pro --roles=admin:power:user
Regex JIT enabled
RE2 regex engine enabled
using CLOCK_MONOTONIC
Preforked process=0/59436: process_runtime_msec=54987, search=0/188869, search_runtime_msec=1240, new_user=Y, export_search=Y, args_size=356, completed_searches=3, user_changes=2, cache_rotations=3
Thread: "BatchSearch", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7fc9c70845e8:
00000000  00 e7 1f c0 c9 7f 00 00                          |........|
00000008
SearchExecutor Thread ID: 1
Search Result Work Unit Queue: 0x7fc9b77ff000
Search Result Work Unit Queue Crash Reporting Type: NON_REDISTRIBUTE
Number of Active Pipelines: 2
Max Count of Results: 0
Current Results Count: 352
Queue Current Size: 0
Queue Max Size: 200, Queue Drain Size: 120
FoundLast=N Terminate=N
Total Bucket Finished: 1.0012863152522, Total Bucket Count: 16
===============Search Processor Information===============
Search Processor: "lookup" type="SP_STREAM" search_string=" networklist_allocs_all network AS dest_ip " normalized_search_string="networklist_allocs_all network AS dest_ip" litsearch="networklist_allocs_all network AS dest_ip " raw_search_string_set=1 raw_search_string=" networklist_allocs_all network AS dest_ip " args={StringArg: {string="networklist_allocs_all" raw="networklist_allocs_all" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="network" raw="network" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="AS" raw="AS" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="dest_ip" raw="dest_ip" quoted=0 parsed=1 isopt=0 optname="" optval=""}} input_count=885 output_count=0 directive_args=
==========================================================
I am really struggling with how to frame the question. In essence I need to display the duration trucks spend waiting in a carpark and display the average waiting time, but this must further be split down by shift. So Early is, say, 6am - 2pm, Late is 2pm - 10pm and Nights are 10pm - 6am.

I have used this code to determine the current shift based on the hour of the day:

|eval iHour=strftime(strptime(TIMESTAMP,"%Y-%m-%d %H:%M:%S"),"%H")
|eval iDay=strftime(strptime(TIMESTAMP,"%Y-%m-%d %H:%M:%S"),"%Y-%m-%d")
|eval iDay=round(strptime(iDay,"%Y-%m-%d"),0)
|eval iDay=if(iHour>=22 AND iHour <24,iDay+86400,iDay)
|eval shift=if(iHour >= 6 AND iHour < 14,"Early",if(iHour >= 14 AND iHour < 22,"Late","Night"))

And this for working out average queue times, but for a week:

|dedup MANIFESTID
|search STATE=6 AND LOADTYPE="L"
|eval iTrkConfirmed=strptime(TIMEPARK,"%Y-%m-%d %H:%M:%S")
|eval iTrkCallForward=strptime(TIMEDPLY,"%Y-%m-%d %H:%M:%S")
|eval iTrkQueueTime = round((iTrkCallForward - iTrkConfirmed)/3600,2)
|timechart span=1d avg(iTrkQueueTime) as Avg_QueueTime
|timewrap 1w
| foreach * [eval <>=round('<>',2)]

Both are from different searches, but I just cannot for the life of me work out how to take the salient pieces from each search to display the average wait time by shift. Any help or pointers would be greatly appreciated. Thank you.
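A possible way to fold the two searches into one (a sketch, assuming the fields from both snippets and that the shift is determined by the park time; untested):

```
|dedup MANIFESTID
|search STATE=6 AND LOADTYPE="L"
|eval iHour=tonumber(strftime(strptime(TIMEPARK,"%Y-%m-%d %H:%M:%S"),"%H"))
|eval shift=if(iHour>=6 AND iHour<14,"Early",if(iHour>=14 AND iHour<22,"Late","Night"))
|eval iTrkQueueTime=round((strptime(TIMEDPLY,"%Y-%m-%d %H:%M:%S")-strptime(TIMEPARK,"%Y-%m-%d %H:%M:%S"))/3600,2)
|stats avg(iTrkQueueTime) as Avg_QueueTime by shift
```

Swapping the final stats for `| timechart span=1d avg(iTrkQueueTime) by shift` would give the per-day trend split by shift instead of a single average.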
Hi, I have a daily scheduled report which goes to an SFTP server in CSV format. I am getting complaints that the data is not coming through properly. I investigated and suspect that it may be because of the multi-valued fields in the table, but I am not sure. In Splunk it shows something like I have attached, and in the CSV delivered on the server it looks like this, very weird, around the column named deviceDescription:

app,"serviceName","2020-02-12 23:34:01","2020-02-12 23:34:01",34567,ANA,C,,51228586,"HD BOX (CISCO),,,,,,,,,,,, TIVO 500GB BOX (CISCO),,,,,,,,,,,,,,,,,,,,,, TIVO 1TB BOX (ARRIS),,,,,,,,,,,,,,,,,,,,,, TIVO 1TB BOX (ARRIS)",456,Agent,,,,5678997,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,

The total number of columns in the table is 23, but in the CSV there seem to be more than 23 commas per row. Any help is appreciated.
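If the multi-valued deviceDescription field is indeed the culprit (an assumption based on the sample above), flattening it into a single delimited string before the report finishes may keep the CSV columns aligned:

```
| eval deviceDescription=mvjoin(deviceDescription, "; ")
```

mvjoin collapses the multiple values into one cell, so each row exports with the expected 23 columns.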
Hi, I have several universal forwarders deployed, and I'm getting lots of events I want to filter out. I understand from reading answers here that I need to do this on the indexer (or else install heavy forwarders on my endpoints, which I don't want to do).

This is a raw entry that I'm trying to drop / filter out on my indexer (i.e. to keep it from using up lots of my license):

02/13/2020 10:19:09.016 event_status="(0)The operation completed successfully." pid=1216 process_image="c:\Program Files\VMware\VMware Tools\vmtoolsd.exe" registry_type="CreateKey" key_path="HKLM\system\controlset001\services\tcpip\parameters" data_type="REG_NONE" data=""

This is the entry from inputs.conf on the forwarders that is sending some of the events I want to filter out:

[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create

And I have added these lines on my indexer (and restarted), but I'm still seeing the events come in:

# in props.conf (located in: C:\Program Files\Splunk\etc\users\admin\search\local\props.conf):
[WinRegMon://default]
TRANSFORMS-set= setnull

# in transforms.conf (located in: C:\Program Files\Splunk\etc\users\admin\search\local\transforms.conf):
[setnull]
REGEX = process_image=.+vmtoolsd.exe"
DEST_KEY = queue
FORMAT = nullQueue

Thanks! (I've been referencing many answers, including this good one): (h)ttps:// answers.splunk.com/answers/37423/how-to-configure-a-forwarder-to-filter-and-send-the-specific-events-i-want.html
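A hedged observation on the config above: props.conf under etc\users\...\search\local is user-level search-time scope and is not consulted for index-time transforms, and props.conf stanzas match on sourcetype, source, or host rather than the inputs.conf stanza name (WinRegMon events typically arrive with sourcetype WinRegistry, though that is worth verifying against the incoming events). A sketch of what might work instead, placed in $SPLUNK_HOME\etc\system\local on the indexer:

```
# props.conf
[WinRegistry]
TRANSFORMS-set = setnull

# transforms.conf
[setnull]
REGEX = process_image=.+vmtoolsd\.exe
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the indexer is still required after moving the stanzas.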
I have entries in a log file that carry information like the following two:

transaction sucessful. Request: {empName=Sam, empNum=40012, empMgr=John, empDept=102}
transaction sucessful. Request: {empName=John, empNum=40001, empDept=102}

In this case, empName, empNum, empMgr and empDept are the variables for which each request is sending a value. I want a report that shows all the values under these variables for the successful transactions, like this:

empName  empNum  empMgr  empDept
Sam      40012   John    102
John     40001           102
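A hedged sketch of one extraction approach, using the extract command to pull the key=value pairs out of the braces (the index filter is a placeholder and the delimiter lists are assumptions that may need tuning):

```
index=... "transaction sucessful"
| extract pairdelim=",{}" kvdelim="="
| table empName empNum empMgr empDept
```

An equivalent alternative is rex with max_match to capture each pair explicitly, which gives tighter control if other key=value text appears in the events.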
We have a few users that need access to application logs. We have our Active Directory admins create a group, and once they create that group it shows up in Splunk for us to add a role to. The latest group to be created shows up in the "Access controls » Authentication method » LDAP strategies » LDAP Groups" page, but once I try to add a role other than "user" it doesn't show as added in the UI, even when the message at the top of the screen says the role has been added. The users can't search any logs that they should have access to through the new role created for the new LDAP group. What's odd is that /opt/splunk/etc/system/local/authentication.conf has the new role added to the new LDAP group. Looking in splunkd.log, there is this message:

02-06-2020 10:58:07.296 -0500 WARN UserManagerPro - Strategy="Splunk": the group="SPL_DIGITAL" was not found on the LDAP server. Suggest to remove it from the role map to save server loading time.

Not sure what to do. Not sure if this is a problem with AD or with Splunk.
One of our 3rd party apps has some pretty unfriendly logging. The app itself carries out somewhere between 20-30 jobs, each of which has its own log. The issue we have is that all logs are written to one directory, and the log files themselves are named like this: 20200213.445933.log. The only way to distinguish between job log files is a header within each log that includes a description.

A further issue is that every line in the file is prefixed with a date and time. This results in Splunk splitting every line into a separate event, even when the true event may be several lines long. For example:

[2020-02-13 15:00:34] #########################################################
[2020-02-13 15:00:34] # Log File Path: /data/logs/jobs/20200213.445933.log
[2020-02-13 15:00:34] # Creation Date: Thu Feb 13 15:00:34 GMT 2020
[2020-02-13 15:00:34] # Description: DQ:Import DQ CAR Files
[2020-02-13 15:00:34] # Parameters: --terminatetime 175000 -mapping 52000 -daemon yes -rb true
[2020-02-13 15:00:34] #########################################################
[2020-02-13 15:00:34] 'INIT' actions:
[2020-02-13 15:00:34] Collect Files
[2020-02-13 15:00:34] Collect Files Action
[2020-02-13 15:00:34] Connected: ftp://***********************
[2020-02-13 15:00:34] Filter: ^BT.*\.CAR
[2020-02-13 15:00:35] Files found: 0
[2020-02-13 15:00:35] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:00:35] Found no Batch files to import
[2020-02-13 15:00:35] No 'CLSE' actions
[2020-02-13 15:01:35] 'INIT' actions:
[2020-02-13 15:01:35] Collect Files
[2020-02-13 15:01:35] Collect Files Action
[2020-02-13 15:01:35] Connected: ftp://***********************
[2020-02-13 15:01:35] Filter: ^BT.*\.CAR
[2020-02-13 15:01:35] Files found: 0
[2020-02-13 15:01:35] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:01:35] Found no Batch files to import
[2020-02-13 15:01:35] No 'CLSE' actions
[2020-02-13 15:02:45] 'INIT' actions:
[2020-02-13 15:02:45] Collect Files
[2020-02-13 15:02:46] Collect Files Action
[2020-02-13 15:02:46] Connected: ftp://***********************
[2020-02-13 15:02:46] Filter: ^BT.*\.CAR
[2020-02-13 15:02:46] Files found: 0
[2020-02-13 15:02:46] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:02:46] Found no Batch files to import
[2020-02-13 15:02:46] No 'CLSE' actions
[2020-02-13 15:03:47] 'INIT' actions:
[2020-02-13 15:03:47] Collect Files
[2020-02-13 15:03:47] Collect Files Action
[2020-02-13 15:03:47] Connected: ftp://***********************
[2020-02-13 15:03:47] Filter: ^BT.*\.CAR
[2020-02-13 15:03:47] Files found: 0
[2020-02-13 15:03:47] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:03:47] Found no Batch files to import
[2020-02-13 15:03:47] No 'CLSE' actions

One event would actually look like this:

[2020-02-13 15:00:34] 'INIT' actions:
[2020-02-13 15:00:34] Collect Files
[2020-02-13 15:00:34] Collect Files Action
[2020-02-13 15:00:34] Connected: ftp://***********************
[2020-02-13 15:00:34] Filter: ^BT.*\.CAR
[2020-02-13 15:00:35] Files found: 0
[2020-02-13 15:00:35] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:00:35] Found no Batch files to import
[2020-02-13 15:00:35] No 'CLSE' actions

Our 3rd party developer has advised that this cannot be changed, so the only option is to work around it in Splunk somehow. I was wondering if it is possible to regex out the description in each log and assign it as a sourcetype. Each sourcetype could then have its own event splitting rules. Is this possible?
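On the event-splitting half of the problem, a hedged props.conf sketch that merges lines until the next 'INIT' header (the stanza name and break pattern are assumptions and would need tuning against the real files):

```
# props.conf for whatever sourcetype these files are assigned
[my_job_logs]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = \]\s+'INIT' actions:
MAX_EVENTS = 512
```

Assigning a sourcetype per file from the Description header is harder, since the header occurs only once per file while sourcetype overrides act per event; an index-time transform with DEST_KEY = MetaData:Sourcetype can rewrite the sourcetype of events that match a regex, but only the header event itself would contain the description, so a scripted or preprocessing step is the more common workaround.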