All Topics


I have successfully trained a LogisticRegression model using TF-IDF data (3K events in a month) as input, with probabilities=true. During the fit it shows all the probabilities correctly, and I can even do a ROC curve analysis. The problem comes when I use the model: I run a new search, apply TF-IDF to the data, and then run "| apply logistic_model probabilities=true" against new data (say, the last 24 hours). It only shows probabilities for the first event (sometimes two or three, but never all, if I apply the model to old data); the others appear blank, although the predicted field is populated correctly. However, if I run a search applying only the TFIDF_model, without applying logistic_model, then run "| loadjob 123ABC" to load the previously calculated TF-IDF data and apply the model to that loaded job, the probabilities magically appear. I am almost sure this is a bug, but I want to know if there is a workaround? Thanks
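A possible workaround sketch, based on the loadjob observation above: persist the TF-IDF output first, then apply the model to the persisted data in a second search. The index and lookup names here (my_index, tfidf_features.csv) are placeholders, not real objects.

index=my_index earliest=-24h
| apply TFIDF_model
| outputlookup tfidf_features.csv

| inputlookup tfidf_features.csv
| apply logistic_model probabilities=true

This just mimics the loadjob trick without needing a job id; it does not fix the underlying issue.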
Hi All, I want to move the paginator of my dashboard panel to the top of the panel. The dashboard is a Simple XML dashboard. Is there any way to do this by editing the dashboard directly, without adding CSS? If not, please suggest the CSS method as well. Thanks in advance!
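For reference, there does not appear to be a Simple XML attribute for paginator position, so CSS is the usual route. A sketch, assuming the panel carries id="myTablePanel" in the XML and that the paginator is rendered with the splunk-paginator class (worth confirming in your browser's inspector, since class names can vary across Splunk versions):

<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      #myTablePanel .splunk-paginator { position: absolute; top: 8px; right: 8px; }
      #myTablePanel table { margin-top: 40px; }
    </style>
  </html>
</panel>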
Hi everyone, I'm probably hitting an issue with the extraction of my Fortinet data. I have installed the following apps:

Fortinet FortiGate App for Splunk (SplunkAppForFortinet) 1.5.1
Fortinet FortiGate Add-on for Splunk (Splunk_TA_fortinet_fortigate) 1.6.2

Does anyone know the difference between the fields action and ftnt_action? I'm getting different results from them: in action I have, for example, "blocked", but in ftnt_action I have "detected" and also "dropped". This is a bit confusing while I'm trying to get only blocked attacks. Could someone please help me?
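A quick diagnostic sketch for comparing the two fields side by side (the index and sourcetype names here are assumptions; adjust to your environment):

index=fortinet sourcetype=fortigate_utm
| stats count by ftnt_action action

As far as I understand, ftnt_action carries the raw FortiGate value while action is the add-on's normalized (CIM-mapped) field, so a table like this should reveal how one maps to the other in your data.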
I need to find the rows from the first inputlookup collection that have matching field values in the second inputlookup collection. For example:

collectionA:

field1  field2  field3
X       1       3
X       2       4
Y       4       1
Z       1       2
B       3       3
B       1       1

collectionB:

fieldX
X
Y
B

The expected result (excluding the row containing 'Z', as it has no entry in collectionB):

field1  field2  field3
X       1       3
X       2       4
Y       4       1
B       1       1

The query looks like: | inputlookup collectionA | search field1 IN ('X','Y','Z'....). How can I take the values 'X','Y','Z'.... to search field1 against from collectionB, given that this list can be of any length? I tried the following but it didn't work: | inputlookup collectionA | search field1 IN (| inputlookup collectionB | fields fieldX). (In reality collectionB can have more than one column, but I want to match values only against fieldX.)
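A subsearch sketch that should do this: renaming fieldX to field1 inside the subsearch makes it emit field1=<value> filter terms, and the fields command drops any extra columns collectionB may have:

| inputlookup collectionA
| search [ | inputlookup collectionB | fields fieldX | rename fieldX as field1 ]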
I am getting events with Australian timestamps, but my laptop runs on IST. When I try to chart the events from the beginning of today (i.e., 12:00 AM) to now with a span of 3h, my timechart starts from 10 PM yesterday. I don't understand the mistake. Above the results it shows events from 8 Sep 2021 00:00:00 to 8 Sep 2021 13:53:12 (now), but I don't know why the chart starts from 10 PM yesterday. I am calculating the average of a particular field.
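A diagnostic sketch that may help: render each event's timestamp with its UTC offset, since "Today" snaps to midnight in your Splunk user profile's time zone rather than the data's (index name is a placeholder):

index=your_index earliest=@d latest=now
| eval rendered=strftime(_time, "%Y-%m-%d %H:%M:%S %z")
| table _time rendered

If the offsets are not what you expect, check the time zone set on your Splunk account under Preferences.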
Hi, I want to add a text box to a dashboard panel, and the manually entered value of that text box should be added to a new column in an already existing table. I understand that this can be done with a lookup to save the values, but I am not sure how to go about it. This is the format of the table I have, with sample data (the original data is confidential):

EMAIL              NAME         IP          ID (new column)
nish123@gmail.com  Nishanth     10.10.10.0
abc098@gmail.com   ABC          224.0.0.0
amit187@gmail.com  Amit Sharma  63.125.0.0

I want to add a text box to this panel whose value gets written into the ID column, keyed on the unique value of EMAIL, and I want to save the table with the new ID values. How can this be done? Any help would be appreciated, thanks.
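A rough sketch of the lookup approach, with made-up names throughout ($email_tok$, $id_tok$, email_ids.csv): text inputs set the tokens, a search run on submit upserts the pair (assuming email_ids.csv has been created once with outputlookup), and the panel search joins the lookup back in.

Upsert search:

| makeresults
| eval EMAIL="$email_tok$", ID="$id_tok$"
| fields EMAIL ID
| inputlookup append=t email_ids.csv
| dedup EMAIL
| outputlookup email_ids.csv

Panel search:

... your base search ...
| lookup email_ids.csv EMAIL OUTPUT ID

The dedup keeps the first occurrence of each EMAIL, so the freshly entered row wins over the stored one.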
1) What will you do when there is a delay in the indexer?
2) How long is the delay period? (Is there any maximum time cap, or will you wait for the delay in the indexer to clear completely?)
3) Will you send any notifications regarding the indexer delay? If yes:
   i) What information can you include in that notification (like a tentative time for the next alert schedule)?
   ii) If there is a continuous delay and you missed 2-5 time intervals, can you send a mail for each time period, or a single mail with all the information?
4) If there is a 2-hour delay in the indexer, do you check the missed intervals after the delay has cleared, or only from the current time period? (For example, RunFrequency is 5 mins and there is a delay from 10 AM that clears at 11 AM. Do you scan from 10 AM or from 11 AM?)
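For what it's worth, a standard sketch for measuring indexing delay, comparing event time with index time (the index name is a placeholder):

index=your_index
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by host

An alert on max_lag crossing a threshold is one common way to drive the notifications discussed above.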
Hello. Splunk version: 8.2.2. Splunk DB Connect version: 3.6.0. After updating Splunk Enterprise from 8.0.2 to 8.2.2, I noticed a timezone problem with Oracle database sources. My timezone is Europe/Kiev (GMT+3). I set the timezone in the DB connection settings, and I also tried setting it in the Java settings (-Duser.timezone=Europe/Kiev), but each time I get the same result. I created a DB connection with this select:

SELECT CAST(EXTENDED_TIMESTAMP AS TIMESTAMP) EXTENDED_TIMESTAMP, AUDIT_TYPE, STATEMENT_TYPE, RETURNCODE FROM SYS.DBA_COMMON_AUDIT_TRAIL WHERE EXTENDED_TIMESTAMP > ? ORDER BY EXTENDED_TIMESTAMP

Rising column: EXTENDED_TIMESTAMP. Time column: EXTENDED_TIMESTAMP.
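One SQL-side workaround to try (a sketch, not a confirmed fix): EXTENDED_TIMESTAMP in DBA_COMMON_AUDIT_TRAIL is a TIMESTAMP WITH TIME ZONE, so the zone can be pinned in the query itself before the cast, taking the JVM default out of the picture:

SELECT CAST(EXTENDED_TIMESTAMP AT TIME ZONE 'Europe/Kiev' AS TIMESTAMP) EXTENDED_TIMESTAMP,
       AUDIT_TYPE, STATEMENT_TYPE, RETURNCODE
FROM SYS.DBA_COMMON_AUDIT_TRAIL
WHERE EXTENDED_TIMESTAMP > ?
ORDER BY EXTENDED_TIMESTAMP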
Will multiple UFs on a single machine be supported by Splunk?
I don't know how to properly disable this message for my dashboard. I have added the following to a CSS style block within my dashboard:

.splunk-choice-input-message { visibility: hidden }

This suppresses all of the text beneath all of the inputs in the dashboard, which is fine; it's better than causing concern with the "Duplicate values causing conflict" message. So if anyone knows a way to manipulate the CSS to suppress only that message, I'd be grateful. I don't have to allow duplicates in the dropdown, and in general the warning is probably a good hint, but sometimes it just makes sense (or so I think) to ignore it. I [mis]use the dropdown to show a relative timeline of events in the order they appear in a Splunk event (which is comprised of many lines of text). The user can drill down to the specific log line within a Splunk event that they want to examine in detail. I also realise that I could solve this with a condition on change, splitting the selected $value$ so that I get the id part without breaking the built-in idea that duplicates are bad, but I don't think I should have to do extra calculations just to achieve such a simple idea. The data below is my justification, and it works really well for the use case, which actually has hundreds of lines in the combo box.

| makeresults
| eval events="1147,Event A [1]
1066,Event B [2..3]
1147,Event A [4]
1156,Event C [5..8]
1147,Event A [9]
1073,Event D [10..14]
1050,Event E [15..20]
1073,Event D [21..40]
1156,Event C [41..44]
1050,Event E [45..46]
1147,Event A [47]
1090,Event F [48]
1678,Event G [49]
1090,Event F [50]
1180,Event H [51]
1127,Event I [52]
1097,Event J [53]
1127,Event I [54..55]
1180,Event H [56]
1068,Event K [57..60]
1138,Event L [61..63]
1122,Event M [64]"
| rex max_match=0 field=events "(?<event>[^\n]+)"
| mvexpand event
| rex field=event "(?<id>.*),(?<event>.*)"
| table id event

It's true that in this case the label will be unique and the id duplicated. Yet it's the label that holds the human-readable useful data, and the id just refers to a lookup key.
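A narrower CSS sketch: Simple XML inputs that carry an id attribute are rendered with that id in the DOM, so the rule can be scoped to one input (the id event_picker here is hypothetical) rather than to the whole dashboard:

<input type="dropdown" id="event_picker" token="line"> ...

#event_picker .splunk-choice-input-message { visibility: hidden }

This still hides every message under that one input, not only the duplicate-values warning, but it leaves the other inputs' messages intact.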
Hi there,   I clicked on 'Start Free Trial' and waited for an email for login credentials for 'My instance'. It's been an hour and I've also tried logging into two different browsers with my Splunk account login, cleared cache & restarted my computer. I've gone as far as creating another login to see if my account was the issue, but same problems there. Can I have the credentials needed sent to me so I can log in and use the Cloud Trial?
Hi Splunkers, whenever I run the Splunk rebuild command I get the below error.
I see this:

/opt/splunk/etc/apps/splunk_essentials_8_2/appserver/static/exampleInfo.json differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/app.conf differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Dashboard_Studio.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Durable_Scheduled_Searches.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Federated_Search_Support_for_On-prem_to_On-prem_Environments.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Health_Report_Improvements_and_Search_Head_Cluster_Health_Report_Capabilities.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/New_JSON_Commands.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Search_Restriction_by_Data_Age.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Splunk_Operator_for_Kubernetes.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/splunk_essentials_8_2_new.tgz missing
I have a Splunk search that calls a custom search command I created. The command (a Python script) takes in events (Windows event log events) and finds event sequences. It returns only the events that are part of a found sequence and removes all other events from the results, so hundreds, thousands, or more events go into the command and typically zero or just a few come out. The command also adds a few fields to each remaining event to denote which sequence it is part of, the step in the sequence, and the time elapsed into the sequence.

This search with the custom command works fine from a Splunk search page; I can see the results on the Statistics tab. However, if I change the search mode from Verbose to either Fast or Smart, I no longer get my results and instead get "No results found". The results come back when I switch back to Verbose mode.

When I save the search to a dashboard, the dashboard shows "No results found". If I edit the dashboard, change the visualization from "statistics table" to "events" and back to "statistics table", and save, I see the search results on the dashboard. But when I reload the dashboard it shows "No results found" again. Below is the search I am using on the search page and on the dashboard:

source="2021-06-*" index="wineventlog" (sourcetype="WinEventLog:Security" OR sourcetype="WinEventLog:Microsoft-Windows-Sysmon/Operational") ((EventCode=11) OR (EventCode=4688 AND New_Process_Name="*schtasks.exe*")) NOT splunk
| eval TargetFilename = mvindex(TargetFilename, 0)
| evidseq F 2h 11 "(ComputerName>'name',TargetFilename>'exe')" 4688 "(New_Process_Name='*schtasks.exe*',ComputerName!'%name%',Process_Command_Line='*%exe%*')" 11 "(ComputerName='%name%')"
| table _time EventCode timeSinceStart sequenceId sequenceStep ComputerName TargetFilename New_Process_Name Process_Command_Line
| sort sequenceId sequenceStep

Also, I found that if I add the following to the bottom of my search on the dashboard, the correct results do appear consistently, even when I reload the dashboard, but I do not want to display my results with the transaction command:

| transaction sequenceId
| sort sequenceId

So, how can I (reliably) get the results to show up on the dashboard? Any help anyone can provide would be greatly appreciated!
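One thing worth checking is how the command declares itself to the search pipeline, since Fast/Smart mode and dashboards can optimize differently around streaming commands. A minimal sketch of a filtering command written as an EventingCommand with the Splunk Python SDK (the class and command names are illustrative, not your actual evidseq code):

#!/usr/bin/env python
import sys
from splunklib.searchcommands import dispatch, EventingCommand, Configuration

@Configuration()
class EvidSeqCommand(EventingCommand):
    """Keeps only events that belong to a detected sequence."""
    def transform(self, records):
        for record in records:
            # real sequence-detection logic would decide here;
            # matching events are passed through with extra fields set
            yield record

dispatch(EvidSeqCommand, sys.argv, sys.stdin, sys.stdout, __name__)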
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined   From ? Here is how those fields break down: %h Remote hostname (i.e., who's asking?) %l Remote logname (if de... See more...
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined   From ? Here is how those fields break down: %h Remote hostname (i.e., who's asking?) %l Remote logname (if defined) %u Remote user (if authenticated) %t Time request was received %r first line of the request %>s Final status (200, 404, etc) %b Size of the response in bytes %{Referer} How did this user get here? %{User-Agent} What browser is the user using? so that we can I want to be able to filter on the first item (remote hostname) so I can weed out known scanners, but every time I try to do something with that field it gets deleted.   Can I translate the Apache log format into something splunk can handle? Thank u a million in advance.
Hello, I have some issues with defining fields from split raw data within an event. A sample event, the code used to split the raw event, the output of the split data, and my issues are given below.

Raw event:

DAS7CNB_L_0__20210630-23574912_5827.html@^@^2021/06/30@^@^23:57:49@^@^DAS7CNB@^@^select "tin","payer_tin","min"(case when "state" in( 'AA','AE','AP','AS','FM','GU','MH','MP','PR','PW','VI' ) then 1 else 0 end) as "f1065k1nonus","max"(case when "state" in( 'WA','OR','CA','AK','HI','MT','ID','WY','NV','UT','CO','AZ','NM' ) then 1 when "state" in( 'ND','MN','SD','IA','NE','KS','MO','WI','IL','IN','MI','OH' ) then 2 when "state" in( 'NY','PA','NJ','NH','VT','ME','MA','RI','CT' ) then 3 when "state" in( 'TX','OK','AR','LA','KY','TN','MS','AL','WV','DE','MD','VA','NC','DC','SC','GA','FL' ) then 4 when "state" in( 'AA','AE','AP','AS','FM','GU','MH','MP','PR','PW','VI' ) then 5 else 0 end) as "f1065k1maxdistoff","max"("interest") as "interest_f1065_k1","max"("guarpaymt") as "guarpaymt_f1065_k1","max"("ord_inc") as "ord_inc_f1065_k1","max"("othrental") as "othrental_f1065_k1","max"("realestate") as "realestate_f1065_k1","max"("royalties") as "royalties_f1065_k1","max"("section179") as "section179_f1065_k1" into #TEMP9A from "irmf_f1065_k1" where "tax_yr" = 2016 and "tin" > 0 and "tin_typ" in( 0,1,2,3 ) group by "tin","payer_tin"@^@^|DAS7CNB.#TEMP9A|cdwsa.IRMF_F1065_K1@^@^

My SPL command:

eval SQLField=split(_raw,"@^@^") | table SQLField

Output of the split data:

DAS7CNB_L_0__20210630-23574912_5827.html
2021/06/30
23:57:49
DAS7CNB
select "a"."basetin","w2nonus","w2maxdistoff","ssanonus","ssamaxdistoff","f1099rnonus","f1099rmaxdistoff","f1099miscnonus","f1099miscmaxdistoff","f1099gnonus","f1099gmaxdistoff","f1099intnonus","f1099intmaxdistoff","f1099oidnonus","f1099oidmaxdistoff","f1041k1nonus","f1041k1maxdistoff","f1065k1nonus","f1065k1maxdistoff","wages_w2","allocated_tips_w2","medicare_wages_w2","taxable_fica_tips_w2","WITHHLDG_w2","pens_annties_f1099_ssa_rrb","withhldg_f1099_ssa_rrb","gross_distrib_f1099r","taxable_amt_f1099r","WITHHLDG_f1099r","non_emp_compensation_f1099misc","othincome_f1099misc","rents_f1099misc","royalties_f1099misc","crop_insurance_f1099misc","WITHHLDG_f1099misc","taxbl_grant_f1099g","UNEMP_COMP_f1099g","prior_refnd_f1099g","agr_subsds_f1099g","atta_pymnt_f1099g","WITHHLDG_f1099g","interest_f1099int","savings_bonds_f1099int","WITHHLDG_f1099int","interest_f1099oid","withhldg_f1099oid","interest_f1041_k1","bus_inc_f1041_k1","net_rental_f1041_k1","oth_prtflo_f1041_k1","oth_rental_f1041_k1","interest_f1065_k1","guarpaymt_f1065_k1","ord_inc_f1065_k1","othrental_f1065_k1","realestate_f1065_k1","royalties_f1065_k1","section179_f1065_k1" into #TEMP9 from(select
"basetin","w2nonus","w2maxdistoff","ssanonus","ssamaxdistoff","f1099rnonus","f1099rmaxdistoff","f1099miscnonus","f1099miscmaxdistoff","f1099gnonus","f1099gmaxdistoff","f1099intnonus","f1099intmaxdistoff","f1099oidnonus","f1099oidmaxdistoff","f1041k1nonus","f1041k1maxdistoff","wages_w2","allocated_tips_w2","medicare_wages_w2","taxable_fica_tips_w2","WITHHLDG_w2","pens_annties_f1099_ssa_rrb","withhldg_f1099_ssa_rrb","gross_distrib_f1099r","taxable_amt_f1099r","WITHHLDG_f1099r","non_emp_compensation_f1099misc","othincome_f1099misc","rents_f1099misc","royalties_f1099misc","crop_insurance_f1099misc","WITHHLDG_f1099misc","taxbl_grant_f1099g","UNEMP_COMP_f1099g","prior_refnd_f1099g","agr_subsds_f1099g","atta_pymnt_f1099g","WITHHLDG_f1099g","interest_f1099int","savings_bonds_f1099int","WITHHLDG_f1099int","interest_f1099oid","withhldg_f1099oid","interest_f1041_k1","bus_inc_f1041_k1","net_rental_f1041_k1","oth_prtflo_f1041_k1","oth_rental_f1041_k1" from #TEMP8) as "A" left outer join(select "tin","min"(case when "f1065k1nonus" = 1 then 1 else 0 end) as "f1065k1nonus","max"(case when "f1065k1maxdistoff" = 1 then 1 when "f1065k1maxdistoff" = 2 then 2 when "f1065k1maxdistoff" = 3 then 3 when "f1065k1maxdistoff" = 4 then 4 when "f1065k1maxdistoff" = 5 then 5 else 0 end) as "f1065k1maxdistoff","sum"("interest_f1065_k1") as "interest_f1065_k1","sum"("guarpaymt_f1065_k1") as "guarpaymt_f1065_k1","sum"("ord_inc_f1065_k1") as "ord_inc_f1065_k1","sum"("othrental_f1065_k1") as "othrental_f1065_k1","sum"("realestate_f1065_k1") as "realestate_f1065_k1","sum"("royalties_f1065_k1") as "royalties_f1065_k1","sum"("section179_f1065_k1") as "section179_f1065_k1" from #TEMP9a group by "tin") as "B" on "a"."basetin" = "b"."tin" DAS7CNB.#TEMP9 DAS7CNB.#TEMP9A|cdsawsa.IRMF_F1065_K1   My Issues: It splitted as expected. But, I have some issues with defining   text  (please see the text in Bold  right above My Issues:)  values DAS7CNB.#TEMP9A  as ID_DataFile and cdsawsa.IRMF_F1065_K1 as ID_DataTempFile Thank you.....any help will be highly appreciated.  
I want to use Splunk to send an alert when the power goes out in our office. The current idea is to set up a machine (probably Windows or Linux) plugged into an outlet, set up as a Universal Forwarder sending a constant stream of info to the Enterprise instance (what form this info would take I'm not sure yet; probably a script that constantly loops), and then to have the Enterprise instance (on AWS, so it will still be online if the power goes out) monitor for when the forwarder machine stops sending information, then send me an alert. So when the power goes out, the machine in the office will power down and the Enterprise instance will recognize this and alert me. If anyone has other ideas for ways to monitor for power loss (or can help outline how I should set up my current idea), please let me know. Thanks!

Edit: Can't figure out how to change the forum category of this post from feedback to something else.
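For the Enterprise side, a sketch of the watchdog search, assuming the forwarder's heartbeat lands in an index named heartbeat (a placeholder); alert whenever this returns any rows:

| tstats latest(_time) as last_seen where index=heartbeat by host
| where last_seen < relative_time(now(), "-10m@m")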
We recently updated our Splunk infrastructure to 8.1 and before we upgraded, the enable TLS option was checked on the mail server settings. The alert_actions.conf has not changed at all. Now for emails being sent, we receive the following error in the python.log: sendemail:456 - [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:742) while sending mail to: <EMAIL_ADDRESS>   If I manually add use_tls = 1 in that conf file, there are no errors, but if enabled on the web UI, it errors with the above.    I am not sure what else to check here as nothing has changed except for the upgrading of the Splunk version on the servers. Has anyone else experienced this?
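For reference, the setting that works when added manually, as a conf snippet (this is the same key the UI checkbox is supposed to write):

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
use_tls = 1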
Hi all, I am trying to create an Ansible playbook to automate Splunk setup. I just noticed that the -user flag is not working (my admin ends up named admin instead of splunk). Am I doing something incorrect with this command? (Note: {{ splunk_user_password }} is an Ansible variable that I am passing into the bash command.)

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes -user splunk --seed-passwd {{ splunk_user_password }}

Do I need to use a user-seed.conf file instead? Does the user flag not work? (I've tried both the -- and - forms of the flags.)

Thanks, jack
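For comparison, the user-seed.conf route, which seeds the admin account before first start (the password line is your Ansible variable, written out by the playbook):

# $SPLUNK_HOME/etc/system/local/user-seed.conf, created before first start
[user_info]
USERNAME = splunk
PASSWORD = {{ splunk_user_password }}

Then start without the -user/--seed-passwd flags:

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt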
I have the following rex substitution in a query to aggregate various log messages (those containing the strings Liveness and Readiness):

index=k8s ("event.go") AND (kind="Pod") AND (type="Warning" OR type="Error") source="*kubelet.log"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Liveness(.*)/\1 message=\"Liveness Error\"/g"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Readiness(.*)/\1 message=\"Readiness Error\"/g"
| dedup object message

The above appears to work correctly and provide the desired result. For example, it transforms events like this:

I0903 17:12:49.308289 2024433 event.go:211] "Event occurred" object="namespace1/podfoo" message="Readiness probe failed: + cd /sandbox\\n++ curl --output /dev/null --max-time 28 --silent --write-out '%{http_code}' http://0.0.0.0:20012/heartbeat\\n+ ret=000\\n+ for expected_status in 200\\n+ [[ 000 == 200 ]]\\n+ [[ '' == \\\\t\\\\r\\\\u\\\\e ]]\\n+ false\\n"

nicely into the following:

I0903 17:12:49.308289 2024433 event.go:211] "Event occurred" object="namespace1/podfoo" message="Readiness Error"

However, when I stream the above query into stats ("stats count by message"), the transformed events generated by the rex substitution disappear for some reason, and stats seems to act on the original event messages (as if the rex sed had no effect):

index=k8s ("event.go") AND (kind="Pod") AND (type="Warning" OR type="Error") source="*kubelet.log"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Liveness(.*)/\1 message=\"Liveness Error\"/g"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Readiness(.*)/\1 message=\"Readiness Error\"/g"
| dedup object message
| stats count by message

With the above, stats aggregates on the original message contents of the events rather than on the output of the rex substitution. For example, I see:

message                                                         count
Readiness probe errored: rpc error: code = Unknown ...          1059
Readiness probe failed: HTTP probe failed with statuscode: 503  2003

rather than the substituted message fields aggregated along the lines of:

message          count
Readiness Error  3062

How can I get the output of the rex sed (as in the example above) to pass the substituted message fields in the events to stats?
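A workaround sketch: mode=sed rewrites _raw, but the message field may be extracted from the original text independently of that rewrite (which would also explain the Verbose versus Fast/Smart difference), so re-extracting message explicitly after the sed should make stats see the substituted values:

index=k8s ("event.go") AND (kind="Pod") AND (type="Warning" OR type="Error") source="*kubelet.log"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Liveness(.*)/\1 message=\"Liveness Error\"/g"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Readiness(.*)/\1 message=\"Readiness Error\"/g"
| rex field=_raw "message=\"(?<message>[^\"]*)\""
| dedup object message
| stats count by message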