All Topics

I have a scheduled alert running every 15 minutes on a cron schedule. I set the trigger actions to Email, ServiceNow ticket, and MS Teams notification. About 80% of the alerts are received successfully, but I am failing to receive the remaining 20% in Email, ServiceNow tickets, and MS Teams. When I run the search manually I can find results, yet I did not receive the corresponding alerts. When I search the scheduler logs I don't find any failure entries. Please help.
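When only some runs fire their actions, the scheduler and alert-action logs in _internal are usually the place to look. A minimal sketch, assuming the alert's saved-search name is substituted for the my_alert placeholder:

index=_internal sourcetype=scheduler savedsearch_name="my_alert"
| stats count by status

index=_internal sourcetype=splunkd component=sendmodalert log_level=ERROR

A status of "skipped" in scheduler.log often points at scheduler concurrency limits even when no explicit failure is logged.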
When I use an SVG in the Splunk Dashboard app, it shows the error below. I want to know why we get this error and how we can fix it.

Splunk version: 8.0.5
Splunk Dashboard App version: 0.8.0

Source code:

{
  "visualizations": {
    "viz_7Gcj22nE": {
      "type": "viz.choropleth.svg",
      "options": {
        "backgroundColor": "transparent",
        "svg": "<svg width=\"322\" height=\"32\" viewBox=\"0 0 322 32\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\">\n<rect id=\"level1\" y=\"12\" width=\"33\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level2\" x=\"37\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level3\" x=\"73\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level4\" x=\"109\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level5\" x=\"145\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level6\" x=\"181\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level7\" x=\"217\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level8\" x=\"253\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level9\" x=\"289\" y=\"12\" width=\"33\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n</svg>"
      },
      "encoding": {
        "featureId": "primary[0]",
        "value": "primary[1]",
        "fill": {
          "field": "primary[1]",
          "format": {
            "type": "rangevalue",
            "ranges": [
              { "from": 1500, "value": "#483F9B" },
              { "from": 500, "to": 1500, "value": "#A870EF" },
              { "to": 500, "value": "#483F9B" }
            ]
          }
        }
      },
      "dataSources": {
        "primary": "ds_xtEHKvxm"
      }
    }
  },
  "dataSources": {
    "ds_xtEHKvxm": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults \n| eval Progress=\"level1\"| eval count=1000\n| append [| makeresults | eval Progress=\"level2\"| eval count=1000]\n| append [| makeresults | eval Progress=\"level3\"| eval count=10]\n| append [| makeresults | eval Progress=\"level4\"| eval count=10]\n| append [| makeresults | eval Progress=\"level5\"| eval count=10]\n| append [| makeresults | eval Progress=\"level6\"| eval count=10]\n| append [| makeresults | eval Progress=\"level7\"| eval count=10]\n| append [| makeresults | eval Progress=\"level8\"| eval count=10]\n| append [| makeresults | eval Progress=\"level9\"| eval count=10]\n|table Progress,count"
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    },
    "visualizations": {
      "global": {
        "showLastUpdated": true
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {},
    "structure": [
      {
        "item": "viz_7Gcj22nE",
        "type": "block",
        "position": {
          "x": 0,
          "y": 0,
          "w": 380,
          "h": 300
        }
      }
    ],
    "globalInputs": [
      "input_global_trp"
    ]
  },
  "description": "",
  "title": "test"
}
Hello, I have an alert transaction at "ACK" and at "Resolved". I have created a table for each value, but I am unable to edit the time format of each. Please help; please find the attached image for reference.

Current output:
857415 piyush.moorjani piyush.moorjani 2021-08-25T01:57:26Z 2021-08-25T01:58:47Z ACKED RESOLVED

I need the time format of the third column changed.
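If the goal is only to change how the ISO-8601 timestamps are displayed, one option is to re-parse and re-format them with eval. A sketch, assuming the timestamp column is a field called ack_time (a placeholder name; substitute the real field):

| eval ack_time = strftime(strptime(ack_time, "%Y-%m-%dT%H:%M:%SZ"), "%d-%m-%Y %H:%M:%S")

The same pattern applies to the Resolved timestamp; adjust the output format string to whatever layout is needed.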
Hello, I'm trying to configure the CloudTrail and CloudWatch data inputs to collect AWS logs for Splunk. When I select a region that I think is correct, no log data comes into Splunk. When I go into the inputs.conf file manually and enter the region that was assigned to my program's account, there is still no log data. I even went in and configured an index for the AWS add-on, went into the metadata, changed the saved searches/macros to point to the new index I created, etc. Has anyone experienced this issue before?
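One way to check whether the add-on's inputs are failing silently is to search its internal logs. A sketch, assuming the Splunk Add-on for AWS writes its input logs into _internal (exact source file names vary by add-on version):

index=_internal source=*aws* (ERROR OR WARN)
| stats count by source

If nothing is returned at all, the input may not actually be running on the host where it was configured.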
Hi, I need to calculate an average based on a condition: testing=true vs testing=false (let's say field A). Field B has the values to average (milliseconds), and field C has URLs. Something like this:

| stats avg(fieldB, when field A testing true) as trueV avg(fieldB, when field A testing false) as falseV by Field C

The goal is a table like this: url | average (true) | average (false).
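One way to express a conditional average in a single pass is an eval inside the stats aggregation. A sketch, assuming the fields are literally named testing, fieldB, and fieldC:

| stats avg(eval(if(testing="true", fieldB, null()))) as trueV
        avg(eval(if(testing="false", fieldB, null()))) as falseV
        by fieldC

Because avg ignores null() values, each column only averages the rows that match its condition.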
Hi all, we have lots of alerts and reports configured in Splunk that are in a disabled state. How can we get a list of them into an Excel sheet? Also, how can we get a list of all dashboards in Splunk in one Excel sheet, so we can review it and delete the unwanted ones?

Please note: this is not about orphaned searches; I already got those from the Search app dashboard. Thanks.
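A sketch using the REST endpoints for saved searches and dashboards; the results can be exported to CSV from the search UI. The first search lists disabled alerts and reports, the second lists dashboards (views):

| rest /servicesNS/-/-/saved/searches
| search disabled=1
| table title, eai:acl.app, eai:acl.owner, disabled

| rest /servicesNS/-/-/data/ui/views
| table title, eai:acl.app, eai:acl.owner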
I am running Splunk Enterprise on-prem and have a set of indexers in a cluster in one region and another set of indexers in a separate cluster in a different region. If region A is completely lost but we have backups in region B of the data from region A, is it possible to restore the data into the indexer cluster in region B, or would we have to restore the data, put it into the thawed path, and run the thaw process bucket by bucket? We are not running a multisite cluster. This is for a DR procedure, but at the same time it would be nice to know the best way to do this, as we have a third cluster whose data we will eventually want moved to one of the other clusters so the third clustered location can be decommissioned. (The same indexes exist in all three separate clustered environments.) Thanks.
I am working on upgrading a deployment server, which is typically an easy task. My issue is that this particular environment has an unusual path instead of the usual /opt/splunk/etc: it has /opt/Splunk/splunkenterprise/etc/. I feel that if I run the upgrade as I normally do, untarring the file to /opt, it could create some issues. Does anyone have insight into whether this should be upgraded "business as usual" or whether the command needs modification? I am worried that untarring under /opt will cause issues with other possible dependencies on particular file paths.
Here is a simple set of records to demonstrate the data (but not the two sourcetypes). The query would be more like index=myindex (sourcetype=A OR sourcetype=B). Let's say RequiredOnHand is sourcetype=B and the other Containers are in sourcetype=A. I would like to create the following lists:

1. List Contains from sourcetype=A that match (or are missing) from sourcetype=B:
Contains        Basket  Bunch  Pint  RequiredOnHand
Apples          0       0      0
Bananas         0       0
Grapes          0
Oranges         0       0
Strawberries    0       0

2. List any Contains values in sourcetype=A that are not in sourcetype=B:
Contains  Basket
Balls     1

3. List any Contains values in sourcetype=B missing from sourcetype=A:
Contains  Basket
Kiwi      1

| makeresults | eval Container="Basket" | eval Contains="Apples" | eval From="FieldA"
| append [| makeresults | eval Container="Basket" | eval Contains="Oranges" | eval From="FieldB"]
| append [| makeresults | eval Container="Bunch" | eval Contains="Bananas" | eval From="FieldC"]
| append [| makeresults | eval Container="Bunch" | eval Contains="Grapes" | eval From="FieldD"]
| append [| makeresults | eval Container="Pint" | eval Contains="Strawberries"]
| append [| makeresults | eval Container="Pint" | eval Contains="Grapes"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Apples" | eval From="FieldA"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Oranges" | eval From="FieldB"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Bananas" | eval From="FieldC"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Strawberries"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Kiwi" | eval From="FieldD"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Grapes"]
| append [| makeresults | eval Container="Basket" | eval Contains="Balls" | eval From="FieldA"]
| chart count(Container) as chart-count over Contains by Container

Results:
Contains        Basket  Bunch  Pint  RequiredOnHand
Apples          1       0      0     1
Balls           1       0      0     0
Bananas         0       1      0     1
Grapes          0       1      1     1
Kiwi            0       0      0     1
Oranges         1       0      0     1
Strawberries    0       0      1     1

Thanks for the help.
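With the real data, pivoting by sourcetype rather than by Container may be simpler. A sketch, assuming the sourcetypes really are named A and B and that Contains is extracted in both:

index=myindex (sourcetype=A OR sourcetype=B)
| chart count over Contains by sourcetype
| where A > 0 AND B = 0

Swapping the where clause to B > 0 AND A = 0 gives list 3, and A > 0 AND B > 0 gives the matches for list 1.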
Greetings to all, I'm having an issue with the Microsoft Teams TA. After setting up the subscription, I'm getting this error:

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" subscription = create_subscription(helper, access_token, webhook_url, graph_base_url)
ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" ERROR 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

Thanks.
Jeff
Is there any way to programmatically find out the port on which Splunk is set up, using the Splunk Java SDK? Basically, to read the entry from $SPLUNK_HOME/etc/system/local/web.conf?
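For reference, the web port is also exposed over REST, and the Java SDK's Service object can request the same endpoint that SPL's rest command does. A sketch of the SPL form; the field names are what the endpoint typically returns and should be treated as an assumption:

| rest /services/server/settings splunk_server=local
| fields httpport, mgmtHostPort, enableSplunkWebSSL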
If my clustered Splunk Cloud indexers contain data that has been deleted (marked for deletion) and that index hits its retention period for DDAA archiving, will the data marked for deletion be archived to DDAA storage, or will it be dropped at that point? I would prefer that it be dropped, if there is a configuration option for this. Thanks.
I have been editing the following query to get the duration for a workload. However, when creating the visualization we only see the duration on the bar chart; I am looking to see if we can also show something with regard to the workload and/or job name.

| inputlookup cyclestarttimes.csv
| lookup cycleendtimes.csv CYCLE WORKLOAD
| search WORKLOAD=F91
| where isnotnull(ENDTIME)
| eval STARTC = "2021-08-19"
| eval ENDC = "2021-08-20"
| eval STARTC = strptime(STARTC, "%Y-%m-%d")
| eval ENDC = strptime(ENDC, "%Y-%m-%d")
| eval CYCLEC = strptime(CYCLE, "%Y-%m-%d")
| where CYCLEC >= STARTC AND CYCLEC <= ENDC
| stats values(STARTTIME) as START values(ENDTIME) as STOP by WORKLOAD CYCLE
| eval _time = strptime(START, "%Y-%m-%d %H:%M:%S")
| eval end_time = strptime(STOP, "%Y-%m-%d %H:%M:%S")
| eval duration = (end_time - _time) * 1000
| eval JOBNAME = WORKLOAD
| stats count by _time, duration, WORKLOAD, JOBNAME
| table _time WORKLOAD JOBNAME duration
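If the aim is to see which workload/job each bar belongs to, one hedged option is to fold those names into the series label before charting, appended to the query above:

| eval series = WORKLOAD . " - " . JOBNAME
| chart max(duration) as duration over series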
We have onboarded Alicloud data into Splunk and are looking to create use cases. Is there any Alicloud use-case documentation for Splunk? What use cases can we create, and is there any reference link?
Hi there, I'm new to using Splunk. We are looking to use the Splunk Universal Forwarder to forward Windows event logs to a Splunk server. I have installed the forwarder on a Win10 client and I can see events coming into Splunk, which is great! Is there any way I can tweak the Universal Forwarder on the client PC so that it does not forward some events, such as Information logs and Audit Success, and possibly stops forwarding all the text from the event, such as the description? Trying to be as lean as possible with these events. Thanks.
I want to know how I can incrementally go through and add missing times (hours) per user, across a number of users. The fail_num for those missing times should be 0. I've thought of using foreach, but I'm not sure that is the route to go here.

Above is a result showing only 2 users from the query I'm building in one of my previous posts ("Detecting-Spikes-Anomalies-in-Failed-Logins-over-time"). For hours where fail_num was 0, no row was produced, so I have no row for that timeslot. When I use trendline to analyze this it won't work, because it doesn't have enough data points to compute the moving average. I can't use timechart to fill these in, because it breaks other things in my query. So if there is a more programmatic way to add the missing rows, using regex or some more efficient method, please enlighten me. If you're wondering why I can't use timechart: it makes the analysis impossible unless three-dimensional analysis or something similar exists in Splunk, and sadly I'm not mathematically or programmatically gifted enough to come up with such a solution.
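One pattern that back-fills missing user/hour combinations without timechart is to pivot, make the time axis continuous, zero-fill, and unpivot again. A sketch, assuming the query already produces hourly _time, user, and fail_num:

| bin _time span=1h
| stats sum(fail_num) as fail_num by _time, user
| xyseries _time user fail_num
| makecontinuous _time span=1h
| fillnull value=0
| untable _time user fail_num

After the untable, every user should have a row (possibly 0) for every hour in the range, which is usually enough for trendline to compute its moving average.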
Hi, I am working at a corporation, using Splunk in my browser. I have installed the Windows forwarder and configured my user name and the Splunk server (client side). I used the URL and the port number from the browser URL of the corporate Splunk server. Now I want to be able to send lab data to Splunk; I don't want to monitor anything on the Windows system. The PC is just a means to run a script that collects data from some instruments. I was thinking the forwarder would let me use some kind of command within my script (Python) to send data. I could write to a file, but would prefer to send data live with some kind of command. How can this be done, and is there specific documentation for this type of activity? Maybe I need to write to a file, let the forwarder monitor that file, and continuously overwrite it, assuming the forwarder would look at the file on some sort of periodic basis, like every 20 or 60 seconds. I would appreciate any general guidance, especially if there is documentation to use.
Hi, I have the below search. I am trying to use report acceleration, but because of the eventstats commands I can't. I have tried to rewrite the search, however it does not work. Could someone please help me?

index=test sourcetype=test
| eval ResponseTime=round(response_time/1000,2)
| eventstats perc99(ResponseTime) as p99Resp
| eventstats perc90(ResponseTime) as p90Resp
| eventstats perc75(ResponseTime) as p75Resp
| eval p99Unit=if(ResponseTime<=p99Resp,0,1)
| eval p00Response=ResponseTime
| eval p98Response=if(ResponseTime<=p99Resp,ResponseTime,null())
| eval p99Response=if(ResponseTime<=p99Resp,null(),ResponseTime)
| eval p90Unit=if(ResponseTime<=p90Resp,0,1)
| eval p90Response=if(ResponseTime<=p90Resp,ResponseTime,null())
| eval p90Response=if(ResponseTime<=p90Resp,null(),ResponseTime)
| eval p75Unit=if(ResponseTime<=p75Resp,0,1)
| eval p75Response=if(ResponseTime<=p75Resp,ResponseTime,null())
| eval p75Response=if(ResponseTime<=p75Resp,null(),ResponseTime)
| stats sum(p99Unit) as P99Count, avg(p99Response) as p99ResponseAvg, min(p99Response) as p99ResponseMin, max(p99Response) as p99ResponseMax
        sum(p90Unit) as P90Count, avg(p90Response) as p90ResponseAvg, min(p90Response) as p90ResponseMin, max(p90Response) as p90ResponseMax
        sum(p75Unit) as P75Count, avg(p75Response) as p75ResponseAvg, min(p75Response) as p75ResponseMin, max(p75Response) as p75ResponseMax
| rename P99Count as "99% Total Count"
| rename p99ResponseAvg as "99% AVG"
| rename p99ResponseMin as "99% Min Response Time"
| rename p99ResponseMax as "99% Max Response Time"
| rename P90Count as "90% Total Count"
| rename p90ResponseAvg as "90% AVG"
| rename p90ResponseMin as "90% Min Response Time"
| rename p90ResponseMax as "90% Max Response Time"
| rename P75Count as "75% Total Count"
| rename p75ResponseAvg as "75% AVG"
| rename p75ResponseMin as "75% Min Response Time"
| rename p75ResponseMax as "75% Max Response Time"

Thanks
Joe
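Report acceleration needs the search to reduce through transforming commands in a single pass, and eventstats (which has to see the whole event set before tagging each event) prevents that. The per-band counts and averages inherently depend on the global percentiles, so they cannot be folded into one streaming pass; what can be accelerated is a one-pass summary such as the sketch below, which keeps the percentile thresholds themselves:

index=test sourcetype=test
| eval ResponseTime = round(response_time/1000, 2)
| stats count,
        perc75(ResponseTime) as p75Resp,
        perc90(ResponseTime) as p90Resp,
        perc99(ResponseTime) as p99Resp,
        avg(ResponseTime) as avgResp,
        min(ResponseTime) as minResp,
        max(ResponseTime) as maxResp

If the per-band averages are a hard requirement, a summary index populated by a scheduled search is the usual alternative to an accelerated report.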
Hi, I have TCP 514 logs in the same sourcetype. There are different timestamp formats in the log, and even within events. I don't understand my mistakes with datetime.xml: it works for one format but not for the second. I tested the regex with search (| rex field=_raw ".........") and the fields are extracted correctly. I followed this tutorial: https://www.function1.com/2013/01/oh-no-splunking-log-files-with-multiple-formats-no-problem Thanks for your help.

Example:

First log (_time is extracted correctly):
<111> YYYY-MM-DDTHH:MM:SS+02:00 localhost house 12154 - @ip [DD/LitMM/YYYY:HH:MM:SS.MS] ...........

Second log (_time is not extracted; the value is the index time):
<145> YYYY-MM-DDTHH:MM:SS+02:00 localhost foo - - YYYY-MM-DDTHH:MM:SS.MS+0000 jizjfoziejfz battle: cececeijoijoi [YYYY-MM-DDTHH:MM:SS.MS+0000] ...........

I'm on a standalone machine, so I copied the regex by hand (there may be typos).

Configuration, in datetime.xml on the heavy forwarder (etc/apps/test/default):

<define name="_house" extract="day, litmonth,year,hour,minute,second,subsecond">
<text>house.*\[(\d{2})/(\w{3})/(\d{4}):(\d{2}):(\d{2}):(\d{2})\.\d+\]></text>
</define>
<define name="_battle" extract="year,month,day,hour,minute,second,subsecond">
<text>battle.*\[(\d{4})\-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})\.\d+\+\d{4}\]></text>
</define>
<timePatterns>
<use name="_house"/>
<use name="_battle"/>
</timePatterns>
<datePatterns>
<use name="_house"/>
<use name="_battle"/>
</datePatterns>
</datetime>

In props.conf:

[my_sourcetype]
DATETIME_CONGIG = /etc/apps/test/defaults/datetime.xml
LINE_BREAKER = ([\r|\n])+
SHOULD_LINEMERGE = false