All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I've been facing some challenges with some of my users and I don't really know exactly how to tackle this. Despite common sense, and despite explaining to them on a "welcome page" in their app how to search with index="theirIndex" sourcetype="blabla" ..., they keep on not using index="" because, you know, it works... They end up searching all the indexes they have access to through their roles, and it's quite annoying when they should really specify an index there.

Do you have any advice on how to introduce that? I read about a filter that could be applied in the past with srchFilter, and I also thought of removing the default indexes from their roles, but I'm not really sure which works best.

Thank you all, sorry if it's an easy question.

PS: yes, my users are quite stubborn sometimes, I already won the "all time" challenge.
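PPS: For reference, the srchFilter approach I read about would apparently go into authorize.conf, something like the sketch below (the role name is made up and I haven't tested this combination of settings):

    [role_app_users]
    # constrain what the role sees even when no index= is typed
    srchFilter = index=theirIndex
    # only allow their own index...
    srchIndexesAllowed = theirIndex
    # ...and leave the default list empty, so a search without an explicit
    # index= returns nothing (which should teach them quickly)
    srchIndexesDefault =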
We forward all config logs from our Palo Alto Networks firewall directly into Splunk. I can see that the config logs show up in Splunk, but I don't see any info in the before-change and after-change fields. When I look at the source within Splunk, that info isn't in it, yet it shows in the PAN config logs on the firewall itself.

I want to create a report within Splunk that shows all firewall config changes, including the before and after (kind of pointless without it). Any idea what is wrong?

Heath
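PS: For reference, the report I'm after is roughly the search below. The sourcetype and field names are my guesses based on the PAN-OS config log format (its before-change-detail / after-change-detail fields); those are exactly the fields that come back empty for me:

    index=pan_logs sourcetype=pan:config
    | table _time, host, admin, command, before_change_detail, after_change_detail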
Hi Team,

What is the best way to monitor large rolling log files? As of now I have the following configuration to monitor files (there are 180+ log files):

    [monitor:///apps/folders/.../xxx.out]
    index = app_server

At the end of the month, the log files are deleted and new log files are created by the application. But the issue is that the log files are 20 GB+ in size by the end of the month. Recently, when we migrated the server, we started getting the following errors for some of the log files:

    12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
    WARN TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed

I tried the "crcSalt = <SOURCE>" option as well; there is no difference. Please suggest what configuration I should use for monitoring log files in this case. Thanks.
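PS: For reference, this is the variation I'm considering next, following the error's own suggestion to raise initCrcLen (the value below is a guess on my part, not something I've validated):

    [monitor:///apps/folders/.../xxx.out]
    index = app_server
    # fingerprint more than the default 256 bytes of the file header, so
    # rotated files that share an identical header still get distinct CRCs
    initCrcLen = 1024
    crcSalt = <SOURCE>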
I am running a search with the cron expression "0 10-18/2 * * *". This translates to "at minute 0 past every 2nd hour from 10 through 18". I want to run this job every time with a time range of "9 AM to now", and there isn't such an option under "Time Range". I would appreciate it if someone could help me achieve this.
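In case it helps to state it precisely: I believe the range I'm after would be, in relative-time syntax (assuming "9 AM" means 9 AM of the current day, i.e. snap to midnight and add 9 hours):

    earliest=@d+9h latest=now

I just don't see where to enter that for a scheduled search, unless the Advanced tab of the time range picker accepts it.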
I've been asked to update 'Imperva Database Audit Analysis' and I'm running into issues trying to update the Audit Dashboard. The sanitized data looks like this:

    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:51 GMT", "audit-policy":"["Policy - Policy_Name - Login/Logout"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"os_user", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Login", "schema-name":"schema_name", "object-name":"${Event.struct.operations.objects.name}", "agent-name":"agent_name", "success":"True", "os-user-chain":"root-->user", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:50 GMT", "audit-policy":"["Global Policy - Login/Logout"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"os_user", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Login", "schema-name":"schema_name", "object-name":"${Event.struct.operations.objects.name}", "agent-name":"agent_name", "success":"True", "os-user-chain":"", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:51 GMT", "audit-policy":"["Policy - Login/Logout - SQL"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Login", "schema-name":"schema_name", "object-name":"${Event.struct.operations.objects.name}", "agent-name":"agent_name", "success":"True", "os-user-chain":"\-->user", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:51 GMT", "audit-policy":"["Policy - Policy_Name - Login/Logout"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"os_user", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Login", "schema-name":"schema_name", "object-name":"${Event.struct.operations.objects.name}", "agent-name":"agent_name", "success":"True", "os-user-chain":"root-->user", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:51 GMT", "audit-policy":"["Global Policy - Login/Logout"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"os_user", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Logout", "schema-name":"schema_name", "object-name":"${Event.struct.operations.objects.name}", "agent-name":"agent_name", "success":"True", "os-user-chain":"", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:50 GMT", "audit-policy":"["Policy - Login/Logout - SQL"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Login", "schema-name":"schema_name", "object-name":"${Event.struct.operations.objects.name}", "agent-name":"agent_name", "success":"True", "os-user-chain":"\-->user", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:51 GMT", "audit-policy":"["Global Policy - Login/Logout"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"os_user", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Logout", "schema-name":"schema_name", "object-name":"object_name", "agent-name":"agent_name", "success":"True", "os-user-chain":"", "db-name":"db_name" }
    Nov 10 23:20:52 syslog_server {"header":"Imperva Inc.|SecureSphere|version|Audit|Audit.DAM|Informative|", "dest-ip":"dip_address", "db-user":"db_user", "source-ip":"sip_address", "real-time":"Nov 10 2020 23:20:50 GMT", "audit-policy":"["Policy - Login/Logout - SQL"]", "server-group":"server_group", "service-name":"service_name", "application-name":"application_name", "source-application":"source_application", "os-user":"", "host-name":"fqdn", "sql-error":"", "mx-ip":"mx_ip_address", "gw-ip":"gw_ip_address", "objects-list":"[]", "operation-name":"Login", "schema-name":"schema_name", "object-name":"object_name", "agent-name":"agent_name", "success":"True", "os-user-chain":"\-->user", "db-name":"db_nme" }

Since Splunk doesn't handle the embedded [] and {} in JSON, I created this search to process the events:

    index=my_index sourcetype=source:type
    | rex field=_raw "(?<st_json>\{.*)"
    | eval st_json_1=replace(st_json, "\"\[\]\"", "\"Null\"")
    | eval st_json=replace(st_json_1, "\"\[", "")
    | eval st_json_1=replace(st_json, "\]\"", "")
    | eval st_json=replace(st_json_1, "\$\{", "")
    | eval st_json_1=replace(st_json, "\}\",", "\",")
    | spath input=st_json_1
    | eval dest_ip_db_name= 'dest-ip'."\\".'db-name'
    | chart count by dest_ip_db_name
    | sort limit=10 -count
    | rename dest_ip_db_name AS "Database Host \ Database Name" count AS "Number Of Events"

This works. When I move it to the dashboard I get the "Unexpected close tag" error. This is the query in the dashboard:
    <query>index=my_index sourcetype=source:type
    | rex field=_raw "(?<st_json>\{.*)"
    | eval st_json_1=replace(st_json, "\"\[\]\"", "\"Null\"")
    | eval st_json=replace(st_json_1, "\"\[", "")
    | eval st_json_1=replace(st_json, "\]\"", "")
    | eval st_json=replace(st_json_1, "\$\{", "")
    | eval st_json_1=replace(st_json, "\}\",", "\",")
    | spath input=st_json_1
    | eval dest_ip_db_name= 'dest-ip'."\\".'db-name'
    | chart count by dest_ip_db_name
    | sort limit=10 -count
    | rename dest_ip_db_name AS "Database Host \ Database Name" count AS "Number Of Events"</query>

I don't see anything that would cause the 'Unexpected close tag'. Is there an issue with doing the \ escapes in SimpleXML, or something else that I'm not aware of?

TIA, Joe
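PS: One theory I want to test: maybe the (?<st_json> capture group is being read as an XML tag, since the < isn't escaped. Wrapping the whole query in a CDATA section might sidestep that; this is an untested guess on my part:

    <query><![CDATA[index=my_index sourcetype=source:type
    | rex field=_raw "(?<st_json>\{.*)"
    | eval st_json_1=replace(st_json, "\"\[\]\"", "\"Null\"")
    | eval st_json=replace(st_json_1, "\"\[", "")
    | eval st_json_1=replace(st_json, "\]\"", "")
    | eval st_json=replace(st_json_1, "\$\{", "")
    | eval st_json_1=replace(st_json, "\}\",", "\",")
    | spath input=st_json_1
    | eval dest_ip_db_name= 'dest-ip'."\\".'db-name'
    | chart count by dest_ip_db_name
    | sort limit=10 -count
    | rename dest_ip_db_name AS "Database Host \ Database Name" count AS "Number Of Events"]]></query>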
Hello,

I'm trying to identify the full list of indexes that have been created in Splunk this year. I used the query below to find out, but it doesn't look correct:

    index=_audit operation=create
    | stats values(object) as new_index_created by _time splunk_server
    | rename _time as creation_time splunk_server as indexer
    | convert ctime(creation_time)
    | dedup new_index_created

Any inputs?
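For comparison, I can at least enumerate the indexes that currently exist via REST (as far as I can tell, though, this endpoint doesn't expose a creation time, which is why I tried _audit in the first place):

    | rest /services/data/indexes
    | stats values(splunk_server) as indexers by title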
Hi, I am looking for a bit of guidance on breaking out multiple kv pairs in JSON logs. For example, I have JSON email logs where each email event may have multiple multivalue fields, which I need separated / formatted as individual lines. For instance, a single email may have multiple attached files, and each file will have a fileName field, a fileHash field, and a fileExtn field, like this in the JSON:

    <hash1>  <fileName1>  <fileExtn1>
    <hash2>  <fileName2>  <fileExtn2>
    <hash3>  <fileName3>  <fileExtn3>

I want to table each group on a separate line by subject and sender. The issue is that I can only get one of the fields to break out correctly (like <hash?>), but the other fields, <fileName?> and <fileExtn?>, are lumped together like this:

    <hash1>      <fileName1> <fileExtn1>
                 <fileName2> <fileExtn2>
                 <fileName3> <fileExtn3>

This works for one field:

    .... | spath output=hash path=foo{}.blah
    | mvexpand hash
    | spath input=hash
    | table hash subject sender

but I don't know how to apply this method to multiple fields and make sure the hash, fileName, and fileExtn all line up in a single formatted line with subject and sender.

Any help greatly appreciated, thank you!
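What I've been experimenting with, based on mvzip examples I've seen, is roughly the following; the path names are placeholders mirroring my example above, and I haven't gotten it working end to end:

    .... | spath output=hash path=foo{}.fileHash
    | spath output=fileName path=foo{}.fileName
    | spath output=fileExtn path=foo{}.fileExtn
    | eval zipped=mvzip(mvzip(hash, fileName, "|"), fileExtn, "|")
    | mvexpand zipped
    | eval hash=mvindex(split(zipped, "|"), 0), fileName=mvindex(split(zipped, "|"), 1), fileExtn=mvindex(split(zipped, "|"), 2)
    | table subject sender hash fileName fileExtn

Is that the right general direction?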
I am scheduling an alert with a cron expression for every 5 minutes: */5 * * * *. Everything is going fine, but when I check the "Searches, reports, and alerts" section, the "Next scheduled time" shows as None. This is a standalone environment. Please help me find out what exactly is going wrong.

Thanks in advance
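For what it's worth, this is how I've been trying to inspect the scheduler's view of the search (the alert name below is a placeholder for mine), though I'm not sure what status values to expect:

    index=_internal sourcetype=scheduler savedsearch_name="my_alert"
    | stats count by status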
Sorry for the newbie question, but I can't seem to figure out how to use HEC. I am using a free cloud account. I first went into Settings -> Data Input and created an HEC token. But nowhere on that page does it tell me what the endpoint should be.

I found a document that talks about three different ways of creating an HEC depending on your software type. While I know I don't have Enterprise, is the free account "Self Serve" or "Managed"? I assume it is "Managed", because the first thing you are supposed to do is go to "Global Settings" from the HEC page to enable HEC; however, when I go to my HEC page there isn't a Global Settings link.

Now, if I follow the instructions there, it says to go to Settings -> Add Data -> Monitor -> HEC, which I did, but it appeared to just go through the same steps as when I went to Settings -> Data Input. Regardless, I went through the process and got yet another token.

Further down, the document has a "How to send data to the HEC" section and shows this as the endpoint, "<protocol>://input-<host>:<port>/<endpoint>", which is fine, except it doesn't tell you what "host" is. I figured it would be the same as the URL I use to log in, "prd-p-0qk3h.splunkcloud.com"; however, the DNS entry for http-inputs-prd-p-0qk3h.splunkcloud.com doesn't exist, nor does input-prd-p-0qk3h.splunkcloud.com.

So, at this point I am stuck. If anyone can get me over this hurdle, I'd greatly appreciate it.
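For completeness, this is what I've been attempting from the command line, with my guess at the hostname substituted in (token redacted):

    curl -k "https://http-inputs-prd-p-0qk3h.splunkcloud.com/services/collector/event" \
        -H "Authorization: Splunk <my-token>" \
        -d '{"event": "hello world"}'

It fails before it even gets to authentication, since the hostname doesn't resolve.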
I am trying to figure out if there is a query that will tell me which forwarder some of the data in my indexers came from. This question comes up because we have a syslog server that gathers files from multiple servers, and the forwarder on the syslog server reads those syslog files and sends them to the indexers. But in doing this, it updates the host to be the server that is named in each of the files.

What I want to know is which of our syslog servers did the actual forwarding of that data over to the indexer. Is that in the metadata somewhere that can be queried? Thanks.
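PS: My current best guess is to look at the forwarder-connection metrics on the indexers, something like the following (untested), though I suspect this only shows which forwarders are connecting, not which forwarder delivered a specific event:

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(_time) as last_seen by hostname, sourceIp, destPort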
Hello,

I have been using the Linux Auditd app, which has been great, but I noticed that the learnt_posix_identities lookup filters out the root user:

    [|inputlookup auditd_indices] [|inputlookup auditd_sourcetypes] type="USER_START" acct=* NOT acct=root NOT auid=0 terminal=/dev/tty* OR NOT addr=?
    | dedup auid
    | table auid acct
    | rename auid as _key
    | rename acct as user
    | outputlookup append=true learnt_posix_identities

A lot of my syscalls are coming from root, and the dashboards display "unknown user". I could just manually edit the KV Store to add root; however, I wanted to understand why this filter is here, to make sure I don't break something.

Regards, Dave
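PS: If it helps frame the question, the change I'm contemplating is simply dropping the two NOT clauses (untested), but I'd like to confirm it won't break anything downstream first:

    [|inputlookup auditd_indices] [|inputlookup auditd_sourcetypes] type="USER_START" acct=* terminal=/dev/tty* OR NOT addr=?
    | dedup auid
    | table auid acct
    | rename auid as _key
    | rename acct as user
    | outputlookup append=true learnt_posix_identities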
Short Version

Is there a way to see who the owner is of the currently opened dashboard? This is similar to, but different from, being able to see who the current user is that is looking at a dashboard (| rest splunk_server=local /services/authentication/current-context). The idea would be to hide that query in the dashboard in order to manipulate the data results. Note: I know there are REST calls to see all knowledge objects and their owners; I'm specifically interested in the currently active dashboard.

Use Case

If you had a dashboard that displayed data based on who is actively looking at it, you could leverage the REST query I pasted above. The end result might be something like this (link); there are likely a number of ways to achieve similar ends. However, this requires the user logging into Splunk. What if you wanted to email a PDF copy of a dashboard to a user/user group but have the results customized based on the recipient? In the end you will likely need to create a dashboard instance for each destination group, but how do you minimize the manual work of individual customizations? Further, unlike viewing saved searches as the user (vs. the owner), the PDF/report delivery engine will not recognize the user; it will only use the owner of the dashboard.

My thought is that if you use macros for the vast majority of your searches, you can at least minimize the number of places you need to make changes as the need arises. The next piece is: when you create that copy of the dashboard, you change the owner to the recipient. If you could replace that hidden search that would otherwise look at who the user is and pass tokenized arguments as variables to display data, could you do the same thing if that hidden search instead looked at who owns the current copy of the dashboard?

Anyone else gone down this route, either to identify the current dashboard owner or in support of the ultimate use case?
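The closest I've gotten is pulling the owner from the views endpoint, but that requires hard-coding the dashboard name rather than detecting "the dashboard I'm currently in" (sketch; the dashboard name is a placeholder):

    | rest splunk_server=local /servicesNS/-/-/data/ui/views
    | search title="my_dashboard"
    | table title, eai:acl.owner, eai:acl.app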
Hi, I have an error message that is stopping any data from being shown in the data summary, and I can't add any data as .zip or .csv. I see real-time Windows logs being pumped in, but I can't actually add data manually. I am forwarding data to localhost and receiving on port 9997; since I changed forwarding and receiving, I have had this error message.

Are there default forwarding and receiving ports? Will a re-install of the Splunk Enterprise web interface give me back default settings so I can manually add data again? I can get data in via Lookups, but I really need to be able to add data manually.

Thanks for any help.
I have an example of a custom Python script for displaying events from a DB in Splunk. Can someone help me, following the same example, do this in Java?

    import sys, time
    from splunklib.searchcommands import \
        dispatch, GeneratingCommand, Configuration, Option, validators

    @Configuration()
    class GenerateHelloCommand(GeneratingCommand):
        count = Option(require=True, validate=validators.Integer())

        def generate(self):
            for i in range(1, self.count + 1):
                text = 'Hello World %d' % i
                yield {'_time': time.time(), 'event_no': i, '_raw': text}

    dispatch(GenerateHelloCommand, sys.argv, sys.stdin, sys.stdout, __name__)
Search optimization question for y'all: We have an accelerated data model to try to drive improved performance for some dashboards. It is working, but... for the one *really* large class we have, it seems to take way longer than it should for an accelerated, tstats-based search. Here is one of the tstats commands; I'm eager to see if there's something obvious I'm missing.

The main issue seems to be the values(<field>) statements immediately after the FROM clause. If I remove those, the search drops from ~80 seconds to ~8 seconds. I.e., those values(*) items are costly. But I don't see another way to get them into the results, because some events don't have those fields. (I.e., putting them in the BY clause will make those events disappear, which is not what we need.) How do I get a fast return of all records and their field values?

(NOTE: In other contexts, I've gotten around this by adding a custom field in the data model that concatenates the fields I want, then re-extracting (rex) them after the tstats line. But that seems like a ridiculous workaround, and in this case, because of the number of fields I need to return, I might as well just throw _raw into the data model, which also seems ridiculous.)

    | tstats summariesonly=t
        values(All_Activity.sessionID) values(All_Activity.personID)
        values(All_Activity.chapterTitle) values(All_Activity.sectionName)
        values(All_Activity.behaviorDetail) values(All_Activity.action)
        values(All_Activity.year) values(All_Activity.searchTerm)
        values(All_Activity.termName) values(All_Activity.videoname)
        FROM datamodel="etext_behavior"
        WHERE All_Activity.bookTitle="Design Your First Year Experience 2020 edition" All_Activity.course="LAS 101 Fall 2020" earliest=1597714329 latest=1603071129
        BY _time All_Activity.risingID All_Activity.course All_Activity.bookTitle span=1s
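One idea I keep circling back to but haven't benchmarked: tstats appears to support a fillnull_value argument, which, if I understand it correctly, would let me move those fields into the BY clause without dropping the events that lack them. Roughly:

    | tstats summariesonly=t fillnull_value="NULL" count
        FROM datamodel="etext_behavior"
        WHERE All_Activity.bookTitle="Design Your First Year Experience 2020 edition" All_Activity.course="LAS 101 Fall 2020" earliest=1597714329 latest=1603071129
        BY _time All_Activity.risingID All_Activity.course All_Activity.bookTitle
            All_Activity.sessionID All_Activity.personID All_Activity.chapterTitle
            All_Activity.sectionName All_Activity.behaviorDetail All_Activity.action
            All_Activity.year All_Activity.searchTerm All_Activity.termName
            All_Activity.videoname span=1s

I'm not sure which Splunk version introduced fillnull_value, or whether a 13-field BY clause would actually be faster in practice.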
Hi, I'm new at Splunk and signed up for free Splunk Cloud. I set up a universal forwarder on a Windows server and connected this forwarder to my instance of Splunk Cloud. I can see that there is a connection on the firewall, and also in Splunk, on the cloud monitoring console under Forwarders, I can see this machine sending some data.

Then I wanted to send more data and added these sections to the inputs.conf in system/local on the Windows server:

    [WinEventLog]
    interval = 60
    evt_resolve_ad_obj = 0
    evt_dc_name = vDC01.xxxx.yyyyy
    evt_dns_name = xxxxx.yyyyy

    [WinEventLog://Security]
    disabled = 0
    start_from = oldest
    current_only = 0
    evt_resolve_ad_obj = 1
    checkpointInterval = 5

Now I wonder why I cannot see any data in Splunk for that. Because the global section has

    [default]
    index = default

I should find it in the default index, but there is no data there. There is certainly enough data in the Security event log to transfer.

I also wonder what to write in inputs.conf when the Windows version is localized to German, but I found nothing on the web, so I think WinEventLog://Security is correct.

Then I went through the data I can see, and I wonder if ack=false is a problem?

    2-02-2020 14:22:56.866 +0000 INFO Metrics - group=tcpin_connections, ingest_pipe=0, 194.208.5.50:53158:9997, connectionType=cookedSSL, sourcePort=53158, sourceHost=xxxxxxxx, sourceIp=yyyyyy, destPort=9997, kb=0.3212890625, _tcp_Bps=10.612885479692524, _tcp_KBps=0.01036414597626223, _tcp_avg_thruput=0.3515837042852563, _tcp_Kprocessed=2229.7880859375, _tcp_eps=0.03225801057657302, _process_time_ms=0, evt_misc_kBps=0, evt_raw_kBps=0, evt_fields_kBps=0, evt_fn_kBps=0, evt_fv_kBps=0, evt_fn_str_kBps=0, evt_fn_meta_dyn_kBps=0, evt_fn_meta_predef_kBps=0, evt_fn_meta_str_kBps=0, evt_fv_num_kBps=0, evt_fv_str_kBps=0, evt_fv_predef_kBps=0, evt_fv_offlen_kBps=0, evt_fv_fp_kBps=0, build=24fd52428b5a, version=8.1.0.1, os=Windows, arch=x64, hostname=zzzzzzzzzz, guid=38460E6F-B4AF-479B-B3ED-717E41DD40A5, fwdType=uf, ssl=true, lastIndexer=54.156.189.210:9997, ack=false

Then I googled and found that I have to add a data source under Settings | Data | "Data sources" (not sure of the correct translation). When I go to this function, I think something is missing here:

- "Local sources": here I see HTTP and am able to add new sources (under Actions).
- "Forwarded sources": here everything is empty; there is no button to add anything.

If I understand correctly, I have to add the Windows event log here?

Thank you! Regards, Juergen
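PS: In case my intent isn't clear, this is the stanza I think I should end up with. The index name is my own guess and would have to be created on the cloud stack first, and I'm assuming the Security channel name is not localized on a German Windows:

    [WinEventLog://Security]
    disabled = 0
    start_from = oldest
    current_only = 0
    evt_resolve_ad_obj = 1
    checkpointInterval = 5
    index = wineventlog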
I have events that unfortunately use a space instead of a 0 in their timestamp field. The timestamp goes down to 6 decimal places, so there can be as many as 5 leading spaces in the decimal-seconds section. Each event starts with the timestamp as below. As you can see, it has a leading space, and I'd like to change that to a 0:

    [12-02-2020:08.31.44. 15133] SIP IN:

I have tried using SEDCMD in props.conf, but it didn't seem to work on the newly indexed events. Is my regex not correct, or am I way off?

    [sip_sbc]
    SEDCMD-replace_space=s/^(\[[0-9-:\.]{20}\]) ()/\10\2/g

Edit: These are my current props.conf settings. This works fine for the timestamps that have 6 digits after the dot, but any of them that have leading spaces fail to get the proper timestamp:

    [sip_sbc]
    BREAK_ONLY_BEFORE_DATE =
    DATETIME_CONFIG =
    LINE_BREAKER = ----------------------------------------------------------------------------------------([\r\n]+)
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    TIME_FORMAT = %m-%d-%Y:%H.%M.%S.%6N
    TIME_PREFIX = [
    category = Custom
    pulldown_type = 1
    BREAK_ONLY_BEFORE_TIME =
    disabled = false
    MUST_BREAK_AFTER =

Edit No. 2: I have added a custom datetime.xml file for the app. It does have an effect, but it's not working quite right. It doesn't pad the leading spaces with 0s; it just removes the spaces and therefore causes the subseconds to be much higher than they are supposed to be on some timestamps:

    <datetime>
        <define name="custom_dateformat" extract="month, day, year">
            <text><![CDATA[\[(\d+)-(\d+)-(\d+)]]></text>
        </define>
        <define name="custom_timeformat" extract="hour, minute, second, subsecond">
            <text><![CDATA[\[\d+-\d+-\d+:(\d+).(\d+).(\d+).\s*(\d+)]]></text>
        </define>
        <timePatterns>
            <use name="custom_timeformat" />
        </timePatterns>
        <datePatterns>
            <use name="custom_dateformat" />
        </datePatterns>
    </datetime>

This datetime.xml caused an event with the timestamp 01-08-2021:11.28.23.  8213 (note the 2 spaces before 8213) to be parsed as 1/8/21 11:28:23.821 (this is 8 tenths of a second after what the timestamp should be). It should be 1/8/21 11:28:23.008213.
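Edit No. 3 (planned): The SEDCMD variant I'm going to try next. Since sed's s///g won't re-scan text it has already replaced, my understanding is that I need one substitution per possible space, and I'm assuming multiple SEDCMD classes run in the order of their names (all untested guesses on my part):

    [sip_sbc]
    # each pass turns the first remaining space after the subsecond dot into a 0
    SEDCMD-pad1 = s/^(\[\d{2}-\d{2}-\d{4}:\d{2}\.\d{2}\.\d{2}\.0*) /\10/
    SEDCMD-pad2 = s/^(\[\d{2}-\d{2}-\d{4}:\d{2}\.\d{2}\.\d{2}\.0*) /\10/
    SEDCMD-pad3 = s/^(\[\d{2}-\d{2}-\d{4}:\d{2}\.\d{2}\.\d{2}\.0*) /\10/
    SEDCMD-pad4 = s/^(\[\d{2}-\d{2}-\d{4}:\d{2}\.\d{2}\.\d{2}\.0*) /\10/
    SEDCMD-pad5 = s/^(\[\d{2}-\d{2}-\d{4}:\d{2}\.\d{2}\.\d{2}\.0*) /\10/

(Here \10 should mean capture group 1 followed by a literal 0, since there is only one group in the expression.)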
Hi All,

I have installed the Splunk Cloud Gateway app, configured proxy settings, and registered a device. However, I am unable to load any assigned dashboards; the mobile device displays 'Oops, something went wrong'.

Extract from the mobile log:

    [2020-12-02T14:06:52Z] [9475:] [ApplicationRequestManager] [error] CallingRequest SingleClientCallingRequest(A82153C3-AFB8-4AC4-AF2B-9118A15229B9)[parent: SingleClientRequest[requestID:A82153C3-AFB8-4AC4-AF2B-9118A15229B9 context:{SpacebridgeDashboardListCommand}]] failed with error: spacebridgeError(SpacebridgeProtobuf.Spacebridge_SpacebridgeMessage: id: "9abdb705-9472-4f57-99b8-93e313550f12" to: "`\232\262\205\311\317\256,\260L\323\001\330\330\vnR\247\361PEn\262\027\033\353\312\344L\221F\331" replyToMessageId: "A82153C3-AFB8-4AC4-AF2B-9118A15229B9" error { code: ERROR_MESSAGE_UNDELIVERABLE } )

Dashboard status: (screenshot)

Cloud app internal log: (screenshot)

I'd appreciate any support/assistance.

Many thanks
I have a JSON file like below:

    {"env":"UAT","label":"jenkins-17887.api.v2.dm.btc","App":"dm-d-services","rlmtemplate":"f2_api_fed","lastupdate":2020-11-23 11:09:78:455,"region":"APAC"}
    {"env":"UAT","label":"jenkins-17687.api.v2.dm.btc","App":"dt-s-services","rlmtemplate":"f3_api_fed","lastupdate":2020-11-23 11:025:79:475,"region":"APAC"}
    {"env":"UAT","label":"jenkins-18657.api.v2.dm.btc","App":"dt-s-services","rlmtemplate":"f3_api_fed","lastupdate":2020-11-23 11:025:79:475,"region":"APAC"}
    {"env":"UAT","label":"jenkins-17637.api.v2.dm.btc","App":"dt-s-services","rlmtemplate":"f3_api_fed","lastupdate":2020-11-23 11:025:79:475,"region":"APAC"}

In Splunk, _raw contains valid JSON data for all events. The issue is that all fields are multivalued, each containing two copies of the value from the JSON object. I am forwarding the JSON data to Splunk, and the props.conf below is installed on the indexer, not on the heavy forwarder:

    [test_json]
    INDEXED_EXTRACTIONS = JSON
    KV_MODE = none
    AUTO_KV_JSON = false
    SHOULD_LINEMERGE = false

FYI, I am using Splunk 8.0. I also tried setting KV_MODE = JSON, but that is not working either.
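My current theory (unverified) is that these settings need to live on different instances: INDEXED_EXTRACTIONS on whatever parses the data first (the forwarder), and the KV_MODE / AUTO_KV_JSON overrides on the search head, so the same fields aren't extracted twice. Roughly:

    # props.conf on the forwarder (structured parsing happens there)
    [test_json]
    INDEXED_EXTRACTIONS = JSON
    SHOULD_LINEMERGE = false

    # props.conf on the search head (suppress a second, search-time extraction)
    [test_json]
    KV_MODE = none
    AUTO_KV_JSON = false

Can anyone confirm whether that placement is what's biting me here?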
    ERROR [monki_HMCatalogSyncJob::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] -[J= U= C=] (monki) (0000MM1K) [CatalogVersionSyncJob] Finished synchronization in 0d 00h:00m:07s:499ms. There were errors during the synchronization!

    INFO [monki_HMCatalogSyncJob::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] -[J= U= C=] (monki) (0000ML9S) [CatalogVersionSyncJob] Finished synchronization in 0d 00h:00m:17s:091ms. No errors.