All Posts


Use the appendpipe command to add a synthetic result when the subsearch finds nothing:

| where
    [ | loadjob $stoermeldungen_sid$
      | where stoerCode IN ("S00")
      | addinfo
      | where importZeit_unixF >= relative_time(info_max_time,"-d@d") AND importZeit_unixF <= relative_time(info_max_time,"@d")
      | stats count as dayCount by zbpIdentifier
      | sort -dayCount
      | head 10
      | appendpipe [ stats count as Count | eval zbpIdentifier="Nothing found" | where Count=0 | fields - Count ]
      | table zbpIdentifier ]
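For anyone who wants to see the trick in isolation, here is a minimal run-anywhere sketch; the where 1=0 just simulates the "nothing found yesterday" case, and zbpIdentifier is reused from the thread purely for illustration. When the result set is empty, the stats count inside appendpipe produces Count=0, so the placeholder row survives; when real results exist, Count is greater than zero and the placeholder is dropped.

| makeresults count=5
| where 1=0
| stats count as dayCount by zbpIdentifier
| appendpipe
    [ stats count as Count
    | eval zbpIdentifier="Nothing found"
    | where Count=0
    | fields - Count ]
| table zbpIdentifier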
Dear experts,

Based on the following search:

<search id="subsearch_results">
  <query>
    search index="iii" search_name="nnn" Umgebung="uuu" isbName="isb" status IN ("ALREADY*", "NO_NOTIF*", "UNCONF*", "NOTIF*") zbpIdentifier NOT 453-8888 stoerCodeGruppe NOT ("GUT*")
    | eval importZeit_unixF = strptime(importZeit, "%Y-%m-%dT%H:%M:%S.%N%Z")
    | eval importZeit_humanF = strftime(importZeit_unixF, "%Y-%m-%d %H:%M:%S")
    | table importZeit_humanF importZeit_unixF zbpIdentifier status stoerCode stoerCodeGruppe
  </query>
  <earliest>$t_time.earliest$</earliest>
  <latest>$t_time.latest$@d</latest>
  <done>
    <condition>
      <set token="stoermeldungen_sid">$job.sid$</set>
    </condition>
  </done>
</search>

I try to load some data with:

<query>
  | loadjob $stoermeldungen_sid$
  | where stoerCode IN ("S00")
  | where
      [ | loadjob $stoermeldungen_sid$
        | where stoerCode IN ("S00")
        | addinfo
        | where importZeit_unixF &gt;= relative_time(info_max_time,"-d@d") AND importZeit_unixF &lt;= relative_time(info_max_time,"@d")
        | stats count as dayCount by zbpIdentifier
        | sort -dayCount
        | head 10
        | table zbpIdentifier ]
  | addinfo
  | where ....

Basic idea:
- the subsearch first derives the top 10 elements based on the number of yesterday's error messages
- based on the subsearch result, the 7-day history is then read and displayed (not fully shown in the example above)

All works fine except when the subsearch finds no messages. If no error messages of the given type were recorded yesterday, the subsearch returns a result which causes the following error message in the dashboard:

Error in 'where' command: The expression is malformed. An unexpected character is reached at ')'.

The where command in question is the one which should take the result of the subsearch (3rd line of code). The error message is just not nice for the end user; it would be better to show an empty chart when no data is found.

The question is: how can the result of the subsearch be fixed so that the main search still runs and returns a proper empty result, and therefore an empty graph instead of the "not nice" error message?

Thank you for your help.
As I said above, there is a steep learning curve with SPL's JSON flattening schema. But it is learnable, and the syntax is reasonably logical. (Logical, not intuitive or self-explanatory.)

First, the easiest way to examine each individual array element is with mvexpand. Like

| spath path=json.msg
| spath input=json.msg path=query{}
| mvexpand query{}
| rename query{} as query_single

Here, xxx{} is SPL's explicit notation for an array that is flattened from a structure; an array is most commonly known in SPL as a multivalue field. You will see a lot of this word in the documentation.

Second, if you only want the first element, simply take the first element using mvindex.

| spath path=json.msg
| spath input=json.msg path=query{}
| eval first_query = mvindex('query{}', 0)

Test these over any of the emulations @ITWhisperer and I supplied above, and compare with your real data.
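If you want a run-anywhere test before touching the real events, something like the sketch below works; the msg field and the alpha/beta/gamma values are made up for illustration and stand in for the json.msg payload. The output should show three rows from mvexpand, each carrying the same first_query value from mvindex.

| makeresults
| eval msg="{\"query\": [\"alpha\", \"beta\", \"gamma\"]}"
| spath input=msg path=query{}
| eval first_query = mvindex('query{}', 0)
| mvexpand query{}
| rename query{} as query_single
| table query_single first_query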
The queue = nullQueue setting is not valid in props.conf. Make sure the host and source names match those of the incoming data. Consider adding a sourcetype stanza for the data instead. The stanzas belong on the first full instance of Splunk that processes the data (indexers and HFs). Put them in the default directory of a custom app.
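As a sketch only, a sourcetype-based variant could look like the following, placed in etc/apps/<your_app>/default on the indexers or heavy forwarder; the sourcetype name vmware:vcenter is a placeholder, so use whatever sourcetype the events actually arrive with.

props.conf

[vmware:vcenter]
TRANSFORMS-null = setnull

transforms.conf

[setnull]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue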
Hi @Gorwinn ,

let me understand: you want to take all the events except the ones containing the word "envoy", is that correct?

First, how are you ingesting these logs? If you are using a Heavy Forwarder, you have to put props.conf and transforms.conf on the first full Splunk instance that the data passes through, in other words on the Heavy Forwarder if present, or otherwise on the Indexer.

Then, the transform names must be unique in props.conf:

[host::vcenter]
TRANSFORMS-null = setnull

[source::/var/log/remote/catchall/*/*.log]
TRANSFORMS-null2 = setnull

Then check the regex using the rex command in Splunk.

Anyway, the issue usually is the location of the conf files (obviously I suppose that you restarted Splunk after modifying the conf files!).

The documentation is at https://docs.splunk.com/Documentation/Splunk/9.4.0/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues

Ciao.
Giuseppe
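As a quick sanity check of the regex before restarting anything, a search over the already-indexed events (the index name is a placeholder) shows what the setnull transform would have matched:

index=<your_index> host=vcenter
| eval filter_result=if(match(_raw, "envoy"), "would be dropped", "would be kept")
| stats count by filter_result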
Hello All!

I am trying to discard a certain event before the Indexers ingest it, using the keyword envoy. Below is an example:

timestamp vcenter envoy-access 2024-12-29T23:53:56.632Z info envoy[139855859431232] [Originator@6876 sub=Default] 2024-12-29T23:53:50.392Z POST /sdk HTTP/1.1 200 via_upstream

I tried creating props and transforms conf in $SPLUNK_HOME/etc/system/local but it's not working. My questions are whether my stanzas are correct and whether I should put them in the local directory. Appreciate any assistance you can provide, thank you.

props.conf

[nullQueue]
queue = nullQueue

[host::vcenter]
TRANSFORMS-null = setnull

[source::/var/log/remote/catchall/(IPAddress of Vcenter)/*.log]
TRANSFORMS-null = setnull

transforms.conf

[setnull]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue
Hello hello! I think what you are looking for here is the `transaction` command, but it can have some extra overhead. I'll leave some examples here to see if they work for you. Since your requirement is simple, I suggest using the `stats` command instead of `transaction`. If you wanted to look at a specific EventID first and then another specific EventID after, `transaction` might be easier to implement.

Version using `transaction`:

index=lalala source=lalala (EventID=4720 OR (EventID=4728 AND PrimaryGroupId IN (512,516,517,518,519)))
| transaction UserName maxspan=5m
| search EventID=4720 AND EventID=4728

Version using `stats`:

index=lalala source=lalala (EventID=4720 OR (EventID=4728 AND PrimaryGroupId IN (512,516,517,518,519)))
| stats values(EventID) AS EventIDs by UserName
| search EventIDs=4720 EventIDs=4728

Edit: Fixing the code blocks.
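If you want to sanity-check the `stats` version without touching the real index, here is a small run-anywhere sketch with made-up usernames and synthetic events; alice should survive the final search (both 4720 and 4728) while bob should not:

| makeresults count=3
| streamstats count as n
| eval UserName=case(n=1, "alice", n=2, "alice", n=3, "bob")
| eval EventID=case(n=1, 4720, n=2, 4728, n=3, 4720)
| stats values(EventID) AS EventIDs by UserName
| search EventIDs=4720 EventIDs=4728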
I'm trying to create a search in which the following should be done:
- look for a user creation event (ID 4720)
- and then look (for the same user) whether there is a follow-up group-add event (4728) for privileged groups like (512, 516, etc.)

My SPL so far was:

index=lalala source=lalala EventID=4720 OR 4728 PrimaryGroupId IN (512,516,517,518,519)

BUT that way I only look for either a user creation OR a user being added as a privileged user, and I want to combine both. I understand that I need to somehow connect those two searches but I don't know how exactly.
Hi,

After completing the upgrade from Splunk Enterprise version 9.3.2 to v9.4, the KVstore will no longer start. Splunk has yet to do the KVstore upgrade to v7 because the KVstore cannot start. We were already on 4.2 wiredTiger. There is no [kvstore] stanza in server.conf, so everything should be default.

The relevant lines from splunkd.log are:

INFO KVStoreConfigurationProvider [9192 MainThread] - Since x509 is not enabled - using a default config from [sslConfig] for Mongod mTLS authentication
WARN KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO MongodRunner [7668 KVStoreConfigurationThread] - Starting mongod with executable name=mongod-4.2.exe version=kvstore version 4.2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --dbpath C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --storageEngine wiredTiger
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using cacheSize=1.65GB
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --port 8191
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --timeStampFormat iso8601-utc
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --oplogSize 200
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --keyFile C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter enableLocalhostAuthBypass=0
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter oplogFetcherSteadyStateMaxFetcherRestarts=0
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --replSet 4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --bind_ip=0.0.0.0 (all ipv4 addresses)
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCAFile C:\Program Files\Splunk\etc\auth\cacert.pem
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsAllowConnectionsWithoutCertificates for version 4.2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslMode requireSSL
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidHostnames
WARN KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO KVStoreConfigurationProvider [9192 MainThread] - "SAML cert db" registration with KVStore successful
INFO KVStoreConfigurationProvider [9192 MainThread] - "Auth cert db" registration with KVStore successful
INFO KVStoreConfigurationProvider [9192 MainThread] - "JsonWebToken Manager" registration with KVStore successful
INFO KVStoreBackupRestore [1436 KVStoreBackupThread] - thread started.
INFO KVStoreConfigurationProvider [9192 MainThread] - "Certificate Manager" registration with KVStore successful
INFO MongodRunner [7668 KVStoreConfigurationThread] - Found an existing PFX certificate
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCertificateSelector subject=SplunkServerDefaultCert
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidCertificates
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsDisabledProtocols noTLS1_0,noTLS1_1
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCipherConfig ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --noscripting
WARN MongoClient [7668 KVStoreConfigurationThread] - Disabling TLS hostname validation for localhost
ERROR MongodRunner [5692 MongodLogThread] - mongod exited abnormally (exit code 14, status: exited with code 14) - look at mongod.log to investigate.
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
WARN KVStoreConfigurationProvider [5692 MongodLogThread] - Action scheduled, but event loop is not ready yet
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store changed status to failed. KVStore process terminated..
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=KVStore service will not start because kvstore process terminated
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.
ERROR KVStoreBulletinBoardManager [7668 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details.
INFO KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Mongod service shutting down

mongod.log contains the following:

W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
W CONTROL [main] Option: sslCAFile is deprecated. Please use tlsCAFile instead.
W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.
W CONTROL [main] Option: sslCertificateSelector is deprecated. Please use tlsCertificateSelector instead.
W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
W NETWORK [main] Mixing certs from the system certificate store and PEM files. This may produced unexpected results.
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
W ASIO [main] No TransportLayer configured during NetworkInterface startup
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W ASIO [main] No TransportLayer configured during NetworkInterface startup
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I CONTROL [initandlisten] MongoDB starting : pid=4640 port=8191 dbpath=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=[redacted]
I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
I CONTROL [initandlisten] db version v4.2.24
I CONTROL [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
I CONTROL [initandlisten] allocator: tcmalloc
I CONTROL [initandlisten] modules: enterprise
I CONTROL [initandlisten] build environment:
I CONTROL [initandlisten] distmod: windows-64
I CONTROL [initandlisten] distarch: x86_64
I CONTROL [initandlisten] target_arch: x86_64
I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "C:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2" }, security: { javascriptEnabled: false, keyFile: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 1.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1689M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
F STORAGE [initandlisten] Reason: 129: Operation not supported
F - [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
F - [initandlisten] \n\n***aborting after fassert() failure\n\n

Does anyone have any idea how to resolve this?

Thanks,
See my reply here if it can help https://community.splunk.com/t5/Deployment-Architecture/Splunk-Storage-Sizing-Guidelines-and-calculations/m-p/708258/highlight/true#M29013
Things have improved a lot thanks to tsidxWritingLevel enhancements. If you set tsidxWritingLevel=4, the maximum available today, and all your buckets have already been written with this level, you can achieve a compression ratio of 5.35:1. This means 55 TB of raw logs will occupy around 10 TB (tsidx + raw) on disk. At least this is what we have in our deployment; the number can vary depending on the type of data you are ingesting.

Here is the query I used, running over All Time, starting from the one present in the Monitoring Console >> Indexing >> Index and Volumes >> Index Detail: Instance

| rest splunk_server=<oneOfYourIndexers> /services/data/indexes datatype=all
| join type=outer title [ | rest splunk_server=<oneOfYourIndexers> /services/data/indexes-extended datatype=all ]
| `dmc_exclude_indexes`
| eval warm_bucket_size = coalesce('bucket_dirs.home.warm_bucket_size', 'bucket_dirs.home.size')
| eval cold_bucket_size = coalesce('bucket_dirs.cold.bucket_size', 'bucket_dirs.cold.size')
| eval hot_bucket_size = if(isnotnull(cold_bucket_size), total_size - cold_bucket_size - warm_bucket_size, total_size - warm_bucket_size)
| eval thawed_bucket_size = coalesce('bucket_dirs.thawed.bucket_size', 'bucket_dirs.thawed.size')
| eval warm_bucket_size_gb = coalesce(round(warm_bucket_size / 1024, 2), 0.00)
| eval hot_bucket_size_gb = coalesce(round(hot_bucket_size / 1024, 2), 0.00)
| eval cold_bucket_size_gb = coalesce(round(cold_bucket_size / 1024, 2), 0.00)
| eval thawed_bucket_size_gb = coalesce(round(thawed_bucket_size / 1024, 2), 0.00)
| eval warm_bucket_count = coalesce('bucket_dirs.home.warm_bucket_count', 0)
| eval hot_bucket_count = coalesce('bucket_dirs.home.hot_bucket_count', 0)
| eval cold_bucket_count = coalesce('bucket_dirs.cold.bucket_count', 0)
| eval thawed_bucket_count = coalesce('bucket_dirs.thawed.bucket_count', 0)
| eval home_event_count = coalesce('bucket_dirs.home.event_count', 0)
| eval cold_event_count = coalesce('bucket_dirs.cold.event_count', 0)
| eval thawed_event_count = coalesce('bucket_dirs.thawed.event_count', 0)
| eval home_bucket_size_gb = coalesce(round((warm_bucket_size + hot_bucket_size) / 1024, 2), 0.00)
| eval homeBucketMaxSizeGB = coalesce(round('homePath.maxDataSizeMB' / 1024, 2), 0.00)
| eval home_bucket_capacity_gb = if(homeBucketMaxSizeGB > 0, homeBucketMaxSizeGB, "unlimited")
| eval home_bucket_usage_gb = home_bucket_size_gb." / ".home_bucket_capacity_gb
| eval cold_bucket_capacity_gb = coalesce(round('coldPath.maxDataSizeMB' / 1024, 2), 0.00)
| eval cold_bucket_capacity_gb = if(cold_bucket_capacity_gb > 0, cold_bucket_capacity_gb, "unlimited")
| eval cold_bucket_usage_gb = cold_bucket_size_gb." / ".cold_bucket_capacity_gb
| eval currentDBSizeGB = round(currentDBSizeMB / 1024, 2)
| eval maxTotalDataSizeGB = if(maxTotalDataSizeMB > 0, round(maxTotalDataSizeMB / 1024, 2), "unlimited")
| eval disk_usage_gb = currentDBSizeGB." / ".maxTotalDataSizeGB
| eval currentTimePeriodDay = coalesce(round((now() - strptime(minTime,"%Y-%m-%dT%H:%M:%S%z")) / 86400, 0), 0)
| eval frozenTimePeriodDay = coalesce(round(frozenTimePeriodInSecs / 86400, 0), 0)
| eval frozenTimePeriodDay = if(frozenTimePeriodDay > 0, frozenTimePeriodDay, "unlimited")
| eval freeze_period_viz_day = currentTimePeriodDay." / ".frozenTimePeriodDay
| eval total_bucket_count = toString(coalesce(total_bucket_count, 0), "commas")
| eval totalEventCount = toString(coalesce(totalEventCount, 0), "commas")
| eval total_raw_size_gb = round(total_raw_size / 1024, 2)
| eval avg_bucket_size_gb = round(currentDBSizeGB / total_bucket_count, 2)
| eval compress_ratio = round(total_raw_size_gb / currentDBSizeGB, 2)." : 1"
| fields title, datatype currentDBSizeGB, totalEventCount, total_bucket_count, avg_bucket_size_gb, total_raw_size_gb, compress_ratio, minTime, maxTime freeze_period_viz_day, disk_usage_gb, home_bucket_usage_gb, cold_bucket_usage_gb, hot_bucket_size_gb, warm_bucket_size_gb, cold_bucket_size_gb, thawed_bucket_size_gb, hot_bucket_count, warm_bucket_count, cold_bucket_count, thawed_bucket_count, home_event_count, cold_event_count, thawed_event_count, homePath, homePath_expanded, coldPath, coldPath_expanded, thawedPath, thawedPath_expanded, summaryHomePath_expanded, tstatsHomePath, tstatsHomePath_expanded, maxTotalDataSizeMB, frozenTimePeriodInSecs, homePath.maxDataSizeMB, coldPath.maxDataSizeMB, maxDataSize, maxHotBuckets, maxWarmDBCount
| search title=*
| table title currentDBSizeGB total_raw_size_gb compress_ratio
| where isnotnull(total_raw_size_gb)
| where isnotnull(compress_ratio)
| stats sum(currentDBSizeGB) as currentDBSizeGB, sum(total_raw_size_gb) as total_raw_size_gb
| eval compress_ratio = round(total_raw_size_gb / currentDBSizeGB, 2)." : 1"
Hi,
the other guys have already shown you how you can technically add _time or event_time into your stats command, but I think a much more important thing is to understand and decide what _time you actually need with stats. Usually stats is used to make some statistical aggregations of values; another option is to join some data together. In both cases you must understand your data and what you really want to show with your _time field value. I suppose that in many cases it's much harder to decide which _time to use: the _time from one event, some span (e.g. 1min, 1hour, etc.), the average or median time of the events, or something else?
r. Ismo
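To make the span option concrete, a sketch like this (the 1-hour span is chosen arbitrarily, field names are taken from the thread) groups events into hourly buckets instead of keeping each event's exact timestamp:

index=*
| bin _time span=1h
| stats count by _time, user, ip, action
| iplocation ip
| sort -count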
The stats command discards all fields not mentioned in the command so, in this case, only the count, user, ip, and action fields are available. Fields cannot be re-added after they've been discarded by such a command.

The solution is to include the desired field(s) in the stats command:

| stats count by event_time, user, ip, action

This may or may not make sense, depending on your data and the desired output.
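Another option, if you want one row per user/ip/action combination but still want to see when it last happened, is to aggregate the time rather than group by it; a sketch using the same field names as above:

index=*
| stats count latest(_time) as last_seen by user, ip, action
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| iplocation ip
| sort -count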
It's an FYI for everyone using the 3rd party tool. Chances are many are not paying attention to the subsecond field value.
That's right. Go to the third party to fix the issue, if possible.
So, the solution is perhaps to use a different third-party solution, or raise a defect with said third party and get them to fix their data corruption? (Not a Splunk problem!?)
Are you trying to perform the stats by _time also? Just add your event_time into the stats command. Change the event_time format to only hour and minute, or just by hour?

index=*
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats count by event_time, user, ip, action
| iplocation ip
| sort -count