All Topics


I'm trying to create a search that does the following:
- look for a user-creation event (EventID 4720)
- then check whether, for the same user, there is a follow-up group-add event (EventID 4728) for privileged groups (512, 516, etc.)

My SPL so far was:

index=lalala source=lalala (EventID=4720 OR EventID=4728) PrimaryGroupId IN (512,516,517,518,519)

But that way I only find either a user creation OR a user being added to a privileged group, and I want to correlate both. I understand that I need to somehow connect those two searches, but I don't know how exactly.
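A minimal sketch of one way to connect them, assuming the account name lands in a field like TargetUserName and that the privileged-group filter should apply only to the 4728 events (adjust both to your actual field names):

index=lalala source=lalala (EventID=4720 OR (EventID=4728 PrimaryGroupId IN (512,516,517,518,519)))
| stats values(EventID) as event_ids earliest(_time) as first_seen by TargetUserName
| where mvcount(event_ids) > 1

The stats rolls both event types up per user, and the where keeps only users that produced both a 4720 and a qualifying 4728.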
Hi, after completing the upgrade from Splunk Enterprise version 9.3.2 to v9.4, the KVstore will no longer start. Splunk has yet to do the KVstore upgrade to v7, as the KVstore cannot start. We were already on 4.2 wiredtiger. There is no [kvstore] stanza in server.conf, so everything should be default. The relevant lines from splunkd.log are:

INFO KVStoreConfigurationProvider [9192 MainThread] - Since x509 is not enabled - using a default config from [sslConfig] for Mongod mTLS authentication
WARN KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO MongodRunner [7668 KVStoreConfigurationThread] - Starting mongod with executable name=mongod-4.2.exe version=kvstore version 4.2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --dbpath C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --storageEngine wiredTiger
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using cacheSize=1.65GB
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --port 8191
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --timeStampFormat iso8601-utc
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --oplogSize 200
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --keyFile C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter enableLocalhostAuthBypass=0
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --setParameter oplogFetcherSteadyStateMaxFetcherRestarts=0
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --replSet 4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --bind_ip=0.0.0.0 (all ipv4 addresses)
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCAFile C:\Program Files\Splunk\etc\auth\cacert.pem
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsAllowConnectionsWithoutCertificates for version 4.2
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslMode requireSSL
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidHostnames
WARN KVStoreConfigurationProvider [9192 MainThread] - Action scheduled, but event loop is not ready yet
INFO KVStoreConfigurationProvider [9192 MainThread] - "SAML cert db" registration with KVStore successful
INFO KVStoreConfigurationProvider [9192 MainThread] - "Auth cert db" registration with KVStore successful
INFO KVStoreConfigurationProvider [9192 MainThread] - "JsonWebToken Manager" registration with KVStore successful
INFO KVStoreBackupRestore [1436 KVStoreBackupThread] - thread started.
INFO KVStoreConfigurationProvider [9192 MainThread] - "Certificate Manager" registration with KVStore successful
INFO MongodRunner [7668 KVStoreConfigurationThread] - Found an existing PFX certificate
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCertificateSelector subject=SplunkServerDefaultCert
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslAllowInvalidCertificates
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --tlsDisabledProtocols noTLS1_0,noTLS1_1
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --sslCipherConfig ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256
INFO MongodRunner [7668 KVStoreConfigurationThread] - Using mongod command line --noscripting
WARN MongoClient [7668 KVStoreConfigurationThread] - Disabling TLS hostname validation for localhost
ERROR MongodRunner [5692 MongodLogThread] - mongod exited abnormally (exit code 14, status: exited with code 14) - look at mongod.log to investigate.
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
WARN KVStoreConfigurationProvider [5692 MongodLogThread] - Action scheduled, but event loop is not ready yet
ERROR KVStoreBulletinBoardManager [5692 MongodLogThread] - KV Store changed status to failed. KVStore process terminated.
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=KVStore service will not start because kvstore process terminated
ERROR KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.
ERROR KVStoreBulletinBoardManager [7668 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details.
INFO KVStoreConfigurationProvider [7668 KVStoreConfigurationThread] - Mongod service shutting down

mongod.log contains the following:

W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
W CONTROL [main] Option: sslCAFile is deprecated. Please use tlsCAFile instead.
W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.
W CONTROL [main] Option: sslCertificateSelector is deprecated. Please use tlsCertificateSelector instead.
W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
W NETWORK [main] Mixing certs from the system certificate store and PEM files. This may produced unexpected results.
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
W ASIO [main] No TransportLayer configured during NetworkInterface startup
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W ASIO [main] No TransportLayer configured during NetworkInterface startup
W NETWORK [main] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I CONTROL [initandlisten] MongoDB starting : pid=4640 port=8191 dbpath=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=[redacted]
I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
I CONTROL [initandlisten] db version v4.2.24
I CONTROL [initandlisten] git version: 5e4ec1d24431fcdd28b579a024c5c801b8cde4e2
I CONTROL [initandlisten] allocator: tcmalloc
I CONTROL [initandlisten] modules: enterprise
I CONTROL [initandlisten] build environment:
I CONTROL [initandlisten] distmod: windows-64
I CONTROL [initandlisten] distarch: x86_64
I CONTROL [initandlisten] target_arch: x86_64
I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "C:\Program Files\Splunk\etc\auth\cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidCertificates: true, allowInvalidHostnames: true, certificateSelector: "subject=SplunkServerDefaultCert", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "4EA2F2AF-2584-4BB0-A2C4-414E7CB68BC2" }, security: { javascriptEnabled: false, keyFile: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 1.65 } } }, systemLog: { timeStampFormat: "iso8601-utc" } }
W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
W NETWORK [initandlisten] sslCipherConfig parameter is not supported with Windows SChannel and is ignored.
I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1689M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
F STORAGE [initandlisten] Reason: 129: Operation not supported
F - [initandlisten] Fatal Assertion 28595 at src\mongo\db\storage\wiredtiger\wiredtiger_kv_engine.cpp 928
F - [initandlisten] \n\n***aborting after fassert() failure\n\n

Does anyone have any idea how to resolve this? Thanks
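For anyone triaging a similar failure, a couple of hedged first steps (standard Splunk CLI commands; paths assume a default Windows install and won't by themselves explain the WiredTiger "Operation not supported" error):

"C:\Program Files\Splunk\bin\splunk.exe" show kvstore-status
"C:\Program Files\Splunk\bin\splunk.exe" btool server list kvstore --debug

The first confirms the KV store state splunkd reports; the second verifies whether any [kvstore] settings are actually in effect from a non-default location despite server.conf appearing empty.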
Hey, I want to add a _time column after the stats command, but I couldn't work out the best command to use. For example:

index=* | eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S") | stats count by user, ip, action | iplocation ip | sort -count

How can I add this field? Thanks
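A minimal sketch of one common approach: stats only keeps the fields it groups by or aggregates, so the timestamp has to be carried through as an aggregate (latest, earliest, values, etc.) rather than added back afterwards:

index=*
| stats count latest(_time) as latest_time by user, ip, action
| eval event_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S")
| iplocation ip
| sort -count

Swap latest() for earliest() depending on which timestamp the table should show.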
Hi all, I have the use case below. I need to create a Splunk alert for this scenario: detections will be created from Splunk logs for specific events like "Authentication failed", such as exceeding X number of failed logins over Y time. Below is the Splunk search I am using:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* | search userAgent OR "actionName":"login" "timestamp":"2025-01-07T*" | sort -_time

I am not able to write the correct search query to find "Authentication failed" exceeding, for example, 3 times. Attached screenshot. Thanks for your help. Dieudonne.
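A minimal sketch of the usual threshold pattern, assuming the failure text is searchable in the raw events and the account is in a field such as userName (both are assumptions; adjust to the actual audit-log fields):

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* "Authentication failed"
| bin _time span=10m
| stats count by _time, userName
| where count > 3

Saved as an alert over a matching time range (e.g. last 10 minutes, run every 10 minutes), this fires whenever any single user exceeds three failures in a window.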
Hello, we are Splunk Cloud subscribers and want to use the NetApp for Splunk add-on. We have two on-site deployment servers (one Windows, one Linux) and an on-site heavy forwarder. My interpretation of the instructions is that we install the NetApp add-ons (ONTAP indexes and extractions) on the cloud-hosted search head. The Cloud instructions leave me with the impression that we may also need to use the heavy forwarder as a data collection node for the NetApp add-ons; there we would manually install the app components within the Splunk home /etc/apps directory. Looking at the deployment server and the heavy forwarder, both Splunk home directories have directory permissions set to 700. We're hoping this method of installation does not apply to us, and that the cloud installation process automated much of this and obviated the need to manually configure the heavy forwarder. Upon completing these add-on installations via the cloud-hosted search head, are there any additional steps or actions we will need to take to complete the installation, aside from the NetApp appliance configuration? Thank you, Terry
I'm currently going over our alerts, cleaning them up and optimizing them. However, I recall there being a "best practice" when it comes to writing SPL. Obviously there may be caveats to it, but what is the usual best practice when structuring your SPL commands? Is this correct or not?

search, index, source, sourcetype | where, filter, regex | rex, replace, eval | stats, chart, timechart | sort, sortby | table, fields, transpose | dedup, head | eventstats, streamstats | map, lookup
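For illustration, a minimal sketch of the commonly cited ordering (filter as early and as narrowly as possible, transform late); the index and field names here are made up:

index=web sourcetype=access_combined status>=500
| fields clientip, uri_path, status
| eval is_timeout=if(status==504, 1, 0)
| stats count sum(is_timeout) as timeouts by uri_path
| sort -count
| table uri_path, count, timeouts

The rough rationale: index/sourcetype/keyword filters run at the indexers, fields limits what gets shipped to the search head, per-event eval/rex come next, aggregation (stats) collapses the data, and presentation commands (sort, table, rename) go last.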
Hello, I have a .NET Transaction Rule named "/ws/rest/api". The matching rule is a regex: /ws/rest/api/V[0-9].[0-9]/pthru

A couple of examples of the URLs that would match this rule are:
/ws/rest/api/V3.0/pthru/workingorders
/ws/rest/api/V4.0/pthru/cart
/ws/rest/api/V4.0/pthru/cart/items

I am splitting the rule by URI segments 4, 5, and 6, but the resulting name is: /ws/rest/api.V4.0pthruCart

Is there a way to add "/" between each segment, or is there a better way to do this that gives us a better-looking transaction name? Thanks for your help, Tom
Inherited Splunk deployment. It looks like authentication was set up with ProxySSO. I am unfamiliar with this, and we are planning on migrating the ProxySSO authentication to SAML. In the past I have used the web UI for authentication methods like LDAP; ProxySSO seems to be a backend conf file? I'm not sure how to proceed: will there be a conflict if I just add the SAML authentication method, or will it simply override the ProxySSO configuration? Or does the ProxySSO conf need to be removed first and SAML configured after? If that is the case, what is the method to remove it? Thank you
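For orientation, a minimal sketch of how the switch looks in authentication.conf (the stanza name and IdP URL are illustrative; real SAML settings come from your identity provider):

[authentication]
authType = SAML
authSettings = my_saml_stanza

[my_saml_stanza]
idpSSOUrl = https://idp.example.com/sso
entityId = splunk-search-head

Since authType is a single value, pointing it at SAML deactivates ProxySSO even if its stanza stays in the file; removing the old ProxySSO settings (and any related trustedIP/SSO settings in server.conf and web.conf) afterwards is mostly cleanup rather than a prerequisite.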
Hello, first, I am aware that there are multiple posts regarding my question, but I can't seem to apply them to my scenario. Please see the example below. There are two fields, location and name. I need to filter out names that contain "2" and count the remaining names by location. I came up with this search, but the problem is that it does not include location A (because its count is zero). Please suggest; I appreciate your help. Thanks

| makeresults format=csv data="location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3"
| search name != "*2*"
| stats count by location

Data:

location    name
location A  name A2
location B  name B1
location B  name B2
location C  name C1
location C  name C2
location C  name C3

Expected output:

location    count(name)
location A  0
location B  1
location C  2
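A minimal sketch of one way to keep the zero-count groups: instead of filtering events out before stats, flag them and sum the flag, so every location still reaches stats:

| makeresults format=csv data="location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3"
| eval keep=if(match(name, "2"), 0, 1)
| stats sum(keep) as count by location

Because no events are discarded, location A survives with count 0 instead of disappearing.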
Hello Team, how can I search a specific app's user success and failure events by month, for Jan to Dec? My base search:

index=my_index app=a | table app action user | eval Month=strftime(_time,"%m") | stats count by user Month

I am not getting any results from the above search.
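A minimal sketch of a likely fix: table drops _time before the eval runs, so Month ends up empty. Deriving Month first (and grouping by action, assuming success/failure is recorded in that field) should work:

index=my_index app=a earliest=-12mon@mon latest=@mon
| eval Month=strftime(_time, "%Y-%m")
| stats count by user, action, Month

Using "%Y-%m" rather than "%m" keeps months from different years distinct and sorts chronologically.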
Our Splunk security alert integration stopped working last month (December): we'd send an alert automatically from Splunk Cloud to our onmicrosoft.com@amer.teams.ms email address. Is support for this being deprecated on the Microsoft side? Or is this a whitelisting issue? Has anyone else experienced a similar problem?
Here is my raw data in the Splunk query:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<application xmlns="http://www.abc.com/services/listService">
<header>
<user>def@ghi.com</user>
<password>al3yu2430nald</password>

I want to mask the password value and show it in the Splunk output as:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<application xmlns="http://www.abc.com/services/listService">
<header>
<user>def@ghi.com</user>
<password>xxxxxxxxxxxx</password>

How can I do that?
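A minimal sketch of the standard index-time approach: a SEDCMD in props.conf on the instance that parses this data (the sourcetype name is a placeholder, and this only affects events indexed after the change):

[your:soap:sourcetype]
SEDCMD-mask_password = s/<password>[^<]*<\/password>/<password>xxxxxxxxxxxx<\/password>/g

For data that is already indexed, a search-time alternative is rewriting the displayed event, e.g. | eval _raw=replace(_raw, "<password>[^<]*</password>", "<password>xxxxxxxxxxxx</password>"), though that masks only the output, not what is stored on disk.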
Hello, I have two queries where the indices are different and there is a common field, dest_ip, which is my focus (same field name in both indices). Please note that there are also some other common fields such as src_ip, action, etc.

Query 1:

index=*corelight* sourcetype=*corelight* server_name="*microsoft.com*"

additional fields: action, ssl_version, ssl_cipher

Query 2:

index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100

additional fields: _time, src_zone, src_ip, dest_zone, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name

I'm trying to output all the corresponding server_names for each dest_ip, as a table with all the listed fields from both query outputs.

I'm new to Splunk and learning my way; I've tried the following so far.

A) Using join (which is usually very slow and sometimes doesn't give me a result):

index=*corelight* sourcetype=*corelight* server_name=*microsoft.com* | join dest_ip [ search index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100 | fields src_ip, src_user, dest_ip, rule, action, app, transport, version, session_end_reason, dvc_name, bytes_out ] | dedup server_name | table _time, src_ip, dest_ip, transport, dest_port, app, rule, server_name, action, session_end_reason, dvc_name | rename _time as "timestamp", transport as "protocol"

B) Using an OR:

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100) | dedup src_ip, dest_ip | table src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name | rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"

My questions:

1. Would you suggest a better way to write/construct my queries above?
2. In my OR output, I only see a couple of columns populating values (e.g. src_ip, dest_ip, action) while the rest are empty. My guess is they're populating because I'm doing an inner join and these are the common fields between the two. Since I'm unable to populate the others, maybe I need to do a left join?
3. Can you kindly guide me on how to rename fields specific to each index when combining queries using OR? I've tried a few times but haven't been successful. For example, in my OR statement above, how and where in the query do I rename the field ssl_cipher in index=*corelight* to ENCRYPT_ALGORITHM?

Many thanks!
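A minimal sketch of the usual stats-based correlation (generally preferred over join for this pattern), which also covers the per-index rename question: derive the renamed field with an eval before stats, so each source's values land in their own column:

(index=*corelight* sourcetype=*corelight* server_name="*microsoft.com*") OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| eval ENCRYPT_ALGORITHM=if(match(index, "corelight"), ssl_cipher, null())
| stats values(server_name) as SNI values(ENCRYPT_ALGORITHM) as ENCRYPT_ALGORITHM values(src_zone) as From values(dest_zone) as To values(transport) as Protocol values(dest_port) as Port values(app) as Application values(action) as Action values(dvc_name) as DC by dest_ip

Each dest_ip row then combines the Corelight server names with the firewall zone/app fields. The empty columns in the plain OR version are not a join problem: each event simply lacks the other source's fields until something like stats merges the two sides by dest_ip.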
Yesterday I upgraded Splunk on one of my deployment servers from 9.3.1 with the 9.4.0 RPM on an Amazon Linux host, and ran into the following error after starting Splunk with /opt/splunk/bin/splunk start --accept-license --no-prompt --answer-yes:

(typical batch of startup messages here ... until)

sh: line 1: 16280 Segmentation fault      (core dumped) splunk migrate renew-certs 2>&1
ERROR while running renew-certs migration.

Repeated attempts at starting failed to produce anything different, so I ended up having to revert to the prior version. This is, in fact, the first failed upgrade I've had since I started using this product over 10 years ago. I have backed out of the upgrade, but considering the vagueness of this error message, I'm asking the community whether anyone has seen this before.
FYI, it's possible if you have HF => third party s2s => indexer.
I'm building a search which takes a URL and returns all events from separate indexes/products where a client (user endpoint, server, etc) attempted access.  The goal is to answer "who tried to visit url X". I have reviewed the default CIM data models here: https://docs.splunk.com/Documentation/CIM/5.1.0/User/CIMfields However, none seem to fit this specific use case.  Can anyone sanity check me to see if I've overlooked one?  Thanks!
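For what it's worth, the Web data model is the usual candidate here: it normalizes proxy/web events and carries url, src, and user fields. A minimal tstats sketch, assuming your web/proxy add-ons are CIM-mapped and the model is accelerated (the URL value is a placeholder):

| tstats count from datamodel=Web where Web.url="*example.com/path*" by Web.src, Web.user, Web.url
| rename Web.* as *

If the traffic of interest also appears in firewall or DNS sources, Network_Traffic and Network_Resolution cover those angles, but for "who tried to visit URL X" specifically, Web is the closest fit.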
I need to upgrade the Splunk Universal Forwarder version on all the existing Windows 2016 and 2019 servers where it is installed. I am using Splunk Enterprise as a search head and indexer. Is there a way to upgrade the old version to the latest without uninstalling the old one and installing the new one? And how can this task be done for all the servers together instead of one by one?
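A minimal sketch of the standard in-place route on Windows: running the newer UF MSI over the existing installation upgrades it and preserves the existing configuration (inputs.conf, outputs.conf, etc.). Pushed through your usual software-distribution tooling (SCCM, GPO, Ansible, and so on), it covers all servers in one pass; the MSI filename below is illustrative:

msiexec.exe /i splunkforwarder-9.x.x-x64-release.msi AGREETOLICENSE=Yes /quiet /norestart

AGREETOLICENSE=Yes is required for a silent install. Note that the deployment server can push apps and configuration but does not upgrade the forwarder binary itself, which is why an external distribution mechanism is the usual answer for bulk upgrades.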
Hi everyone, I am trying to create a dashboard out of a search query, but I am getting stuck: I am unable to show the host details in the dashboard.

The query is:

index="vm-details"
| eval date=strftime(_time, "%Y-%m-%d")
| stats dc(host) as host_count, values(host) as hosts by date
| sort date

I am getting host_count and date in the dashboard, but my requirement is that the hostnames should appear when hovering over host_count. I tried using values(host) directly, but that didn't work. Can someone help? CC: @ITWhisperer Thanks, Veeresh Shenoy S
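A minimal sketch of one workaround, since a plain Simple XML table has no native hover tooltip for a different field: collapse the host list into a single delimited column shown next to the count (or used as a drilldown token):

index="vm-details"
| eval date=strftime(_time, "%Y-%m-%d")
| stats dc(host) as host_count, values(host) as hosts by date
| eval hosts=mvjoin(hosts, ", ")
| sort date

If a true hover tooltip is a hard requirement, that generally points to a custom visualization or Dashboard Studio rather than a stock Simple XML table.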
Hello, I have a requirement to collect and monitor logs from several machines running in a private network. These machines are generating logs that need to be sent to Splunk Cloud for monitoring. Here's what I've done so far:

1. Installed Universal Forwarder: I have installed the Splunk Universal Forwarder on each machine that generates logs.
2. Configured forwarding: I used the command ./splunk add forward-server prd-xxx.splunkcloud.com:9997 to set the server address for forwarding logs to Splunk Cloud.
3. Set up monitoring: I added the directory to be monitored with the command ./splunk add monitor /var/log.

However, I'm unable to see any logs on the Splunk Cloud dashboard at "prd-xxx.splunkcloud.com:9997". I have a question regarding port 9997: it seems that this port should be open on Splunk Cloud, but I don't see an option to configure this in Splunk Cloud, as there is no "Settings > Forwarding and Receiving > Receive data" section available. How can I resolve this issue and ensure that logs are properly sent to and visible on Splunk Cloud? Thanks.
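A minimal sketch of the usual missing piece: Splunk Cloud forwarders are normally configured with the Universal Forwarder credentials app (downloaded from the Splunk Cloud UI, typically named splunkclouduf.spl) rather than a manual add forward-server, because the app carries the correct indexer list, port, and TLS certificates. Path and credentials below are placeholders:

./splunk install app /tmp/splunkclouduf.spl -auth admin:yourpassword
./splunk restart

The receiving port is already open on the Splunk Cloud side, which is why there is no "Receive data" setting to configure there; without the credentials app, the forwarder's connection is typically rejected for lacking the expected TLS configuration.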
Dear all, kindly suggest how to sort data in stats command output by event time.

Example requirement: VPN login details per source user in the last one hour.

SPL query:

index="network" sourcetype=vpn | eval "Login_Time" = strftime(_time, "%m/%d/%Y %I:%M:%S %p") | stats values(SourceUser) as User values(Login_Time) as VPN_Login_Time count by _time host

Date           Host        User    VPN Login Time            Count
1/7/2025 0:00  10.10.8.45  Amar    01/07/2025 06:01:25 AM    3
                           Rajesh  01/07/2025 06:30:21 AM
                           Zainab  01/07/2025 06:50:49 AM

The challenge in the above example output: Amar logged in at 01/07/2025 06:30:21 AM and Zainab logged in at 01/07/2025 06:01:25 AM, but in the output the users were sorted in alphabetical order and the login times were sorted in descending order, so the two columns no longer line up. And the User field cannot be added as a third field in the by expression.
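A minimal sketch of one way to keep each user paired with their own login time: make the user part of the by clause instead of collecting users with values():

index="network" sourcetype=vpn
| stats earliest(_time) as login_epoch count by host, SourceUser
| eval VPN_Login_Time=strftime(login_epoch, "%m/%d/%Y %I:%M:%S %p")
| sort login_epoch
| table host, SourceUser, VPN_Login_Time, count

values() sorts each multivalue column independently, which is why the User and Login_Time lists lose their pairing; grouping by the user preserves the event-level association, and sorting on the raw epoch field avoids string-order surprises with formatted timestamps.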