All Topics



Hi Everyone,

I have 5 instances of Splunk running on my Mac (Big Sur v11.6): SH+IDX, DPL, HFWD, UF (sending to HFWD), UF (sending to IDX). All are working pretty well, but there are a few hiccups running on macOS (Big Sur, 11.6), and the new major one I've run into is that there is NO introspection (Resource Usage) collected! The "resource_usage.log" is completely empty, and running:

    /opt/splunk_dpl/bin/splunkd instrument-resource-usage -p 8087 --with-kvstore --debug

writes:

    I-data gathering (Resource Usage) not supported on this platform.
    DEBUG RU_main - I-data gathering (IOWait Statistics) not supported on this OS
    WARN WatchdogActions - Initialization failed for action=pstacks. Deleting.
    DEBUG InstrumentThread - Entering 0th iter (thread KVStoreOperationStatsInstrumentThread)
    DEBUG InstrumentThread - Entering 0th iter (thread KVStoreCollectionStatsInstrumentThread)
    DEBUG InstrumentThread - Entering 0th iter (thread KVStoreServerStatusInstrumentThread)
    DEBUG InstrumentThread - Entering 0th iter (thread KVStoreProfilingDataInstrumentThread)
    DEBUG InstrumentThread - Entering 0th iter (thread KVStoreReplicaSetStatsInstrumentThread)

1. Does this really mean there is no support for resource usage on a Mac, or am I getting something wrong here? To me there is not really that much difference between a Mac and a Linux box (while knowing there are some differences), and most commands that run on Linux run the exact same way on a Mac.
2. If this does not come out of the box, how can it be enabled?
3. Which processes run on Linux to fulfill the "Resource Usage" and IOWait stats that one could try to move to the Mac?
4. Does anyone know exactly how and where the scripts/processes are configured in Splunk to facilitate this?

Any core details would be most appreciated.

Cheers, Bjarne
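On questions 3 and 4: as far as I know, on Linux there is no separate helper process to copy over; the splunkd instrument-resource-usage child process reads /proc directly, which is why the feature is platform-gated. A possible workaround, purely a sketch and not a supported replacement for introspection, is to approximate the Resource Usage data with your own scripted input. The app name, script name, and index below are hypothetical:

# $SPLUNK_HOME/etc/apps/mac_introspection/local/inputs.conf   (hypothetical app)
[script://$SPLUNK_HOME/etc/apps/mac_introspection/bin/resource_usage.sh]
interval = 60
sourcetype = mac:resource_usage
index = mac_introspection
disabled = 0

The script itself can be as small as `ps -Ao pid,pcpu,pmem,rss,comm | grep -i splunk`, echoing one snapshot per interval to stdout so Splunk indexes it. This gives basic per-process CPU/memory figures but does not reproduce splunkd's IOWait statistics.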
Hi Splunkers,

I have a dashboard with multiple panels, which all use a shared time picker from token field2. When I use the following drilldown link to send the token Gucid_token, the time range used is the dashboard's default time range:

<drilldown>
  <link target="_blank">/app/appname/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token=$click.value2</link>
</drilldown>

But when I click the drilldown link, I would prefer to use a different hard-coded time range, like "Last 7 days", instead of the dashboard's original default time range. So I added form.field2=Last 7 days to my drilldown link after the first token form.Gucid_token=$click.value2, as below, but unfortunately it doesn't work:

<drilldown>
  <link target="_blank">/app/appname/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token=$click.value2$&amp;form.field2=Last%207%20days</link>
</drilldown>

Does anyone know how to pass a hard-coded time range through this drilldown link?

Thanks in advance.

Kevin
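One approach, assuming field2 is the token of a time input: a time input does not accept a single label like "Last 7 days" in the URL; it exposes .earliest and .latest sub-tokens, so the link can set those directly (%40 is the URL-encoded @). A sketch:

<drilldown>
  <link target="_blank">/app/appname/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token=$click.value2$&amp;form.field2.earliest=-7d%40h&amp;form.field2.latest=now</link>
</drilldown>

If field2 is actually a dropdown rather than a time input, the value in the URL has to match one of the dropdown's internal values rather than its display label.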
This is my current WMI setup:

[WMI:WinLogSysTst]
disabled = 0
event_log_file = System
index = winlogsystst
interval = 5
server = localhost
current_only = 0

How can I tell it to get data older than when I created the input? I only get recent data, not the old events. Thank you.
I am taking events from three source types (same index; two common fields present across all three) and creating a table with the results. The events are indexed using a "timestamps" field that is present in the raw data (the result of an API call to a monitoring tool and a subsequent JSON payload retrieval of synthetic test metrics; the value is in epoch time and is pushed into _time using a transform aligned with the source types).

Here's the query I'm using:

index=smoketest_* sourcetype=smoketest_json_dyn_result OR sourcetype=smoketest_json_dyn_duration OR sourcetype=smoketest_json_dyn_statuscode
| rename dt.entity.synthetic_location AS synLoc, dt.entity.http_check AS httpCheck
| stats values(*) AS * by httpCheck, synLoc, _time
| rename "responseTime{}" AS "Response Time (ms)"
| table _time, synLoc, httpCheck, status, "Response Time (ms)", "Status code"

The common fields found in all three source types are "synLoc" and "httpCheck". 95% of the time, I get the desired result pictured here (requested fields from all three source types align as a single row in the table). In this example, you can see the results of two unique tests (executing every five minutes, over a 15-minute period). Since the events grabbed from the three source types all have the same _time value, this works as expected.

If, however, one or two of the source types have events with a _time value that does not match the others, this happens: again, there are two unique tests represented, but one row reflects a value from one source type at 10:01 while the two values from the other two source types are on a separate row at 10:02. Ideally, all three values should be on the same row (much like the 10:06 and 10:11 entries).

How can I alter my search query to account for this behavior?
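When the three source types carry timestamps that differ by a few seconds, the usual fix is to bucket _time to the test cadence before the stats, so near-simultaneous events land in the same row. A sketch based on the query above and the five-minute schedule described:

index=smoketest_* sourcetype=smoketest_json_dyn_result OR sourcetype=smoketest_json_dyn_duration OR sourcetype=smoketest_json_dyn_statuscode
| rename dt.entity.synthetic_location AS synLoc, dt.entity.http_check AS httpCheck
| bin _time span=5m
| stats values(*) AS * by httpCheck, synLoc, _time
| rename "responseTime{}" AS "Response Time (ms)"
| table _time, synLoc, httpCheck, status, "Response Time (ms)", "Status code"

span=5m aligns each event to the start of its five-minute bucket; if the drift between source types is only a minute or so, a smaller span such as 2m also works, as long as all three events fall into the same bucket.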
Hi All,

We are receiving the timestamp issues below:

0000 WARN DateParserVerbose [104706 merging_0] - Accepted time format has changed ((?i)(?<![\d\.])(20\d\d)([-/])([01]?\d)\2([012]?\d|3[01])\s+([012]?\d):([0-6]?\d):([0-6]?\d)\s*(?i)((?:(?:UT|UTC|GMT(?![+-])|CET|CEST|CETDST|MET|MEST|METDST|MEZ|MESZ|EET|EEST|EETDST|WET|WEST|WETDST|MSK|MSD|IST|JST|KST|HKT|AST|ADT|EST|EDT|CST|CDT|MST|MDT|PST|PDT|CAST|CADT|EAST|EADT|WAST|WADT|Z)|(?:GMT)?[+-]\d\d?:?(?:\d\d)?)(?!\w))?), possibly indicating a problem in extracting timestamp

12-27-2021 14:33:04.972 +0000 WARN DateParserVerbose [104095 merging_0] - Accepted time format has changed ((?i)(?<![\w\.])(?i)(?i)(0?[1-9]|[12]\d|3[01])(?:st|nd|rd|th|[,\.;])?([\- /]) {0,2}(?i)(?:(?i)(?<![\d\w])(jan|\x{3127}\x{6708}|feb|\x{4E8C}\x{6708}|mar|\x{4E09}\x{6708}|apr|\x{56DB}\x{6708}|may|\x{4E94}\x{6708}|jun|\x{516D}\x{6708}|jul|\x{4E03}\x{6708}|aug|\x{516B}\x{6708}|sep|\x{4E5D}\x{6708}|oct|\x{5341}\x{6708}|nov|\x{5341}\x{3127}\x{6708}|dec|\x{5341}\x{4E8C}\x{6708})[a-z,\.;]*|(?i)(0?[1-9]|1[012])(?!:))\2 {0,2}(?i)(20\d\d|19\d\d|[9012]\d(?!\d))(?![\w\.])), possibly indicating a problem in extracting timestamps

All the Linux servers that send logs to Splunk are in the EST timezone, and we expect the events to be indexed in that timezone, but we are still seeing these warnings and cannot identify which servers are causing the timezone issues. Are there any other checks we should perform to resolve the errors above?

Thanks,
Sharada
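Two checks that may help narrow it down. First, the warnings themselves do not always name the source, but comparing event time with index time surfaces hosts whose events land far from when they were indexed, which is the usual symptom of a wrong timezone (an EST offset typically shows up as a lag of roughly 4-5 hours). A sketch:

index=* earliest=-24h
| eval lag_hours = round((_indextime - _time) / 3600, 1)
| stats count by host, sourcetype, lag_hours
| where lag_hours >= 1 OR lag_hours <= -1
| sort - count

Second, once the offending hosts/sourcetypes are known, an explicit TZ (and ideally TIME_FORMAT and TIME_PREFIX) in props.conf on the indexers or heavy forwarders, for example `TZ = America/New_York` under the relevant sourcetype stanza, removes the guesswork that triggers the DateParserVerbose warnings.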
Hi, I need to find the error codes and then, per ID, count the number of IPS values.

2021-12-26 22:38:59,248 INFO CUS.AbCD-Server-2-0000000 [LoginService] load idss[IPS=987654*1234-1,productCode=000]
2021-12-26 22:38:59,280 ERROR CUS.AbCD-Server-2-0000000 [LoginService] authorize: [AB_100] This is huge value. ConfigApp[DAILY_STATIC_SECOND_PIN]
2021-12-26 22:38:59,248 INFO CUS.AbCD-Server-2-0000000 [LoginService] load idss[IPS=987654*1234-1,productCode=000]
2021-12-26 22:38:59,280 ERROR CUS.AbCD-Server-2-0000000 [LoginService] authorize: [AB_100] This is huge value. ConfigApp[DAILY_STATIC_SECOND_PIN]
2021-12-26 22:38:59,248 INFO CUS.AbCD-Server-3-9999999 [LoginService] load idss[IPS=123456*4321-1,productCode=000]
2021-12-26 22:38:59,280 ERROR CUS.AbCD-Server-3-9999999 [LoginService] authorize: [AB_500] This is huge value. ConfigApp[DAILY_STATIC_SECOND_PIN]

Expected output:

ID                            IPS               count
CUS.AbCD-Server-2-0000000     987654*1234-1     2
CUS.AbCD-Server-2-9999999     123456*4321-1     1

Any ideas? Thanks.
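A sketch under the assumption that ID, IPS, and the error code are not yet extracted as fields; the index name is a placeholder and the rex patterns are guesses based on the sample lines, so adjust them to your data:

index=your_index ("load idss" OR "authorize:")
| rex "(?:INFO|ERROR)\s+(?<ID>CUS\.\S+)\s+\[LoginService\]"
| rex "IPS=(?<IPS>[^,\]]+)"
| rex "authorize: \[(?<error_code>[A-Z]+_\d+)\]"
| stats values(IPS) AS IPS, count(eval(isnotnull(error_code))) AS count BY ID

values(IPS) collects the IPS seen on the INFO lines for that ID, while the count only increments on ERROR lines (where error_code was extracted); if you instead want to count every event per ID, replace the conditional count with a plain count.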
Hi,

Our Event Hub capacity is limited, so we would like to know whether we can also use a storage account to ingest the data via this add-on. The add-on's details describe Event Hub as the ingestion mechanism: Microsoft Defender Advanced Hunting Add-on for Splunk | Splunkbase

Kind regards
Hi, I am stuck implementing the use case below; please help me with it.

I have a lookup, url_requested.csv:

http_url                 host
*002redir023.dns04*      test
*yahoo*                  test

and another CSV file, malicious.csv:

url                             Description
xyzsaas.com                     C&C
http://002redir023.dns04.com    malicious

I have to check the url values in url_requested.csv against those in malicious.csv and return only the url and Description values that have a match in malicious.csv. The url_requested.csv lookup has a url column with wildcards prefixed and suffixed. I have added the wildcard configuration in transforms.conf following this: https://community.splunk.com/t5/Splunk-Search/Can-we-use-wildcard-characters-in-a-lookup-table/m-p/94513.

My query:

| inputlookup malicious.csv
| table url description
| lookup url_requested.csv  http_url as url outputnew host
| search host=*
| fields - host

I am getting no results running this query. Please let me know where I am going wrong and help me with the solution.

The result I am looking for:

url                             Description
http://002redir023.dns04.com    malicious
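One common reason this returns nothing: match_type = WILDCARD only takes effect when the search references the lookup definition (the transforms.conf stanza name), not the raw CSV filename. A sketch, assuming the stanza is called url_requested (use whatever name your definition actually has, and note that field names are case sensitive, so Description rather than description):

# transforms.conf
[url_requested]
filename   = url_requested.csv
match_type = WILDCARD(http_url)

# search
| inputlookup malicious.csv
| table url, Description
| lookup url_requested http_url AS url OUTPUTNEW host
| where isnotnull(host)
| fields - host

Using `| lookup url_requested.csv ...` bypasses the stanza and therefore the WILDCARD setting, which would explain the empty result.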
I have searched high and low for an answer, here and on the web, but it seems I can't find a suitable one.

Has anyone gotten this error while trying to get data in?

Data could not be written: /nobody/search/inputs/WinEventLog://System/start_from: oldest

I played a bit with the System log of Windows: at first I used "Local event log collection", but then changed my mind and switched it to "Remote event log collections". The first time, using "Local event log collection", I got older data too; the second time, using "Remote event log collections", I only get newer data. What can I do to reset it? In which file should I look? Thank you.
Hi All,

I need to improve the performance of the search below, which currently completes in about 132 seconds. The search looks at the last 7 days of data from firewall logs.

Search:

index="xxx" src_ip !="a.b.c.d/26" src_ip !="x.y.z.w/26" src_zone!=ABCD src_zone!=ABCDE (dest_zone = "ABCD" OR (dvc_name IN ("qwerty","abcd","xyz","asdf") AND dest_zone="XYZ")) app IN (ldap,rmi-iiop)
| lookup some_lookup ip as src_ip OUTPUT matched
| search matched!="yes"
| stats count by src_ip,action,date_mday
| stats count by src_ip,action
| search (action=allowed OR (action=blocked AND count>1))

Thanks in advance.

Regards,
Shaquib
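The literal filters are already in the base search, so most of the 132 seconds is simply scanning 7 days of raw firewall events; the SPL itself can only be trimmed a little. One small simplification is collapsing the two stats passes into a single distinct count. This is a sketch that should be functionally equivalent, so verify the output matches before adopting it:

index="xxx" src_ip!="a.b.c.d/26" src_ip!="x.y.z.w/26" src_zone!=ABCD src_zone!=ABCDE (dest_zone="ABCD" OR (dvc_name IN ("qwerty","abcd","xyz","asdf") AND dest_zone="XYZ")) app IN (ldap,rmi-iiop)
| lookup some_lookup ip AS src_ip OUTPUT matched
| search matched!="yes"
| stats dc(date_mday) AS count BY src_ip, action
| search (action=allowed OR (action=blocked AND count>1))

For a bigger win, consider running this over an accelerated data model with tstats, or maintaining a daily summary index, so the 7-day search reads pre-aggregated rows instead of raw events. Also note that src_ip!="a.b.c.d/26" compares the string literally; if the intent is to exclude those subnets, a cidrmatch()-based where clause or a CIDR-enabled lookup is needed.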
I need to trim the search result from the left up to the occurrence of "PulseSecure:" and get everything after that. Note that the text after "PulseSecure:" varies in length and content; the characters are a mix of letters, numbers, special characters, etc.

Sample:

Dec 27 06:29:37 AAAAAA PulseSecure: 2021-12-27 06:29:37 - AAAAAA  - [110.1.1.1] Default Network::aa.aa.aa(AAA_BBB)[BB_CC_EEE]

I need the result below to be saved in a field named Extracted:

2021-12-27 06:29:37 - AAAAAA  - [110.1.1.1] Default Network::aa.aa.aa(AAA_BBB)[BB_CC_EEE]
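A sketch using rex; Extracted is the field name from the question, and the pattern simply captures everything after the first "PulseSecure:" to the end of the line, regardless of length or characters:

... your base search ...
| rex field=_raw "PulseSecure:\s*(?<Extracted>.+)$"
| table _time Extracted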
Hi,

I have a shell script to restart services. I want to set up an alert that runs this shell script on a remote node, where the remote host on which the script should run is determined by the value returned in the "host" field of the Splunk query. If I place the shell script in $SPLUNK_HOME/bin/scripts, the script runs only on the Splunk server. How can I make it run on the remote node based on the host value returned by the Splunk query? Any help would be much appreciated! Thank you.
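Splunk only executes alert scripts on the search head itself, so the usual pattern is to let that local script reach out to the remote node, for example over SSH, using the host value taken from the alert's results. A rough sketch; the key-based SSH access, the "host" column name, and restart_services.sh on the remote side are all assumptions about your environment:

#!/bin/bash
# $SPLUNK_HOME/bin/scripts/restart_remote.sh
# Legacy scripted alert: the 8th argument is the path to the gzipped CSV of search results.
RESULTS_FILE="$8"

# Pull the "host" column out of the results and restart services on each unique host.
for h in $(zcat "$RESULTS_FILE" \
    | awk -F',' 'NR==1 {for (i=1; i<=NF; i++) {gsub(/"/, "", $i); if ($i == "host") c = i}}
                 NR>1 && c {gsub(/"/, "", $c); print $c}' \
    | sort -u); do
    # Requires passwordless SSH from the Splunk user to the target host (assumption).
    ssh "splunk@$h" '/opt/scripts/restart_services.sh'
done

Alternatives worth considering instead of SSH from the search head: a custom modular alert action, or leaving the restart logic on the remote hosts and triggering it via your configuration-management or orchestration tooling.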
I am attempting to migrate my KV store to wiredTiger per https://docs.splunk.com/Documentation/Splunk/8.1.1/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment After running the migrate command, I get this error:       [ansible@splunk splunk]$ sudo ./bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger Starting KV Store storage engine upgrade: Phase 1 (dump) of 2: ............................................................................................... Phase 2 (restore) of 2: Restoring data back to previous KV Store database ERROR: Failed to migrate to storage engine wiredTiger, reason=KVStore service will not start because kvstore process terminated       Looking at my mongodb.log file, I see the following:       2021-12-27T00:43:57.647Z I CONTROL [initandlisten] MongoDB starting : pid=4416 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=splunk 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] git version: 226949cc252af265483afbf859b446590b09b098 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2za-fips 24 Aug 2021 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] allocator: tcmalloc 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] modules: none 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] build environment: 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] distarch: x86_64 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] target_arch: x86_64 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] 3072 MB of memory available to the process out of 15854 MB total system memory 2021-12-27T00:43:57.647Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200 }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "mmapv1", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } } 2021-12-27T00:43:57.664Z I JOURNAL [initandlisten] journal dir=/opt/splunk/var/lib/splunk/kvstore/mongo/journal 2021-12-27T00:43:57.664Z I JOURNAL [initandlisten] recover : no journal files present, no recovery needed 2021-12-27T00:43:57.948Z I JOURNAL [durability] Durability thread started 2021-12-27T00:43:57.948Z I JOURNAL [journal writer] Journal writer thread started 2021-12-27T00:43:57.949Z I CONTROL [initandlisten] 2021-12-27T00:43:57.949Z I CONTROL [initandlisten] ** WARNING: No SSL certificate validation can be performed since no CA file has been provided 2021-12-27T00:43:57.949Z I CONTROL [initandlisten] ** Please specify an sslCAFile parameter. 2021-12-27T00:43:57.949Z I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended. 
2021-12-27T00:43:57.949Z I CONTROL [initandlisten] 2021-12-27T00:43:58.069Z I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/opt/splunk/var/lib/splunk/kvstore/mongo/diagnostic.data' 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** WARNING: mongod started without --replSet yet 1 documents are present in local.system.replset 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** Restart with --replSet unless you are doing maintenance and no other clients are connected. 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** The TTL collection monitor will not start because of this. 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] For more info see http://dochub.mongodb.org/core/ttlcollections 2021-12-27T00:43:58.100Z I STORAGE [initandlisten] 2021-12-27T00:43:58.101Z I NETWORK [initandlisten] listening via socket bound to 0.0.0.0 2021-12-27T00:43:58.101Z I NETWORK [initandlisten] waiting for connections on port 8191 ssl 2021-12-27T00:43:58.575Z I NETWORK [listener] connection accepted from 127.0.0.1:51402 #1 (1 connection now open) 2021-12-27T00:43:58.582Z I NETWORK [conn1] received client metadata from 127.0.0.1:51402 conn1: { driver: { name: "mongoc", version: "1.16.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux", version: "8.5", architecture: "x86_64" }, platform: "cfg=0x00001620c9 posix=200112 stdc=201710 CC=GCC 9.1.0 CFLAGS="-g -fstack-protector-strong -static-libgcc -L/opt/splunk-home/lib/static-libstdc" LDFLA..." } 2021-12-27T00:43:58.599Z I ACCESS [conn1] Successfully authenticated as principal __system on local from client 127.0.0.1:51402 2021-12-27T00:43:58.599Z I NETWORK [conn1] end connection 127.0.0.1:51402 (0 connections now open) mongodump 2021-12-26T17:43:59.640-0700 WARNING: --sslAllowInvalidCertificates and --sslAllowInvalidHostnames are deprecated, please use --tlsInsecure instead 2021-12-27T00:43:59.652Z I NETWORK [listener] connection accepted from 127.0.0.1:51404 #2 (1 connection now open) 2021-12-27T00:43:59.728Z I ACCESS [conn2] Successfully authenticated as principal __system on local from client 127.0.0.1:51404 2021-12-27T00:43:59.750Z I NETWORK [listener] connection accepted from 127.0.0.1:51406 #3 (2 connections now open) 2021-12-27T00:43:59.805Z I ACCESS [conn3] Successfully authenticated as principal __system on local from client 127.0.0.1:51406 mongodump 2021-12-26T17:44:00.073-0700 writing admin.system.indexes to mongodump 2021-12-26T17:44:00.075-0700 done dumping admin.system.indexes (2 documents) mongodump 2021-12-26T17:44:00.075-0700 writing config.system.indexes to mongodump 2021-12-26T17:44:00.077-0700 done dumping config.system.indexes (3 documents) mongodump 2021-12-26T17:44:00.077-0700 writing admin.system.version to mongodump 2021-12-26T17:44:00.079-0700 done dumping admin.system.version (1 document) ... a whole bunch of other dumps completing... 
mongodump 2021-12-26T17:44:00.635-0700 done dumping s_Splunk5+n+0jIfNWH9x+qdy7cD4GTT_sse_jse2D8rEiNk5kfRO1HbJ@VAjMp.c (10 documents)
2021-12-27T00:44:00.635Z I NETWORK [conn2] end connection 127.0.0.1:51404 (3 connections now open)
2021-12-27T00:44:00.635Z I NETWORK [conn3] end connection 127.0.0.1:51406 (2 connections now open)
2021-12-27T00:44:00.636Z I NETWORK [conn5] end connection 127.0.0.1:51410 (1 connection now open)
2021-12-27T00:44:00.636Z I NETWORK [conn4] end connection 127.0.0.1:51408 (0 connections now open)
2021-12-27T00:44:00.671Z I NETWORK [listener] connection accepted from 127.0.0.1:51412 #6 (1 connection now open)
2021-12-27T00:44:00.676Z I NETWORK [conn6] received client metadata from 127.0.0.1:51412 conn6: { driver: { name: "mongoc", version: "1.16.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux", version: "8.5", architecture: "x86_64" }, platform: "cfg=0x00001620c9 posix=200112 stdc=201710 CC=GCC 9.1.0 CFLAGS="-g -fstack-protector-strong -static-libgcc -L/opt/splunk-home/lib/static-libstdc" LDFLA..." }
2021-12-27T00:44:00.676Z I NETWORK [listener] connection accepted from 127.0.0.1:51414 #7 (2 connections now open)
2021-12-27T00:44:00.682Z I NETWORK [conn7] received client metadata from 127.0.0.1:51414 conn7: { driver: { name: "mongoc", version: "1.16.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux", version: "8.5", architecture: "x86_64" }, platform: "cfg=0x00001620c9 posix=200112 stdc=201710 CC=GCC 9.1.0 CFLAGS="-g -fstack-protector-strong -static-libgcc -L/opt/splunk-home/lib/static-libstdc" LDFLA..." }
2021-12-27T00:44:00.699Z I ACCESS [conn7] Successfully authenticated as principal __system on local from client 127.0.0.1:51414
2021-12-27T00:44:00.723Z I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2021-12-27T00:44:00.724Z I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2021-12-27T00:44:00.724Z I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2021-12-27T00:44:00.726Z I STORAGE [signalProcessingThread] shutdown: waiting for fs preallocator...
2021-12-27T00:44:00.726Z I STORAGE [signalProcessingThread] shutdown: final commit...
2021-12-27T00:44:00.729Z I JOURNAL [signalProcessingThread] journalCleanup...
2021-12-27T00:44:00.729Z I JOURNAL [signalProcessingThread] removeJournalFiles
2021-12-27T00:44:00.729Z I JOURNAL [signalProcessingThread] old journal file will be removed: /opt/splunk/var/lib/splunk/kvstore/mongo/journal/j._0
2021-12-27T00:44:00.730Z I JOURNAL [signalProcessingThread] Terminating durability thread ...
2021-12-27T00:44:00.828Z I JOURNAL [journal writer] Journal writer thread stopped
2021-12-27T00:44:00.828Z I JOURNAL [durability] Durability thread stopped
2021-12-27T00:44:00.828Z I STORAGE [signalProcessingThread] shutdown: closing all files...
2021-12-27T00:44:00.855Z I STORAGE [signalProcessingThread] closeAllFiles() finished
2021-12-27T00:44:00.855Z I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2021-12-27T00:44:00.855Z I CONTROL [signalProcessingThread] now exiting
2021-12-27T00:44:00.856Z I CONTROL [signalProcessingThread] shutting down with code:0

I've seen some other errors reported with this process, but they all seem to be related to file permission errors. My file permissions seem OK, and given that the dump of the existing data works, that doesn't seem to be the cause here anyhow. Any other ideas of what is wrong?
Hi All,

Is a Deployment Server compatible with deployment clients that run a higher version? For example:

Deployment Server - 7.3.5, Deployment Client - 8.0.9

or

Deployment Server - 8.0.9, Deployment Client - 8.2.3

Thank you for the help!
Hen
Hello Team,

The Splunk UF has been installed on all of our 1000+ Windows servers and we are monitoring those logs. Now the scenario is that there is one more Splunk team in my organization, and they need to monitor one path on only 4 servers/hosts out of those 1000+. We need to configure dual feeding for those 4 Windows servers.

My configuration files:

inputs.conf:

[monitor://xxxx:\xxxxxxx\xxxxxxxxxx\xxxxxxxxxxxx\xxxxxxxxxxx\log*]
index = xxxxxxxxxx
sourcetype = xxxxxxxxxxxx
host = xxxxxxxxxx,xxxxxxxxxxxx,xxxxxxxxxxxx,xxxxxxxxxxxxxx    (these are the 4 Windows servers/hosts that should send the dual feed)
disabled = 0

outputs.conf:

[tcpout:xxxxxxxx]
server = xxxxxxxxxxx:9997,xxxxxxxxxxx:9997,xxxxxxxxxxxxx:9997,xxxxxxxxxxxxx:9997

But after adding this configuration, they are receiving the wrong logs. How can we send the logs for only those 4 servers, with the correct data? Kindly help us with this.
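Part of the problem is that host in inputs.conf does not choose which machines the input runs on; it only overrides the host field stamped on the events, so every forwarder that receives this app applies the input. The usual pattern is to deploy a dedicated app to just those 4 forwarders (via their own deployment server serverclass) and route that one input to the other team's indexers with _TCP_ROUTING. A sketch with placeholder app, path, and group names:

# App pushed only to the 4 hosts via its own serverclass

# inputs.conf
[monitor://D:\path\to\log*]
index = xxxxxxxxxx
sourcetype = xxxxxxxxxxxx
disabled = 0
_TCP_ROUTING = other_team_indexers

# outputs.conf
[tcpout:other_team_indexers]
server = otherteam-idx1:9997, otherteam-idx2:9997

If the same events must reach both teams, list both output groups, e.g. `_TCP_ROUTING = primary_indexers, other_team_indexers`, where primary_indexers is whatever your existing tcpout group is called.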
Hi,

I couldn't find a way to bundle my app with a Python dependency package, for example tenacity. The documentation only describes app dependencies, not Python dependencies. I had imagined running the slim package command, which would download the required dependencies, and then, when the app is installed, those dependencies would automatically be installed into the server's Python environment.
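As far as I can tell, slim resolves app-to-app dependencies only, and Splunk never pip-installs anything when an app is installed, so the usual approach is to vendor the library inside the app at build time (for example `pip install tenacity --target <app>/lib`) and add that directory to sys.path before importing it. A minimal sketch; the lib directory name is just a convention and the script name is hypothetical:

# <app>/bin/my_script.py
import os
import sys

# Make the vendored packages under <app>/lib importable before any third-party import.
APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, os.path.join(APP_ROOT, "lib"))

import tenacity  # now resolved from <app>/lib instead of the server's Python environment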
As the title suggests, I am attempting to set a custom default time range for a Splunk dashboard that I created. When it opens, I need it to snap to the previous weekday between 16:26 and 16:42 CST. Does anyone know how I would go about doing this?
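The fixed clock window is straightforward to express with relative time modifiers in a time input's default; the "previous weekday" part (i.e., Friday when the dashboard is opened on a Monday) is not something a plain snap-to modifier can do. This sketch therefore only covers the simpler "yesterday 16:26-16:42" case, assumes the dashboard's timezone already matches CST, and uses a hypothetical token name:

<input type="time" token="time_tok" searchWhenChanged="true">
  <label>Time window</label>
  <default>
    <earliest>-1d@d+16h+26m</earliest>
    <latest>-1d@d+16h+42m</latest>
  </default>
</input>

If your version rejects the chained offsets, -1d@d+986m and -1d@d+1002m express the same instants. For true weekday logic you would typically compute earliest/latest with eval (relative_time/strftime) in a token-setting search that subtracts 3 days when today is Monday, and feed those tokens to the panels instead of a static default.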
I have logs which show the job status (Running, Succeeded, and Failed), and every job has a unique job ID. I want to calculate the duration it took for each job ID to reach Succeeded or Failed. Each job ID has two events: the first is Running and the second is Succeeded or Failed. How can this be done?
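A sketch assuming the events already have job_id and status fields (otherwise add a rex first) and using a placeholder index name; with exactly two events per job, earliest and latest bracket the run:

index=your_index status IN ("Running", "Succeeded", "Failed")
| stats earliest(_time) AS start_time latest(_time) AS end_time latest(status) AS final_status BY job_id
| eval duration_sec = end_time - start_time
| eval start_time = strftime(start_time, "%F %T"), end_time = strftime(end_time, "%F %T")
| table job_id final_status start_time end_time duration_sec

transaction job_id would also produce a duration field, but stats is cheaper and sufficient here.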
I am looking to middle-align (vertically center) the single value panel in my Splunk dashboard.