All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have searched high and low for an answer, both here and on the web, but I can't seem to find a suitable one. Has anyone gotten this error while trying to get data in?

    Data could not be written: /nobody/search/inputs/WinEventLog://System/start_from: oldest

I experimented a bit with the Windows System log: at first I used "Local event log collection", but then changed my mind and switched to "Remote event log collections". The first time, using "Local event log collection", I got older data too; the second time, using "Remote event log collections", I only get newer data. What can I do to reset it? In what file should I look? Thank you.

Hi All, I need to improve the performance of the search below, which currently completes in about 132 seconds. The search looks at the last 7 days of data from firewall logs.

Search:

    index="xxx" src_ip!="a.b.c.d/26" src_ip!="x.y.z.w/26" src_zone!=ABCD src_zone!=ABCDE (dest_zone="ABCD" OR (dvc_name IN ("qwerty","abcd","xyz","asdf") AND dest_zone="XYZ")) app IN (ldap,rmi-iiop)
    | lookup some_lookup ip as src_ip OUTPUT matched
    | search matched!="yes"
    | stats count by src_ip,action,date_mday
    | stats count by src_ip,action
    | search (action=allowed OR (action=blocked AND count>1))

Thanks in advance. Regards, Shaquib

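A possible rewrite, as a sketch: it assumes the two chained stats were only counting distinct days per src_ip/action pair, which dc(date_mday) computes in one pass (index, field, and lookup names are copied from the question):

    index="xxx" src_ip!="a.b.c.d/26" src_ip!="x.y.z.w/26" src_zone!=ABCD src_zone!=ABCDE (dest_zone="ABCD" OR (dvc_name IN ("qwerty","abcd","xyz","asdf") AND dest_zone="XYZ")) app IN (ldap,rmi-iiop)
    | fields src_ip, action, date_mday
    | lookup some_lookup ip as src_ip OUTPUT matched
    | search matched!="yes"
    | stats dc(date_mday) as count by src_ip, action
    | search (action=allowed OR (action=blocked AND count>1))

The fields command trims each event to the three fields the pipeline actually needs before the lookup runs, and collapsing the two stats avoids a second aggregation pass. For larger speedups over 7 days of firewall data, an accelerated data model queried with tstats is worth considering, though that changes more of the search.
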
I need to trim the search result from the left up to the occurrence of PulseSecure: and get everything after that. Note that after PulseSecure: the line length and characters may vary; the content is a mix of letters, numbers, special characters, etc.

Sample:

    Dec 27 06:29:37 AAAAAA PulseSecure: 2021-12-27 06:29:37 - AAAAAA  - [110.1.1.1] Default Network::aa.aa.aa(AAA_BBB)[BB_CC_EEE]

I need the result below to be saved in a field named Extracted:

    2021-12-27 06:29:37 - AAAAAA  - [110.1.1.1] Default Network::aa.aa.aa(AAA_BBB)[BB_CC_EEE]

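One way to do this is with rex, shown here as a minimal sketch (it assumes the text lives in _raw and uses the field name Extracted from the question):

    ... your base search ...
    | rex field=_raw "PulseSecure:\s+(?<Extracted>.+)"

The capture group grabs everything after the first "PulseSecure:" label (and any following whitespace) to the end of the line, regardless of its length or character mix.
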
Hi, I have a shell script to restart services. I want to set up an alert that runs this shell script on a remote node (the remote host on which the script should run should be determined by the value returned in the "Host" field of the Splunk query). If I place the shell script in "$SPLUNK_HOME/bin/scripts", it runs only on the Splunk server. I want to know how I can make it run on the remote node, based on the host value returned by the Splunk query. Any help would be much appreciated! Thank you

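For reference, Splunk only executes alert scripts on the search head itself, so a common workaround is a wrapper on the search head that connects out to the target. A rough sketch, assuming passwordless SSH from the search head, a hypothetical service account and remote script path, and the legacy alert-script convention that the eighth argument is the gzipped CSV of search results (verify that convention on your version):

    #!/bin/bash
    # restart_remote.sh - runs on the search head as a legacy alert script.
    # $8 is the path to the gzipped CSV of the triggering search's results.
    RESULTS_FILE="$8"

    # Extract the Host column (adjust the cut field number to match where
    # "Host" sits in your results) and restart services on each unique host.
    for target in $(zcat "$RESULTS_FILE" | tail -n +2 | cut -d, -f1 | sort -u); do
        ssh "svc_splunk@$target" '/opt/scripts/restart_services.sh'
    done
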
I am attempting to migrate my KV store to wiredTiger per https://docs.splunk.com/Documentation/Splunk/8.1.1/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment

After running the migrate command, I get this error:

    [ansible@splunk splunk]$ sudo ./bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger
    Starting KV Store storage engine upgrade:
    Phase 1 (dump) of 2: ...............................................................................................
    Phase 2 (restore) of 2: Restoring data back to previous KV Store database
    ERROR: Failed to migrate to storage engine wiredTiger, reason=KVStore service will not start because kvstore process terminated

Looking at my mongodb.log file, I see the following:

    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] MongoDB starting : pid=4416 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=splunk
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] git version: 226949cc252af265483afbf859b446590b09b098
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2za-fips 24 Aug 2021
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] allocator: tcmalloc
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] modules: none
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] build environment:
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] distarch: x86_64
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] target_arch: x86_64
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] 3072 MB of memory available to the process out of 15854 MB total system memory
    2021-12-27T00:43:57.647Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200 }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "mmapv1", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
    2021-12-27T00:43:57.664Z I JOURNAL [initandlisten] journal dir=/opt/splunk/var/lib/splunk/kvstore/mongo/journal
    2021-12-27T00:43:57.664Z I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
    2021-12-27T00:43:57.948Z I JOURNAL [durability] Durability thread started
    2021-12-27T00:43:57.948Z I JOURNAL [journal writer] Journal writer thread started
    2021-12-27T00:43:57.949Z I CONTROL [initandlisten]
    2021-12-27T00:43:57.949Z I CONTROL [initandlisten] ** WARNING: No SSL certificate validation can be performed since no CA file has been provided
    2021-12-27T00:43:57.949Z I CONTROL [initandlisten] ** Please specify an sslCAFile parameter.
    2021-12-27T00:43:57.949Z I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
    2021-12-27T00:43:57.949Z I CONTROL [initandlisten]
    2021-12-27T00:43:58.069Z I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/opt/splunk/var/lib/splunk/kvstore/mongo/diagnostic.data'
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten]
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** WARNING: mongod started without --replSet yet 1 documents are present in local.system.replset
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** Restart with --replSet unless you are doing maintenance and no other clients are connected.
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten] ** The TTL collection monitor will not start because of this.
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten] **
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten] For more info see http://dochub.mongodb.org/core/ttlcollections
    2021-12-27T00:43:58.100Z I STORAGE [initandlisten]
    2021-12-27T00:43:58.101Z I NETWORK [initandlisten] listening via socket bound to 0.0.0.0
    2021-12-27T00:43:58.101Z I NETWORK [initandlisten] waiting for connections on port 8191 ssl
    2021-12-27T00:43:58.575Z I NETWORK [listener] connection accepted from 127.0.0.1:51402 #1 (1 connection now open)
    2021-12-27T00:43:58.582Z I NETWORK [conn1] received client metadata from 127.0.0.1:51402 conn1: { driver: { name: "mongoc", version: "1.16.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux", version: "8.5", architecture: "x86_64" }, platform: "cfg=0x00001620c9 posix=200112 stdc=201710 CC=GCC 9.1.0 CFLAGS="-g -fstack-protector-strong -static-libgcc -L/opt/splunk-home/lib/static-libstdc" LDFLA..." }
    2021-12-27T00:43:58.599Z I ACCESS [conn1] Successfully authenticated as principal __system on local from client 127.0.0.1:51402
    2021-12-27T00:43:58.599Z I NETWORK [conn1] end connection 127.0.0.1:51402 (0 connections now open)
    mongodump 2021-12-26T17:43:59.640-0700 WARNING: --sslAllowInvalidCertificates and --sslAllowInvalidHostnames are deprecated, please use --tlsInsecure instead
    2021-12-27T00:43:59.652Z I NETWORK [listener] connection accepted from 127.0.0.1:51404 #2 (1 connection now open)
    2021-12-27T00:43:59.728Z I ACCESS [conn2] Successfully authenticated as principal __system on local from client 127.0.0.1:51404
    2021-12-27T00:43:59.750Z I NETWORK [listener] connection accepted from 127.0.0.1:51406 #3 (2 connections now open)
    2021-12-27T00:43:59.805Z I ACCESS [conn3] Successfully authenticated as principal __system on local from client 127.0.0.1:51406
    mongodump 2021-12-26T17:44:00.073-0700 writing admin.system.indexes to
    mongodump 2021-12-26T17:44:00.075-0700 done dumping admin.system.indexes (2 documents)
    mongodump 2021-12-26T17:44:00.075-0700 writing config.system.indexes to
    mongodump 2021-12-26T17:44:00.077-0700 done dumping config.system.indexes (3 documents)
    mongodump 2021-12-26T17:44:00.077-0700 writing admin.system.version to
    mongodump 2021-12-26T17:44:00.079-0700 done dumping admin.system.version (1 document)
    ... a whole bunch of other dumps completing ...
    mongodump 2021-12-26T17:44:00.635-0700 done dumping s_Splunk5+n+0jIfNWH9x+qdy7cD4GTT_sse_jse2D8rEiNk5kfRO1HbJ@VAjMp.c (10 documents)
    2021-12-27T00:44:00.635Z I NETWORK [conn2] end connection 127.0.0.1:51404 (3 connections now open)
    2021-12-27T00:44:00.635Z I NETWORK [conn3] end connection 127.0.0.1:51406 (2 connections now open)
    2021-12-27T00:44:00.636Z I NETWORK [conn5] end connection 127.0.0.1:51410 (1 connection now open)
    2021-12-27T00:44:00.636Z I NETWORK [conn4] end connection 127.0.0.1:51408 (0 connections now open)
    2021-12-27T00:44:00.671Z I NETWORK [listener] connection accepted from 127.0.0.1:51412 #6 (1 connection now open)
    2021-12-27T00:44:00.676Z I NETWORK [conn6] received client metadata from 127.0.0.1:51412 conn6: { driver: { name: "mongoc", version: "1.16.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux", version: "8.5", architecture: "x86_64" }, platform: "cfg=0x00001620c9 posix=200112 stdc=201710 CC=GCC 9.1.0 CFLAGS="-g -fstack-protector-strong -static-libgcc -L/opt/splunk-home/lib/static-libstdc" LDFLA..." }
    2021-12-27T00:44:00.676Z I NETWORK [listener] connection accepted from 127.0.0.1:51414 #7 (2 connections now open)
    2021-12-27T00:44:00.682Z I NETWORK [conn7] received client metadata from 127.0.0.1:51414 conn7: { driver: { name: "mongoc", version: "1.16.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux", version: "8.5", architecture: "x86_64" }, platform: "cfg=0x00001620c9 posix=200112 stdc=201710 CC=GCC 9.1.0 CFLAGS="-g -fstack-protector-strong -static-libgcc -L/opt/splunk-home/lib/static-libstdc" LDFLA..." }
    2021-12-27T00:44:00.699Z I ACCESS [conn7] Successfully authenticated as principal __system on local from client 127.0.0.1:51414
    2021-12-27T00:44:00.723Z I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
    2021-12-27T00:44:00.724Z I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
    2021-12-27T00:44:00.724Z I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
    2021-12-27T00:44:00.726Z I STORAGE [signalProcessingThread] shutdown: waiting for fs preallocator...
    2021-12-27T00:44:00.726Z I STORAGE [signalProcessingThread] shutdown: final commit...
    2021-12-27T00:44:00.729Z I JOURNAL [signalProcessingThread] journalCleanup...
    2021-12-27T00:44:00.729Z I JOURNAL [signalProcessingThread] removeJournalFiles
    2021-12-27T00:44:00.729Z I JOURNAL [signalProcessingThread] old journal file will be removed: /opt/splunk/var/lib/splunk/kvstore/mongo/journal/j._0
    2021-12-27T00:44:00.730Z I JOURNAL [signalProcessingThread] Terminating durability thread ...
    2021-12-27T00:44:00.828Z I JOURNAL [journal writer] Journal writer thread stopped
    2021-12-27T00:44:00.828Z I JOURNAL [durability] Durability thread stopped
    2021-12-27T00:44:00.828Z I STORAGE [signalProcessingThread] shutdown: closing all files...
    2021-12-27T00:44:00.855Z I STORAGE [signalProcessingThread] closeAllFiles() finished
    2021-12-27T00:44:00.855Z I STORAGE [signalProcessingThread] shutdown: removing fs lock...
    2021-12-27T00:44:00.855Z I CONTROL [signalProcessingThread] now exiting
    2021-12-27T00:44:00.856Z I CONTROL [signalProcessingThread] shutting down with code:0

I've seen some other errors reported with this process, but they all seem to be related to file permission errors. My file permissions seem OK, and given that the dump of the existing data works, the issue doesn't appear to be permission-related. Any other ideas of what is wrong here?

Hi All, is the Deployment Server compatible with deployment clients that run a higher version? For example:

Deployment Server - 7.3.5 / Deployment Client - 8.0.9

or

Deployment Server - 8.0.9 / Deployment Client - 8.2.3

Thank you for the help! Hen

Hello Team, the Splunk UF has been installed on all 1000+ of our Windows servers and we are monitoring those logs. Now the scenario is that another Splunk team in my organization needs to monitor one path on only 4 of those 1000+ servers/hosts. We need to configure dual feeding for those 4 Windows servers.

My configuration files:

inputs.conf:

    [monitor://xxxx:\xxxxxxx\xxxxxxxxxx\xxxxxxxxxxxx\xxxxxxxxxxx\log*]
    index = xxxxxxxxxx
    sourcetype = xxxxxxxxxxxx
    host = xxxxxxxxxx,xxxxxxxxxxxx,xxxxxxxxxxxx,xxxxxxxxxxxxxx    <- these are the 4 Windows servers/hosts to which we need to send the dual feed
    disabled = 0

outputs.conf:

    [tcpout:xxxxxxxx]
    server = xxxxxxxxxxx:9997,xxxxxxxxxxx:9997,xxxxxxxxxxxxx:9997,xxxxxxxxxxxxx:9997

But after adding this configuration, they are receiving the wrong logs. How do we send the logs from only those 4 servers with the correct data? Kindly help us with this.

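For what it's worth, host = in inputs.conf only rewrites the host field on the events; it does not restrict which machines the stanza applies to, which would explain the wrong logs. A sketch of the usual pattern, assuming a deployment server is available to target just those 4 UFs (all names below are placeholders): put the extra input in its own app, deploy that app only to the 4 hosts via a serverclass, and pin the input to the second team's indexers with _TCP_ROUTING:

    # inputs.conf (in an app deployed only to the 4 hosts)
    [monitor://D:\path\to\logs\log*]
    index = other_team_index
    sourcetype = other_team_sourcetype
    # Route this input to the second team's group; list both groups
    # here instead if both teams need this particular data.
    _TCP_ROUTING = other_team_indexers
    disabled = 0

    # outputs.conf (same app)
    [tcpout:other_team_indexers]
    server = idxB1:9997,idxB2:9997

All other inputs keep flowing to the existing defaultGroup, so the remaining 1000+ servers are unaffected.
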
Hi, I couldn't find a way to bundle my app with a Python dependency package, for example tenacity. The documentation only describes app dependencies, not Python dependencies. What I imagine is running the slim package command, which downloads the required dependencies, and having those dependencies installed automatically into the server's Python environment at install time.

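As far as I know, slim does not manage Python packages, so the common workaround is to vendor the dependency inside the app and put it on sys.path at import time. A sketch, assuming the script lives in the app's bin/ directory and the package was copied into a lib/ directory at build time (for example with pip install --target myapp/lib tenacity):

    import os
    import sys

    # Make the app-local lib/ directory importable ahead of the
    # server-wide Python environment (assumes this file is in bin/).
    APP_LIB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "lib")
    sys.path.insert(0, APP_LIB)

    import tenacity  # resolved from myapp/lib, not the system site-packages
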
As the title suggests, I am attempting to set a custom default time range for a Splunk dashboard that I created. When it opens, I need it to snap to the previous weekday between 16:26 and 16:42 CST. Does anyone know how I would go about doing this?

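A sketch of the fixed-window part using a Simple XML time input with relative time modifiers (the token name is arbitrary). Note that -1d@d always means the previous calendar day, so skipping back over a weekend to the previous weekday would need extra logic (an eval-driven token, for example), and the offsets are interpreted in the viewing user's time zone, which must be set to CST for the window to line up:

    <input type="time" token="report_window">
      <label>Report window</label>
      <default>
        <earliest>-1d@d+16h+26m</earliest>
        <latest>-1d@d+16h+42m</latest>
      </default>
    </input>

Here -1d@d snaps to midnight of the previous day, and the +16h+26m / +16h+42m offsets land on the 16:26 to 16:42 window.
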
I have logs which show the job status (Running, succeeded, and failed), and all jobs have a unique job id. Now I want to calculate the duration it took each job id to fail or succeed. Here, every job id has two events: the first one Running, and the second succeeded or failed. How can this be done?

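A minimal sketch (the index, field, and status names are assumed from the description; adjust to your actual extractions):

    index=your_index status IN ("Running","succeeded","failed")
    | stats earliest(_time) as start_time, latest(_time) as end_time, latest(status) as final_status by job_id
    | eval duration_sec = end_time - start_time
    | table job_id, final_status, duration_sec

Because each job id has exactly two events, earliest(_time) is the Running event and latest(_time) is the terminal one, so their difference is the duration in seconds.
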
I am looking to middle-align the single value panel in my Splunk dashboard.

    INFO [] () process='isValid', result='failed', dacNumber='[DAC_111_646]', accountNumber=1122333
    INFO [] () process='isValid', result='failed', dacNumber='[DAC_111_777]', accountNumber=1122333
    INFO [] () process='isValid', result='failed', dacNumber='[DAC_111_888]', accountNumber=1122333
    INFO [] () process='isValid', result='success', dacNumber='[DAC_111_777]', accountNumber=1122333
    INFO [] () process='isValid', result='success', dacNumber='[DAC_111_999]', accountNumber=1122333
    INFO [] () process='isValid', result='success', dacNumber='[DAC_111_646]', accountNumber=1122333

How do I get every failed dacNumber that never passed? In the example above it should give me DAC_111_888, the only dacNumber that appears with result='failed' but never with result='success'. Please help.

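A sketch of one approach, assuming result and dacNumber are already extracted as fields (the index name is a placeholder):

    index=your_index process="isValid"
    | stats values(result) as results by dacNumber
    | where isnull(mvfind(results, "success"))
    | table dacNumber

values(result) collects every outcome seen for each dacNumber, and mvfind returns null only when "success" never appears in that list, leaving just the dacNumbers that failed and never passed.
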
I am unable to start Splunk; it shows an "access denied" error. Please tell me how to start Splunk on Windows.

Hi, I'm facing an issue while configuring SAML using G Suite: "Saml response does not contain group information". Please provide a resolution for the above error. Thanks, Sujith

I am working on using the same time range as the one selected in the time range picker. How do I do that?

    | metadata index=* type=hosts
    | eval First_Time=strftime(firstTime, "%Y-%d-%m %H:%M")

This is my search query, and I need the firstTime values to be bounded by the same time range selected in the time range picker; i.e. if this search is run from 1st Nov to 30th Nov, I need the firstTime values restricted to that range as well.

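One way to bound the results is addinfo, which attaches the search's own time window as info_min_time and info_max_time so you can filter on them; a sketch (note it also swaps the format string to %Y-%m-%d, since the original %Y-%d-%m prints the day before the month):

    | metadata index=* type=hosts
    | addinfo
    | where firstTime >= info_min_time AND firstTime <= info_max_time
    | eval First_Time = strftime(firstTime, "%Y-%m-%d %H:%M")
    | fields - info_*

Be aware that over an "All time" range, info_max_time may come back as +Infinity, which can need special handling.
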
Hi Splunkers, when I run a Splunk search, I use NOT string to exclude results containing that string. If I have a dashboard, how do I add a text or dropdown input that selects a string to exclude from the dashboard's results? By the way, this string might not be the value of any field; it may be just a random string. Kevin

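A sketch with a Simple XML text input (the token name exclude and the placeholder default are arbitrary; the default only needs to be a string that never occurs in your data, so nothing is excluded until the user types something):

    <input type="text" token="exclude">
      <label>String to exclude</label>
      <default>zzz_no_such_string_zzz</default>
    </input>

    <!-- in the panel's search -->
    <query>index=your_index NOT "$exclude$" | timechart count</query>

Because NOT "$exclude$" operates on the raw event text, the typed string does not need to be the value of any field.
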
I was surprised by this result: in a field starting with a value that can be interpreted as an integer, groupby treats it lexically, but sort treats it numerically. How does sort determine the intention? Is there a syntax to force a lexical sort? To illustrate, consider the following:

    | makeresults
    | eval i = mvrange(-3, 4)
    | mvexpand i
    | eval i = printf("%+d", i) . "x"
    | stats count by i

As is (groupby only):

    i    count
    +0x  1
    +1x  1
    +2x  1
    +3x  1
    -1x  1
    -2x  1
    -3x  1

Add | sort i:

    i    count
    -3x  1
    -2x  1
    -1x  1
    +0x  1
    +1x  1
    +2x  1
    +3x  1

In my use case, the numeric sort is desired. (That was how I "discovered" this.) I am just curious about the mechanism.

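For the record, sort accepts explicit type casts, so both behaviors can be forced; a short sketch using the sort command's str() and num() functions:

    ... | stats count by i | sort str(i)
    ... | stats count by i | sort num(i)

str(i) forces a lexicographic comparison and num(i) a numeric one. Left to its own devices, sort appears to guess per field: values whose leading characters parse as numbers (here +3, -1, and so on) compare numerically, while stats ... by always groups and orders its keys as strings.
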
Hello, I am new to Splunk and working on getting our environment set up correctly. I have an SC4S server set up and working. My question is about the UF installed on Windows servers and Windows AD servers: should the UF be set up to send its data to the SC4S server, or should it send directly to the Splunk indexer? Thanks,

I need to forward all Windows Security/Application/System logs to 2 separate Splunk instances with different index names, so:

Security log -> Index1 on ServerA, Index2 on ServerB

In my inputs.conf on the UF, do I use index=Index1,Index2? Then in the outputs.conf of the HF, do I send to Index_servers = ServerA/ServerB? I need to make sure ServerB does not get hit with Index1.

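A sketch of one common pattern, with an important assumption up front: an index rewrite only happens where the data is parsed, so this works as written only if ServerB receives unparsed UF data (all group, server, and stanza names below are placeholders). Clone the stream to two output groups on the UF, send it as index1, and have ServerB rewrite the index on arrival:

    # inputs.conf on the UF - a stanza carries exactly one index
    [WinEventLog://Security]
    index = index1
    disabled = 0

    # outputs.conf on the UF - listing two groups clones the data to both
    [tcpout]
    defaultGroup = groupA, groupB

    [tcpout:groupA]
    server = serverA:9997

    [tcpout:groupB]
    server = serverB:9997

    # props.conf on ServerB - rewrite the index at parse time
    [source::WinEventLog:Security]
    TRANSFORMS-rewrite_index = route_to_index2

    # transforms.conf on ServerB
    [route_to_index2]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = index2

ServerA indexes the events into index1 untouched, while ServerB's transform reroutes every Security event into index2, so index1 never gets hit on ServerB. If a heavy forwarder sits in the middle, note that it parses the data itself: the rewrite would then have to happen on the HF and would affect both copies, so a separate unparsed feed toward ServerB avoids that.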