All Topics

Hi all, I have onboarded Linux logs from S3 into Splunk and found that an additional timestamp is getting attached to the events. Can you please help me remove it? Below is the expected log format.

Before:
2020-07-01T10:59:58Z messages {"message":"Jun 1 10:59:58 stg-coinbrh: [get_meta] Trying to get http://10.4.3.1/latest/meta-data/network/interfaces/macs/06:c3:45:12:56:12/subnet-ipv4-cidr-block"}
2020-07-01T10:59:58Z messages {"message":"Jun 4 10:59:58 stg-mbcoln: [rewrite_aliases] Rewriting aliases of eth0"}

After:
Jun 1 10:59:58 stg-coinbrh: [get_meta] Trying to get http://10.4.3.1/latest/meta-data/network/interfaces/macs/06:c3:45:12:56:12/subnet-ipv4-cidr-block
Jun 4 10:59:58 stg-mbcoln: [rewrite_aliases] Rewriting aliases of eth0

Please help me define the exact props and transforms settings to achieve this. Thanks in advance.
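One common way to strip a prefix like this is a SEDCMD in props.conf on whichever instance first parses the data (heavy forwarder or indexers). This is only a sketch: the sourcetype name `linux_s3_messages` is a placeholder for whatever sourcetype your S3 input assigns, and the regexes assume every event matches the sample format shown above.

```
# props.conf (on the parsing tier: heavy forwarder or indexers)
[linux_s3_messages]
# Strip the leading ISO timestamp, the word "messages", and the {"message":" wrapper
SEDCMD-strip_prefix = s/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z\s+messages\s+\{"message":"//
# Strip the trailing "} of the JSON wrapper
SEDCMD-strip_suffix = s/"\}\s*$//
```

Note that SEDCMD rewrites _raw at index time, so it only affects newly ingested events, and you may also need to adjust TIME_PREFIX/TIME_FORMAT so the event timestamp still parses correctly.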
Hello, we are trying to trim the fat in our log ingestion and need to determine what's actually in use versus what's ingested. Is this possible in any sense, whether at the sourcetype level or even more granular, at the source or specific-event level? I think this would be super useful, as we currently don't know what's valuable information in Splunk and what's not. Thanks!
Hi, I am new to Splunk. I wanted to know if it is possible to change the colors of default features provided by Splunk, such as the color of each Splunk top-bar drilldown, making the Create Dashboard element green, or the Export button for dashboards. Thanks in advance!
Hi, I am new to Splunk. I wanted to know if it is possible to forward logs from Logstash into Splunk. If so, what are the prerequisites, such as data connectivity between them and the data read frequency setup?
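For what it's worth, the usual pattern is to point a Logstash output at Splunk's HTTP Event Collector (HEC), which must be enabled on the Splunk side first. The sketch below uses Logstash's standard http output plugin; the hostname, port 8088, and the token are placeholders to replace with your own HEC endpoint and token.

```
output {
  http {
    # Hypothetical HEC endpoint; replace the host and make sure port 8088 is reachable
    url => "https://splunk.example.com:8088/services/collector/event"
    http_method => "post"
    headers => { "Authorization" => "Splunk <your-hec-token>" }
    format => "json"
  }
}
```

Connectivity-wise this only needs HTTPS from the Logstash host to the HEC port; Logstash pushes events as they flow, so there is no separate read-frequency setting on the Splunk side.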
search index=abc dp_"response"| stats perc95(api_time_taken) as abc by api   This is the search query I am using while invoking through splunk rest API. In the result, I am not getting the abc ... See more...
search index=abc dp_"response"| stats perc95(api_time_taken) as abc by api   This is the search query I am using while invoking through splunk rest API. In the result, I am not getting the abc field, only the API values are listed . Is there anything specific I need to do to include perc95,avg or max values in the result.   From UI, it works completely fine where it shows the abc column with the 95 percentile value If someone can guide me, it would be really helpful.   Thanks, Santosh Thank, Santosh
We have an indexer cluster with several custom indexes configured in indexes.conf. However, when we run:

splunk show cluster-status --verbose

only the main index shows up (among the internal _ indexes). What could be the reason for that? When searching via the search heads, the indexes all work fine.
This is my query, and I have some challenges with the log. My daily job starts at 11 PM. If the job runs successfully, it completes before 11:30, so I set the status to Success. But in case of a timeout, the job times out the next day at 1:30 AM. The job then starts again at 11 PM that day and runs successfully, so now I have both a failure and a success on the same day. How can I check the event and set the status to Failure?

index=xx* app_name="xxx" OR cf_app_name="yyy*" OR app_name="ccc"
| bucket _time span=1d
| eval dayweek=strftime(_time,"%A")
| convert timeformat="%m-%d-%y" ctime(_time) as c_time
| eval Job = case(like(msg, "%first%"), "first Job", like(msg, "%second%"), "second Job", like(msg, "%third%"), "third job", like(msg, "%fourth%"), "fourth job")
| stats count(eval(like(msg, "%All feed is completed%") OR like(msg, "%Success:%") OR like(msg, "%Finished success%"))) as Successcount count(eval(like(msg, "%Fatal Error:%") OR (like(msg, "%Job raised exception%") AND like(msg, "% job error%")))) as failurecount by Job c_time dayweek
| eval status=case((Job="fourth job") AND (dayweek=="Saturday" OR dayweek=="Sunday"), "NA", Successcount>0, "Success", failurecount>0, "Failure")
| xyseries Job c_time status
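One approach, offered only as a sketch: since a timed-out run ends at 1:30 AM the next day, shift each event back a few hours before computing the day, so everything up to early morning is attributed to the run that started at 11 PM. The "-6h" offset is an assumption; pick any offset larger than your worst-case overrun. Also note that checking failurecount before Successcount makes a day with both a timeout and a later success come out as Failure.

```
index=xx* app_name="xxx" OR cf_app_name="yyy*" OR app_name="ccc"
| eval run_time=relative_time(_time, "-6h")
| eval c_time=strftime(run_time, "%m-%d-%y")
| eval dayweek=strftime(run_time, "%A")
| eval Job=case(like(msg, "%first%"), "first Job", like(msg, "%second%"), "second Job",
                like(msg, "%third%"), "third job", like(msg, "%fourth%"), "fourth job")
| stats count(eval(like(msg, "%All feed is completed%") OR like(msg, "%Success:%")
           OR like(msg, "%Finished success%"))) as Successcount
        count(eval(like(msg, "%Fatal Error:%")
           OR (like(msg, "%Job raised exception%") AND like(msg, "% job error%")))) as failurecount
        by Job c_time dayweek
| eval status=case((Job="fourth job") AND (dayweek=="Saturday" OR dayweek=="Sunday"), "NA",
                   failurecount>0, "Failure",
                   Successcount>0, "Success")
| xyseries Job c_time status
```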
Hi all! While testing a rollback workflow, we ran into a KV store failure. When we try to start Splunk with ./splunk start, we get the following error in var/log/splunk/mongod.log:

2020-06-30T16:42:40.231Z W CONTROL No SSL certificate validation can be performed since no CA file has been provided; please specify an sslCAFile parameter
2020-06-30T16:42:40.296Z I CONTROL [initandlisten] MongoDB starting : pid=120021 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=vm08
2020-06-30T16:42:40.296Z I CONTROL [initandlisten] db version v3.0.14-splunk
2020-06-30T16:42:40.296Z I CONTROL [initandlisten] git version: 08352afcca24bfc145240a0fac9d28b978ab77f3
2020-06-30T16:42:40.296Z I CONTROL [initandlisten] build info: Linux ip-10-113-204-203 2.6.18-194.el5xen #1 SMP Tue Mar 16 22:01:26 EDT 2010 x86_64 BOOST_LIB_VERSION=1_49
2020-06-30T16:42:40.296Z I CONTROL [initandlisten] allocator: tcmalloc
2020-06-30T16:42:40.296Z I CONTROL [initandlisten] options: { net: { port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "0F92C83E-3832-4718-8A20-6AE05900C13D" }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2020-06-30T16:42:40.350Z W - [initandlisten] Detected unclean shutdown - /opt/splunk/var/lib/splunk/kvstore/mongo/mongod.lock is not empty.
Hi all! While testing rollback workflow we faced with kvstore failed. When we try to start splunk with ./splunk start we get followed error in var/log/splunk/mongod.log:   2020-06-30T16:42:40.231Z W CONTROL No SSL certificate validation can be performed since no CA file has been provided; please specify an sslCAFile parameter 2020-06-30T16:42:40.296Z I CONTROL [initandlisten] MongoDB starting : pid=120021 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=vm08 2020-06-30T16:42:40.296Z I CONTROL [initandlisten] db version v3.0.14-splunk 2020-06-30T16:42:40.296Z I CONTROL [initandlisten] git version: 08352afcca24bfc145240a0fac9d28b978ab77f3 2020-06-30T16:42:40.296Z I CONTROL [initandlisten] build info: Linux ip-10-113-204-203 2.6.18-194.el5xen #1 SMP Tue Mar 16 22:01:26 EDT 2010 x86_64 BOOST_LIB_VERSION=1_49 2020-06-30T16:42:40.296Z I CONTROL [initandlisten] allocator: tcmalloc 2020-06-30T16:42:40.296Z I CONTROL [initandlisten] options: { net: { port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "0F92C83E-3832-4718-8A20-6AE05900C13D" }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } } 2020-06-30T16:42:40.350Z W - [initandlisten] Detected unclean shutdown - /opt/splunk/var/lib/splunk/kvstore/mongo/mongod.lock is not empty. 
2020-06-30T16:42:40.391Z I STORAGE [initandlisten]
2020-06-30T16:42:40.391Z I STORAGE [initandlisten] ** WARNING: Readahead for /opt/splunk/var/lib/splunk/kvstore/mongo is set to 4096KB
2020-06-30T16:42:40.391Z I STORAGE [initandlisten] ** We suggest setting it to 256KB (512 sectors) or less
2020-06-30T16:42:40.391Z I STORAGE [initandlisten] ** http://dochub.mongodb.org/core/readahead
2020-06-30T16:42:40.392Z I JOURNAL [initandlisten] journal dir=/opt/splunk/var/lib/splunk/kvstore/mongo/journal
2020-06-30T16:42:40.392Z I JOURNAL [initandlisten] recover begin
2020-06-30T16:42:40.393Z I JOURNAL [initandlisten] info no lsn file in journal/ directory
2020-06-30T16:42:40.393Z I JOURNAL [initandlisten] recover lsn: 0
2020-06-30T16:42:40.393Z I JOURNAL [initandlisten] recover /opt/splunk/var/lib/splunk/kvstore/mongo/journal/j._0
2020-06-30T16:42:40.397Z I JOURNAL [initandlisten] recover cleaning up
2020-06-30T16:42:40.397Z I JOURNAL [initandlisten] removeJournalFiles
2020-06-30T16:42:40.439Z I JOURNAL [initandlisten] recover done
2020-06-30T16:42:40.439Z I JOURNAL [initandlisten] preallocating a journal file /opt/splunk/var/lib/splunk/kvstore/mongo/journal/prealloc.0
2020-06-30T16:42:41.261Z I JOURNAL [durability] Durability thread started
2020-06-30T16:42:41.278Z I JOURNAL [journal writer] Journal writer thread started
2020-06-30T16:42:41.371Z I - [initandlisten] Invariant failure 1 == version src/mongo/db/storage/mmap_v1/btree/btree_interface.cpp 267
2020-06-30T16:42:41.391Z I CONTROL [initandlisten] 0x55fcf57642e2 0x55fcf56fea69 0x55fcf56e1b39 0x55fcf54e36be 0x55fcf552b674 0x55fcf50db490 0x55fcf50deb20 0x55fcf50c8cc1 0x55fcf50d3f53 0x55fcf50d5f36 0x55fcf50d8db6 0x55fcf4fb410d 0x55fcf4fb8a09 0x7fe558f44555 0x55fcf4fb1149
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"55FCF4A99000","o":"CCB2E2","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55FCF4A99000","o":"C65A69","s":"_ZN5mongo10logContextEPKc"},{"b":"55FCF4A99000","o":"C48B39","s":"_ZN5mongo15invariantFailedEPKcS1_j"},{"b":"55FCF4A99000","o":"A4A6BE","s":"_ZN5mongo18getMMAPV1InterfaceEPNS_11HeadManagerEPNS_11RecordStoreEPNS_19SavedCursorRegistryERKNS_8OrderingERKSsi"},{"b":"55FCF4A99000","o":"A92674","s":"_ZN5mongo26MMAPV1DatabaseCatalogEntry8getIndexEPNS_16OperationContextEPKNS_22CollectionCatalogEntryEPNS_17IndexCatalogEntryE"},{"b":"55FCF4A99000","o":"642490","s":"_ZN5mongo12IndexCatalog24_setupInMemoryStructuresEPNS_16OperationContextEPNS_15IndexDescriptorEb"},{"b":"55FCF4A99000","o":"645B20","s":"_ZN5mongo12IndexCatalog4initEPNS_16OperationContextE"},{"b":"55FCF4A99000","o":"62FCC1","s":"_ZN5mongo10CollectionC2EPNS_16OperationContextERKNS_10StringDataEPNS_22CollectionCatalogEntryEPNS_11RecordStoreEPNS_20DatabaseCatalogEntryE"},{"b":"55FCF4A99000","o":"63AF53","s":"_ZN5mongo8Database30_getOrCreateCollectionInstanceEPNS_16OperationContextERKNS_10StringDataE"},{"b":"55FCF4A99000","o":"63CF36","s":"_ZN5mongo8DatabaseC1EPNS_16OperationContextERKNS_10StringDataEPNS_20DatabaseCatalogEntryE"},{"b":"55FCF4A99000","o":"63FDB6","s":"_ZN5mongo14DatabaseHolder6openDbEPNS_16OperationContextERKNS_10StringDataEPb"},{"b":"55FCF4A99000","o":"51B10D","s":"_ZN5mongo13initAndListenEi"},{"b":"55FCF4A99000","o":"51FA09","s":"main"},{"b":"7FE558F22000","o":"22555","s":"__libc_start_main"},{"b":"55FCF4A99000","o":"518149"}],"processInfo":{ "mongodbVersion" : "3.0.14-splunk", "gitVersion" : "08352afcca24bfc145240a0fac9d28b978ab77f3", "uname" : { "sysname" : "Linux", "release" : "3.10.0-1062.18.1.el7.x86_64", "version" : "#1 SMP Tue Mar 17 23:49:17 UTC 2020", "machine" : "x86_64" }, "somap" : [ { "b" : "55FCF4A99000", "elfType" : 3 }, { "b" : "7FFD806E9000", "elfType" : 3 }, { "b" : "7FE559CD9000", "path" : "/lib64/libpthread.so.0", "elfType" : 3 }, { "b" : "7FE55A099000", 
"path" : "/opt/splunk/lib/libssl.so.1.0.0", "elfType" : 3 }, { "b" : "7FE5599FE000", "path" : "/opt/splunk/lib/libcrypto.so.1.0.0", "elfType" : 3 }, { "b" : "7FE5597F6000", "path" : "/lib64/librt.so.1", "elfType" : 3 }, { "b" : "7FE5595F2000", "path" : "/lib64/libdl.so.2", "elfType" : 3 }, { "b" : "7FE5592F0000", "path" : "/lib64/libm.so.6", "elfType" : 3 }, { "b" : "7FE558F22000", "path" : "/lib64/libc.so.6", "elfType" : 3 }, { "b" : "7FE559EF5000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 }, { "b" : "7FE55A07B000", "path" : "/opt/splunk/lib/libz.so.1", "elfType" : 3 } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x55fcf57642e2] mongod(_ZN5mongo10logContextEPKc+0xE9) [0x55fcf56fea69] mongod(_ZN5mongo15invariantFailedEPKcS1_j+0xB9) [0x55fcf56e1b39] mongod(_ZN5mongo18getMMAPV1InterfaceEPNS_11HeadManagerEPNS_11RecordStoreEPNS_19SavedCursorRegistryERKNS_8OrderingERKSsi+0x1CE) [0x55fcf54e36be] mongod(_ZN5mongo26MMAPV1DatabaseCatalogEntry8getIndexEPNS_16OperationContextEPKNS_22CollectionCatalogEntryEPNS_17IndexCatalogEntryE+0x74) [0x55fcf552b674] mongod(_ZN5mongo12IndexCatalog24_setupInMemoryStructuresEPNS_16OperationContextEPNS_15IndexDescriptorEb+0x90) [0x55fcf50db490] mongod(_ZN5mongo12IndexCatalog4initEPNS_16OperationContextE+0x260) [0x55fcf50deb20] mongod(_ZN5mongo10CollectionC2EPNS_16OperationContextERKNS_10StringDataEPNS_22CollectionCatalogEntryEPNS_11RecordStoreEPNS_20DatabaseCatalogEntryE+0x111) [0x55fcf50c8cc1] mongod(_ZN5mongo8Database30_getOrCreateCollectionInstanceEPNS_16OperationContextERKNS_10StringDataE+0x93) [0x55fcf50d3f53] mongod(_ZN5mongo8DatabaseC1EPNS_16OperationContextERKNS_10StringDataEPNS_20DatabaseCatalogEntryE+0x206) [0x55fcf50d5f36] mongod(_ZN5mongo14DatabaseHolder6openDbEPNS_16OperationContextERKNS_10StringDataEPb+0x176) [0x55fcf50d8db6] mongod(_ZN5mongo13initAndListenEi+0x107D) [0x55fcf4fb410d] mongod(main+0x159) [0x55fcf4fb8a09] libc.so.6(__libc_start_main+0xF5) [0x7fe558f44555] mongod(+0x518149) [0x55fcf4fb1149] ----- 
END BACKTRACE -----
2020-06-30T16:42:41.391Z I - [initandlisten] ***aborting after invariant() failure

Please advise: how can we perform the downgrade correctly?
Hi, we have to ingest activity logs into Splunk. We installed the Microsoft add-on for Splunk on our heavy forwarder. When we click on the Inputs tab it keeps spinning, and I see a bunch of exceptions in the log. I tested installing on a test box with the same network config and everything works fine, but on our heavy forwarder there is an issue. The OS is CentOS. Any Azure add-on has the same issue. I figured out there is some configuration it's not able to read, but exactly what it is I am not able to figure out.

6-30-2020 15:02:50.557 -0400 DEBUG AdminManagerExternal - Sending handler setup data:\n<?xml version="1.0" encoding="UTF-8"?>\n<eai>\n <eai_settings>\n <appName>TA-MS-AAD</appName>\n <userName>nobody</userName>\n <customAction></customAction>\n <maxCount>0</maxCount>\n <posOffset>0</posOffset>\n <requestedAction>2</requestedAction>\n <shouldFilter></shouldFilter>\n <sortAscending>true</sortAscending>\n <sortByKey>name</sortByKey>\n </eai_settings>\n <sessionKey>vIhbfLFoBcWd2QCagd7Gg3k6kN0gvxARFwcRvaUefynzDvBAEtTKQGjoqELPNBmQNNcZ^rQ7NhjTKMMgA^9h^M3w41LuqWnWKO4XMQA_P7uzDxlio1cfOHdHoBp7w3MNy1voyC</sessionKey>\n <productType>enterprise</productType>\n <callerArgs>\n <id></id>\n <args/>\n </callerArgs>\n <setup/>\n</eai>\n
06-30-2020 15:02:50.557 -0400 DEBUG AdminManagerExternal - Sending handler setup data:\n<?xml version="1.0" encoding="UTF-8"?>\n<eai>\n <eai_settings>\n <appName>TA-MS-AAD</appName>\n <userName>nobody</userName>\n <customAction></customAction>\n <maxCount>0</maxCount>\n <posOffset>0</posOffset>\n <requestedAction>2</requestedAction>\n <shouldFilter></shouldFilter>\n <sortAscending>true</sortAscending>\n <sortByKey>name</sortByKey>\n </eai_settings>\n <sessionKey>vIhbfLFoBcWd2QCagd7Gg3k6kN0gvxARFwcRvaUefynzDvBAEtTKQGjoqELPNBmQNNcZ^rQ7NhjTKMMgA^9h^M3w41LuqWnWKO4XMQA_P7uzDxlio1cfOHdHoBp7w3MNy1voyC</sessionKey>\n <productType>enterprise</productType>\n <callerArgs>\n <id></id>\n <args/>\n </callerArgs>\n <setup/>\n</eai>\n
06-30-2020 15:02:50.558 -0400 DEBUG
AdminManagerExternal - Sending handler setup data:\n<?xml version="1.0" encoding="UTF-8"?>\n<eai>\n <eai_settings>\n <appName>TA-MS-AAD</appName>\n <userName>nobody</userName>\n <customAction></customAction>\n <maxCount>0</maxCount>\n <posOffset>0</posOffset>\n <requestedAction>2</requestedAction>\n <shouldFilter></shouldFilter>\n <sortAscending>true</sortAscending>\n <sortByKey>name</sortByKey>\n </eai_settings>\n <sessionKey>vIhbfLFoBcWd2QCagd7Gg3k6kN0gvxARFwcRvaUefynzDvBAEtTKQGjoqELPNBmQNNcZ^rQ7NhjTKMMgA^9h^M3w41LuqWnWKO4XMQA_P7uzDxlio1cfOHdHoBp7w3MNy1voyC</sessionKey>\n <productType>enterprise</productType>\n <callerArgs>\n <id></id>\n <args/>\n </callerArgs>\n <setup/>\n</eai>\n 06-30-2020 15:02:50.563 -0400 DEBUG AdminManagerExternal - Got back data: <eai_error><recognized>false</recognized><type>&lt;class 'backports.configparser.InterpolationSyntaxError'&gt;</type><message>'%' must be followed by '%' or '(', found: '%8x8O'</message><stacktrace>Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init\n hand = handler(mode, ctxInfo)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 67, in __init__\n get_splunkd_uri(),\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 210, in get_splunkd_uri\n scheme, host, port = get_splunkd_access_info()\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 182, in get_splunkd_access_info\n 'server', 'sslConfig', 'enableSplunkdSSL')):\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 230, in get_conf_key_value\n stanzas = get_conf_stanzas(conf_name)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 284, in get_conf_stanzas\n out[section] = {item[0]: item[1] for item in parser.items(section)}\n File 
"/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in items\n return [(option, value_getter(option)) for option in d.keys()]\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in &lt;listcomp&gt;\n return [(option, value_getter(option)) for option in d.keys()]\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 867, in &lt;lambda&gt;\n section, option, d[option], d)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 387, in before_get\n self._interpolate_some(parser, option, L, value, section, defaults, 1)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 437, in _interpolate_some\n "found: %r" % (rest,))\nbackports.configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%8x8O'\n</stacktrace></eai_error>\n 06-30-2020 15:02:50.563 -0400 DEBUG AdminManagerExternal - Found serialized error from external handler. 
06-30-2020 15:02:50.563 -0400 ERROR AdminManagerExternal - Stack trace from python handler:\nTraceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init\n hand = handler(mode, ctxInfo)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 67, in __init__\n get_splunkd_uri(),\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 210, in get_splunkd_uri\n scheme, host, port = get_splunkd_access_info()\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 182, in get_splunkd_access_info\n 'server', 'sslConfig', 'enableSplunkdSSL')):\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 230, in get_conf_key_value\n stanzas = get_conf_stanzas(conf_name)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 284, in get_conf_stanzas\n out[section] = {item[0]: item[1] for item in parser.items(section)}\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in items\n return [(option, value_getter(option)) for option in d.keys()]\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in <listcomp>\n return [(option, value_getter(option)) for option in d.keys()]\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 867, in <lambda>\n section, option, d[option], d)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 387, in before_get\n self._interpolate_some(parser, option, L, value, section, defaults, 1)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 437, in _interpolate_some\n "found: %r" % (rest,))\nbackports.configparser.InterpolationSyntaxError: '%' must be 
followed by '%' or '(', found: '%8x8O'\n 06-30-2020 15:02:50.563 -0400 ERROR AdminManagerExternal - Unexpected error "<class 'backports.configparser.InterpolationSyntaxError'>" from python handler: "'%' must be followed by '%' or '(', found: '%8x8O'". See splunkd.log for more details. 06-30-2020 15:02:50.569 -0400 DEBUG AdminManagerExternal - Got back data: <eai_error><recognized>false</recognized><type>&lt;class 'backports.configparser.InterpolationSyntaxError'&gt;</type><message>'%' must be followed by '%' or '(', found: '%8x8O'</message><stacktrace>Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init\n hand = handler(mode, ctxInfo)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 67, in __init__\n get_splunkd_uri(),\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 210, in get_splunkd_uri\n scheme, host, port = get_splunkd_access_info()\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 182, in get_splunkd_access_info\n 'server', 'sslConfig', 'enableSplunkdSSL')):\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 230, in get_conf_key_value\n stanzas = get_conf_stanzas(conf_name)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 284, in get_conf_stanzas\n out[section] = {item[0]: item[1] for item in parser.items(section)}\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in items\n return [(option, value_getter(option)) for option in d.keys()]\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in &lt;listcomp&gt;\n return [(option, value_getter(option)) for option in d.keys()]\n File 
"/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 867, in &lt;lambda&gt;\n section, option, d[option], d)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 387, in before_get\n self._interpolate_some(parser, option, L, value, section, defaults, 1)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 437, in _interpolate_some\n "found: %r" % (rest,))\nbackports.configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%8x8O'\n</stacktrace></eai_error>\n 06-30-2020 15:02:50.569 -0400 DEBUG AdminManagerExternal - Found serialized error from external handler. 06-30-2020 15:02:50.569 -0400 ERROR AdminManagerExternal - Stack trace from python handler:\nTraceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init\n hand = handler(mode, ctxInfo)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 67, in __init__\n get_splunkd_uri(),\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 210, in get_splunkd_uri\n scheme, host, port = get_splunkd_access_info()\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 182, in get_splunkd_access_info\n 'server', 'sslConfig', 'enableSplunkdSSL')):\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 230, in get_conf_key_value\n stanzas = get_conf_stanzas(conf_name)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/solnlib/splunkenv.py", line 284, in get_conf_stanzas\n out[section] = {item[0]: item[1] for item in parser.items(section)}\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in items\n return [(option, value_getter(option)) for option in d.keys()]\n File 
"/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 870, in <listcomp>\n return [(option, value_getter(option)) for option in d.keys()]\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 867, in <lambda>\n section, option, d[option], d)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 387, in before_get\n self._interpolate_some(parser, option, L, value, section, defaults, 1)\n File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/backports/configparser/__init__.py", line 437, in _interpolate_some\n "found: %r" % (rest,))\nbackports.configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%8x8O'\n 06-30-2020 15:02:50.569 -0400 ERROR AdminManagerExternal - Unexpected error "<class 'backports.configparser.InterpolationSyntaxError'>" from python handler: "'%' must be followed by '%' or '(', found: '%8x8O'". See splunkd.log for more details.    
Hello experts, I am wondering if there is any way to make the search strings flexible. I have multiple queries like these:

- index=index_1 host=host_1 (scope=A OR scope=B) | ...
- index=index_2 host=host_2 (scope=C OR scope=D) | ...
- index=index_3 host=host_3 (scope=A OR scope=B OR scope=E OR scope=F) | ...

So instead of writing a macro with 3 arguments ($index$, $host$, $scopes$, i.e. customMacro(3)), can we pass only the index as an argument and, based on the number in the index, derive the host and scope? Something like host=case(match(index,1), host_1, ...). Thanks in advance!
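One way to sketch this, with the caveat that the host/scope mappings below are made up for illustration: define a one-argument macro whose subsearch derives host and scope from the index number. The subsearch's result rows are OR'd together, which effectively expands to host=host_N AND (scope=X OR scope=Y).

```
# macros.conf (hypothetical one-argument macro)
[customSearch(1)]
args = num
definition = index=index_$num$ [| makeresults \
  | eval host=case("$num$"=="1","host_1", "$num$"=="2","host_2", "$num$"=="3","host_3") \
  | eval scope=case("$num$"=="1","A,B", "$num$"=="2","C,D", "$num$"=="3","A,B,E,F") \
  | makemv delim="," scope | mvexpand scope \
  | fields host scope]
```

Invoked as `customSearch(2)` | stats ..., the subsearch should expand to something equivalent to (host="host_2" AND (scope="C" OR scope="D")). A lookup file mapping index to host and scopes would be a tidier variant of the same idea.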
It's been a while since I've worked with Splunk. I have an error detail that I can search:

index=* errorMessage

and it returns: dateTime - sessionId - errorMessage

If I search the sessionId:

index=* sessionId

I get: dateTime - sessionId - customerDetail

How can I find the customerDetail with one query by searching for the errorMessage?
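Assuming errorMessage, sessionId, and customerDetail are all extracted fields (an assumption based on the description above), one common pattern is to pull both event types in a single search and stitch them together with stats by sessionId, then keep only the sessions that actually had an error:

```
index=* (errorMessage=* OR customerDetail=*)
| stats values(errorMessage) as errorMessage
        values(customerDetail) as customerDetail
        earliest(_time) as dateTime
        by sessionId
| where isnotnull(errorMessage) AND isnotnull(customerDetail)
```

This is usually preferred over the join command, which has subsearch result limits; narrow index=* to the real index if you can.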
Hi, I would like to ask why a Splunk real-time saved search is still running even though it's expired. Also, what is the purpose of the expiration time (24 hrs) in the settings if it still runs?
Hi, I have just received 3 new machines for a Splunk cluster. The new machines are faster than my previous hardware, and I would like to know which is better:

Option A: use the current hardware as the SH and the new machines as indexers.
Option B: use one new machine as the SH and the old machine plus 2 new machines as indexers.

Both have the same number of threads (56) and both have SSDs.
Old: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
3 new: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz

Thanks in advance, Robert
Hi team, how can we check what kind of data AppDynamics agents read from the application/server/DB and send to the controller? How can we know, from our end, exactly what data the AppDynamics agent reads? Thanks in advance. Regards,

^ Edited by @Ryan.Paredez to improve the title
Hi, I was wondering if we can use the field in eval inside the regular expression in rex?   my search query | eval IP=if(("$spec_IP$"=="*"),"(?<file_ip>\d+.\d+.\d+.\d+)","$spec_IP$") | rex fie... See more...
Hi, I was wondering if we can use a field set by eval inside the regular expression in rex.

My search query:

| eval IP=if(("$spec_IP$"=="*"),"(?<file_ip>\d+\.\d+\.\d+\.\d+)","$spec_IP$")
| rex field=_raw "\d{1,2}-\S{3}\s\d{2}:\d{2}:\d{2}.\d{3}\s\S{3}\s\[IP\]\s%NICWIN-4-Security_560_Security[\S\s]+?(?<log_time>(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s\d{2}\s\d{2}:\d{2}:\d{2})[\S\s]+?\S*Object\sName:\s(?<object_name>[\S\s]+?)New\sHandle\sID[\S\s]+?Primary\sUser\sName:\s(?<username>[\S\s]+?)\s+"

I am trying to use the eval field IP in the regular expression in the rex command.
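As far as I know, rex needs a literal regex; it cannot substitute a field value at search time. A workaround sketch, assuming $spec_IP$ is a dashboard token as described above: always extract with the generic IP pattern, then filter the extracted field against the token afterwards (when the token is "*", the wildcard matches everything anyway):

```
| rex field=_raw "(?<file_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| search file_ip="$spec_IP$"
```

This avoids building the regex dynamically, at the cost of extracting file_ip for every event before filtering.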
Does Splunk Cloud support the Workato add-on, similar to Splunk Enterprise? Please provide us with the relevant documentation if it is supported. Also, please let us know how to enable the management port 8089 on a Splunk Cloud instance / Enterprise edition.
Hello, I have created a machine learning job to detect categorical outliers and saved it as an alert. I have scheduled the alert for every day and I am receiving results. Some of the results are legitimate and some are false positives. Is there any way I can feed these results back to the machine learning job so it can learn from them? I have tested it, and it does not seem to learn automatically. Kindly suggest something if you have any ideas.
I try to exclude the private IP ranges with:

| search NOT (src=10.0.0.0/8 OR src=192.168.0.0/16 OR src=172.16.0.0/12)

but I still find the private IPs in my search results.
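CIDR notation in a bare search term does not always match the way you'd expect, depending on how the field is extracted. A more reliable sketch, assuming src holds a plain IPv4 address, is the cidrmatch() eval function:

```
| where NOT (cidrmatch("10.0.0.0/8", src)
          OR cidrmatch("172.16.0.0/12", src)
          OR cidrmatch("192.168.0.0/16", src))
```

If this still lets private IPs through, check whether src is multivalue or carries extra characters such as a port, which would defeat both approaches.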
Dear team, I had an issue with Splunk and had to follow this post: https://community.splunk.com/t5/All-Apps-and-Add-ons/Amazon-Web-Services-Add-on-s3-generic-error-TypeError-int-object/td-p/439118 to make my Splunk work again. However, Splunk now ingests logs from the very beginning. How do I make Splunk ingest logs from only the last 7 or 14 days? I'm pretty new to Splunk, so I really appreciate every input from you guys.
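If this is the generic S3 input of the Splunk Add-on for AWS, I believe its inputs.conf has an initial_scan_datetime setting that controls how far back it scans; please verify the exact parameter name against your add-on version. A sketch, where the input name and the cut-off date are placeholders:

```
# inputs.conf (generic S3 input of the Splunk Add-on for AWS)
[aws_s3://my_s3_input]
# Only pick up S3 objects modified after this time (placeholder: roughly "last 7 days")
initial_scan_datetime = 2020-06-24T00:00:00Z
```

Note this only limits what the input scans going forward; anything already indexed would have to be cleaned up separately, and the input's checkpoint may need to be reset for the setting to take effect.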
I am building a REST API input using Add-on Builder to ingest logs from Oracle Identity Cloud Service, following the instructions in the documentation: https://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/idcs/idcs_splunk_obe/splunk.html However, I am not sure where in the REST input wizard of Add-on Builder I should enter the attribute:value pairs from the docs. Can someone please help with this?