All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


The answer to this is probably stupid simple. Banging my head on this. Help and patience please.

I am writing a query which audits user role activations. A user can have many roles, e.g. Network Admin, Security Reader, User Admin, Storage Operator, etc. Specifically, I want to see how many days have passed since a user activated a specific role.

Today - Last Activation = Days Since Last Activation

Questions:
- How can I most efficiently subtract the time of the most recent role activation event from the current time?
- How can I then show only the most recent results?
- How can I then audit all users who have not activated a role in the past 30 days?

Base query for a single-user search over 30 days and sample results are below.

index=audit user=bob@example.com | eval days = round((now() - _time)/86400) | table Role, user, _time, days | sort - days

Sample data:

Role                   UserName         _time                    days
Global Reader          bob@example.com  2021-09-19T08:35:06.998  29
Global Reader          bob@example.com  2021-09-19T08:35:05.514  29
Systems Administrator  bob@example.com  2021-09-23T05:55:51.177  25
Systems Administrator  bob@example.com  2021-09-23T05:55:49.036  25
Global Reader          bob@example.com  2021-09-24T00:48:20.254  24
Storage Operator       bob@example.com  2021-09-24T00:48:18.942  24
Systems Administrator  bob@example.com  2021-09-27T07:22:23.971  21
Systems Administrator  bob@example.com  2021-09-27T07:22:22.971  21
Global Reader          bob@example.com  2021-09-27T07:19:40.569  21
Global Reader          bob@example.com  2021-09-27T07:19:39.460  21

Desired results show only the most recent events:

Role                   UserName         _time                    days
Global Reader          bob@example.com  2021-09-24T00:48:20.254  24
Storage Operator       bob@example.com  2021-09-24T00:48:18.942  24
Systems Administrator  bob@example.com  2021-09-27T07:22:22.971  21
Global Reader          bob@example.com  2021-09-27T07:19:39.460  21
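One way to approach this (a sketch, assuming the Role and user fields are already extracted at search time): let stats keep only the latest activation per user/role pair, then compute the age and filter. The 30-day audit then becomes a simple where clause:

index=audit
| stats latest(_time) as last_activation by user, Role
| eval days = round((now() - last_activation)/86400)
| where days > 30
| convert ctime(last_activation)
| sort - days

Dropping the user=bob@example.com filter makes it run across all users; note the search's time range must reach far enough back to see each user's last activation.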
Hello Splunk Community, Can anyone help me build a query based on the below? I have a batch job that usually starts at 5.30pm every day and finishes in the early morning of the following day. The batch job has multiple steps logged as separate events, and there is no unique id to link a step to its batch job. I want to create a timechart which shows the total duration (step 1 to step 5) for each batch job occurring daily. Example of one batch job's start and end times (dummy data used):

Step  Start_Time           End_Time
1     2021-09-11 17:30:00  2021-09-11 23:45:01
2     2021-09-11 23:45:01  2021-09-12 01:45:20
3     2021-09-12 01:45:20  2021-09-12 02:35:20
4     2021-09-12 02:35:20  2021-09-12 03:04:25
5     2021-09-12 03:04:25  2021-09-12 05:23:06

I hope someone can figure this one out as I have been stuck on it for a few days. Many Thanks, Zoe
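A sketch of one approach, assuming the events carry Start_Time and End_Time fields in the format shown (the index name is a placeholder): since there is no batch id, shift each event back 12 hours so steps that finish after midnight group with the 5:30pm start, then take min/max per adjusted day:

index=batch_logs
| eval start = strptime(Start_Time, "%Y-%m-%d %H:%M:%S")
| eval end = strptime(End_Time, "%Y-%m-%d %H:%M:%S")
| eval batch_date = strftime(relative_time(start, "-12h"), "%Y-%m-%d")
| stats min(start) as job_start, max(end) as job_end by batch_date
| eval duration_hours = round((job_end - job_start)/3600, 2)
| table batch_date, duration_hours

The 12-hour shift works as long as no batch ever starts before noon; adjust the offset otherwise.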
I created my Splunk instance and created a username/password. I am unable to log in to Splunk. HELP:(
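If the password you created is being rejected, one documented recovery path (Splunk 7.1 and later, assuming you have filesystem access to the instance) is to stop Splunk, move $SPLUNK_HOME/etc/passwd aside, and seed a fresh admin credential before restarting:

# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <your new password>

Also double-check you are logging in on the web port (8000 by default) rather than the management port (8089).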
Hi - I have a production outage and I am really struggling to fix it. I have had my Unix admin on and Splunk support, however they can't pinpoint the issue. Any help would be amazing, please... I have a cluster with 1 SH, 1 MD and 3 indexers. On the SH, every X minutes I get "waiting for data" and it just hangs there for up to 60 seconds; all the screens are 100% unusable. If I run a search on the MD or the indexers it's all fine. We did turn on some verbose logging in log.cfg:

[python]
splunk = DEBUG

And we got this error that I can't really find on Google:

2021-10-17 13:07:49,790 DEBUG [616c036bfc7fa6567ccfd0] cached:69 - Memoizer expired cachekey=('j_ycUrLUdujLkZGGP2BX^rH^0aBu4qOEaCTBx89kc4teMYUq_MasVgLLGH9T^lr900ey_UYle7JSPVCMiA1vQiPTChSEQrNiPxobUi1Ut_VKQAwW1UC8H8R8fxXyODmliNS8', <function getServerZoneInfo at 0x7fa655877440>, (), frozenset())
2021-10-17 13:07:49,791 DEBUG [616c036bfc7fa6567ccfd0] cached:69 - Memoizer expired cachekey=('j_ycUrLUdujLkZGGP2BX^rH^0aBu4qOEaCTBx89kc4teMYUq_MasVgLLGH9T^lr900ey_UYle7JSPVCMiA1vQiPTChSEQrNiPxobUi1Ut_VKQAwW1UC8H8R8fxXyODmliNS8', <function isModSetup at 0x7fa656048290>, ('Murex',), frozenset())
2021-10-17 13:14:20,027 DEBUG [616c036bfc7fa6567ccfd0] cached:69 - Memoizer expired cachekey=('Qq597goWUdHcWpAp40yLt648IoIOsjngeZUNYmko18k_8LehDC7ZD0Daauwm0vMgbCiNehFe0KYbLHY3m^XFDWqHZPZM1w01V02phdaScMSdDFRrEu8_Q50t1lA5QRBVaiVL6N', <function getEntities at 0x7fa6560a2d40>, ('data/ui/manager',), frozenset({('count', -1), ('namespace', 'Murex')}))

Any help would be so good. I am in the process of thinking of installing the apps onto the search head for tomorrow's production - I think that will work?
This is the sample access log which is coming up in the Splunk UI:

{"timestamp":"2021-10-17T15:03:56,763Z","level":"INFO","thread":"reactor-http-epolpl-20","message":"method=GET, uri=/api/v1/hello1, status=200, duration=1, "logger":"reactor.netty.http.server.AccessLog"}
{"timestamp":"2021-10-17T15:03:56,763Z","level":"INFO","thread":"reactor-http-epolpl-20","message":"method=GET, uri=/api/v1/dummy1, status=200, duration=1, "logger":"reactor.netty.http.server.AccessLog"}

I want to extract the URL from the uri part (uri=/api/v1/dummy1) and make a count for all the APIs, i.e. how many times each API is hit. I tried the below query, but it's not giving the desired result:

index=dummy OR index=dummy1 source=*dummy-service* logger=reactor.netty.http.server.AccessLog | rex field=message "(?<url>uri.[\/api\/v1\/hello]+)" | chart count by url

Can someone help with this?
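The rex in that attempt uses a character class, so [\/api\/v1\/hello]+ matches any run of those individual characters rather than the literal path. A sketch of a fix that captures everything after uri= up to the next comma (index and source are taken from the original search):

index=dummy OR index=dummy1 source=*dummy-service* logger=reactor.netty.http.server.AccessLog
| rex field=message "uri=(?<url>[^,]+)"
| stats count by url
| sort - count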
How to use "whois" .apps "network tools" doesn't work. "lookup whois" does not work. are there other valid applications? or how to write a request.  username src_ip John 82.*.*.* Smit 17... See more...
How to use "whois" .apps "network tools" doesn't work. "lookup whois" does not work. are there other valid applications? or how to write a request.  username src_ip John 82.*.*.* Smit 172.*.*.*
Help me write the request correctly.

user  src_ip
John  82.*.*.*
Smit  172.*.*.*

The documentation says that these lines are required. How do I write the search to get data correctly via whois by src_ip? https://github.com/doksu/TA-centralops/wiki

| lookup local=t centralopswhois_cache _key AS domain | centralopswhois output=json limit=2 domain
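A sketch leaning only on the command syntax quoted above from the TA-centralops wiki (the index name is a placeholder, and whether centralopswhois accepts IP addresses in the field it looks up is an assumption worth verifying against that wiki): the command appears to operate on a field named domain, so rename src_ip first:

index=auth_logs
| table user, src_ip
| rename src_ip AS domain
| centralopswhois output=json limit=2 domain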
Hi: We have some hardware capture NICs that can accelerate packet capture, but they need a special libpcap.so to be loaded, not the OS default libpcap. I used ldd and cannot find that Splunk Stream loads any version of libpcap.so. So my question is: how can I load my own libpcap.so to accelerate Splunk Stream packet capture? Is that possible?
I have 1 primary index, namely azure, with 2 sourcetypes, namely mscs:kube-good and mscs:kube-audit-good. I believe there could be duplication of log data between the 2 sourcetypes. What Splunk queries can tell me if there is duplication of logs between the 2 sourcetypes? Does each have information that the other doesn't contain? Is there a lot of overlap? Please give me the Splunk queries that will do this job.
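A sketch for finding exact duplicates (events whose raw text appears under both sourcetypes); grouping on _raw is expensive, so run it over a narrow time range first:

index=azure sourcetype IN ("mscs:kube-good", "mscs:kube-audit-good")
| stats dc(sourcetype) as st_count, values(sourcetype) as sourcetypes, count by _raw
| where st_count > 1

If the two sourcetypes format the same underlying records differently, this will miss them; in that case compare on a shared key field (e.g. a request id) instead of _raw.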
Is the "Free Splunk" free for business use please?
After upgrading from 7.2.1 to 8.6.1 I'm getting the following errors in mongo.log.

2021-10-16T15:01:30.798Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] MongoDB starting : pid=2763 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=7d49a1b4a62a
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] git version: 226949cc252af265483afbf859b446590b09b098
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2y-fips 16 Feb 2021
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] allocator: tcmalloc
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] modules: none
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] build environment:
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] distarch: x86_64
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] target_arch: x86_64
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "BD6F6380-BE84-4A28-A93F-359500FA793C" }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "mmapv1", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2021-10-16T15:01:30.858Z I JOURNAL [initandlisten] journal dir=/opt/splunk/var/lib/splunk/kvstore/mongo/journal
2021-10-16T15:01:30.860Z I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2021-10-16T15:01:30.863Z I CONTROL [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x562db354c000 Bad address
2021-10-16T15:01:30.863Z F - [initandlisten] Fatal Assertion 13515 at src/mongo/db/storage/mmap_v1/logfile.cpp 250
2021-10-16T15:01:30.863Z F - [initandlisten] ***aborting after fassert() failure
2021-10-16T15:01:30.880Z F - [initandlisten] Got signal: 6 (Aborted). 
0x562db04d2de1 0x562db04d1ff9 0x562db04d24dd 0x1466a4e77340 0x1466a4ad7cc9 0x1466a4adb0d8 0x562daeb8c792 0x562daf0f1510 0x562daf0c4a44 0x562daf0c5035 0x562daf0c9c72 0x562daf0b51f5 0x562daee3a603 0x562daec07a1a 0x562daec0b313 0x562daeb8e159 0x1466a4ac2ec5 0x562daebf2665 ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"562DAE212000","o":"22C0DE1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"562DAE212000","o":"22BFFF9"},{"b":"562DAE212000","o":"22C04DD"},{"b":"1466A4E67000","o":"10340"},{"b":"1466A4AA1000","o":"36CC9","s":"gsignal"},{"b":"1466A4AA1000","o":"3A0D8","s":"abort"},{"b":"562DAE212000","o":"97A792","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"562DAE212000","o":"EDF510","s":"_ZN5mongo7LogFile17synchronousAppendEPKvm"},{"b":"562DAE212000","o":"EB2A44","s":"_ZN5mongo3dur20_preallocateIsFasterEv"},{"b":"562DAE212000","o":"EB3035","s":"_ZN5mongo3dur19preallocateIsFasterEv"},{"b":"562DAE212000","o":"EB7C72","s":"_ZN5mongo3dur16preallocateFilesEv"},{"b":"562DAE212000","o":"EA31F5","s":"_ZN5mongo3dur7startupEPNS_11ClockSourceEl"},{"b":"562DAE212000","o":"C28603","s":"_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv"},{"b":"562DAE212000","o":"9F5A1A"},{"b":"562DAE212000","o":"9F9313","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"562DAE212000","o":"97C159","s":"main"},{"b":"1466A4AA1000","o":"21EC5","s":"__libc_start_main"},{"b":"562DAE212000","o":"9E0665"}],"processInfo":{ "mongodbVersion" : "3.6.17-linux-splunk-v4", "gitVersion" : "226949cc252af265483afbf859b446590b09b098", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.19.107-Unraid", "version" : "#1 SMP Thu Mar 5 13:55:57 PST 2020", "machine" : "x86_64" }, "somap" : [ { "b" : "562DAE212000", "elfType" : 3 }, { "b" : "7FFDFEBB7000", "elfType" : 3 }, { "b" : "1466A5A7A000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3 }, { "b" : "1466A5797000", "path" : "/opt/splunk/lib/libcrypto.so.1.0.0", "elfType" : 3 }, { "b" : "1466A5E39000", "path" : "/opt/splunk/lib/libssl.so.1.0.0", "elfType" : 3 }, { "b" : "1466A5593000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3 }, { "b" : "1466A538B000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3 }, { "b" : "1466A5085000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3 }, { "b" : "1466A4E67000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3 }, { "b" : "1466A4AA1000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3 }, { "b" : "1466A5C95000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 }, { "b" : "1466A5E1A000", "path" : "/opt/splunk/lib/libz.so.1", "elfType" : 3 } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x562db04d2de1] mongod(+0x22BFFF9) [0x562db04d1ff9] mongod(+0x22C04DD) [0x562db04d24dd] libpthread.so.0(+0x10340) [0x1466a4e77340] libc.so.6(gsignal+0x39) [0x1466a4ad7cc9] libc.so.6(abort+0x148) [0x1466a4adb0d8] mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x562daeb8c792] mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x250) [0x562daf0f1510] mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x184) [0x562daf0c4a44] mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x35) [0x562daf0c5035] mongod(_ZN5mongo3dur16preallocateFilesEv+0x662) [0x562daf0c9c72] mongod(_ZN5mongo3dur7startupEPNS_11ClockSourceEl+0x65) [0x562daf0b51f5] mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x273) [0x562daee3a603] mongod(+0x9F5A1A) [0x562daec07a1a] mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x873) [0x562daec0b313] mongod(main+0x9) [0x562daeb8e159] 
libc.so.6(__libc_start_main+0xF5) [0x1466a4ac2ec5] mongod(+0x9E0665) [0x562daebf2665] ----- END BACKTRACE -----

I've already tried the following:
- created a new server.pem (even though it wasn't expired)
- ensured enough disk space is available
- splunk clean kvstore --all
- chmod 600 /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
- chown -R user:group /opt/splunk/

This is on a single Splunk instance and I'm not actively using the KV store for any of my apps, so completely starting fresh with it seemed like the best choice, but even after running clean I'm still getting the same errors.
Sorry about this lame post. Our Splunk admin had to leave unexpectedly and now it's up to me to do this without any prior knowledge. I'm trying to figure out how to make a dashboard that displays our biggest indexes out of about 100. Management wants to know which indexes are ingesting the most data daily, and how much. Any help would be appreciated. Thank you
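A sketch based on license usage data (assumes your role can search the _internal index; on a distributed deployment this data lives on the license master): idx is the index name and b is bytes ingested:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB

Run it over "Last 24 hours" for a daily view, or swap the stats line for | timechart span=1d sum(b) by idx to trend it; once it looks right, save the search as a dashboard panel.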
Hi, I have 3 shifts in a day, and the last shift runs from night into the morning of the following day. I want to collect the logs of this shift and count them. My SPL is here:

Base Search here
| eval date_hour=strftime(_time,"%H")
| eval date=strftime(_time,"%d/%m")
| eval shift=case(date_hour>7 AND date_hour<15, "Shift 1", date_hour>14 AND date_hour<22, "Shift 2", date_hour>22 OR date_hour<8 , "Shift 3")
| stats count by a, b, date, shift
| chart sum(count) by shift, date
| addtotals

I'm using the 24h format. The Shift 3 case does not work well: I miss the time between 0h and 8h of the next day for Shift 3 of the day I'm checking.
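Two things stand out in that case(): hour 22 matches no branch (Shift 2 stops at <22 and Shift 3 starts at >22), and the early-morning hours are dated to the calendar day they fall on rather than to the day the shift started. A sketch of a fix (the grouping fields a and b are kept from the original):

Base Search here
| eval hour=tonumber(strftime(_time,"%H"))
| eval shift=case(hour>=8 AND hour<15, "Shift 1", hour>=15 AND hour<22, "Shift 2", hour>=22 OR hour<8, "Shift 3")
| eval shift_date=if(hour<8, strftime(relative_time(_time,"-1d"), "%d/%m"), strftime(_time,"%d/%m"))
| stats count by a, b, shift_date, shift
| chart sum(count) by shift, shift_date
| addtotals

The relative_time(_time,"-1d") shifts the 0h-8h events back onto the previous day, so they count toward the Shift 3 that began the night before.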
Hello, I am trying to create a dropdown panel with the below Splunk query. I am able to view the results when I manually run the query, but the dropdown is not populating any results. Any help would be appreciated. Query:

index=<< index_name >> | dedup SERVER_TYPE | table SERVER_TYPE

Thank You
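Two things to check, plus a sketch of an equivalent population search that dropdowns tend to handle better (the index placeholder is kept from the original): make sure the input's "Field For Label" and "Field For Value" are both set to SERVER_TYPE, and make sure the input's time range actually covers the data. A stats-based search is also faster than dedup over raw events:

index=<< index_name >> | stats count by SERVER_TYPE | fields SERVER_TYPE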
If I were to have the UF run a PowerShell script, and that script stops the UF, does that also end that PowerShell script session? If so, is there a way to keep it running?
Hello, there is a Splunk video on YouTube on finding new service interactive logins here: https://www.youtube.com/watch?v=bgIG2um_Hd0 The following line is what I need a better understanding of:

| eval isOutlier=if(earliest >= relative_time(now(), "-1d@d"), 1, 0)

I understand this much: it is an outlier (1) if the earliest time of the first event is greater than or equal to the computed time. The "-1d@d" is the part I am not understanding. Is it going back 1 day to find other matches that are also >= relative_time(now())? You would only get an outlier if the times are the same. If you go back "-1d@d", the earliest time of an event 1 day ago will never be equal to the time you ran the search, which is relative_time(now()). How are the matches made when going back 1d@d? I know I am thinking about this the wrong way; any assistance in understanding the logic would be greatly appreciated.
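For what it's worth, relative_time(now(), "-1d@d") does not move the search back in time; it just computes a single cutoff timestamp: now minus one day, snapped (@d) to midnight. So if the search runs at 2021-10-18 14:00, the cutoff is 2021-10-17 00:00, and any entity whose earliest recorded event falls at or after that cutoff gets flagged. A sketch of the usual surrounding pattern (field names are illustrative):

| stats earliest(_time) as earliest, latest(_time) as latest by user, dest
| eval isOutlier=if(earliest >= relative_time(now(), "-1d@d"), 1, 0)

Because stats runs over the whole search window (say, 30 days), earliest is the first time that user/dest pair ever appeared in the window; isOutlier=1 therefore means "first appeared since yesterday midnight", not "timestamp equals now".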
Hi Team, is it possible to retrieve the action items present on an alert, based on the alert name, using the Python SDK? For example, I have an alert by name which skipped; using that name I can retrieve the query that is used, but I'm wondering if I can retrieve the actions on the alert (like the webhook or the emails to which it is sent, and the cron schedule).
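A sketch using splunklib (the Splunk Python SDK); the connection details and alert name are placeholders, and each action.* key only appears when that action is actually configured on the saved search:

import splunklib.client as client

# Placeholder credentials -- adjust for your environment.
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme")

# Alerts are saved searches; look one up by name.
alert = service.saved_searches["My Alert Name"]

print(alert["search"])           # the query behind the alert
print(alert["cron_schedule"])    # e.g. */5 * * * *
print(alert["actions"])          # e.g. "email,webhook"
print(alert["action.email.to"])  # present only if the email action is set

Anything you can see for the alert in savedsearches.conf is exposed the same way through the entity's content.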
Can you guide us on how to implement Splunk making a call to the REST API of another application, with a custom payload, for an alert event?
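Two common routes, sketched with placeholder names and URL: the built-in webhook alert action is the quickest, but it sends a fixed JSON payload describing the alert; if the receiving API demands a specific payload shape, you generally need a custom alert action (a small script packaged in an app) instead. The webhook route in savedsearches.conf looks roughly like:

[My REST Alert]
search = index=main level=ERROR | stats count
cron_schedule = */15 * * * *
actions = webhook
action.webhook.param.url = https://example.com/api/v1/alerts

For the custom-payload route, see the "custom alert actions" developer documentation; the action's script receives the alert results and can POST any body you construct.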
Hi! I made a dashboard in Splunk Dashboard Studio, but I don't know how to set up auto-refresh (every 1 minute) to update the entire dashboard. Please help! Thanks
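In Dashboard Studio the refresh interval is set per data source rather than once for the whole dashboard, so giving every data source the same interval refreshes the entire dashboard together. A sketch of the source JSON (the data source name and query are placeholders); this corresponds to the auto-refresh setting in each search's configuration panel in the UI:

"dataSources": {
    "ds_example": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | stats count",
            "refresh": "1m",
            "refreshType": "delay"
        }
    }
}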
Hello, We are using Splunk Cloud and seeing the below error message on the SH:

Search Scheduler Search Lag
Root Cause(s): The number of extremely lagged searches (4) over the last hour exceeded the red threshold (1) on this Splunk instance

Can someone please help me fix this issue? Thanks
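A sketch for finding which scheduled searches are lagging (assumes your Splunk Cloud role can search _internal): the scheduler log records when each run was scheduled versus when it was actually dispatched:

index=_internal sourcetype=scheduler status=*
| eval lag_sec = dispatch_time - scheduled_time
| stats count, avg(lag_sec) as avg_lag, max(lag_sec) as max_lag by savedsearch_name, app
| sort - max_lag

The usual fixes are staggering cron schedules that all fire on the same minute, tightening the time ranges of the worst offenders, or raising search concurrency; on Splunk Cloud that last option goes through Support.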