All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


After upgrading from 7.2.1 to 8.6.1 I'm getting the following errors in mongo.log.

2021-10-16T15:01:30.798Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] MongoDB starting : pid=2763 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=7d49a1b4a62a
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] git version: 226949cc252af265483afbf859b446590b09b098
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2y-fips 16 Feb 2021
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] allocator: tcmalloc
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] modules: none
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] build environment:
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] distarch: x86_64
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] target_arch: x86_64
2021-10-16T15:01:30.840Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "BD6F6380-BE84-4A28-A93F-359500FA793C" }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "mmapv1", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2021-10-16T15:01:30.858Z I JOURNAL [initandlisten] journal dir=/opt/splunk/var/lib/splunk/kvstore/mongo/journal
2021-10-16T15:01:30.860Z I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2021-10-16T15:01:30.863Z I CONTROL [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x562db354c000 Bad address
2021-10-16T15:01:30.863Z F - [initandlisten] Fatal Assertion 13515 at src/mongo/db/storage/mmap_v1/logfile.cpp 250
2021-10-16T15:01:30.863Z F - [initandlisten] ***aborting after fassert() failure
2021-10-16T15:01:30.880Z F - [initandlisten] Got signal: 6 (Aborted).
0x562db04d2de1 0x562db04d1ff9 0x562db04d24dd 0x1466a4e77340 0x1466a4ad7cc9 0x1466a4adb0d8 0x562daeb8c792 0x562daf0f1510 0x562daf0c4a44 0x562daf0c5035 0x562daf0c9c72 0x562daf0b51f5 0x562daee3a603 0x562daec07a1a 0x562daec0b313 0x562daeb8e159 0x1466a4ac2ec5 0x562daebf2665 ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"562DAE212000","o":"22C0DE1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"562DAE212000","o":"22BFFF9"},{"b":"562DAE212000","o":"22C04DD"},{"b":"1466A4E67000","o":"10340"},{"b":"1466A4AA1000","o":"36CC9","s":"gsignal"},{"b":"1466A4AA1000","o":"3A0D8","s":"abort"},{"b":"562DAE212000","o":"97A792","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"562DAE212000","o":"EDF510","s":"_ZN5mongo7LogFile17synchronousAppendEPKvm"},{"b":"562DAE212000","o":"EB2A44","s":"_ZN5mongo3dur20_preallocateIsFasterEv"},{"b":"562DAE212000","o":"EB3035","s":"_ZN5mongo3dur19preallocateIsFasterEv"},{"b":"562DAE212000","o":"EB7C72","s":"_ZN5mongo3dur16preallocateFilesEv"},{"b":"562DAE212000","o":"EA31F5","s":"_ZN5mongo3dur7startupEPNS_11ClockSourceEl"},{"b":"562DAE212000","o":"C28603","s":"_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv"},{"b":"562DAE212000","o":"9F5A1A"},{"b":"562DAE212000","o":"9F9313","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"562DAE212000","o":"97C159","s":"main"},{"b":"1466A4AA1000","o":"21EC5","s":"__libc_start_main"},{"b":"562DAE212000","o":"9E0665"}],"processInfo":{ "mongodbVersion" : "3.6.17-linux-splunk-v4", "gitVersion" : "226949cc252af265483afbf859b446590b09b098", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.19.107-Unraid", "version" : "#1 SMP Thu Mar 5 13:55:57 PST 2020", "machine" : "x86_64" }, "somap" : [ { "b" : "562DAE212000", "elfType" : 3 }, { "b" : "7FFDFEBB7000", "elfType" : 3 }, { "b" : "1466A5A7A000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3 }, { "b" : "1466A5797000", "path" : "/opt/splunk/lib/libcrypto.so.1.0.0", "elfType" : 3 }, { "b" : "1466A5E39000", "path" : "/opt/splunk/lib/libssl.so.1.0.0", "elfType" : 3 }, { "b" : "1466A5593000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3 }, { "b" : "1466A538B000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3 }, { "b" : "1466A5085000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3 }, { "b" : "1466A4E67000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3 }, { "b" : "1466A4AA1000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3 }, { "b" : "1466A5C95000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 }, { "b" : "1466A5E1A000", "path" : "/opt/splunk/lib/libz.so.1", "elfType" : 3 } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x562db04d2de1] mongod(+0x22BFFF9) [0x562db04d1ff9] mongod(+0x22C04DD) [0x562db04d24dd] libpthread.so.0(+0x10340) [0x1466a4e77340] libc.so.6(gsignal+0x39) [0x1466a4ad7cc9] libc.so.6(abort+0x148) [0x1466a4adb0d8] mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x562daeb8c792] mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x250) [0x562daf0f1510] mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x184) [0x562daf0c4a44] mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x35) [0x562daf0c5035] mongod(_ZN5mongo3dur16preallocateFilesEv+0x662) [0x562daf0c9c72] mongod(_ZN5mongo3dur7startupEPNS_11ClockSourceEl+0x65) [0x562daf0b51f5] mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x273) [0x562daee3a603] mongod(+0x9F5A1A) [0x562daec07a1a] mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x873) [0x562daec0b313] mongod(main+0x9) [0x562daeb8e159] 
libc.so.6(__libc_start_main+0xF5) [0x1466a4ac2ec5] mongod(+0x9E0665) [0x562daebf2665] ----- END BACKTRACE -----

I've already tried the following:
- created a new server.pem (even though it wasn't expired)
- ensured enough disk space is available
- splunk clean kvstore --all
- chmod 600 /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
- chown -R user:group /opt/splunk/

This is on a single Splunk instance and I'm not actively using the KV store for any of my apps, so starting completely fresh with it seemed like the best choice, but even after running clean I'm still getting the same errors.
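While troubleshooting this, one quick check is what state splunkd itself reports for the KV store after it has tried to start mongod. A minimal SPL sketch, assuming you run it on the affected instance and that the kvStoreStatus field is populated as usual by the server/info endpoint:

| rest /services/server/info splunk_server=local
| fields splunk_server, version, kvStoreStatus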
Sorry about this lame post. Our Splunk admin had to leave unexpectedly and now it's up to me to do this without any prior knowledge.  I'm trying to figure out how to make a dashboard that displays our biggest indexers out of about 100.  Management wants to know which indexes are ingesting the most data daily and how much.     Any help would be appreciated.  Thank you
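A common starting point for a "top indexes by daily ingest" panel is the license usage log. A hedged sketch, assuming you can search the _internal index where license_usage.log is collected (b is bytes, idx is the index name; very large environments may squash per-index attribution):

index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) by idx useother=false limit=10

If management just wants a ranked table rather than a daily trend, swap the timechart for a stats sum by idx and sort descending.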
Hi, I have 3 shifts in a day, and the last shift runs from one night into the next morning. I want to collect the logs of this shift and count them. My SPL is here:

Base Search here
| eval date_hour=strftime(_time,"%H")
| eval date=strftime(_time,"%d/%m")
| eval shift=case(date_hour>7 AND date_hour<15, "Shift 1", date_hour>14 AND date_hour<22, "Shift 2", date_hour>22 OR date_hour<8, "Shift 3")
| stats count by a, b, date, shift
| chart sum(count) by shift, date
| addtotals

I'm using the 24h format. The Shift 3 case in the case command does not work well: I miss the time between 0h-8h of the next day, which belongs to Shift 3 of the day I'm checking.
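A sketch of one way to handle this, assuming the three shifts are meant to run 07:00-15:00, 15:00-23:00 and 23:00-07:00 (adjust the boundaries to your real shift times). Note the original case() never matches hour 22 at all, since Shift 2 stops at <22 and Shift 3 starts at >22. The other change is to roll the after-midnight hours of Shift 3 back onto the previous calendar day before building the date column:

Base Search here
| eval date_hour=tonumber(strftime(_time,"%H"))
| eval shift=case(date_hour>=7 AND date_hour<15, "Shift 1",
                  date_hour>=15 AND date_hour<23, "Shift 2",
                  true(), "Shift 3")
| eval shift_day=if(shift="Shift 3" AND date_hour<7, relative_time(_time, "-1d"), _time)
| eval date=strftime(shift_day, "%d/%m")
| stats count by a, b, date, shift
| chart sum(count) by shift, date
| addtotals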
Hello,

I am trying to create a dropdown panel with the below Splunk query. I am able to view the results when I run the query manually, but the dropdown does not populate any results. Any help would be appreciated.

Query: index=<< index_name >> | dedup SERVER_TYPE | table SERVER_TYPE

Thank You
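Two things worth checking, offered as general guesses since the dashboard XML isn't shown: that the input's fieldForLabel/fieldForValue match the field name the populating search actually returns (SERVER_TYPE, case-sensitive), and that the populating search has a sensible time range rather than the default. A slightly leaner populating query for a dropdown would be:

index=<< index_name >> | stats count by SERVER_TYPE | fields SERVER_TYPE | sort SERVER_TYPE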
If I were to have the UF run a PowerShell script, and that script stops the UF, does that also end that PowerShell script session? If so, is there a way to keep it running?
Hello,

There is a YouTube Splunk video on finding new service interactive logins here: https://www.youtube.com/watch?v=bgIG2um_Hd0

I just need a better understanding of the following line:

| eval isOutlier=if(earliest >= relative_time(now(), "-1d@d"), 1, 0)

I understand this much: it is an outlier (1) if the earliest time of the first event is greater than or equal to the time you ran the search. The "-1d@d" is the part I am not understanding. Is it going back 1 day to find other matches that are also >= relative_time(now())? You would only get an outlier if the times are the same. If you go back "-1d@d", the earliest time of an event 1 day ago will never be equal to the time you ran the search, which is relative_time(now()). How are the matches made when you're going back 1d@d? I know I am thinking about this the wrong way; any assistance in understanding the logic would be greatly appreciated.
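For what it's worth, the way that expression usually reads (field names below are assumed from the typical "first time seen" pattern these searches are built on): relative_time(now(), "-1d@d") does not go back and look for events at all; it just computes one fixed cutoff timestamp, midnight at the start of yesterday. earliest is the first time that combination has ever been seen across the whole search window, produced by stats earliest(_time). The if() then flags a row as an outlier when its first-ever occurrence falls after that cutoff, i.e. the combination is brand new within roughly the last day:

| stats earliest(_time) AS earliest, latest(_time) AS latest by user, dest
| eval isOutlier=if(earliest >= relative_time(now(), "-1d@d"), 1, 0)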
Hi team, is it possible to retrieve the action items present on an alert, based on the alert name, using the Python SDK? For example, I know I have an alert by name that skipped; using that name I can retrieve the query that is used, but I'm wondering if I can also retrieve the actions on the alert (like the webhook or the emails to which it is sent, and the cron schedule).
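The action configuration lives on the saved search itself, so anything that exposes the saved search's content should show it. As a quick way to see which keys to look for (the same keys the Python SDK's saved-search object should expose in its content), an SPL sketch against the REST endpoint; the alert name is a placeholder:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="my_alert_name"
| fields title, actions, cron_schedule, action.email.to, action.webhook.param.url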
Can you guide us on how to implement Splunk making a call to the REST API of another application, with a custom payload, for an alert event?
Hi! I made a dashboard in Splunk Dashboard Studio, but I don't know how to configure auto-refresh (every 1 minute) to update the entire dashboard. Please help! Thanks
Hello, We are using Splunk cloud and seeing the below error message on SH.  Search Scheduler Search Lag Root Cause(s): The number of extremely lagged searches (4) over the last hour exceeded the red threshold (1) on this Splunk instance   Can someone please help me in fixing this issue?     Thanks
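That health-check message means scheduled searches are dispatching long after their scheduled time. A starting point for finding the offenders, assuming you can search _internal on your Splunk Cloud stack (the scheduler sourcetype logs one event per scheduled run, with scheduled_time and dispatch_time in epoch seconds):

index=_internal sourcetype=scheduler earliest=-24h
| eval lag_seconds=dispatch_time - scheduled_time
| stats count, avg(lag_seconds) AS avg_lag, max(lag_seconds) AS max_lag by savedsearch_name, app
| sort - max_lag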
Hello all, I am facing a problem. We have an environment of:
- 11 indexers (clustered) / 1 Cluster Master
- 3 search heads (clustered)
- 1 SHC deployer
- 2 deployment servers
- 3 heavy forwarders
- 1 Enterprise Security instance

We could upgrade the indexers, the deployment servers, ES, the forwarders and the SHC deployer without problems; all of them are now on version 8.2. But on the 3 clustered search heads we get the error "Web interface doesn't seem to be available": the daemon does start, but it does not start the web interface. We deactivated some applications that are not supported with Python 3, but the web still does not start.
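Since splunkd itself is running, the web startup errors should be visible in _internal from another working instance (or the monitoring console). A rough sketch for narrowing down which component is failing at web startup; the host name is a placeholder for one of the affected search heads:

index=_internal host="sh01" source=*splunkd.log* (log_level=ERROR OR log_level=WARN) earliest=-60m
| stats count by component
| sort - count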
I've been working with the /services/search/jobs/export API recently and I noticed that setting the output mode to 'json' can cause responses to be suppressed. Here's an example:   curl -u $USER:$PASSWORD -k https://<splunk>/services/search/jobs/export -d search='search=savedsearch "my_search"' <?xml version='1.0' encoding='UTF-8'?> <response><messages><msg type="FATAL">Error in 'search' command: Unable to parse the search: Comparator '=' is missing a term on the left hand side.</msg></messages></response>   This same request in a different output mode has no response content.   curl -u $USER:$PASSWORD -k https://splunk.drwholdings.com:8089/services/search/jobs/export -d search='search=savedsearch "my_search"' -d "output_mode=json"   Is there some other flag I need to set to have these errors come through in JSON mode? Requests that don't result in error responses return fine. Both requests come back with status code 200.
Hi all! I'm trying to get Security Essentials to recognize Mimecast for its Email requirement under Data Inventory. It does not recognize it and only gives onboarding info for O365. I've got Mimecast for Splunk installed and all the dashboards show up. I've updated the "Email" data model to include the index "mimecast"; it is accelerated and contains data. The CIM Usage Dashboard shows data in the Email data model, populates the dataset field and shows results. The SA_CIM_Validator also recognizes the Email data model as having the Mimecast data. So what am I missing, folks? I'd much appreciate any thoughts or ideas you can share.
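One sanity check that may help while digging into this (it won't by itself change what Data Inventory detects, since SSE's automatic detection tends to look for specific known sourcetypes): confirm the accelerated Email data model really returns the Mimecast events and note which sourcetypes they carry, so you know exactly what SSE has to work with:

| tstats count from datamodel=Email where index=mimecast by sourcetype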
Hi,

How can I extract these fields:
field1=Version
field2=Author
field3=Date
field4=IssueNo

Here is the log:
23:53:00.512 app module: Abc , Ver:21.2 , 21/10/10 By: J_Danob customer
03:10:15.394 app module: cust_Pack.C, Ver:2.4, Last Updated:21/02/06, by:Jefri.Poor
22:21:51.398 app module: My Properties : Ver. 2.0, Last Updated: 20/03/02, By: Alex J Parson
04:11:26.184 app module: api.C, Ver.:6.0 , Last Updated: 21/11/05, By: J_Danob IssueNo: 12345
04:05:01.488 app module: AjaxSec.C , Ver: 2, 21/07/08 By:J_Danob app
12:27:24.259 app module: L: FORWARD 10 VER 6.1.0 [2021-05-04] [app] Ticket_Again BY Jack Danob
04:11:27.643 app module: [0]L: FORWARD 10 VER 6.2.7 [2021-08-17] [CUST] [ISSUENO:98765] [BY J_Danob] [Edit]
23:53:00.512 app module: Container Version 2.0.0 Added By Jack Danob Date 2021-01-01
23:53:00.512 app module: [0]L: ForwarderSB Version 3 By Danob 21/1/31 check all
04:11:26.186 app module: ApiGateway: Version[2.2.0] [21-09-26] [IssueNo:12345] [BY Jefri.Poor] [Solving]

Expected output:
Version   Date         Author          IssueNo
21.2      21/10/10     J_Danob
2.4       21/02/06     Jefri.Poor
2.0       20/03/02     Alex J Parson
6.0       21/11/05     J_Danob         12345
2         21/07/08     J_Danob
6.1.0     2021-05-04   Jack Danob
6.2.7     2021-08-17   J_Danob         98765
2.0.0     2021-01-01   Jack Danob
3         21/1/31      Danob
2.2.0     21-09-26     Jefri.Poor      12345

Thanks,
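Because the log formats vary so much, one pragmatic approach is a handful of tolerant rex extractions rather than a single pattern. A sketch that handles the sample lines above, though it should still be validated against the real data; the Author pattern in particular is a heuristic that deliberately stops at lowercase words (like "customer") and at the Date/IssueNo keywords:

<your base search>
| rex "(?i)ver(?:sion)?[\s.:\[]*(?<Version>\d+(?:\.\d+)*)"
| rex "(?<Date>\d{2,4}[-/]\d{1,2}[-/]\d{1,2})"
| rex "\b[Bb][Yy][:\s\[]*(?<Author>[A-Za-z][\w.]*(?: (?!Date\b|IssueNo\b)[A-Z][\w.]*)*)"
| rex "(?i)issueno[:\s]*(?<IssueNo>\d+)"
| table Version Date Author IssueNo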
Hey all, I got a really helpful response last time and now I'm back with another question. I have a search with the same sourcetype that I want to run multiple clauses against, to return different results in a table for comparison. Example:

sourcetype=xyz
| where (color == "red" OR color == "blue" OR color == "purple" OR color == "green") AND (crayon == "crayola" OR crayon == "prisma" OR crayon == "offBrand" OR crayon == "brandA")
| some stuff here that matches the search to a lookup
| stats count(name) as "All sets" by Type (say name and Type are the fields pulled from the lookup)
| where (color == "red" OR color == "blue") AND (crayon != "crayola" AND crayon != "offBrand")
| stats count(name) as "Set A" by Type
| where (color == "red" OR color == "blue") AND (crayon != "prisma" AND crayon != "brandA")
| stats count(name) as "Set B" by Type

The end result I want is this:

All sets   Set A   Set B
5          3       2

I know it's bad practice to use join and append. I also know the where clauses are supposed to be higher up; I'm just not sure how to achieve this. I can get the 'All sets' count just fine, of course, but after that nothing works. Any help for this newbie would be much appreciated. Thanks!
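One way to get all three columns without append or join is conditional aggregation inside a single stats call. A sketch, assuming your existing lookup step outputs the name and Type fields (the lookup line below is just a placeholder for whatever you already do); count(eval(...)) only counts rows where the expression returns a non-null value, so wrapping the condition in if(..., name, null()) mimics count(name) under that condition:

sourcetype=xyz
| where (color == "red" OR color == "blue" OR color == "purple" OR color == "green") AND (crayon == "crayola" OR crayon == "prisma" OR crayon == "offBrand" OR crayon == "brandA")
| lookup my_lookup name OUTPUT Type
| stats count(name) as "All sets",
        count(eval(if((color=="red" OR color=="blue") AND crayon!="crayola" AND crayon!="offBrand", name, null()))) as "Set A",
        count(eval(if((color=="red" OR color=="blue") AND crayon!="prisma" AND crayon!="brandA", name, null()))) as "Set B"
  by Type

Drop the final "by Type" if you just want a single row of totals like the example table.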
I am trying to use AWS Cognito to authenticate to a Splunk dashboard using SAML. There is a lot of information on configuring Cognito with other vendors, but not a lot of information on how to do this with Splunk. I have been trying to piece together settings from various documents I found during my research, but I don't know a lot about SAML.

I downloaded the Splunk metadata file and uploaded it in Cognito, but I get an error stating "We were unable to create identity provider: No IDPSSODescriptor found in metadata for protocol urn:oasis:names:tc:SAML:2.0:protocol and entity id splunkEntityId." I didn't see any IDPSSODescriptor in the uploaded file, which leads me to believe this may be incompatible.

My Splunk SAML setting is as follows:

[saml]
entityId = urn:amazon:cognito:sp:<my cognito pool id>
fqdn = testdashboardlb-79456348.us-east-1.elb.amazonaws.com  <-- This is my load balancer
idpSLOUrl = https://testdashboard.auth.us-east-1.amazoncognito.com/saml2/logout
idpSSOUrl = https://testdashboard.auth.us-east-1.amazoncognito.com/saml2/idpresponse
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = urn:amazon:cognito:sp:<my cognito pool id>
lockRoleToFullDN = true
redirectAfterLogoutToUrl = testdash.xxxxxxxxx.com
redirectPort = 443
replicateCertificates = false
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
ssoBinding = HTTP-POST

[authentication]
authSettings = saml
authType = SAML

I can authenticate and enter my MFA token. After that, I receive an error "Required String parameter 'SAMLResponse' is not present." Any help is appreciated.
Hello, In the Monitoring Console Summary dashboard, under Deployment Metrics, there is an "Avg. Search Latency" indicator. I searched in the official documentation but I didn't find an extensive explanation. What does this metric show? Thanks a lot, Edoardo
Hello Splunk ninjas,

We all know about scheduled reports configured to use a schedule window: when they run delayed, they still gather data for the time range they would have covered if they had started on time. In short, the report searches over the time range it was originally scheduled to cover. But what happens when the search query uses the now() function, like many of the ESCU correlation searches?

Example: there is a query containing:

| where firstTimeSeen > relative_time(now(), "-1h")

The report is scheduled every hour (cron = 0 * * * *) using a search time range of earliest=-70min, latest=now. Schedule window = auto. It is a busy day, so our query is executed 40 minutes later than scheduled. As mentioned at the beginning, the time range used doesn't change; it's still :00 - :59 (previous hour). However, now() has this definition: "This function takes no arguments and returns the time that the search was started." The result set of the report is therefore different now.

Is this behavior flawed by design? Many of the ES/ESCU correlation searches use this kind of filtering (based on now()). How to solve this? No schedule window? No auto? Higher priority? Durable search? Real-time mode instead of continuous? Thanks for your educated answers.
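You're right that now() is evaluated at actual dispatch time, so a delayed run shifts the cutoff even though the event time range stays the same. One common workaround, sketched generically rather than for any particular ESCU search, is to anchor the filter to the search's own time window instead of the wall clock: addinfo adds info_min_time/info_max_time fields holding the search's earliest/latest boundaries (for a bounded scheduled window these are plain epoch values), so a delayed run still evaluates against the originally scheduled latest time:

... existing correlation search ...
| addinfo
| where firstTimeSeen > relative_time(info_max_time, "-1h")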
Hello all, I'm using a lookup table with a _time field to create a timechart, which works great. However, the lookup table has data for, say, 90 days and I don't always want the timechart to cover the full 90 days. How can I limit my timechart to 30 days from a lookup table that has 90 days' worth of data, without deleting the extra 60 days? The _time field is already in the format %y-%m-%d %H:%M. I've tried:

| inputlookup mylookupfile where earliest=-30d

Thank you!
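Time modifiers like earliest apply to events retrieved from an index, so the where clause on inputlookup won't honor them. A sketch of the usual workaround, assuming the _time column really is stored as text in the %y-%m-%d %H:%M format (the final timechart line is just a stand-in for whatever you already chart): convert the value to epoch with strptime and filter against relative_time:

| inputlookup mylookupfile
| eval _time=strptime(_time, "%y-%m-%d %H:%M")
| where _time >= relative_time(now(), "-30d@d")
| timechart span=1d count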
Hi Experts,

As part of a new initiative looking at SLO metrics, I have created the below query, which nicely counts the number of errors per day over a 30-day window and also shows an average level on the same graph, using an overlay for easy viewing.

earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| stats count by _time instance
| timechart span=1d max(count) by instance
| appendcols [search earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log" | rex field=source "temp(?<instance>.*?)\/" | stats count by _time instance | stats avg(count) AS 30d_average]
| filldown 30d_average

I would like to work out the percentage of good results (anything lower than the average value) and the percentage of bad results (above the average), and show them in a stats table for each instance. Help needed! Thanks in advance, Theo
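A sketch of one way to build that table without the appendcols subsearch. It assumes each instance's daily counts should be compared against that instance's own 30-day average; drop the "by instance" on eventstats to compare against a single overall average, which is what the appendcols computes today. Note also that timechart fills missing days with zero, which will pull the average down slightly compared to the original max(count) approach:

earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| timechart span=1d count by instance
| untable _time instance daily_count
| eventstats avg(daily_count) AS avg_30d by instance
| eval verdict=if(daily_count <= avg_30d, "good", "bad")
| stats count AS total_days, count(eval(if(verdict="good", 1, null()))) AS good_days, first(avg_30d) AS avg_30d by instance
| eval pct_good=round(100*good_days/total_days, 1), pct_bad=round(100-pct_good, 1)
| table instance, avg_30d, pct_good, pct_bad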