All Posts


Did you find a solution to your issue? I tried testing on my dev server and I'm getting exactly the same set of errors as you. The other suggestions I found didn't work.
./splunk cmd mongod -version
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libssl.so.10: no version information available (required by mongod)
db version v7.0.14
Build Info: { "version": "7.0.14", "gitVersion": "ce59cfc6a3c5e5c067dca0d30697edd68d4f5188", "openSSLVersion": "OpenSSL 1.0.2zk-fips 3 Sep 2024", "modules": [ "enterprise" ], "allocator": "tcmalloc", "environment": { "distmod": "rhel70", "distarch": "x86_64", "target_arch": "x86_64" } }

Why am I getting these MongoDB errors?

mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libssl.so.10: no version information available (required by mongod)
OK, summing up what's already been said and then adding some more. The amount of data you're receiving affects several things:

1) Licensing. While it is indeed true what @gcusello pointed at - you can exceed your license for some time - this is meant for unforeseen, unusual situations. You should not rely on constantly exceeding your license. Even if it does technically work (and judging by your license size, your license is most probably a non-enforcing one, which means it will only generate a warning), that's not what you bought. And any contact with Splunk (be it a support case or a request for a license extension) might end up with uncomfortable questions about your license size and real usage. Of course, if this is something that happens just once in a while, that's OK. And BTW, if you exceed your ingestion limit it's searching that gets blocked with an enforcing license, not indexing - you will not (contrary to some competitors' solutions) lose your data.

2) Storage - this is kind of obvious. The more data you're ingesting, the more storage you need to hold it for a given retention period. Splunk rolls buckets from cold to frozen (which by default means deleting the data) based on a size limit or an age limit, whichever is hit first. That means that if you don't have enough space allocated and configured for your indexes, even if you are able to ingest that additional amount of data, it will not be held long enough because it will get deleted due to lack of space. So instead of holding data for - let's say - the last two weeks, you'll have only two days of data because the rest will have been pushed out of the index.

3) Processing power. There are guidelines for sizing Splunk environments. Of course, real-life performance may differ from the rule of thumb for generalized cases, but your cluster still seems relatively small even for the amount of data you're receiving now (depending on how evenly the ingestion is spread across your sites, it might already be hugely undersized), not to mention the additional data you'll be receiving normally, and definitely not adding the DDoS data. If you overstress the indexers you will clog your pipelines. That will create pushback because the forwarders won't be able to forward their data to the indexers, so they might stop getting/receiving data from their sources. It's only half-bad if the sources can be "paused" and queried later for the missing data, so you'll only cause lag. But if you have "pushing" sources (like syslog), you'll end up losing data.

So licensing is the least of your problems.
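A quick way to sanity-check how close you already run to your daily license limit is the license usage log. This is only a rough sketch (it assumes the license manager's _internal logs are searchable from where you run it; adjust the time range to taste):

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) AS bytes
| eval GB = round(bytes/1024/1024/1024, 2)
| fields _time GB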
We need more information.
I pulled up the original events and then also downloaded a different external document from our system to generate those logs. I trimmed it down from the 12 events to 3 and reduced the sanitizing to help out. In the dashboards, the DocumentTypeId is where I am starting, because that identifies which module the file is located in within our application. The DocumentId is the SQL document ID number assigned to the file. Lastly, the DocumentFileTypeId identifies the file format. I'm also looking at leveraging the DBConnect add-on as an option, using the DocumentId for the search instead.

{
  Locking: null
  accessDate: 2025-03-21T16:37:14.8614186-06:00
  auditResultSets: null
  clientIPAddress: 255.255.255.255
  commandText: ref.DocumentFileTypeGetById
  commandType: 4
  module: Vendor.PRODUCT.BLL.DocumentManagement
  parameters: [
    { name: @RETURN_VALUE, value: 0 }
    { name: @DocumentFileTypeId, value: 7 }
  ]
  schema: ref
  serverHost: Webserver
  serverIPAddress: 255.255.255.255
  sourceSystem: WebSite
  storedProcedureName: DocumentFileTypeGetById
  traceInformation: [
    { class: Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument, method: Page_Load, type: Page }
    { class: Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager, method: Get, type: Manager }
  ]
  userId: UserNumber
  userName: Username
}
{
  Locking: null
  accessDate: 2025-03-21T16:37:14.8614186-06:00
  auditResultSets: null
  clientIPAddress: 255.255.255.255
  commandText: ref.DocumentFileTypeGetById
  commandType: 4
  module: Vendor.PRODUCT.BLL.DocumentManagement
  parameters: [
    { name: @RETURN_VALUE, value: 0 }
    { name: @DocumentFileTypeId, value: 7 }
  ]
  schema: ref
  serverHost: Webserver
  serverIPAddress: 255.255.255.255
  sourceSystem: WebSite
  storedProcedureName: DocumentFileTypeGetById
  traceInformation: [
    { class: Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument, method: Page_Load, type: Page }
    { class: Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager, method: Get, type: Manager }
  ]
  userId: UserNumber
  userName: Username
}
{
  Locking: null
  accessDate: 2025-03-21T16:37:14.8614186-06:00
  auditResultSets: null
  clientIPAddress: 255.255.255.255
  commandText: ref.DocumentAttributeGetByDocumentTypeId
  commandType: 4
  module: Vendor.PRODUCT.BLL.DocumentManagement
  parameters: [
    { name: @RETURN_VALUE, value: 0 }
    { name: @DocumentTypeId, value: 92 }
    { name: @IncludeInactive, value: false }
  ]
  schema: ref
  serverHost: Webserver
  serverIPAddress: 255.255.255.255
  sourceSystem: WebSite
  storedProcedureName: DocumentAttributeGetByDocumentTypeId
  traceInformation: [
    { class: Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument, method: Page_Load, type: Page }
    { class: Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager, method: Get, type: Manager }
  ]
  userId: UserNumber
  userName: Username
}
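If it helps with the dashboard side, here is a minimal search sketch for pulling those fields out at search time - the index and sourcetype are placeholders for wherever these events land, and param_name/param_value come back as multivalue fields when a call has several parameters:

index=<your_index> sourcetype=<your_sourcetype> "DocumentAttributeGetByDocumentTypeId"
| spath
| rename "parameters{}.name" AS param_name, "parameters{}.value" AS param_value
| table _time userName module storedProcedureName param_name param_value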
thambisetty has a great search for this at: https://community.splunk.com/t5/Splunk-Search/How-to-find-dashboards-not-in-use-by-the-amount-of-days/m-p/418629

Here it is, modified for your use case (find dashboards not viewed in the last 60 days):

| rest /servicesNS/-/-/data/ui/views splunk_server=local f=id f=updated f=eai:acl ``` Produces all views that are present in local searchhead ```
| table id,updated,eai:acl.removable, eai:acl.app ```eai:acl.removable tells whether the dashboard can be deleted or not. removable=1 means can be deleted. removable=0 means could be system dashboard```
| rename eai:acl.* as *
| rex field=id ".*\/(?<dashboard>.*)$"
| table app dashboard updated removable
| join type=left dashboard app
    [ search index=_audit earliest=-60d ```Change this earliest= value if you want a different value than 60 days``` action=search provenance="UI:Dashboard:*" sourcetype=audittrail savedsearch_name!=""
    | stats earliest(_time) as earliest_time latest(_time) as latest_time by app provenance
    | rex field=provenance ".*\:(?<dashboard>.*)$"
    | table earliest_time latest_time app dashboard ```produces dashboards that are used in timerange given in earliest/global time range``` ]
| where removable=1 ``` condition to return only dashboards that are not viewed ```
| stats values(dashboard) as dashboard by app
Very helpful. Was able to adjust this for our environment to thaw 18,000+ buckets.
Thanks @isoutamo, this was great and useful input - much appreciated.
Hi, how can I query all dashboards with no access in the last 60 days?
Thanks a lot. 
@marathon-man wrote:
"I guess setting local = true for the command (actually all the commands, I guess) would do the trick here, as mentioned in another reply?"

Yes, but: if someone runs, say, index=foo | mylocalcommand, then Splunk will pull all events to the SH, including all _raw and fields - that's a lot of work. Therefore I'd include the recommendations (e.g. "only run after stats") in the docs together with setting local=true.
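For reference, a minimal commands.conf sketch of that setting - the stanza name and script filename are just stand-ins for the actual custom command:

# commands.conf in the custom command's app (sketch; "mylocalcommand" is a placeholder)
[mylocalcommand]
filename = mylocalcommand.py
# run the command only on the search head, never on the remote peers
local = true

The documented usage pattern could then be something like: index=foo | stats count BY host | mylocalcommand, so only aggregated rows ever reach the command.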
@pdafale_avantor An add-on handles data ingestion and parsing and must be installed on indexers or heavy forwarders. An app includes the dashboards, visualizations, and search-time configurations that allow you to interact with the data, and needs to be installed on search heads.

App: https://splunkbase.splunk.com/app/3884 - Install this app on the search head.

Add-on: https://splunkbase.splunk.com/app/3885 - This is the indexer TA for the Corelight App.

Important: The TA for Corelight add-on is required on indexers or indexer clusters. If your Corelight sensors send data directly to a heavy forwarder, or to a Splunk Cloud Platform receiver that is a heavy forwarder, the TA for Corelight is also required on those instances. The add-on is not required on search heads or single-instance Splunk Enterprise environments.

I've installed the app in my test environment, and it includes several dashboards. Please refer to the image below for your reference. Once your data is onboarded into Splunk with the correct sourcetypes, these dashboards will automatically populate with your data.
"I suppose the command should be defined as a centralized streaming command instead of a distributed one - the local setting in commands.conf - see https://docs.splunk.com/Documentation/Splunk/latest/Admin/Commandsconf"

Thanks, I'll give this a try. I guess I can turn replicate off for the collections while I'm at it.
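Turning replication off is a one-line change per collection stanza - a sketch, with a placeholder collection name:

# collections.conf (sketch; "my_collection" is a placeholder)
[my_collection]
# do not replicate this KV Store collection's contents to the indexer tier
replicate = false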
"Is your command actively querying the collection? If so, replicate=true won't help you. replicate=true will push the collection's content to the indexers *as CSV*. The KV Store on the indexers (if even running) won't know the collection or its content."

Ah, that explains a lot! I'm surprised and doubtful that replicate=true ever worked for someone running Splunk Enterprise on-prem. They claimed it did, but I never tested it myself.

"Easiest fix would be to only use the command on the SHs, e.g. after a |stats, |tstats, etc. - or if need be |localop."

I guess setting local = true for the command (actually all the commands, I guess) would do the trick here, as mentioned in another reply?
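In other words, the usage pattern would look roughly like this (mykvcommand stands in for the actual custom command):

index=foo sourcetype=bar
| stats count BY host
| mykvcommand

Or, if the command needs raw events, force the rest of the pipeline onto the search head with | localop right before it.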
@gcusello

"if the problem is the storage" -- Yes, the problem is the storage: we have 6.9 TB in each of our 6 indexers.

"you could change the dimension of the index where these logs are stored so they will be deleted more frequently and you will not use all the disk space" - How do I do this? Please explain more; sorry, we have volumes configured in our environment.
@python You can use this; is_scheduled=0 filters for unscheduled searches.

| rest /services/saved/searches
| where is_scheduled=0

To list all saved searches and alerts that are not scheduled:

| rest /services/saved/searches
| search is_scheduled=0 alert_type=* disabled=0
| table title, qualifiedSearch, alert_type, is_scheduled, disabled

| rest /services/saved/searches
| where is_scheduled=0
| table title, description, search, eai:acl.owner, eai:acl.app
In addition to adding storage, consider increasing the number of indexers.  Unless the indexers are very over-powered, you probably will need more of them to ingest double the amount of data.
Hi @osh55, please share your search. Anyway, you have to adapt the eval commands to the different kinds of logs. Ciao. Giuseppe
Hi @Karthikeya, if the question is about the license exceedings, you don't have problems exceeding fewer than 45 times in 60 days.

If the problem is the storage, you could change the dimension of the index where these logs are stored so they will be deleted more frequently and you will not use all the disk space. You could also change this max dimension when you have excessive data ingestion and then restore the normal parameter afterwards; anyway, the easiest method is to configure the max dimension for your indexes.

Ciao. Giuseppe
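To make that concrete: the "dimension" is set per index in indexes.conf. A minimal sketch, assuming the noisy logs go to an index called fw_logs (a placeholder) and that the sizes are adjusted to fit your volumes:

# indexes.conf (sketch; index name and sizes are placeholders)
[fw_logs]
# cap the total index size; the oldest buckets roll to frozen (deleted by default) once the cap is hit
maxTotalDataSizeMB = 500000
# also freeze buckets older than 30 days, whichever limit is reached first
frozenTimePeriodInSecs = 2592000

Since you have volumes configured, the volume-level maxVolumeDataSizeMB setting in the same file caps the combined size of all indexes on that volume as well.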