I pulled up the original events and also downloaded a different external document from our system to generate those logs. I trimmed it down from the 12 events to 3 and reduced the sanitizing to help out. In the dashboards, the DocumentTypeId is where I am starting because it identifies which module the file is located in within our application. The DocumentId is the SQL document ID number assigned to the file. Lastly, the DocumentFileTypeId identifies the file format. I'm also looking at leveraging the DB Connect add-on as an option and using the DocumentId for the search instead.

{
    Locking: null
    accessDate: 2025-03-21T16:37:14.8614186-06:00
    auditResultSets: null
    clientIPAddress: 255.255.255.255
    commandText: ref.DocumentFileTypeGetById
    commandType: 4
    module: Vendor.PRODUCT.BLL.DocumentManagement
    parameters: [
        { name: @RETURN_VALUE, value: 0 }
        { name: @DocumentFileTypeId, value: 7 }
    ]
    schema: ref
    serverHost: Webserver
    serverIPAddress: 255.255.255.255
    sourceSystem: WebSite
    storedProcedureName: DocumentFileTypeGetById
    traceInformation: [
        { class: Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument, method: Page_Load, type: Page }
        { class: Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager, method: Get, type: Manager }
    ]
    userId: UserNumber
    userName: Username
}
{
    Locking: null
    accessDate: 2025-03-21T16:37:14.8614186-06:00
    auditResultSets: null
    clientIPAddress: 255.255.255.255
    commandText: ref.DocumentFileTypeGetById
    commandType: 4
    module: Vendor.PRODUCT.BLL.DocumentManagement
    parameters: [
        { name: @RETURN_VALUE, value: 0 }
        { name: @DocumentFileTypeId, value: 7 }
    ]
    schema: ref
    serverHost: Webserver
    serverIPAddress: 255.255.255.255
    sourceSystem: WebSite
    storedProcedureName: DocumentFileTypeGetById
    traceInformation: [
        { class: Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument, method: Page_Load, type: Page }
        { class: Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager, method: Get, type: Manager }
    ]
    userId: UserNumber
    userName: Username
}
{
    Locking: null
    accessDate: 2025-03-21T16:37:14.8614186-06:00
    auditResultSets: null
    clientIPAddress: 255.255.255.255
    commandText: ref.DocumentAttributeGetByDocumentTypeId
    commandType: 4
    module: Vendor.PRODUCT.BLL.DocumentManagement
    parameters: [
        { name: @RETURN_VALUE, value: 0 }
        { name: @DocumentTypeId, value: 92 }
        { name: @IncludeInactive, value: false }
    ]
    schema: ref
    serverHost: Webserver
    serverIPAddress: 255.255.255.255
    sourceSystem: WebSite
    storedProcedureName: DocumentAttributeGetByDocumentTypeId
    traceInformation: [
        { class: Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument, method: Page_Load, type: Page }
        { class: Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager, method: Get, type: Manager }
    ]
    userId: UserNumber
    userName: Username
}
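For what it's worth, here is a rough SPL sketch of how a dashboard panel could pull those three IDs out of events shaped like the samples above. The index and sourcetype names are placeholders, and it assumes the events are searchable as JSON so spath yields parameters{}.name / parameters{}.value as aligned multivalue fields:

index=app_audit sourcetype=product:dbaudit ``` placeholder index/sourcetype names ```
| spath
| rename "parameters{}.name" as pname, "parameters{}.value" as pvalue
| eval DocumentTypeId=mvindex(pvalue, mvfind(pname, "@DocumentTypeId"))
| eval DocumentId=mvindex(pvalue, mvfind(pname, "@DocumentId"))
| eval DocumentFileTypeId=mvindex(pvalue, mvfind(pname, "@DocumentFileTypeId"))
| table accessDate userName storedProcedureName DocumentTypeId DocumentId DocumentFileTypeId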
thambisetty has a great search for this at: https://community.splunk.com/t5/Splunk-Search/How-to-find-dashboards-not-in-use-by-the-amount-of-days/m-p/418629 Here it is, modified for your use case (find dashboards not viewed in the last 60 days):

| rest /servicesNS/-/-/data/ui/views splunk_server=local f=id f=updated f=eai:acl ``` Produces all views present on the local search head ```
| table id, updated, eai:acl.removable, eai:acl.app ``` eai:acl.removable tells whether the dashboard can be deleted. removable=1 means it can be deleted; removable=0 means it may be a system dashboard ```
| rename eai:acl.* as *
| rex field=id ".*\/(?<dashboard>.*)$"
| table app dashboard updated removable
| join type=left dashboard app
[ search index=_audit earliest=-60d ```Change this earliest= value if you want a different value than 60 days``` action=search provenance="UI:Dashboard:*" sourcetype=audittrail savedsearch_name!=""
| stats earliest(_time) as earliest_time latest(_time) as latest_time by app provenance
| rex field=provenance ".*\:(?<dashboard>.*)$"
| table earliest_time latest_time app dashboard ```produces dashboards that are used in timerange given in earliest/global time range```]
| where isnull(latest_time) AND removable=1 ``` keep only removable dashboards with no recorded views in the time range, i.e. dashboards that are not viewed ```
| stats values(dashboard) as dashboard by app
@marathon-man wrote: "I guess setting local = true for the command (actually all the commands I guess) would do the trick here, as mentioned in another reply?"

Yes, but: if someone runs, say, index=foo | mylocalcommand, then Splunk will pull all events to the SH, including all of _raw and the fields - that's a lot of work. Therefore I'd include the recommendation (e.g. "only run after stats") in the docs together with setting local=true.
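For reference, a minimal commands.conf sketch of that setting; the stanza and script names below are placeholders for whatever the custom command is actually called:

# commands.conf, in the app that ships the custom command (hypothetical names)
[mylocalcommand]
filename = mylocalcommand.py
local = true        # run the command only on the search head; never distribute it to indexers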
@pdafale_avantor An add-on handles data ingestion and parsing and must be installed on Indexers or Heavy Forwarders. An App includes the dashboards, visualizations, and search-time configurations that allow you to interact with the data, and needs to be installed on Search Heads.

App: https://splunkbase.splunk.com/app/3884 - Install this app on the search head.
Add-on: https://splunkbase.splunk.com/app/3885 - This is the Indexer TA for the Corelight App.

Important: The TA for Corelight add-on is required on indexers or indexer clusters. If your Corelight sensors send data directly to a heavy forwarder, or to a Splunk Cloud Platform receiver that is a heavy forwarder, the TA for Corelight is also required on those instances. The add-on is not required on search heads or single-instance Splunk Enterprise environments.

I've installed the app in my test environment, and it includes several dashboards. Please refer to the image below for reference. Once your data is onboarded into Splunk with the correct sourcetypes, these dashboards will automatically populate with your data.
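If the dashboards stay empty after installation, a quick sanity check is to confirm the data actually arrived with the sourcetypes the app expects. The sourcetype pattern below is an assumption - check the app's documentation for the exact names your inputs use:

| tstats count where index=* sourcetype=corelight* by index, sourcetype
| sort - count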
"I suppose the command should be defined as a centralized streaming command instead of a distributed one - the local setting in commands.conf - see https://docs.splunk.com/Documentation/Splunk/latest/Admin/Commandsconf"

Thanks, I'll give this a try. I guess I can turn replicate off for the collections while I'm at it.
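If you do turn replication off, it's a one-line change per collection; the collection name below is a placeholder:

# collections.conf, in the app that owns the KV Store collection (hypothetical name)
[my_kvstore_collection]
replicate = false    # keep the data only in the search head KV Store; no CSV copy is pushed to the indexers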
"Is your command actively querying the collection? If so, replicate=true won't help you. replicate=true will push the collection's content to the indexers *as CSV*. The KV Store on the indexers (if even running) won't know the collection or its content."

Ah, that explains a lot! I'm surprised and doubtful that replicate=true ever worked for someone running Splunk Enterprise on-prem. They claimed it did, but I never tested it myself.

"Easiest fix would be to only use the command on the SHs, e.g. after a |stats, |tstats, etc. - or if need be |localop."

I guess setting local = true for the command (actually all the commands, I guess) would do the trick here, as mentioned in another reply?
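A sketch of the "only after a transforming command" pattern, with mylocalcommand standing in for the actual custom command:

index=foo sourcetype=bar
| stats count by user        ``` stats collects its results on the search head ```
| mylocalcommand             ``` so the KV Store query now runs only on the search head ```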
@gcusello "if the problem is the storage" -- Yes, the problem is storage: we have 6.9 TB on each of our 6 indexers. "you could change the dimension of the index where these logs are stored so they will be deleted more frequently and you will not use all the disk space" - How do I do this? Please explain more; sorry, we have volumes configured in our environment.
@python You can use this; is_scheduled=0 filters for unscheduled searches.

| rest /services/saved/searches | where is_scheduled=0

To list all saved searches and alerts that are not scheduled:

| rest /services/saved/searches | search is_scheduled=0 alert_type=* disabled=0 | table title, qualifiedSearch, alert_type, is_scheduled, disabled

| rest /services/saved/searches | where is_scheduled=0 | table title, description, search, eai:acl.owner, eai:acl.app
In addition to adding storage, consider increasing the number of indexers. Unless the indexers are very over-powered, you probably will need more of them to ingest double the amount of data.
Hi @Karthikeya, if the question is about the license exceedings, you don't have a problem as long as you exceed fewer than 45 times in 60 days. If the problem is the storage, you could change the dimension of the index where these logs are stored so they will be deleted more frequently and you will not use all the disk space. You could also raise this max dimension while you have excessive data ingestion and then restore the normal parameter at the end; anyway, the easiest method is to configure the max dimension for your indexes. Ciao. Giuseppe
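To make that concrete: the "dimension" of an index is its maximum size, set in indexes.conf. A sketch with placeholder index and volume names (your volume names will differ, since you already have volumes configured) and example values only:

# indexes.conf (placeholder names and example values; size to your retention target)
[my_app_index]
homePath   = volume:hot_volume/my_app_index/db
coldPath   = volume:cold_volume/my_app_index/colddb
thawedPath = $SPLUNK_DB/my_app_index/thaweddb
maxTotalDataSizeMB = 500000        # cap the index at ~500 GB; the oldest buckets roll to frozen (deleted by default) once the cap is hit
frozenTimePeriodInSecs = 5184000   # or freeze data older than 60 days, whichever limit is reached first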
I have been using the Splunk Add-on for Salesforce for a while now, but I want to know if anyone else is using it and has noticed the number of events being ingested decrease. When I look back to December, I can see that Splunk would ingest multiple UserLicense events per day, but now it's one event every 4 days.
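For anyone who wants to compare, a simple way to chart the ingest rate of those events over time; the index and sourcetype names below are assumptions based on the add-on's usual sfdc:<object> naming, so substitute whatever your input actually writes:

index=salesforce sourcetype=sfdc:userlicense earliest=-120d
| timechart span=1d count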
Thank you. For some reason I am still only able to filter for type accounting. At this point, I am wondering whether it is an issue with Splunk or whether ISE does not send this information as part of syslog. Regards, Martin