This depends on your base search and your definition of duplicates.
If your base search creates or retains the _raw field, the events will be duplicated: the source will be set to a randomly named stash file, the sourcetype will be set to stash, and the internal _indextime field will differ as well.
If your base search does not produce a _raw field, _raw will be constructed from the fields in your search results. A timestamp is added by default, as is an info_search_time field, which is different for every execution.
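To make this concrete, here is a minimal sketch of the second case (the index name my_summary and the stats fields are illustrative, not from your setup). The first search writes aggregated results to a summary index with collect; the second inspects what actually landed there:

```
| tstats count where index=_internal by host
| collect index=my_summary
```

```
index=my_summary
| table _time source sourcetype info_search_time
```

You should see sourcetype=stash, a stash file path in source, and a distinct info_search_time per run of the collecting search.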
The documentation is quite good in my opinion:
My definition of duplicates would be two events in the summary index that represent the exact same data set from the source index. For example, if the report that populates the summary index is scheduled to run once a day, but someone also runs it manually on Tuesday, will Tuesday's data in the summary index be doubled compared to Monday's, when the report only ran as scheduled (no additional manual run)?
Yes, the data will be doubled. You can identify the extra events and mark them as deleted, but the collect command has no built-in 'intelligence' to detect that situation.
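One hedged way to clean up after an accidental manual run, under the assumption that your summary events carry the info_search_time field described above (index and search_name values here are illustrative): first find the runs that wrote data for the affected day,

```
index=my_summary search_name="Daily host report" earliest=-1d@d latest=@d
| stats count by info_search_time
```

then, once you know which info_search_time belongs to the unwanted run, a user with the can_delete capability can mark those events as deleted:

```
index=my_summary search_name="Daily host report" info_search_time=<value from previous search>
| delete
```

Note that delete only hides the events from search results; it does not reclaim disk space.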