
Posted: Mon Oct 30, 2006 6:01 pm
by Spazz
I was looking at the merge drag-and-drop tracks option "Delete Dropped Tracks!"

Anyways, if that's the way it works it should be perfect. I'll download it and give it a try later tonight.

Posted: Mon Oct 30, 2006 7:02 pm
by Bex
Delete dropped tracks is optional! :wink:

Re: Advanced Duplicate Find & Fix [Script] (2006-10-30)

Posted: Mon Oct 30, 2006 8:12 pm
by Al_G
Bex wrote: I'd really like some feedback on this script!
- Bugs
- Improvements
- Suggestions
And of course if you like it! :D
I'll try this script when I get home, but I have one suggestion that comes from keeping a set of FLAC files alongside corresponding MP3 files for portable devices. If possible I'd like the option to either look at all files, or just a particular type of file. I can think of reasons for wanting both available.

Posted: Mon Oct 30, 2006 8:36 pm
by Randall Grogan
2,897 duplicates. Looks like I now have a way to spend my nights and weekends.

Posted: Tue Oct 31, 2006 3:54 am
by trixmoto
Yes, my Duplicate Contents node is also empty. What am I missing?

Posted: Tue Oct 31, 2006 5:25 am
by Bex
@Al_G,
Thanks for your input, but that can't be done with scripts. This script only makes it easier to determine what to keep and what to delete, and to merge play history if you want.

@Randall Grogan,
2,897 :o I believe we have a new leader :D

@ trixmoto,
It seems that you haven't scanned your tracks for duplicates (calculated hash values). Select:
Tools->Options->Library->Analyze tracks for duplicates (take extra time)
and rescan your whole collection. Also make sure that "Only for files with changed timestamp or size" is not selected.
If you have unsynchronised tags you must either deselect "Update track info from tags when rescanning files" or synchronize your tags first; otherwise you will overwrite your database values.

Posted: Tue Oct 31, 2006 12:35 pm
by Al_G
Bex wrote:@Al_G,
Thanks for your input, but that can't be done with scripts. This script only makes it easier to determine what to keep and what to delete, and to merge play history if you want.
Thanks for looking, Bex. As it turns out I don't have anything in the Duplicate Content node so the script, which I installed to try, has nothing to do anyway. I re-scanned all the tracks with the 'Analyze track' setting enabled and the timestamp check disabled, to make sure.

Posted: Wed Nov 01, 2006 2:54 pm
by Bex
Script is updated
Ver 1.1 (2006-11-01)
- Added a Custom Duplicate Search Node 8)

[Screenshots of the Custom Duplicate Search node]

Custom Duplicate Search works like this:
- You can search for duplicates with any combination of: Artist, Title, Album, FileName, Size and Length
- The result is displayed exactly as in the "Same Content" node, but sorted by Artist instead
- You don't (yet) have an "OK-list" in this node; it might not be implemented, due to unclear logic
- Possibility to merge PlayHistory, PlayCount and DateAdded with drag & drop of duplicate tracks
- Searching on FileName alone is a bit slow, especially on big collections
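Conceptually, a search like this boils down to a single SQL GROUP BY over whichever columns the user ticks. The sketch below is a hypothetical Python/SQLite illustration (table and column names are invented for the example; the real MediaMonkey schema may differ):

```python
import sqlite3

# Hypothetical illustration: a "custom duplicate search" over any field
# combination is just GROUP BY on the chosen columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Songs (
    Artist TEXT, Title TEXT, Album TEXT,
    FileName TEXT, Size INTEGER, Length INTEGER)""")
conn.executemany(
    "INSERT INTO Songs VALUES (?, ?, ?, ?, ?, ?)",
    [("Queen", "Under Pressure", "Hot Space",     "pressure.mp3",  4100, 246),
     ("Queen", "Under Pressure", "Greatest Hits", "pressure2.mp3", 4100, 246),
     ("Queen", "Bicycle Race",   "Jazz",          "bicycle.mp3",   3000, 183)])

def find_duplicates(fields):
    """Return value-groups that occur more than once across `fields`."""
    cols = ", ".join(fields)
    sql = (f"SELECT {cols}, COUNT(*) FROM Songs "
           f"GROUP BY {cols} HAVING COUNT(*) > 1 ORDER BY {fields[0]}")
    return conn.execute(sql).fetchall()

# Duplicates by Artist + Title + Size + Length (album and filename differ):
print(find_duplicates(["Artist", "Title", "Size", "Length"]))
# [('Queen', 'Under Pressure', 4100, 246, 2)]
```

In the script, rows grouped this way would then be listed under the node, sorted by Artist as described above.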

Note!
It was not easy, but I finally succeeded in implementing this feature. :)
However, for performance reasons the script creates a somewhat big temp table (which is deleted on shutdown). If you do a lot of custom searches (each one repopulates the temp table), it will increase the size of your database and eventually make MM slow.
This is very easy to cure, though: simply compact the database from within MM (File->Maintain Library->Compact Database), especially after you have made a lot of searches!

Compacting the database is a very good thing to do anyway, and should be done at least once a month (or once a week, depending on how much work you do in MM).
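To see why compacting helps, here is a small Python/SQLite sketch (MediaMonkey's database engine and its "Compact Database" command may differ in detail, but the principle is the same): dropping a big temp table frees pages inside the file without shrinking it, while compacting rewrites the file and reclaims the space.

```python
import os
import sqlite3
import tempfile

# Illustration only, not MediaMonkey's actual code. A big temp table
# inflates the database file; dropping it does not shrink the file,
# but compacting (SQLite's VACUUM) does.
path = os.path.join(tempfile.mkdtemp(), "library.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE tmp_dupes (data TEXT)")
conn.executemany("INSERT INTO tmp_dupes VALUES (?)",
                 [("x" * 1000,) for _ in range(5000)])
conn.commit()
size_full = os.path.getsize(path)

conn.execute("DROP TABLE tmp_dupes")   # like the script deleting its temp table
conn.commit()
size_dropped = os.path.getsize(path)   # file stays (almost) as big

conn.execute("VACUUM")                 # like File->Maintain Library->Compact Database
size_compact = os.path.getsize(path)   # space actually reclaimed
print(size_full, size_dropped, size_compact)
```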


Bug reports, suggestions and comments are as always welcome!

Enjoy!
/Bex

Posted: Thu Nov 02, 2006 11:49 am
by Bex
Yet another update!
Ver 1.2 (2006-11-02)
- Added a "Folders with Tracks Not Analyzed" Node

[Screenshots of the "Folders with Tracks Not Analyzed" node]

Update the script to see how it works! :wink:


Bug reports, suggestions and comments are as always welcome!

Enjoy!
/Bex

Posted: Thu Nov 02, 2006 12:11 pm
by Mizery_Made
Nice touch :)

Looks like what I have been looking for.

Posted: Sun Nov 05, 2006 1:58 pm
by loufeliz
Been using MMonkey for a few months, will be upgrading to Gold.

I initially started ripping my CD collection at a 128k bitrate, then 196k, and now 256k. I have duplicates of the same songs at different bitrates and want to keep the higher-bitrate copy and mark the lower-bitrate duplicates. I'll install it and give it a whirl. If it doesn't do this now, it would be nice to set up a ranking order to decide which track to keep when dups are found:

i.e. bitrate, date, size etc....

Posted: Sun Nov 05, 2006 2:16 pm
by Bex
I'm afraid that deciding what to keep and what to delete is a manual process. But I understand what you mean. What I could do is add extra nodes which group the result by Album Artist, Album Name, Folder or something else. But I haven't yet figured out what's best or how to do it.

If anyone has a suggestion on this matter, please let me know!

Posted: Sun Nov 05, 2006 10:02 pm
by Teknojnky
An idea for future consideration:

-same content, by artist (then by the current node layout)

This would make it easier to view all tracks by the same artists that are dupes.

Posted: Sun Nov 05, 2006 10:09 pm
by Mizery_Made
Teknojnky wrote:An idea for future consideration:

-same content, by artist (then by the current node layout)

This would make it easier to view all tracks by the same artists that are dupes.
There would probably be a problem with that though, considering the dups are compared by "Hash" (or a similar method) and not by the tag data. Therefore, one of the dups might be "Artist 1" ft. "Artist 2" while the other is "Artist 2" ft. "Artist 1" (if the same song was released on two different albums, one per artist), or the tagging might simply be wrong, e.g. "Artist" (dup 1) vs. "Artst" (dup 2).

I could be wrong though; if so, stick a sock in my mouth, lol.
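The point above can be sketched quickly (hypothetical data; the real script hashes actual file content): grouping by a content hash puts differently tagged copies of the same recording into one group, so there is no single artist name to file the group under.

```python
import hashlib
from collections import defaultdict

# Hypothetical sketch: the byte strings stand in for decoded audio.
# Two copies of the same song carry different artist tags, yet they
# hash identically and end up in the same duplicate group.
tracks = [
    {"artist": "Artist 1 ft. Artist 2", "audio": b"same-audio-data"},
    {"artist": "Artist 2 ft. Artist 1", "audio": b"same-audio-data"},
    {"artist": "Artst",                 "audio": b"other-audio-data"},
]

groups = defaultdict(list)
for t in tracks:
    groups[hashlib.md5(t["audio"]).hexdigest()].append(t["artist"])

duplicates = [artists for artists in groups.values() if len(artists) > 1]
print(duplicates)
# [['Artist 1 ft. Artist 2', 'Artist 2 ft. Artist 1']]
```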

Posted: Mon Nov 06, 2006 12:13 am
by Teknojnky
Yea, I'm not sure if it would be easily implemented; I'd just love to see all the dupe tracks grouped by artist, then expand out from there.