I've imported 3 different GED files from Ancestry, each an improvement on the previous import, and now I have 3 Aliases for every member in my Tree. Since there are 1,200 individuals, manually going into each one and deleting the extra alias is quite a task.
In Family Historian I have attached 500-600 pictures, so if I start with a clean import I'll have to re-attach all the pictures again. The pictures are NOT in the GED file.
Is there a "batch Alias Delete", meaning a way to delete all aliases with one instruction? And when you merge the names with a new GED file, will it also bring in the proper alias from the GED, if it exists?
Alias/Alternate Names Clean-up
- tatewise
- Megastar
- Posts: 27088
- Joined: 25 May 2010 11:00
- Family Historian: V7
- Location: Torbay, Devon, UK
Re: Alias/Alternate Names Clean-up
What exactly do you mean by "3 Aliases for every member in my Tree"?
Do you mean every person has 3 separate Individual records with the same or similar Names?
If so, that implies that when you 'imported' the new GEDCOM you did not apply the File Merge options suitably.
Exactly what FH commands did you use?
Usually File > Merge/Compare File will do a good job of automatically merging matching Individual and other records.
See the glossary entry Merge/Compare File for detailed advice.
The most important tip is to create a new Project from each new GEDCOM and clean up the Ancestry 'junk'.
Then it will merge much more satisfactorily, and all Media will remain intact.
I strongly advise you to revert to a recent GEDCOM Backup or Snapshot so you can start again.
Trying to clean up the triple alias data will be extremely tedious and error-prone.
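To make the symptom concrete: if the merge options weren't applied, the raw GEDCOM for one person can end up looking something like this (the record id and names here are invented):

```
0 @I1@ INDI
1 NAME John /Smith/
1 NAME Johnny /Smith/
2 TYPE aka
1 NAME Johnny /Smith/
2 TYPE aka
1 NAME Johnny /Smith/
2 TYPE aka
```

Merge/Compare File is what should collapse those repeated Name structures back down to one per person.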
Mike Tate ~ researching the Tate and Scott family history ~ tatewise ancestry
Re: Alias/Alternate Names Clean-up
I'll take a look there.
Re: Alias/Alternate Names Clean-up
BTW: 'Alias' has a special meaning in FH/GEDCOM, and I wonder if what you call Aliases are actually Alternate Names, found via the Property Box 'more (+)...' link to the right of the Name box?
Those Alternate Names should have been dealt with during the Merge process, either automatically by FH, or manually by yourself.
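For the record, the two things look different in the raw GEDCOM. An Alternate Name is an extra NAME structure on the same Individual, usually flagged TYPE aka, whereas a true GEDCOM Alias (the ALIA tag) is a link from one Individual record to another. An invented example:

```
0 @I1@ INDI
1 NAME John /Smith/
1 NAME Jack /Smith/
2 TYPE aka
1 ALIA @I2@
```

Here the second NAME structure is the Alternate Name shown under 'more (+)...', while the ALIA line points at a separate Individual record @I2@.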
Mike Tate ~ researching the Tate and Scott family history ~ tatewise ancestry
Re: Alias/Alternate Names Clean-up
OK, yes, a technicality: I associate 'alias' in the report with the A.K.A. or 'also known as', and that AKA is being duplicated or triplicated in step with every GED I import.
And as you say, I need to follow the more stringent importing protocol you described above (it isn't viewable as I write this, so I can't repeat your exact wording verbatim).
But I understand. I'll try the various methods until I find the one that works, maybe using a tiny subset of a file to find where I can eliminate the duplication. Software is repeatable and scalable and will always produce the same results with accurate data of any size, so with a small subset I can test the results more easily and faster; and, as they say, GIGO.
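On the small-subset idea: one cheap way to verify a test merge or clean-up did what you wanted is to count, before and after, how many individuals carry more than one Name structure. A rough stdlib-Python sketch; the file names are whatever you pass on the command line:

```python
# Rough sanity check for the small-subset test: count how many INDI records
# carry more than one NAME structure, before and after a cleanup or merge.
# Usage: python check_names.py before.ged after.ged
import sys
from collections import Counter

def name_counts(path):
    counts, current = Counter(), None
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "0":
                # a level-0 line starts a record; track it only if it's an INDI
                current = parts[1] if parts[-1] == "INDI" else None
            elif parts[0] == "1" and current and len(parts) > 1 and parts[1] == "NAME":
                counts[current] += 1
    return counts

for path in sys.argv[1:]:
    counts = name_counts(path)
    multi = sum(1 for n in counts.values() if n > 1)
    print(f"{path}: {len(counts)} individuals, {multi} with multiple NAME structures")
```

If the "after" file still shows lots of multi-NAME individuals, the merge options need another look before scaling up to the full 1,200.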