From your screenshot, it looks like you're reading in the data correctly on both computers as UTF-8. The Visual Studio debugger is showing it correctly on both, right?
Anyway, the UTF-8 string is not "junk characters"; it's UTF-8-encoded text, where each Japanese character takes multiple bytes (typically three). It only looks like "junk" because you (or Notepad) are trying to interpret it as some other encoding, probably ASCII or UTF-16, which is not going to work. You probably just need to tell Notepad that it's UTF-8. (Although I would use a more savvy editor such as Notepad2, or just use Visual Studio.)
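For instance, here's a minimal C++ sketch (the string literal is just an illustrative Japanese word) that dumps the raw UTF-8 bytes; those byte values are exactly what a non-UTF-8-aware editor chokes on:

    #include <cstdio>
    #include <string>

    int main() {
        // "日本語" encoded as UTF-8: each of the three characters takes three bytes.
        const std::string utf8 = "\xE6\x97\xA5\xE6\x9C\xAC\xE8\xAA\x9E";

        // Dumping the raw bytes shows why an editor expecting ASCII or
        // Shift-JIS renders this as "junk": the values only make sense
        // when decoded as UTF-8.
        for (unsigned char b : utf8)
            std::printf("%02X ", b);
        std::printf("(%zu bytes for 3 characters)\n", utf8.size());
    }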
My other thought is, why are you using PDGetHostEncoding? That's probably set to something like Shift-JIS on the Japanese machine. And by definition, it's going to be different on different machines.
If you want to "get Unicode which supports in any OS or culture setting," then stop messing around with "host encodings" and just get Unicode, as either UTF-8 or UTF-16. Unicode doesn't rely on any "culture setting" or 8-bit "host encoding"; it's Unicode, the same code points anywhere in the world, whether you store them as UTF-8, UTF-16, or UTF-32.
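For example, if you need to hand UTF-16 to a Windows API or dialog, you can convert your UTF-8 string with MultiByteToWideChar. A minimal sketch (the helper name is mine, just for illustration):

    #include <windows.h>
    #include <string>

    // Convert a UTF-8 std::string to a UTF-16 std::wstring using the Win32 API.
    std::wstring Utf8ToUtf16(const std::string& utf8)
    {
        if (utf8.empty()) return std::wstring();

        // First call asks how many UTF-16 code units are needed.
        int len = MultiByteToWideChar(CP_UTF8, 0,
                                      utf8.data(), (int)utf8.size(),
                                      nullptr, 0);
        std::wstring utf16(len, L'\0');

        // Second call performs the actual conversion.
        MultiByteToWideChar(CP_UTF8, 0,
                            utf8.data(), (int)utf8.size(),
                            &utf16[0], len);
        return utf16;
    }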
I would simply either use the UTF-8 and make sure that whatever program or dialog you're passing the data to knows it's UTF-8, or use UTF-16. If you're saving to a file, make sure to put the correct byte order mark (BOM) on it, for UTF-8, UTF-16BE, or UTF-16LE. The days of relying on 8-bit encodings are in the past; Unicode is the way forward.
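As a sketch of the file case (the helper and file name are just illustrative), writing the UTF-8 BOM first lets editors like Notepad detect the encoding automatically:

    #include <fstream>
    #include <string>

    // Write UTF-8 text to a file, prefixed with the UTF-8 BOM (EF BB BF).
    void WriteUtf8File(const std::string& path, const std::string& utf8Text)
    {
        std::ofstream out(path, std::ios::binary);
        out.write("\xEF\xBB\xBF", 3);                  // UTF-8 BOM
        out.write(utf8Text.data(), utf8Text.size());
    }

    // For UTF-16LE the BOM is the two bytes FF FE (followed by little-endian
    // 16-bit code units); for UTF-16BE it's FE FF.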