Database globalization
DBAsupport.com Forums

Thread: Database globalization

  1. #1
    Join Date
    Oct 2003
    Posts
    312

    Database globalization

    Hi all,

    I have a quick question about database globalization. I have a database currently running on WE8ISO8859P1, and we want to convert to either UTF8 or WE8MSWIN1252.

    1. Are there any issues with doing that?
    2. How is it done? Simply exp/imp?

    thanks,

  2. #2
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    May depend on your db version.

    8i or better should be OK. 8.0 supported it but had some nasty file creation problems.

    Just make sure your exp/imp client's NLS_LANG is set right first.
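    For example, on a Unix client it might look like this (the value assumes the WE8ISO8859P1 source database discussed here; the exp line is only a placeholder):

    ```shell
    # Set the client character set to match the SOURCE database before exporting,
    # so the dump file is written without any client-side conversion.
    export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    echo "$NLS_LANG"   # sanity check before running exp

    # then something like (userid/file/log are placeholders):
    # exp userid=system file=full.dmp log=full.log full=y
    ```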

    The Western European character set maps easily to UTF8.

    We did have one off-the-shelf application (SalesLogix - CRM) complain about UTF8, so we switched it back.

    Best of Luck,
    Last edited by KenEwald; 03-10-2004 at 02:48 PM.
    "I do not fear computers. I fear the lack of them." Isaac Asimov
    Oracle Scripts DBAs need

  3. #3
    Join Date
    Oct 2003
    Posts
    312
    Originally posted by KenEwald:

    We did have one off-the-shelf application (SalesLogix - CRM) complain about UTF8, so we switched it back.
    Thanks for your prompt reply.

    What do you mean by one of your applications complaining about UTF8 so that you had to switch it back? What did you switch to?

    Do you know of any issues with changing the character set to UTF8 or WE8MSWIN1252?
    Also, I read an answer on AskTom that said we may have to modify our PL/SQL code. Quoting:

    Your PL/SQL routines may well have to change, and your data model may well have to change.

    You'll find that in UTF8, European characters (except ASCII, i.e. 7-bit data) all take 2 bytes. That VARCHAR2(80) you have in your database? It might only hold 40 characters of European data (or even fewer of other kinds of data). It is 80 bytes (you can use the new 9i syntax VARCHAR2(N CHAR), which allocates in characters, not bytes).

    So you could find that your 80-character description field cannot hold 80 characters.

    You might find that x := a || b; fails with a "string too long" error in your PL/SQL code due to the increased size.

    You might find that your string-intensive routines run slower: SUBSTR(x, 1, 80) is no longer byte 1 .. byte 80; Oracle has to scan the string to find where characters start and stop, which is more complex.

    CHR(10) and CHR(13) should work fine; they are plain ASCII.

    On CLOBs: same impact as on VARCHAR2, same issues.
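    You can see the byte-versus-character difference directly in a UTF8 database (a quick sketch; the counts assume the database character set is UTF8, and the table name is just for illustration):

    ```sql
    -- 'résumé' is 6 characters but 8 bytes in UTF8 (each 'é' takes 2 bytes)
    SELECT LENGTH('résumé')  AS char_count,
           LENGTHB('résumé') AS byte_count
      FROM dual;

    -- 9i character-length semantics: holds 80 characters regardless of byte size
    CREATE TABLE t_demo (description VARCHAR2(80 CHAR));
    ```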
    thanks
    Last edited by learning_bee; 03-10-2004 at 02:58 PM.

  4. #4
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    We had to switch from UTF8 back to Western European (WE8ISO8859P1).

    It's a packaged app, so we couldn't figure out what they were doing.

    Come to think of it, we did have a problem with JDBC drivers and Thai character conversion. We upgraded the JDBC drivers to match the database version (9.2.0.4) and that fixed it.

    Other than that, I don't know of any other character set conversion issues, but better check MetaLink for bugs on your specific version.

    Best of luck. I'd be curious to know how it goes.

  5. #5
    Join Date
    Oct 2003
    Posts
    312
    KenEwald,

    Thanks. I am running some tests now; here is what I did.

    I exported the WE8ISO8859P1 database; below is part of the exp log:

    Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
    Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set

    So I created a new database using UTF8; below is part of the import log:

    import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
    import server uses UTF8 character set (possible charset conversion)
    IMP-00046: using FILESIZE value from export file of 1992294400



    It looks like Oracle does the data conversion itself; my import is still running with no errors so far. Assuming the import finishes with no errors in the log, will I lose any data at all, or am I OK?

  6. #6
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    Sounds like you're on your way.

    Yes the conversion is automatic.

    If it can't convert you'll get an error that looks something like this:

    INVALID CHARACTER ENCOUNTERED IN: FAILUTF8CONV

    Not to worry: every WE8ISO8859P1 character has a UTF8 equivalent, so I'd be really surprised if you see any errors.
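    If you want to double-check after the import, a quick query against the target database shows what it's actually using:

    ```sql
    -- Confirm the character sets of the database you imported into
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
    ```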

  7. #7
    Join Date
    Oct 2003
    Posts
    312
    Kenwald,

    Thanks for all the input. I also ran another test based on one of the posts on AskTom. Basically, I changed NLS_LANG on the client machine to UTF8 and did the export from there, and I got this error:

    IMP-00091 ...questionable statistics. I guess I can't do that because the database is on WE8ISO8859P1.

  8. #8
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    I've seen that before and it's never been a problem. I suppose the statistics are flagged as questionable because row-size calculations (or something similar) can't be carried over accurately.

    Statistics can be re-generated easily enough.
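    Something along these lines will regenerate them (the schema name is a placeholder; options vary a bit by version):

    ```sql
    -- Re-gather optimizer statistics for one schema after the import
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', cascade => TRUE);
    END;
    /
    ```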

    -Ken

  9. #9
    Join Date
    Oct 2003
    Posts
    312
    In addition to this thread, I also have a question about NLS_DATABASE_PARAMETERS. Below is the output of a query against NLS_DATABASE_PARAMETERS:

    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 9.2.0.4.0


    As you can see, the date format is DD-MON-RR, but when I query SYSDATE from dual I get:

    SYSDATE
    3/11/2004 1:37:18.000 PM


    So the question is: where does this SYSDATE format come from? It's not the same as the date format shown in NLS_DATABASE_PARAMETERS.

  10. #10
    Join Date
    Feb 2002
    Location
    Brussels-Belgium
    Posts
    8
    right overrides left:

    NLS_DATABASE_PARAMETERS -> NLS_INSTANCE_PARAMETERS -> NLS_SESSION_PARAMETERS

    database comes from ?
    instance comes from init.ora
    session comes from alter session or dbms_session.set_nls
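    A quick sketch of the precedence in action (the date format is just an example):

    ```sql
    -- The session-level setting wins over the instance and database defaults
    ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

    SELECT sysdate FROM dual;   -- now rendered with the session's format

    -- Compare what the session and database levels report
    SELECT parameter, value FROM nls_session_parameters
     WHERE parameter = 'NLS_DATE_FORMAT';
    SELECT parameter, value FROM nls_database_parameters
     WHERE parameter = 'NLS_DATE_FORMAT';
    ```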
