[Libreoffice-qa] Ubuntu/Canonical doing more manual testing for LibreOffice?

Bjoern Michaelsen

[Libreoffice-qa] Ubuntu/Canonical doing more manual testing for LibreOffice?

Hi Sophie, Cor, QA-List

please see:

 https://lists.ubuntu.com/archives/ubuntu-desktop/2012-February/003745.html

this is an opportunity to get more structured manual testing done on
LibreOffice. Could you have a look at whether you could help coordinate this
with our own efforts to get LibreOffice tested even better?

Best,

Bjoern
_______________________________________________
List Name: Libreoffice-qa mailing list
Mail address: [hidden email]
Change settings: http://lists.freedesktop.org/mailman/listinfo/libreoffice-qa
Problems? http://www.libreoffice.org/get-help/mailing-lists/how-to-unsubscribe/
Posting guidelines + more: http://wiki.documentfoundation.org/Netiquette
List archive: http://lists.freedesktop.org/archives/libreoffice-qa/
sophi

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

Hi all,
On 24/02/2012 11:12, Bjoern Michaelsen wrote:
> Hi Sophie, Cor, QA-List
>
> please see:
>
>   https://lists.ubuntu.com/archives/ubuntu-desktop/2012-February/003745.html
>
> this is an opportunity to get more structured manual testing done on
> LibreOffice. Could you have a look if you could help out coordinating this with
> our own efforts to get LibreOffice tested even better?

Sorry, I realized I only replied to Bjoern; that was not intended.
Although I haven't spent much time on QA these last weeks (far less than I
would like to), I'll have a look over the weekend and give my feedback
here. I'm not sure I'll have time to dedicate to this, but I'll try to
find some if nobody else jumps in.

Kind regards
Sophie
--
Founding member of The Document Foundation
Yi Fan Jiang

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by Bjoern Michaelsen
Hi Bjoern/fellows,

Thanks for the lovely news! I am CCing Rimas, as this concerns him as well.

Hi Nicholas,

I am Yifan from LibreOffice QA :) I just took a look at the Checkbox wiki page
quoted in your original mail, and here is some related information. We have a
bunch of regression test cases managed in the Litmus test case management
tool:

    https://tcm.documentfoundation.org/

    [details on the wiki]
    http://wiki.documentfoundation.org/Litmus
    http://wiki.documentfoundation.org/QA/Testing/Regression_Tests#Full_Regression_Test

Meanwhile, sample test case descriptions in Litmus can be found at:

https://tcm.documentfoundation.org/show_test.cgi?id=1302
https://tcm.documentfoundation.org/show_test.cgi?id=1067

From my understanding it seems possible to format selected test cases and
populate them into Checkbox. But the problem is that the test cases are
constantly being updated/created/removed throughout the LibreOffice life
cycle, so it could be frustrating work not only to populate the cases into
Checkbox but also to maintain two versions of the test base (Litmus and
Checkbox). So I just wonder: did you experience a similar situation, or do you
have any good ideas for this (for example, syncing the two test bases)?
Thanks again for the great news :)
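For what it's worth, the Litmus-to-Checkbox direction of such a sync could be sketched as a small export script. A minimal sketch in Python; the test-case field names (`id`, `summary`, `steps`, `expected`) and the exact Checkbox manual-job stanza layout used here are assumptions for illustration, not verified against either tool:

```python
def litmus_to_checkbox_job(case):
    """Render a Litmus-style test case (a plain dict) as a Checkbox
    manual-job stanza. Field names and stanza layout are hypothetical."""
    # Number the manual steps for the tester.
    steps = "\n".join(f"  {i}. {s}" for i, s in enumerate(case["steps"], 1))
    description = (
        " PURPOSE:\n"
        f"  {case['summary']}\n"
        " STEPS:\n"
        f"{steps}\n"
        " VERIFICATION:\n"
        f"  {case['expected']}"
    )
    return (
        "plugin: manual\n"
        f"name: libreoffice/{case['id']}\n"
        "description:\n"
        f"{description}\n"
    )
```

Such a script would make the export repeatable, which matters exactly because the Litmus cases keep changing; the hard part (the reverse direction, and conflict handling) is what the question above is really about.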

Best wishes,
Yifan

On Fri, Feb 24, 2012 at 11:12:39AM +0100, Bjoern Michaelsen wrote:

> Hi Sophie, Cor, QA-List
>
> please see:
>
>  https://lists.ubuntu.com/archives/ubuntu-desktop/2012-February/003745.html
>
> this is an opportunity to get more structured manual testing done on
> LibreOffice. Could you have a look if you could help out coordinating this with
> our own efforts to get LibreOffice tested even better?
>
> Best,
>
> Bjoern
sophi

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

Hi all,
On 28/02/2012 11:15, Yifan Jiang wrote:
> Hi Bjoern/fellows,
>
> Thanks for the lovely news! I am CCing Rimas for his concern as well.

+1 :)
>
> Hi Nicholas,

Hi Nicholas,

>
> I am Yifan from libreoffice QA :) I just took a view of the Checkbox wikipage
> quoted in your original mail and here are some related information available.
> We have got a bunch of regression test cases specifically for managed in
> Litmus test case management tool:
>
>      https://tcm.documentfoundation.org/
>
>      [details referring to wiki]
>      http://wiki.documentfoundation.org/Litmus
>      http://wiki.documentfoundation.org/QA/Testing/Regression_Tests#Full_Regression_Test
>
> Meanwhile there are sample test cases description in Litmus can be found as:
>
> https://tcm.documentfoundation.org/show_test.cgi?id=1302
> https://tcm.documentfoundation.org/show_test.cgi?id=1067
>
>  From my understanding it seems possible to format selective test cases and
> populate them to CheckBox. But the problem is the test cases are constantly
> being updated/created/removed through libreoffice life cycle, there may be a
> frustrating work to do not only for populating the cases to Checkbox but
> maintaining 2 versions of test base (Litmus and Checkbox). So just wonder did
> you experience similar situation or have any nice ideas for this (for example,
> syncing 2 test bases)? Thanks again for the great news :)

So, I'm ready to work on this, but as underlined by Yifan, it would be
interesting for us to also keep our Litmus system up to date and
synchronized, because it is used across several OSes and languages, so it
has real value here.
The ability to sync the two test bases would be really great, as it would
avoid duplicating the work for the maintainers. Anyway, I'm here to help,
so don't hesitate to tell me what to do :-)

Kind regards
Sophie
--
Founding member of The Document Foundation
Bjoern Michaelsen

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

Hi Sophie, all,


On Fri, Mar 02, 2012 at 11:29:32AM +0100, Sophie Gautier wrote:
> So, I'm ready to work on this, but as underlined by Yifan, it would
> be interesting for us to keep also our Litmus system up to date and
> synchronized because it's used under several OS and languages so it
> has a real interest here.
> The ability of syncing the 2 test bases would be really great as it
> would avoid duplicating the work for the maintainers. Anyway, I'm
> here to help, so don't hesitate to tell me what to do :-)

Syncing the test bases would be nifty, but for now we should concentrate on
getting the existing test cases into Checkbox. Beta 2 is on 2012-03-29(*), and
we should make sure that we have as many tests in there as possible by then,
as there will be widely distributed calls for testing like this one:

http://www.theorangenotebook.com/2012/03/opportunity-manual-application-testing.html

and we really should not miss that opportunity. We can discuss synchronizing
the test case bases etc. later. The testing window is between 2012-03-29 and
2012-04-26, and we should get whatever testing we can out of it. We can
discuss creating a nifty sync solution after that, as we will have some time
until the LibreOffice 3.6 and Ubuntu p+1 releases.

So: just getting the tests over to Checkbox now (manually or by whatever
means) should have priority.

Do we agree there?

Best,

Bjoern


(*) https://launchpad.net/ubuntu/+milestone/ubuntu-12.04-beta-2
sophi

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

Hi Bjoern, all,

I've added Nicholas in CC on this mail.
On 02/03/2012 13:50, Bjoern Michaelsen wrote:

> Hi Sophie, all,
>
>
> On Fri, Mar 02, 2012 at 11:29:32AM +0100, Sophie Gautier wrote:
>> So, I'm ready to work on this, but as underlined by Yifan, it would
>> be interesting for us to keep also our Litmus system up to date and
>> synchronized because it's used under several OS and languages so it
>> has a real interest here.
>> The ability of syncing the 2 test bases would be really great as it
>> would avoid duplicating the work for the maintainers. Anyway, I'm
>> here to help, so don't hesitate to tell me what to do :-)
>
> Syncing the testbases will be nifty, but for now we should concentrate on
> getting the existing testcases into checkbox. Beta2 is on 2012-03-29(*) and we
> should make sure that we have as much tests in there as possible by then as
> there will be widely distibuted calls for testing like this one:
>
> http://www.theorangenotebook.com/2012/03/opportunity-manual-application-testing.html
>
> then and we really should not miss that opportunity. We can discuss
> syncronizing testcase bases etc. later. The time window is between 2012-03-29
> and 2012-04-26 should get what ever testing we can out of that.  A we can
> discuss creating a nifty sync solution after that as we will have some time to
> the Libreoffice 3.6 and Ubuntu p+1 releases.
>
> So: just getting the tests over to checkbox now (manually or by whatever means)
> should have priority.
>
> Do we agree there?

OK, but I won't install Ubuntu, set up an environment on Launchpad, and
install Bazaar (or do I have to? Is it mandatory?). So a little help, so
that I don't spend my whole weekend on this, would be appreciated:
- is this the right link for test formatting?
http://testcases.qa.ubuntu.com/CaseAndPlanGuidelines
- are the tests on Litmus good enough, or should I write more?
- on that page there are several tests for Ubuntu/OOo, Kubuntu/OOo,
Applications/LibreOffice, etc. Which one should I choose?

Thanks in advance,
Kind regards
Sophie

>
> Best,
>
> Bjoern
>
>
> (*) https://launchpad.net/ubuntu/+milestone/ubuntu-12.04-beta-2


--
Founding member of The Document Foundation
Petr Mladek

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

On Fri, 02.03.2012 at 14:33 +0100, Sophie Gautier wrote:
> - are the tests on Litmus good enough or should I write more?

Great point. We have only very few test cases in Litmus (fewer than 50),
and their quality is debatable :-/

For example, I see test cases:

        + create empty Writer document
        + create empty Calc document

I do not think that we need manual tests for this. This basic operation is
part of any more complex test. In addition, exactly this is tested within a
few seconds by the smoketest.


Another bunch of tests reads like:

        + Translation check of creating a new database
        + Translation check when creating a table in a database
        + Translation check for Formula Editor


Of course, we need to check that the application is translated, but we
can't check every dialog manually. Instead of the above particular dialogs,
we should check that different kinds of elements are localized, for
example:

        + the "File/New" menu - because it consists of optional components
          that are added from XML registry files
        + the main menu and one submenu
        + a dialog with tabs, check boxes, combo boxes, itemized lists,
          and other elements
        + the help - because it uses a different technology than the other
          dialogs
        + the KDE/GNOME save dialogs - because they are done with a
          different technology as well
        + extensions - because their translation is done in a slightly
          different way

If one submenu is localized, the other submenus should be localized as
well, provided the strings are in Pootle.


IMHO, we first need to discuss which test cases make sense and create
reasonable test cases.

We are still looking for an experienced QA person who could step in, teach
people, and drive this forward.


Best Regards,
Petr

Petr Mladek

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

On Fri, 02.03.2012 at 16:03 +0100, Petr Mladek wrote:
> IMHO, we need to discuss what test cases make sense and create a
> reasonable test cases first.

BTW: my (draft) thoughts about what a good test case looks like can be
found at
http://wiki.documentfoundation.org/QA/Testing/Test_Case#Good_Test

Feel free to update the wiki page. It is not mine; I just entered the
initial content ;-)


Best Regards,
Petr

sophi

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by Petr Mladek
Hi Petr,

On 02/03/2012 16:03, Petr Mladek wrote:

> Sophie Gautier píše v Pá 02. 03. 2012 v 14:33 +0100:
>> - are the tests on Litmus good enough or should I write more?
>
> Great point. We have only very few test cases in Litmus (less than 50)
> and the quality is debatable :-/
>
> For example, I see test cases:
>
> + create empty Writer document
> + create empty Calc document
>
> I do not think that we need manual tests for this. This basic operation
> is part of any other complex test. In addition, exactly this is tested
> within few seconds using the smoketest.
>
>
> Another bunch of tests sounds like:
>
> + Translation check of creating a new database
> + Translation check when creating a table in a database
> + Translation check for Formula Editor

Well, I don't think you got the purpose of what Litmus was made for. It
was for community testing at large: very easy and short tests to spark
interest in testing. It should also help localizers test their versions,
just as we did in the past, and it worked well. Some spent only 30 minutes,
others more than 3 hours, because the online tests were only the very basis
of larger tests with a set of documents. So it's more about the life of a
team than just a basic test. Unfortunately, we don't have the right tool
here and no money to develop something that could suit our needs. Mozilla
was developing a tool, but it's not done yet either.
>
>
> Of course, we need to check that the application is translated but we
> can't check every dialog manually.

We had that in the past with the VCLTesttool.

>   Instead of the above particular
> dialogs, we should check that different elements are localized, for
> example:
>
> + "File/New" menu - because it consists of optional components
>                            that are added from xml registry files
>   + main menu and one submenu
> + a dialog with tabs, check boxes, combo boxes, itemized list,
>            and other elements
> + help - because it using another technology than the other
>                   dialogs
> + KDE/GNOME safe dialog because they are done another technology
>                   as well
> + extensions - because the translation is done slightly
>                      different way
>
> If one submenu is localized, the other submenus should be localized as
> well if the strings are in pootle.

It's not about localization only (though it's good for CTL and CJK), but
also about the design of the dialog, which must allow the whole string to
be seen, so that you can adapt the dialog or the l10n. It's not about
seeing whether it works; it's about the quality of the l10n and the design.
>
>
> IMHO, we need to discuss what test cases make sense and create a
> reasonable test cases first.
>
> We are still looking for an experienced QA guy who could step in, teach
> people and drive this forward.

So let's wait for that guy.

Kind regards
Sophie
--
Founding member of The Document Foundation
Petr Mladek

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

On Fri, 02.03.2012 at 16:20 +0100, Sophie Gautier wrote:

> > Another bunch of tests sounds like:
> >
> > + Translation check of creating a new database
> > + Translation check when creating a table in a database
> > + Translation check for Formula Editor
>
> Well, I don't think you get the purpose of what Litmus was done for. It
> was for community testing at large, so very easy and short tests to
> bring interest to the testing. It should have help also localizer to
> test there version. Just as we did by the past and it worked well. Some
> spend only 30mn others more that 3 hours because the online tests was
> only the very basis of larger tests with a set of documents. So it's
> more about the life of a team, than only a basic test. Unfortunately we
> don't have the good tool here and no money to develop what could suite
> our needs. Mozilla was developing a tool but it's not yet done either.

I appreciate that you want to teach people to use Litmus. Though, I am
afraid that you did not get my point.

Please read the above-mentioned test cases. One test describes how to get
into one dialog and asks you to check that all its strings are translated.
Another test case describes how to reach another dialog where the strings
need to be checked.

IMHO, there are hundreds or thousands of dialogs, and we do not want a
test case for every single one. We do not have enough people who could
create, translate, and process all such test cases.

Also, I am not sure whether it would be effective to use Litmus for this
type of testing. It might take a few seconds to check that all strings are
in a given language, but it might take longer to enter your result in
Litmus and select another test case.

IMHO, we could do a much better job here. If the strings are translated in
Pootle and the build works correctly, all translated strings are used. In
other words, if you have translations for 1000 dialogs in Pootle, it is
enough to QA only 1 dialog. The strings are extracted from Pootle by a
script and applied to the sources by another tool. If one string is used,
the others are used as well[*].

You might say that you need to check the layout of the strings, that they
are not truncated. Well, we do not need to check all strings here. It might
be enough to check only the strings that look risky (where the translation
is much longer than the original string).

You might say that we should check the quality of the translation, i.e.
whether the translation makes sense in the context of the given dialog.
Well, this is not mentioned in the current test cases. Also, I am not sure
it is worth the effort. We do not change all strings in every release, so
we do not need to check all translations.
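A first pass at finding those risky strings could even be automated over the translation files. A minimal sketch in Python, assuming a simplified gettext PO layout where each msgid/msgstr pair sits on single lines (real PO files can span multiple lines, so this is illustrative only):

```python
import re

def risky_translations(po_text, ratio=1.5):
    """Flag PO entries whose translation is much longer than the source.

    Parses only single-line msgid/msgstr pairs and returns the pairs where
    the translated string exceeds `ratio` times the source length. The
    empty msgid (the PO header) is skipped.
    """
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    return [
        (msgid, msgstr)
        for msgid, msgstr in pairs
        if msgid and len(msgstr) > ratio * len(msgid)
    ]
```

A list produced this way could then feed a much shorter manual layout check: only the flagged dialogs get opened and inspected by a human.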


> > Of course, we need to check that the application is translated but we
> > can't check every dialog manually.
>
> We had that by the past with the VCLTestool.

Hmm, how did VCLTesttool help here? Did it check that a string was
localized? Did it check whether a translation was truncated or confusing?

>   Instead of the above particular
> > dialogs, we should check that different elements are localized, for
> > example:
> >
> > + "File/New" menu - because it consists of optional components
> >                            that are added from xml registry files
> >   + main menu and one submenu
> > + a dialog with tabs, check boxes, combo boxes, itemized list,
> >            and other elements
> > + help - because it using another technology than the other
> >                   dialogs
> > + KDE/GNOME safe dialog because they are done another technology
> >                   as well
> > + extensions - because the translation is done slightly
> >                      different way

[*] There are several types of strings which are processed in different
ways. The above list mentions the basic categories. I suggest testing
examples of each category instead of every single dialog.


> > If one submenu is localized, the other submenus should be localized as
> > well if the strings are in pootle.
>
> It's not about localization only (but it's good for CTL and CJK) , but
> also about the design of the dialog that allow to see the whole string
> and then adapt the dialog or the l10n. It's not about to see if it
> works, it's about the quality of the l10n and the design.

I agree that we need to test this, but then we are back at the current test
cases. They do not mention CTL/CJK problems. They do not ask for a design
check. They do not concentrate on functionality that is really affected by
CTL/CJK.

I am neither a localization person nor a professional QA guy. I just have a
feeling that we could do the testing more effectively and that the test
cases should teach people how to do it. IMHO, this is not the current state.

IMHO, it would be easier to start with functional tests rather than
entering hundreds of the "same" translation tests.


> > IMHO, we need to discuss what test cases make sense and create a
> > reasonable test cases first.
> >
> > We are still looking for an experienced QA guy who could step in, teach
> > people and drive this forward.
>
> So lets wait for that guy.

Sophie, I feel something negative in this sentence. Please do not take this
mail personally. I know that you do much, much work for this project.

I try to show QA people some interesting directions when I find time.
Unfortunately, I have many other tasks in the release process and not
enough energy left for this interesting QA area.


Best Regards,
Petr

sophi

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

Petr,

First, I don't take anything in your mail personally. I disagree with you,
but it's nothing personal :)

On 02/03/2012 17:26, Petr Mladek wrote:

> Sophie Gautier píše v Pá 02. 03. 2012 v 16:20 +0100:
>>> Another bunch of tests sounds like:
>>>
>>> + Translation check of creating a new database
>>> + Translation check when creating a table in a database
>>> + Translation check for Formula Editor
>>
>> Well, I don't think you get the purpose of what Litmus was done for. It
>> was for community testing at large, so very easy and short tests to
>> bring interest to the testing. It should have help also localizer to
>> test there version. Just as we did by the past and it worked well. Some
>> spend only 30mn others more that 3 hours because the online tests was
>> only the very basis of larger tests with a set of documents. So it's
>> more about the life of a team, than only a basic test. Unfortunately we
>> don't have the good tool here and no money to develop what could suite
>> our needs. Mozilla was developing a tool but it's not yet done either.
>
> I appreciate that you want to teach people using Litmus. Though, I am
> afraid that you did not get my point.

I don't want to teach them to use Litmus; I want them to get interested,
have fun, and not feel harassed by the task.

>
> Please, read the above mentioned test cases. One test describe how get
> into one dialog and asks to check that all strings are translated.
> Another test cases describes how to reach another dialog where the
> strings need to be checked

The check of the translation is a secondary purpose of the test; the first
purpose is to check basic functionality such as Save As, Open, Copy,
Paste... etc.
>
> IMHO, there are hunderts or thousands of dialogs. IMHO, we do not want
> a test case for every single dialog. We do not have enough people who
> could create, translate, and process all such test cases.

We are testing functionality, and at the same time checking for basic i18n
handling (numbers, accented characters, dates, the size of the strings...).
>
> Also I am not sure if would be effective to use Litmus for this type of
> testing. It might take few seconds to check that all strings are in a
> given language. It might take longer time until you enter your result in
> Litmus and select another test case.

Litmus should be an entry point to QA for the community at large, i.e. no
language barrier, no technical barrier, and a team behind it to guide you
further into more complex testing. Unfortunately, it's not a tool adapted
to our needs.
>
> IMHO, we could do much better job here. If we have strings translated in
> pootle and the build works correctly, all translated strings are used.
> By other words, if you have translation for 1000 dialogs in pootle, it
> is enough to QA only 1 dialog. The strings are extracted from pootle by
> a script and applied in sources by another tool. If one string is used,
> others are used as well[*].

As I said, I'm not speaking about translation. The contents of the test may
confuse you when it speaks about localization, but that's only a secondary
purpose of the test, a "*while you are here*, please check that the dialog
has the right special characters in your language".
>
> You might say that you need to check layout of the strings that they are
> not shrinked. Well, we need not check all strings here. It might be
> enough to check only strings that look risky (translation is much
> longer) than the original string.

No, it's not enough, because most of the time the team doing the
translation is one person only, so you can't remember where and when the
translation is longer than the original, and for some languages it's
always true.
>
> You might say that we should check quality of the translation. I mean if
> the translation makes sense in the context of the given dialog. Well,
> this is not mentioned in the current test case. Also, I am not sure if
> it is worth the effort. We do not change all strings in every release.
> So, we do not need to check all translations.

When you see the amount of strings compared to the number of people doing
the translation, having a proofreading of the dialogs during QA is not a
luxury ;) But I agree, as I said, it's not the first aim of the tests.
>
>
>>> Of course, we need to check that the application is translated but we
>>> can't check every dialog manually.
>>
>> We had that by the past with the VCLTestool.
>
> Hmm, how VCLTesttool helped here? Did it checked that a string was
> localized? Did it checked if a translation was shrinked or confusing?

It took a snapshot of each dialog, menu, submenu, etc. When you wanted to
reach a certain level of quality for your version, it was very useful,
because you were sure that everything had been checked. I don't say you
should run it on every version, but I did it on each major OOo version.

>
>>    Instead of the above particular
>>> dialogs, we should check that different elements are localized, for
>>> example:
>>>
>>> + "File/New" menu - because it consists of optional components
>>>                             that are added from xml registry files
>>>     + main menu and one submenu
>>> + a dialog with tabs, check boxes, combo boxes, itemized list,
>>>             and other elements
>>> + help - because it using another technology than the other
>>>                    dialogs
>>> + KDE/GNOME safe dialog because they are done another technology
>>>                    as well
>>> + extensions - because the translation is done slightly
>>>                       different way
>
> [*] There are several type of strings which are processed different way.
> The above list mentions the basic categories. I suggest to test examples
> of the categories instead of every single dialog.
>
>
>>> If one submenu is localized, the other submenus should be localized as
>>> well if the strings are in pootle.
>>
>> It's not about localization only (but it's good for CTL and CJK) , but
>> also about the design of the dialog that allow to see the whole string
>> and then adapt the dialog or the l10n. It's not about to see if it
>> works, it's about the quality of the l10n and the design.
>
> I agree that we need to test this, but we are back to the current test
> cases. They do not mention CTL/CJK problems. They do not ask for a design
> check. They do not concentrate on the functionality that is really affected
> by CTL/CJK.

Because each team has to adapt the tests to its own language, the English
basis doesn't mention every specificity.
>
> I am neither a localization person nor a professional QA guy. I just
> have a feeling that we could do the testing more effectively and the test
> cases should teach people how to do it. IMHO, this is not the current state.

Yes, you're right. But keep in mind that to teach people in their spare
time, they need to enjoy it. It needs to be a step by step learning,
growing interest as well as knowledge at the same time. And don't forget
the fun too. There should be very simple test cases and more complex
ones. Simple samples of document and much more complex ones. Defined
period of test to create a dynamic in the group, with visible results
and visible recognition, etc...
>
> IMHO, it would be easier to start with functional tests rather than
> entering hundreds of the "same" translation tests.

yes, of course :) but I think you get what I meant. You may see
localization of a test as mere repetition, when it's really offering
somebody a way to come in with no other burden than the joy of
participating in their own language and with their basic skills. Accepting
to waste some time on less effective tests, but allowing more people to
participate, is what is behind that tool.

>
>
>>> IMHO, we need to discuss what test cases make sense and create a
>>> reasonable test cases first.
>>>
>>> We are still looking for an experienced QA guy who could step in, teach
>>> people and drive this forward.
>>
>> So lets wait for that guy.
>
> Sophie, I feel something negative in this sentence. Please, do not take
> this mail personally. I know that you do much much work for this
> project.

this is not negative, it's a bit fatalistic, because your technical vision
is right and effective as far as testing goes, but it doesn't give access
to a wide range of people with very low skills but full of energy, who
will ask a little (or more) of our time but give strong feedback in
return. And I don't have the energy or the time to demonstrate that :)
>
> I try to show QA people some interesting directions when I found time.
> Unfortunately, I have many other tasks in the release process and not
> enough energy left for this interesting QA area.

As we all are. But anyway the work has to be done :-)

Kind regards
Sophie
--
Founding member of The Document Foundation
_______________________________________________
List Name: Libreoffice-qa mailing list
Mail address: [hidden email]
Change settings: http://lists.freedesktop.org/mailman/listinfo/libreoffice-qa
Problems? http://www.libreoffice.org/get-help/mailing-lists/how-to-unsubscribe/
Posting guidelines + more: http://wiki.documentfoundation.org/Netiquette
List archive: http://lists.freedesktop.org/archives/libreoffice-qa/
Bjoern Michaelsen

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by sophi
Hi Sophie,
On Fri, Mar 02, 2012 at 02:33:25PM +0100, Sophie Gautier wrote:
> ok, but I won't install ubuntu, set up an environment on launchpad
> and install bazaar (or do I have to? is it mandatory?). So a little
> help, so that I don't spend all my week-end on this, would be:
> - is this link the right one for test formatting?
> http://testcases.qa.ubuntu.com/CaseAndPlanGuidelines
> - are the tests on Litmus good enough, or should I write more?
> - on this page, there are several tests for Ubuntu/OOo, Kubuntu/OOo,
> Applications/LibreOffice, etc. Which one should I choose?

I can't reach Nicholas right now; if there is no feedback by tomorrow, I
will investigate myself.

Best,

Bjoern
Petr Mladek

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by sophi
Sophie Gautier wrote on Fri, 02. 03. 2012 at 18:02 +0100:
> Petr,
>
> First, I don't take anything personal in your mail. I disagree with you
> but it's nothing personal :)

I hope that we could learn from each other :-)

> On 02/03/2012 17:26, Petr Mladek wrote:
> > I appreciate that you want to teach people using Litmus. Though, I am
> > afraid that you did not get my point.
>
> I don't want to teach them using Litmus, I want them to get interested,
> have fun and not feel harassed by the task.

Sure. This is my target as well. The translation checks looked boring
at first glance.


> We are testing functionalities and by the same way are checking for
> basic i18n conversion (numbers, accentuated characters, date, size of
> the string...)

Ok, so one example of the current test:

--- cut ---
Name: Translation check of creating a new database

Steps to Perform:

   *Open a new database file (New → Database) and check [Create a
    database] then click on Next button.

      * Check [Yes, I want the wizard to register the database] and
        [Open the database for edition] and click on Finish.
      * Enter a name for the database (using special characters in your
        language) in the dialog box and click OK.

Expected Results:

      * the database wizard open: all strings in the dialog box and
        window are correctly localized to your own language.
--- cut ---

Ok, it checks translation and functionality.

Do we really need to check the functionality in all 100 localizations?
IMHO, if the database opens in English, it opens in all localizations.
We do not need to force 100 people to spend time on this functional
test.

Do we need to check translation even when the strings were not changed
between the releases?

=> I strongly suggest separating translation and functional checks. It
is very ineffective to test them together.

Thanks to Rimas, we could mark test cases as language dependent and
independent, so we have a great support for this separation.


> Litmus should be an entry point to QA for the community at large,
> i.e. no language barrier, no technical barrier, a team behind to guide
> you further in more complex testing. Unfortunately, it's not a tool
> adapted to our needs.

I agree with you. I just say that many of the current test cases sound
crazy as they are and might point people in the wrong direction.


> As said, I'm not speaking about translation. The contents of the test
> may confuse you when it speaks about localization, but it's only a
> secondary purpose of the test, a "*while you are here*, please check that
> the dialog has the right special characters in your language"

Yes, it is confusing because they mix the translation and functional
tests. All I want to say is that it is not effective and we should not
go this way.


> No, it's not enough, because most of the time, the team doing the
> translation is one person only, so you can't remember where and when the
> translation is longer than the original, and for some languages it's
> always true.

We could use some scripting here. Andras is interested in the
translation stuff. I wonder if he has time and could help here.
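A minimal sketch of the kind of scripting meant here (a hypothetical helper, not an existing tool): flag translations noticeably longer than their English source, which is the case Sophie describes. In practice the string pairs would be extracted from the .po files, for instance with a library such as polib; the 1.5x ratio below is an arbitrary threshold chosen for illustration.

```python
# Hypothetical sketch of the scripted length check discussed above.
# The (msgid, msgstr) pairs would normally come from the .po files;
# here they are passed in directly to keep the example self-contained.

def flag_long_translations(entries, ratio=1.5):
    """Return the (msgid, msgstr) pairs whose translation is more than
    `ratio` times as long as the English source string."""
    return [
        (msgid, msgstr)
        for msgid, msgstr in entries
        if msgstr and len(msgstr) > ratio * len(msgid)
    ]

if __name__ == "__main__":
    entries = [
        ("New", "Nouveau"),     # 7 chars vs 3: flagged as possibly cropped
        ("Cancel", "Annuler"),  # 7 chars vs 6: within the ratio, not flagged
    ]
    for msgid, msgstr in flag_long_translations(entries):
        print(f"possibly cropped: {msgid!r} -> {msgstr!r}")
```

Such a script would only catch length problems, not confusing wording, so it complements rather than replaces a human review.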


> > You might say that we should check quality of the translation. I mean if
> > the translation makes sense in the context of the given dialog. Well,
> > this is not mentioned in the current test case. Also, I am not sure if
> > it is worth the effort. We do not change all strings in every release.
> > So, we do not need to check all translations.
>
> When you see the amount of strings for the number of people doing
> translation, having a proof reading of the dialog during QA is not a
> luxury ;) But I agree, as said it's not the first aim of the tests

Sure. On the other hand, checking 1000 dialogs because you changed only
20 of them is not a luxury either.


> >> We had that in the past with the VCLTesttool.
> >
> > Hmm, how did VCLTesttool help here? Did it check that a string was
> > localized? Did it check whether a translation was truncated or confusing?
>
> It took a snapshot of each dialog, menu, submenu, etc. When you want to
> reach a certain level of quality for your version, it was very useful,
> because you were sure that everything was checked. I don't say that you
> should run it on each version, but I did run it on each major OOo version.

I am not sure if we are speaking about the same tool. I speak about the
testtool that used the .bas scripts from the testautomation sources
module.

What do you mean by snapshot?

IMHO, it went through many dialogs and did many actions. AFAIK, it did
not check translations. It did not check that a dialog was shown
correctly. It only checked that it worked as expected.

Unfortunately, it was a pain to maintain and a pain to analyze the
results. Have you found any real bug with this tool?

IMHO, it gave you a false feeling that it checked something. It was
running for several days and printed many errors. I always spent a few
days analyzing them. Most of them were random errors caused by
asynchronous operations or by bugs in the test tool (testtool was not
updated for the new functionality). I found only very few bugs and spent
many days/weeks on it.


> Because each team has to adapt the tests to its own language, the English
> basis doesn't mention every specificity.

Yes, but only a small part of the functionality is language dependent.


> Yes, you're right. But keep in mind that to teach people in their spare
> time, they need to enjoy it. It needs to be a step by step learning,
> growing interest as well as knowledge at the same time. And don't forget
> the fun too. There should be very simple test cases and more complex
> ones. Simple samples of document and much more complex ones. Defined
> period of test to create a dynamic in the group, with visible results
> and visible recognition, etc...

Yup. I like the test case
"Translation check when creating a table in a database". It makes perfect
sense as a functional test. I only do not understand why it focuses so
much on translation.

> > IMHO, it would be easier to start with functional tests rather than
> > entering hundreds of the "same" translation tests.
>
> yes, of course :) but I think you get what I meant. You may see
> localization of a test as mere repetition, when it's really offering
> somebody a way to come in with no other burden than the joy of
> participating in their own language and with their basic skills. Accepting
> to waste some time on less effective tests, but allowing more people to
> participate, is what is behind that tool.

I think that we are on the same page. After all, the new test cases are
not that bad. My only problem is that they mix functional and
translation checks.

Have a nice weekend,
Petr

sophi

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

Hi Petr,
On 02/03/2012 19:21, Petr Mladek wrote:
> Sophie Gautier wrote on Fri, 02. 03. 2012 at 18:02 +0100:
>> Petr,
>>
>> First, I don't take anything personal in your mail. I disagree with you
>> but it's nothing personal :)
>
> I hope that we could learn from each other :-)

yes :)

>
>> On 02/03/2012 17:26, Petr Mladek wrote:
>>> I appreciate that you want to teach people using Litmus. Though, I am
>>> afraid that you did not get my point.
>>
>> I don't want to teach them using Litmus, I want them to get interested,
>> have fun and not feel harassed by the task.
>
> Sure. This is my target as well. The translation checks looked boring
> at first glance.
>
QA has a great potential to get boring ;)

>
>> We are testing functionalities and by the same way are checking for
>> basic i18n conversion (numbers, accentuated characters, date, size of
>> the string...)
>
> Ok, so one example of the current test:
>
> --- cut ---
> Name: Translation check of creating a new database
>
> Steps to Perform:
>
>     *Open a new database file (New → Database) and check [Create a
>      database] then click on Next button.
>
>        * Check [Yes, I want the wizard to register the database] and
>          [Open the database for edition] and click on Finish.
>        * Enter a name for the database (using special characters in your
>          language) in the dialog box and click OK.
>
> Expected Results:
>
>        * the database wizard open: all strings in the dialog box and
>          window are correctly localized to your own language.
> --- cut ---
>
> Ok, it checks translation and functionality.
>
> Do we really need to check the functionality in all 100 localizations?

It's only checked in 5 or 6 languages, even fewer if you look at the poll
I ran on the l10n list.

> IMHO, if the database opens in English, it opens in all localizations.
> We do not need to force 100 people to spend time on this functional
> test.
>
> Do we need to check translation even when the strings were not changed
> between the releases?

yes, because the amount of strings in the database is really big and you
need more than two eyes to check the quality.
>
> => I strongly suggest separating translation and functional checks. It
> is very ineffective to test them together.

You'd spare some resources, but most of the time tests are done by people
in their native language. Do you want to run them only in English?
>
> Thanks to Rimas, we could mark test cases as language dependent and
> independent, so we have a great support for this separation.
>
Yes, but again, this won't change much about the translation of the test
cases; testers will need to run them in their language.
>
>> Litmus should be an entry point to QA for the community at large,
>> i.e. no language barrier, no technical barrier, a team behind to guide
>> you further in more complex testing. Unfortunately, it's not a tool
>> adapted to our needs.
>
> I agree with you. I just say that many of the current test cases sound
> crazy as they are and might point people in the wrong direction.

yes, this is why Litmus is not adapted.

>
>
>> As said, I'm not speaking about translation. The contents of the test
>> may confuse you when it speaks about localization, but it's only a
>> secondary purpose of the test, a "*while you are here*, please check that
>> the dialog has the right special characters in your language"
>
> Yes, it is confusing because they mix the translation and functional
> tests. All I want to say is that it is not effective and we should not
> go this way.

Ok, let's try without checking the translation; we can remove the
specific directions about language in the test.

>
>
>> No, it's not enough, because most of the time, the team doing the
>> translation is one person only, so you can't remember where and when the
>> translation is longer than the original, and for some languages it's
>> always true.
>
> We could use some scripting here. Andras is interested in the
> translation stuff. I wonder if he has time and could help here.
>
>
>>> You might say that we should check quality of the translation. I mean if
>>> the translation makes sense in the context of the given dialog. Well,
>>> this is not mentioned in the current test case. Also, I am not sure if
>>> it is worth the effort. We do not change all strings in every release.
>>> So, we do not need to check all translations.
>>
>> When you see the amount of strings for the number of people doing
>> translation, having a proof reading of the dialog during QA is not a
>> luxury ;) But I agree, as said it's not the first aim of the tests
>
> Sure. On the other hand, checking 1000 dialogs because you changed only
> 20 of them is not a luxury either.

agreed

>
>
>>>> We had that in the past with the VCLTesttool.
>>>
>>> Hmm, how did VCLTesttool help here? Did it check that a string was
>>> localized? Did it check whether a translation was truncated or confusing?
>>
>> It took a snapshot of each dialog, menu, submenu, etc. When you want to
>> reach a certain level of quality for your version, it was very useful,
>> because you were sure that everything was checked. I don't say that you
>> should run it on each version, but I did run it on each major OOo version.
>
> I am not sure if we are speaking about the same tool. I speak about the
> testtool that used the .bas scripts from the testautomation sources
> module.
>
> What do you mean by snapshot?

Sorry, wrong name, it's a screenshot I was talking about.

>
> IMHO, it went through many dialogs and did many actions. AFAIK, it did
> not check translations. It did not check that a dialog was shown
> correctly. It only checked that it worked as expected.
>
> Unfortunately, it was a pain to maintain and a pain to analyze the
> results. Have you found any real bug with this tool?
>
> IMHO, it gave you a false feeling that it checked something. It was
> running for several days and printed many errors. I always spent a few
> days analyzing them. Most of them were random errors caused by
> asynchronous operations or by bugs in the test tool (testtool was not
> updated for the new functionality). I found only very few bugs and spent
> many days/weeks on it.

Once I was used to it and didn't change my environment, I was able
to find several bugs. But I don't want this tool back, I was just
talking about the ability to check our translation by taking
screenshots of the dialogs.
>
>
>> Because each team has to adapt the tests to its own language, the English
>> basis doesn't mention every specificity.
>
> Yes, but only a small part of the functionality is language dependent.

Yes, this is why I didn't see it as an issue to check the language at
the same time the test is run.

>
>
>> Yes, you're right. But keep in mind that to teach people in their spare
>> time, they need to enjoy it. It needs to be a step by step learning,
>> growing interest as well as knowledge at the same time. And don't forget
>> the fun too. There should be very simple test cases and more complex
>> ones. Simple samples of document and much more complex ones. Defined
>> period of test to create a dynamic in the group, with visible results
>> and visible recognition, etc...
>
> Yup. I like the test case
> "Translation check when creating a table in a database". It makes perfect
> sense as a functional test. I only do not understand why it focuses so
> much on translation.
>
>>> IMHO, it would be easier to start with functional tests rather than
>>> entering hundreds of the "same" translation tests.
>>
>> yes, of course :) but I think you get what I meant. You may see
>> localization of a test as mere repetition, when it's really offering
>> somebody a way to come in with no other burden than the joy of
>> participating in their own language and with their basic skills. Accepting
>> to waste some time on less effective tests, but allowing more people to
>> participate, is what is behind that tool.
>
> I think that we are on the same page. After all, the new test cases are
> not that bad. My only problem is that they mix functional and
> translation checks.

ok, let's see what it brings if we remove this part from the test. But
I'm afraid you're speaking of English tests only, with no localization of
the tests.
>
> Have a nice weekend,

Thanks, you too!

Kind regards
Sophie
--
Founding member of The Document Foundation
Nicholas Skaggs

Re: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by Yi Fan Jiang
Hello! If you're not able to use bazaar, feel free to email the ubuntu-qa list. And yes, the wiki page you linked is a great format to write them in. If you can, take a look at this page: https://wiki.ubuntu.com/Testing/Automation/Checkbox/Walkthrough. It shows what format the tests eventually have to be in for checkbox to use them properly. Submitting a file to the mailing list in a similar format would be best. See the gedit tests for an example: http://bazaar.launchpad.net/~nskaggs/checkbox/checkbox-app-testing/view/head:/jobs/gedit.txt.in.
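For illustration only, a LibreOffice test in roughly that style might look like the sketch below. The field names are reconstructed from memory of the checkbox job format, so check them against the walkthrough and the gedit example linked above before submitting anything.

```
plugin: manual
name: libreoffice/new-text-document
_description:
 PURPOSE:
     1. This test will check that a new Writer document can be created
 STEPS:
     1. Start LibreOffice Writer
     2. Choose File -> New -> Text Document
     3. Type a short sentence in the new document
 VERIFICATION:
     Did a new document open and accept the text correctly?
```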

Does that help? You don't need to use Launchpad if you don't want to; just submit tests in checkbox format if possible. You can submit them by emailing the mailing list. Thanks,

Nicholas

Bjoern Michaelsen <[hidden email]> wrote:

>Hi Sophie,
>On Fri, Mar 02, 2012 at 02:33:25PM +0100, Sophie Gautier wrote:
>> ok, but I won't install ubuntu, set up an environment on launchpad
>> and install bazaar (or do I have to? is it mandatory?). So a little
>> help, so that I don't spend all my week-end on this, would be:
>> - is this link the right one for test formatting?
>> http://testcases.qa.ubuntu.com/CaseAndPlanGuidelines
>> - are the tests on Litmus good enough, or should I write more?
>> - on this page, there are several tests for Ubuntu/OOo, Kubuntu/OOo,
>> Applications/LibreOffice, etc. Which one should I choose?
>
>I can't reach Nicholas right now; if there is no feedback by tomorrow, I will
>investigate myself.
>
>Best,
>
>Bjoern
Michael Meeks-2

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by Petr Mladek

On Fri, 2012-03-02 at 19:21 +0100, Petr Mladek wrote:
> Name: Translation check of creating a new database
...
>       * the database wizard open: all strings in the dialog box and
>         window are correctly localized to your own language.

        So - this looks pretty odd to me :-) This "string cropping" problem is
a constant annoyance, and yet it seems (to me) that we can verify at
compile time that there is little-to-no string cropping (assuming a UI
font with a sane width).

        Surely, as we translate each dialog, we can (at least on Linux with
freetype etc.) calculate the size of each string, and check that it
doesn't overlap with any other strings in the dialog.

        Surely we can also check that we have 100% translation by other means -
checking the .po files etc.

        Of course, some sanity check is good too, but if we could automate this,
would it save a lot of time?
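One of the checks mentioned here, verifying translation completeness from the .po files, can be sketched in a few lines. This is a deliberately naive illustration, not production tooling: real .po files have multi-line strings, plural forms and fuzzy flags, so proper gettext tooling (or a library such as polib) would be used in practice.

```python
import re

def translation_coverage(po_text):
    """Fraction of msgid entries in .po text whose msgstr is non-empty.
    Deliberately naive: it only handles single-line msgid/msgstr pairs
    and skips the header entry (msgid "")."""
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    pairs = [(src, tr) for src, tr in pairs if src]  # drop the header entry
    if not pairs:
        return 1.0
    translated = sum(1 for _, tr in pairs if tr)
    return translated / len(pairs)

sample = '''msgid "File"
msgstr "Fichier"

msgid "Edit"
msgstr ""
'''
print(translation_coverage(sample))  # 0.5: "Edit" is untranslated
```

Automating this kind of check would leave the human testers free for the design and context review that really needs eyes.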

        All the best,

                Michael.

--
[hidden email]  <><, Pseudo Engineer, itinerant idiot

Petr Mladek

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by sophi
Sophie Gautier wrote on Fri, 02. 03. 2012 at 19:39 +0100:
> > Do we really need to check the functionality in all 100 localizations?
>
> It's only checked in 5 or 6 languages, even fewer if you look at the poll
> I ran on the l10n list.

That is the current state. I hope that more people will use Litmus in the
long term.

Also, even these 5 groups might check 5x more functions if they do not
duplicate the effort. :-)


> > IMHO, if the database opens in English, it opens in all localizations.
> > We do not need to force 100 people to spend time on this functional
> > test.
> >
> > Do we need to check translation even when the strings were not changed
> > between the releases?
>
> yes, because the amount of strings in the database is really big and you
> need more than two eyes to check the quality.

Yes, but I hope that we could do it more effectively. We do not need to
check all strings every 6 months for every release. In addition, the
check is usually done by a similar group of people, so too many reviews
do not bring anything new. Finally, if there is anything too bad,
users will report it. If people are happy with a string for one year, we
need not spend too much effort on updating it.

LO is really complex. Users report new bugs weeks and months after the
release because some functionality is used less frequently. The more we
can check during the regression tests in the beta phase, the better the
experience of the final users.

I agree that reviewing strings is important. I just try to explain that
we could do many more tests if we separate functional and translation
tests.


> > => I strongly suggest separating translation and functional checks. It
> > is very ineffective to test them together.
>
> You'd spare some resources, but most of the time tests are done by people
> in their native language. Do you want to run them only in English?

No, the description of the functional test cases should be localized.
Anyone could run them in any localization. The point is that when a French
guy checks that a database can be created, it need not be checked by a
German, an Italian, or an American. These other guys could use the time
for doing some other checks.


> > Yes, it is confusing because they mix the translation and functional
> > tests. All I want to say is that it is not effective and we should not
> > go this way.
>
> Ok, let's try without checking the translation; we can remove the
> specific directions about language in the test.

ok


> >> Because each team has to adapt the tests to its own language, the English
> >> basis doesn't mention every specificity.
> >
> > Yes, but only a small part of the functionality is language dependent.
>
> Yes, this is why I didn't see it as an issue to check the language at
> the same time the test is run.

I probably wrote my sentence in a confusing way. I mean that only a very
small part of the tests depends on the language.

For example, spell checking depends on language. If a German dictionary
is available, it does not mean that a French dictionary is also available.
So, you need to check that the dictionary is available in each language.

On the other hand, most of the other functionality works exactly the
same way in all languages. For example, creating a database. You just
need to translate the steps into more localizations, so anyone could run
them. It is not important who runs the test. If it works in Italian, it
will also work in French, German, Greek, Japanese and other languages. They
do not need to check this.


> > I think that we are on the same page. After all, the new test cases are
> > not that bad. My only problem is that they mix functional and
> > translation checks.
>
> ok, let's see what it brings if we remove this part from the test. But
> I'm afraid you're speaking of English tests only, with no localization of
> the tests.

No, we already have support for translating the test cases. I am sure
that we will be able to do it even more cleanly in the future.


Best Regards,
Petr

Nino

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

On Monday 05 March 2012, 10:41:06 Petr Mladek wrote:

> No, we already have support for translating the test cases. I am sure
> that we will be able to do it even more cleanly in the future.

Is it correct that the Litmus UI is not localized yet? (Is it localizable at
all?)

Is there (or will there be) a possibility for a tester to "tag" a test
case, so that tests can be grouped deliberately? (I'm dreaming of an
individual set of test cases which I'm sort of "subscribed to": thus great
coverage could be achieved if there are a few people and everybody
subscribes to a different, individual, set.)

Thanks,
Nino
Cor Nouws

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by Petr Mladek
Petr Mladek wrote (05-03-12 10:41)

> I agree that reviewing strings is important. I just try to explain that
> we could do many more tests if we separate functional and translation
> tests.

I hope it also makes Litmus testing more attractive, when people
(realise they) can choose the tests in the areas that are most
interesting for them.

Also I expect there are people more interested in checking translation
and others more interested in features. And of course people interested
in both.

In the description, promotion, explanation on the wiki and such, it
might help to state e.g. "Using <function> frequently? Then take the
chance to test it in our latest release!"

--
  - Cor
  - http://nl.libreoffice.org

Nicholas Skaggs

Re: test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

In reply to this post by Michael Meeks-2
I'm chiming in here just to mention that the Ubuntu QA community and
the Ubuntu One QA folks are looking at using the new tool written by
Mozilla called Case Conductor. It's the successor to Litmus. How this
will fit into our workflow and testing needs in the future is still TBD.
If your group is interested, I'd like to include you in these
discussions and prototyping.

Nicholas

On 03/02/2012 03:13 PM, Michael Meeks wrote:

> On Fri, 2012-03-02 at 19:21 +0100, Petr Mladek wrote:
>> Name: Translation check of creating a new database
> ...
>>       * the database wizard open: all strings in the dialog box and
>>         window are correctly localized to your own language.
> So - this looks pretty odd to me :-) This "string cropping" problem is
> a constant annoyance, and yet it seems (to me) that we can verify at
> compile time that there is little-to-no string cropping (assuming a UI
> font with a sane width).
>
> Surely, as we translate each dialog, we can (at least on Linux with
> freetype etc.) calculate the size of each string, and check that it
> doesn't overlap with any other strings in the dialog.
>
> Surely we can also check that we have 100% translation by other means -
> checking the .po files etc.
>
> Of course, some sanity check is good too, but if we could automate this,
> would it save a lot of time?
>
> All the best,
>
> Michael.
>
