#3722 Allow expected failures in integration tests
Closed: Fixed. Opened 10 years ago by pviktori.

Subtask of #3621 Automated integration testing

We should write tests for features that are not done yet or are known to be broken (and have an open ticket). These should call rlFail in BeakerLib, but be reported as a skip (or xfail, or even a pass) in Nose/Jenkins.


Moving open tickets to next month bucket.

This is currently not required by the test suite. Moving to a further release and decreasing the priority.

3.4 development was shifted by one month; moving tickets to reflect reality better.

Adjusting time plan - 3.4 development was postponed as we focused on 3.3.x testing and stabilization.

Moving to Future Releases; this is not currently blocking any of our efforts. The last expected failures were addressed in #4271.

Moving to the same bucket as its prerequisite, #2933.

Please disregard comment:9, it was meant for #3741.

Now possible by calling pytest.xfail('reason') inside the test, or by marking a test function with @pytest.mark.xfail(reason='reason').

http://pytest.org/latest/skipping.html
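A minimal sketch of both approaches; the test names and reason strings below are placeholders, not taken from the FreeIPA test suite:

```python
import pytest


@pytest.mark.xfail(reason="feature not implemented yet, see the tracking ticket")
def test_not_implemented_feature():
    # Runs normally; a failure is reported as "xfailed" instead of "failed".
    assert False


def test_known_broken_path():
    # Imperative form: from this point on the test is treated as an
    # expected failure, e.g. when a known-broken code path is reached.
    pytest.xfail("known broken, tracked in an open ticket")
```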

Metadata Update from @pviktori:
- Issue assigned to pviktori
- Issue set to the milestone: FreeIPA 4.1.3

7 years ago
