WIP: (Re-)introduce a working CI test for Pixelfed #7
Reference: fediversity/fediversity#7
No description provided.
@koen @kevin The actions runners take forever to run my tests. On the Tweag builders, they run in under 2 minutes. On my laptop, they run in 15 to 30 minutes, depending on whether I'm also in a call at the same time, for instance. Here, they take nearly an hour! Is there any way we could get beefier machines? I know we can add more cores and more RAM, but the effect is only marginal; I'd rather have faster cores, but I don't know if that is possible.
Something worth mentioning here: I believe the CI fails precisely because the actions runners are so slow.
Specifically, Selenium tries to start the Chrome driver, and there is probably a timeout somewhere after which, if Chrome is still not detected, Selenium crashes. I know how to increase the timeouts between steps of the Selenium scripts, but not the timeout for starting the browser itself. Still, I will investigate.
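For the timeouts between steps, the mechanism is a poll-with-deadline loop (which is also, roughly, what Selenium does internally while waiting for the browser or an element to become available). A minimal stdlib sketch of that pattern, under the assumption that `wait_until` is a hypothetical helper and not a Selenium API:

```python
import time

def wait_until(predicate, timeout=60.0, poll=0.5):
    """Poll `predicate` until it returns truthy or `timeout` seconds elapse.

    This mirrors the poll-with-deadline pattern used by explicit waits:
    a slow runner needs a larger `timeout`, not a different mechanism.
    NOTE: `wait_until` is an illustrative helper, not part of Selenium.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s")

# Example: a condition that only becomes true after a few polls,
# standing in for "the browser has finished starting".
calls = {"n": 0}
def browser_ready():
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_until(browser_ready, timeout=5.0, poll=0.01) is True
```

On a slow CI runner, the equivalent of passing a larger `timeout` here is what raising the per-step Selenium waits does; the open question in this thread is whether the browser-startup wait exposes the same knob.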
@Niols is this still relevant? We did get a beefier machine somewhere around that time, no?
We got a beefier VM, which I have successfully used for CI so far. There was also work to get a bare-metal machine for even better performance, but we never finished setting it up as a CI runner. Now that we have a reasonable CI, the remaining work is to determine which failures were due to the runner's low performance and which were due to the test being too flaky. So I'd say this is still relevant, and I should prioritise some time soon to make it run again.
Pull request closed