Can't download archive over https #459
I want to enable consumers with a vanilla Nix installation, such as after, to run some tests to validate that our software works as advertised, e.g.

This doesn't work because the archive URL times out.

Forgejo seems to respond HTTP 504 for both tarball and zip.
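For illustration, the kind of archive fetch that fails might look roughly like this; the repository and ref below are assumptions on my part, not taken from the report:

```nix
# Illustration only: the kind of archive fetch that currently times out with
# HTTP 504. The repository and ref here are assumptions, not from the issue.
let
  src = builtins.fetchTarball
    "https://git.fediversity.eu/fediversity/fediversity/archive/main.tar.gz";
in
src
```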
The Forgejo server's logs at `/var/log/nginx/{access,error}.log` unfortunately did not mention these requests.

To be fair, we could update Forgejo, although for all I know this could relate to nginx as well?
This is also currently blocking us from sourcing dependencies from our Forgejo (using npins), e.g. `vars/nix-templating`.
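As a rough sketch of what that dependency sourcing looks like (the pin name is taken from the comment above; the surrounding usage is assumed), resolving the pin downloads the repository archive over HTTPS, which appears to be exactly the request that currently 504s:

```nix
# Sketch of consuming a pin from our Forgejo via npins. The pin name
# nix-templating comes from the comment above; the rest is an assumption.
let
  sources = import ./npins;      # generated by `npins init` / `npins add`
in
import sources.nix-templating    # assumes the pinned repo ships a default.nix
```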
Should probably recheck the nginx settings for time-out values, which now seem to be set to one minute, corresponding to the defaults for `services.nginx.proxyTimeout` and `services.nginx.uwsgiTimeout`.
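In NixOS terms, the bump being considered would look roughly like this; the `15m` value is the one tried in the next comment, and where exactly this lives in our configuration is an assumption:

```nix
# Sketch of the nginx time-out bump discussed in this thread; the option
# names and the one-minute defaults are the ones mentioned above.
{
  services.nginx = {
    proxyTimeout = "15m";   # default: 60s
    uwsgiTimeout = "15m";   # default: 60s
  };
}
```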
Increasing this to `15m` (for both) seems to not yet fix this somehow:

https://git.fediversity.eu/admin/monitor/queue shows queue `repo-archive` as dysfunctional, with a massive number of hanging items, apparently systematically failing to get processed. The following interaction feels suspicious:
Edit: ensuring package availability at `machines/dev/vm02116/forgejo.nix` appears not to have fixed this yet:

So far I'm not finding issues mentioning that particular queue at Forgejo's issue tracker. I'm trying to figure out how to investigate the processing of this queue, whether in code or by inspecting the LevelDB that our Forgejo seems configured to use (which seems to be used in-process).
Gitea has the setting `queue.*.WORKERS` (default `0`), while Forgejo only documents `MAX_WORKERS` (default: CPUs/2). To be fair, I cannot find even `MAX_WORKERS` defined in Forgejo's code, so perhaps these are all just inherited from levelqueue, implying `WORKERS` might work for Forgejo as well.

So far though, following grouping with `.`, either of:

- `services.forgejo.settings."queue.repo-archive".WORKERS = 1;`
- `services.forgejo.settings."queue.repo-archive".WORKERS = 1;`

.. while showing up in the Forgejo config file, seem not to get reflected in the number of workers (which, to be fair, is listed at 0 for each queue now, while other queues tend to have functioned fine).
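For reference, a sketch of how such an override lands in Forgejo's `app.ini` via the NixOS module; whether Forgejo actually honours it is the open question above:

```nix
# Sketch: the attribute name becomes the INI section in app.ini. Whether
# Forgejo still honours WORKERS (rather than only MAX_WORKERS) is exactly
# the question raised in this thread.
{
  services.forgejo.settings."queue.repo-archive" = {
    WORKERS = 1;        # renders as: [queue.repo-archive]  WORKERS = 1
    # MAX_WORKERS = 1;  # the knob Forgejo actually documents
  };
}
```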
The actual processing logic seems defined at `doArchive`, though I've yet to figure out how to use the info there to find a next step to try.

From an `strace` log, archiving jobs are getting `pkill`ed by Forgejo:

An attempt to clear the queues:

.. so far seems not to have affected the current count. Is the queue so big that flushing just times out?
Unfortunately, neither leveldb nor levelqueue seem to offer their own CLIs. Nor does `services.forgejo.settings.queue.LENGTH = 100;` get rid of the superfluous jobs.
Now, queue contents seem tied to `services.forgejo.settings.queue.TYPE`, meaning switching this temporarily can resolve the symptoms:

- `level`
- `channel` (#559): switching to this made all queues display `0` outstanding items, and new download requests seem to go through! 🎉
- `redis`

To get rid of the work-around, I may want to look into clearing the leveldb queue: let's see if I could just (re)move it to clear.
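A sketch of that temporary work-around as a NixOS setting, using the attribute path from earlier in this thread:

```nix
# Sketch of the temporary work-around: switch the queue backend from the
# on-disk level(db) queue to an in-memory channel queue, so the stuck
# leveldb contents are no longer consulted.
{
  services.forgejo.settings.queue.TYPE = "channel";  # default: "level"
}
```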
That seems to have done it, so I opened #562 now that this seems fixed (for now?).