@kevinbeynon Still working on uploads. :)
Me too. ;)
I see you're sometimes hitting the memory limit while uploading. We're working on increasing the capacity while also streamlining the process.
Thanks again for your support.
@kevinbeynon I was going to see if I can re-split my zips into smaller chunks; the 500 MB one is choking on #7 consistently.
If you try it, let me know how you get on.
The developer has found some ways to make the process more efficient, but any additional feedback you have in the meantime would be much appreciated.
@kevinbeynon Well, for my day job, we tend to get customers uploading 2-3 GB files all the time. We ended up forking off a separate processing job outside the IIS/Apache process specifically to handle them. That avoided some of the memory pressure and also prevented the web server from reaping the process.
@kevinbeynon With a more modern architecture, you might have a Redis cache or message broker that passes uploads to background processors. But the principle is the same: the web server puts the file somewhere, something else does the work, and the web server gives a response back.
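A minimal sketch of that hand-off, assuming a Flask front end and a Redis-backed RQ queue (the framework, the `/upload` route, and `process_upload` are illustrative stand-ins, not Libreture's actual stack):

```python
# app.py - web tier: accept the upload, hand it off, respond fast.
import uuid
from pathlib import Path

from flask import Flask, request, jsonify
from redis import Redis
from rq import Queue

from worker import process_upload  # the background task, defined elsewhere

app = Flask(__name__)
queue = Queue("uploads", connection=Redis())
UPLOAD_DIR = Path("/var/spool/uploads")

@app.post("/upload")
def upload():
    f = request.files["file"]
    # Stream the upload straight to disk; the web process never parses it.
    dest = UPLOAD_DIR / f"{uuid.uuid4()}.zip"
    f.save(dest)
    # Enqueue the heavy work for a separate worker process.
    job = queue.enqueue(process_upload, str(dest))
    # Respond immediately with a job id the client can poll.
    return jsonify({"job_id": job.id}), 202
```

The worker runs as a separate OS process (`rq worker uploads`), so an out-of-memory kill while parsing a huge archive never takes the web tier down with it.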
I see. Yes, that's what I was thinking. Feeding status back through the web server to the JS uploader GUI (sketched below).
Might be cheaper having multiple servers than upgrading the existing one...
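That feedback loop can be as simple as a status endpoint the JS uploader polls. Continuing the hypothetical Flask/RQ sketch above (same caveats apply):

```python
from flask import jsonify
from redis import Redis
from rq.job import Job

@app.get("/upload/<job_id>/status")
def upload_status(job_id: str):
    # Fetch the job's state from Redis and report it to the uploader GUI.
    job = Job.fetch(job_id, connection=Redis())
    return jsonify({"status": job.get_status(), "result": job.result})
```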
@kevinbeynon JS is nice for the fancier browsers, but a fallback for the e-readers would also be good; there are some decent libraries that let you do that.
I'm not sure where the OOM is coming from, but I suspect receiving the files isn't the problem so much as opening them and doing things with them; that's where a secondary process can help a lot. (It is more complicated, though.)
The JS helps with getting the files. Treating the two as separate tasks will make it easier (see the sketch below).
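On the "opening them and doing things" point, the worker can keep memory flat by streaming each archive member out in chunks instead of reading it whole. A sketch of the hypothetical `process_upload` task from above, using only the standard library:

```python
# worker.py - processing tier; run with: rq worker uploads
import shutil
import zipfile
from pathlib import Path

EXTRACT_DIR = Path("/var/spool/extracted")
CHUNK = 1024 * 1024  # copy 1 MiB at a time so memory use stays flat

def process_upload(zip_path: str) -> list[str]:
    """Extract a book archive member by member, never loading it whole."""
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            target = EXTRACT_DIR / Path(info.filename).name
            # zf.open() returns a streaming file object; copy it in chunks.
            with zf.open(info) as src, open(target, "wb") as dst:
                shutil.copyfileobj(src, dst, CHUNK)
            extracted.append(str(target))
    return extracted
```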
@kevinbeynon Model-View-Controller is life. :) I tend to use secondary processes off a message queue instead of beefing up the front ends, mainly because it gives you more flexibility for scaling in a 2- or 3-tier environment (web, processing, DB).
They've done a good job of building it, and they're great at helping out with ideas/approaches. The separation of web and parser sounds like a go-er and something they're likely happy to address.
Yes, if you are planning on supporting huge files, sorting out secondary services early will make your life easier.
That is how Stack Overflow, GitHub, GitLab, and Reddit (to a lesser degree) work.
Dumb front ends that are hard to bring down, back ends that can be scaled up with demand.
This approach would also let you have people correct the metadata of their books as a background process (the "command" pattern).
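A sketch of that idea, assuming the same queue: each correction becomes a small serializable command object that a worker executes later (`save_book_field` here is a hypothetical stand-in for the real data layer):

```python
# commands.py - metadata edits queued as commands, run off the web tier.
from dataclasses import dataclass

def save_book_field(book_id: int, field: str, value: str) -> None:
    # Stub standing in for the real persistence layer.
    print(f"book {book_id}: set {field} = {value!r}")

@dataclass
class UpdateMetadataCommand:
    book_id: int
    field: str       # e.g. "title" or "author"
    new_value: str

    def execute(self) -> None:
        save_book_field(self.book_id, self.field, self.new_value)

def run_command(cmd: UpdateMetadataCommand) -> None:
    # Worker entry point: enqueue with queue.enqueue(run_command, cmd).
    cmd.execute()
```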
@kevinbeynon Having your task plan really helps with that. I try to have 1-, 2-, 4-, 8-, and 16-year plans when working on projects. Sometimes it goes wrong, but I've found you make different decisions when you know where you're heading.
Speaking of which, have you considered a public site for gathering feature requests/support? Something that can be voted on, like UserVoice or GitLab's issue-only projects?
@dmoonfire
Oh, I have those. ;) They're tied into my other projects over at IndieBookCards.com and ScarletFerret.com
I have been looking at feature-request/bug-reporting tools. I'm collecting requests on a private site for now. I'm confident I can address issues that are 'out of scope', but it still worries me a bit after seeing some rabid responses to other software's development directions.