Readers are currently storing and organising almost 2,000 e-books in their Libreture libraries.
Join them with a 30-day FREE Trial.
@kevinbeynon Still working on uploads. :)
@kevinbeynon I was going to see if I can re-split my zips into smaller sizes; the 500 MB one is choking on #7 consistently.
You 'should' be able to, yes.
If you try it, let me know how you get on.
The developer has found some ways to make the process more efficient, but any additional feedback you have in the meantime would be much appreciated.
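For anyone wanting to try the re-splitting idea, here's a minimal sketch using only Python's standard library; the 450 MB cap and file names are invented, and it uses uncompressed entry sizes as a rough proxy for the final archive size:

import zipfile

MAX_BYTES = 450 * 1024 * 1024  # stay safely under the 500 MB limit

def resplit(src_path, dest_prefix):
    # Repack a large zip into several smaller ones. A single entry
    # bigger than the cap will still produce an oversized part.
    part, used = 1, 0
    out = zipfile.ZipFile(f"{dest_prefix}-{part}.zip", "w", zipfile.ZIP_DEFLATED)
    with zipfile.ZipFile(src_path) as src:
        for info in src.infolist():
            if used and used + info.file_size > MAX_BYTES:
                out.close()
                part, used = part + 1, 0
                out = zipfile.ZipFile(f"{dest_prefix}-{part}.zip", "w", zipfile.ZIP_DEFLATED)
            out.writestr(info, src.read(info))
            used += info.file_size
    out.close()

resplit("books.zip", "books-part")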
@kevinbeynon Well, for my day job, we have a tendency to get customers doing 2-3 GB files all the time. We ended up forking off a separate processing job outside of the IIS/Apache process specifically to handle it. That helped avoid some of the memory pressure but also prevented the web server from reaping the process.
@kevinbeynon With more modern architecture, you might have a Redis cache or message broker that passes it to background processors. But the principle is the same: the web server puts it somewhere, something else does the work, and the web server gives a response back.
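A minimal sketch of that hand-off, assuming the redis-py client; the queue name, path handling, and response shape are all invented for illustration:

import json

import redis

r = redis.Redis()

def handle_upload(saved_path, user_id):
    # Web side: just record the work to be done; no parsing here,
    # so the request returns quickly and memory use stays low.
    r.rpush("upload-jobs", json.dumps({"path": saved_path, "user": user_id}))
    return {"status": "queued"}  # something the JS uploader GUI can poll against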
I see. Yes, that's what I was thinking: feeding back to the web server with info for the JS uploader GUI.
Might be cheaper having multiple servers than upgrading the existing one...
@kevinbeynon JS is nice for the fancier browsers, but a fallback for the e-readers would also be good; there are some decent libraries that let you do that.
I'm not sure where the OOM is coming from, but I suspect receiving the files isn't the problem so much as trying to open them and do things with them; that is where a secondary process can help a lot. (It is more complicated, though.)
The JS helps with getting the files. Treating the two as separate tasks will make it easier.
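Picking up the queue sketch from above, the receiving side would be a separate worker process (not run under IIS/Apache) that blocks on the queue and does the slow archive work; again, names are illustrative:

import json
import zipfile

import redis

r = redis.Redis()

while True:
    _, raw = r.blpop("upload-jobs")   # blocks until the web side enqueues a job
    job = json.loads(raw)
    with zipfile.ZipFile(job["path"]) as zf:
        names = zf.namelist()         # e.g. the e-books inside the uploaded zip
    # ... parse metadata, write results to the DB, notify the web app ...
    print(f"processed {job['path']}: {len(names)} entries")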
@kevinbeynon Model-View-Controller is life. :) I have a tendency to use secondary processes off a message queue instead of bumping up the front ends. Mainly because it gives you more flexibility for scaling with a 2- or 3-tier environment (web, processing, DB).
They've done a good job of building it, and they're great at helping out with ideas/approaches. The separation of web and parser sounds like a go-er and something they're likely happy to address.
Yes, if you are planning on supporting huge files, getting secondary services established and working early will make your life easier.
That is how Stack Overflow, GitHub, GitLab, and Reddit (to a lesser degree) work.
Stupid front ends that are difficult to bring down, back ends that can be scaled up with demand.
This approach would also let you have people correct the metadata of their books as a background process (the "command" pattern).
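A tiny sketch of that "command" pattern: each queued message names an action plus its arguments, and the worker dispatches on the action name. The handler and field names here are invented:

import json

def update_metadata(book_id, fields):
    print(f"updating book {book_id} with {fields}")  # stand-in for a real DB write

HANDLERS = {"update_metadata": update_metadata}

def dispatch(raw):
    cmd = json.loads(raw)
    HANDLERS[cmd["action"]](**cmd["args"])

dispatch(json.dumps({"action": "update_metadata",
                     "args": {"book_id": 42, "fields": {"title": "Dune"}}}))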
Thank you for this. It's helping me clarify my thinking.
Sooner vs Later is an exercise I've been applying to each feature I've requested so far.
Each can potentially bring in more users, but the costs come out of savings rather than revenue.
But if there's one thing I can get behind, it's doing it right! [salute]
It's also interesting seeing the difference between services with large media-storage requirements and database-driven sites where hosting isn't as much of an issue.
@kevinbeynon Yeah. Learning the differences between those two can be a killer. My last few jobs have been dealing with compliance for large companies, so learning how to write apps that show 1.2M rows of data in real time, or generate 12k-page PDFs, is both challenging and frustrating at the same time. :)
@kevinbeynon Having your task plan really helps with that. I've found I try to have 1-, 2-, 4-, 8-, and 16-year plans when working on projects. Sometimes it goes wrong, but you make different decisions when you know where you are heading.
Speaking of which, have you considered a public site for gathering feature requests/support? Something that can be voted on, like UserVoice or GitLab's issue-only projects?
Oh, I have those. ;) They're tied into my other projects over at IndieBookCards.com and ScarletFerret.com
I have been looking for feature-request/bug-reporting tools. I'm collecting them on a private site for now. I'm confident I can address issues that are 'out-of-scope', but it still worries me a bit after seeing some rabid responses to other software projects' development directions.
I think it's a processing issue too.
And splitting this work off sooner rather than later will save on running costs as well as dev costs, right?
The dev and I are looking at the Uploader & Parser, so I think this is the next big piece of work.
That makes sense and, funnily enough, it's something I've been wondering about.
I've been avoiding JavaScript components to aid with the upload to ensure accessibility and reduce load on low-spec devices, like e-readers. But it may be time to look into a JS uploader to help out.
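One way to get both: keep a plain HTML form POST that works on e-readers with no JS, and let a JS uploader target the same endpoint later. Here's a sketch assuming a Flask-style view, purely for illustration since the thread doesn't say what Libreture runs on; streaming to disk in fixed-size chunks keeps memory flat however big the upload is:

from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["archive"]              # form field name is illustrative
    with open(f"/tmp/{secure_filename(f.filename)}", "wb") as out:
        while chunk := f.stream.read(64 * 1024):  # 64 KB at a time
            out.write(chunk)
    return {"status": "queued"}               # hand off to the background worker here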
@kevinbeynon Related to that, it wasn't clear: could I upload multiple zips at the same time? Say, if they were 100 MB each?