hey @sebhth @ekansa @Electricarchaeo @precatlady @ryanfb @steko @jenniferlouise @mlemweb @seanmunger @captain_primate @JubalBarca
What do you think? Can we get #AncientToday going as a hashtag where we post something that each of us is working on today in the field of #AncientStudies (as broadly understood)? #pedagogy #digital and #YakShaving are fine, not just #research and #writing
I'll start ...
@paregorios @captain_primate @seanmunger @mlemweb @jenniferlouise @steko @ryanfb @Electricarchaeo @ekansa @sebhth
Hm. I feel like "Ancient" risks being confusing when applied to my stuff, since medievalists like me often use it with the more specific meaning of "pre-medieval"?
Yeah, that's a conventional disciplinary boundary that IMO gets in the way of so many interesting things. But at the same time, transgressing the terminology of that boundary can sow confusion and misguided reactions that are equally obstructive.
:|
@sebhth @ekansa @Electricarchaeo @ryanfb @steko @jenniferlouise @mlemweb @seanmunger @captain_primate
@paregorios @captain_primate @seanmunger @mlemweb @jenniferlouise @steko @ryanfb @Electricarchaeo @sebhth @JubalBarca
Yep. A hashtag tends to work if people can recognize its meaning without much effort. That tends to entrench concepts that maybe should be disrupted or questioned.
I don't have an easy answer, but I'm in favor of following some updates about "humanistic scholarship about older things".
@ekansa @paregorios @captain_primate @seanmunger @mlemweb @jenniferlouise @steko @ryanfb @Electricarchaeo @sebhth
Maybe ditch the "Today" since it'll just be whenever we're posting it, and use #BeforeModernTimes or something? Though I guess that may imply different things to non-historians who don't think of "modern" as "post sixteenth century"...
@JubalBarca @paregorios @captain_primate @seanmunger @mlemweb @jenniferlouise @steko @ryanfb @Electricarchaeo @sebhth
#BeforeModernTimes works for me, and I'm OK with fuzziness and differences of opinion about what that actually means.
@JubalBarca @paregorios @captain_primate @seanmunger @mlemweb @jenniferlouise @steko @ryanfb @Electricarchaeo @sebhth
#BeforeModernTimes
OK. I want to update how we archive data with Open Context.
I'm seriously looking at Zenodo. The question is: is it still worthwhile to put 1.5 million OC GeoJSON-LD files in GitHub? GitHub is a pain at that kind of scale, but if people think it's essential or useful, I want to know.
Otherwise, I'll just build off the Zenodo API.
Thoughts?
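For concreteness, here's a minimal sketch of what "building off the Zenodo API" could look like: create a deposition, push a file to its bucket, attach metadata, publish. The ZENODO_TOKEN environment variable, the file name, and the metadata values are assumptions for illustration, and error handling is omitted.

```python
# Minimal sketch of a scripted Zenodo deposit (sandbox), assuming a
# personal access token in the ZENODO_TOKEN environment variable.
import os
import requests

BASE = "https://sandbox.zenodo.org/api"  # use https://zenodo.org/api in production
params = {"access_token": os.environ["ZENODO_TOKEN"]}

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions", params=params, json={}).json()

# 2. Upload a file into the deposition's file bucket.
bucket = dep["links"]["bucket"]
with open("oc-project.zip", "rb") as fp:  # hypothetical bundle name
    requests.put(f"{bucket}/oc-project.zip", data=fp, params=params)

# 3. Attach metadata (scripted from metadata Open Context already has).
metadata = {
    "metadata": {
        "title": "Example Open Context project archive",  # hypothetical
        "upload_type": "dataset",
        "description": "GeoJSON-LD and media files for one project.",
        "creators": [{"name": "Kansa, Eric"}],
    }
}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}", params=params, json=metadata)

# 4. Publish the deposition (this mints the DOI).
requests.post(f"{BASE}/deposit/depositions/{dep['id']}/actions/publish", params=params)
```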
Who uses your data and do they like getting it from GitHub?
Because GitHub is a commercial single point of failure. And if it's more trouble than it's worth for you, and if nobody's clamoring for it, why not go straight to the archive?
That's my impression also. I think putting it into GitHub will be more trouble than it's worth. I just wanted to check whether anyone had a compelling reason to also use it to version control structured data.
@ekansa Well, I like the notion of being able to do that with the #PleiadesGazetteer #JSON (and that's why I keep that JSON formatted and key-sorted), but *practically* I'm not sure what I'm getting out of it.
Yep, our JSON also has predictable key sorting. It's also sorta fun to see the GeoJSON rendered in GitHub. But Zenodo does the versioning thing.
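As an aside, predictable key sorting is a one-liner in most serializers; a sketch in Python with a toy feature:

```python
import json

# Serialize GeoJSON with sorted keys and stable indentation so that
# line-based diffs (e.g. in Git) only reflect real data changes.
feature = {"type": "Feature", "geometry": None, "properties": {"b": 2, "a": 1}}
print(json.dumps(feature, sort_keys=True, indent=2))
```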
Next question: tens of thousands of GeoJSON-LD files in an archival "deposit". Should I just compress the many files into a few giant tarballs, or is there value in having each one individually identified and accessible in the repository?
My guess has been that a single or small number of giant, compressed blobs is preferable so that interested parties don't have to do lots of repetitious interaction with the archive server. Thus, in both Zenodo and the NYU FDA, Pleiades data is a single zip file:
https://doi.org/10.5281/zenodo.1193921
http://hdl.handle.net/2451/41737
This is based solely on personal annoyance with getting data from other places for other things.
Yep. OK. This all makes sense. I'll get cracking on this! I will make a separate archive (probably zip, because it's easier for non-Linux folks) for each DOI-identified dataset in Open Context. Some will have tens of thousands of JSON files, image files, etc.; some will be just a CSV (for table dumps).
Sound workable?
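To make the bundling step concrete, here's a sketch of packing one dataset's files into a single zip; the directory layout and names are hypothetical placeholders.

```python
import zipfile
from pathlib import Path

def bundle_dataset(src_dir: str, zip_path: str) -> None:
    """Pack every file under src_dir into one zip, preserving relative paths."""
    src = Path(src_dir)
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                zf.write(path, arcname=path.relative_to(src))

# Hypothetical usage: one archive per DOI-identified dataset.
bundle_dataset("datasets/example-project", "example-project.zip")
```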
@ekansa @paregorios this all sounds like what I would do myself: 1) move away from GitHub - single point of failure and not very good for "big data" 2) leverage the Zenodo API with versioning 3) one dataset = one archive file (with a descriptor? e.g. a datapackage.json or similar metadata)
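To illustrate the descriptor idea, a minimal Frictionless-style datapackage.json could be generated alongside each bundle; the name, title, license, and resource paths below are hypothetical placeholders.

```python
import json

# Minimal Frictionless-style descriptor for one dataset bundle.
descriptor = {
    "name": "oc-example-project",
    "title": "Example Open Context project",
    "licenses": [{"name": "CC-BY-4.0",
                  "path": "https://creativecommons.org/licenses/by/4.0/"}],
    "resources": [
        {"name": "features", "path": "features.geojson", "format": "geojson"},
        {"name": "table-dump", "path": "tables.csv", "format": "csv"},
    ],
}
with open("datapackage.json", "w") as fp:
    json.dump(descriptor, fp, indent=2, sort_keys=True)
```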
Stefano -> YES! Thanks, I'm now actively developing archiving with Zenodo. The main issue for us is always granularity, and bundling up a bunch of JSON files into one submission is very attractive. For the most part that will work, but some cases raise complex licensing issues: some datasets carry a variety of licenses for their images, so I have to break those apart into different archive bundles for Zenodo.
Nothing is ever simple.
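One way to handle that licensing split, sketched with a hypothetical manifest mapping each file to its license:

```python
from collections import defaultdict

# Hypothetical manifest: file path -> license identifier.
manifest = {
    "features/f1.geojson": "CC-BY-4.0",
    "images/photo1.jpg": "CC-BY-NC-4.0",
    "images/photo2.jpg": "CC-BY-4.0",
}

# Group files by license; each group becomes its own bundle, since a
# Zenodo deposition carries a single license in its metadata.
bundles = defaultdict(list)
for path, license_id in manifest.items():
    bundles[license_id].append(path)

for license_id, paths in bundles.items():
    print(license_id, "->", len(paths), "files")
```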
Here's a test upload (in the sandbox) of the files associated with a small project.
Does this look useful?
https://sandbox.zenodo.org/record/217212
Tips for improvement? All the metadata is generated from metadata we already had, so I'm pleased by the ability to script uploads and documentation.
Second example, this one has Pleiades and PeriodO URIs in the metadata:
https://sandbox.zenodo.org/record/217236
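For reference, Pleiades and PeriodO URIs can ride along in Zenodo's related_identifiers metadata. A sketch of that metadata block, which would be PUT to the deposition as in the earlier sketch; Pleiades 423025 is Roma, while the PeriodO ARK is a placeholder, not a real period ID.

```python
# Deposition metadata carrying Pleiades and PeriodO URIs as related identifiers.
metadata = {
    "metadata": {
        "title": "Example Open Context project archive",  # hypothetical
        "upload_type": "dataset",
        "description": "GeoJSON-LD and media files for one project.",
        "creators": [{"name": "Kansa, Eric"}],
        "related_identifiers": [
            {"identifier": "https://pleiades.stoa.org/places/423025",
             "relation": "references"},
            {"identifier": "http://n2t.net/ark:/99152/p0EXAMPLE",  # placeholder
             "relation": "references"},
        ],
    }
}
```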
I think I'm more or less ready to advance from the sandbox to the real API?
Do you think it is good to go for production use?