Wednesday, October 10, 2012

Scaling! Or, why I need a CS degree to do this stuff

When last we spoke, I had just accessioned some materials from the EAD Roundtable which I thought were going to be good test cases for using these tools. Shortly after that, however, I was presented with an even juicier opportunity: the event files of the UWM Chancellor. These files contain event overview forms, some correspondence, some presentations, and other material relevant to the chancellor's activities during the course of a given academic year. The files are likely to be relatively high-use and high-value, which shot them to the front of my testing queue. They were also, by and large, actual document files, which meant not much weeding would be necessary to pick the researcher-usable wheat from the supporting-files chaff.

Oh, yes, one more thing about these files: there are 13,000 of them. 10 GB of Chancellory goodness.

I hear the Digital Archivists laughing at my paltry numbers even now. "Ohhh, TEN GIGS," they say. "That's almost the size of the manifest for this huge research data set I'm archiving." Yeah, well, we can't all have integrated TDR software tools, OK guys? Some of us have to hold our e-records processes together with chewing gum and baling wire. 10 GB is far and away the largest accession that the UWM Archives has taken in thus far, so for me it's a challenge, all right? (I am sure that if and when I get this data curation initiative off the ground I will look back fondly on the days when I worried about accessions that were merely 10 GB.)

Anyway. First thing I noticed is that NONE of the tools I've been using scale on their own. From the reading I've done, the reason for this is tied to the same thing that makes these tools platform neutral: they run in Java, and the Java virtual machine only gives a program a limited amount of memory by default to extract file metadata/build checksums/whatever. This many files means that memory runs out fast and the process stops (if the program doesn't crash altogether). OK, fine. So I just have to do the analysis in chunks instead. This is annoying, but not undoable. DROID (which I'm going to discuss soon, I promise!) runs fine when I chunk it out by year, although it does give me the following in my SIP metadata folder:
Spot the irony: ".droid" is not listed in PRONOM as a recognized file format. This is what passes for humor in the Archives world, folks. 
Note the "comprehensive breakdown" files-- these are various outputs of DROID's reporting function, which makes the issue of multiple profiles significantly less problematic because I can open all of these profiles and have them vomit their output into a single chart for comparison. So far, so good.
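(A note on the chunking, for anyone trying to reproduce this: the chunk-by-chunk runs themselves are easy to script if you're even slightly comfortable with Python, and Java programs will generally accept a bigger memory ceiling if launched with the -Xmx flag. The jar name and argument style below are placeholders-- I haven't verified what these particular tools accept on the command line-- so treat this as a sketch, not gospel:)

```
# Sketch: run a Java-based tool over one year-folder at a time, with a
# bigger JVM heap (-Xmx2g). "tool.jar" and its argument style are
# placeholders -- check each tool's docs for its real command line.
import os
import subprocess

ACCESSION = "chancellor_event_files"  # hypothetical path to the accession

for year in sorted(os.listdir(ACCESSION)):
    chunk = os.path.join(ACCESSION, year)
    if os.path.isdir(chunk):
        subprocess.run(["java", "-Xmx2g", "-jar", "tool.jar", chunk],
                       check=True)  # stop loudly if a chunk fails
```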


The New Zealand Metadata Harvester, on the other hand... less far, less good. As I expected, it couldn't extract the whole accession in one go, so I attempted to chunk it down further. In most cases it handled a year at a time just fine; in other cases, I got this result instead:

Specific!

OK. No problem. Presumably, by breaking it down even further, I can get the number of files per chunk small enough for the extraction to run. This works for me for the first few years, though I have to remove the October folder from all of them to make them work, which leads me to believe that the NZMH is tripping over a parse error in those folders. This theory is quickly demolished when I get to 2009, at which point *none* of the folders will extract properly. In a fit of optimism, I hit the logs to attempt to determine the file or folder the program is tripping over.
"Your tears are delicious."--New Zealand Metadata Harvester
I have now run into one of the biggest problems to plague human-computer interaction since the UNIVAC days, namely that front-line coders are, by and large, AWFUL at writing for laymen. I need to know from the logs:
  1. What happened
  2. What file it happened to
  3. How I, the average user, can fix it so it stops happening
What I have been given instead:
  1. What happened
  2. WHEN it happened (Seriously? I need to know to the second when my process crapped out?)
  3. The specific script violation(s) invoked
  4. How someone who is competent with coding can fix it so it stops happening (I think. I honestly don't know, since I don't read code)
The whole point of writing a GUI is so that people like me who aren't comfortable around command lines can use the program and solve problems when they arise. So, of course, when a problem DOES arise... I'm sent back to information that I need facility with the command line to use. Thanks, guys. Honestly, the lack of intelligible documentation for so many of these tools is the single greatest barrier to entry for e-records processing, and until it gets better, too many archivists are just going to throw up their hands and give up on this process altogether.

In the meantime, *I'm* not giving up, though I am leaving huge gaps in the metadata for some of these chunks (as well as seriously considering going back to school for Comp Sci if I am going to continue down the e-recs road... though that would also require time and money that I don't have). I suppose I could go folder by folder to determine which one specifically is causing the problem, but that sort of defeats the purpose of automating the process with metadata extraction tools, doesn't it? Oh well. Something is better than nothing, I suppose, though it will be necessary to indicate somehow that the metadata is incomplete (for now I am using the "comment" tag in the NZMH output XML, as sketched below). I haven't even tried the Duke Data Accessioner on a collection this big yet, so that's going to be my next step, though I also want to figure out if I can run it without having to create copies of the files-- one set of this stuff is enough! (Until I create a working copy set for processing, anyway).
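For the curious, here's roughly what I mean by flagging the gaps with a "comment" tag. This is a sketch only-- the element names and file name are invented for illustration, not NZMH's actual schema, so adjust to whatever the harvester really emits:

```
# Sketch: append a comment element flagging an incomplete harvest.
# Element names and the file name are illustrative, not NZMH's schema.
import xml.etree.ElementTree as ET

tree = ET.parse("2009_extract.xml")  # hypothetical harvester output
root = tree.getroot()

note = ET.SubElement(root, "comment")
note.text = ("INCOMPLETE: extraction failed on the October folder; "
             "metadata for those files is missing.")

tree.write("2009_extract.xml", encoding="utf-8", xml_declaration=True)
```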

I AM pleased to note that the NARA File Analyzer seems to work fine, if sluggishly, for generating a manifest of files with checksums, and it was able to get the entire accession in one go. Score one for the Federal Government! As noted in my post about ingest, however, the output is not very pretty, though if we have to use it we have to use it. I think that when my fieldworker puts together her procedures list for e-records processing, a separate section for large accessions is going to be necessary anyway, so we might as well think about which tools might work better and how use of them is going to change.
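(If you want to see what that kind of manifest amounts to under the hood, here's a miniature of the idea in Python-- emphatically NOT the NARA tool's actual code, and the paths are made up. It reads files in blocks, so big files don't eat all the memory, which is more than I can say for some tools:)

```
# Sketch: walk an accession and write a tab-delimited manifest of
# path, size in bytes, and MD5 checksum -- roughly the information
# the NARA File Analyzer gives me, minus the GUI.
import csv
import hashlib
import os

def md5_of(path, blocksize=2**20):
    """Checksum a file in 1 MB blocks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            h.update(block)
    return h.hexdigest()

with open("manifest.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["path", "bytes", "md5"])
    for dirpath, _, filenames in os.walk("accession_2012_031"):  # hypothetical
        for name in filenames:
            full = os.path.join(dirpath, name)
            writer.writerow([full, os.path.getsize(full), md5_of(full)])
```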

Next up: heading backwards and talking about DROID, JHOVE, and NZMH in more detail as part two of metadata generation for ingest. Wheee!

Monday, September 24, 2012

A (not-so-)Brief Aside on Metadata and Its Uses


So, you may have noticed the gap in updates for this so-called “series”. This is largely because I have been actually doing the work instead of blogging about it (oh sure, “Productivity”, a likely story. But in this case it’s true!). In addition to this meaning that I will have a better grasp on what these tools can do when I DO get around to writing about them, I am also benefitting from having a fieldworker from SOIS working on this project with me. Her perspective is valuable because she hasn’t been immersing herself in documentation—I am trying to see a) how well I can teach other users to use these tools for our processes; and b) how easy these tools are to use full stop. For the most part, the answers have been "fair to middling" for both.

Part of the problem I’m running into is that separating out the steps like I am in this series is, to a certain extent, artificial. With e-records, more so than with paper records, ingest is accessioning is appraisal is processing is access. By which I mean that the state of the system or tool at the next step down the road determines what can be done with the tool or tools you’re using in this step. This is most obvious in metadata harvesting, hence the title of this post.

These tools that I’m using? Collect a LOT of metadata. Which on the one hand is good, because it’s usually better to have more information about a file than less. On the other hand, the surfeit of metadata led my fieldworker (and, by extension, me) to ask the obvious question, namely: “what are we going to do with all of this?” I doubt very much that we want to devote a huge amount of time and effort to entering full descriptions for every file into Archivists’ Toolkit, especially for those collections (such as SAA) where I know we won’t be providing online access to all or even most of the files (we don’t need to create DAOs for webpage stylesheets, e.g.).

I’ve been giving this a lot of thought and comparing the various outputs of the tools that I’ve been testing, and I have reached a preliminary conclusion: I need to be looking at different metadata outputs to do different things for me. (This is where my digital archivist readers say “Duh!”. Leave me alone, I am learning this stuff on the fly and never took a metadata course in library school.) I’m just not going to use every single bit of metadata that is being extracted by the various tools (especially since a lot of it is provided redundantly by a number of them), but I am going to be using a lot of it in various capacities. To wit, I’ve identified four basic categories:


  1. Collection Summary metadata. How much data in how many files I have, what types of files are represented, the overall date range of the digital collection, etc. This is the stuff that goes in the general section of the finding aid for extent statement, technical requirements, general scope and content, etc. 
  2. Collection Preservation/Manifest metadata. Something human-browsable that is going to indicate paths to files, last-modified dates of individual files, any subject metadata, file versioning, etc. This is the stuff that will be provided as a spreadsheet or database for patrons if we provide near-line access in that manner. It will also be “packaged” with the data to serve as the preservation description information (including format requirements, fixity measures, etc.)
  3. Digital Object Manifest metadata. A specialized manifest for those files we’re going to provide access to through the finding aid (actually a subset of Collection Preservation/Manifest metadata). These are the file names, paths, document titles (as applicable), relevant date (created or last modified), and basic technical requirements (probably file type only). This stuff will be imported as DAOs into Archivists’ Toolkit or a modified version of our EAD tagging spreadsheet and exported into our finding aids’ contents lists.
  4. Individual Object Preservation and Access metadata. This one I figured out from all the programs that spit out XML for individual files rather than one big XML file for the whole collection. A lot of the metadata in this category may actually be covered in the other three, but creating an individual metadata file for each file in the collection makes for a more complete preservation package and allows for expedited ingest of items into a repository by associating the items with relevant metadata.


It’s worth noting that we DID talk about these distinctions in the SAA Arrangement and Description of Electronic Records workshop that I attended in May. But we definitely didn’t talk about them in these terms, which is why it has taken me the better part of four months to figure out what to do with this stuff—we were basically given the tools, told what was required for a SIP/AIP, and left to make our own connections. I may just be slow on the uptake, but the relationship between the tools being used and the appropriate use of the metadata they collect in arrangement and description is not, in my opinion, obvious. I’ve written this post (in hopefully-clear English) to clarify to myself which kind of metadata is which and to help my equally-or-less-tech-savvy readers (all 3 of you) make the distinction as well.

For the rest of this series (and there WILL be a rest of it!) I am going to try to identify which kind of metadata is being created by which tool (and be ye not fooled, most of the tools I’m looking at ARE for metadata extraction, so there’ll be a lot more discussion of this). Going back to the ones I’ve already covered, the tools at point of ingest are largely in category 2: they create a manifest, with or without checksums, which I can import into a spreadsheet or database to produce a browsable list of the files in the collection, but without distinction between documents and metafiles at this point. We’ll get to categories 1 and 3 during accessioning, using DROID and JHOVE to get that other stuff (and for right now, I AM using JHOVE 1.7 instead of JHOVE2. Why I am doing that will be revealed in the next post).

Oh, and lest I forget—because of redundant information and XML structure, almost none of the metadata I am harvesting with these tools is usable out of the box. If I am lucky, I am able to pick and choose fields from the various tools and paste them into a master spreadsheet that collects all of the information I need. If I am slightly less lucky, the import from XML to spreadsheet is screwy because of the nested structure and I need to figure out a way to flatten it to the point where I can import it into the master (doing folder-by-folder analysis may be the way to go on this; see the sketch at the end of this post). If I am UNLUCKY, I will need to convert the metadata from its existing form into an interoperable standard, such as MODS, METS, or PREMIS (this is an especially fun game because I am not really familiar with any of the three—see bit about not taking a metadata course in library school). I may be saved on the latter by our lack of a TDR for access to born-digital objects—but then, that poses an entirely new problem for our potential digital collection. Stay tuned.
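P.S. For the similarly afflicted, here is the "flatten it" idea as a sketch. The tag names are invented for illustration (every tool's output is different), and repeated sibling tags would clobber each other in this naive version, but it shows the shape of the thing:

```
# Sketch: squash nested per-file XML into one spreadsheet row per file.
# Tag names ("file", etc.) are invented; adapt to the tool's real output.
import csv
import xml.etree.ElementTree as ET

def flatten(elem, prefix=""):
    """Turn nested elements into {"a.b.c": text} pairs (naive: repeated
    sibling tags overwrite each other)."""
    out = {}
    for child in elem:
        key = prefix + child.tag
        if len(child):                         # has children: recurse
            out.update(flatten(child, key + "."))
        elif child.text and child.text.strip():
            out[key] = child.text.strip()
    return out

root = ET.parse("harvest.xml").getroot()       # hypothetical tool output
rows = [flatten(f) for f in root.findall("file")]
fields = sorted({k for row in rows for k in row})

with open("master.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```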

Thursday, August 2, 2012

E-records Use Testing: Ingest, Pt. 1 (Or, "In Which I Get Bailed Out By SAA Roundtable Leaders")

Most of the existing e-records collections at UWM either predate me or predate the period during which I was concerned about silly things such as "authenticity." We STILL really don't have a separate e-records collection plan, as recommended by the AIMS report, but thanks to said report we are at least talking about these issues at staff meetings. So that's something, I guess? In any case, the transfers I have received since taking on this role have been received with MORE care than we had been doing earlier, but still not optimally from a digital preservation point of view. I was prepared to acknowledge this and move on.

Luckily, Mark Matienzo inadvertently came to my rescue in his capacity as co-chair of the EAD roundtable. As you the reader may already know, UWM serves as the official repository for the Society of American Archivists' own archives, which includes the records of sections and roundtables. This year the EAD roundtable is revamping their website, but wanted to preserve their old EAD Help Pages as a historical record of the development of EAD and support for same. The roundtable was kind enough to turn these pages into static web documents and subsequently zip them off through the ether via Yale's file-share service... so here we are. A chance for a fresh start at ingesting records properly! Stuff of obvious historical value! A chance to use the tools on Chris Prom's submission page! Joy to the wor--

Ah. "Most of [these tools] cannot be implemented without significant work to integrate them into an overall workflow or other applications."


You mean you can't unpack the source code in which these are provided?


Hahahahahahaha No. We are just now starting to even UNDERSTAND what in the world OAIS is talking about, much less get an OAIS-compliant repository up and running, and I certainly don't have the expertise to compile code for add-ons. Not that it's out of reach for us-- again, Chris Prom's site makes the process seem much less daunting-- but we're not there yet. Part of that process involves having a seamless submission process to the repository, and I don't have the technical expertise to implement any of the tools on the submission page. Oh well. It's still a cleaner process than what we were doing before.


One thing I DO have the expertise for is creating a rudimentary SIP. Check it out:

That's what I'm talking about, son. (Actually I had no idea what the requirements even were for this before I took the SAA workshop on Arrangement and Description, so please ignore me when I pretend to be an e-records thug.) In any case, you'll notice the folders for metadata and the submission agreement (AKA "an email I saved from Thunderbird"), and two content folders-- an originals folder, which is not being touched from here on out, and a working copy, which is where I will practice my mangling of collections. (This collection sort of doesn't lend itself to much rearrangement, so I may choose another one when I get to that step.) I won't open any of the folders right now, especially not the "Metadata" folder, because there's nothing actually in there at the moment. Let's fix that, shall we?
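(A quick aside: the empty skeleton itself is the one step here that needs no special tool at all. If you wanted to script it, a minimal sketch-- the folder names are just my own conventions, not a standard:)

```
# Sketch: lay down a bare-bones SIP skeleton. Folder names are mine,
# not a standard; season to taste.
import os

sip_root = "SAA_EAD_Help_Pages_SIP"  # hypothetical accession name
for sub in ("metadata",
            "submission_agreement",
            "content/originals",
            "content/working_copy"):
    os.makedirs(os.path.join(sip_root, sub), exist_ok=True)
```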

My first go at creating some fixity and context metadata was with the Duke Data Accessioner, which I learned about at last year's SAA. The Duke University Libraries developed this tool to move files off physical media and into a repository, copying relevant metadata and comparing file checksums before and after transfer to make sure that authenticity was not compromised. I like the Accessioner because it's a Java tool, meaning it's platform-neutral, and the GUI is (relatively) easy to navigate. There are two main problems: 1) since it's a migration tool, I haven't been able to figure out how to generate checksums without duplicating the files, and 2) the output on the machines in the archives looks like this:




Yikes. Not exactly the most human-readable document in the universe, though it could be OK if you have a way to crosswalk the metadata. Let's see if we can do better. (This is, obviously, not the XML for the EAD pages.)
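(For the record, the fixity half of what the Accessioner does is conceptually simple, even if the tool around it isn't. A sketch of the idea-- NOT Duke's actual code, and the file names are made up:)

```
# Sketch of the fixity idea: checksum the source, copy it, checksum
# the copy, and refuse to continue if they don't match.
import hashlib
import shutil

def md5_of(path, blocksize=2**20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            h.update(block)
    return h.hexdigest()

src = "content/originals/event_form.doc"      # hypothetical paths
dst = "content/working_copy/event_form.doc"

before = md5_of(src)
shutil.copy2(src, dst)   # copy2 preserves timestamps, too
after = md5_of(dst)
assert before == after, "Checksum mismatch -- authenticity compromised!"
```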


NARA has developed a File Analyzer and Metadata Extractor, which we demoed in the SAA workshop as a no-frills alternative for generating ingest metadata. You'll notice that the link goes to GitHub, which should scare you if, like me, you are a clueful user but not actually a techie; luckily, the files themselves are mostly compiled and ready to go. Like the Duke Data Accessioner, the NARA tool comes as a Java application with a GUI, which is also nice if you're uncomfortable with command lines. This tool outputs to a table which can then be exported to a text file, so let's see what it comes up with:


Muuuuch better. The text file output is tab-delimited, so you can easily import it into spreadsheets for even easier reading. The other information in the DDA XML file isn't there, but there is a function to count by type, which gives you more information about the accession as a whole. Unfortunately, the tool is a bit simple, which means you have to run each test separately. Hmm. I still wonder if we can do better.
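(The count-by-type idea, incidentally, is easy to approximate by file extension if you ever need a quick census without the tool. Extension-counting is much cruder than real format identification à la DROID, mind you-- this is just a sketch, with a made-up path:)

```
# Crude type census by file extension -- an approximation of a
# count-by-type report, not the File Analyzer's actual logic.
import collections
import os

counts = collections.Counter(
    os.path.splitext(name)[1].lower() or "(no extension)"
    for _, _, files in os.walk("ead_help_pages")  # hypothetical path
    for name in files
)

for ext, n in counts.most_common():
    print(ext, n, sep="\t")
```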

Another option is Karen's Directory Printer, which is unfortunately Windows-only, but does give you more information about each file in tab-delimited format. Here's how that looks:
(I was going to show you this in Excel format, but as it turns out the delimiters are off and it doesn't import correctly. Awkward.)

Lastly, there's Bagger, a GUI for the Library of Congress' BagIt data packaging standard. Wrapping content in a bag allows for easy transfer of materials-- potentially very useful if we ever move to a different repository space. I had to read through the manual for this one to see which components were necessary for the bag to be complete, but I eventually got there, and ended up with the following manifest:


Beautiful. Clean and human-readable, if needed. (Generally NOT needed because if you transfer bags the receiving computer can do the checksums by itself.)
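(If you'd rather script it than click through a GUI, the Library of Congress also maintains a Python library, bagit-python, that builds the same kind of bag. A minimal sketch, assuming a reasonably recent version of the library; the directory and metadata values are examples:)

```
# Sketch using bagit-python (pip install bagit). make_bag converts the
# directory IN PLACE -- the payload moves into a data/ subfolder -- so
# point it at a copy if you're nervous.
import bagit

bag = bagit.make_bag("SAA_EAD_Help_Pages_SIP",          # hypothetical dir
                     {"Source-Organization": "UWM Archives"},
                     checksums=["md5"])
print(bag.is_valid())  # re-verifies the payload against the manifests
```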

So, which of these is going to go into our workflow? I'm leaning towards Bagger at the moment because it creates fixity checksums AND packages the data for ready transfer. (There's also a function there, "Holey Bag", which allows you to pull stuff in from the web-- I'll have to try that one.) The NARA File Analyzer might be OK too for simplicity's sake. Of course, neither tool captures the metadata the Duke Data Accessioner does... but, as the man says, there's an app for that. Unfortunately, this post has already run long, so those will have to wait until next time.



Tuesday, July 31, 2012

E-records use testing: Introduction

Howdy, campers! Some of you may be aware from my various ramblings on Twitter etc. that I have volunteered/been enlisted to be the electronic records guy at the UWM Archives. This is not an entirely unwanted position-- I am very interested in this kind of stuff, and it's only going to become more important as the shift from paper goes on-- but it is nonetheless a challenging role because I am sort of making it up as I go along. I didn't take a single course in library school specifically on metadata, databases, electronic records, or digital imaging (to say nothing of programming), and now that I have undertaken an effort to rethink the way we are dealing with e-records, that lack of specific training is obvious (to me, at least... I don't know how apparent it is to my colleagues, most of whom are less techie than I am).

Luckily, I am far from the only person in this particular boat. SAA has been very good about getting in front of this issue, most recently through their Digital Archives Specialist certificate program. Said program purports to "provide [its participants] with the information and tools [they] need to manage the demands of born-digital records" through a series of courses at various skill levels and in various domains of practice for electronic records. The full certificate program involves 9 courses and is not cheap, so for right now I'm not focusing on finishing that (although I would like to be able to do so in the future). I was, however, able to take one course from the sequence, Arranging and Describing Electronic Records, which I found very useful in introducing me to tools and topics for getting a better handle on processing these materials. And so, in light of that course, I thought, "Hey, I bet other people would be interested in what we're doing with these tools and processes here at UWM (and/or happy to tell me what it is I'm doing wrong)." And so, here we are.

I am going to structure this post series as a chronicle of working with Archives collections through the lens of the various tools that I am testing, having been tipped off to the existence of said tools through the ADER workshop and other sources. (Chris Prom's Practical E-Records blog in particular has been invaluable for this.) My intent is to present my experiences and difficulties with these born-digital collections in order through the various stages of handling archival records, to wit Ingest-->Accession-->Arrangement-->Description-->Access-->Preservation. I am also cognizant, however, of the fact that the best-laid plans of mice and men oft gang agley, and that not all of the tools I'm going to be looking at fit neatly into one of these categories (e.g. Archivematica and the Duke Data Accessioner). I am nonetheless going to take my best shot at providing a chronicle of working with these records from beginning to end, whenever "end" might be. (I'm also aware that "end" might not end up so easily defined.) Of course, because this whole process is in fact in process, the beginning is not especially well-defined either-- see next post for details-- but I'm hoping working through it in this form will help fix it for the next accession to come down the road.

So that's going to be this blog for the next few posts. Hope my readers (all 3 of you) find it useful, or at least interesting. Do feel free to comment/point out miscues/heckle/etc., as that will help me figure out where we're going wrong and point at ways to fix it. (Oh boy, I've just given people license to flame on my blog... Asbestos underwear at the ready...)

Sunday, April 29, 2012

ARMA, Archivists, and Affordability

God Bless Jimmy McMillan. Helping people make Image Macros since 2010.
First, some context: this post is a response to comments in Maureen Callahan's post on You Ought To Be Ashamed regarding gender equity in hiring and promotion in Archives environments. Things got out of hand, as they tend to do: One person posted about how he didn't see the Glass Elevator as a real thing, another posted about how Supply and Demand would naturally correct the Archives market, and a third dismissed people who were chasing their dreams in the Archives world with "Good luck with that." I, foolishly, observed that most of the people taking issue with part or all of Maureen's post were records managers as opposed to archivists, and notably were records managers working in the private sector. Well, the foolish part was not that observation. The foolish part was my next logical leap, in which I tried to explain why the mindsets might be different:

Although officially I am both an archivist and a records manager at UWM (and am, you know, chair of SAA’s Records Management Roundtable), I feel much more of an affinity towards my archival colleagues largely because of this disconnect. The vast, vast majority of the ARMA programs I’ve attended are really focused towards dealing with records in a private sector environment, with perhaps some mention of government records and records management in a university setting coming in a distant third, if at all. This emphasis is natural on the face of it, since the preponderance of ARMA members are corporate, but dig a little deeper and you see structural issues: $175 for full membership, no gradations? $50+ for downloadable standards? _$1000_ for registration for the annual meeting? I don’t know many archivists/RMs in the public sector who can afford this, much less afford membership in both ARMA and SAA, and so they choose the one that is both cheaper and has a more direct relevance to their professional development. Thus, the divide widens, and groups like RMRT can only do so much to bridge it.
 I even said on Twitter that I was going to be stepping in it with this post, but this is something that has been bothering me for some time and it felt good to get it off my chest. Still, this paragraph didn't go unchallenged. Here's Peter Kurilecz:

so are you suggesting that ARMA should have a progressive dues structure like SAA? ie should they be like AIIM and have a name your price dues structure? How many people drop their SAA membership as they climb the salary ladder because they now have to pay more in dues? I took a look at the membership breakdown for SAA and after some analysis found that if they implemented a standard price ala ARMA (and not ARMA’s price) that SAA would have increased dues receipts. It would also make more sense from an accounting standpoint. I hear way too many folks whining about the cost of membership, but how many of those same folks are buying a Starbucks coffee everyday? Even a plain regular coffee at $2.50 for vente? figure the cost at that amount they’re paying $75 per month or $900 per year. or do it just 15 days per month and they are paying $450 a year. way more than membership.
how about a cell phone? what is the monthly cost? It all comes down to what is important to you. If it is really important you’ll find a way to pay for it.

I stand by my original comment, but this response suggests that I didn't do a wonderful job articulating what I meant. Let me take another whack at it.

Peter is, of course, right that one's ability to afford membership in any of these professional organizations is a matter of priorities, and that if you want it badly enough you'll find the money. The thing is... I don't think ARMA does a good enough job making archivist/RM hybrids such as myself want it. Yes, there are and continue to be programs sponsored by ARMA and the locals that are interesting to archivists in the public sector, particularly at the government level. ARMA Milwaukee's Spring Seminar this year is on "Information Governance and Records Management in the Federal Government", which is very obviously aimed at public sector archivists-- so I overstated my case on that. Mea culpa. But I don't think I'm reaching at all to say that most programs ARMA sponsors are focused towards a very particular organizational culture, the kind where buy-in is achieved at C-level positions and/or through direct coordination with legal or audit departments. Corporate or fiscal environments are very good at this; government environments less so, but there's still enough structure there to get a semblance of cooperation (witness the Obama memo on electronic records, which agencies have to at least pretend to follow).

My institution, conversely, is very much not like that. Getting the support of the CIO or the Provost does not at all guarantee that staff are going to follow appropriate records management practices, or adhere to taxonomies defined by upper management. Because I am in the Library rather than in Internal Audit or Legal Affairs, my power to enforce records management practices is limited to "soft power"-- going to individual offices and convincing them that establishing and following disposition schedules and records management policies is in their best interest from a legal, administrative, and historical standpoint. It's doable, but it's decentralized, it can be very difficult, and I often have to operate on a shoestring budget (there's very little, if any, funding for a campus-wide EDMS, for example). Some of the programs and rhetoric at ARMA sessions and webinars acknowledge this difference in institutional culture and offer solutions for dealing with it. Many others do not.

In one sense, I don't blame ARMA or the locals for this focus on centralized culture, since most of their members come from that environment. But that also means that there is often not much incentive for archivists/records managers at institutions like mine to formally join ARMA and get the benefits that membership provides, because a lot of those benefits are just not as relevant as those found in other organizations, and in an era of shrinking personal and professional development budgets, sometimes a choice has to be made. If that choice is not in ARMA's favor, that reinforces ARMA's own incentive to cater to its existing members, program composition is altered, and a vicious cycle ensues.

It didn't get this way overnight, of course, and again, there is a LOT of good stuff to be had from ARMA even for university records managers such as myself. In my *opinion*-- I cannot emphasize enough that it is my opinion, because I have no concrete evidence-- a lot of it comes down to the money issue. To quote Peter again:


as for the cost of standards have you seen what ISO charges?. Should not the university pay for those standards since you depend upon them to do your work? do you not budget for the purchase of standards and other reference materials.
as for the cost of the conference you failed to mention that early bird registration gets a discount. What should the cost of the conference be? $500, $750, $250, free? do you want the conference at a nice venue or down market? price is but one factor

My own library pays for some-- by no means all-- of the ARMA standards. They are generally the ones that are directly relevant to my duties as University Records Officer, and the library sort of looks askance at me when I order them, because in general the analogous documents on the Archives side of my job are significantly less expensive. Speaking of which, I cannot envision a world in which the UWM Libraries would pay for me to attend the ARMA annual conference. In this world of budget cuts and shrinking travel budgets, their question would probably be the same as mine, to wit: "What makes registration for ARMA worth 3x as much as registration for SAA?" I don't have an answer to that, and because I can't afford the registration fee myself without institutional assistance, I likely never will while in this position. (Since it's in Chicago this year, I do plan on making it down to the Expo, since the price is right for that.) I cannot imagine that I am the only university archivist to be in this position, and to me it seems emblematic of ARMA's structural focus issue-- a fee that high suggests that the board expects its members to be mostly or entirely comped by their institutions, which is going to happen less and less even for government RMs.

Again, I have nothing against ARMA. I've let my membership lapse because of the brouhaha in WI affecting my take-home pay, but I'm hoping I can renew it soon, because I actually do get a lot out of their publications and seminars. (I don't even necessarily agree with the graphic at the top of this post, though I do think that even token dues levels would be a sign of good faith in reaching out to underemployed or undercompensated RMs.) One day I WOULD like to run for an officer position, as Peter suggests (although I still feel too much like a young'un right now). SAA's Records Management Roundtable, of which I am chair this year, is as we speak working on plans to reach out to ARMA and its local chapters and look for ways that we can collaborate on education, advocacy, etc.

But there are many Archivists with RM portfolios in my position who look at what ARMA has to offer and say "why bother?" As a result, the organization loses those perspectives, and the gulf between the professions is maintained.

Sunday, April 15, 2012

23 Things for Archivists: Fair is Fair

Hello, all people who still inexplicably follow this blog after a two-year hiatus! I am updating here because I am reviewing the SAA Reference, Access, and Outreach section's 23 Things for Archivists site for an upcoming issue of Archival Issues. This will be my first review in an archival publication, and it is very exciti--

"Wait a second", you say. "You have been blogging, tweeting, etc. for several years now, if sporadically in some cases. You may or may not know a lot of this stuff already! How in the name of God are you going to review this fairly?" A fair question! And one that I asked myself while starting to go through the site. Is "playing dumb" the solution? No, because I DO know a lot of this stuff as covered, and to a certain extent can't hide that.

Then it occurred to me: I should go through the site as a participant! That way, I can see to what extent the instructions they give regarding the use of the various Web 2.0 tools and concepts are useful and meaningful for other archivists. And, on the flip side, I see not a few things, particularly in the intermediate section, that I'm not as familiar with as I probably should be, especially considering I tend to be the techie guy in the UWM Archives. So this will be a good learning experience for me, in addition to being a good framework for the review. (Plus: I can use the posts I create as notes for the review itself, which is nice.)

OK... so let's see... Thing One has you set up a blog... Uh... awkward. So, yes, OK, a little bit of playing dumb will be needed. Let's go to WordPress and see if we can't figure something out from there-- I do technically blog there as well, but at least I haven't set one UP over there before. Progress? I think? I will crosspost my findings here, if for no other reason than to breathe some life into this moribund excuse for an Archives blog and maybe get me posting regularly again. Stay tuned.

Tuesday, August 17, 2010

Archives TCG: Nerdiest thing EVER.

Sooooo, this is supposed to be a wrap-up post on SAA 2010. Which I swear I am still going to do. But first for something a bit sillier. OK, check that, a LOT sillier.

A bit of context: 2011 is SAA's 75th Anniversary Year, which means a lot of ill-conceived nostalgic foolishness. Exhibit A: Archivist Trading Cards. No, really, check that link. This is a true thing that is happening that is being sponsored by SAA. Go ahead. I'll be here when you get back. (Let the record show as well that Student Archivists at Maryland thought of this first.)

Anyway. Have you read the call for archivist trading cards? A little frivolous for a professional organization, you say? A lot of the Archives Twitterati thought so too. In fact, we took it a step further: why just have trading cards when you can have a COLLECTABLE CARD GAME?

@cdibella: I'm sorry, but the prospect of #archives trading cards makes me giddy. Hans Booms, black box, that crazy macroappraisal diagram - I want.

@sheepeeh: @cdibella I may or may not have a set of attribute icons and monster cards in my sketchbook already.

@cdibella: @sheepeeh Omigod - too cool. SAA's example card is pretty darn staid, but there's definitely a lot of potential there.

@sheepeeh: @cdibella As soon as I heard about the trading cards, I started imagining an #ebz like game for archives :P #nerd (never a big Magic player)

@derangedescribe: @sheepeeh @cdibella Archives: The Processing? @herodotusjr could write the rules.

I am sure Ms. Goldman thought she was being funny because Magic: the Gathering is one of the Big Three topics I tweet about, the others being Archives and Politics. Well WHO'S LAUGHING NOW HUH?! I give you the introductory rules for ARCHIVES: THE PROCESSING, the first trading card game where you fight not for universal domination, but for domination of the ARCHIVES WORLD! MUAHAHAHAHAH *cough cough* Sorry.

(Note: These rules are highly influenced by Magic: the Gathering, so all apologies to Richard Garfield, Aaron Forsythe, Mark Rosewater, etc. None of the example cards are balanced at all and are likely to stay that way unless the full set is actually developed, which seems unlikely if it's just me. So in the unlikely event that you are reading this and want to submit cards or card ideas, please feel free. If you are one of my Magic friends who have drifted over here, I am so, so sorry for butchering the game. But the potential for lulz was just too high. Also, I am probably the biggest geek in the history of geekdom for doing this.)

OBJECT
You're an archives manager looking to achieve complete archives domination. Or, failing that, to make those other repositories fall flat on their faces. (We don't go for those namby-pamby consortia here in the world of Archives: the Processing.)

Win the game by either reducing your opponents’ Reputation to 0 (starting from 20; when Reputation = 0 the head of that player's institution no longer sees a point to an archives and discontinues the program) or accumulating 20 Processing Points (starting from 0; when you hit 20 processing points you have cleared out your backlog and are acknowledged as an Archives rock star).

CARD TYPES
Resources: Analogous to lands in MTG, produce Funding instead of Mana. Come in basic and specialized flavors. Basic Resource Types:
· Public Grants: W
· Institutional Support: U
· Shady Sources: B
· Benefactors: R
· Private Grants: G

NHPRC
Resource
T: Add WW to your Funding Pool. This Funding can only be used on Arrangement, Description, or Preservation cards or to pay upkeep on Project Archivists.

Electronic Records Management Initiative
Resource
T: Add an amount of U to your Funding Pool equal to the number of Computer Artifacts you control.

ARCHIVISTS:
Analogous to creatures. Legendary if named (Greene/Meissner, Margaret Cross Norton, Schellenberg, etc.). Instead of power and toughness, Archivists have Publishing Offense/Defense to put dents in Reputation. Usually require a resource upkeep cost.

University Archivist 2U
Archivist
Salary U (During your upkeep, pay U or sacrifice this archivist.)
If you would tap University Archivist to add a processing counter to a University Collection, add two processing counters instead.
T: Draw a Card.
2/2

Tenure-Track Professor 4GG
Archivist
Salary GG (During your upkeep, pay GG or sacrifice this archivist.)
Rhetoric (This archivist may only be blocked by other archivists with Rhetoric.)
Tenure-Track Professor cannot be tapped to add a processing counter to a Collection.
Sacrifice a Student: Add a processing counter to a University collection.
6/4

STRATEGIES: Analogous to enchantments. Have one or more of seven subtypes (preservation, description, arrangement, appraisal, reference, outreach, acquisition). Provide benefits to player who controls them, sometimes include drawbacks. Cannot have more than one of each subtype on board at once.

Collection Policy 1UUU
Strategy—Acquisition Appraisal
As Collection Policy comes into play, name a Collection subtype. Collections of that subtype require 1 fewer Processing counter to Process. This effect can’t reduce the Processing cost below 1.
You may ignore any effects triggered by rejecting a collection.

More Product Less Process 2RR
Strategy—Arrangement Description
At the beginning of your upkeep, put an additional Processing counter on each collection you control.
You may only play one Action, Challenge, or Artifact per turn.


ACTIONS: Analogous to instants. May have one or more of seven subtypes and are usually, but not always, used for defensive or beneficial purposes. Discarded after playing.

Conference Presentation 1G
Action—Outreach
Search your library for a basic Resource and put it into play tapped. Then shuffle your library. You gain 2 Reputation.

Collection Sell-Off B
Action—Acquisition Appraisal
As an additional cost to play Collection Sell-Off, return an unprocessed collection you control to the accessions deck. You may add an amount of B to your Funding Pool equal to the number of processing counters on that collection.


CHALLENGES: Analogous to sorceries. May have one or more of seven subtypes and are usually used for offensive purposes. Discarded after playing unless they have Ongoing supertype, in which case only one of each subtype can be put on the field at once.

Mildew 2B
Ongoing Challenge—Preservation
Affect Opponent
Collections affected opponent controls have “At the beginning of your upkeep, sacrifice this collection unless you pay 1.”

Inconsiderate Researcher XRR
Challenge—Reference Preservation
Remove X processing counters from target collection. Inconsiderate Researcher does X damage to that collection’s controller.


ARTIFACTS: “Tools of the trade”, usually have a beneficial effect for controller. May be tapped or sacrificed for additional benefit.
Hollinger Box 2
Artifact
Preservation Action or Challenge cards cost 1 less to play.
T: Add U to your Funding Pool.

Reading Room Reference Collection 5
Artifact
Archivists you control get +1/+1 for each other archivist you control.
At the beginning of your draw phase, draw an additional card.


COLLECTIONS: Free cards revealed from the Accession pile. Have processing point value which shows how much processing they require and how many points they provide once processed. May or may not have additional benefits. Untapped: Unprocessed; Tapped: Processed

Photo Series
Collection—Visual Records
When you complete processing on Photo Series, you gain 3 Reputation. If you also control an artifact named Content Management System, you gain 6 Reputation instead.
If you reject Photo Series, the next time you would gain Reputation, you gain no Reputation instead.

4

Unsolicited Benefactor Papers
Collection—Paper Manuscript
At the beginning of your first main phase, if Unsolicited Benefactor Papers are processed, you may add RR to your funding pool and lose 1 reputation.
If you reject Unsolicited Benefactor Papers, sacrifice a resource and lose 4 reputation.

6


TURN ORDER
· UNTAP
· UPKEEP: All “During your upkeep” things happen. Active Player places one “free” processing counter on an unprocessed collection he controls (representing his/her own processing efforts that turn).
· ACCESSION: Active Player reveals top card of communal collections deck. S/He may choose to accession the collection, in which case it enters play unprocessed under his/her control, or to reject it, in which case it’s put on bottom of deck. There may be consequences for rejecting a collection as noted on the collection card. “Accession” or “Appraisal” Action cards may be played at this time by any player.
· DRAW: Player draws one card from own constructed deck (action, funding, archivist, strategy, challenge, artifact).
· FIRST MAIN: Players may play one Resource Card per turn during this phase. Any number of non-Resource cards may be played. Cards in play may change these limits. Any player may also play Action Cards of any subtype during this phase.
· PROCESS: Active player may tap any number of his archivists to add that many processing counters to his collections. OR he can attack the reputation of an opposing archives (representing a withering scholarly article published somewhere). The opposing archives may block with any untapped archivists available to prevent damage to the defending player's reputation. If an Archivist takes damage equal to its Publishing Defense, that archivist is Fired and goes to the discard pile. “Arrangement”, “Description”, or “Preservation” Action cards may be played at this time by any player.
· SECOND MAIN: See First Main for cards which may be played during this phase.
· DEACCESSION: Damage is removed from Archivists and “Until End of Turn” effects end. All Funding remaining in your pool drains, doing one damage to your Reputation per unspent unit (Administrators don't like it when you don't spend the money that you have been allocated). Any player may play Action cards at this time.