

Sara Bijani


February 26, 2016

Marxists, Anarchists, and American Digital Archives


I’ll start with a story. This project began with the very broad objective of constructing a recyclable content management interface with OHMS incorporation, to be reused at a later stage in my dissertation project. I stumbled into the Finally Got the News content I’ve incorporated in this iteration of the build entirely by chance, but I was drawn into the material by a story.

In the early 2000s, my advisor—a labor and gender historian—was researching a project on masculinity in the American automotive past at the Reuther Library when one of the archivists handed her a VHS copy of Finally Got the News. The archive had been given a stack of these VHS tapes, to be distributed to anyone interested in Michigan’s labor past. This was the revolutionary ethic of film activism that drove the creation of this project in the 1970s, and that ethic had been kept alive by efforts of this sort for decades. Today, the internet helps to keep this activism alive, with the full-length film hosted on YouTube for anyone to watch, download, and share.


Nikki Silva


February 26, 2016

Mapping Morton Village – Figuring Things Out


The past few weeks have been eventful for the Mapping Morton Village project. Since Autumn’s blog post on 2/11, we have completed all of the content for the website and continued working on the interactive map. Mapbox has been a bit of a struggle for us, as we could not get the data for Morton Village to appear on our map. We wanted multiple layers to show the work done during each year of excavation (1980s, 2008-2014). Within these layers, the content we have been working on (see Autumn’s blog post) would be available as pop-ups on specific pit features or structures. We were unsure how to pull our shapefile layers from Mapbox into our website’s code to create these ‘year’ layers; no matter what we tried, we could not see our data on the map. We researched extensively to try to find an answer, without much luck, until Autumn posted our issue on the GIS Stack Exchange site and a few users were able to help us.

Our problem had been that we were trying to pull the tilesets, which are vector data, directly into our code, which will not work. To pull tilesets into the code, we needed to create an editor project in Mapbox Classic for each of our year layers, containing the structure and feature data as GeoJSON files. Once we figured this out, I converted our shapefiles into GeoJSON, and Autumn was able to adjust the code and add the map ID for each of our editor projects. We can toggle these layers on and off in the map, and we have also attached content to each of the corresponding pit features/structures (see photos below). This makes things a little easier, because we now have both the pit features and structures in the same layer (which we didn’t before), and we can format the data in the Mapbox Editor project for each layer and it automatically updates on our map (styling, descriptions, etc.). We have made a lot of progress in the past few weeks and we are excited about continuing to build the Mapping Morton Village interactive map!
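For anyone curious what one of those ‘year’ layers looks like under the hood, here is a minimal sketch of the kind of GeoJSON FeatureCollection that a converted shapefile produces and that Mapbox reads; the feature name, coordinates, and description below are illustrative placeholders, not actual Morton Village data.

```python
import json

# One 'year' layer as a GeoJSON FeatureCollection. Each excavated pit
# feature or structure becomes a Feature; the "properties" hold the
# title and description that Mapbox surfaces as pop-up content.
year_layer = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-90.0, 40.3]},
            "properties": {
                "title": "Structure 1",
                "description": "Excavated 2008; pop-up text goes here.",
            },
        }
    ],
}

# Serialize to the .geojson text that gets loaded into the editor project.
geojson_text = json.dumps(year_layer, indent=2)
print(geojson_text)
```

Because each year range lives in its own FeatureCollection, toggling a year on or off in the map is just a matter of adding or removing that one layer.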



February 21, 2016

The ivory tower: power, privilege and the iron-gate that surrounds knowledge


Big, ambitious projects require energy and huge sets of data that aren’t easily accessible. No surprise there! But why? Why is archival research a maddening, mind-numbing, stress-inducing, and life-reducing process? At the crux of this heated debate are signed-over oral histories and other vital materials—donated to, or part of, projects now held at universities or at repositories affiliated with the American Folklife Center at the Library of Congress—that are needed to complete the website This is my story: Detroit 1967. Unsurprisingly, the result of these benign tasks is a few gray hairs huddled at or near the temples: frustrations with accessibility, limitations on knowledge, and teeth-grinding attempts to work with public universities and/or the Library of Congress and their third-party affiliates. Hindering my workflow and efficiency are issues of public versus private, which are the center of discussion for this post. Why is so-called public information privatized? Why is knowledge hoarded? Why is knowledge owned?

Problems of accessibility aren’t new in the academy; I’m preaching to the choir. One of the hopes of digital innovation, among other things, was to digitize files and circumvent the librarian or archivist who stood between the researcher and the material. It happened, yes, but not in the droves one would hope for. Please understand, quite a bit is digitized, just not what I needed for my research. Cue the stress-inducing and life-reducing negotiation process to get the information in its various mediums.

Photo “Current Archive” by Flickr user carmichaellibrary.  Used under Attribution 2.0 Generic


The irony of knowledge accessibility within the university, a public institution, has proven to be a major problem for This is my story: Detroit 1967. Universities pride themselves on being bastions of knowledge. But that knowledge comes with a price tag that isn’t exactly monetary, per se; it demands relentless, borderline pest-like persistence for what should be a relatively simple task. You make a call or send an email, give the librarian the necessary information, and voila, you have what you need. In my experience it isn’t that straightforward. I’ve had to supply everything shy of my master plan to take over the world. Yes, I’m being facetious, but it is the truth. I’m annoyed, highly, but not deterred from making this project come to fruition. Here’s the problem: I need oral histories that are public—at least, some of them are—but some are protected by a legal release. The legal release is a document that protects the interviewee and itemizes what the oral history may be used for and how. Some legal releases and their terms are lenient; others are stern and carry detailed restrictions. Hence my feverish need to work with these entities to get these rich stories.

This may seem very foolish of me, as I’ve done archival research in the past, but I’m going to put this out there to challenge the robust conversations on private versus public. If information is public, it’s public. PERIOD. DOT. END OF STORY. There shouldn’t be any gray area or limitations. If there are conditions, then it shouldn’t be called public. That was easy! Conversely, private has its varying levels, hence its complexities, but don’t allow its intricacies to thwart your research or research mission.

This may come across as a gripe rather than an overall assessment of what’s wrong with the academy as it pertains to the information in its possession, but my complaint is really with the lack of distinction between public and private, and with the failure to streamline access to this data. Lastly, once you have been granted access to the requested material, getting data from an institution or archive shouldn’t feel harder than walking into Fort Knox, or than prying trade secrets from the CEO of Apple or Google. Information from public universities should not require a vial of O-positive blood and an escort to ensure you are where you are supposed to be, behind key-card entry. I’m just saying! A massive change is needed, and soon.


Lisa Bright


February 18, 2016

Creating Structure for TOMB


As Katy mentioned over on the Digital Archaeology Institute blog, we’re focusing on two major steps in the development of our project: 1) creating the framework, and 2) developing the content. For more details about the content development, head on over to Katy’s post.

Some of the major steps in creating TOMB’s framework will include developing the main map, creating the GeoJSON file the individual site information will pull from, and creating the website structure. I’d like to take a moment to discuss the importance of focusing on website structure. Sitting down and really thinking about how your entire site will be laid out, and how users will navigate through your content, is an incredibly important first step that many new developers tend to overlook. I’m sure we’ve all been to bad websites before, ones that are poorly thought out, where you’re left thinking “where the heck do I find what I’m looking for?!”

For me, this means taking a step back from the digital realm and putting pen to paper. At this point I’m not concerned with things like fiddling around with the pages to create better search engine optimization (SEO), or ADA compliance. That’s a level of voodoo magic I’ll deal with later (in what at this point feels like a previous life, I worked as a marketing assistant for a campus department focusing on SEO and pay-per-click advertising, so I’ve got a few tricks up my sleeve). What we’re focusing on now is structuring the website so that our intended users, students and teachers, have an enjoyable and educational experience with the site. If you’re new to this process and looking for detailed instructions and suggestions for creating good website information architecture and content, I highly recommend this guide created by Princeton.



February 18, 2016

Building the Plane While Flying It; Or, Understanding the Politics of the Sonic through Earwitnessing Participant and User Collaboration


Crank. Spin. Putter-putter-putter. Click. Swipe. Type. These are the sounds that circulate in my mind as I architect the #hearmyhome project. Most days, it feels like I am building the plane while flying it. Working to circulate and collaborate with participants, I network the project on Twitter, Facebook, and Instagram while simultaneously designing the platform itself. Other days, I feel as if I am a mere observer: watching, lurking, and learning from users whose soundscapes are helping me begin to earwitness the everyday. Despite these setbacks and feelings of failure, I want to talk about sound as a way to “hear” participant and user collaboration.

In addition to the CHI Fellowship, my colleague Cassie Brownell and I received support from the NCTE Research Initiative Grant to explore sound, more broadly, as a mechanism for understanding community literacies and cultural rhetorics. As I detailed in earlier posts, the #hearmyhome project examines everyday sonic compositions as expressive means for articulating culture(s). We were curious how composing with sound might attune us toward difference; or, what Vasudevan would call a “multimodal cosmopolitanism.” What I find most insightful, however, are not the sound symphony products, but rather the array of questions we receive as users and participants try to collaborate. “Am I doing this right?” “Should mine look like yours?” “What are you doing for the hard of hearing? How do we ‘listen’ and participate in the project?” These questions have invited us to take a step back and examine not only the formal structure of the project (the layout, design, and blueprint of the HTML/CSS) but also the purpose and politics of participation. To whom, and with whom, are we listening, connecting, and building?

Examples of #SE1 and #SE2 Soundscapes

Two sonic events (#SE) into the #hearmyhome project, we will continue to build, expand, and forward these types of inquiry while also working to co-construct, with users and participants, a networked map detailing the locations of particular soundscapes and sonic ecologies. As a sonic archive that examines everyday cultural heritage through rhythmic rituals and mundane music, we value an expansive range of voices. As we gear up for #SE3 (sonic event 3), we invite you to record, to lurk, to share, to like, and to participate in the project. Jump into the cockpit and help us earwitness the everyday.

Autumn Beyer


February 11, 2016

Mapping Morton Village: Writing the Content


In this post, I would like to discuss what will be included within the Mapping Morton Village interactive map. For the past several weeks, Nikki Silva and I have been working on the written content of Mapping Morton Village. We decided to write the content of the site with the public in mind, focusing on giving general background information. The map will have multiple layers, showing the extent of each year’s excavations. Each layer will highlight a different aspect of archaeological excavation and research, as well as give examples from Morton Village.




February 10, 2016

Structuring the Fields


It is time to show the fields we are using in our database of Baptismal Records for Slave Societies (BARDSS). In previous posts, we pointed out that this database was made possible by a project hosted at Vanderbilt University and led by Professor Jane Landers. Landers and her team have been travelling to different places in the Americas to digitize endangered parish records, and they have uploaded these records to the web for free public access. Although we are using only the baptismal records of Africans and African descendants, the Ecclesiastical and Secular Sources for Slave Societies contain burial, marriage, and many other types of civil records. All these documents share a particularity that makes them a perfect candidate for a digital database project: regardless of period, place of origin, or language, the records are quite homogeneous. The explanation lies in the centralized nature of the Catholic Church. Thus, we are not facing the disparity of information that other similar digital projects have faced.

This is an example of a baptismal record from the parish of “San Carlos” in Matanzas, Cuba.


These are some of the fields from this particular baptismal record:

  1. Date of baptism: Sunday, May 30, 1830
  2. Priest: D. Manuel Francisco Garcia
  3. Age category: “Parbulo” (Infant)
  4. Date of birth: May 2nd, 1830
  5. Filiation: legitimate (born of married parents)
  6. Father’s name: Francisco (also a Criollo)
  7. Mother’s name: Maria de la O
  8. Nation: Ganga (African denomination used in Cuba)
  9. Legal status: Slave
  10. Owner: D. Francisco Hernandez Fiallo
  11. Name of the baptized individual: Felipe
  12. Godmother’s name: Ceferina
  13. Godmother’s African “nation”: Mina

Baptismal records are fairly homogeneous across time and location.


Finally, after some discussion and after comparing baptismal records from diverse regions and periods, we created a relational diagram showing all the fields from BARDSS and the hierarchical relations among them.
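As one way to picture that hierarchy, here is a hedged sketch of how BARDSS-style fields might map onto relational tables, with the baptismal record as the central entity; the table and column names are illustrative guesses for this post, not the project’s actual schema.

```python
import sqlite3

# In-memory sketch: a 'person' table for the individuals named in a record,
# and a 'baptism' table that points to them, mirroring the hierarchy where
# the baptismal record sits at the center.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE person (
    id INTEGER PRIMARY KEY,
    name TEXT,
    nation TEXT,          -- 'nation' kept as written in the source
    legal_status TEXT     -- e.g. 'Slave', 'Free'
);

CREATE TABLE baptism (
    id INTEGER PRIMARY KEY,
    date_of_baptism TEXT,
    date_of_birth TEXT,
    filiation TEXT,       -- e.g. 'legitimate'
    baptized_id INTEGER REFERENCES person(id),
    priest TEXT,
    owner TEXT,
    miscellaneous TEXT    -- catch-all for data that got no field of its own
);
""")

# Populate with the San Carlos example above.
cur.execute(
    "INSERT INTO person (name, nation, legal_status) VALUES (?, ?, ?)",
    ("Felipe", "Ganga", "Slave"),
)
felipe_id = cur.lastrowid
cur.execute(
    """INSERT INTO baptism
       (date_of_baptism, date_of_birth, filiation, baptized_id, priest, owner)
       VALUES (?, ?, ?, ?, ?, ?)""",
    ("1830-05-30", "1830-05-02", "legitimate", felipe_id,
     "D. Manuel Francisco Garcia", "D. Francisco Hernandez Fiallo"),
)
conn.commit()
```

A fuller version would give parents, godparents, and owners their own person rows and link tables, but even this small sketch shows why the fields must be decided up front: the schema is the diagram.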






February 9, 2016

The Importance of Fields in Database Projects



We discussed in previous posts the importance of selecting representative fields when creating a database based on historical records. It is worth returning to this point because of how much it matters when designing a functional digital database. We all know that historical sources contain disparate universes of data. Historians, in general, extract from the documents what they need for their own research. This selectiveness, inherent to the historiographical craft, makes sources manageable for us. We simplify, mutilate, and make documents “legible” in order to answer our own questions. We ignore or overlook elements that we consider insignificant for our research. For instance, if we are working on different types of sources, such as inquisitorial and plantation records, and we are looking at the religious practices of Africans in the Americas, we are going to privilege the testimonies of slaves in the legal trial, or their ethnicities recorded in plantation papers. We will probably overlook the sugar-mill machinery because it is not needed to make our point. However, if we are creating a database of plantation records from Louisiana, and that database aims to be comprehensive, we would probably want to include as many fields as possible, including the sugar-mill machinery. When building a historical digital database, it is crucial to think about it in the broadest possible way. A database is not just an individual enterprise serving our particular research. It is a repository for potentially multiple types of historical inquiry.

However, as with a conventional monograph, we need a central theme for a database. It is essential to be clear about our subject, because the fields need to be connected to one another around the main topic. For instance, slaves themselves are the main protagonists of a database on runaway slaves. In a relational diagram of fields, the slaves sit at the center, while owners, physical marks, date of capture, and “nation” are subfields attached to the slave, the main entity. Take now the example of the most successful digital project on the slave trade: the Trans-Atlantic Slave Trade Database (TSTDB). Its main subject is the slave ship. Every field in the database is centered on the vessels transporting forced human cargoes from Africa to the Americas. Variables such as flags, date of departure, captain, owners, number of slaves, or mutiny on board are all tied to a particular ship. The TSTDB was built from diverse types of documents located around the globe. These disparate records were written in different languages, for diverse purposes, over more than three centuries, and by disparate historical actors. Many of these documents had been used before by historians writing their classic monographs, and some of those historians later collaborated to enlarge the TSTDB. Therefore, the question to ask is how it was possible to translate such diverse historical sources into a single and coherent project without losing sight of comprehensiveness.

First of all, the authors of the project are renowned specialists not only in the slave trade but also in quantitative studies. They simplified in order to standardize. I think this is the key to creating a manageable digital database when the universe of documents is extremely heterogeneous. After careful study, and based on their years of experience, the authors of the TSTDB determined which fields were likely to show up in documents related to the Atlantic slave trade. For instance, documents usually mention information such as the ship’s name, the captain, the number of captives, the date of departure or arrival, or the nationality of the vessel. The fact that the name of the vessel is sometimes missing does not diminish the importance of including that field. In the same way, the fact that the color of the vessel is mentioned in some documents is not a reason to include that information as an individual field. Why? Because the aesthetics of the ship do not appear regularly in the sources. As a consequence, that feature does not deserve its own field. If we created a field for every detail in the documents, the result would be an oddly high number of empty fields. The database would not be functional.

The other element we have to take into account is that during the process we will deal with software developers and their programming languages. They need a clear project based on coherent and interrelated fields. Programmers in general, and in particular those accustomed to creating databases from contemporary data, do not completely understand our initial intention of putting together a database based on fragmentary data. Take the example of a programmer who has built digital platforms for credit card companies. He or she has been databasing customers and is used to a coherent and complete set of data. Historians, by contrast, usually have to deal with fragmentary data, so programmers have to create relationships between fields that may or may not be entirely populated. Second, historians often resist simplifying their information when it comes to formulating their digital projects. This attitude rests on epistemological principles that make sense while writing monographs but that are not completely functional when creating a digital database. This is not a matter of gathering all the data we think is, or could be, significant for potential research. We have to choose fields that regularly appear in the documents in order to standardize them, which is to say, to make a functional digital database. Our solution for exceptional or unusual data is an empty box where we record complementary information that did not make it into a separate field because of its lack of representativeness. Fortunately, we did not face that issue while creating BARDSS. Our database is based on an extremely coherent set of information regarding time and space. After all, baptismal records were, from the beginning, intended to be a legible and coherent collection of data on a population. In the next post, we will show some documents, how we extracted the information from them, and how we transformed it into a relational diagram.




February 7, 2016

Databasing Historical Records: Some of the Challenges


Structuring a database is not an easy task. During this year of work, we have faced many challenges that have required great intellectual effort and reflection. Nevertheless, I have heard from “digital humanists” and programmers that, because we have a software developer, we are not making the database; someone is making it for us. The underlying argument is that we need knowledge of basic programming principles such as HTML and CSS to claim authorship in the making process. Having those programming skills today is helpful. However, the fact that our participation in the programming is limited does not mean we are not the main creators of the database. This post describes some of the main challenges that make us, the historians, crucial for this type of project, and it is, in part, an answer to technocratic points of view on the relationship between historians and software developers.

First, the concept of the project—databasing baptismal records—is ours. This project is not something anyone could have imagined without the proper historical training. You need to know about the sources, their internal logic, the institutions that produced them, and paleography, along with other language skills. It is important to decide which fields can be extracted from the sources without violating the integrity of the documents. We have to respect historical concepts and recognize that their meanings changed over time. We decided how to organize the fields in a coherent and hierarchical way. We need to translate our needs to programmers without historical training. We, the historians, are the most important actors. Thus, HTML and CSS play a minor role in conceiving the idea. The development work is crucial, but it should not be confused with this first step. This holds for any case in which social scientists rely on programmers to materialize their projects.

We had important elements in our favor when we started this project. First, digitized copies of the original documents are available online. The project “Ecclesiastical & Secular Sources for Slave Societies” (ESSSS) has digitized and posted online the parish records from Colombia, Brazil, Cuba, and Florida. Without this amazing repository, our database would have been impossible. These baptismal records are geographically, linguistically, and temporally diverse, but, due to the centralized nature of the Catholic Church, they are also homogeneous sources, regardless of language, period, and region. This circumstance makes them the perfect candidate for a transnational standardized database. It also makes it feasible to move the data from the digitized documents into an accessible, searchable, malleable, and “cleaner” digital format. It sounds easier than it is, though.

Defining the categories, or fields, that will be in the search tool is definitely challenging. Even when the documents are homogeneous, new information often shows up, and we need to decide whether it deserves an individual field. Databases must have a limited universe of regular fields to remain functional. We restricted our variables to those that regularly appear in the documents; those that do not show up frequently are included in the field “Miscellaneous.” Deciding on the fields is not the only challenge; naming them is another difficult step. Take the example of race and ethnicity. The categories, language, and meanings of race differ over time and by region. For instance, there are sometimes equatable categories of race in the Portuguese- and Spanish-speaking worlds, while Anglophone regions have had different definitions of race, and in both cases racial categories are subject to change over time. We do not want to violate the documents; thus, we kept race as it appears in the sources, in the original language. Something similar happens with African ethnic designations in the Americas. Across different regions, African origins are recorded in the documents as “nations.” We keep the term “nation” as it appears in the document, although sometimes these categories do not represent an ethnic identity that carried meaning in an African context. These decisions came after long discussions and after reading the most important historiography on the topic. There is always great room for disagreement. The next post will discuss some elements we took into account while structuring our fields.




February 5, 2016

This is my story: The beginning of reclaiming the past to look to the future


This is my story: Detroit 1967 is in the infancy of its development. So, what is it again? It is a multimedia archive and repository that serves to catalog and historicize this canonical and significant moment in the 20th century through oral histories from eyewitnesses to, and participants in, the rebellion. This endeavor is the continuation of a project of promise and curiosity that started in summer 2009 while I was interning at WXYZ ABC 7, the ABC affiliate in Detroit. Much of what has been written and indexed into the historical record is ahistorical, asociocultural, and asocioeconomic, missing a qualitative and critical ethnographic approach.


Image “July, 1967 — Investigation Team checks conditions at Washtenaw County Jail, in cell filled with detainees from the Detroit riots” by Flickr user Wystan used under CC BY 2.0

As a media professional and oral historian, I’ve arranged a couple of interviews and reached out to a few university libraries (University of Michigan, Rutgers University, and Wayne State University) for research assistance, as they have materials that would support my research and my educational target of free and accessible information. I’m also reading several texts on urban rebellion, Detroit, and racial segregation to complement all of the newspaper stories I’ve read in The Michigan Chronicle, The Detroit Free Press, and The Detroit News.

On the technical side, to build the website I’m using Omeka, a content management system developed at George Mason University for the humanities. I used it in my master’s program, so I’m quite familiar with the program and its setup. Without getting excessively techie: to make the material available, the Oral History Metadata Synchronizer (OHMS) is a plug-in that will let my video and transcripts work together as one body of scholarship.

The research will transform the digital space with phenomenal stories from willing interviewees and, from there, begin to change the narrative of the four days of chaos, the city, and the nearly fifty years that followed into one of unrelenting perseverance. Following the uprising, Detroit became an urban scientific experiment, poked, prodded, exploited, and devastated. In fall 2009, Time ran a special report announcing a year-long assignment focused on Detroit, examining what went wrong with the Motor City. Their report would confirm my scientific-experiment theory but also expose other massive infrastructure issues that to some extent seemed orchestrated, e.g. the deindustrialization of the city, massive white and Black flight, and job outsourcing.

This is my story: Detroit 1967 will get people to speak their truth about these events in time for the 50th anniversary, and add to the story of what happened to this once-thriving mecca.

If you would like to contribute to this project or know someone who would be of great value, please send me an email at