
Blog

Mapping Morton Village: Writing the Content

By Autumn Beyer | February 11, 2016

In this post, I would like to discuss what will be included in the Mapping Morton Village interactive map. For the past several weeks, Nikki Silva and I have been working on the written content of Mapping Morton Village. We decided to write the content of the site with the public in mind, focusing on general background information. The map will have multiple layers showing the extent of each year’s excavations. Each layer will highlight a different aspect of archaeological excavation and research, as well as give examples from Morton Village.


Structuring the Fields

By jfelipe195 | February 10, 2016

It is time to show the fields we are using in our Baptismal Records Database for Slave Societies (BARDSS). In previous posts, we pointed out that this database was made possible by a project hosted at Vanderbilt University and led by Professor Jane Landers. Landers and her team have been traveling to different places in the Americas to digitize endangered parish records, which they have uploaded to the web for free public access. Although we are using only the baptismal records of Africans and their descendants, the Ecclesiastical and Secular Sources for Slave Societies collection also contains burial, marriage, and many other types of civil records. All of these documents share a particularity that makes them a perfect candidate for a digital database project: regardless of period, place of origin, or language, the records are quite homogeneous, thanks to the centralized nature of the Catholic Church. Thus, we do not face the disparity of information that similar digital projects have had to confront.

This is an example of a baptismal record from the parish of “San Carlos” in Matanzas, Cuba.

[Image: baptismal record]

These are some of the fields from this particular baptismal record:

1. Date of baptism: Sunday, May 30, 1830
2. Priest: D. Manuel Francisco Garcia
3. Age category: “Parbulo” (infant)
4. Date of birth: May 2nd, 1830
5. Filiation: legitimate (born to married parents)
6. Father’s name: Francisco (also a Criollo)
7. Mother’s name: Maria de la O
8. Nation: Ganga (African denomination used in Cuba)
9. Legal status: slave
10. Owner: D. Francisco Hernandez Fiallo
11. Name of the baptized individual: Felipe
12. Godmother’s name: Ceferina
13. Godmother’s African “nation”: Mina
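
Fields like these map naturally onto a relational structure. As a rough sketch only, with table and column names of my own invention rather than the actual BARDSS schema, the example record could be stored like this:

```python
import sqlite3

# In-memory database for illustration; the real BARDSS schema may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    id INTEGER PRIMARY KEY,
    name TEXT,
    nation TEXT,          -- African "nation" as written in the document
    legal_status TEXT     -- e.g. 'slave', 'free'
);
CREATE TABLE baptism (
    id INTEGER PRIMARY KEY,
    baptized_id INTEGER REFERENCES person(id),
    priest TEXT,
    owner TEXT,
    age_category TEXT,    -- kept in the original language, e.g. 'Parbulo'
    date_of_baptism TEXT
);
""")

# Insert the example record from the San Carlos parish.
cur = conn.execute(
    "INSERT INTO person (name, nation, legal_status) VALUES (?, ?, ?)",
    ("Felipe", "Ganga", "slave"),
)
felipe_id = cur.lastrowid
conn.execute(
    "INSERT INTO baptism (baptized_id, priest, owner, age_category, date_of_baptism) "
    "VALUES (?, ?, ?, ?, ?)",
    (felipe_id, "D. Manuel Francisco Garcia", "D. Francisco Hernandez Fiallo",
     "Parbulo", "1830-05-30"),
)

# A join reassembles the baptismal record from its related tables.
row = conn.execute(
    "SELECT p.name, p.nation, b.owner FROM baptism b "
    "JOIN person p ON p.id = b.baptized_id"
).fetchone()
print(row)  # ('Felipe', 'Ganga', 'D. Francisco Hernandez Fiallo')
```

Splitting people out into their own table is what lets fathers, mothers, and godparents, each with their own “nation” and legal status, reference the same structure rather than being flattened into one row.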

Baptismal records are fairly homogeneous across periods and locations:

[Image: comparison of baptismal records]

Finally, after some discussion and after comparing baptismal records from diverse regions and periods, we created a relational diagram. It shows all the fields in BARDSS and the hierarchical relations among them:

[Image: BARDSS relational diagram]


The Importance of Fields in Database Projects

By jfelipe195 | February 9, 2016

We discussed in previous posts the importance of selecting representative fields when creating a database from historical records. It is worth returning to this point because of how much it matters when designing a functional digital database. Historical sources contain disparate universes of data. Historians generally extract from documents what they need for their own research. This selectiveness, inherent to the historiographical craft, makes sources manageable for us: we simplify, mutilate, and make documents “legible” in order to answer our own questions, ignoring or overlooking elements we consider insignificant for our research. For instance, if we are working with different types of sources, such as inquisitorial and plantation records, and we are studying the religious practices of Africans in the Americas, we will privilege the testimonies of slaves in legal trials or the ethnicities recorded in plantation papers. We will probably overlook the sugar mill machinery, because it is not relevant to our point. However, if we are creating a database of plantation records from Louisiana, and that database aims to be comprehensive, we would probably want to include as many fields as possible, including sugar mill machinery. When building a historical digital database, it is crucial to think in the broadest possible way. A database is not just an individual enterprise serving our particular research; it is a repository for multiple potential types of historical inquiry.

However, as with a conventional monograph, a database needs a central theme. We must be clear about our subject, because the fields need to be connected to one another around the main topic. For instance, slaves themselves are the protagonists of a database on runaway slaves: in a relational diagram of fields, the slave sits at the center, while owners, physical marks, date of capture, and “nation” are subfields attached to the slave, the main entity. Take the example of the most successful digital project on the slave trade, the Trans-Atlantic Slave Trade Database (TSTDB). Its main subject is the slave ship. Every field in the database centers on the vessels that transported forced human cargoes from Africa to the Americas. Variables such as flags, date of departure, captain, owners, number of slaves, or mutinies on board are all instances related to a particular ship. The TSTDB was built from diverse types of documents located around the globe. These disparate records were written in different languages, for diverse purposes, over more than three centuries, and by disparate historical actors. Many of these documents had been used before by historians writing classic monographs, and some of those historians later collaborated to enlarge the TSTDB. The question to ask, therefore, is how it was possible to translate such diverse historical sources into a single, coherent project without losing sight of comprehensiveness.

First of all, the authors of the project are renowned specialists not only in the slave trade but also in quantitative studies. They simplified in order to standardize. I think this is the key to creating a manageable digital database when the universe of documents is extremely heterogeneous. After careful study, and based on years of experience, the authors of the TSTDB determined which fields were likely to show up in documents related to the Atlantic slave trade. For instance, documents usually mention information such as the ship’s name, the captain, the number of captives, the date of departure or arrival, or the nationality of the vessel. The fact that the vessel’s name is sometimes missing does not diminish the importance of including that field. Likewise, the fact that some documents mention the color of the vessel is not a reason to include that information as an individual field. Why? Because the aesthetics of the ship do not appear regularly in the sources, and a feature that appears only occasionally does not deserve a field of its own. If we created a field for every detail in the documents, the result would be an oddly high number of empty fields, and the database would not be functional.

The other element we have to take into account is that, during the process, we will be working with software developers and their programming languages. They need a clear project based on coherent, interrelated fields. Programmers in general, and particularly those accustomed to building databases from contemporary data, do not fully understand at first our intention of putting together a database based on fragmentary data. Take the example of a programmer who has built digital platforms for credit card companies: he or she has been databasing customers and is used to a coherent, complete set of data. Historians, by contrast, usually deal with fragmentary data, so programmers have to create relationships between fields that may or may not be fully populated. Second, historians often resist simplifying their information when formulating digital projects. This attitude rests on epistemological principles that make sense when writing monographs but are not entirely functional when creating a digital database. This is not a matter of gathering all the data we think is, or could be, significant for potential research. We have to choose fields that appear regularly in the documents in order to standardize them; that is what makes a digital database functional. Our solution for exceptional or unusual data is an open text box where we record complementary information that did not earn a separate field because of its lack of representativeness. Fortunately, we did not face that issue while creating BARDSS: our database is based on an extremely coherent set of information with regard to time and space. After all, baptismal records were intended from the beginning to be a legible and coherent collection of population data. In the next post, we will show some documents, how we extracted the information from them, and how we transformed it into a relational diagram.
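
The approach described above, fixed fields for whatever appears regularly plus a catch-all box for the rest, can be sketched in a few lines. The field names here are illustrative assumptions, not the actual BARDSS schema:

```python
# Sketch of the "regular fields plus miscellaneous box" approach.
# Field names are illustrative assumptions, not the actual BARDSS schema.
REGULAR_FIELDS = {"name", "nation", "legal_status", "owner", "date_of_baptism"}

def normalize(raw_record: dict) -> dict:
    """Split a transcribed record into standardized fields and a free-text
    'miscellaneous' box for details too rare to deserve a field of their own."""
    record = {field: raw_record.get(field) for field in REGULAR_FIELDS}
    extras = {k: v for k, v in raw_record.items() if k not in REGULAR_FIELDS}
    record["miscellaneous"] = "; ".join(
        f"{k}: {v}" for k, v in sorted(extras.items())
    )
    return record

raw = {
    "name": "Felipe",
    "nation": "Ganga",
    "legal_status": "slave",
    "owner": "D. Francisco Hernandez Fiallo",
    "date_of_baptism": "1830-05-30",
    "ship_color": "unknown",           # rare detail: goes to miscellaneous
    "marginal_note": "record damaged", # rare detail: goes to miscellaneous
}
print(normalize(raw)["miscellaneous"])
# marginal_note: record damaged; ship_color: unknown
```

The point of the sketch is that the regular fields stay queryable and never go missing from the schema, while exceptional details survive as searchable free text instead of spawning mostly-empty columns.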


Databasing Historical Records: Some of the Challenges

By jfelipe195 | February 7, 2016

Structuring a database is not an easy task. During this year of work, we have faced many challenges that have demanded great intellectual effort and reflection. Nevertheless, I have heard from “digital humanists” and programmers that because we have a software developer, we are not really making the database; someone is doing it for us. The underlying argument is that we need knowledge of basic programming principles such as HTML and CSS to claim authorship in the making process. Having those skills today is helpful. However, the fact that our participation in programming is limited does not mean we are not the main creators of the database. This post describes some of the main challenges that make us, the historians, crucial for this type of project, and it is, in part, an answer to technocratic views of the relationship between historians and software developers.

First, the concept of the project, databasing baptismal records, is ours. This project is not something anyone could have imagined without the proper historical training. You need to know the sources, their internal logic, the institutions that produced them, paleography, and other language skills. It is important to decide which fields can be extracted from the sources without violating the integrity of the documents. We have to respect historical concepts and recognize that their meanings changed over time. We decided how to organize the fields in a coherent, hierarchical way. We need to translate our needs to programmers without historical training. We historians are the most important actors; HTML and CSS play a minor role in conceiving the idea. The development work is crucial, but it should not be confused with this first step. The same holds for any case in which social scientists rely on programmers to materialize their projects.

We had important advantages when we started this project. First, digitized copies of the original documents are available online. The project “Ecclesiastical & Secular Sources for Slave Societies” (ESSSS) has digitized and posted online the parish records from Colombia, Brazil, Cuba, and Florida. Without this amazing repository, our database would have been impossible. These baptismal records are geographically, linguistically, and temporally diverse, but, due to the centralized nature of the Catholic Church, they are also homogeneous sources, regardless of language, period, and region. This circumstance makes them the perfect candidate for a transnational standardized database. It also makes it feasible to move the data from the digitized documents into an accessible, searchable, malleable, and “cleaner” digital format. It sounds easier than it is, though.

Defining the categories or fields that will appear in the search tool is definitely challenging. Even when the documents are homogeneous, new information often shows up, and we need to decide whether it deserves an individual field. Databases must have a limited universe of regular fields to remain functional. We restricted our variables to those that appear regularly in the documents; those that do not show up frequently are included in the field “Miscellaneous.” Deciding on the fields is not the only challenge. Naming them is another difficult step. Take the example of race and ethnicity. The categories, language, and meanings of race differ over time and by region. For instance, there are sometimes equatable categories of race in the Portuguese- and Spanish-speaking worlds, while Anglophone regions have had different definitions of race, and in both cases racial categories are subject to change over time. We do not want to do violence to the documents, so we kept race as it appears in the sources, in the original language. Something similar happens with African ethnic designations in the Americas. Across different regions, African origins are recorded in the documents as “nations.” We keep the term “nation” as it appears in each document, although these categories sometimes do not represent an ethnic identity that carried meaning in an African context. These decisions came after long discussions and after reading the most important historiography on the topic, and there is always ample room for disagreement. The next post will discuss some elements we took into account while structuring our fields.
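
One way to keep terms exactly as they appear in the sources while still supporting modern search is to store the verbatim term alongside an optional gloss. The small gloss table below is a hypothetical illustration of the idea, not BARDSS data:

```python
# Sketch: preserve the source term verbatim and attach an optional gloss
# for searchability. The gloss mapping below is purely illustrative.
GLOSSES = {
    "parbulo": "infant",
}

def describe(term: str) -> dict:
    """Return the verbatim term plus a gloss when one is known.
    The source term itself is never translated or overwritten."""
    return {
        "original": term,                    # as written in the document
        "gloss": GLOSSES.get(term.lower()),  # None when no gloss exists
    }

print(describe("Parbulo")["gloss"])  # infant
print(describe("Ganga")["gloss"])    # None: "nation" terms are kept as-is
```

Keeping the original term as the canonical value means the database never claims more than the document says; the gloss layer can grow or be revised without touching the transcribed data.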


This is my story: The beginning of reclaiming the past to look to the future

By farleyj7 | February 5, 2016


[Image: photograph by Jim Hubbard]

Detroit Rebellion, 1967. Source: Google Images.

This is my story: Detroit 1967 is in the infancy of development. So, what is it again? It is a multimedia archive and repository that serves to catalog and historicize this canonical and significant time in the 20th century with oral histories from eyewitnesses and participants of the rebellion. This endeavor is a continuation of a project of promise and curiosity that started in summer 2009 while I was interning at the ABC affiliate in Detroit, WXYZ ABC 7. Much of what has been written and indexed into the historical record is ahistorical, asociocultural, and asocioeconomic, and it is missing a qualitative and critical ethnographic approach.

As a media professional and oral historian, I’ve arranged a couple of interviews and reached out to a few university libraries (University of Michigan, Rutgers University, and Wayne State University) for research assistance, as they have materials that would support my research and my educational target of free and accessible information. Also, I’m reading several texts on urban rebellion, Detroit, and racial segregation to complement all of the newspaper stories I’ve read in The Michigan Chronicle, The Detroit Free Press, and The Detroit News.

On the technical side of the project, I’m using Omeka to build the website; it is a content management system developed by George Mason University for the humanities. I used it in my master’s program, so I’m quite familiar with the program and its setup. Without getting excessively techie: to make the material available, the Oral History Metadata Synchronizer is a plug-in that will let my video and transcripts work together as one body of scholarship.

The research will transform the digital space with phenomenal stories from willing interviewees and, from there, begin to change the narrative of the four days of chaos, the city, and the nearly fifty years that followed into one of unrelenting perseverance. Following the uprising, Detroit became an urban scientific experiment, poked, prodded, exploited, and devastated. In fall 2009, Time ran a special report announcing a year-long assignment focused on Detroit, examining what went wrong with the Motor City. Their report would confirm my scientific-experiment theory but also expose other massive infrastructure issues that to some extent seemed orchestrated, e.g., the deindustrialization of the city, massive white and Black flight, and job outsourcing.

This is my story: Detroit 1967 will get people to speak their truth of events in time for the 50th anniversary and add to the story of what happened to this once-thriving mecca.

If you would like to contribute to this project, or know someone who would be of great value to it, please send me an email at detroitphd2019@gmail.com.

AHA presentation and discussion of BARDSS

By jfelipe195 | February 2, 2016

In January, we presented our project, the Baptismal Records Database for Slave Societies, at the Annual Meeting of the American Historical Association. This was the second time we had shown the project in public. The first was during a workshop organized at Vanderbilt University by Professor Jane Landers in November 2015. On that occasion, we presented BARDSS to renowned scholars working on several digital projects related to African slavery and the Atlantic slave trade. It was an exciting opportunity to discuss many topics of great concern to digital humanists working on databases today, such as how to standardize fields, how to combine databases on similar but not identical topics, or how to define conflicting concepts such as race and nation, which change dramatically across the Americas. After two intense days of discussion, the one implicit agreement was that there is a need to link diverse but related digital projects on slavery. In that direction, Professor Walter Hawthorne coordinated a group of panels for the Annual Meeting of the American Historical Association.

Our presentation at the AHA added little to what we had already done three months before at Vanderbilt. By that point, we had agreed with our programmer to work on the visualization/search interface, so we presented how we envision that interface. Most important, we showed the different search tools that users will have available and the charts and graphics the system will generate from a search. We discussed again some of the challenges we faced while drafting BARDSS. Some of them, already noted, were, for instance, choosing which fields from the documents deserved to be in the main search tool rather than in the miscellaneous section, and deciding how to treat different languages across the Americas. At the beginning of this enterprise, we were not sure whether we should translate racial designations, or even whether that would be possible. We kept race designations in their original languages; the reasons for that decision will be the subject of another post. Here, I would rather focus on one of the main issues we discussed at the AHA: is it possible to merge different databases into a single database?

The question arose because of the similarities among the projects presented on that panel, and in particular because Professor Patrick Manning presented his interesting project of creating a meta-database of human population. The questions were addressed mainly to him because of the ambitious character of his project. There are basic fields, we all agreed, that can be compared or subsumed into a single project, such as sex, age, height, and profession. But other databases are tied to specific universes of documents and make little sense outside their documentary logic. A database on runaway slaves has particularities, such as date of capture, that do not exist in other types of databases. The same applies to projects on liberated Africans, which contain non-replicable data such as the capture of the ships on which the slaves were transported to the Americas. The main challenge, and it remains an open issue, is to create a dialogue among different projects: is it possible to create at least some sort of soft linkability? This discussion is still open to more points of view.
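
As a toy illustration of that soft linkability, two databases with different schemas can at least be projected onto the basic fields they share, while project-specific fields survive off to the side. All records and field names below are invented for the example:

```python
# Toy illustration of "soft linkability": project each record onto the core
# fields the databases share, and keep the rest under a per-project key.
# All records and field names here are invented for the example.
CORE_FIELDS = ("name", "sex", "age")

def to_core(record: dict, project: str) -> dict:
    core = {f: record.get(f) for f in CORE_FIELDS}
    core["source_project"] = project
    # Project-specific fields survive, but outside the shared core.
    core["extra"] = {k: v for k, v in record.items() if k not in CORE_FIELDS}
    return core

runaways = [{"name": "Juan", "sex": "M", "age": 24,
             "date_of_capture": "1805-07-12"}]
liberated = [{"name": "Aina", "sex": "F", "age": 17, "ship": "Fabiana"}]

merged = ([to_core(r, "runaways") for r in runaways]
          + [to_core(r, "liberated_africans") for r in liberated])
print([m["name"] for m in merged])  # ['Juan', 'Aina']
```

The shared core supports cross-project queries (everyone of a given sex and age, say), while fields like date of capture stay meaningful only within their own documentary logic, which is roughly the distinction drawn in the panel discussion above.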


Simulacra and Simulation and my Journey into the Third Order of Copyright Law

By Sara Bijani | January 30, 2016

I’ve been thinking a lot about what it means to protect a representation these days. Anyone remember reading Baudrillard? I remember reading that whole treatise on simulated reality years ago and associating the whole thing with war and television. These days, I’m pretty sure he was thinking about copyright. Just kidding. I’m pretty sure he was thinking about high modernity and everything that travels with it, including copyright. “Capital, which is immoral and unscrupulous, can only function behind a moral superstructure,” a hellish and mundane everyday space from which rules of ownership and entitlement emanate, along with all the other things that make the late industrial age go round.[1]


The Process of Digitization

By Bernard C. Moore | January 29, 2016

My past couple of posts have been more on the political and ethical side of digitizing materials for the Namibia Digital Repository. This post will approach the project from the other side: the process of digitization. For those conducting historical research, digitizing materials is a necessity if we are ever going to finish these dissertations in an organized and structured manner. So even for those who aren’t pursuing a digitization project for their CHI Fellowship, this blog post may help you in other ways.


Changing Directions – Introducing TOMB

By Lisa Bright | January 28, 2016

As Katy mentioned in our recent Digital Archaeology Institute blog post, she and I have decided to take our project in a different direction. We originally proposed a project called ossuaryKB, a mortuary method knowledge base. However, as we worked toward the project over the last semester, we hit quite a few roadblocks. After sitting down recently, we realized that ossuaryKB wasn’t really the project we had a passion for. What we really wanted to make was a tool oriented more toward teaching the public about mortuary archaeology. So we are proud to announce our new project… TOMB: The Online Map of Bioarchaeology.

TOMB will center around an interactive map featuring case studies and exemplars from mortuary archaeology and bioarchaeological studies. The site will be a space for students and the public to learn more about this field, and still serve as a place for anthropologists to share their research and provide updates. For more details on the project description, please see Katy’s blog post.

This refocusing of the project means that my goals for CHI will also change. Previously I’d discussed the challenges surrounding building a SQL database for ossuaryKB. TOMB will require a different set of technical resources. Over the next three months, I will build the functional structure of the site using a combination of Bootstrap and Leaflet. Specifically, I will be using the open web mapping application template developed by Bryan McBride called bootleaf.

The bootleaf template is available on GitHub (https://github.com/bmcbride/bootleaf) and is well commented. Although I’ve created a mapping-centered website before using Bootstrap (Mortuary Mapping), I used CartoDB to make the maps, so this will be my first time using Leaflet. Thankfully, my project partner Katy used bootleaf to create IELDRAN and has excellent comments on her use of bootleaf in her GitHub repository (https://github.com/bonesdontlie/Commented-ieldran).

We’re both very excited about the potential TOMB creates, and I look forward to sharing my bootleaf learning experience.

Mapping Morton Village: Creating the Interactive Map

By Nikki Silva | January 27, 2016

For the past two weeks, as Autumn Beyer worked on coding our site, I have been working on the interactive map for our joint CHI Fellowship project, Mapping Morton Village. I had some problems at the beginning, including a computer that would not function and some confusion about the format required for the map data, both of which have been remedied. We are using Mapbox to create the maps, which requires the maps to be georeferenced (i.e., assigned real-world coordinates). I already had shapefiles for the map; however, our map was not georeferenced to real-world coordinates but to our own site grid. I contacted the co-project director of the Morton Village Archaeological Project, Dr. Michael Conner at Dickson Mounds Museum in Lewistown, IL, and he was able to send me georeferenced shapefiles (thanks, Mike!).
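
For readers unfamiliar with georeferencing, converting a local site grid into real-world coordinates is essentially an affine transformation: rotate, scale, and shift. The origin, rotation, and scale below are invented numbers for illustration, not Morton Village’s actual grid parameters, and in practice GIS software handles this when re-projecting shapefiles:

```python
import math

# Sketch of georeferencing a local site grid: rotate, scale, and shift
# grid coordinates into projected real-world coordinates (e.g., UTM meters).
# Origin, rotation, and scale are invented values for illustration only.
ORIGIN_E, ORIGIN_N = 250_000.0, 4_480_000.0  # grid (0, 0) as UTM easting/northing
ROTATION = math.radians(30)                  # angle between grid north and true north
SCALE = 1.0                                  # meters per grid unit

def grid_to_world(x: float, y: float) -> tuple[float, float]:
    """Map site-grid (x, y) to (easting, northing) via an affine transform."""
    e = ORIGIN_E + SCALE * (x * math.cos(ROTATION) - y * math.sin(ROTATION))
    n = ORIGIN_N + SCALE * (x * math.sin(ROTATION) + y * math.cos(ROTATION))
    return e, n

# A point 100 grid units east of the site datum along the rotated grid axis.
e, n = grid_to_world(100.0, 0.0)
print(round(e, 1), round(n, 1))  # 250086.6 4480050.0
```

This is why the georeferenced shapefiles matter: once every grid point has a real-world easting and northing, tools like Mapbox can place the excavation map correctly on a basemap.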