Doing Digital History


Category: Slavery

Final Portfolio Reflection

The Spring semester of 2019 is now drawing to a close. Overall, I am glad I took this class. I learned about many of the digital methods that historians and academics in related fields use to analyze historical topics and present them to various audiences. While slavery was not at the top of the list of topics I was eager to research, I could nonetheless appreciate the quality of work that was and is being done by academics and even volunteers to keep the conversation going about this complicated subject. The effort it takes not only to transcribe a letter or other primary source, but also to write detailed, descriptive metadata and to do a close reading of the material in order to construct a narrative or make an argument, is mind-boggling, as I have now experienced this process first-hand thanks to this class. I commend the people who do this kind of work.

As the semester has gone on, I have gotten bogged down with more and more work in the class, to the point where I haven't fully been able to catch up. Still, my thinking about the uses of digital methods and historical thinking has changed over time because of the class and the activities I have done throughout the semester. Within the first few weeks, my classmates and I were introduced to the historical thinking skills we would be developing over the course of the term. This primer was very useful to me and is something I can take away from this course even after it is over. Some of what we learned was typical of what I had learned in my past years of schooling about understanding a source and researching a topic. On the other hand, I also learned about how historians think, giving me insight into a mindset that lets me explore historical materials more deeply.

Over the next few weeks, we learned about different kinds of digital tools historians use to analyze historical materials. The only tool I was already somewhat familiar with was ESRI, though I hadn't used the StoryMap tool before; I had only worked with geospatial data in a past class. The other tools, including MALLET, Voyant, narrative map tools like StoryMap JS, and Flourish, all have their uses and are good to know for anyone looking to do historical data analyses and visualize the results. As I learned about these tools, I was struck by the wide variety of methods historians and other academics can use to analyze a set of materials. For example, if a researcher wanted to identify topics in a large set of text-heavy documents, a text analysis tool like Voyant could cut the time down significantly compared to combing through the material manually.

Now that I have more of an understanding of how to think like a historian and use digital tools and methods, I have revisited my visualization about the number of enslaved people embarked and disembarked on Intra-American slave trade voyages to revise it. The revised edition can be found here: https://public.flourish.studio/visualisation/338756/. I changed the dependent variable from number of people embarked and disembarked, which wasn’t telling much of a story, to mortality rate. I also decided to use the database for Trans-Atlantic voyages in addition to the Intra-American voyage database. That way, there would be a comparison between the two types of voyages, which may be of interest to someone wanting to compare the two experiences. Finally, I added a source attribution at the bottom of the new visualization, which was missing in my first data visualization.

I think the growth in my understanding of how to visualize data is reflected in the changes from my old visualization to the new one. My first piece, I feel, did not convey a particularly compelling message. Comparing the total number of enslaved people embarked with the total number disembarked does show some change over time, but there wasn't much to conclude other than that Intra-American slave voyages didn't have a significant rate of disappearance and death among enslaved passengers, and that the number of people embarked in 25-year intervals continually increased from 1626 to 1800. Raw numbers can be somewhat compelling, but the barely visible gap between the number embarked and the number disembarked didn't seem to warrant the comparison. My new visualization, in contrast, has a more interesting story to tell. Expressing mortality as a percentage makes each death more significant and may compel the viewer to want to learn more. Comparing the Trans-Atlantic voyages with the Intra-American voyages shows that the former were deadlier than the latter, another point the earlier visualization lacked. These differences show how my thinking about constructing data visualizations has evolved.
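The revised metric is simple to express in code. A minimal Python sketch of the calculation follows; the counts are invented for illustration and are not figures from the Voyages databases:

```python
# Sketch: deriving a mortality rate (percent) from embarked/disembarked
# counts, as in the revised visualization. All numbers below are
# hypothetical, not taken from the Trans-Atlantic or Intra-American
# Slave Trade Databases.

def mortality_rate(embarked, disembarked):
    """Percentage of embarked people who did not disembark."""
    return 100 * (embarked - disembarked) / embarked

# Hypothetical totals for one 25-year interval, by voyage type
interval_totals = {
    "trans_atlantic": (1_000_000, 880_000),  # (embarked, disembarked)
    "intra_american": (60_000, 57_000),
}

for voyage_type, (emb, dis) in interval_totals.items():
    print(voyage_type, round(mortality_rate(emb, dis), 1))
```

Expressing the gap as a rate rather than two raw bars is what makes the Trans-Atlantic/Intra-American comparison legible at a glance.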

I would like to close this reflection with some thoughts on the kind of work I would like to do and the approaches I would like to learn going forward. Though I am not too interested in continuing to study historical methods or pursuing the topic of slavery, as my major is in the environmental sciences, several approaches interest me. I have some experience with programming languages like R and Python, so I have worked with packages for text analysis and data visualization. This class, Doing Digital History, has exposed me to other tools that could be used for those purposes. I am highly interested in looking into more specialized tools like Voyant. I also haven't done much with MALLET yet, so I would like to learn more about how to use that tool. I think the broad topics covered by this course gave me a good taste of the kinds of digital tools I could use to perform data analysis in my own area of study.

Narrative Map Construction

For the week of March 19-21, various narrative map making tools like StoryMapJS and ESRI StoryMap were shown to the class, and we were instructed to construct our own narrative map using one of the tools. The map’s purpose would be to tell a rich story about an event related to the readings we were doing along the way.

Information Architecture (Construction)

During the week of Feb 19-21, we did an in-class exercise where we learned how a website's information architecture is put together. I learned that digital history project websites can be structured using two general methods. These two methods change how the user can interact with a project's website and are used for different purposes.

Data Visualization (Construction)

For this activity, I used Flourish to construct a data visualization. The visualization I created can be seen here: https://public.flourish.studio/visualisation/324850

Intra-American Slave Trade graphic

My first Flourish visualization

I used information from the Voyages.org website's Intra-American Slave Trade Database to visualize the number of enslaved people embarked on intra-American slave trade voyages compared to the number disembarked, in 25-year intervals from 1601 to 1800. I used a bar/column chart because I thought it would make the comparison between the total number embarked and the total number disembarked on vessels carrying enslaved people throughout the Americas easy to see. As the side-by-side comparison shows, fewer people generally disembarked from voyages than embarked. The decrease is not significant, however, and the fates of the people who didn't get to disembark are unknown — they could have died or even escaped. There is another conclusion one can reach with this visualization: from 1626 onward, the number of enslaved people embarked and disembarked on intra-American voyages steadily increased, reaching a stark peak during the 1776-1800 interval.
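The grouping behind the chart can be sketched in a few lines. The records below are hypothetical (year, embarked, disembarked) tuples standing in for database rows, and the binning scheme assumes intervals starting at 1601:

```python
# Sketch: grouping voyage records into 25-year intervals, as in the
# Flourish bar chart. Records are invented, not real database rows.
from collections import defaultdict

records = [
    (1630, 120, 115),   # (year, embarked, disembarked)
    (1642, 200, 190),
    (1789, 450, 430),
]

totals = defaultdict(lambda: [0, 0])  # interval -> [embarked, disembarked]
for year, embarked, disembarked in records:
    # Map the year onto its 25-year bin: 1601-1625, 1626-1650, ...
    start = (year - 1601) // 25 * 25 + 1601
    interval = f"{start}-{start + 24}"
    totals[interval][0] += embarked
    totals[interval][1] += disembarked

for interval in sorted(totals):
    emb, dis = totals[interval]
    print(interval, emb, dis)
```

The binned totals per interval are exactly what a two-series column chart in a tool like Flourish expects as input.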

Data Visualization (Critique)

Over the past few weeks, I've been exploring digital history methods for analyzing data and producing visualizations. I have also learned how to critique data visualizations and analyze the ways they are used.

My process when I engage with and analyze historical data visualizations has not changed much from the beginning. I still look at the information about the creators of the visualization and any legends, keys, or how-to instructions before I examine the content of the visualization. I have known about this step for a long time and understand how crucial it is to understanding any kind of graphic. Getting the context of a visualization is arguably one of the most important things to do in order to understand it. Without knowledge of the circumstances under which a graphic was produced, the graphic loses its usefulness for asking or answering questions about the topic. In the case of historical data visualizations, that context includes the place of whatever is being visualized in history and certain key historical events. After the preliminary look-over, I would then examine the actual content and try to understand the message being communicated. I would interpret how the quantitative and/or qualitative data relate to each other based on the legend, labels, and so on, and on the type of representation being used, such as a bar chart, histogram, or scatter plot.

One way my process has changed is that, near or at the end, I now think about what the visualization is lacking and what it may have omitted, whether on purpose in an effort to mislead the viewer or due to an oversight by the creator. Before taking this class, I didn't think of using this technique to comprehend a visualization thoroughly, because it is difficult to think about what isn't there as opposed to what is. I believe adding this step to my process of analyzing a historical visualization is a beneficial improvement. An essay by Frederick W. Gibbs about critiquing data-driven historical visualizations mentions some additional questions I could ask when analyzing visualizations:

When creating representations of data largely done through software (and especially at large scales), must representations remain free of direct manipulation after an initial algorithmic rendering? Is it acceptable to alter a computed representation in order to highlight a particular feature? To what extent might that be considered subversive or misleading? To what extent is that simply better communication? Is the visualization more about the unadulterated output of the tool (even if unfortunately treated as a black box) or about communicating an interesting historical phenomenon?

Those questions relate to the larger question of whether the visualization needed to be developed solely with computational methods. This is another way to pick apart the context of a representation and think critically about what is being presented.


Gibbs, Frederick W. “New Forms of History: Critiquing Data and Its Representations.” The American Historian, 2016. http://tah.oah.org/february-2016/new-forms-of-history-critiquing-data-and-its-representations/.

Received Data and Derived Data (Critique)

Received data and derived data are categories into which various kinds of data can be sorted. These two categories have different uses due to the differences in their origins. Received data usually come from primary sources that were created to be used as data; examples include financial ledgers, membership rosters, and even census data. Derived data, meanwhile, are calculated from base data, hence “derived.” Both types of data can be tidied into a machine-readable format, which facilitates the data analysis process.

I think that tidying data is useful for historians because, as mentioned above, it makes data machine readable and facilitates analysis, which can reveal patterns and trends that wouldn't otherwise be found through manual work. It frees up time that can instead be spent making interpretations and discovering insights from the wealth of historical data that is available and growing day by day. The form in which the data was received or derived is significant because it shapes the way people can interact with and analyze the data. For example, if a data set is arranged into long, skinny columns, it would be best to analyze it with a tool like Python rather than Microsoft Excel.
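As a minimal sketch of what tidying looks like in practice: a small received-data table with one wide row per port and a column per year can be reshaped so that each observation gets its own row. The ports and figures here are invented for illustration:

```python
# Sketch: reshaping a "wide" received-data table into tidy long format.
# Each (port, year) observation becomes its own row. Data are invented.
wide = [
    {"port": "Havana",   "1790": 14, "1791": 21},
    {"port": "Kingston", "1790": 9,  "1791": 11},
]

tidy = [
    {"port": row["port"], "year": int(year), "voyages": count}
    for row in wide
    for year, count in row.items()
    if year != "port"          # skip the identifier column
]

for observation in tidy:
    print(observation)
```

In this long format, each row is one observation, which is the shape that tools like Python's pandas or R's tidyverse can filter, group, and plot directly.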

Data Map Construction

-Temporary Entry-

Data Map Critique

Data maps are a tool used in, among other things, historical analysis. They take data that can be georeferenced (tied to a physical location in space and a point in time) and visualize that data on a map, which makes the tool essentially geospatial in nature. An example of a data map can be found here: http://dsl.richmond.edu/emancipation/. The map at that link visualizes emancipation during the Civil War era in the American South. Below, I critique the technology used to construct data maps.

Data-driven geospatial visualizations and analyses have transformed people’s thinking about history. The creation of this technology changes the questions one can ask and potentially answer about the past. Now that data can be analyzed and placed on a map, one can ask questions about the time and locations of various historical events. Visualizations can convey a large amount of information quickly since people tend to process visual information more quickly than textual information. That makes data maps incredibly useful for gaining insights into both the spatial and temporal aspects of historical events.

Like most things, data maps have cons in addition to pros. Patterns can get lost in noisy data, making it hard to reach a conclusion. One can also lie with data maps: the amount of information shown at different scales can be used to obscure details or downplay a trend that might be visible at smaller or larger scales. This is easier to do with a static map that can't be zoomed in or out.

Critique of Narrative Maps

This week, we learned about spatial analysis and products like narrative maps that historians have constructed to analyze the lived history of people. Those who work on the Stanford University Spatial History Project view their work as different from traditional historical work in a few ways: it is collaborative rather than done individually; it focuses on complex, data-driven visualizations rather than texts, still images, or static maps; it uses digital methods and therefore computers; it is open-ended; and it focuses on space in addition to time rather than exclusively on time (White, 2010). Narrative maps, a subset of the tools used in such projects, are part of a movement in the historical field to bring to light narratives that derive their meaning not only from moments in time but also from locations one can pinpoint or trace on a map, using computers that can quickly and effectively handle large amounts of data.

White, Richard. “What Is Spatial History?” The Spatial History Project, February 1, 2010. http://web.stanford.edu/group/spatialhistory/cgi-bin/site/pub.php?id=29.

Text Analysis

The Colored Conventions Project is a digital transcription project aiming to “bring the buried history of nineteenth-century Black organizing to digital life.” With the help of volunteers, the minutes of “Colored Conventions” that took place from the 1830s until the end of the nineteenth century across the United States have been transcribed.

This week, we learned about text analysis using tools such as MALLET, Google N-Grams, and Voyant. I decided to use Voyant on the Colored Conventions Project Corpus to analyze the texts within it. I downloaded the files from the website, uploaded them to the Voyant Tools site, and got this analysis as a result. The top ten most frequently occurring words were, from most to least frequent: “convention,” “committee,” “people,” “colored,” “state,” “Mr.,” “shall,” “resolved,” “men,” and “motion.” This gives an idea of the language used in the majority of the texts in the corpus. Using the TermsBerry tool, one can see which of the most frequent terms were used in close proximity to which others, giving further insight into the topics of the texts. For example, “American” was used most often with, in order from most to least frequent, “society,” “people,” “citizens,” “slavery,” “government,” and “liberty.” With Voyant’s suite of tools, it is possible to get a good overview of the important terms in a corpus spanning many years and many lengthy documents that would be tedious, if not impossible, to read and summarize by hand.
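The kind of counting Voyant automates can be sketched with Python's standard library: term frequency (like the Terms list) and co-occurrence within a small window (roughly what TermsBerry visualizes). The two-sentence sample text stands in for the much larger corpus, and the stopword list is an assumption:

```python
# Sketch: term frequency and windowed co-occurrence, the two counts
# behind Voyant-style Terms and TermsBerry views. Sample text and
# stopword list are illustrative only.
import re
from collections import Counter

text = """The convention resolved that the committee report to the people.
The committee moved that the convention adjourn."""

words = re.findall(r"[a-z]+", text.lower())
stopwords = {"the", "that", "to", "a", "of"}
tokens = [w for w in words if w not in stopwords]

# Term frequency across the whole sample
freq = Counter(tokens)
print(freq.most_common(3))

# Co-occurrence: count unordered pairs of terms within 3 tokens of each other
window = 3
pairs = Counter()
for i, word in enumerate(tokens):
    for other in tokens[i + 1 : i + 1 + window]:
        pairs[tuple(sorted((word, other)))] += 1
print(pairs.most_common(2))
```

At corpus scale the same two counters, run over thousands of convention minutes, produce exactly the overview that would be impractical to compile by hand.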

© 2019 Doing Digital History. All rights reserved.
