
IT services blog


07/03/2023
Anson Parker

Stable Diffusion is in the news a lot, along with other AI image generators such as Midjourney and DALL-E.  Unlike those tools, it is open source.

One popular tutorial covers making QR codes that are more artistic: https://stable-diffusion-art.com/qr-code/

If you want to follow along on your own desktop, you'll need to work through tutorials like the one linked above.

Cost: free and open source

Use cases: generating a wide variety of imagery based on keyword prompts and as part of generalized workflows
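
If you'd rather drive Stable Diffusion from Python than from a web UI, here's a minimal sketch using Hugging Face's diffusers library (the model ID and prompt are illustrative, and a CUDA-capable GPU is assumed):

import torch
from diffusers import StableDiffusionPipeline

# download the model weights (several GB on first run) and move the pipeline to the GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# generate one image from a text prompt and save it
image = pipe("a watercolor map of a university library").images[0]
image.save("library.png")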

Support: Discord and Google are the best sources of support

06/23/2023
Anson Parker

In the past we've looked at Data Viz and exploratory analysis with Streamlit, used Streamlit as the backend for library open publishing research, and had some fun with GIS and Streamlit to study local canopies.  In this review we're looking at how ChatGPT can produce Streamlit boilerplate code and then how GitHub Codespaces may be used to do even faster prototyping with additional capabilities. 

In the rapidly evolving field of library sciences, leveraging innovative technologies is crucial for efficient management and seamless user experiences. Python Streamlit, ChatGPT, and GitHub Codespaces offer a powerful combination of tools that can enhance various aspects of library sciences, from data visualization to user interaction. In this article, we will explore the benefits, use cases, associated costs, and means of obtaining support for utilizing these tools in the university and library sciences domain.

I. Python Streamlit:  an open-source framework originally released in 2019, designed for building interactive web applications with minimal code. It enables library professionals to create intuitive and visually appealing interfaces for data exploration, analysis, and dissemination. 

  1. Cost: Python Streamlit is free and open-source, making it an economical choice for library sciences projects.

  2. Common Use Cases:

    1.  Data Visualization: Streamlit simplifies the process of creating dynamic visualizations to present data in an engaging manner, aiding in data-driven decision-making.

    2.  User Interfaces: With Streamlit, libraries can build user-friendly interfaces for search, content faceting, and more - improving user experience and accessibility.

    3. Prototyping and Testing: Streamlit's rapid development capabilities make it ideal for prototyping new library services or experimenting with user workflows. This comes in especially handy with the Codespaces integration.

  3. Support: Python Streamlit has an active and supportive community. The official Streamlit documentation, community forums, and GitHub repository are excellent resources for learning and troubleshooting. Additionally, there are various online tutorials and blog posts available to aid in getting started.  On Grounds, most libraries offer support for Python, as well as various short courses on an ad hoc basis.
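
To give a sense of how little code a prototype needs, here's a minimal sketch of a Streamlit app (the file and column choices are illustrative); save it as streamlit_app.py and launch it with streamlit run streamlit_app.py:

import pandas as pd
import streamlit as st

st.title("Library data explorer")

# let a user supply any CSV and explore it interactively
uploaded = st.file_uploader("Upload a CSV", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.dataframe(df)  # sortable, scrollable table
    column = st.selectbox("Column to chart", df.columns)
    st.bar_chart(df[column].value_counts())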

II. ChatGPT: powered by OpenAI's language model, ChatGPT enables libraries to create conversational agents for improved user engagement and personalized assistance (like help writing the first draft of this article!). It allows users to interact with the library's services through natural language conversations. Every article detailing ChatGPT or other AI tools should include a caveat - it's possible for AI to make mistakes.  Libraries should begin working on policies to determine when to open the doors to this powerful new technology, and how to set up guardrails to prevent the spread of misinformation.  Let's delve into the details:

  1. Cost: OpenAI offers a range of pricing plans for using ChatGPT. The cost depends on factors such as usage, model capacity, and API calls. It is recommended to review the OpenAI pricing page for specific details.

  2. Possible Use Cases:

    1.  Virtual Reference Services: ChatGPT can be used to provide automated virtual reference services, answering user queries and providing assistance in real-time.

    2. Recommender Systems: libraries can develop intelligent recommender systems that suggest relevant books, articles, or resources based on user preferences.

    3. User Support: ChatGPT can handle frequently asked questions, guide users through library services, and offer support for common issues, with some minor training.

  3. Support: OpenAI provides comprehensive developer documentation and guides to assist in integrating ChatGPT into applications. The OpenAI community forum and support channels are valuable resources for addressing queries and troubleshooting issues.
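
For developers, here's a hedged sketch of a chat completion call using the openai Python library as it existed when this was written (pre-1.0 interface; the model name and prompts are illustrative):

import openai

openai.api_key = "YOUR_API_KEY"  # keep real keys out of version control

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful library reference assistant."},
        {"role": "user", "content": "Suggest three open-access journals on data visualization."},
    ],
)
print(response.choices[0].message.content)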

III. GitHub Codespaces: GitHub Codespaces offers a cloud-based development environment that enables seamless collaboration and version control for library sciences projects. It provides a hassle-free setup for development and facilitates team collaboration.  Although Codespaces is flexible, it can help to use a Codespace template that has some configurations baked in.  Port forwarding, Python versioning, and other libraries can be rolled out automagically - for this example I used https://github.com/robmarkcole/streamlit-codespace, which had the above as well as the streamlit library itself pre-loaded.  Here's some more information on Codespaces:

  1. Cost: GitHub Codespaces offers both free and paid plans. The pricing structure is based on factors such as the number of concurrent Codespace instances and storage requirements. Detailed pricing information is available on the GitHub website.

  2. Common Use Cases:

    1.  Collaborative Development: Codespaces enables multiple library professionals to work together on codebases, fostering efficient collaboration and reducing development time.

    2. Testing and Debugging: Codespaces provides an isolated and controlled environment for testing and debugging library-related code, ensuring code quality and reliability.

    3. Continuous Integration/Continuous Deployment (CI/CD): By integrating Codespaces with CI/CD pipelines, libraries can automate the process of building, testing, and deploying applications.  GitHub Actions provide additional CI/CD opportunities.

  3. Support: GitHub provides extensive documentation and guides for getting started with Codespaces. The GitHub Community Forum and support resources are available to address any questions or issues that arise during development.
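
For reference, the kind of configuration such a template bakes in lives in a .devcontainer/devcontainer.json file at the root of the repo. A hedged sketch (the image, port, and command here are illustrative - the robmarkcole template above differs in its details):

{
  "name": "streamlit-prototype",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "forwardPorts": [8501],
  "postCreateCommand": "pip install streamlit"
}

Port 8501 is Streamlit's default, so forwarding it lets you view the running app straight from the Codespace.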

Conclusion: Incorporating Python Streamlit, ChatGPT, and GitHub Codespaces into library sciences can significantly enhance data visualization, user interaction, and collaborative development. The combination of these tools empowers library professionals to deliver improved services and user experiences. With their affordability, diverse use cases, and strong community support, adopting these technologies can lead to transformative outcomes in library sciences projects.

10/20/2021
Anson Parker

We're not experts, but we're working on getting CPACC certified this year - thank you, Christa @Virginia Tech!

First we got our metrics set up - we went big picture here.  That was not too important for our site, since the content is largely references to other content (we're a library, right?), but for anyone else interested this is a reasonably complete list.

A note on the WebAIM results (Metric #6 below): there are a couple of widgets on our site, auto-generated by our framework, that have some W3C errors. On inspection they don't really interfere with navigation or much of anything else, so most pages show 2 errors and we're not too worried about it; it's on the agenda for future work.

 

a link to our spreadsheet review of the guides.hsl.virginia.edu site

 

METRIC #1 * Photos should never be altered to artificially create diversity.
(we will audit for the use of stock photography, following UVA Health diversity recommendations here, https://www.uvahealthbrand.com/standards/policies/diversity)
METRIC #2 * Monitor the choice of photos and video subjects to accurately and authentically reflect the diversity of actual student, faculty, and staff demographics.
(we will audit for race/ethnic representation and offer actual #'s and %'s of represented groups on our website in photos and videos)
METRIC #3 * When possible, write (or rewrite) communications to be in the plural form by using the plural pronoun of “they” instead of the singular pronouns of “he” and “she.” If it is not possible to write in the plural form, use the singular pronoun of “s/he” to be more inclusive.
(we will audit for gender narrowed references and/or unnecessary uses of gender)
METRIC #4 * Use the gender-neutral nouns of “people,” “person” or “parent” instead of “man,” “woman,” “father” or “mother.” For example, use “chairperson” instead of “chairman” or state that “all people are created equal” instead of “all men are created equal.”
METRIC #5 * Capitalize the “b” in the term Black when referring to people in a racial, ethnic or cultural context. The lowercase black is a color, not a person.
METRIC #6 * All websites should be accessible as defined by w3c standards.
(We will use the WebAIM WAVE tool for accessibility auditing of all pages and count every error regardless of merit; see the API sketch after this list)
METRIC #7 * All videos should have closed captioning.
(as described, we will audit for this)
METRIC #8 * Be mindful of the use of symbols such as emojis on social media. For example, choose different emojis of color to represent the diversity of the organization/community.
(as described, we will audit our webpages in case emojis are used, though I don't expect this will be an issue)
METRIC #9 Indigenous and Aboriginal are identities, not adjectives, and should be capitalized to avoid confusion between indigenous plants and animals and Indigenous human beings. Avoid referring to Indigenous people as possessions of states or countries. Instead of “Virginia’s Indigenous people,” write “Indigenous people of Virginia.”
METRIC #10 LGBTQ is acceptable in all references for members of the lesbian, gay, bisexual, transgender, queer/questioning, asexual, ally and intersex community. It does not need to be defined. If a source prefers another acronym, such as LGBTQIA+, that is acceptable too.
METRIC #11 Capitalize the proper names of nationalities, peoples, races, tribes, etc. However, use only when relevant to the story. When identifying someone by race or nationality, be sensitive to the person’s preference and standard accepted phrases. For example, do not use Oriental for people who are Asian. See Hispanic and Native American entries.
METRIC #12 Acceptable for Native American people in the U.S. Follow the person’s preference. Where possible, be precise and use the name of the tribe: He is a Navajo commissioner. Such words or terms as wampum, warpath, powwow, teepee, brave, squaw, etc., can be disparaging and offensive (when not referring to something by its formal name). Do not appropriate these phrases for non-cultural uses, such as using the term “powwow” to refer to holding a meeting.

  *   First Nation is the preferred term for native tribes in Canada.
  *   Tribes from Alaska prefer Alaska Native.
  *   Lowercase tribe/tribal and reservation except as part of the formal name.
  *   Use Indian only for people from India.
  *   On second reference, Native/Natives is acceptable.
METRIC #13   *   Transgender is an adjective, not a noun. Do not use the term “transgendered.”
  *   The physical changes made to a transgender person’s body are referred to as “transition,” not “sex change.”
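
For Metric #6, WebAIM also offers a WAVE API that makes per-page error counts scriptable. A hedged sketch (an API key from https://wave.webaim.org/api is required; the page list is illustrative and the response fields shown are our best reading of the API docs):

import requests

pages = ["https://guides.hsl.virginia.edu/"]
for page in pages:
    r = requests.get(
        "https://wave.webaim.org/api/request",
        params={"key": "YOUR_WAVE_KEY", "url": page},
    )
    # WAVE groups results by category; "error" carries the count we audit on
    print(page, r.json()["categories"]["error"]["count"])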

09/21/2021
Anson Parker

Project Lead - Maitri Patel, MPH—Advocacy and Clinical Application Coordinator

The Challenge

Patients - and sometimes doctors and nurses - consistently express difficulties and feelings of being overwhelmed while navigating the UVA Health system. Patient dissatisfaction is heavily dependent on access to care. Patient tardiness to appointments and use of medical staff for navigational advice create areas of waste within the hospital system that could be mitigated with an indoor mapping service.

Our Solution

Our team has been assembled to integrate mapping data and tools from commercial and open-data repositories to build a best-in-class wayfinding product for the patients, medical staff, and community members who navigate within the walls of the UVA Health system. Our main goals are to minimize patient barriers to access of care by allowing detailed parking information and step-by-step instructions to walk through the hospital based on patient input of physician/department/appointment location. We hope that after beta-testing and success of navigation,  we will expand the application to allow for multi-stop function, various language functions, and spoken instructions to maximize accessibility. After the 2D rendering is complete, we hope to utilize LiDAR technology to supplement the maps and consider augmented reality applications.

Where

Using foundational work established by Derrick Stone and the Mazemap team, we will begin our mapping in the main hospital and expand to cover the Battle Building, Couric Cancer Center, and West Complex.

How:

Using the Mazemap online system, we will map out the location pins of each department of the UVA main hospital and surrounding hospitals. Using the pre-programmed routing algorithms, we will confirm efficient and accessible patient pathways.  We are also working to integrate with bus schedules from public transportation and UVa transit, as well as information such as pedestrian and wheelchair access resources. 

When:

Week of 9/20: Testing LIDAR tooling in the UVA Health Science Library 

Sept 23 8-9PM Zoom link - team meeting

October 6th: Contacted Derrick Stone and got in touch with the Mazemap team

October 19th: Completed the Mazemap orientation and added all new team members to the mazemap user editor team

Week of October 25th: Divided remaining location pins to be programmed into Mazemap

November: Complete 2D location pinning for UVA main hospital and surrounding buildings

December: Internally test the routing paths for accuracy, efficiency, and accessibility paths

January: Begin beta-testing using medical staff and students

January: Contact select departments for input on most common paths utilized by patients from their department

February: Create screenshots and QR codes for most commonly utilized paths and continue developing the mazemap application based on consumer feedback.

Some initial tests in MazeMap

Creating predefined maps - such as this one from the Guyenet Lab in Pinn Hall over to the Cafeteria

maze map directions

In the toolbar the user gets detailed visual and textual directions to aid in wayfinding

detailed mazemap instructions

 

Support Team:

Technical Advisor - Joe Jamison, Visitable.org

Students and staff

Hollis Cutler: GIS routing & Python coding

Nora Dale: OpenStreetMap transportation lead

Michelle Miles: Accessibility expert and design lead

Lena Nguyen: OpenStreetMap project & community lead

Anson Parker: Health Science Library IT 

Erich Purpur: Research Librarian

Derrick Stone: Computer Science and Software Programming Lead

And a special thanks to MazeMap

Steven Newman: VP of Sales, Mazemap

Tatiana Kosmida: Mazemap Troubleshooting Lead

Daniel Schjetne: Mazemap Troubleshooting Lead

Additional Notes:

We may need to integrate this work with existing infrastructure such as OpenStreetMap and other Charlottesville City maps, and we look forward to investigating this in the future.

this is an openstreetmap view of the hospital and library

 

07/26/2021
Anson Parker

As a collaborator in the Lyrasis Catalyst 2021-2022 award with the Science and Engineering Library, we are pleased to publish the first documentation in the series, produced in collaboration with Visitable.org - Accessibility Information and Disability Inclusion professionals.

Our goal working with Visitable was to look at our spaces from a wheelchair accessibility perspective and consider which apps provide the simplest workflow for accessibility professionals to use when working with the LIDAR equipped iPad Pro 12. 

<TLDR> The 3d Scanner App is a convenient off-the-shelf app you may use with confidence.  All the features are free, and you're not locking yourself in from a file-format perspective.  All that, and it has some convenient workflow features for working with accessibility professionals.

Want to participate? visit codeforcville.org/lidardb

</TLDR>

 

Notes from Visitable.org

On July 13th I went scanning with Joe Jamison, founder of Visitable.org. All of the scans were done with the 3d Scanner App, and most are posted here for review: https://sketchfab.com/alibama77/collections/uva-health-science-library.  Starting at the front of the library and working our way down the elevator and into the bathroom and group study rooms, we worked to evaluate 3d scanning in the context of space accessibility analysis.  Here are Joe's notes:

 

3d scanning strengths:

  • Can get a full picture, and get overall takeaways
  • Easier to view depth
  • Quicker for making measurements (if you know where and how to scan) and giving feedback
  • Easy to share with colleagues, customers, and users to review and make their own measurements if they'd like

Manual testing strengths:

  • Small details and barriers are easy to see
  • Measurements are more accurate, helpful for measuring small lips and door thresholds
  • Pictures are a more efficient way to help users see a holistic view rather than scanning a full room
  • More thorough, includes looking at attitudinal barriers as well as asking clarifying questions on policy and practices

3d scanning weaknesses:

  • Might be difficult or impossible to see small details, such as door thresholds
  • Visualizations aren't as clear as pictures: reflective surfaces distort shapes, corners are not clearly defined, etc.
  • Might be a learning curve to figure out where and how to scan, which is a little harder to communicate than providing instructions on where to take pictures
  • Scanning roofs/ceilings and bigger spaces for visualizations takes longer than pictures
  • Some tools, such as the Sketchfab lab tool, do not make it easy to measure within units desired
  • Cannot measure slope or door pressure with existing tools

Manual testing weaknesses:

  • Slower to take measurements and record them - makes overall process of pictures and measurements slower
  • More steps in the process to share reviews with customers, colleagues, and users
  • Taking measurements with a tape measure is potentially less accessible/ more difficult than scanning an area

3d scanning 101

We downloaded about a dozen different 3d scanning apps, and most of them required paid subscriptions. I went through the free trials on several other tools but really wasn't impressed; everything came down to a toss-up between two great products on the closed source side, plus an open source tool from the robotics space that shows strong potential for bringing game-changing developments into the accessibility space.

Off-the-shelf Winners

For working offline and for general flexibility plus user interface, there's no question - the 3d Scanner App is your go-to app.  Use the low-resolution mode to capture large spaces, or dig into a single room or two at a time with the high-res tool.  You can export the file a bunch of different ways and do any technical analysis you want, and exporting to the web is simple: https://sketchfab.com/alibama77/collections/uva-health-science-library is a collection of scans done in the library, and it seems to work ok in the new Sketchfab labs tool here: https://labs.sketchfab.com/experiments/measurements/.  Familiarity with some 3d viewing tools is still going to be a plus, and for users who are comfortable in SketchUp or other architectural tools the additional controls are available to work with.

https://sketchfab.com/3d-models/1st-floor-mens-0b64b75eea1c40a88aec4f021f7389ae in the labs tool here

https://labs.sketchfab.com/experiments/measurements/#!/models/0b64b75eea1c40a88aec4f021f7389ae

Their measuring tool correctly shows the distance in our men's room between a shelf and a wall as 83cm, which is right on the cusp of being too small for a wheelchair to pass.

sketchfab 3d measuring tool

 

For convenience with just measuring spaces, poly.cam is a solid contender.  It only allows you to post 3d scans to the web, but it comes with some convenient measuring tools; this model of a hallway in the hospital was one of the first scans I took, and the interface is intuitive and elegant.

https://poly.cam/capture/AD7A946A-0BE5-4769-87A0-258A3376D170

polycam's 3d scanning app has some measuring tools built in

 

Open Source and Robotics Perspective 

Also tested, but not ready for a full review, is RTAB-Map.  This is an important tool for the next part of our discussion: a more automated approach to the analysis, looking at accessibility from a robotics perspective.  It has the capacity to tie into epic large-scale products, and being the only open source tool I've seen for processing LIDAR in the iOS ecosystem, it is by far the most interesting tool to test.  With an active forum and 260 open / 463 closed issues on GitHub, it is a very active community with incredibly technical leaders in the LIDAR processing field.

 

3d Scanner App
  • Scanning interface: the low-polygon scan does a great job on large areas and is intuitive; the high-res "paint" approach is clumsy in large spaces and doesn't show you well where you've already been
  • Web file sharing: sketchfab.com
  • Stability: crashes occasionally, but pretty good
  • File formats: excellent choices
  • Community: https://www.3dscannerapp.com/, Discord
  • Accessibility notes: need to test
  • Quality: my favorite to use when showing friends, for the quality and overall ease

poly.cam
  • Scanning interface: best interface - the light-blue-to-polygon camera change is intuitive
  • Web file sharing: you can upload directly to poly.cam
  • Stability: really solid
  • File formats: none - web-based publishing only; a paid subscription unlocks other formats
  • Accessibility notes: need to test
  • Quality: a high-quality product, easy to use, and the web-sharing interface with measuring tools is super convenient

RTAB-Map
  • Scanning interface: the point cloud interface is useful for showing people exactly what the machine sees at the most basic level, and from a training perspective is a great place to begin a session; the pose overlay data is helpful for explaining that dimension of mapping
  • Web file sharing: none
  • Stability: crashes a lot; on the iPad I found myself stopping it when it said memory was at around 800MB
  • File formats: a pbstream.db format that saves pose data; also offers point cloud exports
  • Community: http://introlab.github.io/rtabmap/, forum, GitHub
  • Accessibility notes: the fact that this tool is harder to use for sighted users actually reinforces the idea that these tools should be better automated, so that ultimately human intervention in the process is not required
  • Quality: this is probably where we're going to spend a lot more time... and it's open source, so that's awesome

06/01/2021
Anson Parker

Exploratory data analysis (EDA)

allows developers and programmers to provide stakeholders with a clearer understanding of what questions may reasonably be asked of a dataset, with very little programming effort.  "How much data is actually present in every row?" or "what are the unique, or most common, values in this column?" are basic questions that can reportedly shave up to 30% off the data science workflow (according to some random source on the internet); from my perspective it's simply an essential first step, period.
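
Those first questions are one-liners in pandas. A minimal sketch (the file and column names are hypothetical):

import pandas as pd

df = pd.read_csv("your_data.csv")

print(df.isna().mean())  # fraction of missing values per column
print(df["some_column"].nunique())  # how many unique values a column holds
print(df["some_column"].value_counts().head())  # its most common values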

Carnegie Mellon has a deep-dive chapter on the subject

https://www.stat.cmu.edu/~hseltman/309/Book/chapter4.pdf

and here's a brief and reasonably concise overview https://www.svds.com/value-exploratory-data-analysis/

EDA in Python

Pandas profiling and Sweetviz are simple installs that work well with Streamlit.  To test them, you can set up a Streamlit share and install them there.  Here's some Python code wrapped in Streamlit that provides both, for you to test with a CSV of your choosing:

https://github.com/alibama/code-for-cville/blob/master/divides.py 

I pulled most of this from this video, which goes in depth: https://www.youtube.com/watch?v=zWiliqjyPlQ - I skipped to about minute 30 to get into the Sweetviz material, and then headed over to the GitHub repo

https://github.com/Jcharis/Streamlit_DataScience_Apps/blob/master/EDA_app_with_Streamlit_Components/app.py

to use that file as a basis for the even more stripped-down version seen above.
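
If you'd rather not pull the whole repo, here's a stripped-down sketch of the same idea using pandas-profiling inside Streamlit (package names as of this writing: pip install streamlit pandas-profiling streamlit-pandas-profiling):

import pandas as pd
import streamlit as st
from pandas_profiling import ProfileReport
from streamlit_pandas_profiling import st_profile_report

uploaded = st.file_uploader("Upload a CSV", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    # minimal=True skips the expensive correlation calculations on big files
    st_profile_report(ProfileReport(df, minimal=True))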

04/30/2021
Anson Parker

Getting started is simple

  1. Set up a github repo - you're certainly welcome to fork ours here https://github.com/carrlucy/HSL_OA
  2. Set up a streamlit share account https://share.streamlit.io/ this may take a day or two - so plan ahead :)
  3. Connect the two - there are some pictures here https://guides.hsl.virginia.edu/it-services-blog/zoombites/Geopandas-and-streamlit-to-display-local-tree-data-in-deckgl

Now we're off to the races - you should have a URL where your app will show up every time your GitHub code gets updated; ours is https://share.streamlit.io/carrlucy/hsl_oa/main

Tech notes... Caching.... it's a thing

I feel like a jerk for not testing the Streamlit caching tools in the past.  They're amazing - what a difference they make with these larger queries.  Just add @st.cache() before a function and the results get cached... done.  We do all the processing in the pandas dataframe after that, and it's super speedy.
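
A minimal sketch of the pattern (the data URL is hypothetical):

import pandas as pd
import streamlit as st

@st.cache()  # memoizes the result, keyed on the function's inputs
def load_data(url):
    return pd.read_csv(url)

df = load_data("https://example.org/open_access_dataset.csv")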

Creating a development app was a great way to test against our main code - we

  1. created a development branch on GitHub, and 
  2. selected the new branch on streamlit sharing
  3. Boom! new app to test on https://share.streamlit.io/carrlucy/hsl_oa/development

The gift of gab

Every aspect of this process is open source, and all the development and support involves real people.  Introduce yourself to the community.  The groups we reached out to in working on this project included

 

12/08/2020
Anson Parker

I'm not a good programmer.  85 times and about two hours total.  That's how many edits and saves before I got something out the front door that might pass as an "app" by some definition.  If you're looking for the Tableau experience this is not it... yet

https://share.streamlit.io/alibama/cville-trees/main - here's a link to the app.  It filters trees from the UVa equity atlas http://equity-atlas-uvalibrary.opendata.arcgis.com/ based on species, and allows the user the ability to change the size of the marker for the tree on the map.  Eventually I want to do some canopy analysis, and this is a small start.

Streamlit components exist in markets as diverse as molecular structure visualization and Voronoi maps of Trader Joe's distributions - evidence that their team is considering the big picture.  And being able to get a relative non-programmer like myself using the tool and getting a draft out the door in one day is pretty impressive.

My workflow went between Streamlit's discussion boards, located here: https://discuss.streamlit.io/, and a bit of Stack Overflow.  I now have a Python app akin to what Jupyter and Voila aspire towards, or what Shiny apps achieve in R, with a nifty GitHub backend that really simplifies development.

Streamlit Sharing

This is the streamlit sharing app administration dashboard.  It lists the existing apps, along with a menu to create new apps - either from scratch in a new repo or from an existing template.

Once you choose to create a new app from scratch, you can connect it to the GitHub repo where you'll be storing your code and name your main file and branch (if you're using an existing repo with code in it already).

Packages.txt and Requirements.txt files in Streamlit Sharing

Streamlit Sharing is still in beta; however, I got my invitation to participate within an hour or so of requesting - maybe sooner, I don't recall.

Streamlit Sharing connects github.com's infrastructure to a containerized Streamlit server to allow essentially no-click app creation - hit save in github, the app automatically updates on the streamlit side almost instantly - no other commands necessary.  

This is a screenshot of a streamlit github repo with three files: the streamlit_app.py file that holds the Python scripts and references the data; the requirements.txt file, which contains Python packages that would normally be installed with pip (to be honest, I'm not sure how or whether conda is part of this process); and the packages.txt file, which contains references to binaries that would typically be installed through apt-get in basic Debian repositories.

In our example we have the following Python libraries in our requirements.txt file

  pydeck - for deck.gl integration
  pandas - data processing (https://pandas.pydata.org/)
  streamlit - because we're working with Streamlit....
  shapely - polygon management (https://pypi.org/project/Shapely/)
  fiona - Python's GDAL API (https://pypi.org/project/Fiona/)
  geopandas - geospatial data processing in Python (https://geopandas.org/)
  pygeos - extends geopandas' abilities

 

Our packages.txt file has the following lines

gdal-bin
python-rtree

These two packages give the underlying container the ability to do the heavy geospatial lifting.

Once the packages.txt and requirements.txt files are in the repository they will be automatically discovered during the app baking process.  Adding new entries to these files may require you to reboot the app - and this brings us to the first bit of streamlit infrastructure.

screenshot of a streamlit environment from the front end of the app with the streamlit menu opened from the bottom left of the screen

 

This is a screenshot of the streamlit app with the app management console open.

screenshot closeup of the streamlit app control panel

The streamlit control panel provides some utilities for managing your app - it's located on the bottom left of the screen:

  1. More in-depth debugging tools, available through the log file download
  2. Below that is the reboot app option - this reloads the requirements.txt and packages.txt files and is necessary if you add new binaries to your app
  3. Delete app... it deletes the app.  Below that are some documentation and support tools

Inside the app itself is a pretty light logic load.  I really like deck.gl, though I hadn't worked with it before - it's pretty elegant and comes with some neat pan-tilt tools I haven't seen front and center before... it has a modern design flavor in general.

Anyhow - with three lines of code I added a dropdown select menu that filters the geopandas dataframe and generates a dataset that the deck.gl library can consume:

# select all of the trees from the dataframe and filter to unique values to create a useful dropdown menu list
treetype = trees['Common_Name'].drop_duplicates()
# render the streamlit widget on the sidebar of the page, using the list created above for the menu
tree_choice = st.sidebar.selectbox('Tree type:', treetype)
# filter the dataframe our deck.gl layer uses as its data source, based on the selection made above
trees = trees[trees['Common_Name'].str.contains(tree_choice)]

And similarly, here's a streamlit slider widget for controlling the size of the points on the deck.gl map:

dotradius = st.sidebar.slider('Tree dot radius') # this creates a slider widget called "tree dot radius"

layer = [
    pdk.Layer(
        "GeoJsonLayer",
        data=trees,
        getFillColor=[60, 220, 255],
        getRadius=dotradius,  # the streamlit slider widget sets the point radius on the deck.gl map
    ),
]
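
For completeness, here's roughly how that layer list gets rendered back into the page - a sketch with an illustrative view state centered near Charlottesville:

# hand the layer list to a pdk.Deck and let streamlit render it
view_state = pdk.ViewState(latitude=38.03, longitude=-78.48, zoom=12, pitch=40)
st.pydeck_chart(pdk.Deck(layers=layer, initial_view_state=view_state))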

And that's really all I've done so far.  In the next part of this work I'm hoping to start doing some actual geopandas data processing to see where ash trees land on or near existing buildings, as we did in this jupyter notebook + geopandas tutorial: https://guides.hsl.virginia.edu/it-services-blog/zoombites/using-tree-data-in-a-python-3-jupyter-notebook 

Additional Discussion

  • Here's an interesting discussion on streamlit as the next tableau...  https://medium.com/@fabio.annovazzi/is-streamlit-a-threat-to-powerbi-and-tableau-d20af60d594
  • A cool component for using Leaflet with Streamlit for feedback: https://github.com/andfanilo/streamlit-light-leaflet

Thank You!

An enormous shout out to Randy Zwitch who leaned in on the forums and Martin Fleishman of Geopandas for encouraging me to do something useful like documentation ;)  It's a pleasure to be a part of an internationally diverse community - something that Open Source continues to deliver.

12/01/2020
Anson Parker

First up a big shout out to Joe Orozco and the Virginia chapter of the National Foundation for the Blind for helping us vet our process here.  Your work is inspirational and has helped drive our mission of greater accessibility in measurable ways - below is a description of one.

Accessibility has always been a central tenet in our web development process at the library.  Working in a health system encourages these goals, and our administration addresses this mission by providing time to research the latest strategies in content accessibility, as well as opportunities to implement what we learn in the real world.  

Most recently the tech team was given an opportunity in the form of online course content from the National Medical Library, brought to the table by our Associate Director Dan Wilson.  The content was housed in a custom-built framework, heavy in javascript and inaccessible widgets, and the surveys were built in SurveyMonkey and iframed into the site - not ideal.   We decided to use the Drupal framework to handle the basic content and user management, and then installed Course and Webform to provide course functionality: combining surveys and content, giving users the ability to track their progress, and giving administrators course analytics to review.

These days many front-end surveys are accessible - many people at UVa use our Qualtrics framework, and they do a great job of acknowledging the pros and cons of different survey widgets.  When we took on this project, however, we wanted to reach a little further and focus on making the course creation process itself as accessible as possible.

 

 

When front-end accessibility becomes an administrative accessibility headache

This is a picture of an inaccessible survey widget where a Likert quality scale is used in a grid format.  While the labels are clear at the top, they are not easily identifiable above each radio button, making the grid less accessible than other available survey widgets.

Resolving this accessibility issue is pretty simple - take the grid-styled group question and create a unique question for each row.  Using webform we can even clone the questions, so it's not too bad for the course administrator... 

One of the detractions of webform, however, is the reliance on text entry for each select option.  While this is useful because it gives a clear and accessible link for each question, it can require a lot of repetitive typing - introducing room for error and making the administrative side less accessible than we wanted.

This is a picture of the text that creates a select list in the drupal webform module.  Each option has to be specified, and a machine name has to be created as well... technically speaking it's accessible, but if you're creating a long survey and have to do this every time it's not very attractive.  It's also just not great practice, since it's not clear how analytics would connect the dots for reporting.
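
For anyone curious, the options text webform expects is one choice per line in machine_name|label form.  A hedged example for a Likert quality scale (the keys and labels here are illustrative):

poor|Poor
fair|Fair
good|Good
very_good|Very Good
excellent|Excellent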

 

By adding another module - known as values - we now have drop-down lists of choices that pre-fill the select options and simplify the user interface.

Here's a picture of the administrative side of the values module in action - the drop down list provides pre-created select option lists that have their own administrative interface for managing, translating etc...

Using the values module is straightforward.  Install the module as you would any other (drush, composer, or manually), and turn on the sub-module for webforms.  Once enabled, the instructions on the values page are clear - you'll need to go to the values section and add your select option lists.

This is an image of the values page.  The left column gives a description of each select list group, and on the right side are the options to edit, delete, or export your lists.  Lists may also be imported.  Translations for this module are available as well: https://localize.drupal.org/translate/projects/values

Drupal is a large framework, and some of the key tools are not as simple out of the box as one might hope; however, with no programming experience and a bit of digging around in Google there are a lot of opportunities to leverage the framework.  Improving small details of content creation helps ensure that accessibility is introduced not as an afterthought tied to some administrative checkbox, but as a starting point and core principle from which other aspects may develop.

 

07/22/2020
Anson Parker

Tactile mapping for conferences and events @ UVa

Fabrication review opens up design and data science opportunities with a vision on policy driven accommodation improvements on grounds

laser etched braille - the background was carved approximately .5mm deep into the 3.5mm thick wood

This is an image of braille etched into 3.5mm wood where the dots sit about .5mm above the surface of the laser-etched area.  In other words, we burned around the braille - removing the background and leaving the dots to form readable letters.
 

Acknowledgements:

A special thanks to fabrication friends across grounds - Melissa Goldman at the Architecture School, William Guilford in Biomedical Engineering, Erich Purpur in the Science and Engineering Library, and Sam Flippo at the Drama Department.  Without their physical presence this work would not have been possible.

To the data scientists - Will Rourke from Robertson Media Center, Ammon Shepherd from the Scholars Lab, Erich Purpur in Science and Engineering, Jeff Owen in Facilities Management - for helping bring this process in to the 3rd millennium.

To my colleagues in the Health Sciences Library - David Moody, Kyle Bowman, Bart Ragon and Andrea Denton for helping me connect the dots internally and across grounds (get it- braille joke?), as well as providing the freedom and guidance to do this research.

And most importantly to the people driving the mission of this project - Catherine Bacik at the Virginia Department for the Blind and Vision Impaired, Lori Kressin with Accessibility Resources, Barbara Zunder with Student Health, and Melvin Mallory of ADA Compliance.

Abstract:

Creating tactile maps is generally considered to be a time-consuming and expensive process requiring significant expertise in the field.  Here we examine rapid prototyping techniques so simple and affordable that events like conferences - where vendor booths, food tables, and other ephemeral items appear - can be made more accessible through tactile maps.  After a modest review of the literature, and with some familiarity with UVa fabrication utilities, we were able to go from UVa floorplans (provided by facilities management) to accessible tactile maps - with material costs as low as $1 and production times in the 20-30 minute realm.  Looking more broadly at wayfinding, we also review opportunities to improve accessibility through better design and data science work.

 

Introduction:

Tactile maps are physical representations of spaces - they are useful for helping blind and vision impaired people navigate spaces and maintain greater autonomy.  Tactile mapping has been in use for many years; however, the availability and affordability of rapid prototyping tools opens up opportunities to integrate the production of tactile mapping in ways previously not possible.  Here we explore in depth a two-part production technique for conference events - where known, mapped spaces may gain additional items like vendor booths or food tables.  Secondly, we look to open source digital wayfinding technologies and apps for more realtime, real-space alerting.

 

Initial Goals:

  • Create attractive, sanitary, stationary tactile maps that can be placed in the library and made useful for all patrons.  Consider a call to artists to make “landmark maps” working to the specs of other maps, but in various media?
  • Create tactile map “hand-outs” - low cost replications of “mold” maps that help in wayfinding to and around the library.  These dispensable maps may be made available to community centers, senior centers, and anywhere else serving people with visual impairments who need wayfinding help for the hospital and its surrounding environment.
  • Create digital models that are accessible online and can be rendered in real time such that data science work may be done

Methods:

A first search pass into tactile mapping led us to https://www.touch-mapper.org.  With the help of Erich Purpur and the Brown Science and Engineering Library's MakerBot, five models were printed.  All of these failed to print well for various reasons.  After communicating with Sam Flippo in the Theater Department and William Guilford in Biomedical Engineering, we learned that standard low-cost 3d printed PLA filament doesn't play well with vacuum thermoforming.  This meant that downstream products such as affordable duplicates would not work with this process, and it sent us back to the drawing board.  The other two readily available local alternatives were laser cutting and CNC milling.  CNC is a gold standard in manufacturing that can provide micron-level tolerances with low effort.  Laser, on the other hand, is somewhat less precise but much faster to develop and pre-process.  We looked to laser cutting, found some literature on the subject (https://wiki.openstreetmap.org/wiki/User:Head/HaptoRender), and decided to move forward.

Affordability and ease of production and reproduction are the guiding principles of this investigation.  In the interest of scientific diligence and reproducibility, all instructions are given in open source or otherwise freely available software.  Our “minimum viable product” cost $1 and an hour of graphics development, plus an additional $1.00 and 2 minutes per duplicate.  This does not take into account access to equipment, but it does provide a sense of cost and effort.  

Materials and Methods

First Draft - 3d printing

3d printing is a ubiquitous, affordable, and relatively efficient way to render physical data - and we found a website, https://touch-mapper.org/, that automates tactile map generation for 3d printers using Open Street Map data.  The process is simple: input a desired address and scale, and wait for your map to render.  You may either download the map to print yourself or order the map online to be delivered…  We downloaded ours and went to work.

 

In short the results weren’t great… but at least we failed quickly :)  Here are some notes

Results - Cons

  • Problems printing - prints of the STL file on a 3d printer in the science and engineering library warped heavily. Tests at 100%, 90%, 70%, and 60% scale all failed… and there was another failure lurking behind the scenes
  • Standard PLA models won’t work with vacuum-forming duplication machines.  The PLA plastic is likely to melt onto the master, meaning that even if our 3d printed form worked it wouldn’t be a great tool for making duplicates
  • Also something of a failure in the sense that it doesn’t cover the inside of our buildings

Results - Pros & opportunities to improve

  • There are many types of filament.  ABS plastic, nylon, and carbon fiber alternatives all exist
  • Open Street Map is a great data set for street level content - there are content paths for vision impairment

 

Second Draft - laser etching & vacuum forming

Our second effort added a lot of variables.  To begin with, we got the original CAD designs from the UVa facilities department.   The immediate benefit of this approach is that we are now developing a process that works inside our buildings - something Open Street Map data would not do.  After some processing, described below, we were able to use the A-School laser cutters to produce our first accurate maps.  Several tests were made to adjust the depth and precision of the maps, and after several runs we had a presentable mold.  The mold was taken over to the Theater department where, after additional physical editing, the first vacuum-formed duplicate was made.  This process, start to finish, requires as little as 45 minutes and is the basis of our first inquiry back to the community.

 

Process details

Software used

  • LibreCAD / QCAD - open source alternatives to AutoCAD… the CAD work being done is very limited, so these tools seem sufficient for now
  • GIMP - filling in blanks left by gaps in the CAD designs
  • InkScape - used for development

Step 1) CAD work

Initial files were provided in DXF format - an industry standard - and in the case of UVa these floorplans have an abundance of detail.  Tactile maps benefit from a minimalist approach, so using LibreCAD (QCAD also looks suitable) we removed all of the layers that weren’t absolutely necessary.  LibreCAD has a DXF->PNG exporter, and this is what we rendered for the next step in our process.

Notes: This is probably the first opportunity for some data science.  As a “for instance,” we could run a Traccar server to gather real GPS data from users at the <1m level and then apply it to the CAD files to select the right lines for our maps.  

Step 2) Graphics work

GIMP is a well known and well documented graphics manipulation program.  Once the DXF file is exported to PNG we can manipulate the thickness of lines (aka walls),  the size of braille writing, and other details that make a map more usable.

Notes: This is the first phase where better and more consistent design approaches need to be developed and documented.  What standard thickness should walls be?  How should the number of steps between places be indicated?  Etc., etc.

Step 3) Raster laser printing

Once the final PNG is saved in GIMP it may be lasered on to materials.

 

Notes:  My tests are listed below with screenshots etc... Times varied between 20 and 30 minutes for a 10” square model, and depths of cut ranged from 1.1 to .7mm.  Too deep on the wood and the braille breaks off during vacuum forming; too shallow and the markings do not transfer.

Notes: we may want to polyurethane, shellac, or otherwise smooth the surface of the lasered piece for smoother duplication.

We may also want to have things like arrows that we can overlay on these master molds, so that events may have their own maps at minimal cost (instead of re-creating the whole map, just add overlays with new braille, event booths, etc...).

 

 

Step 4) Vacuum forming

 

The vacuum forming machine in the theater department is enormous - it can handle up to 4’x6’ objects

We used the machine over in the Theater department under the guidance of Sam Flippo - and so the ease of production may be exaggerated due to her high level of skill in the matter… that said the process was seamless and intuitive.

From the laser, the wooden or acrylic model must have holes drilled into it - these may be

3rd Draft - Planning an event...

 

 

 

Digital workflow:

Started out with the DXF floorplans from UVa facilities.

screenshot of librecad - an open source CAD design system where the outline of the library is visible

Installed Librecad (free)

The layers on the right can be turned on and off - I turned off as many layers as seemed possible...

From DXF -> PNG

Using GIMP, I primarily filled in the blanks, made the image blockier, removed more details, and added Grade 2 Braille - to begin with, just the stairs, exit, help, and restrooms are labeled... need to figure out a legend plan.

regular version of floorplan where the background is white and thus not etched - this is an issue because the braille then becomes recessed instead of pronounced on the wood

inverted, mostly black version - now that the image has been inverted, the braille is part of the segment not touched by the laser

Added the Swell braille font: https://www.tsbvi.edu/download-braille-and-asl-specialty-fonts

 

Printed in raster format.

Printer settings - this is a screenshot of the printer settings... I don't think this administrative tool is very accessible, and work needs to be done to improve it.

 

 

Cutting results

 

  • Birch 3/16, 100W laser - started with an etch setting of 60% power, 70% speed, 200 PPI (21 minutes): results aren't looking good; the dots are blurred because they weren’t 100% white in the image. (image: laser etched braille on 3.5mm luan wood)
  • Birch 3/16, 100W laser - 80% power, 50% speed, 200 PPI: full 1mm-deep cuts, looking very solid.
  • 1/16” acrylic, 100W laser - 80% power, 50% speed, 200 PPI: the acrylic was too thin; it started warping after processing for a while.
  • 1/16” acrylic, 100W laser - 100% power, 60% speed, 200 PPI: better results - moving faster seemed to help. (image: raised braille on acrylic; this acrylic was too thin and warped slightly, as can be seen in the image)
  • Birch 3/16, 100W laser - 100% power, 40% speed, 200 PPI (30 minutes): slow work, looking good. (image: a nicely burned tactile map; the walls and braille stand almost 1mm proud of the background, and crisp lines are easy to see and feel)
  • ¾ plywood - 100% power, 60% speed, 200 PPI: garbage. The grain of the wood makes this a useless product, except perhaps as backing for the thinner birch plywood.
  • Cork - multiple tests, from 100% power (burnout issues) down to 40% power, 70% speed (etching without overburning, but not really going deep enough): not a great product.
  • ¼” acrylic - 100% power, 50% speed: decent… might slow it down some more and get a better line, or cut it twice?
  • Bamboo - 100% power, 60% speed: great precision, but when cutting with the grain the ridges can get wonky.
  • Acrylic - 100% power, 30% speed, 200 PPI: beautiful… a bit slow.

05/19/2020
Anson Parker

It has been reported that the coronavirus dissipates from the air in around 15 minutes under normal conditions. Our IT team wanted to provide a scalable, unobtrusive solution that would allow us to minimize physical and temporal overlap in the library, while still providing some required social distancing improvements.

Traccar is an open source GPS monitoring tool.  Users download an app that connects to a central server, gathering and displaying their location in real time on a web-based map.  Users may set up notifications for when someone enters or leaves a region, and may thus coordinate physical activities and minimize risk.  

Traccar also provides an API and file export options for data science and data visualization.  Currently the library is looking at how this data might help with wayfinding tools for ADA accommodations, as well as with improving the efficiency of housekeeping by discovering and grouping high-traffic areas.

Using the App


Adding a user to the library server requires downloading the Traccar app for either Android or iOS.

Once you have that downloaded and installed, the server we are using is http://34.239.45.23:8082/

You will need to provide your Device Identifier (located in the app) and once that's ready you can begin starting and stopping tracking within the app

 


 

Above is a picture of me sitting at home.  As mentioned, Traccar's web exports are in xlsx format (there is a JSON API and a Python library for it, but I haven't had time to dig in there yet).  Anyhow - here's a Jupyter notebook with the Traccar -> GeoJSON converter:

https://colab.research.google.com/drive/1xNVCTpqiQtb5qQ41aZ2YWMhFfv1JhWwC?usp=sharing
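
The core of the conversion is short.  A hedged sketch (the column names are assumptions - check what your Traccar export actually labels them):

import json
import pandas as pd

df = pd.read_excel("traccar_export.xlsx")  # hypothetical export file

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [row["Longitude"], row["Latitude"]]},
        "properties": {"time": str(row["Fix Time"])},
    }
    for _, row in df.iterrows()
]

with open("positions.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)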

Once the conversion is done, the GeoJSON can be used to do some mapping: here's the last month or so of data.

qgis mapping

This is the big picture, and includes my recent trip to grab donuts.

Once we zoom in it's kinda cool - you can see some point-cloud densities near what turn out to be the stairs, and also a fair bit of wandering behind the service desk...

qgis stairway

 

Notes, issues, things to do:

Overall resolution - probably accurate to within ~15-20 feet on the top floor, not nearly as accurate on the bottom floor... across the board, floor-level detection is not really doable... elevation data is an open problem space, and getting the interior maps available for research is an ongoing issue.   

https://osminedit.pavie.info is a website that aims to democratize this process by allowing floorplans to be imported, georeferenced, and tagged with information that would be **really** useful for ADA accommodations.   

Understanding motion and trajectory data - I used the vector points-to-path plugin to make path files from the points.  http://movingpandas.org/ has some interesting takes on grouping trajectory data to highlight the most common routes, and https://scikit-mobility.github.io/scikit-mobility/ may be helpful if we are able to parse the floor-level data and get that mess sorted out.

02/03/2020
Anson Parker

We decided to use https://snipeitapp.com/ for our current inventory system.  After some minimal configuration we standardized our process.

Features -

  • Labeling - so the service desk can report which machine is having an issue; QR code generation, etc...
  • Surplus management - when machines get sent to UVa surplus, having their data in the system makes the process simpler.  ETF management is in the system, too: at any time we can print a list of all ETF equipment by location.
  • Ability to check out equipment to staff - we wanted more flexibility in our equipment checkout policies.  By keeping a record of where equipment is, we make more equipment available to more faculty and staff in a more timely fashion.
  • Web-based - previous inventory systems only worked on a single computer; we didn't want a local solution bound to local hardware or management.
  • Warranty management - at-a-glance read-outs for warranty information.
  • Depreciation calculations - built-in depreciation tables for projecting future value.
  • Low cost - hosted on Amazon Lightsail.  Backups are available from AWS, and the software stack is industry standard.

Items in the system

  • Labs and classroom computers (and used the built in labeling system plus a dymo-450 label maker)
  • Faculty & Staff Equipment - we track all computers, iPads, cameras, etc... monitor and telephone MAC addresses are coming during the annual in-service
  • Printers - tracked by IP - since we are an all HP shop we also use their proprietary web management tool - located downstairs in the tech offices
  • Items labeled to include all stationary and ETF items
  • Mobile items may or may not be labeled - depending on the preference of the user.  This is to accommodate for the fact that many mobile devices operate under non-ideal circumstances, and the label tags may not be durable enough.  Devices outside of the library are also intended to be in the inventory system in a "checked out" format as per our documentation

If you're logged in to the system you can see our FAQs for managing the system here 

https://hsl-virginia.libanswers.com/tech-support/search/?t=0&q=inventory
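
Snipe-IT also exposes a REST API, which is handy for scripted audits.  A hedged sketch of listing assets (the host and token are placeholders):

import requests

BASE = "https://inventory.example.edu/api/v1"  # hypothetical Snipe-IT host
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN", "Accept": "application/json"}

resp = requests.get(f"{BASE}/hardware", headers=HEADERS, params={"limit": 50})
resp.raise_for_status()
for asset in resp.json()["rows"]:  # Snipe-IT wraps results in a "rows" array
    print(asset["asset_tag"], asset["name"])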

Other inventory systems in place / available

  • JAMF - great system for mac products
  • SCCM - manages many of our Windows machines
  • KACE - may have inventory capabilities sufficient for windows and mac machines going forward
  • HP Jet Admin - for managing printers

 
