
IT services blog


07/05/2023
Anson Parker

In this tutorial, we will walk you through how to effectively manage the LibCal interface for your room reservation system. We'll cover essential features such as space administration, editing locations and spaces, managing calendars, and exploring email settings. Let's get started!

Managing Spaces and Locations:

  • Access the LibCal interface through the Space Admin page using your URL.
  • Navigate to the Spaces and Equipment section.
  • Select "Edit location" to modify the settings of your spaces.
  • Edit space details, directions, and other relevant information.
  • Use the "Status" option to make a space inactive, hiding it from the public view.
  • Manage calendars and exceptions within the "Billing Date" section.
  • Set custom time settings or specify closures on specific dates.

Managing Spaces and Categories:

  • Explore the Spaces and Categories section.
  • Utilize bulk editing options for facets if necessary.

Exploring Email Settings:

  • Access the Settings page.
  • Navigate to the email settings section.
  • Configure email notifications that can be sent from the system.
  • Add content and references for future use.

 

07/03/2023
Anson Parker

Stable Diffusion is in the news a lot, along with other AI image generators such as Midjourney and DALL-E. Unlike those, Stable Diffusion is open source.

One popular tutorial covers making QR codes that are more artistic: https://stable-diffusion-art.com/qr-code/

If you want to follow along on your own desktop, you'll first need to work through a local installation tutorial.

Cost: free and open source

Use cases: generating a wide variety of imagery based on keyword prompts and as part of generalized workflows

Support: Discord and Google are the best sources for support.
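
If you'd rather script it than use a web UI, here's a minimal sketch using the Hugging Face diffusers library, one common way to run Stable Diffusion from Python. The library, model id, and prompt are my own assumptions for illustration, not something from this post:

import torch
from diffusers import StableDiffusionPipeline

# load a publicly available Stable Diffusion checkpoint (model id is an assumption)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision; assumes an NVIDIA GPU
)
pipe = pipe.to("cuda")

# generate one image from a keyword prompt and save it
image = pipe("a watercolor painting of a library reading room").images[0]
image.save("library.png")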

06/23/2023
Anson Parker

In the past we've looked at Data Viz and exploratory analysis with Streamlit, used Streamlit as the backend for library open publishing research, and had some fun with GIS and Streamlit to study local canopies.  In this review we're looking at how ChatGPT can produce Streamlit boilerplate code and then how GitHub Codespaces may be used to do even faster prototyping with additional capabilities. 

In the rapidly evolving field of library sciences, leveraging innovative technologies is crucial for efficient management and seamless user experiences. Python Streamlit, ChatGPT, and GitHub Codespaces offer a powerful combination of tools that can enhance various aspects of library sciences, from data visualization to user interaction. In this article, we will explore the benefits, use cases, associated costs, and means of obtaining support for utilizing these tools in the university and library sciences domain.

I. Python Streamlit:  an open-source framework originally released in 2019, designed for building interactive web applications with minimal code. It enables library professionals to create intuitive and visually appealing interfaces for data exploration, analysis, and dissemination. 

  1. Cost: Python Streamlit is free and open-source, making it an economical choice for library sciences projects.

  2. Common Use Cases:

    1.  Data Visualization: Streamlit simplifies the process of creating dynamic visualizations to present data in an engaging manner, aiding in data-driven decision-making.

    2. User Interfaces: With Streamlit, libraries can build user-friendly interfaces for search, content faceting, and more, improving user experience and accessibility.

    3. Prototyping and Testing: Streamlit's rapid development capabilities make it ideal for prototyping new library services or experimenting with user workflows. This comes in especially handy with the Codespaces integration.

  3. Support: Python Streamlit has an active and supportive community. The official Streamlit documentation, community forums, and GitHub repository are excellent resources for learning and troubleshooting. Additionally, there are various online tutorials and blog posts available to aid in getting started. On Grounds, there is Python support in most libraries, as well as various short courses offered on an ad hoc basis.
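
To make the "minimal code" claim concrete, here's a hedged sketch of a complete Streamlit app; the file name and column names are hypothetical, not from a real library dataset:

import pandas as pd
import streamlit as st

st.title("Circulation explorer")

df = pd.read_csv("circulation.csv")         # hypothetical data: one row per branch per year
year = st.slider("Year", 2015, 2023, 2023)  # an interactive widget in a single line
st.bar_chart(df[df["year"] == year].groupby("branch")["checkouts"].sum())

Saved as app.py, this runs with streamlit run app.py and redraws automatically as the slider moves.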

II. ChatGPT: powered by OpenAI's language model, ChatGPT enables libraries to create conversational agents for improved user engagement and personalized assistance (like help writing the first draft of this article!). It allows users to interact with the library's services through natural language conversations. Every article detailing ChatGPT or other AI tools should include a caveat - it's possible for AI to make mistakes.  Libraries should begin working on policies to determine when to open the doors to this powerful new technology, and how to set up guardrails to prevent the spread of misinformation.  Let's delve into the details:

  1. Cost: OpenAI offers a range of pricing plans for using ChatGPT. The cost depends on factors such as usage, model capacity, and API calls. It is recommended to review the OpenAI pricing page for specific details.

  2. Possible Use Cases:

    1.  Virtual Reference Services: ChatGPT can be used to provide automated virtual reference services, answering user queries and providing assistance in real-time.

    2. Recommender Systems: libraries can develop intelligent recommender systems that suggest relevant books, articles, or resources based on user preferences.

    3. User Support: ChatGPT can handle frequently asked questions, guide users through library services, and offer support for common issues, with some minor training.

  3. Support: OpenAI provides comprehensive developer documentation and guides to assist in integrating ChatGPT into applications. The OpenAI community forum and support channels are valuable resources for addressing queries and troubleshooting issues.
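
As a sketch of what a programmatic integration looks like, here's a minimal call using the openai Python package's chat completion API as it existed when this was written (mid-2023); the system prompt and question are invented examples:

import openai

openai.api_key = "YOUR_API_KEY"  # in real projects, keep keys out of source control

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful library reference assistant."},
        {"role": "user", "content": "How do I request an item through interlibrary loan?"},
    ],
)
print(response.choices[0].message.content)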

III. GitHub Codespaces: GitHub Codespaces offers a cloud-based development environment that enables seamless collaboration and version control for library sciences projects. It provides a hassle-free setup for development and facilitates team collaboration. Although Codespaces is flexible, it can help to use a Codespace template that has some configurations baked in. Port forwarding, Python versioning, and other libraries can be rolled out automagically; for this example I used https://github.com/robmarkcole/streamlit-codespace, which had the above as well as the Streamlit library itself pre-loaded. Here's some more information on Codespaces:

  1. Cost: GitHub Codespaces offers both free and paid plans. The pricing structure is based on factors such as the number of concurrent Codespace instances and storage requirements. Detailed pricing information is available on the GitHub website.

  2. Common Use Cases:

    1.  Collaborative Development: Codespaces enables multiple library professionals to work together on codebases, fostering efficient collaboration and reducing development time.

    2. Testing and Debugging: Codespaces provides an isolated and controlled environment for testing and debugging library-related code, ensuring code quality and reliability.

    3. Continuous Integration/Continuous Deployment (CI/CD): By integrating Codespaces with CI/CD pipelines, libraries can automate the process of building, testing, and deploying applications. GitHub Actions provide additional CI/CD opportunities.

  3. Support: GitHub provides extensive documentation and guides for getting started with Codespaces. The GitHub Community Forum and support resources are available to address any questions or issues that arise during development.
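
For a sense of how those baked-in configurations work, here's a minimal .devcontainer/devcontainer.json sketch along the lines of the template mentioned above; the image and settings are plausible assumptions, not the exact contents of that repo:

// .devcontainer/devcontainer.json
{
  "name": "streamlit-prototyping",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "forwardPorts": [8501],                       // Streamlit's default port
  "postCreateCommand": "pip install streamlit"  // runs once when the Codespace is created
}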

Conclusion: Incorporating Python Streamlit, ChatGPT, and GitHub Codespaces into library sciences can significantly enhance data visualization, user interaction, and collaborative development. The combination of these tools empowers library professionals to deliver improved services and user experiences. With their affordability, diverse use cases, and strong community support, adopting these technologies can lead to transformative outcomes in library sciences projects.

10/20/2021
Anson Parker

We're not experts; however, we're working on getting CPACC certified this year. Thank you, Christa @Virginia Tech!

First we got our metrics set up, and we went big picture here. That was not too important for our site, since the content is largely references to other content (we're a library, right?); however, for anyone else interested, this is a reasonably complete list.

A note on WebAIM responses (metric #6 below): there are a couple of widgets on our site, auto-generated by our framework, that have some W3C errors. Upon inspection they don't really interfere with navigation or much of anything else, so most pages have 2 errors and we're not too worried about it; it is on the agenda for future work, though.

 

a link to our spreadsheet review of the guides.hsl.virginia.edu site

 

METRIC #1 * Photos should never be altered to artificially create diversity.
(we will audit for the use of stock photography, following UVA Health diversity recommendations here, https://www.uvahealthbrand.com/standards/policies/diversity)
METRIC #2 * Monitor the choice of photos and video subjects to accurately and authentically reflect the diversity of actual student, faculty, and staff demographics.
(we will audit for race/ethnic representation and offer actual #'s and %'s of represented groups on our website in photos and videos)
METRIC #3 * When possible, write (or rewrite) communications to be in the plural form by using the plural pronoun of “they” instead of the singular pronouns of “he” and “she.” If it is not possible to write in the plural form, use the singular pronoun of “s/he” to be more inclusive.
(we will audit for gender narrowed references and/or unnecessary uses of gender)
METRIC #4 * Use the gender-neutral nouns of “people,” “person” or “parent” instead of “man,” “woman,” “father” or “mother.” For example, use “chairperson” instead of “chairman” or state that “all people are created equal” instead of “all men are created equal.”
METRIC #5 * Capitalize the “b” in the term Black when referring to people in a racial, ethnic or cultural context. The lowercase black is a color, not a person.
METRIC #6 * All websites should be accessible as defined by W3C standards.
(We will use the WebAIM WAVE tool for accessibility auditing of all pages and count every error, regardless of merit; a scripted way to pull these counts is sketched after this list.)
METRIC #7 * All videos should have closed captioning.
(as described, we will audit for this)
METRIC #8 * Be mindful of the use of symbols such as emojis on social media. For example, choose different emojis of color to represent the diversity of the organization/community.
(as described, we will audit our webpages in case emojis are used there, though I don't expect this will be an issue)
METRIC #9 Indigenous and Aboriginal are identities, not adjectives, and should be capitalized to avoid confusion between indigenous plants and animals and Indigenous human beings. Avoid referring to Indigenous people as possessions of states or countries. Instead of “Virginia’s Indigenous people,” write “Indigenous people of Virginia.”
METRIC #10 LGBTQ is acceptable in all references for members of the lesbian, gay, bisexual, transgender, queer/questioning, asexual, ally and intersex community. It does not need to be defined. If a source prefers another acronym, such as LGBTQIA+, that is acceptable too.
METRIC #11 Capitalize the proper names of nationalities, peoples, races, tribes, etc. However, use only when relevant to the story. When identifying someone by race or nationality, be sensitive to the person’s preference and standard accepted phrases. For example, do not use Oriental for people who are Asian. See Hispanic and Native American entries.
METRIC #12 “Native American” is acceptable for Native people in the U.S. Follow the person’s preference. Where possible, be precise and use the name of the tribe: He is a Navajo commissioner. Such words or terms as wampum, warpath, powwow, teepee, brave, squaw, etc., can be disparaging and offensive (when not referring to something by its formal name). Do not appropriate these phrases for non-cultural uses, such as using the term “powwow” to refer to holding a meeting.

  *   First Nation is the preferred term for native tribes in Canada.
  *   Tribes from Alaska prefer Alaska Native.
  *   Lowercase tribe/tribal and reservation except as part of the formal name.
  *   Use Indian only for people from India.
  *   On second reference, Native/Natives is acceptable.
METRIC #13 * Transgender is an adjective, not a noun. Do not use the term “transgendered.”
  *   The physical changes made to a transgender person’s body are referred to as “transition,” not “sex change.”
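
As promised under metric #6, here's a sketch of how those WAVE error counts could be pulled by script rather than page by page. The endpoint and parameters follow WebAIM's WAVE API (a paid service), but the response field names are assumptions worth checking against the current API documentation:

import requests

WAVE_KEY = "YOUR_WAVE_API_KEY"  # hypothetical; obtain a real key from WebAIM

def wave_error_count(page_url):
    # reporttype=1 requests summary statistics only
    resp = requests.get(
        "https://wave.webaim.org/api/request",
        params={"key": WAVE_KEY, "url": page_url, "reporttype": 1},
    )
    data = resp.json()
    # summary counts are nested under "categories"; verify these names against the docs
    return data["categories"]["error"]["count"]

for page in ["https://guides.hsl.virginia.edu/", "https://guides.hsl.virginia.edu/it-services-blog"]:
    print(page, wave_error_count(page))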

09/21/2021
Anson Parker

Project Lead - Maitri Patel, MPH—Advocacy and Clinical Application Coordinator

The Challenge

Patients - and sometimes doctors and nurses - consistently express difficulties and feelings of being overwhelmed while navigating the UVA Health system. Patient dissatisfaction is heavily determined by access to care. Patient tardiness to appointments and use of medical staff for navigational advice create areas of waste within the hospital system that could be mitigated with an indoor mapping service.

Our Solution

Our team has been assembled to integrate mapping data and tools from commercial and open-data repositories to build a best-in-class wayfinding product for the patients, medical staff, and community members who navigate within the walls of the UVA Health system. Our main goals are to minimize patient barriers to access of care by allowing detailed parking information and step-by-step instructions to walk through the hospital based on patient input of physician/department/appointment location. We hope that after beta-testing and success of navigation,  we will expand the application to allow for multi-stop function, various language functions, and spoken instructions to maximize accessibility. After the 2D rendering is complete, we hope to utilize LiDAR technology to supplement the maps and consider augmented reality applications.

Where

Using foundational work established by Derrick Stone and the Mazemap team, we will begin our mapping in the main hospital and expand to cover the Battle Building, Couric Cancer Center, and West Complex.

How:

Using the Mazemap online system, we will map out the location pins of each department of the UVA main hospital and surrounding hospitals. Using the pre-programmed routing algorithms, we will confirm efficient and accessible patient pathways.  We are also working to integrate with bus schedules from public transportation and UVa transit, as well as information such as pedestrian and wheelchair access resources. 

When:

Week of 9/20: Testing LIDAR tooling in the UVA Health Science Library 

Sept 23 8-9PM Zoom link - team meeting

October 6th: Contacted Derrick Stone and got in touch with the Mazemap team

October 19th: Completed the Mazemap orientation and added all new team members to the mazemap user editor team

Week of October 25th: Divided remaining location pins to be programmed into Mazemap

November: Complete 2D location pinning for UVA main hospital and surrounding buildings

December: Internally test the routing paths for accuracy, efficiency, and accessibility paths

January: Begin beta-testing using medical staff and students

January: Contact select departments for input on most common paths utilized by patients from their department

February: Create screenshots and QR codes for most commonly utilized paths and continue developing the mazemap application based on consumer feedback.

Some initial tests in MazeMap

Creating predefined maps - such as this one from the Guyenet Lab in Pinn Hall over to the Cafeteria

maze map directions

In the toolbar the user gets detailed visual and textual directions to aid in wayfinding

detailed mazemap instructions

 

Support Team:

Technical Advisor - Joe Jamison, Visitable.org

Students and staff

Hollis Cutler: GIS routing & Python coding

Nora Dale: OpenStreetMap transportation lead

Michelle Miles: Accessibility expert and design lead

Lena Nguyen: OpenStreetMap project & community lead

Anson Parker: Health Science Library IT 

Erich Purpur: Research Librarian

Derrick Stone: Computer Science and Software Programming Lead

And a special thanks to MazeMap

Steven Newman: VP of Sales, Mazemap

Tatiana Kosmida: Mazemap Troubleshooting Lead

Daniel Schjetne: Mazemap Troubleshooting Lead

Additional Notes:

We may need to integrate this work with existing infrastructure such as OpenStreetMap and other Charlottesville City maps, and we look forward to investigating this in the future.

this is an openstreetmap view of the hospital and library

 

07/26/2021
Anson Parker

As a collaborator in the Lyrasis Catalyst 2021-2022 award with the Science and Engineering Library, we are pleased to produce the first documentation in the series, done in collaboration with Visitable.org, accessibility information and disability inclusion professionals.

Our goal working with Visitable was to look at our spaces from a wheelchair accessibility perspective and consider which apps provide the simplest workflow for accessibility professionals to use when working with the LiDAR-equipped iPad Pro 12.

<TLDR> 3d scanner app is a convenient off-the-shelf app you may use with confidence.  All the features are free, and you're not locking yourself in from a file-format perspective. All that and it has some convenient workflow features for working with accessibility professionals.

Want to participate? Visit codeforcville.org/lidardb

</TLDR>

 

Notes from Visitable.org

On July 13th I went scanning with Joe Jamison, founder of Visitable.org. All of the scans were done with the 3d scanner app, and most are posted here for review: https://sketchfab.com/alibama77/collections/uva-health-science-library. Starting at the front of the library and working our way down the elevator and into the bathroom and group study rooms, we worked to evaluate 3d scanning in the context of space accessibility analysis. Here are Joe's notes:

 

3d scanning strengths:

  • Can get a full picture, and get overall takeaways
  • Easier to view depth
  • Quicker for making measurements (if you know where and how to scan) and giving feedback
  • Easy to share with colleagues, customers, and users to review and make their own measurements if they'd like

Manual testing strengths:

  • Small details and barriers are easy to see
  • Measurements are more accurate, helpful for measuring small lips and door thresholds
  • Pictures are a more efficient way to help users see a holistic view rather than scanning a full room
  • More thorough, includes looking at attitudinal barriers as well as asking clarifying questions on policy and practices

3d scanning weaknesses:

  • Might be difficult or impossible to see small details, such as door thresholds
  • Visualizations aren't as clear as pictures: reflective surfaces distort shapes, corners are not clearly defined, etc.
  • Might be a learning curve to figure out where and how to scan, which is a little harder to communicate than providing instructions on where to take pictures
  • Scanning roofs/ceilings and bigger spaces for visualizations takes longer than pictures
  • Some tools, such as the Sketchfab lab tool, do not make it easy to measure within units desired
  • Cannot measure slope or door pressure with existing tools

Manual testing weaknesses:

  • Slower to take measurements and record them - makes overall process of pictures and measurements slower
  • More steps in the process to share reviews with customers, colleagues, and users
  • Taking measurements with a tape measure is potentially less accessible/ more difficult than scanning an area

3d scanning 101

We downloaded about a dozen different 3d scanning apps, and most of them required paid subscriptions. I went through the free trials on several other tools but really wasn't impressed; in the end it came down to a toss-up between two great products on the closed source side, plus an open source tool from the robotics space that shows strong potential for bringing game-changing developments into the accessibility space.

Off-the-shelf Winners

For working offline, general flexibility, and user interface there's no question: the 3d scanner app is your go-to app. Use the low resolution mode to capture large spaces, or dig into a single room or two at a time with the high res tool. You can export the file a bunch of different ways and do any technical analysis you want, and exporting to the web is simple. https://sketchfab.com/alibama77/collections/uva-health-science-library is a collection of scans done in the library, and it seems to work well in the new Sketchfab lab tool here: https://labs.sketchfab.com/experiments/measurements/. Familiarity with some 3d viewing tools is still going to be a plus, and for users who are comfortable in SketchUp or other architectural tools, additional controls are available to work with.

Here is the men's room scan, https://sketchfab.com/3d-models/1st-floor-mens-0b64b75eea1c40a88aec4f021f7389ae, opened in the labs tool: https://labs.sketchfab.com/experiments/measurements/#!/models/0b64b75eea1c40a88aec4f021f7389ae

Their measuring tool correctly shows the distance between a shelf and a wall in our men's room as 83 cm, which is right on the cusp of being too narrow for a wheelchair to pass.

Sketchfab 3d measuring tool

 

For convenience with just measuring spaces, poly.cam is a solid contender. It only allows you to post 3d scans to the web, but it comes with some convenient measuring tools; this model of a hallway in the hospital was one of the first scans I took, and the interface is intuitive and elegant.

https://poly.cam/capture/AD7A946A-0BE5-4769-87A0-258A3376D170

polycam's 3d scanning app has some measuring tools built in

 

Open Source and Robotics Perspective 

Also tested, but not ready for a full review, is RTAB-Map. This is an important tool for the next part of our discussion, which is a more automated approach to the analysis, looking at accessibility from a robotics perspective. It has the capacity to tie into epic large scale products, and being the only open source tool I've seen for processing LiDAR in the iOS ecosystem, it is by far the most interesting tool to test. With an active forum, 260 open issues, and 463 closed issues on GitHub, it is a very active community with incredibly technical leaders in the LiDAR processing field.

 

3d scanner app vs. poly.cam vs. RTAB-Map

Scanning interface
  • 3d scanner app: the low polygon scan does a great job on large areas and is intuitive; the high res "paint" approach is clumsy in large spaces and doesn't show you where you've already been very well
  • poly.cam: best interface; the light blue to polygon camera change is intuitive
  • RTAB-Map: the point cloud interface is useful for showing people exactly what the machine sees at the most basic level, and from a training perspective is a great place to begin a training session; the pose overlay data is helpful for explaining that dimension of mapping

Web file sharing
  • 3d scanner app: sketchfab.com
  • poly.cam: you can upload directly to poly.cam
  • RTAB-Map: none

Stability
  • 3d scanner app: crashes occasionally, but pretty good
  • poly.cam: really solid
  • RTAB-Map: crashes a lot; on the iPad I found myself stopping it when it said memory was at around 800 MB

File formats
  • 3d scanner app: excellent choices
  • poly.cam: none; web-based publishing only, and a paid subscription unlocks other formats
  • RTAB-Map: a pbstream.db format that saves pose data; also offers point cloud exports

Community
  • 3d scanner app: https://www.3dscannerapp.com/
  • poly.cam: Discord
  • RTAB-Map: http://introlab.github.io/rtabmap/, forum, GitHub

Accessibility notes
  • 3d scanner app: need to test
  • poly.cam: need to test
  • RTAB-Map: the fact that this tool is harder to use for sighted users actually reinforces the idea that these tools should be better automated, and that ultimately human intervention in the process is not going to be necessary

Quality
  • 3d scanner app: my favorite to use when showing friends, for the quality and overall ease
  • poly.cam: a high quality product, easy to use, and the web-sharing interface with measuring tools is super convenient
  • RTAB-Map: this is probably where we're going to spend a lot more time.... and it's open source, so that's awesome

06/01/2021
Anson Parker

Exploratory data analysis (EDA)

allows developers and programmers to provide stakeholders with a clearer understanding of what questions may reasonably be asked of a dataset, with very little programming effort. Basic questions such as "how much data is actually present in every row" or "what are the unique or most common values in this column" can reportedly shave up to 30% off the data science workflow (according to some random source on the internet); from my perspective, it's simply an essential first step, period.

Carnegie Mellon has a deep-dive chapter on the subject

https://www.stat.cmu.edu/~hseltman/309/Book/chapter4.pdf

and here's a brief, reasonably concise overview: https://www.svds.com/value-exploratory-data-analysis/

EDA in Python

Pandas Profiling and Sweetviz are simple installs that work well with Streamlit.

To test, you can set up a Streamlit share and then install the two libraries.

Here's some Python code wrapped in Streamlit that provides both, for you to test with a CSV of your choosing:

https://github.com/alibama/code-for-cville/blob/master/divides.py 
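
As a rough idea of what that wrapper looks like, here's a condensed sketch (not the exact contents of divides.py) that runs Sweetviz on an uploaded CSV and embeds the report in the Streamlit page:

import pandas as pd
import streamlit as st
import streamlit.components.v1 as components
import sweetviz as sv

uploaded = st.file_uploader("Upload a CSV", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.dataframe(df.head())

    report = sv.analyze(df)                              # build the Sweetviz EDA report
    report.show_html("report.html", open_browser=False)  # write it to disk without opening a browser
    with open("report.html") as f:
        components.html(f.read(), height=800, scrolling=True)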

I pulled most of this from the video here: https://www.youtube.com/watch?v=zWiliqjyPlQ. The video goes in depth; I skipped to about minute 30 to get into the Sweetviz material, then headed over to the GitHub repo

https://github.com/Jcharis/Streamlit_DataScience_Apps/blob/master/EDA_app_with_Streamlit_Components/app.py

and used that file as the basis for the even more stripped-down version above.

04/30/2021
Anson Parker

Getting started is simple

  1. Set up a GitHub repo - you're certainly welcome to fork ours here: https://github.com/carrlucy/HSL_OA
  2. Set up a Streamlit share account at https://share.streamlit.io/ - this may take a day or two, so plan ahead :)
  3. Connect the two - there are some pictures here: https://guides.hsl.virginia.edu/it-services-blog/zoombites/Geopandas-and-streamlit-to-display-local-tree-data-in-deckgl

Now we're off to the races: you should have a URL where your app shows up every time your GitHub code gets updated. Our app link is https://share.streamlit.io/carrlucy/hsl_oa/main

Tech notes... Caching.... it's a thing

I feel like a jerk for not testing the Streamlit caching tools in the past. They're amazing. What a difference it makes with these larger queries. Just add @st.cache() before a function and the results get cached... done. We do all the processing in the pandas dataframe after that, and it's super speedy.
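
Here's a minimal sketch of the pattern; the function and URL are hypothetical, not from our app:

import pandas as pd
import streamlit as st

@st.cache()  # memoize the return value; repeat runs with the same argument skip the slow load
def load_data(url):
    return pd.read_csv(url)

df = load_data("https://example.org/data.csv")  # hypothetical URL, only fetched once per cache entry
st.write(df.describe())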

Creating a development app was a great way to test against our main code. We:

  1. forked a development branch on GitHub,
  2. selected the new branch on Streamlit sharing, and
  3. Boom! A new app to test on: https://share.streamlit.io/carrlucy/hsl_oa/development

The gift of gab

Every aspect of this process is open source, and all the development and support involves real people. Introduce yourself to the community! The groups we reached out to while working on this project included:

 

12/08/2020
Anson Parker

I'm not a good programmer. 85 edits and saves, about two hours total: that's what it took before I got something out the front door that might pass as an "app" by some definition. If you're looking for the Tableau experience, this is not it... yet.

https://share.streamlit.io/alibama/cville-trees/main - here's a link to the app. It filters trees from the UVA equity atlas (http://equity-atlas-uvalibrary.opendata.arcgis.com/) based on species, and lets the user change the size of each tree's marker on the map. Eventually I want to do some canopy analysis, and this is a small start.

The Streamlit components in markets as diverse as molecular structure visualization or Voronoi maps of Trader Joe's distributions are evidence that the team is considering the big picture; being able to get a relative non-programmer like myself using the tool and getting a draft out the door in one day is pretty impressive.

My workflow went between Streamlit's discussion boards at https://discuss.streamlit.io/ and a bit of Stack Overflow. The result is a Python app akin to what Jupyter and Voila aspire towards, or what Shiny apps achieve in R, with a nifty GitHub backend that really simplifies development.

Streamlit Sharing

This is the Streamlit sharing app administration dashboard. There's a list of the existing apps on this page, as well as the menu to create new apps, either from scratch in a new repo or from an existing template.

Once you choose to create a new app from scratch, you can connect it to the GitHub repo where you'll be storing your code, and name your main file and branch (if you're using an existing repo with code in it already).

Packages.txt and Requirements.txt files in Streamlit Sharing

Streamlit Sharing is still in beta; however, I got my invitation to participate within an hour or so of the request, maybe sooner... I don't recall.

Streamlit Sharing connects github.com's infrastructure to a containerized Streamlit server to allow essentially no-click app creation: hit save in GitHub and the app updates on the Streamlit side almost instantly, no other commands necessary.

This is a screenshot of a Streamlit GitHub repo with three files: the streamlit_app.py file, which holds the Python scripts and references the data; the requirements.txt file, which contains Python packages that would normally be installed with pip (to be honest, I'm not sure how or whether conda is part of this process); and the packages.txt file, which references binaries that would typically be installed through apt-get in basic Debian repositories.

In our example we have the following Python libraries in our requirements.txt file:

  pydeck - Deck.gl integration
  pandas - data processing, https://pandas.pydata.org/
  streamlit - because we're working with Streamlit....
  shapely - polygon management, https://pypi.org/project/Shapely/
  fiona - Python's GDAL API, https://pypi.org/project/Fiona/
  geopandas - geospatial data processing in Python, https://geopandas.org/
  pygeos - extends Geopandas' abilities

 

Our packages.txt file has the following lines

gdal-bin
python-rtree

These two packages give the underlying container the ability to do the heavy geospatial lifting.

Once the packages.txt and requirements.txt files are in the repository, they will be automatically discovered during the app baking process. Adding new sources to these files may require you to reboot the app, which brings us to the first bit of Streamlit infrastructure.

screenshot of a streamlit environment from the front end of the app with the streamlit menu opened from the bottom left of the screen

 

This is a screenshot of the Streamlit app with the app management console open.

Closeup of the Streamlit app control panel. The control panel provides some utilities for managing your app; it's located on the bottom left of the screen:

  1. More in-depth debugging tools, available through the log file download
  2. Below that is the reboot app option; this reloads the requirements.txt and packages.txt files and is necessary if you add new binaries to your app
  3. Delete app... it deletes the app. Below that are some documentation and support tools

Inside the app itself the logic load is pretty light. I really like deck.gl; I hadn't really worked with it before, but it's pretty elegant, comes with some neat pan-tilt tools I haven't seen front and center before, and has a modern design flavor to it in general.

Anyhow, with 3 lines of code I added one dropdown select menu that filters the Geopandas dataframe and generates a dataset that the deck.gl library can consume:

treetype = trees['Common_Name'].drop_duplicates() # select all of the trees from the dataframe and filter by unique values to create a useful dropdown menu list
tree_choice = st.sidebar.selectbox('Tree type:', treetype) # render the streamlit widget on the sidebar of the page using the list we created above for the menu
trees=trees[trees['Common_Name'].str.contains(tree_choice)] # create a dataframe for our deck.gl map to use in the layer as the data source and update it based on the selection made above

and similarly here's a streamlit slider widget for controlling the size of the points inside the deck.gl map

dotradius = st.sidebar.slider('Tree dot radius') # this creates a slider widget called "tree dot radius"

layer = [
    pdk.Layer(
        "GeoJsonLayer",
        data=trees,
        getFillColor=[60, 220, 255],
        getRadius=dotradius,  # here's the streamlit slider value being used to set the size of each point on the deck.gl map
    ),
]
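
For completeness, the layer list then gets handed to a pydeck Deck and rendered by Streamlit, roughly like this; the view-state coordinates are placeholders (approximately Charlottesville), not values from the original app:

import pydeck as pdk
import streamlit as st

view = pdk.ViewState(latitude=38.03, longitude=-78.48, zoom=12)  # placeholder center and zoom
st.pydeck_chart(pdk.Deck(layers=layer, initial_view_state=view))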

And that's really all I've done so far. In the next part of this work I'm hoping to start doing some actual Geopandas data processing to see where ash trees land on or near existing buildings, as we did in this Jupyter notebook + Geopandas tutorial: https://guides.hsl.virginia.edu/it-services-blog/zoombites/using-tree-data-in-a-python-3-jupyter-notebook

Additional Discussion

  • Here's an interesting discussion on Streamlit as the next Tableau: https://medium.com/@fabio.annovazzi/is-streamlit-a-threat-to-powerbi-and-tableau-d20af60d594
  • A cool component for using Leaflet with Streamlit for feedback: https://github.com/andfanilo/streamlit-light-leaflet

Thank You!

An enormous shout out to Randy Zwitch, who leaned in on the forums, and to Martin Fleishman of Geopandas for encouraging me to do something useful like documentation ;) It's a pleasure to be part of an internationally diverse community, something that open source continues to deliver.

12/01/2020
Anson Parker

First up, a big shout out to Joe Orozco and the Virginia chapter of the National Federation of the Blind for helping us vet our process here. Your work is inspirational and has helped drive our mission of greater accessibility in measurable ways; below is a description of one.

Accessibility has always been a central tenet in our web development process at the library.  Working in a health system encourages these goals, and our administration addresses this mission by providing time to research the latest strategies in content accessibility, as well as opportunities to implement what we learn in the real world.  

Most recently the tech team was given an opportunity in the form of online course content from the National Medical Library, brought to the table by our Associate Director Dan Wilson. The content was housed in a custom-built framework, heavy in JavaScript and inaccessible widgets, and the surveys were built in SurveyMonkey and iframed into the site - not ideal. We decided to use the Drupal framework to handle the basic content and user management, and then the Course and Webform modules were installed to provide course functionality, combining surveys and content as well as the ability for users to track their progress and for administrators to review course analytics.

These days many front-end surveys are accessible - many people at UVA use our Qualtrics framework - and they do a great job of acknowledging the pros and cons of different survey widgets. When we took on this project, however, we wanted to reach a little further and focus on making the course creation process itself as accessible as possible.

 

 

When front-end accessibility becomes an administrative accessibility headache

This is a picture of an inaccessible survey widget where a Likert quality scale is used in a grid format. While the labels are clear on the top, they are not easily identifiable above each radio button, making them less accessible than other available survey widgets.

Resolving this accessibility issue is pretty simple: take the grid-styled group question and create a unique question for each row. Using Webform we can even clone the questions, so it's not too bad for the course administrator.

One of the detractions of Webform, however, is the reliance on text entry for each select option. While this is useful because it gives a clear and accessible link for each question, it can require a lot of repetitive typing, introducing room for error and making the administrative side less accessible than we wanted.

This is a picture of the text that creates a select list in the Drupal Webform module. Each option has to be specified, and a machine name has to be created as well... technically speaking it's accessible, but if you're creating a long survey and have to do this every time, it's not very attractive. It's also not great practice, since it's not clear how analytics would connect the dots for reporting.
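
To make the repetition concrete, here's roughly what that options text looks like. The key|label format below matches older versions of the Webform module, and the specific options are made-up examples rather than ones from our course:

strongly_disagree|Strongly disagree
disagree|Disagree
neutral|Neutral
agree|Agree
strongly_agree|Strongly agree

Each line pairs a machine name (used in reporting and analytics) with the label a survey taker sees, and every question's options have to be typed out this way by hand.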

 

By adding another module, known as Values, we now have drop-down lists of choices that pre-fill the select options and simplify the user interface.

Here's a picture of the administrative side of the Values module in action: the drop-down list provides pre-created select option lists that have their own administrative interface for managing, translating, etc.

Using the Values module is straightforward. Install it as you would any other module (Drush, Composer, or manually), and turn on the sub-module for webforms. Once enabled, the instructions on the Values page are clear: go to the Values section and add your select option lists.

This is an image of the Values page. The left column holds a description of each select list group, and on the right side are the options to edit, delete, or export your lists. Lists may also be imported. Here is a look at the translations available for this module as well: https://localize.drupal.org/translate/projects/values

Drupal is a large framework, and some of the key tools are not as simple as one might hope out of the box; however, with no programming experience and a bit of digging around in Google, there are a lot of opportunities to leverage the framework. Improving small details of content creation helps ensure that accessibility is introduced not as an afterthought tied to some administrative checkbox, but as a starting point from which other aspects may develop.

 
