Tool: ArcGIS Pro 2.6.1
Technique: Annotation, Labeling and Symbology
A series of maps was created for Nature in the Heart of Borneo, a book published in 2020 by WWF-Malaysia and FORMADAT (Forum Masyarakat Adat Dataran Tinggi Borneo).
This book was meant as a guide to some of the natural attractions in the northern parts of Sarawak. If it wasn't clear, northern Sarawak is where we have our very own highlanders, consisting primarily of the Lundayeh/Lun Bawang, Sa'ban and Kelabit people. Some of the beautiful settlements up north that should not be missed are Ba'kelalan and Long Semadoh. They have beautiful homestays and even more beautiful landscapes, with trekking activities lined up for tourists. And this book is the culmination of ardent passion by my two absolutely wonderful colleagues, Alicia Ng and Cynthia Chin.
Most parts of the maps were made using the readily available basemaps provided by Esri in their Living Atlas, but many of the features and details were drawn manually within ArcGIS Pro. Like many other mapmakers out there, I find the labeling feature horrendously temperamental, and I often end up using annotations instead.
In summary, there are two technical lessons learned here:
1️⃣ Establish a concept or pick an idea before you start drawing
A concept of the map and palette should be established at the earliest stage possible. And don't just split the mapmaking task evenly between cartographers without that shared concept. They won't have similar ideas or similar interpretations of it, and it'll only give you double the pain of creating the maps again from scratch.
2️⃣ Omit borders
If you're making maps for books, don't bother making borders; fully utilize the whole layout instead. In the end, you'll need to export your maps, and they will be resized anyway, which compromises the maps you created. As if the export wasn't grainy enough in the first place, the maps will look absolutely microscopic by the time they're done.
Hey again folks! I am here for the second part of the Python environment setup for a geospatial workspace. I published the first part of this post two weeks ago, so if you've not yet read that, I'll catch you up to speed with our checklist:
Install Python ✅
Install Miniconda ✅
Install the basic Python libraries ✅
Create a new environment for your workspace
Install geospatial Python libraries
Since we actually set up our base environment quite thoroughly with all the basic libraries needed, we can make our work easier by simply cloning the base environment and then installing the additional libraries essential for geospatial analysis. This new environment will be called geopy. Feel free to use a name you identify most with.
Why don't we just create a new environment? Well, that would mean installing the Python libraries again from scratch. Although it is no trouble to do so, we want to avoid installing so many libraries all at once. As I mentioned in Part 1, there is always a risk that incomplete dependencies in one library will affect the installation of the other libraries you intend to install in one go. Since we already have a stable and usable base environment, we can use it as a sort of pre-made skeleton to build our geospatial workspace on.
1️⃣ At the Anaconda Command Prompt, type the following:
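Assuming your source environment is named base and the new one geopy, as described above, the standard conda clone command is:

conda create --name geopy --clone base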
2️⃣ Press Enter and the environment will be cloned for you. Once it is done, you can use the following command to check the availability of your environment 👇🏻
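The standard command for listing your environments is:

conda env list

(conda info --envs does the same job.)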
You should be able to see your geopy environment listed along with the base environment.
Here we will proceed with the installation of a few geospatial Python libraries that are essential for reading and exploring vectors and rasters.
🔺 fiona: This library is the core that some of the more up-to-date libraries depend on. It is a simple and straightforward library that reads and writes spatial data using standard Python IO, without making you deal with GDAL's infamous OGR classes directly.
🔺 shapely: features the capability to manipulate and edit spatial vector data on the planar geometric plane. It is one of the core libraries that recent geospatial Python libraries rely on to enable the reading and editing of vector data.
🔺 pyproj: the Python interface to PROJ, the cartographic projections and coordinate transformations library. It is another main library that enables the 'location' characteristics in your spatial data to be read.
🔺 rasterio: reads and writes raster formats and provides a Python API based on NumPy N-dimensional arrays and GeoJSON.
🔺 geopandas: extends the pandas library to allow spatial operations on geometric spatial data, e.g. shapefiles.
📌 As you might have noticed, we won't be doing any direct gdal library installation. That's mainly because its installation seems to be accompanied by misery at every turn and involves workarounds that are pretty inconsistent between individuals. Does that mean we won't be using it for our Pythonic geospatial analysis? Heck no. We will instead be taking advantage of the automatic dependency installation that comes with the libraries above. The rasterio library depends on gdal, so by installing rasterio, we integrate the gdal library indirectly into our geospatial environment. I found this method to be the most fool-proof. Let's proceed to the installation of these libraries.
1️⃣ At the Anaconda Command Prompt, should you be starting from the beginning, ensure that your geopy environment is activated. If not, use the following command to activate geopy.
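Assuming you kept the name geopy, the activation command is:

conda activate geopy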
Once activated, we can install the libraries mentioned one after another. Nevertheless, you also have the option of installing them all in one go using a single command 👇🏻
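A single command along these lines should do it; I'm assuming the conda-forge channel here, which is the usual source for these geospatial packages:

conda install -c conda-forge fiona shapely pyproj rasterio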
📌 geopandas is not included in this line-up NOT because we do not need it. It's another temperamental library that I prefer to isolate and install individually. If gdal is a rabid dog...then geopandas is a feral cat. You never know how-when-why it doesn't like you, and it can drag a single 10-minute installation out to hours.
3️⃣ Once you're done with installing the first line-up above, proceed with our feral cat below 👇🏻
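Again assuming the conda-forge channel, that's:

conda install -c conda-forge geopandas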
4️⃣ Use the conda list command again to check if all the libraries have been installed successfully.
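On top of conda list, a quick sanity check I like to run (my own habit, not an official step) is importing everything in Python and printing the versions:

# Each import fails loudly if the library did not install cleanly
import fiona
import shapely
import pyproj
import rasterio
import geopandas

print(fiona.__version__, shapely.__version__, pyproj.__version__,
      rasterio.__version__, geopandas.__version__)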
🎉Et voilà! Tahniah! You did it!🎉
🎯 The Jupyter Notebook
It should be the end of the road for the helluva task of creating the geospatial environment. But you're going to ask how to start using it anyway. To access these libraries and start analyzing, we can easily use the simple and straightforward Jupyter Notebook. There are so many IDE choices out there, but for data analysis, Jupyter Notebook has sufficed for me so far, and if you are not familiar with Markdown, this tool will ease you into it slowly.
Jupyter Notebook can be installed in your geopy environment as follows:
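Assuming the conda-forge channel once more, the install command is:

conda install -c conda-forge notebook

(The broader jupyter metapackage works too, if you want the full suite.)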
And proceed to use it by launching it from the command prompt:
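jupyter notebook

This serves the notebook interface from whichever folder you ran the command in and opens it in your default browser.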
It ain't that bad, right? If you're still having problems with the steps, do check out the real-time video I created to demonstrate the installation. And feel free to share with us what sort of problems you encountered and the workarounds or solutions you implemented! It's almost never a straight line with this, trust me. As mentioned in the previous post, check out the quick demo below 👇🏻
See you guys again for another session on geospatial Python soon!
Story Map is a web application template product popularized in ArcGIS Online for user-friendly and comprehensive map narratives. The 'Cascade' template has become the seamless interface of choice due to its ribbon transitions and its ability to stream content from external sources.
Please refer to the following link for resources used in this webinar:
Story Map for Noobs: Cascade web application
📌 Availability: Retracted in 2021
Tool: ArcGIS Pro, ArcGIS Pro Deep Learning extension, Python, Jupyter Notebook
Technique: Deep learning; semantic segmentation, cartography, remote sensing
This is the presentation of an abstract outlining the implementation of deep learning for land cover classification across the island of Borneo. It uses Sentinel-2 image data, with a band combination that differentiates bareland, tree cover, waterbodies and croplands, whilst training a U-Net model using the reference data collected.
Please find the abstract published here:
Warta Geologi, Vol. 47, No. 1, April 2021
The presentation slide can be accessed at the following link 👇🏻:
Split by Attributes GP tool....when would you actually use this?
There are times when you're making a map and symbolizing via the symbology feature alone is not enough to characterize the data visually. Having this tool makes cartographic work a little easier by generating copies of the original data, split into separate layers based on the attribute we need. It also makes the task of adding the legend to the layout much easier.
Most often, when making maps for slide presentations, you would want to segregate the data into separate layers, each holding a uniform value for a certain attribute, and create new data layers that you can use over and over again.
Although a definition query can help with visualizing and showing the features with the attribute values we want, we may want to create separate datasets to avoid compromising the original data or constantly repeating the task of typing/configuring the SQL expressions.
This tool works on shapefiles and feature classes. Any other data types may need to be converted into one of those two formats before you can run it. Check out the long-winded demo below:
Since this tool is actually a Python script, it can be integrated into code for batch geoprocessing, into a model for iteration over many data layers, or interconnected with other tools; automation at its full-on glory!
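As a sketch of what that could look like (the paths and field name below are made up for illustration), the tool can be called from arcpy:

import arcpy

# Hypothetical inputs; swap in your own feature class, workspace and field
input_fc = r"C:\data\project.gdb\land_use"   # the layer to split
target_ws = r"C:\data\project.gdb"           # workspace that receives the outputs
split_fields = ["LU_CLASS"]                  # one output layer per unique value

# Split By Attributes writes a separate feature class for every unique
# value (or combination of values) found in the split fields
arcpy.analysis.SplitByAttributes(input_fc, target_ws, split_fields)

Wrap that call in a loop over a list of feature classes and you've got your batch geoprocessing.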
Survey123 for ArcGIS is perhaps one of those applications that superficial nerds like me would like; it's easy to configure, offers a kiddie-level degree of customization with 'coding' (for that fragile ego-stroke) and comes with user-friendly templates.
No app development/coding experience is required to publish a survey form and, believe it or not, you can personalize your survey so it doesn't look so meh.
It took me some time to stumble through the procedures of enabling this feature before I understood the ArcGIS Online ecosystem to which this app is chained.
So how do we do it? And why doesn't it work pronto?
This issue may be due to the fact that when we first start creating our forms, we go through generic step-by-step procedures that leave little clue as to what is actually happening. Most of the time, we're too eager to find out how it really works.
When we publish a Survey123 form, be it from the Survey123 website portal or the Survey123 Connect for ArcGIS software, we are actually creating and publishing a folder that contains a hosted feature layer and a form. It is on that hosted feature layer that we add, delete, update or edit data. From ArcGIS Online, it looks like any feature service we publish out of ArcGIS Desktop or ArcGIS Pro, save for the special folder it is placed in together with a 'Form' file.
To enable any offline function for a hosted feature layer in ArcGIS Online, you will need to enable the 'Sync' feature. So far, the many technical articles I have gone through to learn how to enable this offline feature always go back to 'Prepare basemaps for offline use', which is a tad bit frustrating. But my experience dealing with Collector for ArcGIS gave me an epiphany when it comes to Survey123. So, when you have prepared your Survey123 form for offline usage and it still doesn't work...do not be alarmed, and let's see how to rectify the issue.
1. Locate your survey's hosted feature layer
At your ArcGIS Online home page, click 'Content' on the main tab. We're going to go directly to the hosted feature layer that was generated for your survey when you published it.
Locate your survey folder and click it open.
In the survey folder, navigate to the survey's hosted feature layer and click the 'Options' button; the three-dot (ellipsis) icon.
At the dropdown, click 'View item details'. Please refer to the screenshot below:
2. Change the hosted feature layer settings
At the item details page, navigate to the 'Settings' button at the main header and click it. This will prompt open the settings page for the feature layer. Refer to the screenshot below:
At the 'Settings' page, there are two tabs at the subheader; 'General' and 'Feature layer (hosted)'. Click 'Feature layer (hosted)' to configure its settings.
At the 'Feature layer (hosted)' option, locate the 'Editing' section. Here, check the 'Enable sync' option. This is the option that will enable offline data editing. Please refer to the following screenshot:
Don't forget to click 'Save'.
With this, your hosted feature layer, which serves as the data model, is enabled for synchronization. Synchronization syncs back any changes you've made while out in the field collecting data; editing, adding, deleting or updating...depending on what feature editing you've configured.
It's pretty easy once you get the hang of it; just bear in mind that the data hierarchy in the ArcGIS Online universe is as follows:
Feature layer (hosted) > Web map > Web application
Once you get that out of the way, go crazy with your data collection without any worries!
Ok.
I wanna know why I have never heard of this online tool before. Like, what the hell is wrong with social media? Is something wrong with Twitter or Instagram or something, that they never caught on to mapshaper? Or was it just me and my hazardous ignorance, yet again?
Have you tried this nifty free online tool that simplifies crazy complicated shapefile polygons like it's no one's business?!
It started with some last minute inspiration on how to collate data from 3 different regions, developed from remote sensing techniques which vary from one another. The common goal here is to turn all of them into one vector format, namely shapefile, and work on the attributes to ease merging of the different shapefile layers.
Once merged, this shapefile is to be published as a hosted feature layer on the ArcGIS Online platform and incorporated into a webmap that serves as the reference data for configuring/designing a dashboard. What is a dashboard? It's basically an app template in ArcGIS Online that summarizes all the important information in your spatial data. It's a fun app to create, no coding skills required. Check out the gallery here for reference:
Operations Dashboard for ArcGIS Gallery
There are two common ways to publish a hosted feature layer to the ArcGIS Online platform.
Method 1: Zip up the shapefile and upload it as your content. This will trigger a prompt asking if you would like to publish it as a hosted feature layer. You click 'Yes', give it a name and et voila! You have successfully published a hosted feature layer.
Method 2: From ArcGIS Desktop or ArcGIS Pro, publish it as a feature service (as ArcMap calls it) or a web layer (as its sister ArcGIS Pro calls it). Fill in the details, enable the functions you need, then hit 'Publish', and it will be on the platform, provided there are no errors or conflicting issues.
So, what was the deal with me and mapshaper?
📌 A fair warning here, and please read these bullet points very carefully:
I need you to remember...I absolve myself of any responsibility for what happens to your data should you misinterpret the steps I shared.
Please always 🙏🏻 BACK 🙏🏻 UP 🙏🏻 YOUR 🙏🏻 DATA. Don't even attempt any tools or procedures that I am sharing without doing so. Please. Cause I am an analyst too, and hearing that someone forgot to save their data or create a backup is enough to make me die a little inside.
For this tool, please export the attribute table of your shapefile first, because this tool will CHANGE YOUR SHAPEFILE ATTRIBUTES.
When I was publishing the vector I had cleaned and feature-engineered via ArcGIS Pro...it took so long that I was literally dying inside. I'm not talking about 20 minutes or an hour. It took more than 12 hours, and it did not conjure the 'Successfully published' notification I expected from it.
So at around 5.30 am, I randomly typed 'simplify shapefile online free'. Lo and behold, there was mapshaper.
All I did was zip up my polygon and drag it to the homepage, which brings you to the option of choosing the actions to execute while the data is being imported into mapshaper:
detect line intersections
snap vertices
The first option helps you detect the intersections of lines within your vector/shapefile, which can help identify topological errors.
The option to snap vertices will snap together points with identical or nearly identical coordinates. But it does not work with TopoJSON formats.
There is something interesting about these options too; you can enter other customized options provided by the tool from its command line interface! But hold your horses, peeps. I did not explore that because here, we want to fix an issue, and we'll focus on that first. I checked both options and imported my data in.
This will bring you to a page where you can start configuring the options and method to simplify your vector.
To simplify your shapefile, there are two options for keeping the shape of the polygons from being compromised: 'prevent shape removal', and 'use planar geometry', which utilizes planar Cartesian geometry instead of the usual geoid longitude and latitude. The implication of the second option is not obvious to me yet; since all I wanted was to get the data simplified for easy upload and clean topology, I chose both options to maintain the shape and visibility of all my features despite the highest degree of simplification.
Like the simplification method options in mainstream software, I can see familiar names:
Douglas-Peucker
Visvalingam / effective area
Visvalingam / weighted area
First and foremost, I had not the slightest idea what these were. Like, for real. I usually just go for the default first to understand what sort of output it will give me, and here, the default, Visvalingam / weighted area, seemed like the best option. So what are these simplification methodologies? They are just algorithms used to help simplify your vectors:
🎯 The Douglas-Peucker algorithm decimates a curve composed of line segments to a similar curve with fewer points (Ramer-Douglas-Peucker algorithm, Wikipedia; 2021).
🎯 The Visvalingam algorithm is a line simplification operator that works by eliminating the less significant points of a line based on the concept of effective area; the significance of each point is measured by the area of the triangle formed by that point and its two immediate neighboring points (Visvalingam Algorithm | aplitop).
🎯 The Visvalingam algorithm with weighted area is a subsequent development of the Visvalingam algorithm, where an alternative metric is used and weighted to take the shape into account (Visvalingam & Whelan, 2016).
For reasons I can't even explain, I configured my methodology to use the third option, and now that I have had the time to google it, thank God I did.
Then, see and play with the magic at the 'Settings' slider, where you can adjust and view the simplification made to the vector! I adjusted it to 5%. The shape retained beautifully. And please bear in mind, this vector was converted from a raster, so what I really wanted was a simplified version of the cleaned data, ready to upload.
Now that you've simplified it, export it as a zipped shapefile, and you can use it like any other shapefile after extracting it.
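For the record, I believe the same clicks can be reproduced in mapshaper's command line version; the file names here are just placeholders:

mapshaper polygons.shp -simplify weighted 5% keep-shapes planar -o simplified.shp

where weighted is the Visvalingam weighted-area method, keep-shapes corresponds to 'prevent shape removal' and planar to 'use planar geometry'.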
Remember when I said you have got to export your attribute table before using this tool? Yeah...that's the thing. The attribute table will shock you, cause it'll be empty. Literally. With only the OBJECTID left. Now, with the attribute table you've backed up, use the 'Join Table' tool in ArcGIS Pro or ArcMap to join the attributes back in without any issues.
Phewh!!
I know it has a lot more functions than this but hey, I'm just getting started. If you have ever done anything more rocket-science with it than I did like 2 days ago, please share it with the rest of us. Cause I gotta say, this thing is cray!! Love it so much.
mapshaper developer, if you're seeing this, I 🤟🏻 you!
UPDATE
I have been asked about the confidentiality of the data. I think this is where you understand why the tool will work even with just the '.shp' file of the shapefile, since _that_ is the vector portion of the shapefile.
Shapefile is a spatial data format that is actually made up of at least 4 files. Each of these files shares the same name with a different extension: .prj, .shx, .shp and .dbf. Although I am not familiar with what .shx actually accounts for, the rest of them are pretty straightforward:
.prj: stores the projection information
.dbf: stores the tabulated attributes of each feature in the vector file
.shp: stores the shape/vector information of the shapefile.
So, as the tool indicates, it actually helps with the vector aspect of your data, which is crucial in cartography.
Tool: ArcGIS Pro 2.9.3
Technique: Overlay analysis, visualization via remote sensing technique
These maps are developed to aid or supplement the Natural Capital Valuation (NatCap) initiative. As cited by WWF:
An essential element of the Natural Capital Project is developing tools that help decision makers protect biodiversity and ecosystem services.
One of the sites included in this initiative by WWF-Malaysia is the Heart of Borneo (HoB). Specifically for this exercise, the visualization of policy and land use eventually became the data input for the InVEST tool, which generates the models and maps of the economic values of ecosystem services within the landscape of interest.
The generation of the data mainly involved superficial remote sensing to assess the status of land use in the respective concessions, using Sentinel-2 satellite imagery with specific band combinations to identify tree cover, particularly mangrove forest.
Coding is one of the things I have aspired to do since like...forever! But finding a resource that is in sync with my comprehension and schedule, and able to retain my interest long enough, is a challenge.
I have the attention span of a gnat, so I jumped everywhere! If I am not actively engaged with the learning, I just can't do it. And I know...we have DataCamp, Udemy, Khan Academy and even Kaggle...but I either can't keep up, am too poor to pay for the full course, or it just couldn't sync with me. I believe I can say that most of the exercises don't 'vibe' with me.
Recently, I committed myself to one passion of mine: running. It was one of my favorite activities back in school, but the will to really run died a decade ago. I have recently picked up my running shoes and run my little heart out despite having the speed of a running ant; aging, perhaps? And I owe my hardcore will to the motivation of earning back what I paid when I decided to join a month-long, 65km virtual run. It is called the 'Pave Your Path' virtual run, organized by
Running Station
. Nailed it 2 days ago after 13 sessions of 5km - yes, you can accumulate the distance over multiple runs. It made me realize that...it's not that bad. The 'near-death' experience while running has kinda turned me into a daredevil these days when it comes to undertaking things I'd have whined about doing a few months back.
"If I can go through dying every single evening for 5km long run...I can handle this,"
My thoughts exactly every time I feel so reluctant to finish some tasks I believe I could hold off for some time.
Naturally, I plan my work rigorously, and yet, despite the flexibility of my schedule and my detailed plans, I still have a hard time hammering the final nail into a project's coffin. Usually, it's due to my brain's exhaustion from overthinking, or I am just truly physically tired. Which is a weird situation, given that I do not farm for a living. Even so, I was lethargic all the time.
But when I started running a month ago, things kind of fell into place for me. Maybe...just maybe...I've become more alert than I used to be. I still have my ignorance of things that I believe do not concern my immediate attention, but I seem to be able to network my thoughts faster than I used to.
It might be just me feeling like a new person due to my sheer willpower not to burn the RM60 I paid for the virtual run, but it did feel like there was a change.
For that, I managed to confirm what I have suspected all along - I am one of those people who love drills. I like things to be drilled into my head until I know them by heart and they become efficient, and then focus on polishing the effectiveness.
Thus...for coding, I committed myself to
freeCodeCamp
. By hook or by crook, I'll be coding by the first quarter of next year or someone's head is gonna roll!
It's an interactive learning experience: simple enough for me to start, straightforward enough not to make me waste my time searching for answers, and it's free. God bless Quincy Larson.
Going back to the program outlined in freeCodeCamp, I find it fascinating that they start off with HTML. I have no arguments there. My impatience made me learn my lesson - if you run too fast, you're going to burn out painfully and drop dead before you're halfway through. HTML is a very gentle introduction to coding for newbies, since it's like LEGO building blocks where you arrange and match blocks to create something. I didn't have to go crazy with frustration if I didn't 'get' it. Yes, we would all want some Python lovin', and a lot of coders I came to know have raved about how simple it is to learn. But I think that is an opinion shared by 'experienced' coders who wish Python had been there when they first started coding. Someone once told me that what you think is best based on others' experiences may not be the best for you...and I agree with this. After a lot of deliberation and patience at my end, starting over again this time feels unlike the dreaded looming doom I've always had back then.
Are you into coding? What do you code and what's your language preference? Where did you learn coding? Feel free to share with me!
I'm hitting the backed-up reading list I've accumulated in Zotero. It's annoying, and you procrastinate on the task of reading as much as possible when you're in that potato phase. I am demotivated, bored, constantly tired, and feel like devoting myself to reading storybooks for life. If I could get paid for all the hours I sleep every time I feel like signing out from life, I could make a decent living. But, too bad, I don't.
I do not endorse any products or review anything since I feel like, to each their own. So, I'm not going to tell you what works best or how some tips can magically fix your life. I am lucky that I have an incredible academic supervisor, a flexible boss at work, a very academic-oriented sibling, and a supportive squad of friends. Even with all that, I am still depressed. So, if you're down on the low at the moment, you're not alone. But when you have made a promise, you will look like a total flake if you don't deliver. So, you gotta move your ass anyway, right?
I just started reading papers again and it was so hard. Two weeks went by without me making any progress...just stuck on one paper and not retaining a single piece of information at all. All that forehead and nothing...nothing sticks. So you can say that I am hating life right now. But today...I managed to reach some sort of compromise with myself, and it is starting to feel good. So, I would like to share it with those of you who could be struggling to get the engine started as well.
🎯 Literature Review Catalog
My supervisor is an awesome human being. He's the manager/cheerleader/mentor/Allfather/Captain America/Britney Spears to my lackluster academic history. He has been keeping tabs on me despite my intermittent anxious moods that swing like a freaking metronome, so you can say he practically keeps my boat afloat at this unprecedented time. For our proposal writing (there's a whole army of us under his supervision), he shared something valuable: the 'Literature Review Catalog'.
Yes. It's an Excel sheet. Nothing fancy; very normal columns that catalog the papers/resources you've read. Looks simple and useful. The columns are populated as follows:
Year: The year of publication.
Author: Short author list.
Country (Study Area): The areas studied in this research. If you're an Earth Science student like me, you can narrow it down to countries. But I think, overall, country is the most general way of discriminating between different studies.
Main Keyword: I create my own keywords to develop my own system of comprehension. But I do create a column for the keywords found in the paper itself.
Issue & Objectives: You can find this information from the Abstract and Introduction part of the paper.
Proposed Method: This can be found in the Results section, but I usually scan through the Methodology to add more information when I do a second-round scan of the paper.
Findings & Conclusions: Here, in addition to the conclusion, I add notes on information that is new to me. New information can be extracted when you do another once-over of the paper, and the conclusion can be obtained from the Conclusion section.
Reference: You can find references relevant to your own study in this paper! So why not? Right?
But it's the laborious work that comes with it that turns my stomach. It scares the hell out of me despite any motivational speech I give myself. But it all makes sense when you pair it with the following method 👇🏻👇🏻👇🏻
🎯 How To Read A Paper Quickly & Effectively | Easy Research Reading Technique
This is the gem my sister told me about yesterday. I brushed it off at first, since it stresses me out to see people sharing their speed-reading techniques, study tips, and how to ace all the subjects in the world or get a 4.0 GPA. It really isn't the good people's fault, and I blame it on my constantly anxious self. I don't even know what's wrong with me, so...it's not them. It's me. But here, we're gonna work on 'me'. So, give this 10-minute video a watch. It's worth it, because Dr. Amina Yonis really knows what she's talking about and, what's even better, she really is an advocate for effective reading/studying. It's short enough for you to maintain your attention span, and you will learn how to actually 'evaluate' your reading materials; are they worth a second read? Is there any added value?
To summarize, what you should look out for:
Title: Read the title and find the keywords
Abstract: Look out for the results and methods in a simple sentence
Introduction: Read the first and last paragraphs. Most of the time, the first paragraph gives the satellite view of the problem and the last paragraph zooms straight in on the objective.
Results: Pay attention to the headings, since they more or less highlight what it was they found. If there aren't any headings, try going through the results paragraph by paragraph. Scan them through.
Conclusion: This summarizes everything in the research paper.
After the 'Conclusion', you may feel like the findings are something you've already expected or grasped, and you can just proceed to read other new papers in your pile. But if you need to dive deeper, jump back to the 'Results' for the key figures and limitations.
So ...
How do you go about reading all this, and what has it got to do with the 'Literature Review Catalog'? Well, using this efficient reading method and taking notes into the columns will help you condense all the important information and stop you from constantly re-reading details that are not paramount to your study.
🎯 Forest App
To amp things up and see if it was effective, I actually timed myself with the 'Forest App'. I have been estranged from it since my potato phase, but now it's back to being the BFF I need. It took 10 minutes to go through all the steps and, if the paper isn't heavy-laden, 5 minutes to fill it into the 'Literature Review Catalog'. I managed to think and ask questions in my head as I filled in the columns, and I believe that's the most important part of the effective reading we need as people jumping into the very dynamic environment of scrutinizing existing work. You can use any sort of timer to give a sense of urgency to your work - it does help to a certain extent. So, if you intend to have fun growing a forest of pretty trees while making good use of your focus time, check out this video!
🎯 Reference Manager
And please, please, please organize/record your references responsibly using reference management software. Some swear by Mendeley, or the good ol' EndNote. There are also Flowcite and Citationsy. Use them. Don't download papers indiscriminately without recording the details that can sync them straight into your word processor via viable plugins. I personally use Zotero. It comes with a Chrome plugin and a Microsoft Word plugin that you can download separately, and it's compatible with the Linux and iOS operating systems. I used to park my work at Mendeley, but I find Zotero more powerful, flexible enough to use, and it actually pushes me to make the effort to remember what I downloaded rather than rely on the convenience of going back and forth to cloud storage. And it's open-source. So, try it out to create an organized library.
To all the aspiring scholars out there: when you win, we all win. Share your phase and troubles with #studyblr or here with me. Emotional support is important, and if the internet does not give you peace of mind, sign out and unplug. It's ok. When you're ready to work, reach out to anyone you think will respond positively and wants to help you succeed. We can't all do things alone. So, start that power-up playlist and get working!