Looking for a good RSS feed reader

I listened to a recent DL episode, and @MichaelTunnell mentioned that posts/topics/rooms etc. all have their own RSS feed/link, and that got me thinking.

I read on the internet 2-3 hours/day. In addition to my passion for Linux, I'm trying to closely monitor when Asia opens up to inbound air travel. Instead of surfing to the relevant websites for that information, I was wondering if an RSS reader would be better/easier?

What might you recommend?
Thank you.

RSS readers are great, but they require websites to offer an RSS feed unless you use a web scraper, so I don't know if those kinds of websites would have that feature. Feel free to share a link to the kind of site you are referring to and I will check whether it supports it.
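If you want to check a site yourself, one rough trick is to look for the feed autodiscovery tags in the page's HTML head (a sketch only; example.com below is just a placeholder):

# Most sites that offer RSS/Atom advertise it with a <link rel="alternate">
# tag in the page head; this greps for those tags.
curl -s https://example.com/ | grep -oiE '<link[^>]*(rss|atom)[^>]*>'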

I think I will make an RSS reader video on my channel and an article on FPL to help those wanting to get into RSS, because using one really is awesome.

I use Tiny Tiny RSS, but I don't really recommend it because it requires a server and self-hosting. If you've never self-hosted anything before, it would be jumping into the deep end.

There are quite a few desktop options on Linux, though keep in mind none of these are recommendations, as I do not use them. Just giving the options.

KDE Akregator
Newsboat - if you want a terminal option
Liferea - not a pretty app but functional
RSS Guard
GNOME Feeds

Thank you, Michael. (Great website, Discourse, and shows, BTW.)
I’m watching these.



https://www.cathaypacific.com/cx/en_US/travel-information/travel-preparation/travel-advisories/notice-regarding-travel-restrictions.html


I still use an RSS reader for most of my news content, even though a great part of it is actually FOSS only.
It is like having a curated list of news sites and blogs I really like to read. It is self-made, and no algorithm is messing with it.

What reader do you use?

A top RSS reader: https://itsfoss.com/liferea-rss-client/


Thank you, I’ll check it out

Always enjoyed RSS, especially for those sites not updated on a regular basis. I've used Feedly for the last couple of years. It's easy for me with the mobile apps.

I did not mention it because it is CLI only. I use Newsboat (the older version was called Newsbeuter) from the terminal. It is the one reader I have used exclusively since the demise of Google Reader, and it can download podcasts via podboat, which is integrated into it.

https://newsboat.org/
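If anyone wants to try it, here is a minimal sketch of a setup (the DL feed URL is just the example from this thread; Newsboat reads its subscriptions from a plain-text urls file):

# Newsboat keeps its subscriptions in ~/.newsboat/urls, one feed URL per line
mkdir -p ~/.newsboat
echo "https://destinationlinux.org/feed/mp3/" >> ~/.newsboat/urls
newsboat    # press R inside the program to reload all feeds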


I use an RSS reader to keep up with new releases of software.

The reader I use is called RSS Guard; it is a Qt-based reader.


You can even use RSS with Reddit.
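For example, appending .rss to most Reddit URLs returns a feed (this worked the last time I checked):

https://www.reddit.com/r/linux/.rss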


I write bash scripts to rake in the podcasts that I want to hear. I parse the RSS feeds to dig out the info for the latest episode, log the process (for error correction if something goes wrong), amplify the audio (as my ears aren't 18 anymore), and stick the bash script in a cron job that puts any new episodes into a folder, which in turn gets pushed to my phone each morning at 6AM.

You might be able to look at this code and get an idea of how to pull this data into your terminal or a data file through a cron job. If I were looking for this type of up-to-the-minute news data, I would probably send a condensed version of the retrieved data to my phone via a text message when it comes in.

Here is the script that I use to get the newest episode of Destination Linux:

#!/usr/bin/bash
#
#############################
#
# Gets the latest episode of Destination Linux:
#
# Date of creation: November 28, 2019
# Date of last Edit: December 28, 2019
#
# Mark Cain
#

# Parameters that are used for the podcast:
rss_feed="https://destinationlinux.org/feed/mp3/"
podcast_directory="/home/mark/Music/podcasts/destination_linux"
podcast_name="Destination_Linux_"


echo -e "################################################" >> ${podcast_directory}/${podcast_name}.log
echo -e "###   Starting up the Podcast fetcher for ${podcast_name}" >> ${podcast_directory}/${podcast_name}.log
echo "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log

#
# The first thing we need to do is get the rss feed so that we can start the process of
# getting the data that we want. Get the rss feed and output it to temp.txt
# wget options:
#     -q = be quiet.  Don't show progress or give a report
#     -O = (capital Oh) Output the results to a file
#
echo -e "###   Checking for the latest Episode to Download at ${rss_feed}" >> ${podcast_directory}/${podcast_name}.log

wget ${rss_feed} -qO ${podcast_directory}/temp.txt


#
# Check to see if the wget was successfully executed
if [ $? -eq 0 ]
then
   echo -e "###   Retrived the rss feed" >> ${podcast_directory}/${podcast_name}.log
   # keep a copy of the rss_feed
   cp ${podcast_directory}/temp.txt ${podcast_directory}/temp/$(date '+%Y-%m-%d-%H-%M')_rss_feed.txt

else
   echo -e "###   There is a problem in getting the rss feed" >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   Aborting" >> ${podcast_directory}/${podcast_name}.log
   echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
   exit 1
fi

#
# Now that we've got the rss feed, let's get the actual URL of the show.  We do that by finding the
# first match for "enclosure", split the line up where there are double quotes and store the second field
# into the bash variable "show_url"
#
show_url=$(grep enclosure ${podcast_directory}/temp.txt | head -n1 | awk -F"\"" '{ print $2 }')
echo -e "###   The actual location of the podcast is ${show_url}" >> ${podcast_directory}/${podcast_name}.log


#
# Now let's get the title of the show to use in naming the podcast file.  The title of the latest podcast is in the 5th
# occurrence of "title".  Use sed to get everything between the <title></title> tags.  Then replace all spaces with underscores.
# Dig out the Episode number at the end of the show title
# Now append the Episode title to the end of the Podcast name and store it in bash variable "show_title"
#

# Pull the episode number out of the 5th "title" line and format it
episode_number=$(grep title $podcast_directory/temp.txt | sed -n '5p' |
   sed 's/^.*Destination Linux \([0-9]\{1,\}\).*<.*/Episode_\1_/')

# Take the same line and clean the title up for use as a file name
show_title=$(grep title $podcast_directory/temp.txt | sed -n '5p' |
   sed 's/.*<title>\(.*\).*<.*$/\1/' | # keep what's between the <title> tags, before "| Destination Linux"
   sed 's/\ *$//g'                   | # delete any trailing spaces
   sed 's/[,/\?!]//g'                | # delete any strange characters
   sed 's/\&\#[0-9]\{3\};/_/g'       | # replace unicode entities in the title with underscores
   sed 's/\ /_/g'                    | # swap in underscores for spaces
   sed 's/_\{2,\}/_/g').mp3            # collapse consecutive underscores


show_title=${podcast_name}${episode_number}${show_title}

echo -e "###   Digging out and constructing show title: ${show_title}" >> ${podcast_directory}/${podcast_name}.log

# Now let's see if the file already exists.  If so, we can end the process here
FILE=${podcast_directory}/${show_title}
if [ -f "$FILE" ]
then
    echo -e "###   The latest Episode has already been processed.  We'll stop the process now." >> ${podcast_directory}/${podcast_name}.log
    echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
    echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
    rm ${podcast_directory}/temp.txt
    exit
fi

#
# Download the file
echo -e "###   Downloading the file ${show_url} as ${show_title}" >> ${podcast_directory}/${podcast_name}.log
youtube-dl -o "${podcast_directory}/${show_title}" "${show_url}"

#
# Check to see if the youtube-dl was successfully executed
if [ $? -eq 0 ]
then
   echo -e "###   Retrived the audio file successfully" >> ${podcast_directory}/${podcast_name}.log
else
   echo -e "###   There is a problem in getting the audio file with youtube-dl " >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   Aborting" >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
   echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
   exit 1
fi

#
# Let's normalize the audio of the new file and copy this to a temp directory
echo -e "###   Start normalizing the audio at "$(date '+%M:%S') >> ${podcast_directory}/${podcast_name}.log
ffmpeg -hide_banner -stats -loglevel panic -i "${podcast_directory}/${show_title}" -vn -filter:a dynaudnorm=m=25 ~/temp/${show_title}


# process the file to boost the audio to 150% of its original volume
# and copy it back to this directory
echo -e "###   Start boosting the audio at "$(date '+%M:%S') >> ${podcast_directory}/${podcast_name}.log
ffmpeg -hide_banner -stats -y -loglevel panic -i ~/temp/${show_title} -filter:a volume=1.5 ${podcast_directory}/${show_title}
echo -e "###   Finished boosting the audio at "$(date '+%M:%S') >> ${podcast_directory}/${podcast_name}.log

# do a little clean up
rm ~/temp/${show_title}
rm ${podcast_directory}/temp.txt

echo -e "###   Coping the file to the hopper" >> ${podcast_directory}/${podcast_name}.log
# put a copy of this in the hopper folder on the desktop to be copied to phone
cp ${podcast_directory}/${show_title} /home/mark/Desktop/hopper
echo -e "###   Finished" >> ${podcast_directory}/${podcast_name}.log
echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log

Wow, these are great suggestions. Thank you.

You do realize there are browser add-ons/extensions that handle RSS, which you can add to your browser?

I’ve tried a bunch of RSS readers. Currently using NewsBlur ( https://www.newsblur.com/ ). It’s pretty good, but there’s still room for improvement. It’s time for me to try out other readers that you guys are mentioning.

Luke Smith recently did a video about this, including how to extract RSS feeds from Twitter, YouTube, etc. Don’t let the title/thumbnail scare you away, there is actually good content inside. :joy:

If you’re interested in syncing your feeds across devices, there are freemium services like Inoreader, or you can self-host a service such as Tiny Tiny RSS.

Hi there,

I bookmarked this topic because I am also looking to have my feeds on the desktop.
I tried Feedreader, Inoreader, Feedly, etc., and they are good, but the free versions
have some kind of cap.

Now I am using QuiteRSS + RSS-Bridge to hunt down the feeds (thanks DannyBoy - The Shinning? - for the vid link) and it works great.

Now I wonder: is it possible to keep the OPML file synced on a cloud like Mega or Dropbox?

Thx.


Syncing? I thought that’s what you were using QuiteRSS for—why do you need an external tool?

Also, what do you mean by “The Shinning?”

EDIT: Welcome to the DLN forums! Sorry, didn’t see the banner right away.

Sync it to a cloud of my own as a backup. I can do it manually, of course, but sometimes I do silly things on my system; I'm still a Linux noob.

In the movie “The Shinning”, the boy’s name is Danny. Or that is how I remember it.

Cheers.

Yes, it’s actually quite easy to automate this. Perhaps you could set up a cron job with rclone. Chris Titus, despite not being my favorite Linux content creator, has a good tutorial on this.

Just FYI, this is good for backing up something small like your OPML file, but I would advise a more robust solution for a full-system backup. Feel free to DM me if you need more help.

Oh, The Shining? I haven’t seen that movie. My nickname isn’t a reference to anything; it’s just one I’ve had my whole life.