RSS feeds to shows

Hi!
I recently got into the habit of managing my various podcast subscriptions on my PC via RSS feeds. Is there a way to find RSS feeds for the DLN shows such as Destination Linux, This Week in Linux, etc., or will I have to find them on podcast platforms or YouTube and use an external RSS-ifier for this purpose?
Thanks!

You should be able to find most of the RSS feeds by doing a site-specific search in Google, e.g. “RSS site:https://destinationlinux.org” or “RSS https://tuxdigital.com”.

These searches would turn up:
https://destinationlinux.org/feed/mp3/
https://tuxdigital.com/feed/thisweekinlinux-mp3
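
If you want to sanity-check a feed before pointing your reader at it, the episode audio URL lives in the feed's first <enclosure> tag. A minimal sketch, run against a made-up stand-in for a real feed (to use the real thing, fetch it first, e.g. wget -qO feed.xml https://destinationlinux.org/feed/mp3/):

```shell
# Write a made-up sample feed to a file (a stand-in for a real RSS download)
cat > /tmp/feed_sample.xml <<'EOF'
<rss><channel>
  <title>Destination Linux</title>
  <item>
    <title>Example Episode | Destination Linux 152</title>
    <enclosure url="https://example.com/dl152.mp3" length="12345" type="audio/mpeg"/>
  </item>
</channel></rss>
EOF

# Grab the first enclosure line and split it on double quotes;
# field 2 is the audio URL
grep enclosure /tmp/feed_sample.xml | head -n1 | awk -F'"' '{ print $2 }'
# → https://example.com/dl152.mp3
```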

FWIW, here is the bash script that I use to rake the Destination Linux podcast. I put this script, and others like it, in a cron job and push the processed podcast to my phone each morning at 6 AM.

#!/usr/bin/bash
#
#############################
#
# Gets the latest version of Destination Linux:
#
# Date of creation: November 28, 2019
# Date of last Edit: December 28, 2019
#
# Mark Cain
#

# Parameters that are used for the podcast:
rss_feed="https://destinationlinux.org/feed/mp3/"
podcast_directory="/home/mark/Music/podcasts/destination_linux"
podcast_name="Destination_Linux_"


echo -e "################################################" >> ${podcast_directory}/${podcast_name}.log
echo -e "###   Starting up the Podcast rake for ${podcast_name}" >> ${podcast_directory}/${podcast_name}.log
echo "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log

#
# The first thing we need to do is get the rss feed so that we can start the process of
# getting the data that we want. Get the rss feed and output it to temp.txt
# wget options:
#     -q = be quiet.  Don't show progress or give a report
#     -O = (capital O) Output the results to a file
#
echo -e "###   Checking for the latest Episode to Download at ${rss_feed}" >> ${podcast_directory}/${podcast_name}.log

wget ${rss_feed} -qO ${podcast_directory}/temp.txt


#
# Check to see if the wget was successfully executed
if [ $? -eq 0 ]
then
   echo -e "###   Retrieved the rss feed" >> ${podcast_directory}/${podcast_name}.log
   # keep a copy of the rss_feed
   cp ${podcast_directory}/temp.txt ${podcast_directory}/temp/$(date '+%Y-%m-%d-%H-%M')_rss_feed.txt

else
   echo -e "###   There is a problem in getting the rss feed" >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   Aborting" >> ${podcast_directory}/${podcast_name}.log
   echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
   exit 1
fi

#
# Now that we've got the rss feed, lets get the actual URL of the show.  We do that by finding the
# first match for "enclosure", split the line up where there are double quotes and store the second field
# into the bash variable "show_url"
#
show_url=$(grep enclosure ${podcast_directory}/temp.txt | head -n1 | awk -F"\"" '{ print $2 }')
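# For illustration (this URL is made up, not from the real feed), the line
# matched above looks something like:
#   <enclosure url="https://example.com/episode152.mp3" length="..." type="audio/mpeg"/>
# so splitting on double quotes puts the URL in field 2.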
echo -e "###   The actual location of the podcast is ${show_url}" >> ${podcast_directory}/${podcast_name}.log


#
# Now let's get the title of the show to use in naming the podcast file.  The title of the latest podcast is in the 5th
# occurrence of "title".  Use sed to get everything between the <title></title> tags.  Then replace all spaces with underscores.
# Dig out the Episode number at the end of the show title
# Now append the Episode title to the end of the Podcast name and store it in bash variable "show_title"
#

episode_number=$(grep title $podcast_directory/temp.txt | sed -n '5p' | sed 's/^.*Destination Linux \([0-9]\{1,\}\).*<.*/Episode_\1_/')
#                                                         # Get 5th   # find the number after "Destination Linux" and format it correctly

show_title=$(grep title $podcast_directory/temp.txt | sed -n '5p' | sed 's/.*<title>\(.*\).*<.*$/\1/' |                       sed 's/\ *$//g'       | sed 's/[,/\?!]//g'   | sed 's/\&\#[0-9]\{3\};/_/g'    | sed 's/\ /_/g'          | sed 's/_\{2,\}/_/g').mp3
#                                                                 # Keep what's between the <title> tags and before           # delete any          # delete any strange   # replace unicode characters     # swap in underscores     # remove consecutive
#                                                                 # the phrase "| Destination Linux"                          # trailing spaces     # characters           # in the title                   # for spaces              # underscores


show_title=${podcast_name}${episode_number}${show_title}

echo -e "###   Digging out and constructing show title: ${show_title}" >> ${podcast_directory}/${podcast_name}.log

# Now let's see if the file already exists.  If so, we can end the process here
FILE=${podcast_directory}/${show_title}
if [ -f "$FILE" ]
then
    echo -e "###   The latest Episode has already been processed.  We'll stop the process now." >> ${podcast_directory}/${podcast_name}.log
    echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
    echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
    rm ${podcast_directory}/temp.txt
    exit
fi

#
# Download the file
echo -e "###   Downloading the file ${show_url} as ${show_title}" >> ${podcast_directory}/${podcast_name}.log
youtube-dl -o "${podcast_directory}/${show_title}" ${show_url}

#
# Check to see if the youtube-dl was successfully executed
if [ $? -eq 0 ]
then
   echo -e "###   Retrieved the audio file successfully" >> ${podcast_directory}/${podcast_name}.log
else
   echo -e "###   There is a problem in getting the audio file with youtube-dl " >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   Aborting" >> ${podcast_directory}/${podcast_name}.log
   echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
   echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
   exit 1
fi

#
# Let's normalize the audio of the new file and copy this to a temp directory
echo -e "###   Start normalizing the audio at "$(date '+%M:%S') >> ${podcast_directory}/${podcast_name}.log
ffmpeg -hide_banner -stats -loglevel panic -i "${podcast_directory}/${show_title}" -vn -filter:a dynaudnorm=m=25 ~/temp/${show_title}


# process the file to boost the audio by 150%
# and copy it back to this directory
echo -e "###   Start boosting the audio at "$(date '+%M:%S') >> ${podcast_directory}/${podcast_name}.log
ffmpeg -hide_banner -stats -y -loglevel panic -i ~/temp/${show_title} -filter:a volume=1.5 ${podcast_directory}/${show_title}
echo -e "###   Finished boosting the audio at "$(date '+%M:%S') >> ${podcast_directory}/${podcast_name}.log
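# (Sketch of an alternative: the normalize and boost passes could be combined
#  into a single ffmpeg run with a chained filter, skipping the temp file:
#    ffmpeg -i in.mp3 -vn -filter:a "dynaudnorm=m=25,volume=1.5" out.mp3
#  the filenames above are placeholders.)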

# do a little clean up
rm ~/temp/${show_title}
rm ${podcast_directory}/temp.txt

echo -e "###   Copying the file to the hopper" >> ${podcast_directory}/${podcast_name}.log
# put a copy of this in the hopper folder on the desktop to be copied to phone
cp ${podcast_directory}/${show_title} /home/mark/Desktop/hopper
echo -e "###   Finished" >> ${podcast_directory}/${podcast_name}.log
echo -e "###   "$(date '+%Y/%m/%d %H:%M:%S') >> ${podcast_directory}/${podcast_name}.log
echo -e "################################################\n\n" >> ${podcast_directory}/${podcast_name}.log
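
For anyone adapting the script to another show: the long sed chain above boils down to "keep what's between the title tags, strip punctuation, swap spaces for underscores". A standalone sketch with a made-up episode title:

```shell
# Standalone sketch of the filename clean-up (the title here is made up)
title='<title>Big News, Big Show! | Destination Linux 152</title>'
clean=$(echo "$title" \
  | sed 's/.*<title>\(.*\)<.*$/\1/' \
  | sed 's/ *$//' \
  | sed 's/[,?!|]//g' \
  | sed 's/ /_/g' \
  | sed 's/_\{2,\}/_/g')
echo "$clean"
# → Big_News_Big_Show_Destination_Linux_152
```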

I love the script and I’m gonna try it out, but isn’t it basically doing what gPodder (the application, not the site) does?

You can also pull in the top secret TWIL Opus feed.

https://tuxdigital.com/feed/thisweekinlinux-ogg

Have fun.

Ask Noah Show
This Week in Linux main feed - opus feed
Das Geek
Destination Linux
DLN Xtend
Hardware Addicts
Linux For Everyone
Sudo Show

Voilà


Perhaps. I don’t know what gPodder does. I wrote the script because, in my experience, most of the podcatchers I have used wanted me to listen to ads, reveal my interests and subscription info, listen to the podcast while on the grid, or download it under “their” naming conventions. The script lets me stay informed AND be totally independent and free!