Search engine alternatives

Gave Mojeek a go and its search is almost unusable for anything too obscure.

Qwant, however, was so exceptional that I had to double-check that they use their own indexing service and crawlers, because it felt like DDG.

You guys need to check out this quote…

"Ensure neutrality and impartiality

Qwant indexes the whole Web with no discrimination. It applies its sorting algorithms with the same requirements, without trying to put forward some websites because it would suit some particular business, political or moral agenda. With Qwant, the information is treated equally, with a constant care for impartiality. Moreover, because we never try to know who you are, we don’t try to offer results that would make you feel comfortable with your own opinions. Qwant presents the reality of a complex world, with diverse opinions, which make it rich and worth living."

Our philosophy - Qwant Help Center

I mean I don’t want to call this one early but dayum.

2 Likes

Excellent find. I’ll definitely take a look at it.

Anyone dig deep enough into Qwant to determine how they’re making money? Inquiring minds want to know…

I also switched to Searx, but use an existing instance chosen from this list: SearXNG and searx instances. Based on the filters I settled on https://searx-private-search.de/, which is also located in Germany, so EU law applies (an additional plus). For now I am trusting this server.

1 Like

Found this “about Qwant”
“Designed and made in France with passion, Qwant is the first European search engine to have its own web indexing technology, which protects user privacy by refusing any tracking devices for advertising purposes. Users can search in total privacy and security. Unlike the main search engines on the market, Qwant does not install any cookies on the user’s browser, does not try to find out who they are or what they are doing, and does not keep a history of the queries they make. With a pleasant interface that leaves plenty of room for results, Qwant makes it possible to efficiently find the information you are looking for on the entire Web, while respecting total neutrality. Qwant treats all indexed sites and services without discrimination, without modifying the order of results according to the user’s own interests or sensibilities.”
Seeing that they’re French, they have to comply with the GDPR, and that’s at least something: there have been some hefty fines doled out to companies that got caught here in Europe.
How they make their money still remains to be seen.

BTW, Qwant was also the best single alternative search engine I found. But it is not open source like Searx is, so we have to trust them without any way to verify whether their claims are true. It also still saves IP addresses and shares data with Microsoft:

What data does Qwant collect when I search?

When you search on Qwant, we naturally receive your search terms, as well as the IP address of your computer or mobile device, and information about your browser (the “User Agent”). We use this data to process your inquiry and return the corresponding answers. We pseudonymize what we need to keep for statistics and for transfer to our technology and business partners.

Why are you transferring data to Microsoft, and what data is it?

Microsoft provides some of the search results you see on our pages, and provides ads to the keywords in your search inquiry. This means that we need to send Microsoft some information related to your search that allows our partner to return results and ads relevant to that search, and to prevent fraudulent clicks or other activities that are not permitted by our Terms of Use.

In order to detect fraud, Qwant uses a specialized service offered by Microsoft, which does not have access to the keywords of your search. Only your IP address and the browser (your “User Agent”) are communicated to this specialized service to calculate a fraud probability score. Keywords are sent separately to another service that does not know your IP address.

Processing of our users’ queries

To respond to the query by displaying results and ads matching your search, as well as for security and reliability purposes of its services (detection of spam, automated activity, fraudulent clicks on ads…), Qwant processes the following data:

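In other words (a hypothetical sketch of the separation described above; the function names and fields are mine, not Qwant’s actual API), the split looks roughly like this:

```javascript
// Hypothetical sketch of the data separation Qwant describes:
// the fraud-scoring service sees only the IP and User-Agent,
// while the search backend sees only the keywords.
function buildFraudPayload(request) {
  // no keywords here
  return { ip: request.ip, userAgent: request.userAgent };
}

function buildSearchPayload(request) {
  // no IP or User-Agent here
  return { q: request.keywords };
}

const req = {
  ip: '203.0.113.7',
  userAgent: 'Mozilla/5.0',
  keywords: 'linux privacy',
};

console.log(buildFraudPayload(req));  // { ip: '203.0.113.7', userAgent: 'Mozilla/5.0' }
console.log(buildSearchPayload(req)); // { q: 'linux privacy' }
```

Neither service alone can link your identity to what you searched for; the question is how much you trust that the two halves are never joined back together.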
2 Likes

Yeah, I’m pretty sure this is what pushed me away from Qwant a while back. I tried a number of ‘alternative’ search engines, and while DDG is maybe not ‘perfect’, it’s the best I found. Qwant is very nice, but this sharing of info with Microsoft is just not OK with me.

I did find that Qwant makes its money off ads and that, for me at least, those ads are only on Qwant’s search page. I didn’t see any ads on the search results page.

What does everyone think about using searx, but configuring it to only use Qwant? The next question will be, which searx page to use.

1 Like

With SearX I’m worried about skin-in-the-game and accountability.

Some random dude hosting a SearX instance doesn’t have much to lose, and there’s no one (or very few people) to blow the whistle if they implement things they’re not transparent about.

If, hypothetically, you wanted to mine data or do something nefarious, SearX is a great place to start, because if you’re found out you can just pop back up somewhere else or flood the trackers with your nodes.

It’s an interesting question what an “ideal” SearX instance might look like, and answering it would help in finding one as close to that ideal as possible.

The only true way is to host SearX yourself on your own machine. But then it would only be available to that machine, I guess (I don’t know), unless you make it public. That also means you can’t access the search from any other system or phone. But in that case it is the ultimate private search engine for you. Searx - LinuxReviews has some explanation of how to do that; I might do it if it’s not complicated. Maybe.

Agreed. There are some project-based searx pages out there.

Here is one:

This one is run by the Garuda distro.

I also need to check myself, because this frikkin’ sucks and I’m beginning to justify the failings of alternatives like Qwant.

1 Like

About Qwant’s finances: its primary resource is private investors. I’ll have to track down the French article I found a while ago to say which ones exactly. The only one I remember is the latest to join: Huawei (that caused quite a stir in French tech :sweat_smile:)

They have lost money every year since launching, but apparently the investors still believe it can live up to the hype once it’s able to be independent from Bing.

3 Likes

This is coming down to identifying the lesser of too many evils.

For testing purposes; I don’t know much about SearX or this instance.

Example for adding a SearX instance in Brave (garudalinux.org)

In the address bar, go to:

brave://settings/searchEngines

# Entries
Name: SearX
Keyword: :x
URL: https://searx.garudalinux.org/?q=%s

After saving, either make it the default or type in the address bar:

:x Search Term Here
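For anyone wondering what the `%s` in the entry does: the browser substitutes your URL-encoded search terms for it before loading the page. A quick sketch of that substitution (an illustration, not Brave’s actual code):

```javascript
// Keyword-search substitution: %s in the engine URL is replaced with
// the URL-encoded search terms (illustration, not Brave's actual code).
const template = 'https://searx.garudalinux.org/?q=%s';

function buildSearchUrl(template, terms) {
  return template.replace('%s', encodeURIComponent(terms));
}

console.log(buildSearchUrl(template, 'privacy respecting search'));
// https://searx.garudalinux.org/?q=privacy%20respecting%20search
```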
1 Like

There is another method. Just go to the searx instance and click in the URL bar to expand its menu. Down at the bottom it says “next time search with…” and you’ll see the searx logo. Click it and it’s now added to the menu of search engines. You can set it as default through the usual method.

This is my take on most of the available search engine alternatives:

TL;DR: SearX and Brave Search are two good albeit imperfect options.

Another good option is Whoogle Search (proxy for pure Google results), but it has fewer available and less reliable public instances.

3 Likes

Problem

The garudalinux search engine went down today:
https://searx.garudalinux.org/?q=

I’ve been asking around about SearX, and I’ve heard the best way to run it is to randomize the instance you use for each search.

That’s achievable using an extension similar to libredirect, but I don’t recommend it because the developers are anonymous (see here) and I don’t particularly want to add more people to my trust chain than necessary.

Solution

There’s something called a Data URL which allows you to render an arbitrary website from the URL string.

Paste this in your address bar to see how it works:

data:text/html,<script>alert('Hello from Ulfnic')</script>

What that means is I can make a Data URL website that accepts a search query from the browser and forwards it to a random search engine by reading the query string value the browser appends to the end.

This example randomizes between searx.be and searx.xyz (note: I have not vetted these instances)

You can add/edit/remove as many as you like.

data:text/html,
<script>
urls=[
'https://searx.be/?q=',
'https://searx.xyz/?q=',
];
/* pick an instance at random */
url=urls[Math.floor(Math.random()*urls.length)];
/* the search terms land after the !--? marker (see the %s at the end) */
query=window.location.href.split('!--?').pop();
window.location.href=url+query;
</script><!--?%s

Installation is as easy as just copy/pasting into a field:

  • Go to: brave://settings/searchEngines?search=search
  • Click “Add” next to “Other search engines”
  • Paste the Data URL into the URL field and pick a keyword (e.g. :r)
  • Either set it as the default or type in the address bar: :r My Search Term
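To see why the `<!--?%s` trick works in isolation: the browser substitutes your search terms for `%s`, so they end up after the `!--?` marker at the very end of the Data URL, and the script recovers them with a split. A standalone demonstration:

```javascript
// The Data URL ends with "<!--?%s". The browser substitutes the search
// terms for %s, so the URL the page actually loads with looks like:
const href = "data:text/html,<script>...<\/script><!--?my%20search";

// The script recovers everything after the "!--?" marker:
const query = href.split('!--?').pop();
console.log(query); // my%20search
```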
1 Like

Or, better yet, set up your own Yacy server. Not only will you personally be free of censorship and Google tracking, but by adding another node you can help others whose searches are restricted.

I love this question on their FAQ, “Isn’t P2P illegal?” lol

Here’s my dummy question… my instinct tells me that if I self-host search, I’ll be extremely trackable, because my searches will originate from my server’s IP (assuming it’s not somehow behind a VPN).

The closest thing I can find in the FAQ is this, which is jargon to me:

Will running YaCy jeopardize my privacy?

YaCy respects user privacy. All password- or cookies-protected pages are excluded from indexing. Additionally, pages loaded using GET or POST parameters are not indexed by default. Thus, only publicly accessible, non-password-protected pages will be indexed.

For a detailed explanation on the technique: How YaCy protects your privacy wrt to personalized pages.

Can other people find-out about my browsing log/history?

There’s no way to browse the pages that are stored on a peer. A search of the pages is only possible on a precise word. The words are themselves dispatched over the peers thanks to the distributed hash tables (DHT). Then the hash tables of the peers are mixed, which makes retrieving the history of browsing of a certain peer impossible.