I'm the creator of Runnaroo. The title is a fun little rib at Neeva, which has been getting a lot of press recently.
I initially launched Runnaroo in a Show HN [0] at the end of February, and wanted to do a followup because the site has grown considerably in features and users over the last few months.
The core idea of Runnaroo is a search engine built on federated data sources to provide the most relevant, highest quality results. We are now at over 50 Deep Search sources, and adding more weekly.
Some examples of queries that I believe provide better results than peer search engines:
react.js: https://www.runnaroo.com/search?term=react.js
creatine effects research: https://www.runnaroo.com/search?term=creatine+effects+resear...
metallica tabs: https://www.runnaroo.com/search?term=Metallica+tabs
parkinson podcast: https://www.runnaroo.com/search?term=parkinson+podcast
bootstrap collapse link: https://www.runnaroo.com/search?term=bootstrap+collapse+link
[0] https://news.ycombinator.com/item?id=22422604
One thing that I would like to see solved by indie search engines is the ability to break the search bubble and see what's usually hidden on the 10th page of Google search results. This is a serious problem in all big search engines.
One interesting way to solve the search bubble problem is to have an option to filter out results from high-traffic websites and blogs that invest heavily in SEO and pollute search results.
Having this filter would surely open up a completely different world of information that's currently very hard to find through search.
Good grief, this is astonishing.
The other day I tried tracking down the names for various avatar species in VRChat. Surely someone had a list curated... but bing, google, and ddg all let me down with ancient wikis or completely unrelated sites, no matter my quoting or alternate search terms (animals, avatars, creatures, etc.).
FWIW, it's not on Google's first few pages for me for that exact same search, so you aren't going crazy.
This thread does show up though. There are so few relevant results that I wouldn't be surprised if this thread alone is giving enough weight to vrcarena.com to already be rolling out to some people, hence the discrepancy in these comments.
Oops, sorry about that! Engineers have been notified and are looking into the issue.
If you see a captcha, be sure to fill it out and submit it to continue your search.
Edit: it looks like it doesn't support anything that isn't in the Latin alphabet at all.
Okay, this is crazy. This week I spent 5-10 minutes scrambling my brain to find this article [1], and it was the first search result for "no more free sodas" (to be fair, it's now the first result for me on Google, too).
Thanks for bringing this onto my radar, I've added it to my pile of cool links!
I was quite impressed by what I've seen so far. The most frustrating thing about Google is how unreliable/irrelevant the results are when your query has anything to do with something that can be paid for; you immediately get spammed with low-quality commercial BS. Meanwhile DDG is practically useless for very specific searches, e.g. an error message in an app.
That would be number one on my list too. I normally skip past them, but it can feel endless until you get to a result that looks authentic and not like some needy click farm.
While they're not bad per se, they're never a detailed enough reference. 99% of my webdev searches are now "mdn <topic>", as MDN is typically the most detailed resource on the topic (short of reading the exhaustingly verbose standards).
Of course, now that I search "mdn <topic>" for everything, I'm not training Google's engine to match <topic> with the MDN resource, which helps cement w3schools as the top result for the bare topic.
If you're using Firefox you can also go to MDN, right click the search box and click "Add a Keyword for this Search...". It's (almost) like a DDG Bang but local to your browser (and for me it's a lot faster too).
It gets even crazier once you realize Firefox is just saving a bookmark with a form-submission template. You can make forms that "send" GET requests to data URIs, and if you combine that with keyword search, you can run tiny web apps from your bookmarks using "commands" in your omnibar!
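To make the basic case concrete: a keyword bookmark is just a bookmark whose URL contains %s, which Firefox substitutes with whatever you type after the keyword. A minimal example (the keyword name is your choice; the MDN search URL is the standard one):

    Name:    MDN search
    URL:     https://developer.mozilla.org/en-US/search?q=%s
    Keyword: mdn

Typing "mdn flexbox" in the address bar then lands you directly on MDN's search results for flexbox.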
I find the DDG search results load faster and are better indexed than the MDN built-in search; the same goes for most sites that do their own indexing and are also indexed by the big search engines.
In my experience the search on MDN sucks, and I usually get much better MDN results by searching for "mdn <searchterm>" on DDG than I get for "<searchterm>" on mdn.
> This is a serious problem in all big search engines
I have never really understood why search engines do not let customers create blacklists of domains they do not want to see in the results. Are they really such cynical bastards that the crappy SEO results they cram down our throats make them so much money that they can't afford to let us hide them?
No, I mean, in what way can this feature be implemented and avoid sharing any PII (assuming the list itself is not PII, since that list could potentially be the same used by other users)? What about a text file hosted somewhere, and the only identifiable information is the cookie containing the link to that file?
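As a sketch of how little machinery the filtering side needs (everything here is hypothetical, not any engine's actual API): the blocklist is just a set of domains, and the filtering happens after results come back, so nothing about it has to be tied to a user account.

    from urllib.parse import urlparse

    # Hypothetical blocklist the user hosts themselves, e.g. fetched from the
    # URL stored in a cookie as suggested above.
    BLOCKED_DOMAINS = {"pinterest.com", "w3schools.com"}

    def is_blocked(result_url: str) -> bool:
        """True if the result's host is a blocked domain or a subdomain of one."""
        host = urlparse(result_url).netloc.lower()
        return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

    def filter_results(results: list[dict]) -> list[dict]:
        """Drop results whose 'url' field points at a blocked domain."""
        return [r for r in results if not is_blocked(r["url"])]

    results = [{"url": "https://www.pinterest.com/pin/1"}, {"url": "https://example.org/post"}]
    print(filter_results(results))  # only the example.org result survives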
I think you've done something very interesting that makes me rethink the idea of a "search engine".
For all the data Google has, and all the AI and technology, their search isn't very "smart" at allowing you to contextualize or filter the searches very well.
For example: a button to remove social results, metadata (such as whether a question is answered or unanswered), or other features that might help me find what I am looking for.
Google has a huge index, but naturally, there's going to be 'noise', and maybe I want to see what's on page 50, or want to try and 'filter' some of the more popular sites.
Their UI for enabling this is non-existent.
Another example: Google BigQuery has the entire Reddit archive, they could surface meta-data such as the number of comments or whether entire comment chains are deleted (although, doing that against Reddit's API at scale would be a very difficult challenge and perhaps not welcomed by reddit).
But to me, as a user of search, that might influence what content I end up viewing, because perhaps a thread with 0 comments isn't so useful to me, or a thread full of [deleted] comments is worthless and should be de-indexed (and deciding what is valuable is a very hard question to answer).
I think you are _radically_ underestimating the cost (and overestimating the benefit) of adding a feature to Google's index.
When you are writing a small-scale search system over a specialized corpus where the index fits in the memory of a single machine it is easy to imagine and realize nifty features.
When your search engine draws 100MW you have a completely different perspective.
You're telling me that Google makes $30 for every financial-services ad I click but is cheaping out on the signal processing? Is the decline in search result quality over the last couple of years the result of a business analyst arbitraging sensitivity to result quality against compute costs?
Only randos on HN think search quality is declining. If the search quality metrics that they use internally suffered a sustained decline, everyone from Sundar down would be fired.
They are declining, and I've met many non-HN randos who agree. Quality doesn't correlate with their metrics, which are likely some mix of "engagement" and, mostly, how much they make from ads.
A whole ton of SEO. Any keyword from which you can make reasonable amounts of money will be cluttered with sites made by SEO people, who are really good at gaming the index, but really bad at anything else (i.e. actually giving proper information on the keyword you're searching for).
Spam has become an issue again in my native language. For a couple of years Google did a great job at removing spam, but somehow in the past 2 years it has resurfaced.
I don't think that "search quality" as perceived by end users is the metric that Google is trying to optimize - my guess is that their real goal is to maximize revenue from ads. Now if Google's ad revenue were to start declining, that would make their executives fear for their jobs.
Every public statement made on the record by Google search quality people, from Matt Cutts to Udi Manber to low-level engineers' tweets and blogs says that search quality never uses revenue as a metric and they don't generally even speak to the ads side of the business.
Subjectively, my search results have become more cluttered with loosely fitting results than with what I'm actually searching for. Even when I'm trying to find a page I've been to before, I often need to play keyword roulette and add half a dozen site bans for the likes of Pinterest, bit, Quora, and the unpronounceable malware sites Google hasn't purged yet. This is a daily affair. When I'm searching for (generic consumer junk) the result is always available from Amazon, Best Buy, Wal-Mart, Target, and the like. As soon as I stop searching for generic crap, Google is worse today than it was years ago, because these retailers, review-spam sites, etc. get boosted to the top of the results despite parameters in the query that should have filtered out these bad results.
In other words: the Google I see has gotten worse at product discovery and at finding relevant debugging discussions.
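For anyone who hasn't leaned on them, the "site bans" described above are just exclusion operators stacked onto the query (the search terms here are made up, but the -site: syntax is real):

    flour sifter review -site:pinterest.com -site:quora.com -site:amazon.com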
If you feel that the results for your personal subset of queries are getting worse because Google is losing the fight against spam, that is a defensible position and we could discuss it.
The idea that Google intentionally trades away search quality to gain revenue is absurd and pretty much a loonie conspiracy theory.
I just looked at my Google search history and just judging from the last few searches it has been dead accurate. In particular, the result for "mass of a single nitrile glove" is amazing. The top result is an info box of glove mass by size. I don't even know how they do that. Compare the results of the same query on DDG. Those results are garbage.
That is, by the way, how Google defines "search quality". Thousands of paid analysts run searches on Google and Bing and rate the outcome.
Google's losing fight to spammers is a part of the problem.
I'm not arguing that Google is sacrificing quality for revenue. Though I'm sure Google has run the numbers on such practices.
Instead, I'm arguing that Google search has lost some ability to differentiate between generic and specific. I'm thinking of any of countless cases where I give up on keyword bingo and make a phone call or post in a chat app to get the information I couldn't squeeze out of a Google search.
Google often chops and screws keywords that should not be touched.
Quoting does not yield expected results, even when I can ultimately find the exact quote on an eventually discovered result.
Fuzzy matches often pollute results for similar but distinct searches. Differentiation is not always easy or even possible.
Fuzzy matches regularly fuzz the wrong words for my searches.
Results are often outdated by years, or even more than a decade.
Limiting by time often returns results outside the range. Mostly in the 'results' before the results.
I often find myself thinking some searches would be easier with a hand written regex.
I think this fuzzy logic of search pisses off people like us (the HN demographic) but makes most users happy. Have you seen how tech illiterate people use Google? They type in questions and phrases with words like "a" and "the" in them. This is who google is optimizing for, because there are hundreds, maybe thousands, of people like this for every one of us.
You touched on what I was driving at in my original comment. There is no doubt whatsoever that if you were able to make a full pass over the corpus with a regular expression you could find docs you can't find on Google's search. But that's obviously not how their search works. They have to make it work at their scale, which dictates the format of the index, which in turn limits the possibilities for query operators. They have to make these design choices so that their product can exist at all.
"Grep the world" is a fine strategy for corpora up to a certain size, and I do wish there was a product that just stored everything I've ever seen and let me run expensive searches on that.
Rando here, subjectively it has been declining for my searches. In aggregate it might have become better so that my aunts' lousy searches are smartly reinterpreted, since el goog manages to "know the person" better than they know themselves. Not everyone wants that!
Edit: this is surveillance capitalism at its finest.
Well, I guess that's unfalsifiable, since you're saying this on HN. I have been saying that search result quality has been getting worse for years... not on every single query, just the long-tail queries that comprise the majority of my searches (guess I'm just an HN weirdo xd).
I think all search engines fail in providing a list of ordered results. What you actually want is a look at the graph that they were plucked from. As things stand, it's not just a case of being unable to see the forest for the trees, but being unable to see the forest because you are forced to infer its structure from a small basket of leaves.
The only company I've seen attempt anything like this is the (criminally under-recognized) carrotsearch.
While I do not agree with storing user-deleted information, you can change the "reddit" URL to "removeddit" to see most of the comments. Alternatively, "ceddit" will only show mod-deleted comments.
> For all the data Google has, and all the AI and technology, their search isn't very "smart" at allowing you to contextualize or filter the searches very well.
Google exists to make money from 99.9% of users, the ones who don't read HN, so that's pretty normal if you don't find it accurate
How does Runnaroo compare to search engines like DuckDuckGo and searx.me in terms of openness and accessible source code?
Is it open source? Does it have an openly accessible plugin/search feature ecosystem? How do you think people would trust you when it comes to the privacy aspects?
All good questions. Trust will only come with time and a track record of making user centric decisions.
Runnaroo currently does not save any information on user search queries, and a core element of the tool is the privacy aspect.
If you need an open source hackable search engine, Searx is a better choice, and the most privacy centric option because you can run your own instance.
The GP mentioned that the title originally included a "rib". We took that part out of the title. Some people might read the GP comment and wonder what it was talking about. When we make a change to a title or URL and notice a comment whose meaning is affected by the change, I try to reply with the missing information.
This is intriguing. Have you considered allowing external developers to provide Deep Search integrations?
I'm imagining a platform where developers can submit Deep Search modules that would run in a sandbox with access to your index, metadata stores and knowledge graph. These modules would return a confidence score and results for a given query, with no ability to snoop on users by exfiltrating data. They could answer a single query or handle an entire domain. The best ones would rise in prominence according to their performance.
One big hurdle is that developers would have to be properly incentivized to return accurate results, but if that can be achieved, you could end up with something that is way better than Google.
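To make the idea concrete, here is one possible shape for such a module contract (entirely hypothetical; nothing like this is exposed by Runnaroo today): the module sees only the query, and returns a confidence score plus results the engine can blend in.

    from dataclasses import dataclass

    @dataclass
    class DeepSearchResult:
        title: str
        url: str
        snippet: str

    class DeepSearchModule:
        """Hypothetical contract for a sandboxed third-party Deep Search module."""

        domain = "recipes"  # the vertical this module claims to handle

        def score(self, query: str) -> float:
            """Confidence in [0, 1] that this module can answer the query."""
            return 1.0 if "recipe" in query.lower() else 0.0

        def search(self, query: str) -> list[DeepSearchResult]:
            """Return ranked results; the sandbox would block any outbound network calls."""
            return [DeepSearchResult("Example recipe", "https://example.org/recipe", "...")]

The engine would run score() across registered modules, call search() on the confident ones, and rank modules over time by how often users engage with their results.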
My search engine quick test is "lojban" and Runnaroo's results were among the best I've seen.
I've been using DuckDuckGo as my search engine of choice 95%+ of the time, only falling back to Google if DDG's results seem weak. I've added Runnaroo to my bookmarks toolbar. I'm going to try it out as my second choice search engine, demoting Google to third place.
Probably scraping with a sea of proxy IPs or a rented botnet
One could test this hypothesis by doing the same thing to it--spamming search queries from numerous IPs--to see if Runnaroo's IPs stop working (get 429'd or captchawalled). I don't really have time to whiteknight for G, but OP should be prepared for them to take notice. (Maybe create a global rate limit.)
I'm interested in how the ranking algorithm works because it seems really good.
Edit: Runnaroo is now down. Looks like he was ahead of me on the global rate limit suggestion.
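For what it's worth, a global (per-upstream, not per-user) rate limit is only a few lines; a minimal token-bucket sketch, with made-up numbers:

    import threading
    import time

    class GlobalRateLimiter:
        """Token bucket shared across all users, capping calls to one upstream source."""

        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec
            self.capacity = burst
            self.tokens = float(burst)
            self.updated = time.monotonic()
            self.lock = threading.Lock()

        def allow(self) -> bool:
            """Return True if one upstream request may go out right now."""
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

    upstream_limit = GlobalRateLimiter(rate_per_sec=5, burst=10)  # illustrative numbers
    # if not upstream_limit.allow(): serve cached or partial results instead of hitting the source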
Hi Chris! Looks great! Are you considering a domain blacklist feature? For example, when searching for C++ documentation, I'd rather never be served results from cplusplus.com.
I wish something like the Web Monetization API was a viable path to monetization, but I don't think it is in the current state.
I believe the only current path to monetization that preserves user privacy is context based display ads that work like billboards in the real world. I am working on that now, but the idea is a flat fee to display the ad for N days regardless of views or clicks.
It is certainly challenging, but DDG and Startpage both exist and seem to be profitable.
The key is to monetize while 1) ensuring user privacy, and 2) ensuring the monetization method doesn't create perverse incentives that conflict with providing better search results for users.
How aggressively are you willing to moderate and kill bad ads?
The biggest issue I see with Google/Bing/AOL/Yahoo Search (and even DDG!) is malicious advertising. It's led me to the general view that any search ads are hostile by default, and I've often found outright dysfunction when it comes to shutting down bad actors.
The other issue I'd be curious about is your position on trademark squatting. i.e., where if I'm Best Buy, my competitors can buy ads over the search term "best buy", so if I want my own site to be the first "result" for my own business name, I have to buy ads.
The only way it would work is heavy moderation on the ads served.
Great question on trademark squatting, and I would love to hear the community's thoughts.
If someone searches for 'nike shoes' and gets an ad for Reebok, is that useful to the user? Half of me thinks that it is not a relevant ad because the searcher clearly stated their intent, but the other half thinks that it might be, because they are both shoe companies and there could be value in seeing related products from different brands.
I am leaning toward it not being a relevant ad, but I could see the argument the other way as well.
I think the important point is that if you are selling placement above the first result, trademark squatting means you're actually extorting the trademark holder: Pay up or we'll put a competitor above you. Nobody should have to pay to be the top result for a searcher looking for them directly.
Of course, this problem could alternately be solved by always putting the trademark holder as the top result for the term, and showing relevant ads immediately below. You can still sell to competitors, but nobody feels like they're forced to pay you to own their own name, which is what happens at Google.
Selling the top result is possibly the most problematic thing about Google. And very few nontechnical users can even identify the first organic result. They click the ad thinking it's the top result, not because they found the ad valuable or relevant.
You are right. I understand the temptation from a business perspective, but it is not an ethical practice IMO.
What does the rule look like that enforces that? No ads can be placed that contain the trademark of another company? Would that be too restrictive in practice?
I think it's probably something that'd need to be curated, because the line is probably "I am looking for x, not results about x", sort of like how when your site sees a search for Twitter, it just goes to Twitter. Presumably if I am looking for "Twitter bots", I am not looking for Twitter but products or apps that interact with Twitter and ads are likely to be relevant, not obstructive.
So there should possibly be a way to register specific trademarks as having an authoritative destination that should be the top result, and above any ad placements, but any additional qualifiers in the search term beyond the trademark should exempt this behavior?
That would be fine with me. Ultimately if I knew the first listed result when someone searched "mapquest" was mapquest.com and not a malicious ad, I'd switch every senior citizen I know to using that search engine immediately.
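A sketch of the rule being discussed (the registry and its entries are invented for illustration): an exact match on a registered trademark pins the official destination above any ad, while any extra qualifier falls through to normal ranking and ads.

    # Hypothetical registry of trademarks with an authoritative destination.
    TRADEMARK_DESTINATIONS = {
        "mapquest": "https://www.mapquest.com",
        "best buy": "https://www.bestbuy.com",
        "twitter": "https://twitter.com",
    }

    def pinned_destination(query: str) -> str | None:
        """Return the official site only when the query is exactly the trademark,
        so 'twitter bots' or 'best buy hours' still get normal results and ads."""
        return TRADEMARK_DESTINATIONS.get(query.strip().lower())

    assert pinned_destination("MapQuest") == "https://www.mapquest.com"
    assert pinned_destination("twitter bots") is None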
Maybe the answer is that you buy display time, but with no keywords. Like in the real world, everyone will see the ad.
Or you could just buy ads for a given "theme", and the search engine would try to interpret queries and assign the closest theme, displaying relevant ads.
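A rough sketch of the kind of query-to-theme matching that implies (themes and keywords are invented; a real engine would do something smarter than bag-of-words overlap):

    # Hypothetical ad themes, each defined by a bag of keywords.
    THEMES = {
        "running shoes": {"shoe", "shoes", "running", "marathon", "sneaker"},
        "web hosting": {"hosting", "vps", "server", "domain"},
    }

    def match_theme(query: str) -> str | None:
        """Pick the theme sharing the most words with the query; None means show no ad."""
        words = set(query.lower().split())
        best, overlap = None, 0
        for theme, keywords in THEMES.items():
            n = len(words & keywords)
            if n > overlap:
                best, overlap = theme, n
        return best

    assert match_theme("best marathon running shoe") == "running shoes"
    assert match_theme("flask tutorial") is None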
I'm working on a similar system. I want to build a business around it, but if you want the Infinite Auction code I can let you use that (only fair, since I nicked the idea from Project Wonderful).
Runnaroo is lacking in design. Visually, everything is very similar and hard to quickly parse.
I want to be able to immediately scan results and tell what URLs they are from. Two issues: the URL is light grey and blends in with the body text, and there is no favicon.
This maintenance seems poorly timed:
"Thank you for checking out Runnaroo!
Unfortunately, we are temporarily taking the site down while we perform some updates. Apologies to our existing users. We are working to get back up to be better than before."
I just wanted to say that I was impressed by the privacy statement being short and clear. I clicked on the privacy link expecting multiple pages of overly complicated writing.
Deep Searching: the inclusion of relevant results from other targeted search engines to deliver better results quicker.
That doesn't sound like "deep" searching to me. That sounds like search aggregation, like DogPile used to do a couple of decades ago.
To me, "deep searching" would mean the company has its own crawler that indexes the content that the other search engines ignore or discard because it hasn't been updated in the last six hours. The world is losing its knowledgebase because companies like Google only care about what's trending, not what's information.
I want a search engine that shows me all the things that Google has decided aren't important because they're not trendy. Show me the stale web. Show me things that are so good they don't need to be repackaged every six months. Show me hobby sites, reference sites, stores of knowledge that don't exist solely to play the SEO game. Show me things I can't get anywhere else.
I'll give Runnaroo a chance. Hopefully it doesn't disappoint. The world doesn't need another bubblegum search engine.
Maybe reverse-SEO would be good: rank higher those sites with less SEO-compliance lol
I have an analogy:
I'm new to programming and have been trying to build a web app using Python. I went through a lot of websites and YouTube videos, and the most fancy-looking ones were more often than not incomplete or bad. When there was a video and the "youtuber" would hit me with requests to like or subscribe all the time, or fancy 3D animations, etc., I would know the quality of the content itself probably wouldn't be very good. Not to my surprise, the best tutorial I have found features a guy with his webcam only, with nothing fancy at all (you might call it a sloppy "production", if you want, but it's far better than anything else I've seen!), and it was never a top-ranked result for my queries on Google.
For anyone wondering, I'm talking about Grinberg's mega-tutorial on Flask (https://blog.miguelgrinberg.com).
Looks nice, thanks for sharing.
I do find the three pieces of configuration to be inconsistent however. The first two have you flip a switch _on_ to turn something _off_. "Turn Off Quick Directs" and "Turn Off Deep Searching" would probably be better as "Quick Directs" and "Deep Searching" where the toggle defaults to the "on" state. They look very similar to iOS controls so I think the users' understanding of on/off state is already there.
As for the third option "Strict Search On", "On" seems redundant at best and misleading at worst. Misleading because I'm not sure if the text is going to change when I toggle the switch. Meaning I don't know if this is a static label telling me what the setting is or a dynamic label telling me what the current state is.
In summary, I would make the button text more consistent and change their default states:
[x] Quick Directs [x] Deep Searching [] Strict Search
As a side note, the controls are different on the main page vs the result pages. On the homepage they are switches while on the results pages they are check boxes.
Thank you everyone for checking Runnaroo out. I shared it this morning mostly on a whim, and definitely did not come close to preparing for the level of traffic that being on the front page of Hacker News would bring. Runnaroo has been my solo side project for the last few months because I believed a better web search was possible, and I greatly appreciate all the constructive feedback.
I am bringing the site down for the next few hours to address some issues, but will have it up again soon. Apologies for any timeouts!
Do you have a starter story or something? I'd love to hear more about your tech stack: how you source search results (APIs, scraping?), and how much SERP data you store, or whether it's mostly fetched in the background via APIs. I can imagine that being more like Google would cost Google-like amounts just to keep server farms of data. How are you managing server costs? Is your monthly bill huge, or do you run it all on small infrastructure somehow?
I've been wanting to ship something besides my freelance work, but never get there; I'm totally inspired to launch something this year. Thanks for showing what one person can build!
A couple of bits of feedback: 1. The searches I tried did return decent / good results, so well done! 2. The image search results page is not very useful / usable - there's no keyboard navigation or filtering (add "vector" as a filter and I'll be back regularly). 3. Dark mode would be nice. 4. What's your business model?
Appreciate the feedback, and I agree. The image (and news, for that matter) search tabs are kind of just box-checking right now, but they are both on the roadmap to be built out and made much more usable. Adding a dark mode will be pretty simple and I can add it to the list.
I wish something like the Web Monetization API [0] was a viable path to monetization, but I don't think it is in the current state.
I believe the only current path to monetization that preserves user privacy is context based display ads that work like billboards in the real world. I am working on that now, but the idea is a flat fee to display the ad for N days regardless of views or clicks.
Thanks for the response. I'm definitely going to keep an eye on this for the future. One other small thing to mention: default search engines in Safari are chosen from a fixed list. DDG worked around this limitation, before making it onto the list, by releasing a Safari extension that switched out the default search, which could be an option for you until the day you make that list.
FYI I would happily pay a modest yearly subscription fee to any service that would save as much time as yours potentially does. Keep up the great work!
...More accurately than DDG, which returns lots of results, none of which contain that text. Other searches so far have been far less swamped by top-ten trash as well. Thanks! I'll add this one to my list of search engines to try when I need accurate results.
I searched for "common" - https://www.runnaroo.com/search?term=common and noticed the first result had a json payload as the description, which isn't visible on the site. Seems like a possible bug.
I do not know how you're doing it, but I just ran quite a few searches for obscure film photography and chemistry topics and the results were much better than DDG and less commercialized than Google. Very impressive!
Was very skeptical - every time I try some new search engine it just gives really bad results. Tried some quite esoteric things and was very surprised - also for non-English content. Trying this as my default search now! Congrats.
Edit: suggestions could be useful. For now, I'm editing the Firefox plugin to get suggestions from Google.
How? With Bing you can pay, but they require you not to do anything with the results, such as caching, filtering, sorting, or removing the tracking links... With Google it just doesn't seem to be available.
I believe Startpage.com still has a deal with Google to repackage their search results. The search results are identical so not sure what terms Google puts on them.
It's a bit weird that searching for "twitter" redirects me straight to https://twitter.com/ - I understand the intent there but I'd rather get search results - I know how to navigate using my browser URL bar.
I was wondering about the business model for this:
> We may also add an affiliate code to some results returned that result in small commissions being paid back to Runnaroo if you visit or make purchases at those sites.
I don't have an opinion on this one way or another, but it's an interesting approach.
I took that from one of the ways that DDG monetizes results. It's more aspirational right now than actually in practice. I was briefly in the Amazon affiliate program, but they told me search engines aren't allowed as part of their TOS.
DDG has a deal with Amazon. All product searches show Amazon up top, with payment for that placement. Not sure if it's exclusive or if DDG has deals with other e-commerce players.
I think there's still a lot to be done in search engines when contextual and personalized results are concerned. I want my search engine to search not just the public internet, but also the niche stuff only I have access to. If I type "pizza", I want local pizzerias. If I type a name of a friend, I want to see their Facebook profile, as well as our conversations on Messenger, Slack and WhatsApp web. If I type "xxx crashing with code 608", where xxx is an internal service of some organization I work for, I want to see past Github issues, Slack conversations, Sentry reports and Jira tickets. The fact my search engine can't search my emails, conversations and private resources makes it much, much less useful than it could have been.
I genuinely don't understand why people dislike it so much when software processes information about them, for their own benefit. I use Google, Facebook etc. a lot, and I've never witnessed any bad consequences of being tracked. If not for that strange aversion, technology could be so, so much better.
> I genuinely don't understand why people dislike it so much when software processes information about them, for their own benefit. I use Google, Facebook etc. a lot, and I've never witnessed any bad consequences of being tracked. If not for that strange aversion, technology could be so, so much better.
I think it's partly because we in the western world have a lot of exposure to hypothetical dystopias in our entertainment and rhetoric. And that's probably based in part on the paranoia that we had during the Cold War. I wonder if it's different in Eastern Europe, where you are IIRC.
On a related topic, how would you feel about websites being able to know that you're running a screen reader? In the American blind community, many have expressed concerns about possible discrimination against blind people. I would be in favor, but then I'm partially sighted (low vision).
Definitely in favor, as long as I can disable that, i.e. in incognito mode. There are some things that could work so, so much better, especially with some modern frameworks (think Flutter) that don't render to DOM unless absolutely necessary. There would be no accessibility mode buttons, everything would, you know, just work.
Yeah, and companies that do A/B testing of their UIs could get valuable data, like knowing that we're actually out there trying to use their new, probably less accessible UI.
While I didn't use them and have no comments on their search quality I did admire Cliqz for their ambition. They were building and running their own index until earlier this year. (https://cliqz.com/announcement.html)
A small piece of feedback: you have a circle on the page that says "A better search engine. Learn why →", but only the tiny text is clickable. It would be great if that whole circle were clickable. Bigger link targets are always better.
Dark mode please! Just a quick UI tip: there's a reason that neither DDG, Google, nor even HN has cards or separators for each result. I think the backgrounds of each result make them all look the same, and it's kind of impacting readability. My mind gets distracted by the rectangles and the shadows, and it's just one more thing to sort through mentally.
IT-related search results seem super relevant. I wish other subjects worked equally well. E.g. supplement/condition-related searches could also surface links to sites like selfhacked.com near the top (the articles there include references to scientific papers to back every particular statement), which Google actively hides from its results despite it being among my first go-tos.
Tried a few queries, and the direct redirect when I type in facebook is more than questionable. If Google did this I couldn't properly research news or facts about facebook. It is a really questionable practice.
That said, I would rather use this page than DDG, as long as I don't get lies about privacy stuffed in my face.
I call that a Quick Direct. It's not for everyone, and you can disable that feature under the advanced search options.
Surprisingly (to me at least) that is the feature that resonates the most with non-technical users. A lot of people use Google just as a navigational tool to go to a website they know about. This saves them a step for some highly popular websites, but your point is still well taken.
It would be okay if I typed in facebook.com or faecbook.com, sure. But not allowing me to see the index for "facebook" is really not what I expect from a search engine. What if I want to type "facebook" and see general information, or news? I can't as your search redirects me away.
Maybe offer a card: "Do you want to visit Facebook?" That would suit the best of both worlds. Or, if you must, a 10-second auto redirect.
From my anecdotal experience: having anything other than a hard redirect entirely misses the point of this feature. A click or waiting 10 seconds pretty much nullifies all the benefit. I think an opt-out is the best way to go. Any user technical enough to be bothered by it should be competent enough to disable it, or be aware of what happens when they are trying to get general information.
Observing my dad using his iPad, I see he regularly types site names without the TLD, with his only intention being to get over to facebook.com.
How often do you search for just "facebook" (or any other site) without any other qualifiers and expect to get information or news? If you search for "facebook" on Google, all of the results above the fold just go to facebook.com anyway. If you want info about the most recent Facebook scandal, you'll likely get better results from any search engine if you add more search terms, e.g. "facebook scandal".
> Surprisingly (to me at least) that is the feature that resonates the most with non-technical users. A lot of people use Google just as a navigational tool to go to a website they know about. This saves them a step for some highly popular websites, but your point is still well taken.
And I think that's the crux of the confusion here. HN users are on the more technical side and it seems you've covered some of the technical search terms well.
I think it would help to pick a target demographic and focus on that. For Google that's non-technical or low-technical users, which leaves many others scratching their heads as more advanced search features start disappearing and junk content ranks higher and higher every year. I don't think it's possible to please all demographics simultaneously.
As another commenter mentioned below: for what this feature is, it makes sense to keep it opt-out. A technical user can figure out how to change it, while its target audience would not (or not as easily).
You could make a subdirectory, subdomain, or even a different domain (e.g. runnaroo.tech) that lets you keep the user-friendly defaults on runnaroo.com, while more technical users go to the alternate location with a different set of defaults, and perhaps more features exposed by default.
Similar to having a mobile, lite, or no-js version of a site.
Runnaroo has been my default search engine for a couple of months now.
1. Search results are more relevant in comparison to Google. I noticed a reduction in spammy results like Slant, fake blogs, gitmemory, etc. There is still room for improvement.
2. Claims to respect users' privacy
3. Languages other than English are handled fine. In this regard Runnaroo is usable for everyday searches, unlike DDG; queries in a foreign language produce results of the same or lower quality than Google's.
I know the author is reading this, so here is some feedback.
- Make the search engine available for firefox mobile. I know there is an extension but it can't be installed on phones.
- Improve the behavior of the search box in mobile phones.
Nice work! One bug/feature I noticed: Once focused on the image tab, if I revise my search, the image tab loses focus and the primary search results tab gains focus. It wasn’t the expected behavior for me.
I wish the about page had some information about the people who built this because I feel trust is increasingly important in my rubric for choosing what services I use.
I've been using it since this was posted here. I am OK with the search results, given the privacy I get back. I do miss the Google cards though... type EPL and get all results for the Premier League... type NBA and get all the games coming up... type movies and see what's coming out near me.
Is everything old new again? Have we hit some sort of cyclic point in internet search quality where we need this once more? Dogpile ended when PageRank showed up.
I wanted so hard to love DDG, but it failed hard, especially for developer questions (90% of my searches)... but DAMN, this is what I wanted in a replacement. I've already replaced Google. Native dark mode would be cool, but I can just use Stylish for that ;)
Why did you name it Runnaroo? Search engines are supposed to be easy to spell and type. During the last 3 days of usage, I misspelt Runnaroo every 3rd time I opened it.
Howto "Show HN": post a project, ignore all best (technical) practices discussed on HN, update your site to "Unfortunately, we are temporarily taking the site down..."
Looks good and I'm going to try it as my default search engine for a bit. There are a handful of small UI issues that make me wish it were open source so I could fix them.
It's easy to make a decent small search engine: nobody is working to game your algorithm. It's vastly harder to do this as you scale to Google's size and popularity.
That's a great argument for a more diverse set of search competitors, especially ones with their own indexes. Imagine if we had 10 viable search engines rather than just the 1 (or 1.5) we currently have. With multiple ranking algorithms, an SEO-bait site might game one or two of them, but it's unlikely to game all 10.
Search seems to be down due to the HN hug of death (as stated on the landing page). Looking forward to giving this a bash whenever it comes back up again.
Why is it that all search engines still look like Google in the '90s? Can't the UI become fresher?
Also, I think SEO really ruined the search experience. Do you have plans to use other ranking factors? Maybe social upvoting or something similar to improve the ranking of actually good content, as opposed to the crappy listicles that make up 50% of search results right now?
> Unfortunately, we are temporarily taking the site down while we perform some updates. Apologies to our existing users. We are working to get back up to be better than before.