
Unverified commit 05fe2ee0, authored by Noémi Ványi, committed by GitHub

pick engine fixes (#3306)

* [fix] google engine: results XPath

* [fix] google & youtube - set EU consent cookie

This replaces the previous bypass method for Google consent (``ucbcb=1``,
6face215) with accepting the consent using ``CONSENT=YES+``.

The youtube_noapi and google engines have a similar API, at least for the consent [1].

Get the CONSENT cookie from a google request::

    curl -i "https://www.google.com/search?q=time&tbm=isch" \
         -A "Mozilla/5.0 (X11; Linux i686; rv:102.0) Gecko/20100101 Firefox/102.0" \
         | grep -i consent
    ...
    location: https://consent.google.com/m?continue=https://www.google.com/search?q%3Dtime%26tbm%3Disch&gl=DE&m=0&pc=irp&uxe=eomtm&hl=en-US&src=1
    set-cookie: CONSENT=PENDING+936; expires=Wed, 24-Jul-2024 11:26:20 GMT; path=/; domain=.google.com; Secure
    ...

PENDING & YES [2]:

  Google changed the way consent to the YouTube cookie agreement is obtained in EU
  countries. Instead of showing a popup on the website, YouTube redirects the
  user to a new webpage at the consent.youtube.com domain ...  The fix for this is to
  send a CONSENT cookie with the value YES+ with every YouTube request
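
For illustration, a standalone sketch of this cookie bypass (not the engine code,
and assuming a recent httpx release that supports ``follow_redirects``); pre-setting
``CONSENT=YES+`` should avoid the redirect to consent.google.com shown above::

    import httpx

    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:102.0) Gecko/20100101 Firefox/102.0',
    }
    # per the fix, the bare "YES+" value is used; the numeric suffix seen in
    # the set-cookie header above is not required
    cookies = {'CONSENT': 'YES+'}

    resp = httpx.get(
        'https://www.google.com/search',
        params={'q': 'time', 'tbm': 'isch'},
        headers=headers,
        cookies=cookies,
        follow_redirects=False,
    )
    # with the cookie set, this should be a 200 instead of a 30x to consent.google.com
    print(resp.status_code)

In the engines themselves this boils down to the one-liner
``params['cookies']['CONSENT'] = "YES+"`` visible in the diffs below.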

[1] https://github.com/iv-org/invidious/pull/2207
[2] https://github.com/TeamNewPipe/NewPipeExtractor/issues/592

Closes: https://github.com/searxng/searxng/issues/1432

* [fix] sjp engine - convert engine name to a latin1-compliant name

The engine name is not only a *name*, it is also an identifier that is used in
logs, HTTP headers and more.  Unicode characters in the name of an engine can
cause various issues.

Closes: https://github.com/searxng/searxng/issues/1544


Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
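
Independent of how this commit fixes the sjp engine, a minimal sketch of what
latin1 compliance means for a name (``latin1_safe_name`` is a hypothetical helper,
not part of searx)::

    import unicodedata

    def latin1_safe_name(name: str) -> str:
        """Return *name* if it fits into latin1, else an ASCII approximation."""
        try:
            name.encode('latin1')      # HTTP header values are effectively limited to latin1
            return name
        except UnicodeEncodeError:
            # drop combining marks and anything that still does not map to ASCII,
            # e.g. "słownik języka polskiego" --> "sownik jezyka polskiego"
            return unicodedata.normalize('NFKD', name).encode('ascii', 'ignore').decode('ascii')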

* [fix] engine tineye: handle 422 response for unsupported image formats

Closes: https://github.com/searxng/searxng/issues/1449


Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
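
A hedged sketch of the idea behind the fix (the JSON fields below are illustrative,
not the exact TinEye payload): a 422 simply means the submitted image format is not
supported, so the engine should return no results instead of failing on the error body::

    def response(resp):
        results = []

        # TinEye answers HTTP 422 when the image format is not supported;
        # treat that as an empty result set instead of raising while parsing
        if resp.status_code == 422:
            return results

        for match in resp.json().get('matches', []):   # field names are illustrative
            results.append({
                'url': match.get('backlink', ''),
                'title': match.get('domain', ''),
            })
        return results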

* bypass google consent with ucbcb=1

* [mod] Adds Lingva translate engine

Add the lingva engine (which grabs data from google translate).  Results from
Lingva are added to the infobox results.
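
For illustration, a rough sketch of such a call, assuming a Lingva instance that
exposes the usual ``/api/v1/{source}/{target}/{query}`` endpoint (the instance URL
and the ``translate`` helper below are assumptions, not the engine code)::

    from urllib.parse import quote
    import httpx

    base_url = 'https://lingva.thedaviddelta.com'   # example instance, configurable

    def translate(text, src='auto', dst='en'):
        url = f'{base_url}/api/v1/{src}/{dst}/{quote(text)}'
        # the JSON answer carries the translated text in a "translation" field
        return httpx.get(url).json().get('translation')

    print(translate('Hallo Welt', 'de', 'en'))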

* openstreetmap engine: return the localized name.

For example: display "Tokyo" instead of "東京都" when the language is English.
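
One way to pick such a localized name from OSM-style tags (a sketch with sample
tags, not the engine code; ``localized_name`` is hypothetical)::

    def localized_name(tags: dict, lang: str) -> str:
        # "en-US" --> "en"; OSM localizes names with tags like "name:en"
        return tags.get('name:' + lang.split('-')[0].lower()) or tags.get('name', '')

    tags = {'name': '東京都', 'name:en': 'Tokyo', 'name:fr': 'Préfecture de Tokyo'}
    print(localized_name(tags, 'en-US'))   # Tokyo
    print(localized_name(tags, 'ja'))      # 東京都 (no name:ja tag, falls back to name)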

* [fix] engines/openstreetmap.py typo: user_langage --> user_language

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>

* Wikidata engine: ignore dummy entities

* Wikidata engine: minor change of the SPARQL request

The engine can be slow, especially when the query does not return any answer.
See https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual/MWAPI#Find_articles_in_Wikipedia_speaking_about_cheese_and_see_which_Wikibase_items_they_correspond_to
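
For context, the general shape of such an MWAPI full-text search as described on
the linked manual page, sent from Python (this is not the engine's actual query)::

    import httpx

    sparql = '''
    SELECT ?title WHERE {
      SERVICE wikibase:mwapi {
        bd:serviceParam wikibase:api "Search";
                        wikibase:endpoint "en.wikipedia.org";
                        mwapi:srsearch "cheese".
        ?title wikibase:apiOutput mwapi:title.
      }
    } LIMIT 5
    '''
    resp = httpx.get(
        'https://query.wikidata.org/sparql',
        params={'query': sparql, 'format': 'json'},
        headers={'User-Agent': 'sparql-sketch/0.1 (example)'},
    )
    print(resp.json()['results']['bindings'])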



Co-authored-by: Léon Tiekötter <leon@tiekoetter.com>
Co-authored-by: Emilien Devos <contact@emiliendevos.be>
Co-authored-by: Markus Heiser <markus.heiser@darmarit.de>
Co-authored-by: Emilien Devos <github@emiliendevos.be>
Co-authored-by: ta <alt3753.7@gmail.com>
Co-authored-by: Alexandre Flament <alex@al-f.net>
parent 85034b49
+4 −2
@@ -108,8 +108,8 @@ filter_mapping = {
# specific xpath variables
# ------------------------

-# google results are grouped into <div class="g ..." ../>
-results_xpath = '//div[@id="search"]//div[contains(@class, "g ")]'
+# google results are grouped into <div class="jtfYYd ..." ../>
+results_xpath = '//div[contains(@class, "jtfYYd")]'
results_xpath_mobile_ui = '//div[contains(@class, "g ")]'

# google *sections* are no usual *results*, we ignore them
@@ -223,6 +223,7 @@ def request(query, params):
        'oe': "utf8",
        'start': offset,
        'filter': '0',
+        'ucbcb': 1,
        **additional_parameters,
    })

@@ -235,6 +236,7 @@ def request(query, params):
    params['url'] = query_url

    logger.debug("HTTP header Accept-Language --> %s", lang_info.get('Accept-Language'))
+    params['cookies']['CONSENT'] = "YES+"
    params['headers'].update(lang_info['headers'])
    if use_mobile_ui:
        params['headers']['Accept'] = '*/*'
+2 −0
@@ -109,6 +109,7 @@ def request(query, params):
        **lang_info['params'],
        'ie': "utf8",
        'oe': "utf8",
+        'ucbcd': 1,
        'num': 30,
    })

@@ -121,6 +122,7 @@ def request(query, params):
    params['url'] = query_url

    logger.debug("HTTP header Accept-Language --> %s", lang_info.get('Accept-Language'))
+    params['cookies']['CONSENT'] = "YES+"
    params['headers'].update(lang_info['headers'])
    params['headers']['Accept'] = (
        'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
+4 −1
@@ -104,6 +104,7 @@ def request(query, params):
        **lang_info['params'],
        'ie': "utf8",
        'oe': "utf8",
+        'ucbcb': 1,
        'gl': lang_info['country'],
    }) + ('&ceid=%s' % ceid)  # ceid includes a ':' character which must not be urlencoded

@@ -111,6 +112,8 @@ def request(query, params):
    params['url'] = query_url

    logger.debug("HTTP header Accept-Language --> %s", lang_info.get('Accept-Language'))

+    params['cookies']['CONSENT'] = "YES+"
    params['headers'].update(lang_info['headers'])
    params['headers']['Accept'] = (
        'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
+69 −0
# SPDX-License-Identifier: AGPL-3.0-or-later
"""
  Google Play Apps
"""

from urllib.parse import urlencode
from lxml import html
from searx.utils import (
    eval_xpath,
    extract_url,
    extract_text,
    eval_xpath_list,
    eval_xpath_getindex,
)

about = {
    "website": "https://play.google.com/",
    "wikidata_id": "Q79576",
    "use_official_api": False,
    "require_api_key": False,
    "results": "HTML",
}

categories = ["files", "apps"]
search_url = "https://play.google.com/store/search?{query}&c=apps&ucbcb=1"


def request(query, params):
    params["url"] = search_url.format(query=urlencode({"q": query}))
    params['cookies']['CONSENT'] = "YES+"

    return params


def response(resp):
    results = []

    dom = html.fromstring(resp.text)

    # bail out early when the "no results" marker <div> is present
    if eval_xpath(dom, '//div[@class="v6DsQb"]'):
        return []

    # the single highlighted app card at the top of the result page, if any
    spot = eval_xpath_getindex(dom, '//div[@class="ipRz4"]', 0, None)
    if spot is not None:
        url = extract_url(eval_xpath(spot, './a[@class="Qfxief"]/@href'), search_url)
        title = extract_text(eval_xpath(spot, './/div[@class="vWM94c"]'))
        content = extract_text(eval_xpath(spot, './/div[@class="LbQbAe"]'))
        img = extract_text(eval_xpath(spot, './/img[@class="T75of bzqKMd"]/@src'))

        results.append({"url": url, "title": title, "content": content, "img_src": img})

    # the regular list of app results
    more = eval_xpath_list(dom, '//c-wiz[@jsrenderer="RBsfwb"]//div[@role="listitem"]', min_len=1)
    for result in more:
        url = extract_url(eval_xpath(result, ".//a/@href"), search_url)
        title = extract_text(eval_xpath(result, './/span[@class="DdYX5"]'))
        content = extract_text(eval_xpath(result, './/span[@class="wMUdtb"]'))
        img = extract_text(
            eval_xpath(
                result,
                './/img[@class="T75of stzEZd" or @class="T75of etjhNc Q8CSx "]/@src',
            )
        )

        results.append({"url": url, "title": title, "content": content, "img_src": img})

    # related search suggestions
    for suggestion in eval_xpath_list(dom, '//c-wiz[@jsrenderer="qyd4Kb"]//div[@class="ULeU3b neq64b"]'):
        results.append({"suggestion": extract_text(eval_xpath(suggestion, './/div[@class="Epkrse "]'))})

    return results
+8 −7
@@ -85,13 +85,13 @@ def request(query, params):
    # subdomain is: scholar.google.xy
    lang_info['subdomain'] = lang_info['subdomain'].replace("www.", "scholar.")

-    query_url = 'https://'+ lang_info['subdomain'] + '/scholar' + "?" + urlencode({
-        'q':  query,
-        **lang_info['params'],
-        'ie': "utf8",
-        'oe':  "utf8",
-        'start' : offset,
-    })
+    query_url = (
+        'https://'
+        + lang_info['subdomain']
+        + '/scholar'
+        + "?"
+        + urlencode({'q': query, **lang_info['params'], 'ie': "utf8", 'oe': "utf8", 'start': offset, 'ucbcb': 1})
+    )

    query_url += time_range_url(params)

@@ -99,6 +99,7 @@ def request(query, params):
    params['url'] = query_url

    logger.debug("HTTP header Accept-Language --> %s", lang_info.get('Accept-Language'))
+    params['cookies']['CONSENT'] = "YES+"
    params['headers'].update(lang_info['headers'])
    params['headers']['Accept'] = (
        'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'