【Es】The es deep paging problem


    1. Overview

    Related post: 【elasticsearch】The meaning of elasticsearch search results

    Reposted from: https://blog.csdn.net/gwd1154978352/article/details/82943037

    2. Paginated search

    For ordinary pagination, append paging parameters to a normal search:

    ?from=0&size=2

    Note: once the result set grows beyond 50,000 documents, use the scroll approach described below instead. (Elasticsearch itself also caps from + size per request via the index.max_result_window index setting, 10,000 by default.)

    GET /_search?from=0&size=2

    Response:

    { "took": 3, "timed_out": false, "_shards": { "total": 16, "successful": 16, "skipped": 0, "failed": 0 }, "hits": { "total": 8, "max_score": 1, "hits": [ { "_index": ".kibana", "_type": "doc", "_id": "config:6.4.0", "_score": 1, "_source": { "type": "config", "updated_at": "2018-09-18T09:30:18.949Z", "config": { "buildNum": 17929, "telemetry:optIn": true } } }, { "_index": "blog", "_type": "article", "_id": "eTmX5mUBtZGWutGW0TNs", "_score": 1, "_source": { "title": "New version of Elasticsearch released!", "content": "Version 1.0 released today!", "priority": 10, "tags": [ "announce", "elasticsearch", "release" ] } } ] } }

    3. The deep paging problem

    Deep paging simply means paging very far into a result set. Suppose there are 60,000 documents in total on 3 primary shards, 20,000 per shard, and each page holds 10 documents. To render page 1000 you need documents 10001-10010 (from=10000&size=10). How are they fetched?

    The request may first land on a node that does not even hold a shard of this index. That node acts as the coordinating node and forwards the search request to the nodes holding the index's three shards.

    To serve page 1000 of the 60,000 documents, each shard must return its own top 10,010 documents out of its 20,000, not just 10. Three shards each send 10,010 documents, so the coordinating node receives 30,030 documents in total, sorts them all, and finally keeps only the 10 documents of the requested page.

    An analogy: take 60 numbered balls (1 through 60) and distribute them randomly across three baskets, keeping each basket internally sorted (which ball lands in which basket is random and follows no pattern). To retrieve the balls ranked 10-12 overall, you must first pull the top 12 balls out of every basket, since any basket could contain all three of them, 36 balls in total, merge and sort them, and only then pick out balls 10-12 from the merged result.
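    The cost is easy to see in a toy simulation. The sketch below is plain Python with made-up data (no Elasticsearch involved): it mimics the coordinating node serving from=10000&size=10 across 3 sorted shards, transferring 30,030 documents to keep 10.

    import heapq
    import random

    # 60,000 scored docs spread over 3 sorted "shards" (synthetic data).
    docs = random.sample(range(1_000_000), 60_000)
    shards = [sorted(docs[i::3], reverse=True) for i in range(3)]

    FROM, SIZE = 10_000, 10
    # Each shard must contribute its top (from + size) = 10,010 docs ...
    per_shard = [s[:FROM + SIZE] for s in shards]
    # ... and the coordinator merge-sorts all 3 x 10,010 = 30,030 of them,
    # discarding everything except the 10 docs of the requested page.
    merged = list(heapq.merge(*per_shard, reverse=True))
    page = merged[FROM:FROM + SIZE]
    print(f"kept {len(page)} of {len(merged)} transferred docs")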

    3.1 Drawbacks

    The deeper the page, the more data the coordinating node must buffer and sort just to extract a single page. The process consumes network bandwidth, memory, and CPU. This is the deep paging performance problem, and such operations should be avoided wherever possible.

    3.2 Solution

    To address this, elasticsearch provides the scroll API. Each query returns a scroll_id, which is then used to fetch the next page; you can think of the scroll_id as the equivalent of a cursor in a relational database. The limitation is that a scroll cannot be replayed: it only moves forward to the next page, never back to a previous one.

    In practice, once a result set exceeds 50,000 documents, users will not read every document; what they want is the result of analyzing and processing the data. For result sets under 50,000 documents, from/size pagination works fine. So what scenario does scroll exist for? Retrieving an entire data set in batches, as sketched below.
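    A typical batch-retrieval loop with the Python client might look like the following sketch. The index name, batch size, 1m keep-alive, and the process handler are all assumptions for illustration:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed local cluster

    # Open the scroll: the first batch plus a scroll_id.
    resp = es.search(index="blog", scroll="1m", size=1000)
    scroll_id = resp["_scroll_id"]

    while resp["hits"]["hits"]:
        for hit in resp["hits"]["hits"]:
            process(hit["_source"])  # hypothetical per-document handler
        # Fetch the next batch, always reusing the most recently returned id.
        resp = es.scroll(scroll_id=scroll_id, scroll="1m")
        scroll_id = resp["_scroll_id"]

    es.clear_scroll(scroll_id=scroll_id)  # free the server-side context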

    3.3 Implementation steps

    1. Fetch the first 2 documents and obtain a scroll_id. (The 3s here is the scroll keep-alive: if the next page is not requested within 3 seconds, the scroll_id expires.)

    GET /_search?scroll=3s&size=2

    Response:

    { "_scroll_id": "DnF1ZXJ5VGhlbkZldGNoEAAAAAAAAAPIFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAADyhZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAAA8sWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAPJFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAADzRZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAAA8wWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAPOFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAADzxZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAAA9cWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAPQFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAD0RZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAAA9UWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAPWFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAD0hZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAAA9MWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAPUFnhEVi1HVGViVFJxYzdlczBoRFI0clE=", "took": 10, "timed_out": false, "_shards": { "total": 16, "successful": 16, "skipped": 0, "failed": 0 }, "hits": { "total": 8, "max_score": 1, "hits": [ { "_index": ".kibana", "_type": "doc", "_id": "config:6.4.0", "_score": 1, "_source": { "type": "config", "updated_at": "2018-09-18T09:30:18.949Z", "config": { "buildNum": 17929, "telemetry:optIn": true } } }, { "_index": "blog", "_type": "article", "_id": "eTmX5mUBtZGWutGW0TNs", "_score": 1, "_source": { "title": "New version of Elasticsearch released!", "content": "Version 1.0 released today!", "priority": 10, "tags": [ "announce", "elasticsearch", "release" ] } } ] }

    }

    2. Query the next page. Note that this request does not specify an index; only the scroll_id and a fresh keep-alive are needed.

    Put plainly, to reach page N you just issue this request N times in a loop. Within the configured keep-alive the scroll_id generally stays the same, although it is safest to always pass the most recently returned one.

    GET /_search/scroll?scroll=3s&scroll_id=DnF1ZXJ5VGhlbkZldGNoEAAAAAAAAAWtFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFuxZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABa4WeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAWwFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFrxZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABbIWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAWxFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFvBZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABbMWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAW0FnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFtRZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABbYWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAW3FnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFuBZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABbkWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAW6FnhEVi1HVGViVFJxYzdlczBoRFI0clE=

    or, equivalently:

    POST /_search/scroll
    {
      "scroll": "3s",
      "scroll_id": "DnF1ZXJ5VGhlbkZldGNoEAAAAAAAAAXIFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAF1RZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABccWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXWFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFyRZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABcoWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXLFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFzBZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABc0WeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXOFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFzxZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABdAWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXSFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAF0RZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABdQWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXTFnhEVi1HVGViVFJxYzdlczBoRFI0clE="
    }

    3. Delete a specific scroll_id.

    When the search is finished, or we have scrolled to the end, we can delete the scroll_id:

    DELETE /_search/scroll/DnF1ZXJ5VGhlbkZldGNoEAAAAAAAAAXIFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAF1RZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABccWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXWFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFyRZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABcoWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXLFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFzBZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABc0WeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXOFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAFzxZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABdAWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXSFnhEVi1HVGViVFJxYzdlczBoRFI0clEAAAAAAAAF0RZ4RFYtR1RlYlRScWM3ZXMwaERSNHJRAAAAAAAABdQWeERWLUdUZWJUUnFjN2VzMGhEUjRyUQAAAAAAAAXTFnhEVi1HVGViVFJxYzdlczBoRFI0clE=

    To delete all scroll_ids:

    DELETE /_search/scroll/_all
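    With the Python client, the same cleanup is a one-liner each way. This is a sketch reusing the es client and scroll_id from the earlier loop, and it assumes the client accepts "_all" the same way the REST endpoint does:

    # Sketch: clearing scroll contexts via elasticsearch-py.
    es.clear_scroll(scroll_id=scroll_id)  # delete one specific context
    es.clear_scroll(scroll_id="_all")     # delete all contexts, like /_all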