Are you still receiving some of the records on the Elasticsearch side, or did it stop receiving records altogether after Graylog was created?

The symptom looks like this: the engine keeps warning that it failed to flush chunks, with retry intervals that keep growing, even though the debug log shows the bulk request itself returning HTTP 200:

  [2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192111.878474491.flb', retry in 14 seconds: task_id=10, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920802.180669296.flb', retry in 1160 seconds: task_id=608, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/24 04:19:54] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

The bulk response, however, carries per-record 400 errors:

  {"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"4uMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}

The warnings are interleaved with routine debug output (keep-alive connections to 10.3.4.84:9200 being recycled, inotify events on the tailed files, "not using http_proxy for header", chunk size updates). With a forward-to-TCP pipeline the same warning appears with even longer retries, and sometimes this is the last thing I see of the chunk:

  [engine] failed to flush chunk '1-1612396545.855856569.flb', retry in 1485 seconds: task_id=143, input=forward.0 > output=tcp.0

I use 2.0.6; no matter whether I set Type _doc or Replace_Dots On, I still see masses of these warnings.
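The mapping error itself comes from Kubernetes labels: once an index has mapped kubernetes.labels.app as a plain text field, any record whose labels include app.kubernetes.io/instance tries to turn app into an object, and Elasticsearch rejects it. Below is a minimal sketch of the es output that is usually suggested for this, assuming the 10.3.4.84:9200 endpoint seen in the logs above (the Match pattern is an assumption). Replace_Dots rewrites the dots so the conflicting object is never created; note that an index that already holds the old mapping keeps rejecting records until it rolls over to a new daily index.

  [OUTPUT]
      Name            es
      Match           *
      Host            10.3.4.84
      Port            9200
      Logstash_Format On
      Logstash_Prefix logstash
      Replace_Dots    On    # app.kubernetes.io/instance -> app_kubernetes_io/instance
      Type            _doc
      Trace_Error     On    # print the Elasticsearch error response for rejected records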
Related: TLS error: unexpected EOF (fluent/fluent-bit issue #6165).

In this setup I have 5 fluentd pods, and 2 of them were OOMKilled and restarted several times.

The mapper_parsing_exception is repeated for essentially every record in the chunk ("Existing mapping for [kubernetes.labels.app] must be of type object but found [text]"), for example:

  {"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"1OMmun8BI6SaBP9liJWQ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}

The output is configured with Retry_Limit False, so these chunks are retried indefinitely.

On the Fluentd receiving side the forward input is configured as follows (the plugin type in the match block is truncated in the original post):

  <source>
    type forward
    bind ::
    port 24000
  </source>

  <match fluent_bit>
    type .
  </match>
Related: flush.go:221 org_id=fake msg="failed to flush user" (a Loki error, GitHub issue #5531).

I am trying to send logs of my apps running on an ECS Fargate cluster to Elastic Cloud and see the same rejected records there.

Environment for the Kubernetes report above: k3s 1.19.8 with the docker-ce backend, 20.10.12. While the chunks fail, the tail input keeps cleaning up watches for deleted container logs ([input:tail:tail.0] inotify_fs_remove()) and the keep-alive connections to 10.3.4.84:9200 are assigned and recycled normally.
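For the Elastic Cloud case, the es output can authenticate against the deployment directly. A hedged sketch, assuming Cloud ID plus basic credentials rather than the self-hosted endpoint used elsewhere in this thread; the placeholder values are not from the original report:

  [OUTPUT]
      Name               es
      Match              *
      Cloud_ID           <deployment cloud id>      # hypothetical placeholder
      Cloud_Auth         <user>:<password>          # hypothetical placeholder
      tls                On
      tls.verify         On
      Suppress_Type_Name On    # needed if the deployment runs Elasticsearch 8.x, which rejects _type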
The main report here is "From fluent-bit to es: [ warn] [engine] failed to flush chunk" (issue #5145). I have also set Replace_Dots On.

Expected behavior: minimally, that these messages do not tie up fluent-bit's pipeline, since retrying them will never succeed; the 400 mapping error is permanent for those records. What actually happens is that every flush of such a chunk produces another batch of mapper_parsing_exception items and another retry:

  [2022/03/25 07:08:47] [ warn] [engine] failed to flush chunk '1-1648192099.641327100.flb', retry in 60 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)

Meanwhile the debug log shows retries being created and re-used ([retry] re-using retry for task_id=7 attempts=2), new container log files being appended and dismissed by the tail scanner, and keep-alive connections being recycled. Fluentd also does not handle a large number of queued chunks well when starting up, so that can compound the problem.
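One way to keep permanently failing chunks from occupying the pipeline forever is to bound the retries instead of using Retry_Limit False. A hedged sketch; the limit value is an assumption, not something taken from the original reports, and records dropped once the limit is hit are lost, so this trades completeness for pipeline health:

  [OUTPUT]
      Name        es
      Match       *
      Host        10.3.4.84
      Port        9200
      Retry_Limit 2     # give up after two retries instead of retrying forever (False = unlimited)
      Trace_Error On    # log the per-record Elasticsearch errors behind the failed flush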
The same rejection shows up for other document IDs as well, while the tail input keeps scanning /var/log/containers/ and dismissing files it already tracks (scan_blog add(): dismissed), and new retries keep being created for the failing tasks ([retry] new retry created for task_id=15 attempts=1).

I'm using fluentd logging on k8s for application logging; we are handling 100M records (around 400 tps) and are hitting this issue.
Related: Failed to Flush Buffer - Read Timeout Reached / Connect_Write (GitHub issue #590).

Environment: es 7.6.2, fluent/fluent-bit 1.8.15. The retries are created and then keep failing:

  [2022/03/24 04:19:24] [debug] [retry] new retry created for task_id=1 attempts=1
  [2022/03/24 04:19:38] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 9 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)

When the output cannot drain, the input eventually pauses and new records can no longer be appended:

  [2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
Related: "Failed to flush chunks" (fluent/fluent-bit issue #3499), also reported with Retry_Limit False:

  [2022/03/25 07:08:51] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 37 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)

Environment reported there: fluentd 1.4.2, elasticsearch-plugin 7.1.0, elasticsearch 7.1.0 (the issue was marked waiting-on-feedback).
Related: msg="failed to flush user" err="open /data/loki/chunks ..." (Loki, on GitHub).

Bug report: when Fluent Bit 1.8.9 first restarts to apply configuration changes, we see the log spammed with errors like:

  [2021/10/30 02:47:00] [ warn] [engine] failed to flush chunk '2372-1635562009.567200761.flb',

and similarly on later versions:

  [2022/04/17 14:48:10] [ warn] [engine] failed to flush chunk '1-1650206880.316011791.flb', retry in 16 seconds: task_id=4, input=tail.0 > output=es.0 (out_id=0)

For the websocket output plugin, a retry triggers another handshake and data flush; in fluentd logging the corresponding symptom is "failed to flush the buffer". Fluentd will also wait to flush the buffered chunks for delayed events.

1.8.12 gives the same error. @dezhishen: I set "Write_Operation upsert", but then the pod errored and fluent-bit did not start normally.

A full bulk response makes the failure mode clear: the request as a whole succeeds, but it carries per-item errors:

  {"took":2579,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"G-Mmun8BI6SaBP9lh4vZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
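On the Write_Operation failure: the es output documentation states that update and upsert need a document id (Id_Key or Generate_ID), so Write_Operation upsert on its own leaves the output with an invalid configuration, which would match the pod failing to start. A hedged sketch of a combination that should at least pass that check; whether upsert is the right semantics for your data is a separate question:

  [OUTPUT]
      Name            es
      Match           *
      Host            10.3.4.84
      Port            9200
      Write_Operation upsert
      Generate_ID     On    # update/upsert require Id_Key or Generate_ID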
For background: Fluentd collects log data into a blob called a chunk. When Fluentd creates a chunk, the chunk is in the stage, where it fills with data; when the chunk is full, Fluentd moves it to the queue, where chunks are held until they are flushed, that is, written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or, as in this thread, the destination rejecting the records.

One of the reports used Logstash_Prefix node (note: the leading slash was removed from the first source tag). Retries do succeed when the failure is transient, for example with a forward output:

  [2022/05/21 02:00:33] [ warn] [engine] failed to flush chunk '1-1653098433.74179197.flb', retry in 6 seconds: task_id=1, input=tail.0 > output=forward.0 (out_id=0)
  [2022/05/21 02:00:37] [ info] [engine] flush chunk '1-1653098426.49963372.flb' succeeded at retry 1: task_id=0, input=tail.0 > output=forward.0 (out_id=0)

With that confirmed, we can close this issue.

@evheniyt thanks. Separately, I'm trying to configure Loki to use Apache Cassandra for both index and chunk storage.
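Fluent Bit has an analogous chunk mechanism, and when the destination is slow or rejecting data, filesystem buffering keeps the backlog off the heap and bounds how much is queued per output. A hedged sketch of that setup; the path and size limits are assumptions, not values from the original configs:

  [SERVICE]
      storage.path              /var/log/flb-storage/
      storage.sync              normal
      storage.backlog.mem_limit 5M

  [INPUT]
      Name         tail
      Path         /var/log/containers/*.log
      storage.type filesystem    # spill chunks to disk instead of pausing the input in memory

  [OUTPUT]
      Name                     es
      Match                    *
      Host                     10.3.4.84
      Port                     9200
      storage.total_limit_size 512M    # cap the on-disk backlog kept for this output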
Expected behavior: logs from the source folder should have been transferred to Elasticsearch (this is a Helm chart based configuration). Instead, the chunks keep cycling through retries ([retry] re-using retry for task_id=2 attempts=4) while the tail input follows pod churn:

  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69479190 file has been deleted: /var/log/containers/hello-world-dsfcz_argo_main-13bb1b2c7e9d3e70003814aa3900bb9aef645cf5e3270e3ee4db0988240b9eff.log
  [2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log, inode 35326802
Related: Fluentbit failed to send logs to elasticsearch (Failed to flush chunk); there too the bulk endpoint returns HTTP Status=200 (URI=/_bulk) while the chunks are still reported as failed.
Related: fluent-bit getting a connection timeout when trying to use the gelf OUTPUT, with the same failed-to-flush pattern:

  [2022/03/25 07:08:48] [ warn] [engine] failed to flush chunk '1-1648192113.5409018.flb', retry in 15 seconds: task_id=11, input=tail.0 > output=es.0 (out_id=0)

In the end, though I did not find the root cause of the OOM and the flush-chunk errors, I decided to reallocate normal memory to the fluentd pod.
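For the gelf/Graylog case mentioned at the top, a hedged sketch of the output; the host name and message key are assumptions, and a connection timeout here usually points at reachability of the GELF input (firewall, service, port) rather than at fluent-bit itself:

  [OUTPUT]
      Name                   gelf
      Match                  *
      Host                   graylog.example.internal    # hypothetical Graylog GELF input address
      Port                   12201
      Mode                   tcp
      Gelf_Short_Message_Key log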