
Flink: common problems roundup

Flink 1.11 throws: "No operators defined in streaming topology. Cannot execute."

The code is as follows:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Csv;
import org.apache.flink.table.descriptors.FileSystem;
import org.apache.flink.table.descriptors.Schema;
import org.apache.flink.types.Row;


public class TableTest3_FileOutput {
    public static void main(String[] args) throws Exception {
        // 1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 2. Create a table: connect to an external system and read its data
        // Read from a file
        String filePath = "F:\\project\\flink20210920\\src\\main\\resources\\sensor.txt";
        tableEnv.connect(new FileSystem().path(filePath))
                .withFormat(new Csv())
                .withSchema(new Schema()
                        .field("id", DataTypes.STRING())
                        .field("timestamp", DataTypes.BIGINT())
                        .field("temp", DataTypes.DOUBLE())
                )
                .createTemporaryTable("inputTable");

        Table inputTable = tableEnv.from("inputTable");
//        inputTable.printSchema();
//        tableEnv.toAppendStream(inputTable, Row.class).print();

        // 3. Query and transform
        // 3.1 Table API
        // Simple projection and filter
        Table resultTable = inputTable.select("id, temp")
                .filter("id === 'sensor_6'");

        // Aggregation
        Table aggTable = inputTable.groupBy("id")
                .select("id, id.count as count, temp.avg as avgTemp");

        // 3.2 SQL
        Table sqlResultTable = tableEnv.sqlQuery("select id, temp from inputTable where id = 'sensor_6'");
        Table sqlAggTable = tableEnv.sqlQuery("select id, count(id) as cnt, avg(temp) as avgTemp from inputTable group by id");

        // 4. Write to a file
        // Connect to an external file and register the output table
        String outputPath = "F:\\project\\flink20210920\\src\\main\\resources\\out.txt";
        tableEnv.connect(new FileSystem().path(outputPath))
                .withFormat(new Csv())
                .withSchema(new Schema()
                                .field("id", DataTypes.STRING())
//                        .field("cnt", DataTypes.BIGINT())
                                .field("temperature", DataTypes.DOUBLE())
//                        .field("avgTemp", DataTypes.DOUBLE())
                )
                .createTemporaryTable("outputTable");

//        tableEnv.toRetractStream(aggTable, Row.class).print("agg");
        resultTable.insertInto("outputTable");  // TODO this is where the error occurs -- why?
//        aggTable.insertInto("outputTable");  // the filesystem sink does not support the updates produced by an aggregation
//        resultTable.executeInsert("outputTable", true);
//        tableEnv.insertInto(resultTable, "outputTable", new StreamQueryConfig());
        env.execute("test");
    }
}
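The likely cause of the error above: starting with Flink 1.11, Table API statements are built and submitted through the TableEnvironment rather than the StreamExecutionEnvironment, so `insertInto()` only buffers the statement and `env.execute()` sees an empty DataStream topology. A minimal sketch of the fix under that assumption (the class name `ExecuteInsertSketch` is hypothetical; "inputTable" / "outputTable" are assumed to be registered exactly as in the listing above):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class ExecuteInsertSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // ... register "inputTable" and "outputTable" as in the code above ...

        Table resultTable = tableEnv.from("inputTable").select("id, temp");

        // executeInsert() builds and submits the Table API job by itself,
        // so env.execute() must NOT be called afterwards: this job registers
        // no DataStream operators, which is exactly why env.execute() threw
        // "No operators defined in streaming topology".
        resultTable.executeInsert("outputTable");
    }
}
```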

Following https://blog.csdn.net/wtmdcnm/article/details/117821106 and the description in the official documentation, the test code was modified as follows:

        // 1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        tableEnv.executeSql("CREATE TABLE MyTable(\n" +
                "`id` STRING,\n" +
                "`timestamp` BIGINT,\n" +
                "`temp` DOUBLE\n" +
                ") WITH (\n" +
                "  'connector' = 'filesystem',\n" +
                "  'path' = 'F:\\project\\flink20210920\\src\\main\\resources\\sensor.txt',\n" +
                "  'format' = 'csv'\n" +
                ")");

        Table resultTable = tableEnv.sqlQuery("select id, temp from MyTable where id = 'sensor_1'");

        tableEnv.executeSql("CREATE TABLE MyTable1(\n" +
                "`id` STRING,\n" +
                "`temp` DOUBLE\n" +
                ") WITH (\n" +
                "  'connector' = 'filesystem',\n" +
                "  'path' = 'F:\\project\\flink20210920\\src\\main\\resources\\out',\n" +
                "  'format' = 'csv'\n" +
                ")");

        resultTable.executeInsert("MyTable1");

This executes successfully. Note that env.execute() is no longer called: executeInsert() submits the job on its own, which also explains the original failure.
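If you want to observe an aggregated (updating) result instead, one option the first listing already hints at is converting the table to a retract stream and printing it, since the CSV filesystem sink only accepts append-only data. A sketch, assuming `tableEnv`, `env`, and `aggTable` from the first listing:

```java
// A retract stream wraps each row in Tuple2<Boolean, Row>:
// true = insert/add, false = retraction of a previously emitted row.
tableEnv.toRetractStream(aggTable, Row.class).print("agg");

// Here env.execute() IS required: print() registers a DataStream sink,
// so the streaming topology is no longer empty.
env.execute("print agg");
```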

If the inserted query contains update/delete records (for example, the aggregation above), Flink instead raises:

please declare primary key for sink table when query contains update/delete record.

The fix is to declare a primary key on the sink table, e.g.:

"`id` STRING PRIMARY KEY NOT ENFORCED," +  // aggregation queries emit updates, so the sink must declare a primary key (Flink only accepts NOT ENFORCED keys)
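Put together, a sink DDL carrying a primary key might look like the sketch below. Note two assumptions: Flink requires the key to be declared NOT ENFORCED, and the connector must actually support upserts (the filesystem connector does not); 'jdbc' and all names below are merely illustrative:

```sql
-- Hypothetical upsert sink for the aggregated result.
CREATE TABLE AggSink (
  `id` STRING,
  `cnt` BIGINT,
  `avgTemp` DOUBLE,
  PRIMARY KEY (`id`) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/test',
  'table-name' = 'agg_sink'
)
```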
