Installing hue-3.11.0 and migrating its database to PostgreSQL

xiaoxiao, 2021-03-25

Environment

OS: CentOS 7

HDP version: 2.5

Hue version: 3.11.0

1. Install dependency packages

yum install krb5-devel cyrus-sasl-gssapi cyrus-sasl-devel libxml2-devel libxslt-devel mysql mysql-devel openldap-devel python-devel python-simplejson sqlite-devel libffi-devel gmp-devel

2. Download Hue

Official site: http://gethue.com/category/release/
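For example, the tarball can be pulled straight onto the server. The exact download URL below is an assumption; copy the real link from the release page above:

# Hypothetical URL -- take the real one from the release page
cd /opt
wget https://cdn.gethue.com/downloads/releases/3.11.0/hue-3.11.0.tgz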

Installation

1. Extract to /opt

cd /opt
tar -zxvf hue-3.11.0.tgz
cd hue-3.11.0

2. Create the hue group and the hue user

groupadd hue
useradd -g hue -G root hue

3. Build, then change the owner and group of the files

cd /opt/hue-3.11.0
make apps
chown -R hue:hue /opt/hue-3.11.0   # otherwise Hue will complain that it cannot read the database

4. Configuration

    vim /opt/hue-3.11.0/desktop/conf/hue.ini

4.1 HDFS

[[hdfs_clusters]]
  # HA support by using HttpFs
  [[[default]]]
    # Enter the filesystem uri
    fs_defaultfs=hdfs://hadoop1.b3.com:8020
    # NameNode logical name.
    ## logical_name=
    # Use WebHdfs/HttpFs as the communication mechanism.
    # Domain should be the NameNode or HttpFs host.
    # Default port is 14000 for HttpFs.
    webhdfs_url=http://hadoop1.b3.com:50070/webhdfs/v1
    # Change this if your HDFS cluster is Kerberos-secured
    ## security_enabled=false
    # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
    # have to be verified against certificate authority
    ## ssl_cert_ca_verify=True
    # Directory of the Hadoop configuration
    ## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'
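A quick smoke test that the webhdfs_url above actually answers, using the standard WebHDFS REST API (the hue user is assumed to be set up as an HDFS/proxy user):

# Expect a JSON FileStatuses listing of /
curl "http://hadoop1.b3.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"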

4.2 YARN

[[yarn_clusters]]
  [[[default]]]
    # Enter the host on which you are running the ResourceManager
    resourcemanager_host=hadoop2.b3.com
    # The port where the ResourceManager IPC listens on
    ## resourcemanager_port=8032
    # Whether to submit jobs to this cluster
    submit_to=True
    # Resource Manager logical name (required for HA)
    ## logical_name=
    # Change this if your YARN cluster is Kerberos-secured
    ## security_enabled=false
    # URL of the ResourceManager API
    resourcemanager_api_url=http://hadoop2.b3.com:8088
    # URL of the ProxyServer API
    proxy_api_url=http://hadoop2.b3.com:8088
    # URL of the HistoryServer API
    history_server_api_url=http://hadoop2.b3.com:19888
    # URL of the Spark History Server
    spark_history_server_url=http://hadoop4.b3.com:18088
    # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
    # have to be verified against certificate authority
    ## ssl_cert_ca_verify=True
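The ResourceManager API URL can be smoke-tested the same way via YARN's standard REST endpoint:

# Expect a JSON clusterInfo document with the RM state
curl http://hadoop2.b3.com:8088/ws/v1/cluster/info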

4.3 Hive

[beeswax]
  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hadoop2.b3.com
  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000
  # Hive configuration directory, where hive-site.xml is located
  ## hive_conf_dir=/etc/hive/conf
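If beeline is installed on this machine, a quick connectivity check against the configured HiveServer2 (anonymous login is assumed here; add -n/-p for your auth setup):

beeline -u "jdbc:hive2://hadoop2.b3.com:10000" -e "show databases;"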

4.4 HBase

[hbase]
  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
  # Use full hostname with security.
  # If using Kerberos we assume GSSAPI SASL, not PLAIN.
  hbase_clusters=(Cluster|hadoop1.b3.com:16010)
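One caveat: Hue's HBase Browser talks to the HBase Thrift 1 server, which usually listens on 9090 (16010 is normally the HBase Master web UI port), so if the browser cannot connect, point hbase_clusters at a running Thrift server. A sketch:

# Start the HBase Thrift 1 server if it is not already running (default port 9090)
hbase thrift start &
# Then confirm the port configured in hbase_clusters is reachable
nc -z hadoop1.b3.com 16010 && echo "port reachable"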

The other components can be set up by following the comments in hue.ini.

5. Start

/opt/hue-3.11.0/build/env/bin/supervisor -d   # -d runs it in the background
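By default Hue serves its web UI on port 8888, so a quick check that it came up:

# Expect an HTTP response from the Hue web UI
curl -I http://localhost:8888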

Replacing the bundled SQLite3 with PostgreSQL

1. First, stop the Hue process

2. In PostgreSQL, create a user hue with password hue and a database hue, and grant the user privileges on it (Hue needs full read/write access to its database, not just read)
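A minimal sketch of this step with psql, run as the postgres superuser (also make sure pg_hba.conf lets the Hue host connect):

sudo -u postgres psql <<'EOF'
CREATE ROLE hue LOGIN PASSWORD 'hue';
CREATE DATABASE hue OWNER hue;
GRANT ALL PRIVILEGES ON DATABASE hue TO hue;
EOF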

3. Back up the SQLite3 data (fresh installs can skip this)

    /opt/hue-3.11.0/build/env/bin/hue dumpdata > /tmp/hue_db_dump.json

4. Install psycopg2

    yum install python-psycopg2
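A quick way to confirm the driver is importable from Hue's bundled virtualenv (if this fails, the virtualenv may not see system site-packages, in which case pip-install psycopg2 into build/env instead):

/opt/hue-3.11.0/build/env/bin/python -c "import psycopg2; print(psycopg2.__version__)"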

5. Edit hue.ini

[[database]]
  # Database engine is typically one of:
  # postgresql_psycopg2, mysql, sqlite3 or oracle.
  #
  # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name
  # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
  # Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=<host>:<port>/<service_name>".
  # Note for MariaDB use the 'mysql' engine.
  engine=postgresql_psycopg2
  host=hadoop1.b3.com
  port=5432
  user=hue
  password=hue
  name=hue   # this is the database name and it must be set; I was stuck here for a long time
  # Execute this script to produce the database password. This will be used when 'password' is not set.
  ## password_script=/path/script
  ## name=desktop/desktop.db
  ## options={}
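Before running the migration, it is worth confirming that the credentials in hue.ini actually work from this host (PGPASSWORD is just one way to pass the password non-interactively):

PGPASSWORD=hue psql -h hadoop1.b3.com -p 5432 -U hue -d hue -c '\conninfo'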

6. Migrate the database

Be sure to run these as the hue user, otherwise the migration will fail.

sudo -u hue /opt/hue-3.11.0/build/env/bin/hue syncdb --noinput
sudo -u hue /opt/hue-3.11.0/build/env/bin/hue migrate

The output is shown below:

[root@hadoop4 conf]# sudo -u hue /opt/hue-3.11.0/build/env/bin/hue syncdb --noinput
Syncing...
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_openid_auth_nonce
Creating table django_openid_auth_association
Creating table django_openid_auth_useropenid
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Creating table south_migrationhistory
Creating table axes_accessattempt
Creating table axes_accesslog
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
Synced:
 > django.contrib.auth
 > django_openid_auth
 > django.contrib.contenttypes
 > django.contrib.sessions
 > django.contrib.sites
 > django.contrib.staticfiles
 > django.contrib.admin
 > south
 > axes
 > about
 > filebrowser
 > help
 > impala
 > jobbrowser
 > metastore
 > proxy
 > rdbms
 > zookeeper
 > indexer
Not synced (use migrations):
 - django_extensions
 - desktop
 - beeswax
 - hbase
 - jobsub
 - oozie
 - pig
 - search
 - security
 - spark
 - sqoop
 - useradmin
 - notebook
(use ./manage.py migrate to migrate these)
[root@hadoop4 conf]# sudo -u hue /opt/hue-3.11.0/build/env/bin/hue migrate
Running migrations for django_extensions:
 - Migrating forwards to 0001_empty.
 > django_extensions:0001_empty
 - Loading initial data for django_extensions.
Installed 0 object(s) from 0 fixture(s)
Running migrations for desktop:
 - Migrating forwards to 0024_auto__add_field_document2_is_managed.
 > pig:0001_initial
 > oozie:0001_initial
 > oozie:0002_auto__add_hive
 > oozie:0003_auto__add_sqoop
 > oozie:0004_auto__add_ssh
 > oozie:0005_auto__add_shell
 > oozie:0006_auto__chg_field_java_files__chg_field_java_archives__chg_field_sqoop_f
 > oozie:0007_auto__chg_field_sqoop_script_path
 > oozie:0008_auto__add_distcp
 > oozie:0009_auto__add_decision
 > oozie:0010_auto__add_fs
 > oozie:0011_auto__add_email
 > oozie:0012_auto__add_subworkflow__chg_field_email_subject__chg_field_email_body
 > oozie:0013_auto__add_generic
 > oozie:0014_auto__add_decisionend
 > oozie:0015_auto__add_field_dataset_advanced_start_instance__add_field_dataset_ins
 > oozie:0016_auto__add_field_coordinator_job_properties
 > oozie:0017_auto__add_bundledcoordinator__add_bundle
 > oozie:0018_auto__add_field_workflow_managed
 > oozie:0019_auto__add_field_java_capture_output
 > oozie:0020_chg_large_varchars_to_textfields
 > oozie:0021_auto__chg_field_java_args__add_field_job_is_trashed
 > oozie:0022_auto__chg_field_mapreduce_node_ptr__chg_field_start_node_ptr
 > oozie:0022_change_examples_path_format
 > oozie:0023_auto__add_field_node_data__add_field_job_data
 > oozie:0024_auto__chg_field_subworkflow_sub_workflow
 > oozie:0025_change_examples_path_format
 > desktop:0001_initial
 > desktop:0002_add_groups_and_homedirs
 > desktop:0003_group_permissions
 > desktop:0004_grouprelations
 > desktop:0005_settings
 > desktop:0006_settings_add_tour
 > beeswax:0001_initial
 > beeswax:0002_auto__add_field_queryhistory_notify
 > beeswax:0003_auto__add_field_queryhistory_server_name__add_field_queryhistory_serve
 > beeswax:0004_auto__add_session__add_field_queryhistory_server_type__add_field_query
 > beeswax:0005_auto__add_field_queryhistory_statement_number
 > beeswax:0006_auto__add_field_session_application
 > beeswax:0007_auto__add_field_savedquery_is_trashed
 > beeswax:0008_auto__add_field_queryhistory_query_type
 > desktop:0007_auto__add_documentpermission__add_documenttag__add_document
 > desktop:0008_documentpermission_m2m_tables
 > desktop:0009_auto__chg_field_document_name
 > desktop:0010_auto__add_document2__chg_field_userpreferences_key__chg_field_userpref
 > desktop:0011_auto__chg_field_document2_uuid
 > desktop:0012_auto__chg_field_documentpermission_perms
 > desktop:0013_auto__add_unique_documenttag_owner_tag
 > desktop:0014_auto__add_unique_document_content_type_object_id
 > desktop:0015_auto__add_unique_documentpermission_doc_perms
 > desktop:0016_auto__add_unique_document2_uuid_version_is_history
 > desktop:0017_auto__add_document2permission__add_unique_document2permission_doc_perm
 > desktop:0018_auto__add_field_document2_parent_directory
 > desktop:0019_auto
 > desktop:0020_auto__del_field_document2permission_all
 > desktop:0021_auto__add_defaultconfiguration__add_unique_defaultconfiguration_app_is
 > desktop:0022_auto__del_field_defaultconfiguration_group__del_unique_defaultconfigur
 > desktop:0023_auto__del_unique_defaultconfiguration_app_is_default_user__add_field_d
 > desktop:0024_auto__add_field_document2_is_managed
 - Loading initial data for desktop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for beeswax:
 - Migrating forwards to 0014_auto__add_field_queryhistory_is_cleared.
 > beeswax:0009_auto__add_field_savedquery_is_redacted__add_field_queryhistory_is_reda
 > beeswax:0009_auto__chg_field_queryhistory_server_port
 > beeswax:0010_merge_database_state
 > beeswax:0011_auto__chg_field_savedquery_name
 > beeswax:0012_auto__add_field_queryhistory_extra
 > beeswax:0013_auto__add_field_session_properties
 > beeswax:0014_auto__add_field_queryhistory_is_cleared
 - Loading initial data for beeswax.
Installed 0 object(s) from 0 fixture(s)
Running migrations for hbase:
 - Migrating forwards to 0001_initial.
 > hbase:0001_initial
 - Loading initial data for hbase.
Installed 0 object(s) from 0 fixture(s)
Running migrations for jobsub:
 - Migrating forwards to 0006_chg_varchars_to_textfields.
 > jobsub:0001_initial
 > jobsub:0002_auto__add_ooziestreamingaction__add_oozieaction__add_oozieworkflow__ad
 > jobsub:0003_convertCharFieldtoTextField
 > jobsub:0004_hue1_to_hue2
 > jobsub:0005_unify_with_oozie
 > jobsub:0006_chg_varchars_to_textfields
 - Loading initial data for jobsub.
Installed 0 object(s) from 0 fixture(s)
Running migrations for oozie:
 - Migrating forwards to 0027_auto__chg_field_node_name__chg_field_job_name.
 > oozie:0026_set_default_data_values
 > oozie:0027_auto__chg_field_node_name__chg_field_job_name
 - Loading initial data for oozie.
Installed 0 object(s) from 0 fixture(s)
Running migrations for pig:
 - Nothing to migrate.
 - Loading initial data for pig.
Installed 0 object(s) from 0 fixture(s)
Running migrations for search:
 - Migrating forwards to 0003_auto__add_field_collection_owner.
 > search:0001_initial
 > search:0002_auto__del_core__add_collection
 > search:0003_auto__add_field_collection_owner
 - Loading initial data for search.
Installed 0 object(s) from 0 fixture(s)
? You have no migrations for the 'security' app. You might want some.
Running migrations for spark:
 - Migrating forwards to 0001_initial.
 > spark:0001_initial
 - Loading initial data for spark.
Installed 0 object(s) from 0 fixture(s)
Running migrations for sqoop:
 - Migrating forwards to 0001_initial.
 > sqoop:0001_initial
 - Loading initial data for sqoop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for useradmin:
 - Migrating forwards to 0006_auto__add_index_userprofile_last_activity.
 > useradmin:0001_permissions_and_profiles
 > useradmin:0002_add_ldap_support
 > useradmin:0003_remove_metastore_readonly_huepermission
 > useradmin:0004_add_field_UserProfile_first_login
 > useradmin:0005_auto__add_field_userprofile_last_activity
 > useradmin:0006_auto__add_index_userprofile_last_activity
 - Loading initial data for useradmin.
Installed 0 object(s) from 0 fixture(s)
Running migrations for notebook:
 - Migrating forwards to 0001_initial.
 > notebook:0001_initial
 - Loading initial data for notebook.
Installed 0 object(s) from 0 fixture(s)
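If you backed up existing SQLite data in step 3, it still has to be loaded into PostgreSQL. A hedged sketch of the commonly documented procedure: syncdb has already created django_content_type rows that collide with the dump, so that table is truncated before loading:

# Remove the content-type rows created by syncdb, then restore the dump
sudo -u postgres psql hue -c "TRUNCATE django_content_type CASCADE;"
sudo -u hue /opt/hue-3.11.0/build/env/bin/hue loaddata /tmp/hue_db_dump.json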

Errors

1. When the notebook/PySpark app is enabled in Hue, a CSRF error appears

Fix: via Ambari, set Livy's CSRF protection to false
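In livy.conf terms (which is what Ambari manages), the property is believed to be the one below; treat the exact key as an assumption and verify it against your Livy version:

# livy.conf: disable CSRF protection so Hue's notebook can call Livy
livy.server.csrf_protection.enabled = false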

