A Brief Introduction to Installing Hive
Hive is a data warehouse platform built on top of Hadoop.
Hive provides a SQL-like query language, and Hive data is stored in HDFS. In the typical case, a query submitted by the user is translated by Hive into a MapReduce job and handed to Hadoop for execution.
We will start with installing Hive and work through its features step by step.
Installing Hive
Prerequisites
- Java 6
- Hadoop
For which versions to choose, consult the official Hive documentation. Installing Hive requires no special Hadoop-related configuration; it is enough to make sure the HADOOP_HOME environment variable is set correctly.
Installation
We download the stable 0.11.0 release. Download address:
http://mirrors.hust.edu.cn/apache/hive/stable/
1) Extract the archive to the target directory:
tar xzf hive-0.11.0.tar.gz
2) Set the environment variables:
export HIVE_INSTALL=/opt/hive-0.11.0
export PATH=$PATH:$HIVE_INSTALL/bin
3) Enter the shell with the following command:
hive
The Hive Interactive Environment (the Shell)
The shell is the main tool we use to interact with Hive.
Hive's query language is called HiveQL. HiveQL's design was heavily influenced by MySQL, so if you are familiar with MySQL you will find HiveQL just as convenient to use.
After entering the shell, run the following command to check that Hive is working:
SHOW TABLES;
The output is:
OK
Time taken: 8.207 seconds
If the output shows an error, Hadoop may not be running, or the HADOOP_HOME variable may not be set correctly.
Like SQL, HiveQL is generally case-insensitive (string comparisons are the exception).
While entering a command, press Tab and Hive will suggest all possible completions (command auto-completion).
The first run of this command may take several seconds or longer, because Hive creates the metastore database (stored in the metastore_db directory, which is created under the directory from which you ran hive, so change into a suitable directory before running Hive for the first time).
We can also run a hive script directly from the command line, for example:
hive -f /home/user/hive.q
Here, -f is followed by the script file name (including its path).
In both interactive and non-interactive mode, hive normally prints auxiliary information, such as how long each command took. If you do not want these messages, add the -S option when starting hive, for example:
hive -S
Note: the S is uppercase.
A Simple Example
We use the following test data; the record structure is (class number, student number, score).
C01,N0101,82
C01,N0102,59
C01,N0103,65
C02,N0201,81
C02,N0202,82
C02,N0203,79
C03,N0301,56
C03,N0302,92
C03,N0306,72
Run the following command:
create table student(classNo string, stuNo string, score int) row format delimited fields terminated by ',';
The table definition part is similar to SQL. The remaining clauses state that fields are separated by commas and that each line is one record.
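As an illustration of what "row format delimited fields terminated by ','" means, here is a minimal Python sketch that interprets one line of student.txt the way the table definition does: split on commas, one record per line, with the third column read as an int. The function name parse_student_line is our own, not part of Hive.

```python
def parse_student_line(line):
    """Parse one comma-delimited line of student.txt into a
    (classNo, stuNo, score) record, mirroring the declared schema."""
    class_no, stu_no, score = line.rstrip("\n").split(",")
    return class_no, stu_no, int(score)  # score column was declared as int

print(parse_student_line("C01,N0101,82"))  # ('C01', 'N0101', 82)
```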
load data local inpath '/home/user/input/student.txt' overwrite into table student;
The output is as follows:
Copying data from file:/home/user/input/student.txt
Copying file: file:/home/user/input/student.txt
Loading data to table default.student
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /user/hive/warehouse/student
Table default.student stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 117, raw_data_size: 0]
This command loads the contents of student.txt into the table student. The load operation simply copies student.txt into Hive's warehouse directory, which is set by the hive.metastore.warehouse.dir configuration property and defaults to /user/hive/warehouse. The overwrite option makes Hive delete all existing files under the student directory first.
Hive does not process student.txt in any way, because Hive itself does not impose a storage format on the data.
In this example, Hive stores the data in HDFS. Hive can also store data locally.
Without the overwrite option, if the file being loaded already exists in Hive, Hive renames the new file. For example, if the command above is run twice without overwrite, the second load produces a new file named "student_copy_1.txt". (This differs from the description in Hadoop: The Definitive Guide; readers should verify it carefully.)
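The renaming behaviour described above can be sketched as follows. This is only an illustration of the observed naming pattern ("student_copy_1.txt"), not Hive's actual implementation; the helper copy_target_name is hypothetical.

```python
import os

def copy_target_name(filename, existing):
    """Return the name a load without overwrite would use when
    `filename` already exists in the target directory (a sketch of
    the renaming pattern described above, not Hive's own code)."""
    if filename not in existing:
        return filename
    base, ext = os.path.splitext(filename)
    n = 1
    while f"{base}_copy_{n}{ext}" in existing:
        n += 1
    return f"{base}_copy_{n}{ext}"

print(copy_target_name("student.txt", {"student.txt"}))  # student_copy_1.txt
```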
Next, we run the following command:
select * from student;
The output is as follows:
C01 N0101 82
C01 N0102 59
C01 N0103 65
C02 N0201 81
C02 N0202 82
C02 N0203 79
C03 N0301 56
C03 N0302 92
C03 N0306 72
Run the following command:
select classNo, count(score) from student where score >= 60 group by classNo;
The output is as follows:
C01 2
C02 3
C03 2
As we can see, using HiveQL is extremely similar to using SQL. We used group by and count; behind the scenes, Hive translates these operations into MapReduce jobs, submits them to Hadoop for execution, and finally outputs the result.
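To make the correspondence concrete, the query above can be reproduced over the sample data in a few lines of Python. This is a sketch for comparison only; Hive of course runs the query as a MapReduce job over HDFS.

```python
from collections import Counter

# The sample (classNo, stuNo, score) records loaded into the table above.
students = [
    ("C01", "N0101", 82), ("C01", "N0102", 59), ("C01", "N0103", 65),
    ("C02", "N0201", 81), ("C02", "N0202", 82), ("C02", "N0203", 79),
    ("C03", "N0301", 56), ("C03", "N0302", 92), ("C03", "N0306", 72),
]

# Equivalent of:
#   select classNo, count(score) from student
#   where score >= 60 group by classNo;
passed = Counter(class_no for class_no, _, score in students if score >= 60)

for class_no in sorted(passed):
    print(class_no, passed[class_no])
# C01 2
# C02 3
# C03 2
```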
