
What to Study in Big Data Management and Applications: Big Data Technology and Applications


Xiamen University Database Lab: http://dblab.xmu.edu.cn

Xiamen University Database Lab blog: http://dblab.xmu.edu.cn/post/5663

1) Hadoop Installation Tutorial: Standalone/Pseudo-Distributed Configuration, Hadoop 2.6.0 (2.7.1) / Ubuntu 14.04 (16.04)

http://dblab.xmu.edu.cn/blog/install-hadoop/

# Create a hadoop user
sudo useradd -m hadoop -s /bin/bash
sudo passwd hadoop
sudo adduser hadoop sudo
# Install SSH and set up passwordless SSH login
sudo apt-get install openssh-server
ssh localhost
exit                                   # leave the ssh localhost session just opened
cd ~/.ssh/                             # if this directory does not exist, run ssh localhost first
ssh-keygen -t rsa                      # press Enter at every prompt
cat ./id_rsa.pub >> ./authorized_keys  # authorize the key
# Install the JDK (note that paths are case-sensitive)
cd /usr/lib
sudo mkdir jvm                         # the JDK files go under /usr/lib/jvm
cd ~                                   # the package jdk-8u162-linux-x64.tar.gz was uploaded here via FTP
sudo tar -zxvf ./jdk-8u162-linux-x64.tar.gz -C /usr/lib/jvm
vim ~/.bashrc                          # add the following lines:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_162
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
source ~/.bashrc
java -version                          # verify the JDK installation
# Download Hadoop, unpack it to /usr/local, and fix file ownership
sudo tar -zxf ~/hadoop-2.6.0.tar.gz -C /usr/local
cd /usr/local/
sudo mv ./hadoop-2.6.0/ ./hadoop
sudo chown -R hadoop ./hadoop
cd /usr/local/hadoop
./bin/hadoop version
# Standalone example: run grep over the bundled config files
mkdir ./input
cp ./etc/hadoop/*.xml ./input
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
# Pseudo-distributed mode requires editing core-site.xml and hdfs-site.xml (sketched below), then:
cd /usr/local/hadoop
./bin/hdfs namenode -format
./sbin/start-dfs.sh
./sbin/stop-dfs.sh                     # stop the daemons
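The pseudo-distributed settings that the commands above depend on live in /usr/local/hadoop/etc/hadoop. Below is a minimal sketch of the two files for a single-node setup (a local HDFS on port 9000, replication factor 1); the exact property values should be checked against the linked tutorial.

<!-- core-site.xml -->
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

<!-- hdfs-site.xml: only one DataNode, so replication is 1 -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>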

2) Hadoop Installation Tutorial: Pseudo-Distributed Configuration, CentOS 6.4 / Hadoop 2.6.0

http://dblab.xmu.edu.cn/blog/install-hadoop-in-centos/

3) Hadoop 3.1.3 Installation Tutorial: Standalone/Pseudo-Distributed Configuration, Hadoop 3.1.3 / Ubuntu 18.04 (16.04)

http://dblab.xmu.edu.cn/blog/2441-2/

4) Hadoop Cluster Installation and Configuration Tutorial: Hadoop 2.6.0, Ubuntu/CentOS

http://dblab.xmu.edu.cn/blog/install-hadoop-cluster/

5) Big Data Technology Principles and Applications, Chapter 2: Big Data Processing Architecture (Hadoop) Study Guide

http://dblab.xmu.edu.cn/blog/285/

6) Big Data Technology Principles and Applications, Chapter 3: Distributed File System (HDFS) Study Guide

http://dblab.xmu.edu.cn/blog/290-2/

# HDFS directory and file operations
cd /usr/local/hadoop
./bin/hdfs dfs -mkdir -p /user/hadoop
./bin/hdfs dfs -ls .
./bin/hdfs dfs -ls /user/hadoop
./bin/hdfs dfs -rm -r /input
./bin/hdfs dfs -put /home/hadoop/myLocalFile.txt input
./bin/hdfs dfs -cat input/myLocalFile.txt
./bin/hdfs dfs -get input/myLocalFile.txt /home/hadoop/Downloads   # download to the local file system
./bin/hdfs dfs -cp input/myLocalFile.txt /input
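The Chapter 3 study guide covers the HDFS Java API as well as the shell. As a companion to the commands above, here is a minimal Java sketch of the same operations through org.apache.hadoop.fs.FileSystem; it is illustrative, not the tutorial's own code. The class name HdfsDemo and the local paths are made up for the example, and the fs.defaultFS value assumes the pseudo-distributed setup from tutorial 1.

// HdfsDemo.java: mirror the HDFS shell commands above via the Java API
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // assumed from the pseudo-distributed core-site.xml sketched earlier
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);

        Path home = new Path("/user/hadoop");
        fs.mkdirs(home);                              // hdfs dfs -mkdir -p /user/hadoop

        // hdfs dfs -put /home/hadoop/myLocalFile.txt input
        fs.copyFromLocalFile(new Path("/home/hadoop/myLocalFile.txt"),
                             new Path("/user/hadoop/input/myLocalFile.txt"));

        // hdfs dfs -ls /user/hadoop
        for (FileStatus status : fs.listStatus(home)) {
            System.out.println(status.getPath());
        }

        // hdfs dfs -get input/myLocalFile.txt /home/hadoop/
        fs.copyToLocalFile(new Path("/user/hadoop/input/myLocalFile.txt"),
                           new Path("/home/hadoop/myLocalFile.txt.copy"));

        fs.close();
    }
}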

7) Compiling and Running MapReduce Programs with Eclipse: Hadoop 2.6.0, Ubuntu/CentOS
http://dblab.xmu.edu.cn/blog/hadoop-build-project-using-eclipse/

/* WordCount.java */
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // combine map output locally before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // every argument but the last is an input path; the last is the output path
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    // Mapper: split each line into tokens and emit a (word, 1) pair per token
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the Combiner): sum the counts for each word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}
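To submit the job, the Eclipse tutorial exports the class as a runnable JAR and launches it with the hadoop command. A minimal sketch, assuming the JAR was exported to ~/WordCount.jar with WordCount as its main class and that an input directory already exists under /user/hadoop on HDFS (both names are illustrative):

cd /usr/local/hadoop
./bin/hadoop jar ~/WordCount.jar input output   # input/output are HDFS paths relative to /user/hadoop
./bin/hdfs dfs -cat output/part-r-00000         # print the word counts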
