A Study Summary of Data Persistence Programming
I. JDBC Programming
1. Using the JDBC Specification
In database programming, the first data persistence technology anyone uses is undoubtedly JDBC.
JDBC (Java Database Connectivity) can be considered the foundation for learning every other data persistence technology.
Java accesses databases through JDBC, and the basic operations are CRUD (Create, Read, Update, Delete).
JDBC defines how to connect to a database, execute SQL statements, and iterate over query result sets. The typical steps are as follows (a complete example sketch follows the list):
1. Register the driver: DriverManager.registerDriver(driver);
2. Open a connection: Connection conn = DriverManager.getConnection(url, "username", "password");
3. Create a statement: Statement stmt = conn.createStatement();
4. Execute the query: ResultSet rs = stmt.executeQuery(sqlString);
5. Process the results: while (rs.next()) { /* handle the current row */ }
6. Release resources: rs.close(); stmt.close(); conn.close();
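Below is a minimal sketch of these six steps in one runnable class. It assumes a MySQL driver on the classpath; the URL, the credentials, and the person table are placeholders for illustration, not values taken from the steps above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcQueryExample {
    public static void main(String[] args) throws SQLException {
        // Step 1 is usually unnecessary with modern drivers, which register themselves.
        String url = "jdbc:mysql://localhost:3306/test";
        // Steps 2-6: open a connection, run a query, walk the result set,
        // and release everything via try-with-resources.
        try (Connection conn = DriverManager.getConnection(url, "username", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select id, name from person")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}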
Summary: at the beginner stage you simply have to learn to program against the database with raw JDBC.
Advantages: JDBC makes database programming possible and standardizes how connections are opened and operations are performed.
Disadvantages: JDBC API calls and SQL statements end up mixed into Servlets and JSPs,
and every database operation requires creating and destroying the connection-related objects.
II. Advanced JDBC Usage
1. Using the DAO Pattern
After writing a lot of JDBC code you accumulate experience and notice plenty of shortcomings, so you start layering and modularizing the JDBC code.
DAO (Data Access Object) and POJO (Plain Old Java Object) are the patterns most commonly used on top of JDBC.
Before the DAO pattern, database access code and business code both lived in Servlets or JSPs,
so SQL, Java, and HTML were mixed together and development efficiency was very low.
With the DAO pattern, all JDBC API calls and SQL statements move into the DAO layer.
Once layered, Servlets and JSPs interact only with Java Beans and the DAO layer, and contain no JDBC API calls or SQL.
This clearly improves the clarity, readability, and reusability of the program, as sketched below.
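A minimal sketch of a DAO, assuming the Person POJO shown later in this summary (with its getters and setters); the PersonDao and JdbcPersonDao names are hypothetical, introduced only for illustration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical DAO interface: callers see only POJOs, never SQL.
public interface PersonDao {
    Person findById(int id) throws SQLException;
}

// JDBC-backed implementation; all JDBC API calls and SQL stay in this class.
class JdbcPersonDao implements PersonDao {
    private final DataSource dataSource;

    JdbcPersonDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Person findById(int id) throws SQLException {
        String sql = "select id, name from person where id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null;
                Person person = new Person();
                person.setId(rs.getInt("id"));
                person.setName(rs.getString("name"));
                return person;
            }
        }
    }
}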
2. Using DBCP
In plain JDBC programming, every data operation creates and then destroys the conn, stmt, and rs objects.
Constantly creating and destroying these objects costs time and I/O resources, which becomes especially noticeable under concurrent access.
A data source built on DBCP (database connection pool) technology solves this problem.
The data source is usually configured in an XML file, and it manages and optimizes connections automatically. A typical configuration looks like this:
<property name="driverClassName" value="driverClassName"></property>
<property name="url" value="jdbc url"></property>
<property name="username" value="username"></property>
<property name="password" value="password"></property>
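The same pool can also be configured in code. A minimal sketch using Apache Commons DBCP 2's BasicDataSource (an assumption; the original text does not name a specific DBCP implementation), with placeholder driver, URL, and credentials:

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp2.BasicDataSource;

public class DbcpExample {
    public static void main(String[] args) throws SQLException {
        // The same four properties as in the XML configuration above.
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.cj.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/test");
        ds.setUsername("username");
        ds.setPassword("password");

        // Borrow a connection from the pool; close() returns it to the pool
        // instead of physically destroying it.
        try (Connection conn = ds.getConnection()) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}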
Summary: the DAO pattern solves the problem of JDBC API calls and SQL being mixed into JSPs and introduces layering,
while DBCP removes the overhead of repeatedly creating and destroying connection objects.
III. Database Programming with the ORM Framework Hibernate
1. Basic Principles of ORM Frameworks
The DAO pattern is, in essence, manually decomposing POJOs into SQL statements and assembling SQL query results back into POJOs.
Even with advanced JDBC techniques and the DAO pattern, you still have to write a large amount of SQL.
ORM maps Java objects onto the database through XML configuration files or Java annotations,
so an ORM (Object-Relational Mapping) framework can generate the SQL statements automatically.
2. Database Programming with Hibernate
Hibernate is one such ORM framework and can likewise generate SQL automatically.
In the DAO pattern, a simple Person POJO looks like this (getters and setters omitted):
public class Person {
    private Integer id;
    private String name;
}
It corresponds to the database table person:
create table if not exists person (
    id int primary key auto_increment,
    name varchar(20) not null
);
With Java annotations, the Person POJO entity class is mapped onto the database and the SQL can be generated automatically.
The code is as follows (getters and setters omitted):
@Entity
@Table(name = "person")
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    @Column(name = "name")
    private String name;
}
Hibernate uses a Session and HQL statements for database operations. A query, for example, looks like this:
Session session = HibernateSessionFactory.getSessionFactory().openSession();
String queryString = "select p.id, p.name from Person p";
// query and print every record
List<Object[]> personList = session.createQuery(queryString).list();
for (Object[] row : personList) {
    for (Object obj : row)
        System.out.print(" " + obj);
    System.out.println();
}
session.close();
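Writes still go through an explicit transaction, which is the drawback noted below. A minimal sketch of saving a Person, assuming the same HibernateSessionFactory helper used in the query above (org.hibernate Session and Transaction imports assumed):

Session session = HibernateSessionFactory.getSessionFactory().openSession();
Transaction tx = session.beginTransaction();
try {
    Person person = new Person();
    person.setName("Alice");      // id is filled in by the IDENTITY strategy
    session.save(person);         // Hibernate generates the INSERT statement
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();                // transaction code still has to be written by hand
    throw e;
} finally {
    session.close();
}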
Summary: ORM frameworks solve the problem of having to write large amounts of SQL in the DAO layer,
and Hibernate's HQL solves the database portability problem.
Advantages: no more writing large amounts of SQL, and database portability is addressed.
Disadvantages: transaction handling still requires a fair amount of code.
IV. Database Programming with the JPA Specification
1. Using the JPA Specification
Because people store data in many different databases such as Oracle, DB2, MySQL, and SQL Server,
the ways of connecting to them inevitably vary, and JDBC is what standardizes the connection mechanism.
By the same logic, the proliferation of ORM frameworks inevitably raises the difficulty of development and maintenance,
so the Java platform introduced the JPA specification to standardize the various ORM frameworks behind a unified set of interfaces and methods.
Programming against JPA only requires designating one ORM framework, such as Hibernate, as the underlying implementation.
Switching to a different ORM framework is just a configuration-file change, much like switching to a different database.
JPA uses an EntityManager for database operations. A lookup, for example, looks like this:
public boolean findPersonByName(String name) {
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("persistence-unitname");
    EntityManager em = emf.createEntityManager();
    // look the entity up with a JPQL query and report whether it exists
    List<Person> persons = em.createQuery(
            "select p from Person p where p.name = :name", Person.class)
        .setParameter("name", name)
        .getResultList();
    em.close();
    return !persons.isEmpty();
}
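Writes still require an explicit EntityTransaction, which is the drawback noted below. A minimal sketch, keeping the placeholder persistence unit name from the lookup above (javax.persistence imports assumed):

EntityManagerFactory emf = Persistence.createEntityManagerFactory("persistence-unitname");
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();
    Person person = new Person();
    person.setName("Alice");
    em.persist(person);                  // the underlying ORM generates the INSERT
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) tx.rollback();    // transaction management still coded by hand
    throw e;
} finally {
    em.close();
}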
Summary: JPA requires designating one ORM framework as the underlying implementation.
JPA likewise configures POJOs with Java annotations and uses an EntityManager for database operations.
Advantages: the JPA specification standardizes the various ORM frameworks behind unified interfaces and methods.
Disadvantages: transaction management still has to be coded by hand.
V. Database Programming with Spring DAO
Spring DAO wraps JDBC and is used together with the DAO pattern.
Spring DAO uses JdbcTemplate for database operations. A lookup, for example, looks like this:
public int getPersonCount() {
    String sql = "select count(*) from person";
    return getJdbcTemplate().queryForInt(sql);
}
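A slightly richer sketch, assuming the DAO extends Spring's JdbcDaoSupport (which is what provides getJdbcTemplate()) and reuses the Person POJO from earlier; note that in newer Spring versions queryForInt has been replaced by queryForObject(sql, Integer.class):

public List<Person> findAllPersons() {
    String sql = "select id, name from person";
    // the row mapper turns each result-set row back into a Person POJO
    return getJdbcTemplate().query(sql, (rs, rowNum) -> {
        Person person = new Person();
        person.setId(rs.getInt("id"));
        person.setName(rs.getString("name"));
        return person;
    });
}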
Summary: Spring DAO wraps JDBC and hides the JDBC API; you only need the getJdbcTemplate() method.
It is much like the plain DAO pattern, except that JDBC is encapsulated and transaction management is provided.
Advantages: transactions can be managed through Spring,
and the JDBC API is hidden and encapsulated.
Disadvantages: as with the plain DAO pattern, a large amount of SQL still has to be written and used.
VI. Database Programming with Spring ORM
Spring ORM exists to fix the shortcomings of Spring DAO and round it out.
As a result, Spring ORM combines all the advantages above: it layers the code with the DAO pattern,
uses an ORM framework to eliminate most hand-written SQL,
hides and encapsulates the JDBC API behind the getHibernateTemplate() method,
uses HQL to solve database portability, and manages transactions through Spring.
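A minimal sketch, assuming the DAO extends Spring's HibernateDaoSupport (which is what provides getHibernateTemplate()) and reuses the annotated Person entity from earlier:

public Person getPerson(Integer id) {
    // HibernateTemplate handles the Session; no JDBC API or SQL appears in the DAO
    return getHibernateTemplate().get(Person.class, id);
}

public void savePerson(Person person) {
    // transaction boundaries are typically supplied by Spring configuration,
    // not coded by hand inside the DAO
    getHibernateTemplate().save(person);
}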
Summary: Spring ORM is a relatively ideal choice for data persistence programming.
Addendum: using the SSH stack (Struts2, Spring, Hibernate) for Java web programming yields a sensible layering,
clearly separating business logic, data persistence, and presentation logic.
In the presentation layer, Struts2 is the MVC framework that handles page navigation and view rendering;
structurally this means actions drive page navigation and JSPs serve as the views.
In the persistence layer, Hibernate is the ORM framework that generates SQL automatically;
structurally this means DAOs and POJOs (the domain) implement data persistence.
In the business layer, Spring uses its simple, pre-packaged JDBC support for CRUD and transaction management;
structurally this means services carry the business logic.