12.6. Dictionaries

Dictionaries are used to eliminate words that should not be considered in a search (stop words), and to normalize words so that different derived forms of the same word will match. A successfully normalized word is called a lexeme. Aside from improving search quality, normalization and removal of stop words reduce the size of the tsvector representation of a document, thereby improving performance. Normalization does not always have linguistic meaning and usually depends on application semantics.

Some examples of normalization:

  • Linguistic - Ispell dictionaries try to reduce input words to a normalized form; stemmer dictionaries remove word endings

  • URL locations can be canonicalized to make equivalent URLs match:

    • http://www.pgsql.ru/db/mw/index.html

    • http://www.pgsql.ru/db/mw/

    • http://www.pgsql.ru/db/../db/mw/index.html

  • Color names can be replaced by their hexadecimal values, e.g., red, green, blue, magenta -> FF0000, 00FF00, 0000FF, FF00FF

  • If indexing numbers, we can remove some fractional digits to reduce the range of possible numbers, so for example 3.14159265359, 3.1415926, 3.14 will be the same after normalization if only two digits are kept after the decimal point.

A dictionary is a program that accepts a token as input and returns:

  • an array of lexemes if the input token is known to the dictionary (notice that one token can produce more than one lexeme)

  • a single lexeme with the TSL_FILTER flag set, to replace the original token with a new token to be passed to subsequent dictionaries (a dictionary that does this is called a filtering dictionary)

  • an empty array if the dictionary knows the token, but it is a stop word

  • NULL if the dictionary does not recognize the input token
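
These outcomes can be observed directly with the ts_lexize function (used in several examples below). For instance, with the built-in english_stem Snowball dictionary, an ordinary word yields an array containing its lexeme, while a stop word yields an empty array:

SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}

SELECT ts_lexize('english_stem', 'a');
 ts_lexize
-----------
 {}

A dictionary that does not recognize a token returns NULL instead; the simple dictionary example in Section 12.6.2 shows this behavior when its Accept parameter is set to false.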

PostgreSQL provides predefined dictionaries for many languages. There are also several predefined templates that can be used to create new dictionaries with custom parameters. Each predefined dictionary template is described below. If no existing template is suitable, it is possible to create new ones; see the contrib/ area of the PostgreSQL distribution for examples.
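
The dictionaries and templates available in a particular database can be listed from psql: \dFd lists text search dictionaries and \dFt lists text search templates (the exact set shown depends on the installation):

mydb=# \dFd
mydb=# \dFt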

A text search configuration binds a parser together with a set of dictionaries to process the parser's output tokens. For each token type that the parser can return, a separate list of dictionaries is specified by the configuration. When a token of that type is found by the parser, each dictionary in the list is consulted in turn, until some dictionary recognizes it as a known word. If it is identified as a stop word, or if no dictionary recognizes the token, it will be discarded and not indexed or searched for. Normally, the first dictionary that returns a non-NULL output determines the result, and any remaining dictionaries are not consulted; but a filtering dictionary can replace the given word with a modified word, which is then passed to subsequent dictionaries.
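
In psql, the mapping from token types to dictionary lists for an existing configuration can be displayed with \dF+, for example:

mydb=# \dF+ english

which shows, for each token type recognized by the parser, the list of dictionaries that will be consulted for it.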

The general rule for configuring a list of dictionaries is to place the narrowest, most specific dictionary first, then the more general dictionaries, finishing with a very general dictionary, like a Snowball stemmer or simple, which recognizes everything. For example, for an astronomy-specific search (astro_en configuration) one could bind token type asciiword (ASCII word) to a synonym dictionary of astronomical terms, a general English dictionary and a Snowball English stemmer:

ALTER TEXT SEARCH CONFIGURATION astro_en
    ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;

A filtering dictionary can be placed anywhere in the list, except at the end where it'd be useless. Filtering dictionaries are useful to partially normalize words to simplify the task of later dictionaries. For example, a filtering dictionary could be used to remove accents from accented letters, as is done by the contrib/unaccent extension module.
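
As a sketch of such a setup (assuming the contrib/unaccent module is installed, which provides a filtering dictionary named unaccent), the filtering dictionary is simply listed before the stemmer in the mapping:

CREATE TEXT SEARCH CONFIGURATION fr ( COPY = french );

ALTER TEXT SEARCH CONFIGURATION fr
    ALTER MAPPING FOR hword, hword_part, word
    WITH unaccent, french_stem;

With this configuration, an accented token such as Hôtel is first stripped of its accent by unaccent, and the result is then stemmed by french_stem.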

12.6.1. Stop Words

Stop words are words that are very common, appear in almost every document, and have no discrimination value. Therefore, they can be ignored in the context of full text searching. For example, every English text contains words like a and the, so it is useless to store them in an index. However, stop words do affect the positions in tsvector, which in turn affect ranking:

SELECT to_tsvector('english','in the list of stop words');
        to_tsvector
----------------------------
 'list':3 'stop':5 'word':6

The missing positions 1, 2, and 4 are due to stop words. Ranks calculated for documents with and without stop words are quite different:

SELECT ts_rank_cd (to_tsvector('english','in the list of stop words'), to_tsquery('list & stop'));
 ts_rank_cd
------------
       0.05

SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list & stop'));
 ts_rank_cd
------------
        0.1

It is up to the specific dictionary how it treats stop words. For example, ispell dictionaries first normalize words and then look at the list of stop words, while Snowball stemmers first check the list of stop words. The reason for the different behavior is an attempt to decrease noise.

12.6.2. Simple Dictionary

The simple dictionary template operates by converting the input token to lower case and checking it against a file of stop words. If it is found in the file then an empty array is returned, causing the token to be discarded. If not, the lower-cased form of the word is returned as the normalized lexeme. Alternatively, the dictionary can be configured to report non-stop-words as unrecognized, allowing them to be passed on to the next dictionary in the list.

Here is an example of a dictionary definition using the simple template:

CREATE TEXT SEARCH DICTIONARY public.simple_dict (
    TEMPLATE = pg_catalog.simple,
    STOPWORDS = english
);

Here, english is the base name of a file of stop words. The file's full name will be $SHAREDIR/tsearch_data/english.stop, where $SHAREDIR means the PostgreSQL installation's shared-data directory, often /usr/local/share/postgresql (use pg_config --sharedir to determine it if you're not sure). The file format is simply a list of words, one per line. Blank lines and trailing spaces are ignored, and upper case is folded to lower case, but no other processing is done on the file contents.
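
For example (a shell sketch; the path shown is only illustrative and will differ between installations):

$ pg_config --sharedir
/usr/local/share/postgresql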

Now we can test our dictionary:

SELECT ts_lexize('public.simple_dict','YeS');
 ts_lexize
-----------
 {yes}

SELECT ts_lexize('public.simple_dict','The');
 ts_lexize
-----------
 {}

We can also choose to return NULL, instead of the lower-cased word, if it is not found in the stop words file. This behavior is selected by setting the dictionary's Accept parameter to false. Continuing the example:

ALTER TEXT SEARCH DICTIONARY public.simple_dict ( Accept = false );

SELECT ts_lexize('public.simple_dict','YeS');
 ts_lexize
-----------


SELECT ts_lexize('public.simple_dict','The');
 ts_lexize
-----------
 {}

With the default setting of Accept = true, it is only useful to place a simple dictionary at the end of a list of dictionaries, since it will never pass on any token to a following dictionary. Conversely, Accept = false is only useful when there is at least one following dictionary.

Caution

Most types of dictionaries rely on configuration files, such as files of stop words. These files must be stored in UTF-8 encoding. They will be translated to the actual database encoding, if that is different, when they are read into the server.

Caution

Normally, a database session will read a dictionary configuration file only once, when it is first used within the session. If you modify a configuration file and want to force existing sessions to pick up the new contents, issue an ALTER TEXT SEARCH DICTIONARY command on the dictionary. This can be a "dummy" update that doesn't actually change any parameter values.
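
For example, re-specifying an existing parameter value is enough to act as such a dummy update; using the simple_dict dictionary defined above:

ALTER TEXT SEARCH DICTIONARY public.simple_dict ( StopWords = english );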

12.6.3. Synonym Dictionary

This dictionary template is used to create dictionaries that replace a word with a synonym. Phrases are not supported (use the thesaurus template (Section 12.6.4) for that). A synonym dictionary can be used to overcome linguistic problems, for example, to prevent an English stemmer dictionary from reducing the word 'Paris' to 'pari'. It is enough to have a Paris paris line in the synonym dictionary and put it before the english_stem dictionary. For example:

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
-----------+-----------------+-------+----------------+--------------+---------
 asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}

CREATE TEXT SEARCH DICTIONARY my_synonym (
    TEMPLATE = synonym,
    SYNONYMS = my_synonyms
);

ALTER TEXT SEARCH CONFIGURATION english
    ALTER MAPPING FOR asciiword
    WITH my_synonym, english_stem;

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
-----------+-----------------+-------+---------------------------+------------+---------
 asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}

The only parameter required by the synonym template is SYNONYMS, which is the base name of its configuration file — my_synonyms in the above example. The file's full name will be $SHAREDIR/tsearch_data/my_synonyms.syn (where $SHAREDIR means the PostgreSQL installation's shared-data directory). The file format is just one line per word to be substituted, with the word followed by its synonym, separated by white space. Blank lines and trailing spaces are ignored.

The synonym template also has an optional parameter CaseSensitive, which defaults to false. When CaseSensitive is false, words in the synonym file are folded to lower case, as are input tokens. When it is true, words and tokens are not folded to lower case, but are compared as-is.
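
For instance, the my_synonym dictionary created in the example above could be switched to case-sensitive matching (a sketch; the entries in the synonym file would then have to be written in the exact case to be matched):

ALTER TEXT SEARCH DICTIONARY my_synonym ( CaseSensitive = true );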

An asterisk (*) can be placed at the end of a synonym in the configuration file. This indicates that the synonym is a prefix. The asterisk is ignored when the entry is used in to_tsvector(), but when it is used in to_tsquery(), the result will be a query item with the prefix match marker (see Section 12.3.2). For example, suppose we have these entries in $SHAREDIR/tsearch_data/synonym_sample.syn:

postgres        pgsql
postgresql      pgsql
postgre pgsql
gogle   googl
indices index*

Then we will get these results:

mydb=# CREATE TEXT SEARCH DICTIONARY syn (template=synonym, synonyms='synonym_sample');
mydb=# SELECT ts_lexize('syn','indices');
 ts_lexize
-----------
 {index}
(1 row)

mydb=# CREATE TEXT SEARCH CONFIGURATION tst (copy=simple);
mydb=# ALTER TEXT SEARCH CONFIGURATION tst ALTER MAPPING FOR asciiword WITH syn;
mydb=# SELECT to_tsvector('tst','indices');
 to_tsvector
-------------
 'index':1
(1 row)

mydb=# SELECT to_tsquery('tst','indices');
 to_tsquery
------------
 'index':*
(1 row)

mydb=# SELECT 'indexes are very useful'::tsvector;
            tsvector             
---------------------------------
 'are' 'indexes' 'useful' 'very'
(1 row)

mydb=# SELECT 'indexes are very useful'::tsvector @@ to_tsquery('tst','indices');
 ?column?
----------
 t
(1 row)

12.6.4. Thesaurus Dictionary

A thesaurus dictionary (sometimes abbreviated as TZ) is a collection of words that includes information about the relationships of words and phrases, i.e., broader terms (BT), narrower terms (NT), preferred terms, non-preferred terms, related terms, etc.

Basically a thesaurus dictionary replaces all non-preferred terms by one preferred term and, optionally, preserves the original terms for indexing as well. PostgreSQL's current implementation of the thesaurus dictionary is an extension of the synonym dictionary with added phrase support. A thesaurus dictionary requires a configuration file of the following format:

# this is a comment
sample word(s) : indexed word(s)
more sample word(s) : more indexed word(s)
...

where the colon (:) symbol acts as a delimiter between a phrase and its replacement.

A thesaurus dictionary uses a subdictionary (which is specified in the dictionary's configuration) to normalize the input text before checking for phrase matches. It is only possible to select one subdictionary. An error is reported if the subdictionary fails to recognize a word. In that case, you should remove the use of the word or teach the subdictionary about it. You can place an asterisk (*) at the beginning of an indexed word to skip applying the subdictionary to it, but all sample words must be known to the subdictionary.

The thesaurus dictionary chooses the longest match if there are multiple phrases matching the input, and ties are broken by using the last definition.
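
As a purely hypothetical illustration, suppose the configuration file contained these two entries:

one two : out12
one two three : out123

An input containing the phrase one two three would be replaced by out123, the longer match, rather than stopping at the shorter one two entry; and if two entries matched the same, equally long phrase, the one defined last would win.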

Specific stop words recognized by the subdictionary cannot be specified; instead use ? to mark the location where any stop word can appear. For example, assuming that a and the are stop words according to the subdictionary:

? one ? two : swsw

matches a one the two and the one a two; both would be replaced by swsw.

Since a thesaurus dictionary has the capability to recognize phrases, it must remember its state and interact with the parser. A thesaurus dictionary uses its token type assignments in the configuration to check whether it should handle the next word or stop accumulating. The thesaurus dictionary must be configured carefully. For example, if the thesaurus dictionary is assigned to handle only the asciiword token, then a thesaurus dictionary definition like one 7 will not work, since token type uint is not assigned to the thesaurus dictionary.

Caution

Thesauruses are used during indexing, so any change in the thesaurus dictionary's parameters requires reindexing. For most other dictionary types, small changes such as adding or removing stop words do not force reindexing.

12.6.4.1. Thesaurus Configuration

To define a new thesaurus dictionary, use the thesaurus template. For example:

CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
    TEMPLATE = thesaurus,
    DictFile = mythesaurus,
    Dictionary = pg_catalog.english_stem
);

Here:

  • thesaurus_simple is the new dictionary's name

  • mythesaurus is the base name of the thesaurus configuration file. (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths, where $SHAREDIR means the installation shared-data directory.)

  • pg_catalog.english_stem is the subdictionary (here, a Snowball English stemmer) to use for thesaurus normalization. Notice that the subdictionary will have its own configuration (for example, stop words), which is not shown here.

Now it is possible to bind the thesaurus dictionary thesaurus_simple to the desired token types in a configuration, for example:

ALTER TEXT SEARCH CONFIGURATION russian
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
    WITH thesaurus_simple;

12.6.4.2. Thesaurus Example

Consider a simple astronomical thesaurus thesaurus_astro, which contains some astronomical word combinations:

supernovae stars : sn
crab nebulae : crab

Below we create a dictionary and bind some token types to an astronomical thesaurus and English stemmer:

CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
    TEMPLATE = thesaurus,
    DictFile = thesaurus_astro,
    Dictionary = english_stem
);

ALTER TEXT SEARCH CONFIGURATION russian
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
    WITH thesaurus_astro, english_stem;

Now we can see how it works. ts_lexize is not very useful for testing a thesaurus, because it treats its input as a single token. Instead we can use plainto_tsquery and to_tsvector, which break their input strings into multiple tokens:

SELECT plainto_tsquery('supernova star');
 plainto_tsquery
-----------------
 'sn'

SELECT to_tsvector('supernova star');
 to_tsvector
-------------
 'sn':1

In principle, you can use to_tsquery if you quote the argument:

SELECT to_tsquery('''supernova star''');
 to_tsquery
------------
 'sn'

Notice that supernova star matches supernovae stars in thesaurus_astro because we specified the english_stem stemmer in the thesaurus definition. The stemmer removed the e and s.

To index the original phrase as well as the substitute, just include it in the right-hand part of the definition:

supernovae stars : sn supernovae stars

SELECT plainto_tsquery('supernova star');
       plainto_tsquery
-----------------------------
 'sn' & 'supernova' & 'star'

12.6.5. Ispell Dictionary

The Ispell dictionary template supports morphological dictionaries, which can normalize many different linguistic forms of a word into the same lexeme. For example, an English Ispell dictionary can match all declensions and conjugations of the search term bank, e.g., banking, banked, banks, banks', and bank's.

The standard PostgreSQL distribution does not include any Ispell configuration files. Dictionaries for a large number of languages are available from Ispell. Also, some more modern dictionary file formats are supported: MySpell (OpenOffice.org < 2.0.1) and Hunspell (OpenOffice.org >= 2.0.2). A large list of dictionaries is available on the OpenOffice Wiki.

To create an Ispell dictionary, use the built-in ispell template and specify several parameters:

CREATE TEXT SEARCH DICTIONARY english_ispell (
    TEMPLATE = ispell,
    DictFile = english,
    AffFile = english,
    StopWords = english
);

Here, DictFile, AffFile, and StopWords specify the base names of the dictionary, affixes, and stop-words files. The stop-words file has the same format explained above for the simple dictionary type. The format of the other files is not specified here but is available from the above-mentioned web sites.

Ispell dictionaries usually recognize a limited set of words, so they should be followed by another broader dictionary; for example, a Snowball dictionary, which recognizes everything.

Ispell dictionaries support splitting compound words, a useful feature. Notice that the affix file should specify a special flag using the compoundwords controlled statement that marks dictionary words that can participate in compound formation:

compoundwords  controlled z

Here are some examples for the Norwegian language:

SELECT ts_lexize('norwegian_ispell', 'overbuljongterningpakkmesterassistent');
   {over,buljong,terning,pakk,mester,assistent}
SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk');
   {sjokoladefabrikk,sjokolade,fabrikk}

Note: MySpell does not support compound words. Hunspell has sophisticated support for compound words. At present, PostgreSQL implements only the basic compound word operations of Hunspell.

12.6.6. Snowball Dictionary

The Snowball dictionary template is based on a project by Martin Porter, inventor of the popular Porter stemming algorithm for the English language. Snowball now provides stemming algorithms for many languages (see the Snowball site for more information). Each algorithm understands how to reduce common variant forms of words to a base, or stem, spelling within its language. A Snowball dictionary requires a language parameter to identify which stemmer to use, and optionally can specify a stopword file name that gives a list of words to eliminate. (PostgreSQL's standard stopword lists are also provided by the Snowball project.) For example, there is a built-in definition equivalent to

CREATE TEXT SEARCH DICTIONARY english_stem (
    TEMPLATE = snowball,
    Language = english,
    StopWords = english
);

The stopword file format is the same as already explained.

A Snowball dictionary recognizes everything, whether or not it is able to simplify the word, so it should be placed at the end of the dictionary list. It is useless to have it before any other dictionary because a token will never pass through it to the next dictionary.
