
35.9. C-Language Functions

User-defined functions can be written in C (or a language that can be made compatible with C, such as C++). Such functions are compiled into dynamically loadable objects (also called shared libraries) and are loaded by the server on demand. The dynamic loading feature is what distinguishes "C language" functions from "internal" functions — the actual coding conventions are essentially the same for both. (Hence, the standard internal function library is a rich source of coding examples for user-defined C functions.)

Two different calling conventions are currently used for C functions. The newer "version 1" calling convention is indicated by writing a PG_FUNCTION_INFO_V1() macro call for the function, as illustrated below. Lack of such a macro indicates an old-style ("version 0") function. The language name specified in CREATE FUNCTION is C in either case. Old-style functions are now deprecated because of portability problems and lack of functionality, but they are still supported for compatibility reasons.

35.9.1. Dynamic Loading

The first time a user-defined function in a particular loadable object file is called in a session, the dynamic loader loads that object file into memory so that the function can be called. The CREATE FUNCTION for a user-defined C function must therefore specify two pieces of information for the function: the name of the loadable object file, and the C name (link symbol) of the specific function to call within that object file. If the C name is not explicitly specified then it is assumed to be the same as the SQL function name.

The following algorithm is used to locate the shared object file based on the name given in the CREATE FUNCTION command:

  1. If the name is an absolute path, the given file is loaded.

  2. If the name starts with the string $libdir, that part is replaced by the PostgreSQL package library directory name, which is determined at build time.

  3. If the name does not contain a directory part, the file is searched for in the path specified by the configuration variable dynamic_library_path.

  4. Otherwise (the file was not found in the path, or it contains a non-absolute directory part), the dynamic loader will try to take the name as given, which will most likely fail. (It is unreliable to depend on the current working directory.)

If this sequence does not work, the platform-specific shared library file name extension (often .so) is appended to the given name and this sequence is tried again. If that fails as well, the load will fail.

It is recommended to locate shared libraries either relative to $libdir or through the dynamic library path. This simplifies version upgrades if the new installation is at a different location. The actual directory that $libdir stands for can be found out with the command pg_config --pkglibdir.

The user ID the PostgreSQL server runs as must be able to traverse the path to the file you intend to load. Making the file or a higher-level directory not readable and/or not executable by the postgres user is a common mistake.

In any case, the file name that is given in the CREATE FUNCTION command is recorded literally in the system catalogs, so if the file needs to be loaded again the same procedure is applied.

Note: PostgreSQL will not compile a C function automatically. The object file must be compiled before it is referenced in a CREATE FUNCTION command. See Section 35.9.6 for additional information.

To ensure that a dynamically loaded object file is not loaded into an incompatible server, PostgreSQL checks that the file contains a "magic block" with the appropriate contents. This allows the server to detect obvious incompatibilities, such as code compiled for a different major version of PostgreSQL. A magic block is required as of PostgreSQL 8.2. To include a magic block, write this in one (and only one) of the module source files, after having included the header fmgr.h:

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

The #ifdef test can be omitted if the code doesn't need to compile against pre-8.2 PostgreSQL releases.

After it is used for the first time, a dynamically loaded object file is retained in memory. Future calls in the same session to the function(s) in that file will only incur the small overhead of a symbol table lookup. If you need to force a reload of an object file, for example after recompiling it, begin a fresh session.

Optionally, a dynamically loaded file can contain initialization and finalization functions. If the file includes a function named _PG_init, that function will be called immediately after loading the file. The function receives no parameters and should return void. If the file includes a function named _PG_fini, that function will be called immediately before unloading the file. Likewise, the function receives no parameters and should return void. Note that _PG_fini will only be called during an unload of the file, not during process termination. (Presently, unloads are disabled and will never occur, but this may change in the future.)
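
For illustration, here is a minimal skeleton of these two hooks. Only the names and signatures (no parameters, void return) are prescribed; the comment bodies are placeholders, and a real module would also contain the magic block shown earlier.

#include "postgres.h"
#include "fmgr.h"

void _PG_init(void);
void _PG_fini(void);

void
_PG_init(void)
{
    /* one-time setup, performed right after the library is loaded */
}

void
_PG_fini(void)
{
    /* cleanup, performed right before the library would be unloaded */
}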

35.9.2. Base Types in C-Language Functions

To know how to write C-language functions, you need to know how PostgreSQL internally represents base data types and how they can be passed to and from functions. Internally, PostgreSQL regards a base type as a "blob of memory". The user-defined functions that you define over a type in turn define the way that PostgreSQL can operate on it. That is, PostgreSQL will only store and retrieve the data from disk and use your user-defined functions to input, process, and output the data.

Base types can have one of three internal formats:

  • pass by value, fixed-length

  • pass by reference, fixed-length

  • pass by reference, variable-length

By-value types can only be 1, 2, or 4 bytes in length (also 8 bytes, if sizeof(Datum) is 8 on your machine). You should be careful to define your types such that they will be the same size (in bytes) on all architectures. For example, the long type is dangerous because it is 4 bytes on some machines and 8 bytes on others, whereas int type is 4 bytes on most Unix machines. A reasonable implementation of the int4 type on Unix machines might be:

/* 4-byte integer, passed by value */
typedef int int4;

On the other hand, fixed-length types of any size can be passed by-reference. For example, here is a sample implementation of a PostgreSQL type:

/* 16-byte structure, passed by reference */
typedef struct
{
    double  x, y;
} Point;

Only pointers to such types can be used when passing them in and out of PostgreSQL functions. To return a value of such a type, allocate the right amount of memory with palloc, fill in the allocated memory, and return a pointer to it. (Also, if you just want to return the same value as one of your input arguments that's of the same data type, you can skip the extra palloc and just return the pointer to the input value.)

Finally, all variable-length types must also be passed by reference. All variable-length types must begin with a length field of exactly 4 bytes, and all data to be stored within that type must be located in the memory immediately following that length field. The length field contains the total length of the structure, that is, it includes the size of the length field itself.

Warning

Never modify the contents of a pass-by-reference input value. If you do so you are likely to corrupt on-disk data, since the pointer you are given might point directly into a disk buffer. The sole exception to this rule is explained in Section 35.10.

As an example, we can define the type text as follows:

typedef struct {
    int4 length;
    char data[1];
} text;

Obviously, the data field declared here is not long enough to hold all possible strings. Since it's impossible to declare a variable-size structure in C, we rely on the knowledge that the C compiler won't range-check array subscripts. We just allocate the necessary amount of space and then access the array as if it were declared the right length. (This is a common trick, which you can read about in many textbooks about C.)

When manipulating variable-length types, we must be careful to allocate the correct amount of memory and set the length field correctly. For example, if we wanted to store 40 bytes in a text structure, we might use a code fragment like this:

#include "postgres.h"
...
char buffer[40]; /* our source data */
...
text *destination = (text *) palloc(VARHDRSZ + 40);
SET_VARSIZE(destination, VARHDRSZ + 40);
memcpy(destination->data, buffer, 40);
...

VARHDRSZ is the same as sizeof(int4), but it's considered good style to use the macro VARHDRSZ to refer to the size of the overhead for a variable-length type. Also, the length field must be set using the SET_VARSIZE macro, not by simple assignment.

Table 35-1 specifies which C type corresponds to which SQL type when writing a C-language function that uses a built-in type of PostgreSQL. The "Defined In" column gives the header file that needs to be included to get the type definition. (The actual definition might be in a different file that is included by the listed file. It is recommended that users stick to the defined interface.) Note that you should always include postgres.h first in any source file, because it declares a number of things that you will need anyway.

Table 35-1. Equivalent C Types for Built-In SQL Types

SQL Type                     C Type            Defined In
abstime                      AbsoluteTime      utils/nabstime.h
boolean                      bool              postgres.h (maybe compiler built-in)
box                          BOX*              utils/geo_decls.h
bytea                        bytea*            postgres.h
"char"                       char              (compiler built-in)
character                    BpChar*           postgres.h
cid                          CommandId         postgres.h
date                         DateADT           utils/date.h
smallint (int2)              int2 or int16     postgres.h
int2vector                   int2vector*       postgres.h
integer (int4)               int4 or int32     postgres.h
real (float4)                float4*           postgres.h
double precision (float8)    float8*           postgres.h
interval                     Interval*         utils/timestamp.h
lseg                         LSEG*             utils/geo_decls.h
name                         Name              postgres.h
oid                          Oid               postgres.h
oidvector                    oidvector*        postgres.h
path                         PATH*             utils/geo_decls.h
point                        POINT*            utils/geo_decls.h
regproc                      regproc           postgres.h
reltime                      RelativeTime      utils/nabstime.h
text                         text*             postgres.h
tid                          ItemPointer       storage/itemptr.h
time                         TimeADT           utils/date.h
time with time zone          TimeTzADT         utils/date.h
timestamp                    Timestamp*        utils/timestamp.h
tinterval                    TimeInterval      utils/nabstime.h
varchar                      VarChar*          postgres.h
xid                          TransactionId     postgres.h

Now that we've gone over all of the possible structures for base types, we can show some examples of real functions.

35.9.3. Version 0 Calling Conventions

We present the "old style" calling convention first — although this approach is now deprecated, it's easier to get a handle on initially. In the version-0 method, the arguments and result of the C function are just declared in normal C style, taking care to use the C representation of each SQL data type as shown above.

Here are some examples:

#include "postgres.h"
#include <string.h>
#include "utils/geo_decls.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* by value */

int
add_one(int arg)
{
    return arg + 1;
}

/* by reference, fixed length */

float8 *
add_one_float8(float8 *arg)
{
    float8    *result = (float8 *) palloc(sizeof(float8));

    *result = *arg + 1.0;

    return result;
}

Point *
makepoint(Point *pointx, Point *pointy)
{
    Point     *new_point = (Point *) palloc(sizeof(Point));

    new_point->x = pointx->x;
    new_point->y = pointy->y;

    return new_point;
}

/* by reference, variable length */

text *
copytext(text *t)
{
    /*
     * VARSIZE is the total size of the struct in bytes.
     */
    text *new_t = (text *) palloc(VARSIZE(t));
    SET_VARSIZE(new_t, VARSIZE(t));
    /*
     * VARDATA is a pointer to the data region of the struct.
     */
    memcpy((void *) VARDATA(new_t), /* destination */
           (void *) VARDATA(t),     /* source */
           VARSIZE(t) - VARHDRSZ);  /* how many bytes */
    return new_t;
}

text *
concat_text(text *arg1, text *arg2)
{
    int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
    text *new_text = (text *) palloc(new_text_size);

    SET_VARSIZE(new_text, new_text_size);
    memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1) - VARHDRSZ);
    memcpy(VARDATA(new_text) + (VARSIZE(arg1) - VARHDRSZ),
           VARDATA(arg2), VARSIZE(arg2) - VARHDRSZ);
    return new_text;
}

Supposing that the above code has been prepared in file funcs.c and compiled into a shared object, we could define the functions to PostgreSQL with commands like this:

CREATE FUNCTION add_one(integer) RETURNS integer
     AS 'DIRECTORY/funcs', 'add_one'
     LANGUAGE C STRICT;

-- note overloading of SQL function name "add_one"
CREATE FUNCTION add_one(double precision) RETURNS double precision
     AS 'DIRECTORY/funcs', 'add_one_float8'
     LANGUAGE C STRICT;

CREATE FUNCTION makepoint(point, point) RETURNS point
     AS 'DIRECTORY/funcs', 'makepoint'
     LANGUAGE C STRICT;

CREATE FUNCTION copytext(text) RETURNS text
     AS 'DIRECTORY/funcs', 'copytext'
     LANGUAGE C STRICT;

CREATE FUNCTION concat_text(text, text) RETURNS text
     AS 'DIRECTORY/funcs', 'concat_text'
     LANGUAGE C STRICT;

Here, DIRECTORY stands for the directory of the shared library file (for instance the PostgreSQL tutorial directory, which contains the code for the examples used in this section). (Better style would be to use just 'funcs' in the AS clause, after having added DIRECTORY to the search path. In any case, we can omit the system-specific extension for a shared library, commonly .so or .sl.)

Notice that we have specified the functions as "strict", meaning that the system should automatically assume a null result if any input value is null. By doing this, we avoid having to check for null inputs in the function code. Without this, we'd have to check for null values explicitly, by checking for a null pointer for each pass-by-reference argument. (For pass-by-value arguments, we don't even have a way to check!)

Although this calling convention is simple to use, it is not very portable; on some architectures there are problems with passing data types that are smaller than int this way. Also, there is no simple way to return a null result, nor to cope with null arguments in any way other than making the function strict. The version-1 convention, presented next, overcomes these objections.

35.9.4. Version 1 Calling Conventions

The version-1 calling convention relies on macros to suppress most of the complexity of passing arguments and results. The C declaration of a version-1 function is always:

Datum funcname(PG_FUNCTION_ARGS)

In addition, the macro call:

PG_FUNCTION_INFO_V1(funcname);

must appear in the same source file. (Conventionally, it's written just before the function itself.) This macro call is not needed for internal-language functions, since PostgreSQL assumes that all internal functions use the version-1 convention. It is, however, required for dynamically-loaded functions.

In a version-1 function, each actual argument is fetched using a PG_GETARG_xxx() macro that corresponds to the argument's data type, and the result is returned using a PG_RETURN_xxx() macro for the return type. PG_GETARG_xxx() takes as its argument the number of the function argument to fetch, where the count starts at 0. PG_RETURN_xxx() takes as its argument the actual value to return.

Here we show the same functions as above, coded in version-1 style:

#include "postgres.h"
#include <string.h>
#include "fmgr.h"
#include "utils/geo_decls.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* by value */

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32   arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}

/* by reference, fixed length */

PG_FUNCTION_INFO_V1(add_one_float8);

Datum
add_one_float8(PG_FUNCTION_ARGS)
{
    /* The macros for FLOAT8 hide its pass-by-reference nature. */
    float8   arg = PG_GETARG_FLOAT8(0);

    PG_RETURN_FLOAT8(arg + 1.0);
}

PG_FUNCTION_INFO_V1(makepoint);

Datum
makepoint(PG_FUNCTION_ARGS)
{
    /* Here, the pass-by-reference nature of Point is not hidden. */
    Point     *pointx = PG_GETARG_POINT_P(0);
    Point     *pointy = PG_GETARG_POINT_P(1);
    Point     *new_point = (Point *) palloc(sizeof(Point));

    new_point->x = pointx->x;
    new_point->y = pointy->y;

    PG_RETURN_POINT_P(new_point);
}

/* by reference, variable length */

PG_FUNCTION_INFO_V1(copytext);

Datum
copytext(PG_FUNCTION_ARGS)
{
    text     *t = PG_GETARG_TEXT_P(0);
    /*
     * VARSIZE is the total size of the struct in bytes.
     */
    text     *new_t = (text *) palloc(VARSIZE(t));
    SET_VARSIZE(new_t, VARSIZE(t));
    /*
     * VARDATA is a pointer to the data region of the struct.
     */
    memcpy((void *) VARDATA(new_t), /* destination */
           (void *) VARDATA(t),     /* source */
           VARSIZE(t) - VARHDRSZ);  /* how many bytes */
    PG_RETURN_TEXT_P(new_t);
}

PG_FUNCTION_INFO_V1(concat_text);

Datum
concat_text(PG_FUNCTION_ARGS)
{
    text  *arg1 = PG_GETARG_TEXT_P(0);
    text  *arg2 = PG_GETARG_TEXT_P(1);
    int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
    text *new_text = (text *) palloc(new_text_size);

    SET_VARSIZE(new_text, new_text_size);
    memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1) - VARHDRSZ);
    memcpy(VARDATA(new_text) + (VARSIZE(arg1) - VARHDRSZ),
           VARDATA(arg2), VARSIZE(arg2) - VARHDRSZ);
    PG_RETURN_TEXT_P(new_text);
}

The CREATE FUNCTION commands are the same as for the version-0 equivalents.

At first glance, the version-1 coding conventions might appear to be just pointless obscurantism. They do, however, offer a number of improvements, because the macros can hide unnecessary detail. An example is that in coding add_one_float8, we no longer need to be aware that float8 is a pass-by-reference type. Another example is that the GETARG macros for variable-length types allow for more efficient fetching of "toasted" (compressed or out-of-line) values.

One big improvement in version-1 functions is better handling of null inputs and results. The macro PG_ARGISNULL(n) allows a function to test whether each input is null. (Of course, doing this is only necessary in functions not declared "strict".) As with the PG_GETARG_xxx() macros, the input arguments are counted beginning at zero. Note that one should refrain from executing PG_GETARG_xxx() until one has verified that the argument isn't null. To return a null result, execute PG_RETURN_NULL(); this works in both strict and nonstrict functions.
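
For example, a non-strict version-1 function (declared in CREATE FUNCTION without STRICT) could handle a null input explicitly like this. The function name is invented for the sketch, and it assumes the same includes as the version-1 examples above.

PG_FUNCTION_INFO_V1(add_one_nullable);

Datum
add_one_nullable(PG_FUNCTION_ARGS)
{
    /* check for null before fetching the argument */
    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();

    PG_RETURN_INT32(PG_GETARG_INT32(0) + 1);
}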

Other options provided in the new-style interface are two variants of the PG_GETARG_xxx() macros. The first of these, PG_GETARG_xxx_COPY(), guarantees to return a copy of the specified argument that is safe for writing into. (The normal macros will sometimes return a pointer to a value that is physically stored in a table, which must not be written to. Using the PG_GETARG_xxx_COPY() macros guarantees a writable result.) The second variant consists of the PG_GETARG_xxx_SLICE() macros which take three arguments. The first is the number of the function argument (as above). The second and third are the offset and length of the segment to be returned. Offsets are counted from zero, and a negative length requests that the remainder of the value be returned. These macros provide more efficient access to parts of large values in the case where they have storage type "external". (The storage type of a column can be specified using ALTER TABLE tablename ALTER COLUMN colname SET STORAGE storagetype. storagetype is one of plain, external, extended, or main.)
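
As a short sketch, the two variants might be used like this inside a version-1 function body (the macro names are real; the argument numbers and variable names are assumptions for the example):

/* a private, writable copy of the first argument */
text   *scratch = PG_GETARG_TEXT_P_COPY(0);

/* bytes 0..999 of the second argument; efficient if its storage type is "external" */
bytea  *prefix = PG_GETARG_BYTEA_P_SLICE(1, 0, 1000);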

Finally, the version-1 function call conventions make it possible to return set results (Section 35.9.10) and implement trigger functions (Chapter 36) and procedural-language call handlers (Chapter 49). Version-1 code is also more portable than version-0, because it does not break restrictions on function call protocol in the C standard. For more details see src/backend/utils/fmgr/README in the source distribution.

35.9.5. Writing Code

Before we turn to the more advanced topics, we should discuss some coding rules for PostgreSQL C-language functions. While it might be possible to load functions written in languages other than C into PostgreSQL, this is usually difficult (when it is possible at all) because other languages, such as C++, FORTRAN, or Pascal, often do not follow the same calling convention as C. That is, other languages do not pass argument and return values between functions in the same way. For this reason, we will assume that your C-language functions are actually written in C.

The basic rules for writing and building C functions are as follows:

  • Use pg_config --includedir-server to find out where the PostgreSQL server header files are installed on your system (or the system that your users will be running on).

  • Compiling and linking your code so that it can be dynamically loaded into PostgreSQL always requires special flags. See Section 35.9.6 for a detailed explanation of how to do it for your particular operating system.

  • Remember to define a "magic block" for your shared library, as described in Section 35.9.1.

  • When allocating memory, use the PostgreSQL functions palloc and pfree instead of the corresponding C library functions malloc and free. The memory allocated by palloc will be freed automatically at the end of each transaction, preventing memory leaks.

  • Always zero the bytes of your structures using memset. Without this, it's difficult to support hash indexes or hash joins, as you must pick out only the significant bits of your data structure to compute a hash. Even if you initialize all fields of your structure, there might be alignment padding (holes in the structure) that contain garbage values. A short sketch illustrating this and the previous rule follows this list.

  • Most of the internal PostgreSQL types are declared in postgres.h, while the function manager interfaces (PG_FUNCTION_ARGS, etc.) are in fmgr.h, so you will need to include at least these two files. For portability reasons it's best to include postgres.h first, before any other system or user header files. Including postgres.h will also include elog.h and palloc.h for you.

  • Symbol names defined within object files must not conflict with each other or with symbols defined in the PostgreSQL server executable. You will have to rename your functions or variables if you get error messages to this effect.
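
The following fragment sketches the palloc and memset rules mentioned above. The structure name and fields are made up for the example.

#include "postgres.h"
#include <string.h>
...
typedef struct
{
    int32   count;
    double  total;      /* alignment padding may precede this field */
} MyState;
...
MyState *state = (MyState *) palloc(sizeof(MyState));

/* zero everything, including any padding bytes, before filling in fields */
memset(state, 0, sizeof(MyState));
state->count = 1;
state->total = 42.0;
...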

35.9.6. Compiling and Linking Dynamically-Loaded Functions

Before you are able to use your PostgreSQL extension functions written in C, they must be compiled and linked in a special way to produce a file that can be dynamically loaded by the server. To be precise, a shared library needs to be created.

For information beyond what is contained in this section you should read the documentation of your operating system, in particular the manual pages for the C compiler, cc, and the link editor, ld. In addition, the PostgreSQL source code contains several working examples in the contrib directory. If you rely on these examples you will make your modules dependent on the availability of the PostgreSQL source code, however.

Creating shared libraries is generally analogous to linking executables: first the source files are compiled into object files, then the object files are linked together. The object files need to be created as position-independent code (PIC), which conceptually means that they can be placed at an arbitrary location in memory when they are loaded by the executable. (Object files intended for executables are usually not compiled that way.) The command to link a shared library contains special flags to distinguish it from linking an executable (at least in theory — on some systems the practice is much uglier).

In the following examples we assume that your source code is in a file foo.c and we will create a shared library foo.so. The intermediate object file will be called foo.o unless otherwise noted. A shared library can contain more than one object file, but we only use one here.

BSD/OS

The compiler flag to create PIC is -fpic. The linker flag to create shared libraries is -shared.

gcc -fpic -c foo.c
ld -shared -o foo.so foo.o

This is applicable as of version 4.0 of BSD/OS.

FreeBSD

The compiler flag to create PIC is -fpic. To create shared libraries the compiler flag is -shared.

gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o

This is applicable as of version 3.0 of FreeBSD.

HP-UX

The compiler flag of the system compiler to create PIC is +z. When using GCC it's -fpic. The linker flag for shared libraries is -b. So:

cc +z -c foo.c

or:

gcc -fpic -c foo.c

and then:

ld -b -o foo.sl foo.o

HP-UX uses the extension .sl for shared libraries, unlike most other systems.

IRIX

PIC is the default, no special compiler options are necessary. The linker option to produce shared libraries is -shared.

cc -c foo.c
ld -shared -o foo.so foo.o

Linux

The compiler flag to create PIC is -fpic. On some platforms in some situations -fPIC must be used if -fpic does not work. Refer to the GCC manual for more information. The compiler flag to create a shared library is -shared. A complete example looks like this:

cc -fpic -c foo.c
cc -shared -o foo.so foo.o

MacOS X

Here is an example. It assumes the developer tools are installed.

cc -c foo.c 
cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o

NetBSD

The compiler flag to create PIC is -fpic. For ELF systems, the compiler with the flag -shared is used to link shared libraries. On the older non-ELF systems, ld -Bshareable is used.

gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o

OpenBSD

The compiler flag to create PIC is -fpic. ld -Bshareable is used to link shared libraries.

gcc -fpic -c foo.c
ld -Bshareable -o foo.so foo.o

Solaris

The compiler flag to create PIC is -KPIC with the Sun compiler and -fpic with GCC. To link shared libraries, the compiler option is -G with either compiler or alternatively -shared with GCC.

cc -KPIC -c foo.c
cc -G -o foo.so foo.o

or

gcc -fpic -c foo.c
gcc -G -o foo.so foo.o

Tru64 UNIX

PIC is the default, so the compilation command is the usual one. ld with special options is used to do the linking.

cc -c foo.c
ld -shared -expect_unresolved '*' -o foo.so foo.o

The same procedure is used with GCC instead of the system compiler; no special options are required.

UnixWare

The compiler flag to create PIC is -K PIC with the SCO compiler and -fpic with GCC. To link shared libraries, the compiler option is -G with the SCO compiler and -shared with GCC.

cc -K PIC -c foo.c
cc -G -o foo.so foo.o

or

gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o

Tip: If this is too complicated for you, you should consider using GNU Libtool, which hides the platform differences behind a uniform interface.

The resulting shared library file can then be loaded into PostgreSQL. When specifying the file name to the CREATE FUNCTION command, one must give it the name of the shared library file, not the intermediate object file. Note that the system's standard shared-library extension (usually .so or .sl) can be omitted from the CREATE FUNCTION command, and normally should be omitted for best portability.

Refer back to Section 35.9.1 about where the server expects to find the shared library files.

35.9.7. Extension Building Infrastructure

If you are thinking about distributing your PostgreSQL extension modules, setting up a portable build system for them can be fairly difficult. Therefore the PostgreSQL installation provides a build infrastructure for extensions, called PGXS, so that simple extension modules can be built simply against an already installed server. Note that this infrastructure is not intended to be a universal build system framework that can be used to build all software interfacing to PostgreSQL; it simply automates common build rules for simple server extension modules. For more complicated packages, you need to write your own build system.

To use the infrastructure for your extension, you must write a simple makefile. In that makefile, you need to set some variables and finally include the global PGXS makefile. Here is an example that builds an extension module named isbn_issn consisting of a shared library, an SQL script, and a documentation text file:

MODULES = isbn_issn
DATA_built = isbn_issn.sql
DOCS = README.isbn_issn

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

The last three lines should always be the same. Earlier in the file, you assign variables or add custom make rules.

Set one of these three variables to specify what is built:

MODULES

list of shared objects to be built from source files with same stem (do not include suffix in this list)

MODULE_big

a shared object to build from multiple source files (list object files in OBJS)

PROGRAM

a binary program to build (list object files in OBJS)

The following variables can also be set:

MODULEDIR

subdirectory into which DATA and DOCS files should be installed (if not set, default is contrib)

DATA

random files to install into prefix/share/$MODULEDIR

DATA_built

random files to install into prefix/share/$MODULEDIR, which need to be built first

DATA_TSEARCH

random files to install under prefix/share/tsearch_data

DOCS

random files to install under prefix/doc/$MODULEDIR

SCRIPTS

script files (not binaries) to install into prefix/bin

SCRIPTS_built

script files (not binaries) to install into prefix/bin, which need to be built first

REGRESS

list of regression test cases (without suffix), see below

EXTRA_CLEAN

extra files to remove in make clean

PG_CPPFLAGS

will be added to CPPFLAGS

PG_LIBS

will be added to PROGRAM link line

SHLIB_LINK

will be added to MODULE_big link line

PG_CONFIG

path to pg_config program for the PostgreSQL installation to build against (typically just pg_config to use the first one in your PATH)

Put this makefile as Makefile in the directory which holds your extension. Then you can do make to compile, and later make install to install your module. By default, the extension is compiled and installed for the PostgreSQL installation that corresponds to the first pg_config program found in your path. You can use a different installation by setting PG_CONFIG to point to its pg_config program, either within the makefile or on the make command line.

Caution

Changing PG_CONFIG only works when building against PostgreSQL 8.3 or later. With older releases it does not work to set it to anything except pg_config; you must alter your PATH to select the installation to build against.

The scripts listed in the REGRESS variable are used for regression testing of your module, just like make installcheck is used for the main PostgreSQL server. For this to work you need to have a subdirectory named sql/ in your extension's directory, within which you put one file for each group of tests you want to run. The files should have extension .sql, which should not be included in the REGRESS list in the makefile. For each test there should be a file containing the expected result in a subdirectory named expected/, with extension .out. The tests are run by executing make installcheck, and the resulting output will be compared to the expected files. The differences will be written to the file regression.diffs in diff -c format. Note that trying to run a test which is missing the expected file will be reported as "trouble", so make sure you have all expected files.

Tip: The easiest way of creating the expected files is creating empty files, then carefully inspecting the result files after a test run (to be found in the results/ directory), and copying them to expected/ if they match what you want from the test.

35.9.8. Composite-Type Arguments

Composite types do not have a fixed layout like C structures. Instances of a composite type can contain null fields. In addition, composite types that are part of an inheritance hierarchy can have different fields than other members of the same inheritance hierarchy. Therefore, PostgreSQL provides a function interface for accessing fields of composite types from C.

Suppose we want to write a function to answer the query:

SELECT name, c_overpaid(emp, 1500) AS overpaid
    FROM emp
    WHERE name = 'Bill' OR name = 'Sam';

Using call conventions version 0, we can define c_overpaid as:

#include "postgres.h"
#include "executor/executor.h"  /* for GetAttributeByName() */

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

bool
c_overpaid(HeapTupleHeader t, /* the current row of emp */
           int32 limit)
{
    bool isnull;
    int32 salary;

    salary = DatumGetInt32(GetAttributeByName(t, "salary", &isnull));
    if (isnull)
        return false;
    return salary > limit;
}

In version-1 coding, the above would look like this:

#include "postgres.h"
#include "executor/executor.h"  /* for GetAttributeByName() */

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

PG_FUNCTION_INFO_V1(c_overpaid);

Datum
c_overpaid(PG_FUNCTION_ARGS)
{
    HeapTupleHeader  t = PG_GETARG_HEAPTUPLEHEADER(0);
    int32            limit = PG_GETARG_INT32(1);
    bool isnull;
    Datum salary;

    salary = GetAttributeByName(t, "salary", &isnull);
    if (isnull)
        PG_RETURN_BOOL(false);
    /* Alternatively, we might prefer to do PG_RETURN_NULL() for null salary. */

    PG_RETURN_BOOL(DatumGetInt32(salary) > limit);
}

GetAttributeByName is the PostgreSQL system function that returns attributes out of the specified row. It has three arguments: the argument of type HeapTupleHeader passed into the function, the name of the desired attribute, and a return parameter that tells whether the attribute is null. GetAttributeByName returns a Datum value that you can convert to the proper data type by using the appropriate DatumGetXXX() macro. Note that the return value is meaningless if the null flag is set; always check the null flag before trying to do anything with the result.

There is also GetAttributeByNum, which selects the target attribute by column number instead of name.

The following command declares the function c_overpaid in SQL:

CREATE FUNCTION c_overpaid(emp, integer) RETURNS boolean
    AS 'DIRECTORY/funcs', 'c_overpaid'
    LANGUAGE C STRICT;

Notice we have used STRICT so that we did not have to check whether the input arguments were NULL.

35.9.9. Returning Rows (Composite Types)

To return a row or composite-type value from a C-language function, you can use a special API that provides macros and functions to hide most of the complexity of building composite data types. To use this API, the source file must include:

#include "funcapi.h"

There are two ways you can build a composite data value (henceforth a "tuple"): you can build it from an array of Datum values, or from an array of C strings that can be passed to the input conversion functions of the tuple's column data types. In either case, you first need to obtain or construct a TupleDesc descriptor for the tuple structure. When working with Datums, you pass the TupleDesc to BlessTupleDesc, and then call heap_form_tuple for each row. When working with C strings, you pass the TupleDesc to TupleDescGetAttInMetadata, and then call BuildTupleFromCStrings for each row. In the case of a function returning a set of tuples, the setup steps can all be done once during the first call of the function.

Several helper functions are available for setting up the needed TupleDesc. The recommended way to do this in most functions returning composite values is to call:

TypeFuncClass get_call_result_type(FunctionCallInfo fcinfo,
                                   Oid *resultTypeId,
                                   TupleDesc *resultTupleDesc)

passing the same fcinfo struct passed to the calling function itself. (This of course requires that you use the version-1 calling conventions.) resultTypeId can be specified as NULL or as the address of a local variable to receive the function's result type OID. resultTupleDesc should be the address of a local TupleDesc variable. Check that the result is TYPEFUNC_COMPOSITE; if so, resultTupleDesc has been filled with the needed TupleDesc. (If it is not, you can report an error along the lines of "function returning record called in context that cannot accept type record".)

Tip: get_call_result_type can resolve the actual type of a polymorphic function result; so it is useful in functions that return scalar polymorphic results, not only functions that return composites. The resultTypeId output is primarily useful for functions returning polymorphic scalars.

Note: get_call_result_type has a sibling get_expr_result_type, which can be used to resolve the expected output type for a function call represented by an expression tree. This can be used when trying to determine the result type from outside the function itself. There is also get_func_result_type, which can be used when only the function's OID is available. However these functions are not able to deal with functions declared to return record, and get_func_result_type cannot resolve polymorphic types, so you should preferentially use get_call_result_type.

Older, now-deprecated functions for obtaining TupleDescs are:

TupleDesc RelationNameGetTupleDesc(const char *relname)

to get a TupleDesc for the row type of a named relation, and:

TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases)

to get a TupleDesc based on a type OID. This can be used to get a TupleDesc for a base or composite type. It will not work for a function that returns record, however, and it cannot resolve polymorphic types.

Once you have a TupleDesc, call:

TupleDesc BlessTupleDesc(TupleDesc tupdesc)

if you plan to work with Datums, or:

AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc)

if you plan to work with C strings. If you are writing a function returning set, you can save the results of these functions in the FuncCallContext structure — use the tuple_desc or attinmeta field respectively.

When working with Datums, use:

HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull)

to build a HeapTuple given user data in Datum form.

When working with C strings, use:

HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)

to build a HeapTuple given user data in C string form. values is an array of C strings, one for each attribute of the return row. Each C string should be in the form expected by the input function of the attribute data type. In order to return a null value for one of the attributes, the corresponding pointer in the values array should be set to NULL. This function will need to be called again for each row you return.

Once you have built a tuple to return from your function, it must be converted into a Datum. Use:

HeapTupleGetDatum(HeapTuple tuple)

to convert a HeapTuple into a valid Datum. This Datum can be returned directly if you intend to return just a single row, or it can be used as the current return value in a set-returning function.
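
Putting these pieces together, here is a minimal sketch of a function returning a single composite row built from C strings. The function name and column values are invented; it assumes the usual includes plus funcapi.h, and that the function is declared in SQL with a two-column composite (or record) result type.

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"

PG_FUNCTION_INFO_V1(make_pair);

Datum
make_pair(PG_FUNCTION_ARGS)
{
    TupleDesc       tupdesc;
    AttInMetadata  *attinmeta;
    char           *values[2];
    HeapTuple       tuple;

    /* obtain the expected row structure from the call context */
    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("function returning record called in context "
                        "that cannot accept type record")));

    attinmeta = TupleDescGetAttInMetadata(tupdesc);

    /* text-form values for the two columns; a NULL pointer would mean SQL NULL */
    values[0] = pstrdup("1");
    values[1] = pstrdup("hello");

    tuple = BuildTupleFromCStrings(attinmeta, values);
    PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
}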

An example appears in the next section.

35.9.10. Returning Sets

There is also a special API that provides support for returning sets (multiple rows) from a C-language function. A set-returning function must follow the version-1 calling conventions. Also, source files must include funcapi.h, as above.

A set-returning function (SRF) is called once for each item it returns. The SRF must therefore save enough state to remember what it was doing and return the next item on each call. The structure FuncCallContext is provided to help control this process. Within a function, fcinfo->flinfo->fn_extra is used to hold a pointer to FuncCallContext across calls.

typedef struct
{
    /*
     * Number of times we've been called before
     *
     * call_cntr is initialized to 0 for you by SRF_FIRSTCALL_INIT(), and
     * incremented for you every time SRF_RETURN_NEXT() is called.
     */
    uint32 call_cntr;

    /*
     * OPTIONAL maximum number of calls
     *
     * max_calls is here for convenience only and setting it is optional.
     * If not set, you must provide alternative means to know when the
     * function is done.
     */
    uint32 max_calls;

    /*
     * OPTIONAL pointer to result slot
     *
     * This is obsolete and only present for backwards compatibility, viz,
     * user-defined SRFs that use the deprecated TupleDescGetSlot().
     */
    TupleTableSlot *slot;

    /*
     * OPTIONAL pointer to miscellaneous user-provided context information
     *
     * user_fctx is for use as a pointer to your own data to retain
     * arbitrary context information between calls of your function.
     */
    void *user_fctx;

    /*
     * OPTIONAL pointer to struct containing attribute type input metadata
     *
     * attinmeta is for use when returning tuples (i.e., composite data types)
     * and is not used when returning base data types. It is only needed
     * if you intend to use BuildTupleFromCStrings() to create the return
     * tuple.
     */
    AttInMetadata *attinmeta;

    /*
     * memory context used for structures that must live for multiple calls
     *
     * multi_call_memory_ctx is set by SRF_FIRSTCALL_INIT() for you, and used
     * by SRF_RETURN_DONE() for cleanup. It is the most appropriate memory
     * context for any memory that is to be reused across multiple calls
     * of the SRF.
     */
    MemoryContext multi_call_memory_ctx;

    /*
     * OPTIONAL pointer to struct containing tuple description
     *
     * tuple_desc is for use when returning tuples (i.e., composite data types)
     * and is only needed if you are going to build the tuples with
     * heap_form_tuple() rather than with BuildTupleFromCStrings().  Note that
     * the TupleDesc pointer stored here should usually have been run through
     * BlessTupleDesc() first.
     */
    TupleDesc tuple_desc;

} FuncCallContext;

An SRF uses several functions and macros that automatically manipulate the FuncCallContext structure (and expect to find it via fn_extra). Use:

SRF_IS_FIRSTCALL()

to determine if your function is being called for the first or a subsequent time. On the first call (only) use:

SRF_FIRSTCALL_INIT()

to initialize the FuncCallContext. On every function call, including the first, use:

SRF_PERCALL_SETUP()

to properly set up for using the FuncCallContext and clearing any previously returned data left over from the previous pass.

If your function has data to return, use:

SRF_RETURN_NEXT(funcctx, result)

to return it to the caller. (result must be of type Datum, either a single value or a tuple prepared as described above.) Finally, when your function is finished returning data, use:

SRF_RETURN_DONE(funcctx)

to clean up and end the SRF.

The memory context that is current when the SRF is called is a transient context that will be cleared between calls. This means that you do not need to call pfree on everything you allocated using palloc; it will go away anyway. However, if you want to allocate any data structures to live across calls, you need to put them somewhere else. The memory context referenced by multi_call_memory_ctx is a suitable location for any data that needs to survive until the SRF is finished running. In most cases, this means that you should switch into multi_call_memory_ctx while doing the first-call setup.

A complete pseudo-code example looks like the following:

Datum
my_set_returning_function(PG_FUNCTION_ARGS)
{
    FuncCallContext  *funcctx;
    Datum             result;
    /* further declarations as needed */

    if (SRF_IS_FIRSTCALL())
    {
        MemoryContext oldcontext;

        funcctx = SRF_FIRSTCALL_INIT();
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
        /* One-time setup code appears here: */
        /* user code */
        /* if returning composite:
         *     build TupleDesc, and perhaps AttInMetadata
         */
        /* user code */
        MemoryContextSwitchTo(oldcontext);
    }

    /* Each-time setup code appears here: */
    /* user code */
    funcctx = SRF_PERCALL_SETUP();
    /* user code */

    /* this is just one way we might test whether we are done: */
    if (funcctx->call_cntr < funcctx->max_calls)
    {
        /* Here we want to return another item: */
        /* user code: obtain the result Datum */
        SRF_RETURN_NEXT(funcctx, result);
    }
    else
    {
        /* Here we are done returning items and just need to clean up: */
        /* user code */
        SRF_RETURN_DONE(funcctx);
    }
}

A complete example of a simple SRF returning a composite type looks like:

PG_FUNCTION_INFO_V1(retcomposite);

Datum
retcomposite(PG_FUNCTION_ARGS)
{
    FuncCallContext     *funcctx;
    int                  call_cntr;
    int                  max_calls;
    TupleDesc            tupdesc;
    AttInMetadata       *attinmeta;

    /* stuff done only on the first call of the function */
    if (SRF_IS_FIRSTCALL())
    {
        MemoryContext   oldcontext;

        /* create a function context for cross-call persistence */
        funcctx = SRF_FIRSTCALL_INIT();

        /* switch to memory context appropriate for multiple function calls */
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

        /* total number of tuples to be returned */
        funcctx->max_calls = PG_GETARG_UINT32(0);

        /* Build a tuple descriptor for our result type */
        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("function returning record called in context "
                            "that cannot accept type record")));

        /*
         * generate attribute metadata needed later to produce tuples from raw
         * C strings
         */
        attinmeta = TupleDescGetAttInMetadata(tupdesc);
        funcctx->attinmeta = attinmeta;

        MemoryContextSwitchTo(oldcontext);
    }

    /* stuff done on every call of the function */
    funcctx = SRF_PERCALL_SETUP();

    call_cntr = funcctx->call_cntr;
    max_calls = funcctx->max_calls;
    attinmeta = funcctx->attinmeta;

    if (call_cntr < max_calls)    /* do when there is more left to send */
    {
        char       **values;
        HeapTuple    tuple;
        Datum        result;

        /*
         * Prepare a values array for building the returned tuple.
         * This should be an array of C strings which will
         * be processed later by the type input functions.
         */
        values = (char **) palloc(3 * sizeof(char *));
        values[0] = (char *) palloc(16 * sizeof(char));
        values[1] = (char *) palloc(16 * sizeof(char));
        values[2] = (char *) palloc(16 * sizeof(char));

        snprintf(values[0], 16, "%d", 1 * PG_GETARG_INT32(1));
        snprintf(values[1], 16, "%d", 2 * PG_GETARG_INT32(1));
        snprintf(values[2], 16, "%d", 3 * PG_GETARG_INT32(1));

        /* build a tuple */
        tuple = BuildTupleFromCStrings(attinmeta, values);

        /* make the tuple into a datum */
        result = HeapTupleGetDatum(tuple);

        /* clean up (this is not really necessary) */
        pfree(values[0]);
        pfree(values[1]);
        pfree(values[2]);
        pfree(values);

        SRF_RETURN_NEXT(funcctx, result);
    }
    else    /* do when there is no more left */
    {
        SRF_RETURN_DONE(funcctx);
    }
}

One way to declare this function in SQL is:

CREATE TYPE __retcomposite AS (f1 integer, f2 integer, f3 integer);

CREATE OR REPLACE FUNCTION retcomposite(integer, integer)
    RETURNS SETOF __retcomposite
    AS 'filename', 'retcomposite'
    LANGUAGE C IMMUTABLE STRICT;

A different way is to use OUT parameters:

CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer,
    OUT f1 integer, OUT f2 integer, OUT f3 integer)
    RETURNS SETOF record
    AS 'filename', 'retcomposite'
    LANGUAGE C IMMUTABLE STRICT;

Notice that in this method the output type of the function is formally an anonymous record type.

The directory contrib/tablefunc in the source distribution contains more examples of set-returning functions.

35.9.11. Polymorphic Arguments and Return Types

C-language functions can be declared to accept and return the polymorphic types anyelement, anyarray, anynonarray, and anyenum. See Section 35.2.5 for a more detailed explanation of polymorphic functions. When function arguments or return types are defined as polymorphic types, the function author cannot know in advance what data type it will be called with, or need to return. There are two routines provided in fmgr.h to allow a version-1 C function to discover the actual data types of its arguments and the type it is expected to return. The routines are called get_fn_expr_rettype(FmgrInfo *flinfo) and get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). They return the result or argument type OID, or InvalidOid if the information is not available. The structure flinfo is normally accessed as fcinfo->flinfo. The parameter argnum is zero based. get_call_result_type can also be used as an alternative to get_fn_expr_rettype.

For example, suppose we want to write a function to accept a single element of any type, and return a one-dimensional array of that type:

PG_FUNCTION_INFO_V1(make_array);
Datum
make_array(PG_FUNCTION_ARGS)
{
    ArrayType  *result;
    Oid         element_type = get_fn_expr_argtype(fcinfo->flinfo, 0);
    Datum       element;
    bool        isnull;
    int16       typlen;
    bool        typbyval;
    char        typalign;
    int         ndims;
    int         dims[MAXDIM];
    int         lbs[MAXDIM];

    if (!OidIsValid(element_type))
        elog(ERROR, "could not determine data type of input");

    /* get the provided element, being careful in case it's NULL */
    isnull = PG_ARGISNULL(0);
    if (isnull)
        element = (Datum) 0;
    else
        element = PG_GETARG_DATUM(0);

    /* we have one dimension */
    ndims = 1;
    /* and one element */
    dims[0] = 1;
    /* and lower bound is 1 */
    lbs[0] = 1;

    /* get required info about the element type */
    get_typlenbyvalalign(element_type, &typlen, &typbyval, &typalign);

    /* now build the array */
    result = construct_md_array(&element, &isnull, ndims, dims, lbs,
                                element_type, typlen, typbyval, typalign);

    PG_RETURN_ARRAYTYPE_P(result);
}

The following command declares the function make_array in SQL:

CREATE FUNCTION make_array(anyelement) RETURNS anyarray
    AS 'DIRECTORY/funcs', 'make_array'
    LANGUAGE C IMMUTABLE;
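As a small additional fragment (not part of the example above, and assumed to execute inside a polymorphic version-1 function such as make_array), the resolved argument and result types could be inspected like this:

Oid     argtype = get_fn_expr_argtype(fcinfo->flinfo, 0);
Oid     rettype = get_fn_expr_rettype(fcinfo->flinfo);

if (!OidIsValid(argtype) || !OidIsValid(rettype))
    elog(ERROR, "could not determine polymorphic types");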

There is a variant of polymorphism that is only available to C-language functions: they can be declared to take parameters of type "any". (Note that this type name must be double-quoted, since it's also a SQL reserved word.) This works like anyelement except that it does not constrain different "any" arguments to be the same type, nor do they help determine the function's result type. A C-language function can also declare its final parameter to be VARIADIC "any". This will match one or more actual arguments of any type (not necessarily the same type). These arguments will not be gathered into an array as happens with normal variadic functions; they will just be passed to the function separately. The PG_NARGS() macro and the methods described above must be used to determine the number of actual arguments and their types when using this feature.
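A minimal sketch of this feature follows, using the hypothetical name count_nonnulls for a function that would be declared in SQL with a single VARIADIC "any" parameter (and not STRICT, so that it can see null arguments); it merely counts the non-null arguments it receives:

#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(count_nonnulls);

Datum
count_nonnulls(PG_FUNCTION_ARGS)
{
    int32   nonnull = 0;
    int     i;

    /* PG_NARGS() reports how many actual arguments were passed */
    for (i = 0; i < PG_NARGS(); i++)
    {
        /* each argument may have a different type */
        Oid     argtype = get_fn_expr_argtype(fcinfo->flinfo, i);

        if (!OidIsValid(argtype))
            elog(ERROR, "could not determine type of argument %d", i);

        if (!PG_ARGISNULL(i))
            nonnull++;
    }

    PG_RETURN_INT32(nonnull);
}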

35.9.12. Shared Memory and LWLocks

Add-ins can reserve LWLocks and an allocation of shared memory on server startup. The add-in's shared library must be preloaded by specifying it in shared_preload_libraries. Shared memory is reserved by calling:

void RequestAddinShmemSpace(int size)

from your _PG_init function.

LWLocks are reserved by calling:

void RequestAddinLWLocks(int n)

from _PG_init.
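A minimal sketch, assuming a hypothetical add-in whose shared state is a single struct (the names mystruct and mylockid are illustrative and match the fragment below), might perform both reservations like this:

#include "postgres.h"
#include "fmgr.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"

PG_MODULE_MAGIC;

/* hypothetical layout of the add-in's shared memory area */
typedef struct
{
    LWLockId    mylockid;
    /* ... other shared state ... */
} mystruct;

void    _PG_init(void);

void
_PG_init(void)
{
    /*
     * These requests are honored only while shared_preload_libraries is
     * being processed at server start.
     */
    RequestAddinShmemSpace(MAXALIGN(sizeof(mystruct)));
    RequestAddinLWLocks(1);
}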

To avoid possible race conditions, each backend should use the LWLock AddinShmemInitLock when connecting to and initializing its allocation of shared memory, as shown here:

static mystruct *ptr = NULL;

if (!ptr)
{
        bool    found;

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        ptr = ShmemInitStruct("my struct name", size, &found);
        if (!found)
        {
                /* initialize contents of shmem area */
                /* acquire any requested LWLocks, for example: */
                ptr->mylockid = LWLockAssign();
        }
        LWLockRelease(AddinShmemInitLock);
}
