Reporting Services 2: Parameterized Reports
This post introduces parameterized reports in Reporting Services.
By how variable their content is, reports fall into static reports and parameterized reports; data-driven parameterized reports better match how people actually want to query data.
First, the steps to create a parameterized report with Reporting Services:
1. In the AWReport project, use the right-click menu "Add" → "New Item" → "Report" to add a new report, rptPR.rdl.
2. On the "Data" tab of rptPR.rdl, open the dataset drop-down list and create a new dataset, DataEmployee (see Figure 1). Click "Generic Query Designer" to switch to query design mode, add the HumanResources.Employee table, and select all columns as output.
Figure 1 Creating the new dataset
3. Add the report parameters listed in Table 1 (in the report designer, via the "Report" → "Report Parameters" dialog).

Parameter Name    Data Type    Prompt
pTitle            String       Title
pMStatus          String       Marital Status
pGender           String       Gender
pSFlag            Boolean      Salaried
pBDate            DateTime     Birth Date

Table 1 Parameter names, data types, and prompts
4. Switch to the report's "Data" tab and add the filter conditions shown in Figure 2 to the corresponding fields.
Figure 2 Adding parameters as filter conditions on fields
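As an alternative to report-level filters, the same restriction can be applied in the dataset query itself, which pushes the filtering down to SQL Server and transfers less data to the report. This is only a sketch against the AdventureWorks HumanResources.Employee table; it assumes each @p* query parameter is mapped to the report parameter of the same name on the dataset's "Parameters" tab.

```sql
-- Filter in the query instead of with report filters:
-- each @p* query parameter is mapped to the matching report parameter.
SELECT *
FROM HumanResources.Employee
WHERE Title         = @pTitle
  AND MaritalStatus = @pMStatus
  AND Gender        = @pGender
  AND SalariedFlag  = @pSFlag
  AND BirthDate     = @pBDate
```

The trade-off: query-level filtering requires a round trip to the server whenever a parameter changes, while report-level filters work on data already retrieved.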
5. Switch to the report's "Layout" tab and design the layout shown in Figure 3.
Figure 3 Report layout
6. In preview mode, supply different values for the parameters displayed on the report, and you will see the parameterized report shown in Figure 4.
Figure 4 The parameterized report in preview mode
"Marital Status" actually takes only two values, M (Married) and S (Single or Sole), so a free-text box is clearly the wrong way to accept this parameter: typing the full word, or U as the first letter of "Unmarried", returns no results at all. Instead, configure the "Available values" area of the "Report Parameters" dialog as shown in Figure 5, so that "Marital Status" can be chosen from a drop-down list. The "Gender" parameter should be set up the same way.
Figure 5 Setting available values for "Marital Status"
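Instead of typing the value/label pairs by hand in the "Available values" area, they can also come from a small dataset, so the drop-down shows a readable label while the report receives the stored code. A sketch of such a dataset query (the column names Value and Label are arbitrary; any names can be chosen in the available-values settings):

```sql
-- Value is what the filter compares against; Label is what the user sees.
SELECT 'M' AS Value, 'Married' AS Label
UNION ALL
SELECT 'S', 'Single'
```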
If users are unfamiliar with the company's job titles, or the same title goes by different names, the same problem arises; moreover, the set of "Title" values is not fixed and may well grow or shrink as the system is maintained. In that case the parameter's available values can be supplied from a query. Note that if the Title field of the same dataset, DataEmployee, is used as the source of available values while Title is the first parameter, a forward-dependency error occurs, because the values of Title would then depend on parameters that come after it. If instead Title is made the last parameter, changing any other parameter's value refreshes the data in the Title drop-down list (this behavior can be used to build cascading parameters such as province/city/county). Here, the better approach is to create a separate dataset, DataTitle, using the method shown in Figure 1, with its data source set to:
SELECT DISTINCT Title FROM HumanResources.Employee
Then set the available values for the "Title" parameter as shown in Figure 6.
Figure 6 Setting available values for "Title"
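The refresh behavior described above is exactly what makes cascading (province → city style) parameters work: the dataset behind a lower-level parameter simply references the higher-level parameter in its query. A sketch against the AdventureWorks Person.Address table, assuming a hypothetical @pProvince report parameter has already been defined:

```sql
-- Available values for a City parameter depend on the chosen province,
-- so changing @pProvince refreshes the City drop-down list.
SELECT DISTINCT City
FROM Person.Address
WHERE StateProvinceID = @pProvince
```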
In addition, a "default value" can be specified for each parameter to reduce data entry for common queries.
Figure 7 shows the report displayed in a Web project.
Figure 7 The parameterized report displayed in a Web project
Reporting Services satisfies the need for parameterized reports to a large extent, but there is still room for improvement: for example, once a parameter is defined, the report cannot be viewed while ignoring that parameter's value.
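A common workaround for this limitation is to mark the parameter as allowing null values in the "Report Parameters" dialog and make the dataset query treat NULL as "no restriction". A sketch for the Title parameter, assuming query-level filtering as described earlier:

```sql
-- When @pTitle is NULL the condition is always true,
-- so the parameter is effectively ignored.
SELECT *
FROM HumanResources.Employee
WHERE (@pTitle IS NULL OR Title = @pTitle)
```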
