This VDE SPEC provides a way to describe certain socio-technical characteristics of systems and applications that incorporate artificial intelligence techniques and methods. The scope of application covers products for which a particularly high level of trust is desired or required. By applying the VCIO model explained in this standard, it is possible to describe whether a product adheres to specific values and can therefore be trusted. This standard can thus serve, for example, as the basis for attaching a trust label to a product.

The product characterisation according to this standard can be used in a wide variety of contexts. End consumers, companies and government organisations can use the description to define requirements or to compare different products. This also makes it possible to assess compliance with different values separately (for example, one product might better satisfy privacy requirements, while another might better satisfy transparency criteria). In addition, target requirements can be set during the development of a product and taken into account in the development process in order to achieve the desired level of value compliance.

The standardised description is independent of the risk posed by the product and does not define any minimum requirements in this respect; it describes compliance with the specified values orthogonally to risk. Nevertheless, companies, users or government bodies can themselves set requirements for a minimum level within this framework.

The consortium has worked towards making this standard compatible with the emerging AI Act at the European level. For AI products, the objective is a description of trustworthiness aspects that both demonstrates compliance of the product with the AI Act and enables differentiation in the market.

The focus of the standard is on systems and applications that incorporate artificial intelligence techniques and methods. The criteria, indicators and observables therefore address characteristics of AI systems, such as the underlying data sets, the precise definition of the scope of application, development, application, processes, and the clear assignment of responsibilities. In addition, aspects that are not limited to AI systems but are necessary to demonstrate their trustworthiness have also been taken into account.
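To make the referenced structure more tangible, the following is a minimal, purely illustrative sketch of how the VCIO hierarchy (values, criteria, indicators, observables) could be represented as a data model and used to compare products value by value. The class names, example entries and the simple aggregation rule are assumptions made for illustration only and are not defined by this standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the VCIO hierarchy: values -> criteria ->
# indicators -> observables. All concrete names below (e.g. the
# "transparency" example entries) are hypothetical.

@dataclass
class Observable:
    description: str          # a directly checkable fact about the product
    fulfilled: bool = False

@dataclass
class Indicator:
    description: str
    observables: list[Observable] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of fulfilled observables (one possible aggregation rule)."""
        if not self.observables:
            return 0.0
        return sum(o.fulfilled for o in self.observables) / len(self.observables)

@dataclass
class Criterion:
    description: str
    indicators: list[Indicator] = field(default_factory=list)

@dataclass
class Value:
    name: str                 # e.g. "transparency" or "privacy"
    criteria: list[Criterion] = field(default_factory=list)

# Hypothetical example: a single value with one criterion, indicator and
# observable. Two products described this way can be compared per value,
# e.g. one scoring higher on "privacy" and the other on "transparency".
transparency = Value(
    name="transparency",
    criteria=[Criterion(
        description="traceability of the underlying data sets",
        indicators=[Indicator(
            description="data sets are documented",
            observables=[Observable("a data sheet for the training data exists",
                                    fulfilled=True)],
        )],
    )],
)
print(transparency.criteria[0].indicators[0].score())  # -> 1.0
```

Such a representation would also allow target requirements to be expressed as minimum scores per value during development, in line with the usage contexts described above; how scores are actually aggregated is left open here.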