Artificial intelligence is often described as a latent threat to corporate data sovereignty. Companies that do not want to expose themselves to this threat need strategies and tactics for using AI securely and productively. VNC explains the most important elements of Confidential AI.
Artificial intelligence and confidential computing exist in a seemingly irresolvable tension: on the one hand, AI poses a latent threat to confidential computing; on the other, it can help implement it. The key to resolving this contradiction is Confidential AI, a concept for handling AI that identifies the major uncertainty factors and makes them manageable. Data and data sovereignty come first. On that basis, AI can be used in a targeted and secure manner to optimize collaboration and communication applications. VNC, a leading developer of open-source-based enterprise applications, explains the key elements:
- Decentralized LLMs: Large Language Models (LLMs) typically run on the vendors’ proprietary hyperscaler platforms, where what happens to the data is unknown. It therefore makes sense to deploy the LLMs on your own platforms using appropriate toolkits (such as Intel OpenVINO). That way, both the data and the applications remain in your own data center, protected from unauthorized access.
- Local AI for application optimization: Stripped-down variants of these LLMs can be embedded directly into applications as part of dedicated software stacks. AI functions such as intelligent assistants thus become a transparent, integral part of the application. On modular platforms, they are also available to all applications in the same form across modules.
- Train models on local knowledge bases: LLMs are usually trained on large amounts of external data. When LLMs run locally, they can instead be trained and optimized quickly, specifically and practically by querying the company’s own databases. This also keeps them transparent and auditable at all times.
- Acceleration through indexing: In addition, access to the local knowledge base can be accelerated by indexing the databases. Queries are no longer made against the entire database but only against the indexes. This increases speed, availability and accuracy – and therefore productivity.
- Open source: Open source is the natural enemy of proprietary systems. To avoid vendor lock-in, chronically opaque vendor-specific AI platforms (PaaS) should be avoided for confidential computing, and open source should also be the first choice for AI applications at the software level (SaaS).
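The indexing idea behind the points above can be illustrated with a toy inverted index in Python: queries touch only the index, never the full document store. This is a minimal sketch, not VNC's implementation; the document names and contents are invented for illustration, and a real knowledge base would typically index embeddings or use a search engine rather than raw terms.

```python
from collections import defaultdict

# Hypothetical local knowledge base (contents invented for illustration).
documents = {
    "policy.txt": "remote work policy requires vpn access for all employees",
    "handbook.txt": "employees receive laptops and vpn credentials on day one",
    "faq.txt": "contact it support to reset your vpn password",
}

# Build an inverted index: each term maps to the set of documents containing it.
index = defaultdict(set)
for name, text in documents.items():
    for term in text.split():
        index[term].add(name)

def search(query):
    """Return documents containing every query term, using only the index."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]  # intersect per term; the full texts are never scanned
    return results
```

For example, `search("vpn employees")` intersects the postings for both terms and returns `{"policy.txt", "handbook.txt"}` without scanning any document body, which is what makes index-backed queries faster than full-database scans.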
“Confidential AI gives companies and users back the sovereignty over their own data and applications,”
emphasizes Andrea Wörrlein, Managing Director of VNC in Berlin and Member of the Board of VNC AG in Zug.
“Artificial intelligence can thus be used in a secure, targeted, independent and practical way to increase productivity and job satisfaction.”