Dual-use foundation model

An AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.


Source: NIST SP 800-218A