
google/XNNPACK

XNNPACK

High-efficiency floating-point neural network inference operators for mobile, server, and Web

Builder

Google

google • big-tech

Stars

2,291

Using upstream star count

Forks

475

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Sep 13, 2019

Project creation date

README Summary

XNNPACK is a highly optimized library of floating-point neural network inference operators designed for mobile, server, and web platforms. It provides efficient implementations of common neural network operations with extensive hardware acceleration support across different architectures including ARM, x86, and WebAssembly.

AI Dev Skills

Unmapped

Neural Network Optimization, Hardware Acceleration, Mobile AI Inference, SIMD Programming, Cross-platform Performance Engineering, Low-level Neural Network Operators, Quantized Neural Networks, ARM NEON Optimization, x86 AVX Optimization, WebAssembly SIMD

Tags

Neural Network Optimization, Hardware Acceleration, Mobile AI Inference, SIMD Programming, Cross-platform Performance Engineering, Low-level Neural Network Operators, Quantized Neural Networks, ARM NEON Optimization, x86 AVX Optimization, WebAssembly SIMD, Edge/Mobile, Edge AI Deployment, Self-hosted, Real-time AI Applications, Multimodal, On-premise, On-device AI, Browser/WASM, Mobile Neural Network Inference, Mobile AI, Cloud API, Browser-based ML Models, Server-side Model Serving, Embedded AI Systems, Edge Computing, C

Taxonomy

Recent Activity

Updated 1 month ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
high
Maturity
production

Categories

Dev Tools & Automation (Primary), Inference & Serving, ML Platform & Infrastructure, Coding & Dev Tools, Multimodal AI, Edge & Mobile AI, Other AI / ML, Robotics, Foundation Models

PM Skills

Developer Platform

Languages

C 100.0%

Timeline

Project created
Sep 13, 2019
Forked
Mar 13, 2026
Your last push
1 month ago
Upstream last push
6 days ago
Tracked since
Mar 13, 2026
