In parallel with oneVPL accelerating the decoding and encoding of media streams, oneDNN (the oneAPI Deep Neural Network Library) delivers AI-optimized kernels that accelerate inference in the TensorFlow and PyTorch frameworks, or can be paired with the OpenVINO model optimizer and inference engine to further accelerate inference and speed customers' deployment of their workloads. Media distributors can choose between the two leading media frameworks, FFmpeg and GStreamer, both enabled for acceleration with oneVPL on Intel CPUs and GPUs. Media streaming and delivery software stacks lean on Intel® oneVPL for decode and encode acceleration across all the major codecs, including AV1, and AI models can be applied to the decoded streams using the Xe-cores of the Intel® Data Center GPU Flex Series.

The Flex Series supports 8 simultaneous 4K streams, or more than 30 1080p streams, per card. Targeting data center cloud gaming, media streaming, and video analytics applications, the Intel® Data Center GPU Flex Series provides a hardware-accelerated AV1 encoder, delivering a 30% bit-rate improvement without compromising on quality.
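As a concrete illustration of the FFmpeg integration mentioned above, a GPU-to-GPU transcode from H.264 to AV1 via Intel Quick Sync Video might look like the sketch below. The file names and bit rate are placeholders; actually running the command requires an FFmpeg build with oneVPL/Quick Sync support and an AV1-encode-capable Intel GPU such as the Flex Series, so the sketch only constructs and prints the command line:

```shell
# Sketch of a hardware transcode: decode H.264 with the h264_qsv decoder and
# re-encode to AV1 with av1_qsv, keeping frames on the Intel GPU via
# "-hwaccel qsv". File names and the 3M bit rate are illustrative placeholders.
build_av1_transcode_cmd() {
  local input="$1" output="$2"
  printf 'ffmpeg -hwaccel qsv -c:v h264_qsv -i %s -c:v av1_qsv -b:v 3M %s\n' \
    "$input" "$output"
}

# Print the command rather than executing it, since execution needs Intel GPU
# hardware and a oneVPL-enabled FFmpeg build.
build_av1_transcode_cmd input.mp4 output.mp4
```

A GStreamer pipeline can achieve the same decode/encode offload through the corresponding VA/QSV plugin elements.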
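The per-card stream counts quoted above are consistent in raw pixel throughput: a 4K (3840x2160) frame carries four times the pixels of a 1080p (1920x1080) frame, so 8 simultaneous 4K streams correspond to about 32 1080p-equivalents, in line with the "more than 30" figure. A quick sanity check:

```shell
# Express 8 simultaneous 4K streams as 1080p-equivalents by pixel count.
echo $(( 8 * 3840 * 2160 / (1920 * 1080) ))   # → 32
```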