Takaichi LDP landslide - watch: with over 2/3 of seats, the strongest mandate in living memory - good news for humans and AI. Japan is a world-class benchmark for connecting community actions with AI data models, and Jensen Huang's favorite country for the diversity of its engineering startups. Gemini update on the relevance of Norman Macrae's legacy (Von Neumann & Japan/Economist diaries) to AI's Q2. AIWHI ED EconomistDiary.com: 2/3 of brainpower involves Asia Rising - mapping intelligence links since 1943,
by a Scot teenage navigator with Allied Bomber Command, Burma. See: Future History.

Sovereignty of Japan's AI & engineering is unique - history explains why it is Jensen Huang's favorite space for science tourism and community application of machines with a billion times more maths brainpower.

If you map the legacy of NET (Neumann-Einstein-Turing), Japan was first to implement Deming's recursive quality systems, making it able to value microelectronic innovation matching Moore's law's roughly 100-fold advance per decade from 1965 to 1995. Japan shared this consequence with the futures of Korea, Taiwan, Hong Kong and Singapore until the financial slump of the late 1980s. Nonetheless, a generation of Japan's digital twinning with the US west coast brought supercity infrastructure, micro-design of electronic goods and advances in robotics. All of this aligned with consciousness of nature and ritual celebration of rising-sun values.
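As a quick check of that 100-fold-per-decade figure, here is a minimal back-of-envelope sketch; the 18- and 24-month doubling periods are the usual Moore's law assumptions, not numbers taken from this post or the keynote.

# Back-of-envelope check of "roughly 100-fold advance per decade",
# assuming transistor counts double every 18-24 months (classic Moore's
# law figures - an assumption for illustration).
for months_per_doubling in (18, 24):
    doublings_per_decade = 120 / months_per_doubling
    growth = 2 ** doublings_per_decade
    print(f"doubling every {months_per_doubling} months -> ~{growth:.0f}x per decade")
# doubling every 18 months -> ~101x per decade
# doubling every 24 months -> ~32x per decade

So the 100-fold figure corresponds to the faster, 18-month reading of Moore's law.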

Japan is potentially the most exciting AI partner for deep community needs everywhere, but this has different first priorities for the 2/3 of people who are Asian and the 1/6 of people who make up the rich western-north or the poor western-south. It is just as well that NHK media listens deeply with its social tourism programs such as Somewhere Street.



Saturday, May 3, 2025

Some interesting interviewees at the pregame to Jensen Huang's CES keynote, 5 Jan 2026

NB: Deepak Pathak (Skild AI CEO)

The CEO of Mercedes-Benz - 5 years into a partnership with NVIDIA that has become the world's safest car and robotic platform, Alpamayo.

For the NVIDIA pregame show before Jensen Huang's CES 2026 keynote, moderators included Mark Lipacis (Evercore), Vivek Arya (Bank of America Securities), and Sarah Guo (Conviction), with guests like Ola Källenius (Mercedes-Benz CEO), Deepak Pathak (Skild AI CEO), Harjot Gill (CodeRabbit CEO), Sridhar Ramaswamy (Snowflake CEO), and Shiv Rao (Abridge CEO) discussing AI, robotics, and the AI-native era.


Also during the keynote, fascinating recommendations by Jensen included:

the founder of Perplexity - developed the first multi-model answer engine

crediting DeepSeek as the first successful example of inference-time reasoning (not just pretraining an LLM) -

Google/Gemini AI clarifies:

At the CES 2026 keynote on January 5, 2026, Jensen Huang explicitly cited DeepSeek R1 and its successors as a "gift to the AI industry" that has forced a fundamental shift in how NVIDIA approaches inference. 
Huang detailed how DeepSeek’s breakthroughs in test-time scaling and reasoning have inspired NVIDIA to pivot its architecture from simple "one-shot" answers to complex, multi-step inference. 
Key Details of the "DeepSeek Inspiration"
  • The Paradigm Shift: Huang stated that the industry's old mental model—where AI only required massive compute for training—is "wrong". He credited DeepSeek with proving that "reasoning AI" consumes up to 100 times more compute during inference than non-reasoning AI (see the token-count sketch after this list).
  • Vera Rubin Architecture: Inspired by the need for more efficient reasoning power, Huang unveiled the Vera Rubin supercomputing architecture at CES 2026. It is specifically designed to handle the heavy inference demands of reasoning models, offering 5x the inference performance of the previous Blackwell chips.
  • Cost Reduction: Following DeepSeek’s demonstration of extreme cost-efficiency, NVIDIA's new Vera Rubin platform aims to reduce inference token costs by tenfold.
  • Open Ecosystem Support: Huang noted that DeepSeek R1 was the world's first open-source reasoning model to "shock the world". This has pushed NVIDIA to further integrate "Agentic AI" (like Nemotron) and open-source models into its software stack to ensure they run optimally on NVIDIA hardware.
  • Inference Context Memory: To support the long "thinking" chains required by reasoning models like DeepSeek's, NVIDIA announced a new inference context memory storage platform to optimize how data is stored during complex reasoning tasks (a rough sizing sketch also follows below).
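
To make the "up to 100 times more compute" point from the first bullet concrete, here is a minimal sketch. It assumes decoder compute scales roughly with the number of tokens generated and that a reasoning model emits a long hidden chain of thought before a short visible answer; the token counts and model size are illustrative assumptions, not figures from the keynote.

# Illustrative only: why reasoning-style inference can cost ~100x more
# compute than a one-shot answer. All numbers are assumptions.
ANSWER_TOKENS = 200           # tokens in the visible answer (assumed)
THINKING_TOKENS = 20_000      # hidden chain-of-thought tokens (assumed)
PARAMS = 70e9                 # assumed 70B-class model
FLOPS_PER_TOKEN = 2 * PARAMS  # rule of thumb: ~2 FLOPs per parameter per generated token

one_shot_flops = ANSWER_TOKENS * FLOPS_PER_TOKEN
reasoning_flops = (ANSWER_TOKENS + THINKING_TOKENS) * FLOPS_PER_TOKEN
print(f"one-shot inference : {one_shot_flops:.2e} FLOPs")
print(f"reasoning inference: {reasoning_flops:.2e} FLOPs")
print(f"ratio: ~{reasoning_flops / one_shot_flops:.0f}x")  # ~101x with these assumed token counts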
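
On the last bullet, a rough sizing sketch shows why long "thinking" chains push context data into a dedicated memory/storage tier: the key-value cache a transformer keeps for attention grows linearly with context length. The model dimensions below are assumptions for a generic 70B-class model with grouped-query attention, not a description of NVIDIA's platform.

# Illustrative KV-cache sizing for a long reasoning trace. Model dimensions
# are assumptions (generic 70B-class model, grouped-query attention, fp16).
NUM_LAYERS = 80
NUM_KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2          # fp16/bf16
CONTEXT_TOKENS = 100_000     # prompt plus a long chain of thought (assumed)

bytes_per_token = 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE  # K and V
total_gb = bytes_per_token * CONTEXT_TOKENS / 1e9
print(f"KV cache per token: {bytes_per_token / 1024:.0f} KiB")
print(f"KV cache at {CONTEXT_TOKENS:,} tokens: {total_gb:.1f} GB per sequence")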

Huang concluded that rather than DeepSeek making NVIDIA less relevant, its innovations have "ignited global enthusiasm" that is driving extraordinary demand for higher-performance chips capable of sustained, high-intensity inference. 

