ROI AI Brief: Investment Tech Weekly #12
Posted on 19 January, 2026

A weekly newsletter on technology applications in investment management, with an AI/LLM and automation angle. Curated news, announcements, and posts, drawn primarily from original sources. We apply some of ROI's Kubro(TM) Engine tools in the backend for production, with a human in the loop (for now). It's a start, and it will evolve week by week.

Disclaimers: the content below includes AI-generated summaries and is not fully human-verified. AI/LLMs may hallucinate and produce inaccurate summaries. Selected items only; not intended as a comprehensive view. For information purposes only. Please DM with feedback and requests.


1. SELECTED FROM ARXIV

🔹 QuantEval Benchmark Introduced for Assessing Large Language Models on Financial Quantitative Tasks

QuantEval is a benchmark introduced by Zhaolu Kang, Junhao Gong, Wenqing Hu, and colleagues on 13 January 2026 to evaluate large language models (LLMs) in financial quantitative tasks across three dimensions: knowledge-based question answering, quantitative mathematical reasoning, and quantitative strategy coding. QuantEval uniquely incorporates a CTA-style backtesting framework that executes model-generated strategies and assesses them with financial performance metrics. The authors evaluated state-of-the-art open-source and proprietary LLMs, finding significant gaps compared to human experts in reasoning and strategy coding. They also conducted supervised fine-tuning and reinforcement learning experiments showing consistent improvements.

🔗 Source: arxiv.org View Source | Found on Jan 14, 2026
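The CTA-style backtesting loop described above reduces, in miniature, to applying model-generated positions to asset returns and scoring the result with financial metrics. A minimal sketch, assuming daily signals in {-1, 0, +1} and a Sharpe-ratio score (the function and its annualisation factor are illustrative, not taken from the paper):

```python
import math

def backtest(signals, returns, periods_per_year=252):
    """Apply positions (-1/0/+1) to per-period asset returns and score the strategy."""
    pnl = [s * r for s, r in zip(signals, returns)]
    mean = sum(pnl) / len(pnl)
    var = sum((p - mean) ** 2 for p in pnl) / (len(pnl) - 1)
    # Annualised Sharpe ratio; guard against zero variance
    sharpe = mean / math.sqrt(var) * math.sqrt(periods_per_year) if var > 0 else 0.0
    cumulative = 1.0
    for p in pnl:
        cumulative *= 1 + p  # compound the per-period P&L
    return {"total_return": cumulative - 1, "sharpe": sharpe}

result = backtest([1, 1, -1, 0, 1], [0.01, -0.005, -0.02, 0.003, 0.007])
```

A strategy-coding benchmark would generate the `signals` from LLM-written code and rank models by metrics like these rather than by text similarity.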

🔹 Llama-3-8B Model Fine-Tuned with LoRA for Financial Named Entity Recognition

The paper by Zhiming Lian presents a method for financial named entity recognition (NER) using Meta's Llama 3 8B model, enhanced through instruction fine-tuning and Low-Rank Adaptation (LoRA). Each of the 1,693 annotated sentences in the corpus is formatted as an instruction-input-output triple to facilitate learning. The approach achieves a micro-F1 score of 0.894, outperforming Qwen3-8B, Baichuan2-7B, T5, and BERT-Base models. The study includes dataset statistics, training hyperparameters, and visualizations of entity density, learning curves, and evaluation metrics. Results demonstrate state-of-the-art performance for domain-sensitive NER tasks.

🔗 Source: arxiv.org View Source | Found on Jan 16, 2026
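The instruction-input-output formatting described above can be sketched as follows; the instruction wording and the entity labels (ORG, MONEY, DATE) are illustrative assumptions, not the paper's actual annotation schema:

```python
def to_triple(sentence, entities):
    """Format one annotated sentence as an instruction-input-output triple
    for supervised fine-tuning of a causal LM."""
    return {
        "instruction": "Extract all financial named entities (ORG, MONEY, DATE) from the input.",
        "input": sentence,
        "output": "; ".join(f"{text} [{label}]" for text, label in entities),
    }

triple = to_triple(
    "Acme Corp reported revenue of $2.1bn in Q3 2025.",
    [("Acme Corp", "ORG"), ("$2.1bn", "MONEY"), ("Q3 2025", "DATE")],
)
```

Each of the 1,693 annotated sentences would yield one such record; the LoRA adapter is then trained on the concatenated instruction + input to predict the output.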

🔹 Comparative Analysis of Prompt-Based Versus Fine-Tuned Large Language Models

The article "Simplify-This: A Comparative Analysis of Prompt-Based and Fine-Tuned LLMs" by Eilam Cohen, Itamar Bul, Danielle Inbar, and Omri Loewenbach presents a comparative study of prompt-based versus fine-tuned large language models (LLMs) for text simplification using encoder-decoder architectures across multiple benchmarks. The study finds that fine-tuned models consistently achieve stronger structural simplification, while prompt-based approaches yield higher semantic similarity scores but often copy inputs. Human evaluation favors the outputs from fine-tuned models overall. The authors release code, a cleaned derivative dataset, model checkpoints, and prompt templates to support reproducibility and future research.

🔗 Source: arxiv.org View Source | Found on Jan 12, 2026

🔹 Adaptive Dataflow System Developed for Financial Time-Series Synthesis

The article "History Is Not Enough: An Adaptive Dataflow System for Financial Time-Series Synthesis" by Haochong Xia and co-authors introduces a drift-aware dataflow system that integrates machine learning-based adaptive control into financial data curation. The system features a parameterized data manipulation module with single-stock transformations, multi-stock mix-ups, and curation operations, managed by an adaptive planner-scheduler using gradient-based bi-level optimization. This unified differentiable framework supports provenance-aware replay and continuous data quality monitoring. Experiments on forecasting and reinforcement learning trading tasks show improved model robustness and risk-adjusted returns, offering a generalizable solution for adaptive financial data management.

🔗 Source: arxiv.org View Source | Found on Jan 16, 2026

🔹 Multi-Agent Chain-of-Thought Method Counters Manipulative Bots in Memecoin Copy Trading

The article introduces an explainable multi-agent system for meme coin copy trading, inspired by asset management teams and utilizing few-shot chain-of-thought prompting to equip agents with professional trading knowledge. The system addresses challenges such as manipulative bots, uncertain wallet performance, and trade execution lag in the meme coin market. Empirical evaluation on transaction data from 1,000 meme coin projects demonstrates that the multi-agent approach achieves 73% precision in identifying high-quality projects and 70% precision in identifying key opinion leader (KOL) wallets. The selected KOLs collectively generated a total profit of $500,000 across these projects.

🔗 Source: arxiv.org View Source | Found on Jan 14, 2026

🔹 Knowledge Graph Improves Large Language Model Numerical Reasoning in Financial Documents

The article by Aryan Mishra and Akash Anil, submitted on 12 Jan 2026, presents a framework that enhances numerical reasoning in financial documents by integrating Knowledge Graphs (KGs) with Large Language Model (LLM) predictions. The KGs are extracted using a proposed schema directly from the processed document. The framework was evaluated on the FinQA benchmark dataset using the open-source Llama 3.1 8B Instruct model and demonstrated an approximate 12% improvement in execution accuracy compared to the vanilla LLM.

🔗 Source: arxiv.org View Source | Found on Jan 13, 2026
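A toy version of extracting numeric triples from financial text might look like the sketch below; the regex-based schema is a hypothetical illustration and far simpler than the schema proposed in the paper:

```python
import re

def extract_numeric_triples(text):
    """Pull (entity, metric, value) triples from simple financial sentences.
    A toy schema for illustration only."""
    pattern = re.compile(
        r"(?P<entity>[A-Z][\w&.\- ]+?)\s+(?:reported|posted)\s+"
        r"(?P<metric>net income|revenue|EPS)\s+of\s+\$?(?P<value>\d+(?:[.,]\d+)*)"
    )
    return [(m["entity"], m["metric"], m["value"]) for m in pattern.finditer(text)]

triples = extract_numeric_triples(
    "Acme Corp reported revenue of $4,200 while Beta Inc posted net income of $310."
)
```

Grounding the LLM's arithmetic in such structured triples, rather than raw text, is what drives the reported execution-accuracy gain on FinQA.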

🔹 Deep Learning Insights Improve Portfolio Optimization Methods

The article "Enhancing Portfolio Optimization with Deep Learning Insights" by Brandon Luo and Jim Skufca, submitted on 12 January 2026, presents a deep learning approach to portfolio optimization for long-only, multi-asset strategies across market cycles. The authors utilize pre-training techniques and transformer architectures to incorporate state variables when training models with limited regime data. Their evaluation demonstrates that these models outperform traditional methods, showing resilience in volatile markets and highlighting the importance of adaptive strategies for improved predictive accuracy in dynamic market conditions.

🔗 Source: arxiv.org View Source | Found on Jan 14, 2026
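For a long-only, multi-asset strategy, model outputs must map to non-negative weights that sum to one. A softmax mapping is one common way to enforce that constraint; this is an assumption for illustration, not necessarily the mechanism the paper uses:

```python
import math

def long_only_weights(scores):
    """Map raw model scores to long-only portfolio weights via softmax:
    every weight is non-negative and the weights sum to one."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

weights = long_only_weights([0.8, -0.2, 1.5])
```

The highest-scoring asset receives the largest weight, and no asset can be shorted, matching the long-only mandate.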


2. BIG TECH AI AND DATA

🔹 Anthropic Releases Economic Index Report on Economic Primitives

In November 2025, Claude usage was concentrated in coding tasks, with the top 10 tasks accounting for 24% of Claude.ai conversations and 32% of API traffic. Augmentation (collaborative use) rose to 52% on Claude.ai, while automation dominated API use at 74%. Usage was highest in the US, India, Japan, the UK, and South Korea; within the US, states with more tech workers had higher adoption. A 1% increase in GDP per capita correlated with a 0.7% rise in usage globally. Productivity gains were estimated at up to 1.8 percentage points annually but fell to about 1.0 after adjusting for task reliability.

🔗 Source: anthropic.com View Source | Found on Jan 15, 2026

🔹 AWS Launches European Sovereign Cloud and Announces Expansion in Multiple European Countries

On January 15, 2026, Amazon Web Services (AWS) announced the general availability of the AWS European Sovereign Cloud, an independent cloud infrastructure located entirely within the EU and separate from other AWS Regions. The first AWS Region launched in Brandenburg, Germany, with plans to expand Local Zones to Belgium, the Netherlands, and Portugal. Amazon will invest over €7.8 billion in Germany for this initiative, supporting about 2,800 jobs annually and contributing approximately €17.2 billion to Germany’s GDP. The cloud offers more than 90 services and is managed by EU residents under European governance with strict data residency and sovereignty controls.

🔗 Source: press.aboutamazon.com View Source | Found on Jan 15, 2026

🔹 NVIDIA and Lilly CEOs Present AI and Drug Discovery Blueprint

NVIDIA and Lilly announced a first-of-its-kind AI co-innovation lab at the J.P. Morgan Healthcare Conference in San Francisco, aiming to advance drug discovery by combining Lilly’s pharmaceutical expertise with NVIDIA’s AI leadership. The lab, based in the San Francisco Bay Area, will receive up to $1 billion in joint investment over five years for talent, infrastructure, and compute. It will operate under a scientist-in-the-loop framework integrating wet and dry labs for continuous learning. The initiative builds on Lilly’s DGX SuperPOD supercomputer and highlights collaborations with leaders such as Thermo Fisher and Multiply Labs for autonomous lab infrastructure.

🔗 Source: blogs.nvidia.com View Source | Found on Jan 13, 2026

🔹 Google's Gemini Launches Personal Intelligence Feature

Gemini’s Personal Intelligence feature offers personalized recommendations, such as travel plans and board games, by analyzing user data from Gmail and Photos only when users enable app connections. Users control which apps are connected and can turn off access at any time. Gemini references but does not directly train on personal data from Gmail or Google Photos; instead, it trains on prompts and responses after filtering or obfuscating personal information. The system provides source explanations for its answers, allows corrections or non-personalized chats, includes guardrails for sensitive topics, and enables users to adjust settings or delete chat history at any time.

🔗 Source: blog.google View Source | Found on Jan 14, 2026

🔹 LSEG Launches Digital Settlement House Platform for Instant Settlement Between Independent Payment Networks

LSEG has launched Digital Settlement House (LSEG DiSH), an open-access platform enabling programmatic and instantaneous settlement between independent payment networks, both on and off chain. The service uses commercial bank deposits held on the DiSH ledger (DiSH Cash) to facilitate 24/7 movement of money in multiple currencies and jurisdictions, supporting payment-versus-payment (PvP) and delivery-versus-payment (DvP) settlements for FX and digital asset transactions. Following a successful Proof of Concept with Digital Asset and leading financial institutions on the Canton Network, LSEG DiSH allows instant ownership transfer of tokenised commercial bank deposits, optimises liquidity management, reduces settlement risk, and operates through LSEG's Post Trade Solutions business.

🔗 Source: lseg.com View Source | Found on Jan 15, 2026

🔹 LSEG Partners with AWS to Enhance Real-Time Data Infrastructure

LSEG announced on January 15, 2026, its collaboration with AWS to transform its Real-Time – Full Tick and Real-Time – Optimized data processing capabilities by leveraging AWS services for the collection, routing, and distribution of financial data. LSEG’s private cloud will enable financial institutions to access comprehensive market data covering over 100 million instruments from more than 575 exchanges and trading venues worldwide, delivering hundreds of billions of daily updates. The real-time network peaks at up to 20 million messages per second. LSEG employs over 26,000 people globally and operates in 65 countries.

🔗 Source: press.aboutamazon.com View Source | Found on Jan 15, 2026