AI千集
  • We have an OpenClaw-like product. It is not general-purpose: it is built specifically to help you make money from writing!

    icodebase · 5 days ago · AI Applications · OpenClaw
    0 / 37
  • A beginner-friendly guide to AI self-media: from sign-up to monetization, just follow along

    icodebase · 17 days ago · AIGC · AI self-media, solo entrepreneurship
    0 / 229
  • The Agent wealth boom is here: no stockpiling, no all-nighters, a new digital-asset opportunity for ordinary people

    icodebase · 17 days ago · AIGC · the fourth wealth wave
    0 / 114
  • OpenClaw best practices: ditching Feishu, I moved my entire AI team to Discord

    icodebase · 22 days ago · AIGC · Discord, OpenClaw
    0 / 102
  • 2025 year-end review & new-year plans

    icodebase · 24 days ago · AI Basics · year-end review
    0 / 90
  • Why you should definitely use OpenCode

    openoker · 28 days ago · AI Applications · opencode
    0 / 94
  • OpenClaw remote-access configuration guide

    icodebase · 2026-02-05 12:49:17 · AI Applications · OpenClaw, remote access
    0 / 207
  • OpenClaw installation and configuration tutorial

    icodebase · 2026-02-05 11:28:47 · AI Basics · OpenClaw, Agent
    0 / 240
  • OpenClaw (formerly ClawdBot/Moltbot): download, install, and get started, a detailed illustrated tutorial

    icodebase · 2026-01-31 22:23:53 · AI Basics · OpenClaw, ClawdBot, Moltbot
    0 / 300
  • Complete guide: editing images with Umaxing

    icodebase · 2026-01-29 10:00:56 · AIGC · umaxing
    0 / 147
  • Contacting the Umaxing team

    icodebase · 2026-01-29 09:48:13 · AIGC · umaxing
    0 / 191
  • OpenCode and OpenSkills integration API documentation

    icodebase · 2026-01-28 20:58:28 · AI Basics · opencode, openskills
    0 / 210
  • OpenSkills zero-to-one beginner's guide

    icodebase · 2026-01-28 20:12:21 · AIGC · openskills
    0 / 188
  • Extending Claude with Skills

    icodebase · 2026-01-28 18:46:56 · AI Basics · Skills
    0 / 191
  • Hands-on: installing OpenCode and using custom models

    icodebase · 2026-01-23 17:11:08 · AI Basics · opencode, AI programming, automated coding
    0 / 288
  • OpenCode + Oh My OpenCode configuration and development tutorial

    icodebase · 2026-01-17 10:43:44 · AI Basics · opencode
    0 / 3748
  • OpenCode + Superpowers installation tutorial

    icodebase · 2026-01-16 15:39:20 · AI Basics · opencode, Superpowers
    0 / 2042
  • The AI programming singularity has arrived

    icodebase · 2026-01-16 10:30:24 · Programming Basics · AI programming
    0 / 280
  • Mastering ulw, ralph-loop, and efficient workflows

    icodebase · 2026-01-15 19:55:16 · Programming Basics · opencode, ulw, ralph-loop
    0 / 517
  • Configuring China mirror sources for Conda

    icodebase · 2026-01-13 10:35:19 · AI Basics · conda
    0 / 182
  • Dify workflow: using a crawler to batch-save WeChat official-account articles into an AI knowledge base

    openoker · 2025-11-04 11:35:47 · AI Applications · dify, workflow, crawler
    0 / 843
  • The Transformer model explained in detail

    icodebase · 2025-10-27 19:42:25 · AI Basics · large models, AI, transformer
    0 / 351
  • AI Video Factory user guide

    icodebase · 2025-10-20 12:33:45 · AI Applications · AI video
    0 / 768
  • Docker: packaging a running container into an image and saving it locally

    icodebase · 2025-10-13 17:36:25 · AI Basics · Docker
    0 / 435
  • Xinference deployment tutorial

    icodebase · 2025-10-11 14:20:25 · AIGC · Xinference, large models
    0 / 2067
  • Hands-on test: running the Qwen3 32B model on an RTX 4090

    icodebase · 2025-07-13 10:39:56 · AI Basics · Qwen3
    0 / 582
  • Installing the NVIDIA driver on Ubuntu 22.04

    openoker · 2025-07-03 19:41:50 · AI Basics · NVIDIA
    0 / 501
  • Setting the screensaver/lock-screen timeout in xfce4

    openoker · 2025-07-03 14:18:13 · AI Basics · xfce
    0 / 680
  • Detailed parameter walkthrough for serving Qwen3-32B with vLLM

    openoker · 2025-06-20 09:55:04 · AI Basics · vLLM
    0 / 10029
  • AI TransPDF: AI-powered PDF document translation

    132****1139 · 2025-05-23 11:58:15 · AIGC
    0 / 776
  • Qwen3: how to run and fine-tune it

    openoker · 2025-05-14 10:41:36 · AI Basics · Qwen3, fine-tuning, unsloth
    0 / 6537
  • How to efficiently run and fine-tune Qwen3 models locally

    openoker · 2025-05-10 13:29:05 · AI Basics · Qwen3, llama.cpp
    0 / 20111
  • llama.cpp usage tutorial

    openoker · 2025-05-10 11:01:23 · AI Basics · llama.cpp
    0 / 834
  • A usage guide for the uv package manager

    icodebase · 2025-04-21 14:01:21 · AI Basics · uv, Python package installer
    0 / 3406
  • Building and quantizing GGUF models and uploading them to HuggingFace and ModelScope

    icodebase · 2025-04-14 17:07:48 · AI Basics · GGUF, GPUStack, Ollama
    0 / 1701
  • Addressing the difficulty traditional medical LLMs have in fully meeting personalized needs

    icodebase · 2025-04-11 21:47:59 · AI Healthcare · medical LLMs, personalized needs
    0 / 1006
  • LLMs + prediction and management of long-term chronic disease

    icodebase · 2025-04-11 11:28:36 · AI Healthcare · chronic-disease prediction, disease prediction
    0 / 977
  • A dual Agentic-RAG + GraphRAG framework pushes personalized-treatment accuracy to 100%

    icodebase · 2025-04-10 11:40:50 · AI Healthcare · Agentic-RAG, Graph-RAG
    0 / 1111
  • Fundamentals of immunology

    openoker · 2025-03-29 22:36:25 · AI Healthcare · immunology
    0 / 1026
  • How to interpret a lymphocyte-subset test report?

    openoker · 2025-03-29 19:28:27 · AI Healthcare · immune testing
    0 / 855
  • Notes on fine-tuning DeepSeek 1.5B with LLaMA-Factory + AutoDL

    openoker · 2025-03-16 21:55:15 · AI Basics · LLaMA-Factory, DeepSeek, fine-tuning
    0 / 1513
  • Usage notes on uv, the Python dependency-management tool

    openoker · 2025-03-10 16:40:51 · AI Basics · uv, python, pip
    0 / 4735
  • Speeding up PyTorch installation with China mirror sources

    有贸 · 2025-03-08 21:02:55 · AI Basics · China mirrors, pytorch, mirror
    0 / 1717
  • Migrating Open WebUI from SQLite to PostgreSQL

    有贸 · 2025-03-02 20:45:40 · AI Applications · open-webui, PostgreSQL, knowledge base
    0 / 1184
  • Ollama installation guide: fixing slow downloads and stalled installs in China

    icodebase · 2025-02-16 14:52:51 · AI Basics · Ollama
    0 / 7126
  • Platform selection for an AI-managed account-matrix publishing system

    openoker · 2025-02-03 13:45:28 · AI Applications · matrix system, content operations
    0 / 1265
  • Installing Python 3.8 on Ubuntu

    icodebase · 2025-01-23 20:31:31 · AI Basics · python
    0 / 1102
  • Certificate verification failed: The certificate is NOT trusted

    icodebase · 2025-01-23 19:48:22 · AI Basics · Linux
    0 / 984
  • Where to rate the Qianji Assistant WeChat mini-program, and how

    openoker · 2025-01-19 13:51:43 · AI Applications · Qianji Assistant
    0 / 1389
  • "The Complete GPT Prompt Collection", Basic Edition (Part 5)

    shadow · 2025-01-09 17:22:03 · AIGC
    0 / 1042
Announcement

AI千集 is an intelligent platform focused on research services.
Here you can access the platform's self-trained research agents,
explore AI together with others, and build your own AI bot.
Come to AI千集 and stay a step ahead.

Latest News
  • Deep Reinforcement Learning (DRL) has achieved remarkable success in domains with well-defined reward structures, such as Atari games and locomotion. In contrast, dexterous manipulation lacks general-purpose reward formulations and typically depends on task-specific, handcrafted priors to guide hand-object interactions. We propose Contact Coverage-Guided Exploration (CCGE), a general exploration method designed for general-purpose dexterous manipulation tasks. CCGE represents contact state as the intersection between object surface points and predefined hand keypoints, encouraging dexterous hands to discover diverse and novel contact patterns, namely which fingers contact which object regions. It maintains a contact counter conditioned on discretized object states obtained via learned hash codes, capturing how frequently each finger interacts with different object regions. This counter is leveraged in two complementary ways: (1) to assign a count-based contact-coverage reward that promotes exploration of novel contact patterns, and (2) to provide an energy-based reaching reward that guides the agent toward under-explored contact regions. We evaluate CCGE on a diverse set of dexterous manipulation tasks, including cluttered object singulation, constrained object retrieval, in-hand reorientation, and bimanual manipulation. Experimental results show that CCGE substantially improves training efficiency and success rates over existing exploration methods, and that the contact patterns learned with CCGE transfer robustly to real-world robotic systems. Project page is
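The count-based coverage reward the CCGE abstract describes can be illustrated with a minimal sketch. The state hashing, the `1/sqrt(n)` decay, and all names below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import defaultdict

def hash_state(object_state, n_buckets=1024):
    # Hypothetical stand-in for the paper's learned hash codes:
    # coarsely discretize a continuous object state into a bucket id.
    return hash(tuple(round(x, 1) for x in object_state)) % n_buckets

class ContactCounter:
    """Counts visits to (object-state bucket, contact pattern) pairs and
    emits a coverage reward that decays as a pattern becomes familiar."""
    def __init__(self):
        self.counts = defaultdict(int)

    def reward(self, object_state, contact_pattern):
        # contact_pattern: set of (hand_keypoint, object_region) pairs,
        # i.e. which fingers touch which object regions.
        key = (hash_state(object_state), frozenset(contact_pattern))
        self.counts[key] += 1
        # Novel patterns earn ~1; repeats decay as 1/sqrt(visit count).
        return 1.0 / math.sqrt(self.counts[key])
```

A policy's extrinsic task reward would be augmented with this intrinsic term, so trajectories that reach unseen finger-region combinations are preferred during training.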
  • Vision Language Models (VLMs) exhibit persistent hallucinations in counting tasks, with accuracy substantially lower than other visual reasoning tasks (excluding sentiment). This phenomenon persists even in state-of-the-art reasoning-capable VLMs. Conversely, CNN-based object detection models (ODMs) such as YOLO excel at spatial localization and instance counting with minimal computational overhead. We propose GroundCount, a framework that augments VLMs with explicit spatial grounding from ODMs to mitigate counting hallucinations. In the best case, our prompt-based augmentation strategy achieves 81.3% counting accuracy on the best-performing model (Ovis2.5-2B), a 6.6 pp improvement, while reducing inference time by 22% through elimination of hallucination-driven reasoning loops for stronger models. We conduct comprehensive ablation studies demonstrating that positional encoding is a critical component, being beneficial for stronger models but detrimental for weaker ones. Confidence scores, by contrast, introduce noise for most architectures, and their removal improves performance in four of five evaluated models. We further evaluate feature-level fusion architectures, finding that explicit symbolic grounding via structured prompts outperforms implicit feature fusion despite sophisticated cross-attention mechanisms. Our approach yields consistent improvements across four of five evaluated VLM architectures (6.2-7.5 pp), with one architecture exhibiting degraded performance due to incompatibility between its iterative reflection mechanisms and structured prompts. These results suggest that counting failures stem from fundamental spatial-semantic integration limitations rather than architecture-specific deficiencies, while highlighting the importance of architectural compatibility in augmentation strategies.
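The prompt-based augmentation GroundCount describes, injecting detector labels (and, for stronger models, box positions) into the VLM prompt, might look roughly like this sketch; the function name and prompt format are hypothetical:

```python
def detections_to_prompt(question, detections, include_positions=True):
    """Render object-detector output as structured text prepended to a VLM
    query. `detections` is a list of (label, (x1, y1, x2, y2)) tuples, the
    shape of output a YOLO-style detector yields after thresholding."""
    lines = [f"Detected {len(detections)} objects:"]
    for i, (label, box) in enumerate(detections, 1):
        if include_positions:
            x1, y1, x2, y2 = box
            # Positional encoding: state the bounding box explicitly.
            lines.append(f"{i}. {label} at ({x1}, {y1}, {x2}, {y2})")
        else:
            lines.append(f"{i}. {label}")
    lines.append(f"Question: {question}")
    return "\n".join(lines)
```

The `include_positions` flag mirrors the abstract's ablation finding: positional grounding helps stronger models but hurts weaker ones, so it would be toggled per architecture.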
  • The rapid evolution and inherent complexity of modern software requirements demand highly flexible and responsive development methodologies. While Agile frameworks have become the industry standard for prioritizing iteration, collaboration, and adaptability, software development teams continue to face persistent challenges in managing constantly evolving requirements and maintaining product quality under tight deadlines. This article explores the intersection of Artificial Intelligence (AI) and Software Engineering (SE) to analyze how AI serves as a powerful catalyst for enhancing agility and fostering innovation. The research combines a comprehensive review of existing literature with an empirical study, utilizing a survey directed at Software Engineering professionals to assess the perception, adoption, and impact of AI-driven tools. Key findings reveal that the integration of AI (specifically through Machine Learning (ML) and Natural Language Processing (NLP)) facilitates the automation of tedious tasks, from requirement management to code generation and testing. This paper demonstrates that AI not only optimizes current Agile practices but also introduces new capabilities essential for sustaining quality, speed, and innovation in the future landscape of software development.
  • Human uplift studies - or studies that measure AI effects on human performance relative to a status quo, typically using randomized controlled trial (RCT) methodology - are increasingly used to inform deployment, governance, and safety decisions for frontier AI systems. While the methods underlying these studies are well-established, their interaction with the distinctive properties of frontier AI systems remains underexamined, particularly when results are used to inform high-stakes decisions. We present findings from interviews with 16 expert practitioners with experience conducting human uplift studies in domains including biosecurity, cybersecurity, education, and labor. Across interviews, experts described a recurring tension between standard causal inference assumptions and the object of study itself. Rapidly evolving AI systems, shifting baselines, heterogeneous and changing user proficiency, and porous real-world settings strain assumptions underlying internal, external, and construct validity, complicating the interpretation and appropriate use of uplift evidence. We synthesize these challenges across key stages of the human uplift research lifecycle and map them to practitioner-reported solutions, clarifying both the limits and the appropriate uses of evidence from human uplift studies in high-stakes decision-making.
  • VLMs have become increasingly proficient at a range of computer vision tasks, such as visual question answering and object detection. This includes increasingly strong capabilities in the domain of art, from analyzing artwork to generating art. In an interdisciplinary collaboration between computer scientists and art historians, we characterize the mechanisms underlying VLMs' ability to predict artistic style and assess the extent to which they align with the criteria art historians use to reason about artistic style. We employ a latent-space decomposition approach to identify concepts that drive art style prediction and conduct quantitative evaluations, causal analysis, and assessment by art historians. Our findings indicate that 73% of the extracted concepts are judged by art historians to exhibit a coherent and semantically meaningful visual feature, and 90% of concepts used to predict the style of a given artwork were judged relevant. In cases where an irrelevant concept was used to successfully predict style, art historians identified possible reasons for its success; for example, the model might "understand" a concept in more formal terms, such as dark/light contrasts.
  • We present IsalGraph, a method for representing the structure of any finite, simple graph as a compact string over a nine-character instruction alphabet. The encoding is executed by a small virtual machine comprising a sparse graph, a circular doubly-linked list (CDLL) of graph-node references, and two traversal pointers. Instructions either move a pointer through the CDLL or insert a node or edge into the graph. A key design property is that every string over the alphabet decodes to a valid graph, with no invalid states reachable. A greedy GraphToString algorithm encodes any connected graph into a string in time polynomial in the number of nodes; an exhaustive-backtracking variant produces a canonical string by selecting the lexicographically smallest shortest string across all starting nodes and all valid traversal orders. We evaluate the representation on five real-world graph benchmark datasets (IAM Letter LOW/MED/HIGH, LINUX, and AIDS) and show that the Levenshtein distance between IsalGraph strings correlates strongly with graph edit distance (GED). Together, these properties make IsalGraph strings a compact, isomorphism-invariant, and language-model-compatible sequential encoding of graph structure, with direct applications in graph similarity search, graph generation, and graph-conditioned language modelling.
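The string metric the IsalGraph evaluation correlates with graph edit distance is standard Levenshtein distance: the minimum number of unit-cost insertions, deletions, and substitutions turning one string into another. A compact two-row dynamic-programming implementation, for reference:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit-cost insert/delete/substitute, computed
    with the classic dynamic program in O(len(a) * len(b)) time while
    keeping only two rows of the table in memory."""
    if len(a) < len(b):
        a, b = b, a              # keep the inner loop over the shorter string
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]
```

Computing this over pairs of encoded strings is how one would reproduce the abstract's correlation study against GED on the benchmark datasets.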
粤ICP备18152112号 网信算备330110507206401230035号 粤公网安备44030302001590号 © 2018-2025 AI千集 All Rights Reserved