Views: 631 | Replies: 0

Latest Machine Learning Papers [1021]

Posted by jjkkasmine on 2019-10-27 02:07:53
ML: latest papers covering learning theory (cs.LG), statistical machine learning (stat.ML), and information retrieval (cs.IR)

[1] Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets
Link: http://arxiv.org/abs/1910.08051v1
Authors: Yogesh Balaji; Tom Goldstein; Judy Hoffman
Abstract: Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks. Despite its success as a defense mechanism, adversarial training fails to generalize well to the unperturbed test set.
[2] Single Episode Policy Transfer in Reinforcement Learning
Link: http://arxiv.org/abs/1910.07719v1
Authors: Jiachen Yang; Brenden Petersen; Hongyuan Zha; Daniel Faissol
Abstract: Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL).
[3] A Double Residual Compression Algorithm for Efficient Distributed Learning
Link: http://arxiv.org/abs/1910.07561v1
Authors: Xiaorui Liu; Yao Li; Jiliang Tang; Ming Yan
Abstract: Large-scale machine learning models are often trained by parallel stochastic gradient descent algorithms.
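The paper's double residual scheme compresses both the gradient and model streams; as background, the generic error-feedback (residual) compression loop that such methods build on can be sketched as follows. The function names and the top-k compressor are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def error_feedback_step(grad, residual, k):
    """Compress grad plus the carried-over residual; return the
    compressed update actually communicated and the new residual
    kept locally for the next round."""
    corrected = grad + residual
    sent = topk_compress(corrected, k)
    new_residual = corrected - sent   # compression error is not discarded
    return sent, new_residual

# Toy usage: one worker compressing a 6-dim gradient down to 2 entries.
residual = np.zeros(6)
grad = np.array([0.1, -3.0, 0.2, 2.5, -0.05, 0.4])
sent, residual = error_feedback_step(grad, residual, k=2)
```

Because the discarded coordinates accumulate in `residual`, they are eventually transmitted in later rounds, which is what keeps such compressed methods convergent.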
[4] Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction
Link: http://arxiv.org/abs/1910.08041v1
Notes: CoRL 2019
Authors: Ajay Jain; Sergio Casas; Renjie Liao; Yuwen Xiong; Song Feng; Sean Segal; Raquel Urtasun
Abstract: Self-driving vehicles plan around both static and dynamic objects, applying predictive models of behavior to estimate future locations of the objects in the environment.
[5] From Dark Matter to Galaxies with Convolutional Neural Networks
Link: http://arxiv.org/abs/1910.07813v1
Notes: 5 pages, 2 figures. Accepted to the Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019)
Authors: Jacky H. T. Yip; Xinyue Zhang; Yanfang Wang; Wei Zhang; Yueqiu Sun; Gabriella Contardo; Francisco Villaescusa-Navarro; Siyu He; Shy Genel; Shirley Ho
Abstract: Cosmological simulations play an important role in the interpretation of astronomical data, in particular in comparing observed data to our theoretical expectations.
[6] Graph Embedding VAE: A Permutation Invariant Model of Graph Structure
Link: http://arxiv.org/abs/1910.08057v1
Notes: Presented at the NeurIPS 2019 Workshop on Graph Representation Learning
Authors: Tony Duan; Juho Lee
Abstract: Generative models of graph structure have applications in biology and social sciences. The state of the art is GraphRNN, which decomposes the graph generation process into a series of sequential steps. While effective for modest sizes, it loses its permutation invariance for larger graphs.
[7] Predicting retrosynthetic pathways using a combined linguistic model and hyper-graph exploration strategy
Link: http://arxiv.org/abs/1910.08036v1
Authors: Philippe Schwaller; Riccardo Petraglia; Valerio Zullo; Vishnu H Nair; Rico Andreas Haeuselmann; Riccardo Pisoni; Costas Bekas; Anna Iuliano; Teodoro Laino
Abstract: We present an extension of our Molecular Transformer architecture combined with a hyper-graph exploration strategy for automatic retrosynthesis route planning without human intervention.
[8] Deep clustering with concrete k-means
Link: http://arxiv.org/abs/1910.08031v1
Authors: Boyan Gao; Yongxin Yang; Henry Gouk; Timothy M. Hospedales
Abstract: We address the problem of simultaneously learning a k-means clustering and deep feature representation from unlabelled data, which is of interest due to the potential of deep k-means to outperform traditional two-step feature extraction and shallow-clustering strategies.
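The "concrete" in the title refers to a differentiable relaxation of the hard cluster assignment, which is what makes joint training with a deep feature extractor possible. A minimal sketch of such a relaxation, with the stochastic Gumbel perturbation omitted and all names hypothetical, is a temperature-controlled softmax over negative squared distances:

```python
import numpy as np

def soft_assign(x, centroids, tau):
    """Relaxed k-means assignment: softmax over negative squared
    distances to each centroid. As tau -> 0 this approaches the
    hard argmin assignment used by ordinary k-means."""
    d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, k)
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Two points sitting exactly on two well-separated centroids:
x = np.array([[0.0, 0.0], [10.0, 10.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
assign = soft_assign(x, centroids, tau=0.1)
```

Because the assignment is a smooth function of the inputs, gradients flow through it back into the feature network, unlike the non-differentiable argmin of standard k-means.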
[9] A Unified Framework for Tuning Hyperparameters in Clustering Problems
Link: http://arxiv.org/abs/1910.08018v1
Authors: Xinjie Fan; Yuguang Yue; Purnamrita Sarkar; Y. X. Rachel Wang
Abstract: Selecting hyperparameters for unsupervised learning problems is difficult in general due to the lack of ground truth for validation.
[10] Why bigger is not always better: on finite and infinite neural networks
Link: http://arxiv.org/abs/1910.08013v1
Authors: Laurence Aitchison
Abstract: Recent work has shown that the outputs of convolutional neural networks become Gaussian process (GP) distributed when we take the number of channels to infinity.
[11] Dropping forward-backward algorithms for feature selection
Link: http://arxiv.org/abs/1910.08007v1
Authors: Thu Nguyen
Abstract: In this era of big data, feature selection techniques, which have long been proven to simplify models, make them more comprehensible, and speed up learning, have become increasingly important.
[12] On Concept-Based Explanations in Deep Neural Networks
Link: http://arxiv.org/abs/1910.07969v1
Authors: Chih-Kuan Yeh; Been Kim; Sercan O. Arik; Chun-Liang Li; Pradeep Ravikumar; Tomas Pfister
Abstract: Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding this high-level intelligence can be enabled by deciphering the concepts they base their decisions on, as in human-level thinking.
[13] A Stochastic Variance Reduced Nesterov's Accelerated Quasi-Newton Method
Link: http://arxiv.org/abs/1910.07939v1
Notes: Accepted in ICMLA 2019
Authors: Sota Yasuda; Shahrzad Mahboubi; S. Indrapriyadarsini; Hiroshi Ninomiya; Hideki Asai
Abstract: Recently, algorithms incorporating second order curvature information have become popular in training neural networks.
[14] Keyphrase Extraction from Disaster-related Tweets
Link: http://arxiv.org/abs/1910.07897v1
Notes: 12 pages, 7 figures
Authors: Jishnu Ray Chowdhury; Cornelia Caragea; Doina Caragea
Abstract: While keyphrase extraction has received considerable attention in recent years, relatively few studies exist on extracting keyphrases from social media platforms such as Twitter, and even fewer for extracting disaster-related keyphrases from such sources.
[15] WOTBoost: Weighted Oversampling Technique in Boosting for imbalanced learning
Link: http://arxiv.org/abs/1910.07892v1
Notes: 10 pages, 5 figures, 3 tables
Authors: Wenhao Zhang; Ramin Ramezani; Arash Naeim
Abstract: Machine learning classifiers often stumble over imbalanced datasets where classes are not equally represented. This inherent bias towards the majority class may result in low accuracy in labeling the minority class.
[16] An Information-Theoretic Perspective on the Relationship Between Fairness and Accuracy
Link: http://arxiv.org/abs/1910.07870v1
Authors: Sanghamitra Dutta; Dennis Wei; Hazar Yueksel; Pin-Yu Chen; Sijia Liu; Kush R. Varshney
Abstract: Our goal is to understand the so-called trade-off between fairness and accuracy. In this work, using a tool from information theory called Chernoff information, we derive fundamental limits on this relationship that explain why the accuracy on a given dataset often decreases as fairness increases.
[17] Effect of Superpixel Aggregation on Explanations in LIME -- A Case Study with Biological Data
Link: http://arxiv.org/abs/1910.07856v1
Authors: Ludwig Schallner; Johannes Rabold; Oliver Scholz; Ute Schmid
Abstract: End-to-end learning with deep neural networks, such as convolutional neural networks (CNNs), has been demonstrated to be very successful for different tasks of image classification. To make decisions of black-box approaches transparent, different solutions have been proposed.
[18] KDE sampling for imbalanced class distribution
Link: http://arxiv.org/abs/1910.07842v1
Authors: Firuz Kamalov
Abstract: Imbalanced response variable distribution is not an uncommon occurrence in data science. One common way to combat class imbalance is through resampling the minority class to achieve a more balanced distribution.
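The basic idea can be sketched generically: fit a kernel density estimate on the minority class and draw synthetic points from it. The sketch below uses the standard mixture view of a Gaussian KDE (pick a real minority point at random, add bandwidth-scaled Gaussian noise); the bandwidth value and helper name are illustrative, not the paper's exact estimator:

```python
import numpy as np

def kde_oversample(minority, n_new, bandwidth=0.3, rng=None):
    """Sample n_new synthetic minority points from a Gaussian KDE
    fit on the minority rows (one sample per row). Sampling the KDE
    mixture = pick a real point uniformly, add N(0, bandwidth^2) noise."""
    rng = rng or np.random.default_rng(0)
    idx = rng.integers(0, len(minority), size=n_new)
    noise = rng.normal(scale=bandwidth, size=(n_new, minority.shape[1]))
    return minority[idx] + noise

# Toy usage: a 2-d minority class clustered around (5, 5).
rng = np.random.default_rng(0)
minority = rng.normal(loc=5.0, scale=1.0, size=(30, 2))
synthetic = kde_oversample(minority, n_new=200, rng=rng)
```

Unlike naive duplication, the synthetic points are perturbed copies, so the rebalanced training set does not contain exact repeats of minority rows.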
[19] Sharper bounds for uniformly stable algorithms
Link: http://arxiv.org/abs/1910.07833v1
Notes: 14 pages
Authors: Olivier Bousquet; Yegor Klochkov; Nikita Zhivotovskiy
Abstract: Generalization bounds for stable algorithms are a classical question in learning theory, taking root in the early works of Vapnik and Chervonenkis and of Rogers and Wagner.
[20] Overcoming Forgetting in Federated Learning on Non-IID Data
Link: http://arxiv.org/abs/1910.07796v1
Notes: Accepted to NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality
Authors: Neta Shoham; Tomer Avidor; Aviv Keren; Nadav Israel; Daniel Benditkis; Liron Mor-Yosef; Itai Zeitak
Abstract: We tackle the problem of Federated Learning in the non-i.i.d. case, in which local models drift apart, inhibiting learning. Building on an analogy with Lifelong Learning, we adapt a solution for catastrophic forgetting to Federated Learning.
[21] Cascading: Association Augmented Sequential Recommendation
Link: http://arxiv.org/abs/1910.07792v1
Notes: 29 pages, 13 figures
Authors: Xu Chen; Kenan Cui; Ya Zhang; Yanfeng Wang
Abstract: Recently, recommendation according to sequential user behaviors has shown promising results in many application scenarios. Generally speaking, real-world sequential user behaviors usually reflect a hybrid of sequential influences and association relationships.
[22] Indoor Information Retrieval using Lifelog Data
Link: http://arxiv.org/abs/1910.07784v1
Authors: Deepanwita Datta
Abstract: Studying human behaviour through lifelogging has seen an increase in attention from researchers over the past decade.
[23] Achieving Robustness to Aleatoric Uncertainty with Heteroscedastic Bayesian Optimisation
Link: http://arxiv.org/abs/1910.07779v1
Notes: Accepted to the 2019 NeurIPS Workshop on Safety and Robustness in Decision Making
Authors: Ryan-Rhys Griffiths; Miguel Garcia-Ortegon; Alexander A. Aldrick; Alpha A. Lee
Abstract: Bayesian optimisation is an important decision-making tool for high-stakes applications in drug discovery and materials design. An oft-overlooked modelling consideration however is the representation of input-dependent or heteroscedastic aleatoric uncertainty.
[24] Teaching Vehicles to Anticipate: A Systematic Study on Probabilistic Behavior Prediction using Large Data Sets
Link: http://arxiv.org/abs/1910.07772v1
Notes: Intended for submission to IEEE Transactions on Intelligent Transportation Systems (T-ITS); 15 pages, 13 figures, 12 tables
Authors: Florian Wirthmüller; Julian Schlechtriemen; Jochen Hipp; Manfred Reichert
Abstract: Observations of traffic participants and their environment enable humans to drive road vehicles safely. However, when being driven, there is a notable difference between having a non-experienced vs. an experienced driver.
[25] Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations
Link: http://arxiv.org/abs/1910.07763v1
Notes: Submitted as conference paper at the Eighth International Conference on Learning Representations (ICLR 2020)
Authors: Andreas Kopf; Vincent Fortuin; Vignesh Ram Somnath; Manfred Claassen
Abstract: Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively.
[26] Annealed Denoising Score Matching: Learning Energy-Based Models in High-Dimensional Spaces
Link: http://arxiv.org/abs/1910.07762v1
Authors: Zengyi Li; Yubei Chen; Friedrich T. Sommer
Abstract: Energy-Based Models (EBMs) output unnormalized log-probability values given data samples. Such an estimation is essential in a variety of applications such as sample generation, denoising, sample restoration, outlier detection, Bayesian reasoning, and many more.
[27] Reducing the Computational Complexity of Pseudoinverse for the Incremental Broad Learning System on Added Inputs
Link: http://arxiv.org/abs/1910.07755v1
Authors: Hufei Zhu; Chenghao Wei
Abstract: In this brief, we improve the Broad Learning System (BLS) [7] by reducing the computational complexity of the incremental learning for added inputs. We utilize the inverse of a sum of matrices in [8] to improve a step in the pseudoinverse of a row-partitioned matrix.
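The "inverse of a sum of matrices" used here is presumably the Sherman-Morrison-Woodbury identity (whether this matches reference [8] exactly is an assumption), which expresses the inverse of a low-rank-updated matrix in terms of quantities already computed:

```latex
(A + UCV)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}
```

For a row-partitioned matrix, an identity of this form lets the pseudoinverse be updated incrementally when new input rows are appended, instead of being recomputed from scratch.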
[28] Toward Subject Invariant and Class Disentangled Representation in BCI via Cross-Domain Mutual Information Estimator
Link: http://arxiv.org/abs/1910.07747v1
Notes: 8 pages, 2 figures
Authors: Eunjin Jeon; Wonjun Ko; Jee Seok Yoon; Heung-Il Suk
Abstract: Recently, deep learning-based feature representation methods have shown a promising impact in electroencephalography (EEG)-based brain-computer interfaces (BCI).
[29] A Survey of Deep Learning Techniques for Autonomous Driving
Link: http://arxiv.org/abs/1910.07738v1
Notes: 38 pages, 7 figures
Authors: Sorin Grigorescu; Bogdan Trasnea; Tiberiu Cocias; Gigel Macesanu
Abstract: The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence.
[30] Autoregressive Models: What Are They Good For?
Link: http://arxiv.org/abs/1910.07737v1
Notes: Accepted for the Information Theory and Machine Learning workshop at NeurIPS 2019
Authors: Murtaza Dalal; Alexander C. Li; Rohan Taori
Abstract: Autoregressive (AR) models have become a popular tool for unsupervised learning, achieving state-of-the-art log likelihood estimates.
[31] Collaborative Filtering with Label Consistent Restricted Boltzmann Machine
Link: http://arxiv.org/abs/1910.07724v1
Notes: 6 pages, ICAPR 2017, Code: https://github.com/sagarverma/LC-CFRBM
Authors: Sagar Verma; Prince Patel; Angshul Majumdar
Abstract: The possibility of employing restricted Boltzmann machine (RBM) for collaborative filtering has been known for about a decade. However, there has been hardly any work on this topic since 2007. This work revisits the application of RBM in recommender systems.
[32] Communication-Efficient Asynchronous Stochastic Frank-Wolfe over Nuclear-norm Balls
Link: http://arxiv.org/abs/1910.07703v1
Authors: Jiacheng Zhuo; Qi Lei; Alexandros G. Dimakis; Constantine Caramanis
Abstract: Large-scale machine learning training suffers from two prior challenges, specifically for nuclear-norm constrained problems with distributed systems: the synchronization slowdown due to the straggling workers, and high communication costs.
[33] Probabilistic Deterministic Finite Automata and Recurrent Networks, Revisited
Link: http://arxiv.org/abs/1910.07663v1
Notes: 15 pages, 4 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/pdfarnr.htm
Authors: S. E. Marzen; J. P. Crutchfield
Abstract: Reservoir computers (RCs) and recurrent neural networks (RNNs) can mimic any finite-state automaton in theory, and some workers demonstrated that this can hold in practice.
[34] Tensor Graph Convolutional Networks for Prediction on Dynamic Graphs
Link: http://arxiv.org/abs/1910.07643v1
Notes: A shorter version of this paper was accepted as an extended abstract at the NeurIPS 2019 Workshop on Graph Representation Learning
Authors: Osman Asif Malik; Shashanka Ubaru; Lior Horesh; Misha E. Kilmer; Haim Avron
Abstract: Many irregular domains such as social networks, financial transactions, neuron connections, and natural language structures are represented as graphs. In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs.
[35] Optimal Transport Based Generative Autoencoders
Link: http://arxiv.org/abs/1910.07636v1
Notes: 15 pages
Authors: Oliver Zhang; Ruei-Sung Lin; Yuchuan Gou
Abstract: The field of deep generative modeling is dominated by generative adversarial networks (GANs). However, the training of GANs often lacks stability, fails to converge, and suffers from mode collapse.
[36] A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Link: http://arxiv.org/abs/1910.07629v1
Notes: *: Equal Contribution, 14 pages
Authors: Tao Yu; Shengyuan Hu; Chuan Guo; Wei-Lun Chao; Kilian Q. Weinberger
Abstract: Natural images are virtually surrounded by low-density misclassified regions that can be efficiently discovered by gradient-guided search, enabling the generation of adversarial images.
[37] Generalized Clustering by Learning to Optimize Expected Normalized Cuts
Link: http://arxiv.org/abs/1910.07623v1
Authors: Azade Nazi; Will Hang; Anna Goldie; Sujith Ravi; Azalia Mirhoseini
Abstract: We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity.
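For background, the classical spectral relaxation of the 2-way normalized cut (the non-learned baseline such end-to-end methods build on) splits nodes by the sign of the Fiedler vector of the normalized Laplacian. A minimal sketch, assuming a symmetric weight matrix with no isolated nodes:

```python
import numpy as np

def normalized_cut_bipartition(W):
    """Classical spectral relaxation of the 2-way normalized cut:
    split nodes by the sign of the eigenvector for the second-smallest
    eigenvalue (the Fiedler vector) of the symmetric normalized
    Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Toy graph: two 3-node cliques joined by a single weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
labels = normalized_cut_bipartition(W)
```

The relaxation is exact only up to the sign-rounding step; the paper's contribution is to optimize an expected normalized cut with a learned model instead of this fixed eigendecomposition.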
[38] Learning chordal extensions
Link: http://arxiv.org/abs/1910.07600v1
Notes: Submitted to Journal of Global Optimization
Authors: Defeng Liu; Andrea Lodi; Mathieu Tanneau
Abstract: A highly influential ingredient of many techniques designed to exploit sparsity in numerical optimization is the so-called chordal extension of a graph representation of the optimization problem.
[39] Active Learning for Graph Neural Networks via Node Feature Propagation
Link: http://arxiv.org/abs/1910.07567v1
Notes: 16 pages, 5 figures
Authors: Yuexin Wu; Yichong Xu; Aarti Singh; Yiming Yang; Artur Dubrawski
Abstract: Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data.
[40] Comment: Reflections on the Deconfounder
Link: http://arxiv.org/abs/1910.08042v1
Notes: Comment to appear in JASA discussion of "The Blessings of Multiple Causes."
Authors: Alexander D'Amour
Abstract: The aim of this comment (set to appear in a formal discussion in JASA) is to draw out some conclusions from an extended back-and-forth I have had with Wang and Blei regarding the deconfounder method proposed in "The Blessings of Multiple Causes" [arXiv:1805.06826]. I will make three points here.
[41] Computationally Efficient CFD Prediction of Bubbly Flow using Physics-Guided Deep Learning
Link: http://arxiv.org/abs/1910.08037v1
Notes: Under review at the International Journal of Multiphase Flow
Authors: Han Bao; Jinyong Feng; Nam Dinh; Hongbin Zhang
Abstract: To realize efficient computational fluid dynamics (CFD) prediction of two-phase flow, a multi-scale framework was proposed in this paper by applying a physics-guided data-driven approach.
[42] DeepFork: Supervised Prediction of Information Diffusion in GitHub
Link: http://arxiv.org/abs/1910.07999v1
Notes: 12 pages, 7 figures, 2 tables
Authors: Ramya Akula; Niloofar Yousefi; Ivan Garibay
Abstract: Information spreads on complex social networks extremely fast; in other words, a piece of information can go viral in no time. Often it is hard to barricade this diffusion prior to the significant occurrence of chaos, be it social media or an online coding platform.
[43] Universal Text Representation from BERT: An Empirical Study
Link: http://arxiv.org/abs/1910.07973v1
Authors: Xiaofei Ma; Peng Xu; Zhiguo Wang; Ramesh Nallapati; Bing Xiang
Abstract: We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks.
[44] Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control
Link: http://arxiv.org/abs/1910.07972v1
Authors: Lukas Hermann; Max Argus; Andreas Eitel; Artemij Amiranashvili; Wolfram Burgard; Thomas Brox
Abstract: We propose Adaptive Curriculum Generation from Demonstrations (ACGD) for reinforcement learning in the presence of sparse rewards.
[45] Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics
Link: http://arxiv.org/abs/1910.07948v1
Notes: Accepted at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Video at https://www.youtube.com/watch?v=oQgHG9JdMP4
Authors: Oier Mees; Maxim Tatarchenko; Thomas Brox; Wolfram Burgard
Abstract: We present a convolutional neural network for joint 3D shape prediction and viewpoint estimation from a single input image. During training, our network gets the learning signal from a silhouette of an object in the input image, a form of self-supervision.
[46] Ranking variables and interactions using predictive uncertainty measures
Link: http://arxiv.org/abs/1910.07942v1
Authors: Topi Paananen; Michael Riis Andersen; Aki Vehtari
Abstract: For complex nonlinear supervised learning models, assessing the relevance of input variables or their interactions is not straightforward due to the lack of a direct measure of relevance, such as the regression coefficients in generalized linear models.
[47] H-VECTORS: Utterance-level Speaker Embedding Using A Hierarchical Attention Model
Link: http://arxiv.org/abs/1910.07900v1
Authors: Yanpei Shi; Qiang Huang; Thomas Hain
Abstract: In this paper, a hierarchical attention network to generate utterance-level embeddings (H-vectors) for speaker identification is proposed.
[48] Can I teach a robot to replicate a line art
Link: http://arxiv.org/abs/1910.07860v1
Notes: 9 pages, accepted for the 2020 Winter Conference on Applications of Computer Vision (WACV '20); Supplementary Video: https://youtu.be/nMt5Dw04XhY
Authors: Raghav Brahmadesam Venkataramaiyer; Subham Kumar; Vinay P. Namboodiri
Abstract: Line art is arguably one of the fundamental and versatile modes of expression. We propose a pipeline for a robot to look at a grayscale line art and redraw it.
[49] Calculating Optimistic Likelihoods Using (Geodesically) Convex Optimization
Link: http://arxiv.org/abs/1910.07817v1
Authors: Viet Anh Nguyen; Soroosh Shafieezadeh-Abadeh; Man-Chung Yue; Daniel Kuhn; Wolfram Wiesemann
Abstract: A fundamental problem arising in many areas of machine learning is the evaluation of the likelihood of a given observation under different nominal distributions. Frequently, these nominal distributions are themselves estimated from data, which makes them susceptible to estimation errors.
[50] Service Wrapper: a system for converting web data into web services
Link: http://arxiv.org/abs/1910.07786v1
Authors: Naibo Wang; Zhiling Luo; Xiya Lyu; Zitong Yang; Jianwei Yin
Abstract: Web services are widely used in many areas via callable APIs; however, data are not always available in this way. We often need to extract data from web pages whose structure is irregular.
[51] CFEA: Collaborative Feature Ensembling Adaptation for Domain Adaptation in Unsupervised Optic Disc and Cup Segmentation
Link: http://arxiv.org/abs/1910.07638v1
Authors: Peng Liu; Bin Kong; Zhongyu Li; Shaoting Zhang; Ruogu Fang
Abstract: Recently, deep neural networks have demonstrated performance comparable to, and even better than, that of board-certified ophthalmologists on well-annotated datasets.
[52] Path homologies of deep feedforward networks
Link: http://arxiv.org/abs/1910.07617v1
Notes: To appear in the proceedings of IEEE ICMLA 2019
Authors: Samir Chowdhury; Thomas Gebhart; Steve Huntsman; Matvey Yutin
Abstract: We provide a characterization of two types of directed homology for fully-connected, feedforward neural network architectures. These exact characterizations of the directed homology structure of a neural network architecture are the first of their kind.
[53] The Role of Coded Side Information in Single-Server Private Information Retrieval
Link: http://arxiv.org/abs/1910.07612v1
Notes: 40 pages; presented in part at the 2018 IEEE Information Theory Workshop, Guangzhou, China, November 2018, and the 2019 IEEE International Symposium on Information Theory, Paris, France, July 2019
Authors: Anoosheh Heidarzadeh; Fatemeh Kazemi; Alex Sprintson
Abstract: We study the role of coded side information in single-server Private Information Retrieval (PIR).
[54] Global Saliency: Aggregating Saliency Maps to Assess Dataset Artefact Bias
Link: http://arxiv.org/abs/1910.07604v1
Notes: Machine Learning for Health (ML4H) Workshop at NeurIPS 2019
Authors: Jacob Pfau; Albert T. Young; Maria L. Wei; Michael J. Keiser
Abstract: In high-stakes applications of machine learning models, interpretability methods provide guarantees that models are right for the right reasons.
[55] Contextual Joint Factor Acoustic Embeddings
Link: http://arxiv.org/abs/1910.07601v1
Authors: Yanpei Shi; Qiang Huang; Thomas Hain
Abstract: Embedding acoustic information into fixed length representations is of interest for a whole range of applications in speech and audio technology. We propose two novel unsupervised approaches to generate acoustic embeddings by modelling of acoustic context.
[56] Scaling up Psychology via Scientific Regret Minimization: A Case Study in Moral Decision-Making
Link: http://arxiv.org/abs/1910.07581v1
Authors: Mayank Agrawal; Joshua C. Peterson; Thomas L. Griffiths
Abstract: Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects.