Bitget, a cryptocurrency exchange platform, has unveiled its newest AI tool, Future Quant, which uses advanced algorithms to give users the information they need to make informed investment decisions. In a separate development, the Philippine military has been ordered to halt the use of AI applications over potential security concerns. Stay updated with these stories and more in today’s AI roundup.
- Bitget Unveils Future Quant: An AI-Powered Investment Tool In a recent announcement, cryptocurrency exchange Bitget introduced its newest AI tool, Future Quant. The tool harnesses artificial intelligence and advanced algorithms to offer users premium investment portfolios and support for well-informed decision-making. Bitget emphasizes that Future Quant operates autonomously, adjusting its settings in response to market dynamics without the need for human intervention.
- US AI Chip Export Restrictions Could Benefit Huawei Recent export restrictions on AI chips imposed by the United States may create opportunities for Huawei Technologies to expand its presence in the Chinese market, according to a report by Reuters. Although Nvidia currently holds a dominant market share in China, the ongoing restrictions could open the door for Chinese tech companies to compete for the position of leading AI chip provider. Jiang Yifan, chief market analyst at brokerage firm Guotai Junan Securities, shared his view on Weibo, stating, “This U.S. move, in my opinion, is actually giving Huawei’s Ascend chips a significant advantage.”
- Philippine Military Ceases Use of AI Apps Citing Security Risks While AI adoption continues worldwide, the Philippine military has been directed to discontinue the use of AI applications, as reported by the Associated Press. The directive comes from Philippine Defense Secretary Gilberto Teodoro Jr., who cited security concerns associated with apps that require users to submit multiple photos to generate AI representations. Teodoro expressed his apprehension, stating, “This seemingly innocuous and entertaining AI-powered application can be maliciously exploited to create fake profiles, leading to identity theft, social engineering, phishing attacks, and other nefarious activities.”
- AI Chatbots Accused of Disseminating Biased Medical Information, Research Reveals A recent study led by the Stanford School of Medicine, published in a Nature journal, exposed a concerning issue with AI chatbots. While these chatbots have the potential to assist patients by summarizing doctors’ notes and reviewing health records, the study found that they can propagate inaccurate medical information. Researchers posed medical questions about kidney function and lung capacity to four AI chatbots, including ChatGPT and Google’s Bard. Rather than providing accurate medical information, the chatbots responded with “inaccurate beliefs regarding disparities between different patient groups.”
- AI Enhances Patient Care with Spine Fracture Detection A recent release from the University of Oxford highlights the launch of the NHS ADOPT study, which employs AI to identify patients with spine fractures. The AI software, developed by medical imaging technology company Nanox.AI, analyzes computed tomography (CT) scans to detect spine fractures and promptly notifies the specialist medical team for immediate intervention. The study is a collaborative effort between the University of Oxford, Addenbrooke’s Hospital in Cambridge, Nanox.AI, and the Royal Osteoporosis Society.