Upgraded AI helps detect abnormalities

 With upgraded versions of AI, users have more control over personal data

At the recent Google I/O event, Google launched Gemini 1.5, an artificial intelligence (AI) model with many new features, including the ability to analyze code, text, audio recordings, and videos of longer duration than its predecessor could handle.


Detect suspicious words


In particular, Gemini 1.5 Pro is expected to be added to Gmail, Google Docs, and other apps in the near future, becoming a multi-purpose tool in Workspace that helps users retrieve information from Drive. Another AI model is Gemini Live, which lets users interact with their smartphones using natural voice. At this event, Google also drew attention by bringing an AI model called Gemini Nano to Android smartphones, with a feature that detects suspicious words in conversations to alert users to scam calls. Currently, Gemini Nano is integrated on the Pixel 8 Pro and the Galaxy S24 series.


Sharing opinions on social networks, users expect Gemini Nano to gradually appear on more Android models on the market. However, some expressed concern that call data could be exposed if permissions are granted to Google. Google maintains that call data is protected and stored only on the user's phone, that the scam-call warning feature can be turned on or off, and that Gemini Nano operates independently on the device, with no internet connection, so neither Google nor third parties can access it.
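

For illustration only, the Python sketch below shows how an on-device check of this kind could work in principle: the call transcript stays on the device, it is matched against a local list of suspicious phrases, and the user's on/off toggle simply skips the scan. The phrase list, function names, and keyword-matching approach are invented for this example; Google's actual feature relies on the Gemini Nano model running on the phone, not a fixed word list.

# Hypothetical illustration only: this is not Google's Gemini Nano code.
# The check runs entirely locally, so nothing is sent over the network.
SUSPICIOUS_PHRASES = [  # example phrases invented for this sketch
    "gift card",
    "wire the money immediately",
    "your account will be suspended",
    "verify your one-time password",
]

def scan_transcript(transcript: str, warnings_enabled: bool = True) -> list[str]:
    """Return suspicious phrases found in a locally stored call transcript.

    warnings_enabled mirrors the on/off toggle described in the article;
    when the user disables the feature, nothing is scanned at all.
    """
    if not warnings_enabled:
        return []
    text = transcript.lower()
    return [phrase for phrase in SUSPICIOUS_PHRASES if phrase in text]

if __name__ == "__main__":
    sample = "Please buy a gift card and verify your one-time password now."
    hits = scan_transcript(sample)
    if hits:
        print("Possible scam call, flagged phrases:", hits)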


Earlier, OpenAI also announced a new AI model called GPT-4o, said to increase ChatGPT's power fivefold. Restructured with optimized algorithms and training methods, GPT-4o provides more accurate answers and maintains high consistency in long conversations, minimizing common errors and improving context understanding so the model can respond more accurately to complex questions.


OpenAI says the GPT-4o interface ensures that personal interactions and information exchanged between users and the AI are handled securely. Meta, the parent company of Facebook and Instagram, announced that it will stop developing the Workplace business platform to focus on AI products and the metaverse. According to observers in the technology world, Meta's upcoming AI projects will give users a better experience through personalized advertising, suitable content recommendations, and direct translation across many languages, while the company also develops privacy policies to protect user information.


At a recent event, Microsoft introduced a new feature called "Recall," which the company had earlier referred to as AI Explorer. This AI tool will be integrated on Copilot+ PCs, helping users track everything done on the computer, from web browsing to voice chat, and creating a history stored on the machine that users can search when they need to remember something they did before. Recall also presents this data in a visual timeline, allowing users to easily scroll through and explore all of their activity on the computer. The included Live Captions feature lets users conveniently search online meetings and videos, and transcribe and translate speech. Microsoft commits that Recall will operate privately on the device, so user data remains safe; users can pause, stop, or delete captured content, or choose to exclude specific apps or websites.
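

As an illustration of the behaviour described above, the following Python sketch models a purely local activity history with a searchable timeline, an exclusion list, and a way to delete captured content. The class and method names are invented for this example and are not Microsoft's implementation; they only mirror the properties the article attributes to Recall.

# Hypothetical illustration only: not Microsoft's Recall implementation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActivityEntry:
    timestamp: datetime
    app: str
    description: str

class LocalActivityHistory:
    def __init__(self, excluded_apps: set[str] | None = None):
        self.entries: list[ActivityEntry] = []       # kept only on the local machine
        self.excluded_apps = excluded_apps or set()  # apps the user chose to exclude

    def record(self, app: str, description: str) -> None:
        if app in self.excluded_apps:
            return  # respect the user's exclusion list
        self.entries.append(ActivityEntry(datetime.now(), app, description))

    def search(self, keyword: str) -> list[ActivityEntry]:
        keyword = keyword.lower()
        return [e for e in self.entries if keyword in e.description.lower()]

    def timeline(self) -> list[ActivityEntry]:
        return sorted(self.entries, key=lambda e: e.timestamp)

    def clear(self) -> None:
        self.entries.clear()  # the user can delete captured content

history = LocalActivityHistory(excluded_apps={"Banking App"})
history.record("Browser", "Read an article about Gemini 1.5")
history.record("Banking App", "Checked balance")  # skipped: excluded app
print([e.description for e in history.search("gemini")])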


Gemini 1.5 Pro answers user questions in diagram form in just a few seconds. Photo: Le Tinh




Remind and warn users




According to security expert Pham Dinh Thang, interaction with users has raised the intelligence level of AI. AI models will therefore keep changing drastically and becoming more streamlined, but this also increases risks for users: impersonation of faces and voices (deepfakes) will become harder to detect because they appear highly authentic.


Therefore, technology companies such as Google or OpenAI must pay attention to protecting the safety of users' data. "We cannot put all our trust in the data security commitments of technology companies; users need to protect themselves first. Do not visit strange AI websites of unknown origin, because the likelihood that they contain malicious code is very high. Do not access information that is not related to you. At the same time, regularly keep up with news about AI scams," Mr. Thang recommended.


Meanwhile, Mr. Huynh Trong Tho, an information security expert, said that when using AI products, users should not worry too much about their data, because companies collect personal data to analyze it and provide appropriate warnings and keywords that personalize the experience. "The fraud rate will decrease a lot after Google and OpenAI apply their new models. The fraud warning is a strong reminder that makes users stop before scammers' sweet invitations," Mr. Tho said.

