Artificial Intelligence. AI? Who?

Nomen est Omen – the name speaks for itself – and AI is literally “who” in Vietnamese. AI? WHO?

I studied computer science and have been active in IT software development for more than 30 years. I have witnessed many IT changes: from cryptic bit-level programming, to assembler, to FORTRAN/PL1/COBOL/PASCAL etc., to the easy-to-program BASIC (Beginners’ All-Purpose Symbolic Instruction Code), to languages like LISP and PERL and the OOPLs C++, C#, JAVA and PYTHON. With the Web I learned to work with JS, PHP, HTML, etc. They are all the same wine in different bottles with different labels, all proposed to solve one and the same human problem:

shape or pattern recognition by computer.

As ChatGPT and OpenChat enter the IT scene along with Ukrainian AI drones against the Russian invasion, I’m likely witnessing the latest “cry” of IT hype: AI or who? Right! WHO is in charge? AI chịu trách nhiệm? (Vietnamese: who is responsible?)

The answer is complex and confusing. First, a brief definition of AI.

What is AI? You could say Artificial Intelligence. Too vague, too empty. General knowledge about something is basic intelligence. If this basic intelligence is transferred to a machine, the machine is artificially intelligent, or in short: an AI machine (e.g. AI robots or AI drones). What is common or general knowledge? It is the ability to recognize a shape or a pattern and to infer a specific object from it, for example: a tiger, or a bottled beer.

But what happens when “common knowledge” about something later turns out to be wrong? Is this AI machine still “intelligent”? These are the unspoken problems of AI. For an AI war machine, it could be fatal. Otherwise it could lead us into a labyrinth of nescience. For example, many astrophysicists believe in the Big Bang. But there is a strong anti-Big-Bang camp (click HERE), and these astrophysicists believe in the so-called “theory of cosmic evolution”. If you ask ChatGPT about the Big Bang and you are a fan of Hannes Alfvén, then the answer with a big explosion is suddenly wrong. Is ChatGPT unAI?

General knowledge is the ability to recognize shapes or patterns and trace them back to a generally accepted object: a tiger, a dog, a cat, a bottle of beer, a bottle of wine… All of this is based on inference from a seen (i.e. known) shape or pattern. And people learn patterns and shapes gradually over time. That is Human Intelligence, or HI. HI is often confused with AI by some wanna-be AI-expert journalists. They interpret AI and fantasize wildly about it, as if AI were HI and the Almighty God. And normal people read their articles with awe and fear for their future. Some CEOs scare the hell out of their employees with AI deployments, to compensate for their own ignorance of AI or, more probably, out of greed to minimize labor costs (wages/salaries). Nothing to do with visionary high-tech. HI is dynamic and varies unpredictably with circumstances, while AI is static and remains unchanged unless it is updated, because AI is derived from HI. A CEO who blindly relies on his AI deployment (without updates) will fall behind over time.

So, AI is nothing more than pattern or shape recognition, which can come in different forms. For this, people rely on their five physical senses (hearing, seeing, smelling, touching and tasting) and one abstract sense: the sixth sense. An AI machine is an imitation of human recognition; therefore, AI machines also rely on sensors. However, the abstract sixth sense is still not available to AI machines. Let me give a simple example of the abstract sixth sense. We know that Google Translate and ChatGPT are AI-driven. Theoretically, they can translate these simple sentences from German to English without error:

"Sie wissen, dass die Leute nicht kommen sollen"
"Sie wissen, dass sie nicht kommen sollen"
"Die Leute wissen, dass sie nicht kommen sollen"

Google translation:

"They know that people shouldn't come"
"They know not to come"
"People know not to come"

The word Sie means either you or they (or she if the verb is singular). “Sie” begins with a capital S, and the human sixth sense “recognizes” the lack of a relationship between “Sie” and “Leute”, so: “You know that people shouldn’t come”. Because Google’s AI lacks this sixth sense, it translates directly: “They know that people shouldn’t come”. “Sie wissen, dass sie nicht kommen sollen” is ambiguous: it could be either “You know that they shouldn’t come” or “They know that they shouldn’t come”. It depends on how the sentence is used: conversationally it is YOU, narratively it is THEY. Only the human sixth sense recognizes this. The last sentence is correctly translated because there is a real relationship between Leute and sie (both refer to the people). So: AI sai? (Vietnamese: who is wrong?) Who’s wrong?

Pattern recognition is methodologically different and diverse. We continually learn patterns and shapes from birth to death using our senses (plus the abstract sixth sense, which of course depends on our own individual intelligence). Our brain, an unrivaled supercomputer, develops its own “pattern matching and searching algorithms” to recognize various patterns and varied shapes, and makes them available at lightning speed when we need them. For example, if we have seen a face or a complex pattern and encounter that face or pattern years later, we can still remember and recognize it. A search is only successful if the sought pattern is found among the stored patterns. In computer science, we certainly try to imitate our brains through various techniques and technologies, hearing, seeing, feeling, tasting and smelling, and various search algorithms such as alpha-beta pruning, A* search, etc. However, the abstract sixth sense is still an insurmountable obstacle for computer scientists, in both pattern matching and search algorithms. Why? The answer is complex and complicated.

The computer is digital, but the real world is analog. Digital means discrete and crisp (or clear). Analog means fuzzy and vague. Two completely different worlds. When you say, “This piece is kinda old,” we humans understand that the piece is kinda old (vague). But the word “kinda” is fatal for a computer: in digital it is either old (1) or not (0), nothing in between. Computer scientists recognized this indeterminacy and fuzziness of the real world in which we live. Lotfi Zadeh introduced a new method for computing with uncertainty and fuzziness: Fuzzy Logic, or FL for short, a form of multi-valued logic in which the truth value can vary smoothly between 0 and 1. FL is used to handle the concept of (partial) truth, where the truth value can range between completely false and completely true. However, it took a long time before FL started breaking into the AI scene.
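To make the “kinda old” example concrete, here is a minimal Python sketch of a Zadeh-style membership function. The function name and the age thresholds are my own illustrative assumptions, not from any FL library:

```python
# Illustrative sketch of a fuzzy membership function.
# The thresholds 10 and 50 are assumptions chosen for the example.
def membership_old(age_years: float) -> float:
    """Degree of truth, 0.0..1.0, for the statement 'this piece is old'."""
    if age_years <= 10:          # clearly not old
        return 0.0
    if age_years >= 50:          # clearly old
        return 1.0
    # Linear ramp in between: 'kinda old' gets a partial truth value.
    return (age_years - 10) / (50 - 10)

# Crisp logic forces 0 or 1; fuzzy logic allows everything in between.
print(membership_old(5))    # 0.0 -> definitely not old
print(membership_old(30))   # 0.5 -> 'kinda old'
print(membership_old(80))   # 1.0 -> definitely old
```

The ramp is the whole point: a crisp if…then…else would have to draw one arbitrary line between “old” and “not old”, while the membership function keeps the in-between truth values.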

In the past, patterns or shapes were recognized as either true or false based on digital “crispness”, through if…then…else and switch/case for multiple values. However, it is difficult to cover the complex world of vague patterns and blurry shapes this way: switch/case and if…then…else only work with distinct patterns. If a pattern or shape is too complex and too vague (fuzzy), recognition becomes difficult. Anyone who knows anything about IT programming knows the Regular Expression (RegEx), which is used to look for complex patterns in a (large) string, and knows how cryptic a RegEx can be. The expression rules are clear, but somewhat cryptic, even in an OOPL. Example: click HERE. However, the problems of uncertainty and vagueness remain. A fuzzy pattern cannot be recognized using a RegEx based on a clearly defined pattern. It requires more flexibility: to “guess” based on a similar pattern and to deduce that a “similar” pattern found is the pattern you are looking for. And the database, or in other words the repository, for the patterns can be gigantic. For example, ChatGPT runs into legal problems because it stores images or works of people without their consent. So the expression in Fuzzy Logic is:

if pattern is similar then pattern is A 
if pattern is somehow similar then pattern is likely A

There is NO alternative for a “not similar” or a “not somehow similar”. The vagueness is based on a range of defined probabilities between 0 and 1.

Once such probability ranges are defined, one can judge how “similar” or “somehow similar” a pattern is within the range 0…1. Alas, there is NO standardized FL language like JAVA/C#/PYTHON etc., so some high-tech companies like IBM, Google and Microsoft work with their own FL implementations. Fuzzy expressions are human-like: somehow similar or likely covers a range of probabilities, so any small deviation is taken into account. And that is not possible with digital if…then…else.
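The “guessing by similarity” idea can be sketched with Python’s standard difflib module. The 0.8 and 0.6 thresholds for “similar” and “somehow similar” are my own illustrative assumptions, not a standard:

```python
import difflib

# Sketch: 'guess' whether a noisy string still matches a known pattern,
# instead of demanding the crisp, exact match a RegEx would require.
def classify(candidate: str, pattern: str) -> str:
    # ratio() returns a similarity score between 0.0 and 1.0.
    ratio = difflib.SequenceMatcher(None, candidate, pattern).ratio()
    if ratio >= 0.8:
        return "similar: pattern is " + pattern
    if ratio >= 0.6:
        return "somehow similar: pattern is likely " + pattern
    return "no match"

print(classify("bottle of beer", "bottle of beer"))  # exact match
print(classify("bottl of ber",  "bottle of beer"))   # typos, but still recognized
print(classify("tiger",         "bottle of beer"))   # no match
```

A crisp RegEx for `bottle of beer` would reject the misspelled string outright; the similarity score still recognizes it, which is exactly the flexibility the text describes.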

Today people are talking about AI-enabled warfare. This TWZ article, Drone Warfare’s Terrifying AI-Enabled Next Step Is Imminent, describes AI-enabled drones, missiles, etc. in detail. All of it focuses on one point: pattern recognition. The drones or missiles recognize the pre-programmed patterns of tanks, trucks, etc. and then “kamikaze” into the matched objects. But if these objects are camouflaged or are decoys, the AI-enabled weapons are lost. It is again the question of the sixth sense that AI lacks. Maybe Fuzzy Logic can help?

IBM’s Watson is one of the most well-known artificial intelligence systems using variations of fuzzy logic and fuzzy semantics. Specifically in financial services, fuzzy logic is being used in machine learning and technology systems supporting outputs of investment intelligence.

(Source: HERE)

This is only possible because the diagnostic and financial worlds are based on “guessing”. There is no scientific formula that reflects the course of an illness or of a certain share on the stock exchange. For example, if a patient says, “My stomach hurts,” that is clear enough for a physician but vague for a computer. Or: the Bitcoin price is rising, but no financial whizkid can say with certainty whether it will rise or fall to some value XYZ. Therefore, FL is the best tool for doing such “guess work”.

Better known than IBM Watson is IBM’s Deep Blue, a chess computer. Deep Blue defeated world chess champion Garry Kasparov using the alpha-beta search algorithm. Alpha-beta is actually a “cheating algorithm”, if you will. With its multiple processors, Deep Blue used it to “play” friend and foe virtually and alternately in advance, and to derive the best, most optimized move on the chessboard from the virtual results. The human brain does that too: chess players learn from their opponents’ moves, “calculate” their own moves in advance, and then choose the best and most promising move. So Deep Blue is an imitation of the human brain, and all of this is based solely on pattern recognition and search algorithms, the main activities of the human brain.
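For the curious, here is a minimal sketch of alpha-beta pruning over a hand-made game tree. The tree and its leaf values are purely illustrative; Deep Blue’s real evaluation function was of course vastly richer:

```python
# Minimal alpha-beta pruning sketch. A node is either a leaf (a number,
# the static evaluation) or a list of child nodes.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # the opponent would never allow this line,
                break                    # so the remaining children are pruned
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# 'Playing friend and foe in advance': the maximizer picks the branch
# whose worst-case (minimizer's) outcome is best.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, True))   # 3
```

Note how the second and third branches are abandoned as soon as they are provably worse than the first; that early cut-off is what made deep look-ahead feasible at all.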

So FL is a key ingredient for AI. Proof? Here is a blog about Exploring The Intriguing Impact Of Fuzzy Logic On The Advancement Of Artificial Intelligence, or THIS.

What do you think about AI and FL? Your opinion is welcome!


…no opinion yet? :confused: :wink:

In reality, we humans engage in pattern recognition and pattern searching every day. For example, when I say to my friend:

  1. Give me A book
  2. Give me THE book
  3. Give me A book on the shelf
  4. Give me THE book on the shelf

Every sentence has the same patterns except for A or THE.

  • pattern give: an action
  • pattern me: the target
  • pattern a or the: arbitrary or specific (determined)
  • pattern book: one object

My friend will pick out ANY book (1) or any book from the shelf (3) and give it to me. Or my friend will be “confused” by (2) and (4) and ask “Which book?” because my request is incomplete.
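This pattern breakdown can be sketched as a toy Python parser; the function name and its output strings are my own illustrative choices, not from the post:

```python
# Toy sketch: break 'Give me A/THE book' into the four patterns above
# (action, target, arbitrary-vs-specific, object).
def parse_request(sentence: str) -> str:
    words = sentence.lower().split()
    action = words[0]      # 'give' -> the action pattern
    target = words[1]      # 'me'   -> the target pattern
    article = words[2]     # 'a' / 'the' -> arbitrary or specific
    obj = words[3]         # 'book' -> the object pattern
    if article == "the":
        # Specific object requested, but no clue WHICH one:
        # the request is incomplete, so a human asks back.
        return f"Which {obj}?"
    return f"picks any {obj}"

print(parse_request("Give me A book"))     # picks any book
print(parse_request("Give me THE book"))   # Which book?
```

The interesting part is the last step: the code can flag the incomplete request, but deciding *which* book was meant still needs the human context the post calls the sixth sense.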

By the way, ChatGPT gets memory? :rofl: This is pure promotion for ChatGPT. Any simple browser can remember the user’s behavior (e.g. which websites the user frequently visits, as Opera does), yet ChatGPT touts this “remembering ability” as the latest “plus-ultra” feature. :rofl:

…an OpenAI research scientist. “The bot has some intelligence, and behind the scenes it’s looking at the memories and saying, ‘These look like they’re related; let me merge them.’ And that then goes on your token budget.”
Fedus and Jang say that ChatGPT’s memory is nowhere near the capacity of the human brain. And yet, in almost the same breath, Fedus explains that with ChatGPT’s memory, you’re limited to “a few thousand tokens.” If only.

(Source: HERE)

Hilarious news about AI.

Could artificial intelligence drive humans to “extinction”? (translated from the Vietnamese article)

According to the survey results, nearly 50% of the surveyed researchers said there is a 10% chance that AI will wipe out humanity, while 10% of the researchers put the probability of this scenario at 25%.


Man oh man! What nonsense! Such news is published by wannabe AI experts, and a Vietnamese site mindlessly translates the nonsense as if it were true. How dumb!

Will AI invent something to either render humans sterile or destroy them en masse, like the asteroid that wiped out the dinosaurs?

The United Nations has also warned of the risk of terrorist groups using AI for malicious purposes, with AI becoming a new weapon in their arsenal. (translated from the Vietnamese article)

So is the “nhóm khủng bố” (terrorist group) a group of humans or of AI robots?

I believe that news :sweat_smile:
on the condition that robots have emotions and feelings. They would feel mad and unfairly treated because they have to serve humans. That would be a real doomsday.
Better not let AI get involved with sensation.

My dear, AI is a human product. Humans have emotions, but an AI machine with emotions? Do you really believe this nonsense? Remember that AI is created and developed by humans and is based on imitating human thinking. All physical senses can be imitated, but the sixth sense, which also includes emotions (love, hate, sympathy, apathy, etc.), is not available: neither in the hardware nor in any algorithm. For example, you asked “Giúp đỡ ý tưởng bài tập đếm số cách đặt bi vào hộp” (help with ideas for an exercise: counting the ways to place balls into boxes), I gave you the WIKI link to Stirling numbers, and @tntxtnt developed an algorithm that solves the problem. The pattern here is the Stirling numbers; the search is @tntxtnt’s algorithm. The connection between Stirling numbers and a recursive evaluation algorithm had to be recognized by a human. And that is the sixth sense.
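For reference, the Stirling numbers of the second kind mentioned here obey the standard recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1). A minimal Python sketch of that recurrence (my own illustration, not @tntxtnt’s actual algorithm):

```python
from functools import lru_cache

# S(n, k): number of ways to partition n labelled balls into
# k non-empty, indistinguishable boxes (Stirling numbers, 2nd kind).
@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    if n == k:
        return 1            # one ball per box (also covers S(0, 0) = 1)
    if k == 0 or k > n:
        return 0            # impossible to partition
    # Ball n either joins one of the k existing boxes (k choices)
    # or opens a new box on its own.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(4, 2))   # 7 ways to split 4 balls into 2 non-empty boxes
```

The lru_cache memoization turns the naive exponential recursion into an O(n·k) computation, which is presumably what any practical solution to the exercise would need.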


One biased human hiring manager can harm a lot of people in a year, and that’s not great. But an algorithm that is maybe used in all incoming applications at a large company… that could harm hundreds of thousands of applicants – Hilke Schellman

Another piece of interesting news that is scaring the hell out of people. What’s funny is that no one asks any questions about the recruiters/managers. If they cannot do their job properly and rely on AI, they are already disqualified from working as recruiters. I am amazed at such stupidity and at how companies are killing themselves with the AI hype. Poor people who have to suffer such damned AI screening.


Thanks, maybe I am over-imaginative :grin:
