Logic. As an example, non-textbook math questions. I asked ChatGPT 3.5 this: in how many combinations can four friends (A, B, C, and D) stand in line so that A and C are not next to each other?
It answered 20, the correct answer is 12.
All possible combinations:
abcd,abdc,adbc,adcb, cbad,cbda,cdba,cdab, bcda,badc, dcba,dabc
Its answer
To solve this, let’s first consider the total number of combinations when there are no restrictions. Since there are 4 friends, there are 4! (4 factorial) ways to arrange them, which equals 24 combinations.
Now, let’s count the number of combinations where A and C are next to each other. Since A and C can be arranged in 2 ways, and for each arrangement, the other two friends (B and D) can be arranged in 2! ways, the total number of combinations where A and C are next to each other is 2 * 2! = 4.
So, the number of combinations where A and C cannot be next to each other is the total number of combinations minus the number of combinations where A and C are next to each other:
24 - 4 = 20 combinations.
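The slip is in the second step: 3.5 never accounts for the 3 positions the AC pair can occupy in the line, so it counts 2 × 2! = 4 adjacent arrangements instead of 3 × 2 × 2! = 12. If you want to verify the answer yourself, a quick brute-force check (my own Python sketch, not part of the exchange) settles it:

```python
from itertools import permutations

# Enumerate all 4! = 24 line-ups and keep those where A and C are apart.
valid = ["".join(p) for p in permutations("ABCD")
         if abs(p.index("A") - p.index("C")) != 1]

print(len(valid))     # 12, matching the hand-listed combinations above
print(sorted(valid))
```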
The difference between 3.5 and 4 is substantial. Here is what 4 says
To find the number of combinations in which four friends (A, B, C, and D) can stand in line such that A and C are not next to each other, we can use the following approach:
Total Combinations: First, calculate the total number of ways in which four people can be arranged in a line. This is 4! (4 factorial), since there are 4 slots to fill, each choice reducing the number of available choices by one for the next slot:
4! = 4 × 3 × 2 × 1 = 24
Unwanted Combinations (Where A and C are next to each other): Consider A and C as a single unit. This effectively reduces the number of units to arrange from 4 to 3 (the AC unit, B, and D). These three units can be arranged in 3! ways:
3! = 3 × 2 × 1 = 6
However, within the AC unit, A and C can switch places, so there are 2 ways to arrange A and C within their unit. Therefore, the total number of arrangements where A and C are next to each other is:
3! × 2 = 6 × 2 = 12
Subtracting Unwanted Combinations: Subtract the number of unwanted combinations (where A and C are next to each other) from the total number of combinations:
24 - 12 = 12
Thus, there are 12 combinations where A and C are not next to each other.
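The unit trick can be checked mechanically too (again my own sketch, not part of the quoted answer): build every line-up with A and C together from the 3! orderings of the three units and the 2 internal orders of the pair, and confirm the count.

```python
from itertools import permutations

# Construct the "unwanted" arrangements directly: order the three units
# (the AC block, B, D), then expand the block in both internal orders.
adjacent = set()
for units in permutations(["AC", "B", "D"]):
    line_up = "".join(units)
    adjacent.add(line_up)                       # A immediately before C
    adjacent.add(line_up.replace("AC", "CA"))   # C immediately before A

print(len(adjacent))        # 3! * 2 = 12
print(24 - len(adjacent))   # leaves 12 line-ups with A and C apart
```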
It is true that newer models that have ingested more training data are better at this kind of thing, but not because they are using logic: they are copying and following examples they have already learnt, if that makes sense. I got the question from a test given to kids aged 12-13, so arguably it wasn’t really that challenging. If you want, you can try the more advanced problems from the same place I got it from, although it’s in Spanish, so pass it through Google Translate first.
If you ask programmers, they’ll tell you that AI usually makes mistakes no human would normally make, such as inventing variables that don’t exist. That’s because in the examples it learnt from, those variables mostly did exist.
What I mean to say is: if you give an AI a problem that is not in its training data and can only be solved using logic (so you can’t apply what is used in other problems), it will be incapable of solving it. The Internet is so vast that almost everything has been written about, so AIs will seem to know how to solve any problem, but it is no more than an illusion.
HOWEVER, if we manage to integrate AIs and normal, mathematical computation so closely that they function as one, that problem might be solved. It will probably have its own caveats, though.
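Roughly, I mean something like this: let the model draft the reasoning, but route the actual counting to deterministic code and prefer the computation when the numbers disagree. This is only a sketch of the idea; ask_model is a made-up placeholder, not a real API.

```python
from itertools import permutations

def count_non_adjacent(people="ABCD", a="A", c="C"):
    # The "normal computation" half: exact answer by enumeration.
    return sum(abs(p.index(a) - p.index(c)) != 1
               for p in permutations(people))

def ask_model(prompt):
    # Made-up placeholder for a chat-model call, NOT a real API.
    # It replays 3.5's wrong draft so the correction path fires.
    return "24 - 4 = 20 combinations."

def solve(question):
    draft = ask_model(question)
    exact = count_non_adjacent()       # deterministic ground truth
    if str(exact) not in draft:        # model's number disagrees
        return f"Computation says {exact}; the model drafted: {draft}"
    return draft

print(solve("In how many ways can A, B, C and D line up with A and C apart?"))
```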
I hear you. You make very good points.
I’m tempted to argue that many humans aren’t generally intelligent based on your definition of requiring original thought/solving things they haven’t been told/trained on, but we don’t have to go there. Lol
Can you expand on your last paragraph? You’re saying if the model was trained on more theory and fewer examples of solved problems it might be improved?
If I’m being completely honest, now that I’ve woken up with a fresh mind, I have no idea where I was going with that last part. Giving LLMs access to tools like running code so they can fact-check things is a really good idea (it’s already being tried), but I don’t think it has anything to do with the problem at hand.
The real key issue (I think) is getting AI to keep learning and iterating over itself past the training stage, which is actually what many people call AGI/the “singularity”.
You can have it try again over and over, even while telling it the answer is 12, and it hallucinates basically random numbers to boot.