An Old School Catholic Message Board

A.I.



Laudate_Dominum

Scroll down to the line in caps to skip the introductory blather.

Introductory Blather:

I've had certain topics on my mind lately (such as 3D rendering engine architecture, genetic algorithms, neural nets, etc.) and whilst on the toilet earlier I had an epiphany of sorts (perhaps anyway, we'll see). To put it simply, I've come up with a concept for the concept of a concept. It is actually somewhat rooted in classical substance metaphysics. I'd like to architect and create a little working prototype before really getting into details, but basically I'm talking about a four-dimensional matrix (at first anyway) in which the basic unit of information processing is what might be thought of as a triangle: a ternary value which may be conceived of as three vertices in relation to one another. The matrix forms and stores concepts as an aggregation of trivalent structures. What one might think of as a 'concept' in this scenario is what I call a temporally recursive four-dimensional form (yes, the fourth dimension is quite predictably time). The system would have a set of innate concepts upon which other concepts are created, based on ternary transforms and aggregation. The set of innate concepts roughly corresponds to temporal and spatial accidents organized via core mathematical principles.
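To make the "triangle as ternary value" idea concrete, here is a minimal sketch in Python. Everything here is my own illustrative invention, not anything the post specifies: the names (`Triangle`, `Concept`, `trit`) and the toy rule that collapses three vertices to a ternary value (the sign of the triangle's signed area when projected onto the xy-plane) are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vertex = Tuple[float, float, float]

@dataclass
class Triangle:
    """One trivalent unit: three vertices whose mutual relation
    collapses to a ternary value (-1, 0, or +1)."""
    vertices: Tuple[Vertex, Vertex, Vertex]

    def trit(self) -> int:
        # Toy rule: sign of twice the signed area of the projection onto
        # the xy-plane (counter-clockwise = +1, clockwise = -1,
        # degenerate = 0). Purely illustrative.
        (x1, y1, _), (x2, y2, _), (x3, y3, _) = self.vertices
        area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
        return (area2 > 0) - (area2 < 0)

@dataclass
class Concept:
    """A 'temporally recursive four-dimensional form': an aggregation of
    triangles indexed by time step (the fourth dimension)."""
    history: List[List[Triangle]] = field(default_factory=list)

    def step(self, triangles: List[Triangle]) -> None:
        self.history.append(triangles)
```

The point of the sketch is only that a "concept" here is not a single value but a time-indexed aggregation of trivalent units, which is how the fourth dimension enters.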

It is a bit hard to explain in a little post and there really isn't any point in getting into technical details because my question isn't of a technical nature.

Basically, I was chillin earlier (never mind where I was sitting ;-) and this whole system sort of came to me and the nice thing about it is that the abstraction involved is in a certain sense visual. The cognitive processes can be represented in 3D space and that is pretty much how I saw it all in my mind. I posed the question, 'is snow white?', and I could imagine a 3D representation of this system cycling through the matrix and vertex orientations being rotated here and there (the basis of an 'assertion'). The result was a three dimensional form representing the question 'is snow white?' and the generation of the question 'what is snow?'. Then I visualized a second cycling of the question through the matrix, this time with there being a temporally recursive four dimensional form present partially defining the essence structure of snow. Now the response generated was 'yes, snow is definitely white'.
This little scenario obviously presupposes an integrated subsystem for processing language.

I'm essentially envisioning a scenario where, just as people now go out and get a phat video card for their computer, people may someday go out and pick up the latest AI (artificial intelligence) card for their machine. You could attach different language subsystems to the cognitive matrix, or install updates and expansion packs to your matrix. Maybe you want your computer to be an expert on the French Revolution; simply install the correct module and your computer can tell you just about anything you might want to know. And of course your machine would offer problem-solving ability, not merely information.

But the real beauty of it is that concepts can be continually expanded and refined, or discarded if they're found to be erroneous. That's pretty much the basic principle. Information enters the cognitive pipeline, and as it is cycled through the matrix, existing trivalent aggregations react appropriately, either remaining in stasis or initiating an assertion. If the matrix has any existing concepts with which to deal with the information, ternary connections will be formed via assertions and the initially ambiguous or unknown information will become a concrete concept. But really there will always be an assertion of some kind, since one of the innate concepts is the concept of the 'unknown'. And even ambiguous or 'unknown' information will often trigger what I've dubbed a 'zero assertion': a particular type of assertion which will usually initiate another 'pass' or cycle through the matrix, or through a particular region of it. You might think of this as the process of pondering ambiguous information.

So a phrase such as 'asjfsd kjds rjj' will likely pass through the matrix with little more than an unknown, but 'cat speak noun far my hair' would probably generate quite an array of zero assertions, among other things, assuming we're talking about a fairly mature matrix; a 'blank' matrix would only have the innate concepts and could make no more sense of the latter than the former.
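The pipeline described above (cycle input through the matrix; known concepts fire assertions, unknowns fire zero assertions that trigger further passes) could be caricatured like this. The class and method names, the particular innate-concept list, and the pass limit are all my own assumptions for illustration.

```python
UNKNOWN = "unknown"   # one of the innate concepts, per the post

class Matrix:
    def __init__(self, innate=(UNKNOWN, "relation", "time", "space")):
        self.concepts = set(innate)

    def cycle(self, tokens, max_passes=3):
        """Cycle input through the matrix. A recognized token yields an
        assertion; an unrecognized one triggers 'zero assertions'
        (repeated passes, i.e. 'pondering') before settling as unknown."""
        assertions = []
        for token in tokens:
            passes = 0
            while token not in self.concepts and passes < max_passes:
                assertions.append(("zero", token))   # ponder it again
                passes += 1
            if token in self.concepts:
                assertions.append(("assert", token))
            else:
                assertions.append((UNKNOWN, token))
        return assertions
```

A blank matrix fed `'asjfsd'` produces only zero assertions and an unknown, which matches the behavior described for gibberish input.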

Obviously there are hardware issues which would have to be faced for any practical application of this goofy idea. For one it is basically a ternary (let's say 'trinary') processing system running in a binary environment (true, false and possibly as opposed to simply true or false). The ideal for a cognitive matrix card or peripheral device would be a trinary architecture. And a matrix of this sort with any real sophistication would require a virtually infinite memory address space. But for my purposes a slow and pitiful prototype would more than suffice. Although I am considering designing said prototype for a 64-bit platform.
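On running a ternary system in a binary environment: one standard, well-known workaround is balanced ternary, where each trit is -1, 0, or +1 and ordinary binary integers carry the encoding. This is not from the post; it's just one illustrative way a prototype could shuttle trits through binary hardware.

```python
def to_balanced_ternary(n: int) -> list:
    """Balanced-ternary digits of n (each -1, 0, or +1),
    least significant digit first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:        # represent a 2 as -1 with a carry
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits or [0]

def from_balanced_ternary(digits: list) -> int:
    """Inverse of to_balanced_ternary."""
    value, place = 0, 1
    for d in digits:
        value += d * place
        place *= 3
    return value
```

The nice property for a "true, false and possibly" system is that the digit set is symmetric about zero, so negation is just flipping the sign of every trit.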

I'm hoping that after a long and boring process of one-on-one instruction the matrix would be ready to process information feeds. I would probably start with a simple children's dictionary or something of that sort. Perhaps eventually it would be time for the new Oxford (who knows, maybe an encyclopedia). Of course this process would generate countless questions from the matrix, which would have to be dealt with manually according to a specific methodology. I realize that this is beyond ambitious and that I'm probably being quite naive, but it's worth a try.

Another interesting thing is that while the cognitive matrix 'out of the box' (blank matrix, innate concepts only) can be described as four-dimensional (in the mathematical sense; I'm not talking sci-fi physics or anything goofy like that), it is designed to increase in dimensionality as the matrix grows in complexity and as concepts are born and connections made. Theoretically the threshold is infinite. This is related to the way in which concepts contain other concepts but are distinct from those concepts. In a sense you can say that a concept both is and isn't another concept at the same time. This also comes into play when discussing concepts that are and are not something given the sense or circumstances (concepts that are verbally or adjectivally augmented by a conceptual 'relative' of the innate concept of relation and another innate concept or its descendant). This is an obvious aspect of the requirement of virtually infinite address space. Deductive and inductive 'agents' in the cognitive pipeline will need to be able to span a vast dimensional range and perform operations which will at some point require incredibly large numeric values, among other things.

The dimensional expanse will always be in threes, so it will always be possible (if for no other reason than for sheer fun) to attach a graphics subsystem and produce a 3D graphical environment representing the state and activity of the cognitive matrix (since it is fundamentally reducible to scalars, vectors, vertices and such). You could think of the multidimensionality as a matrix within a matrix within a matrix, ad nauseam. If it helps, although it's not as exciting, it could be thought of as a multidimensional array of matrices, though in actuality this is inaccurate since it is a single matrix.
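The "matrix within a matrix" picture can be sketched with a nested structure: a leaf lives in the base four dimensions, and each level of nesting contributes three more (since "the dimensional expanse will always be in threes"). The function name, the list-based representation, and the +3-per-level rule are my own assumptions, chosen only to mirror the description above.

```python
def dimensionality(node, base=4):
    """Effective dimensionality of a nested structure: a leaf occupies
    the base four dimensions (3 spatial + time); each level of nesting
    adds three more."""
    if not isinstance(node, list):
        return base
    return max((dimensionality(child, base + 3) for child in node),
               default=base)
```

So a blank matrix (no nesting) is 4-dimensional, and dimensionality grows without any fixed bound as concepts nest inside concepts.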

Ok, I promised not to go down the path of technical details and I'm feeling tempted... My question is actually pretty straightforward.

QUESTION STARTS HERE:

Is anyone aware of any Catholic perspectives on artificial intelligence? I have interest and ambition in the realm of AI, and I know that this field is often perceived in light of a materialistic ideology and is thus seen as implying that the human person is simply a sophisticated computational machine, entirely reducible to material phenomena and mathematical principles. Obviously I completely reject this mentality, and I actually think it's pretty stupid. I consider my technological projects to be more of an artistic outlet than anything else. We are made in the image of God, and the creative/artistic impulse is part of our nature. A painter or a sculptor is praised for capturing a high degree of realism in some stone or canvas which he has patterned after the image of man; what is wrong with attempting to achieve a certain realism in a computer system, whether it be a game or an android? The reason I ask is because I actually do feel a certain tension. On the one hand I think it's really cool and interesting, but at the same time a part of me feels funny.


homeschoolmom

[font="Lucida Console"]Oh... A.I.! I thought you were directing the entire thread to Aloysius...[/font]


Laudate_Dominum

[quote name='homeschoolmom' post='1014097' date='Jun 28 2006, 04:32 PM']
[font="Lucida Console"]Oh... A.I.! I thought you were directing the entire thread to Aloysius...[/font]
[/quote]
I'll add the periods. :)


[quote]I've had certain topics on my mind lately (such as 3D rendering engine architecture, genetic algorithms, neural nets, etc.) and whilst on the toilet earlier I had an epiphany or sorts (perhaps anyway, we'll see). [/quote]


And that, Marty, is when I came up with the idea for the flux capacitor!


As you said, understanding all that is really hard in a written post.
I see nothing wrong with AI. Obviously, AI is never going to be able to fully replicate the human mind. Trying to make it replace the human mind (such as creating robots that people think are, and treat the same as, real people), however, I see as wrong.

Edit: I did understand quite a bit of that, though... but I don't see the actual advantage of ternary processing. A friend of mine explained to me an idea of his involving luminous processing, which has major speed advantages and also involved ternary processing (RGB colors)... but I don't know what the advantage of straight ternary processing is.

Edited by Franimus

Lounge Daddy

thats a great question, L_D

for sure A.I. would be much more acceptable than a clone

and to me... much more interesting :cool:


Until some sort of breakthrough in organic computing allows for such extensive and dynamic data processing and storage, I'm inclined to think you have about as much reason for tension as one might experience pondering the implications of encountering extra-terrestrials on a voyage to Beta Pictoris.

I'm only kidding. :D:

I'm no Church scholar but on a more serious note, my personal thought is that until it goes organic there is not much to worry about. I think if one were to create an AI based on an organic processor and data structure in real human physical likeness, there would be a serious problem (as there is with the other 'AI', if you get my drift) since to build such a creature would certainly require that one transgress some basic moral and ethical laws. Obviously, this warrants tension.

Setting aside binary and even "trinary" (liquid processors?) medium limitations, I can't see what a non-organic AI capable of virtual learning with "limitless" data scope boundaries would in itself present as a moral dilemma, aside from the opportunity for abuse. Fundamentally, it would still just be a computer, crunching data and doing calculations, right? Or do you suspect the data content and structure could reach the level of complexity necessary to facilitate true sentience? I'm skeptical that could be attained outside the organic medium, but who knows really... and even if it somehow did accomplish such a feat, would its designer necessarily be culpable? Hmmmmm.....

BTW by organic I mean specifically living tissue - not merely carbon based compounds.


Creating true A.I. would be committing one of the classic blunders of the sci-fi world:

1. Never teach a computer to think (it [i]always[/i] turns out badly for the humans).
2. Never mess with the space-time continuum (all sorts of unexpected consequences).



As far as the moral acceptability goes, I'm not sure.... I guess part of it would depend on your intent in creating it and how it would most likely be used by others (regardless of your intent). Part of it may also depend on whether the computer would be truly intelligent and able to learn vs. obeying pre-set commands and making "decisions" based on probabilities and algorithms.

I think that, at least to some degree, the purpose of art is to express something about humanity, the human experience, or the world. It's a physical representation of something that is more than just what is shown in the painting or the sculpture. It's about truth, beauty, and/or goodness... or sometimes about possibility, pain, expression of emotion... or about telling a story.... While some paintings and sculptures are very realistic, that's not generally the [i]point[/i] of the art. Some of the greatest artists in history were not from schools that valued realism in the portrayal.

With video games, the point is to entertain, to make the player more involved in the world and situations created by the game, to draw the player into an experience he can't have in the real world. It's the creation of a fantasy world that feels real.... The video game isn't trying to mimic reality so much as create a new "reality."

What is the purpose of your A.I.? Is it just for novelty's sake? To see if it can be done? To help humanity in some way? To replace humanity in some way? To create a quasi-human? To play God with a lesser "being"? Is it to do human work? Will that divorce us even more from who we are? Will it express something about the human experience? Create a new experience? Teach people about other human experiences? Will it further truth, beauty, and goodness?

Any technology will be coopted by others. What ramifications would this have? Is it worth the risk?


I know I've given you questions rather than answers....sorry :idontknow:


Guest JeffCR07

L_D, how familiar are you with the contemporary AI discussion within the field of Philosophy of Mind?

Many, like John Searle, argue that creating artificial intelligence is in principle impossible. As the argument goes, even if a computer could be created that could carry on a functional conversation, it could still not be properly said to "understand."

Aristotelians, however, would argue that a computer could not even be taught how to properly communicate at all (i.e. a computer will never pass the "Turing Test"). This is because certain concepts necessary for proper linguistic communication are [i]rational[/i] in character and, as such, demand the presence of an immaterial soul, which computers lack.

The strength and weakness of the Aristotelian response is that it accepts the challenge of the "Turing Test." If the test is ever passed, then the whole psychology of Aristotle must be thrown out. However, as the years go by and we get no closer to having a computer successfully pass the test, Aristotle becomes more and more convincing.

Dualists will argue that AI is impossible because we cannot force a soul into a computer, while Materialists will argue that AI is possible.

Your Brother In Christ,

Jeff


[quote name='hierochloe' post='1014350' date='Jun 29 2006, 02:55 AM']
Fundamentally, it would still just be a computer, crunching data and doing calculations, right? Or do you suspect the data content and structure could reach the level of complexity necessary to facilitate true sentience?[/quote]

I'd be inclined to agree w/ Jeff on this.
1. yes
2. sentience is not something based merely on complexity of logical constructs.

In the end, I think it'll be nothing more than something that people use to cheat on exams...sorry, I'm being cynical....

I don't think the structure you describe would work with merely numerical data. Getting it to the point of language processing would be a feat in and of itself. It's almost like you'd have to input philosophy of language as raw information for it to even begin. It'd probably be horrendously slow, and the likelihood of infinite looping is great. It sounds like it would either be ridiculously simple to code or horrendously hard to code. I think you'd run out of points of knowledge on a 64-bit workstation pretty quickly.

something like (roughly, in Python):

[code]
class X:
    def __init__(self, raw_info):
        self.raw_info = raw_info
        self.connections = []   # list of Connection objects

class Connection:
    def __init__(self, types, connected_to):
        self.types = types      # each type is an X point itself, probably
        self.connected_to = connected_to
[/code]

Theoretically you'd want to be able to put in, say, "straw" and it would link to
hay
scarecrow
straw man argument
drinking straw
bedding
organic material
farm supply
feed
etc.

Then, all that could branch out and...
hit the hay
bales of hay
hey!
Wizard of Oz
brain
logic
fallacy
debate
liquid
cylinder
air pressure
feathers
springs
mattress
carbon-based
grass
farm
farmer
oats
etc.

and all of that could branch out.... recursively. A true AI would also be able to describe what kind of link it is:
hay - use/other name
scarecrow - use
straw man argument - linguistic/metaphorical
drinking straw - different definition
bedding - use
organic material - description of type 1
farm supply - "
feed - possible use
etc.
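The point/connection structure with typed links, branching out recursively from "straw," could be sketched as a typed graph. The class name, the dictionary representation, and the depth-limited traversal are my own illustrative choices, not anything specified above.

```python
from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        # word -> list of (related_word, type_of_link)
        self.links = defaultdict(list)

    def link(self, a, b, link_type):
        self.links[a].append((b, link_type))

    def related(self, word, max_depth=2, _depth=0, _seen=None):
        """Branch out from a word, recursively, to a depth limit,
        returning (related_word, link_type) pairs."""
        _seen = _seen if _seen is not None else set()
        out = []
        for other, kind in self.links.get(word, []):
            if other in _seen:
                continue
            _seen.add(other)
            out.append((other, kind))
            if _depth + 1 < max_depth:
                out.extend(self.related(other, max_depth, _depth + 1, _seen))
        return out

# A few of the example links from the post:
g = ConceptGraph()
g.link("straw", "hay", "use/other name")
g.link("straw", "scarecrow", "use")
g.link("straw", "drinking straw", "different definition")
g.link("hay", "bales of hay", "description")
```

The `_seen` set and depth cap are there precisely because of the infinite-looping worry raised above: unrestricted recursive branching over a cyclic concept graph never terminates.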


Laudate_Dominum

[quote name='JeffCR07' post='1014564' date='Jun 29 2006, 10:24 AM']
L_D, how familiar are you with the contemporary AI discussion within the field of Philosophy of Mind?

Many, like John Searle, argue that creating artificial intelligence is in principle impossible. As the argument goes, even if a computer could be created that could carry on a functional conversation, it could still not be properly said to "understand."
[/quote]
I scoop up a phil. of mind type journal from time to time. I'm quite familiar with Searle and have greatly enjoyed his books. I disagree that artificial intelligence is impossible, I'd say it's already a reality (in a limited sense). I do agree that the idea of a computer having "understanding" in the full sense of the word is nonsense, but a computer is perfectly capable of faking it pretty darn well. The Turing Test doesn't really concern me too much. I'm more interested in creating learning/problem solving systems than in fooling people into thinking a computer is a person. And in actuality I've already created some systems that would probably startle some people.
I don't think that the concept of a "sentient" computer really makes any sense. Personal subjectivity is not reducible to mathematics nor can it be conjured up by simply piling up the code and complexity.

A few of the posts above seem to indicate a lack of appreciation for the difference between a deterministic system and a stochastic system. What I'm getting at is a non-deterministic system based on stochastic processes. And I've already created systems that process language and solve complex problems, learning as they go and improving themselves. This is radically useful stuff, btw. I wrote a program a few months back that was able to read through a bunch of completely disorganized data and sort out what was what for millions of accounts. This would have taken a team of people a very long time and would have been an extremely tedious and somewhat degrading task. My program was able to do it after being walked through a few thousand scenarios and adapting itself according to complex patterns and rules that were discerned, you might say. It was naturally a probabilistic system, and each set of account information had a degree of certitude attached to it, so that data retrieved with a lower degree of certitude could be verified manually. But to everyone's surprise this "smart" program was more than 98% accurate, which is likely better than what a group of humans would have done. Oh, and it processed these millions of accounts in mere seconds, whereas humans would have spent weeks on it.
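The account-sorting program itself isn't shown anywhere in the thread, but the general shape described (score each record against learned patterns, attach a degree of certitude, and flag low-certitude results for manual review) looks roughly like this. The function names, the toy token-overlap scorer, and the 0.8 threshold are all hypothetical.

```python
def score(record: str, pattern: list) -> float:
    # Toy scorer: fraction of pattern tokens present in the record.
    hits = [t for t in pattern if t in record]
    return len(hits) / len(pattern) if pattern else 0.0

def classify(record: str, patterns: dict, review_threshold: float = 0.8):
    """Pick the best-matching label for a record and attach a degree of
    certitude; low-certitude results are flagged for manual review."""
    scores = {label: score(record, pat) for label, pat in patterns.items()}
    label = max(scores, key=scores.get)
    certitude = scores[label]
    return label, certitude, certitude < review_threshold
```

The design point is the certitude value: instead of a hard yes/no, every classification carries a confidence, so only the uncertain tail of the data needs human eyes.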

This toilet vision thing is sort of about trying a new approach. I've been doing stuff with neural nets, fuzzy logic, genetic algorithms, etc. and now I want to invent something new.


hierochloe

[quote name='Laudate_Dominum' post='1015199' date='Jun 30 2006, 11:34 AM']
I don't think that the concept of a "sentient" computer really makes any sense. Personal subjectivity is not reducible to mathematics nor can it be conjured up by simply piling up the code and complexity.
[/quote]
Then, aside from potential for abuse (like anything else, really), I guess I can't imagine what would cause the 'tension' when developing AI. I suppose there might be a temptation to compare one's creative ability and product to God's work, which wouldn't be spiritually healthy imho, but this is certainly not a temptation exclusive to any one type of endeavor anyway.

I would agree that purely mechanical sentience will likely never be anything more than a concept of fantasy. However, I think it's short-sighted at best to write off the possibility of sentience and even self-awareness (at some level) in an anthropogenic construct that is organically based. To clarify, I hope humans never see the day when such a thing happens, but I nevertheless recognize the possibility. I use the term sentience in a broader sense here, not specifically a human level of sentience (although I think there's a probability for that too, depending on the organic medium). Again, by 'organic' I'm referring to living tissue, not just carbon-based material.

Regarding the feasibility of AI, it seems maybe there's some semantic ambiguity in this thread with that term. AI systems, as defined by the IT world, are in use already and have been for years. There's no question of their viability and practical application, and they continue to improve (thanks to toilet visions?). However, if 'AI' is meant to refer to a rational, conscious entity, then that's another ball of wax.


Laudate_Dominum

[quote name='hierochloe' post='1016144' date='Jul 2 2006, 01:29 AM']
Regarding the feasibility of AI, it seems maybe there's some semantic ambiguity in this thread with that term. AI systems, as defined by the IT world, are in use already and have been for years. There's no question of their viability and practical application, and they continue to improve (thanks to toilet visions?). However, if 'AI' is meant to refer to a rational, conscious entity, then that's another ball of wax.
[/quote]
I agree. :)

But the fact is I've recently been distracted from my A.I. projects because I've been inspired to create a Star Wars vs. Star Trek video game. :woot:

So far I have a mostly functional prototype of the engine and complete models & textures for a couple of Federation ships and the Borg cube ship. I need to make the stuff for some TIE fighters, Imperial cruisers, A-fighters, and of course the Rebellion ships (the X-wing and of course the Falcon). There will also be Klingons and possibly the Death Star. :yahoo:

