I think I'm starting to see the problem with your language. You're trying to be the ultimate high-level language that anyone can use, but also a serious language that real programmers can use for complex tasks.
Fair enough. Plus providing experienced programmers the opportunity to question some of their (possibly erroneous) preconceived notions about the subject. Plus laying the groundwork for an "apparently intelligent" machine like the HAL 9000. But yes. I definitely wanted a single, simple, consistent tool -- language and interface -- that I could use to teach my eight-year-old son everything from basic programming to compiler design.
As a result you've ended up with something of a mess which has pitfalls for both sides.
Well, it's a tall order.
Your documentation definitely plays up the simple and accessible aspect.
Was it Einstein who said, "If you can't explain it simply, you don't understand it yourself"?
You've made your own GUI and so on to try to appeal to non-programmers...
And as an example for others. When I see my dentist's receptionist with all sorts of windows and widgets on her screen that she doesn't need, it strikes me that programmers would benefit from being reminded that it's okay to "make your own GUI" to better serve your user.
...but you've also saddled them with difficult concepts which you like (or maybe you just can't escape from?). Touching on a few examples that I already mentioned:
Event-driven programs may be trivial to you.
And the fifth-graders in that study I mentioned.
Have you ever taught an introductory programming course?
Yes. I've taught database design and programming, in both classroom settings and private tutoring sessions, for the past 30 years; literally thousands of students, of all ages, have been under my tutelage.
You may be surprised just how many college-level students have serious trouble grasping other trivial concepts that are much simpler than an event-driven UI.
I've found that nothing is hard for any motivated student when (1) the student has the proper prerequisites, and (2) the new material is presented step-by-step.
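To pin down what "event-driven" means in this exchange: the program sits in a loop, waits for an event, and dispatches it to a handler. A minimal C sketch, where the event kinds and the scripted input are hypothetical stand-ins for a real windowing system:

    #include <stdio.h>

    typedef enum { KEY_PRESS, MOUSE_CLICK, QUIT } EventKind;
    typedef struct { EventKind kind; int data; } Event;

    /* Scripted events stand in for a real input source. */
    static Event script[] = { {KEY_PRESS, 'a'}, {MOUSE_CLICK, 42}, {QUIT, 0} };
    static int cursor = 0;
    static Event next_event(void) { return script[cursor++]; }

    int main(void) {
        for (;;) {                          /* the event loop */
            Event e = next_event();         /* wait for something to happen */
            switch (e.kind) {               /* dispatch to the right handler */
                case KEY_PRESS:   printf("key '%c' pressed\n", e.data); break;
                case MOUSE_CLICK: printf("click on widget %d\n", e.data); break;
                case QUIT:        return 0;
            }
        }
    }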
Despite what your documentation claims, memory management is most definitely not trivial. "Just remember to destroy what you create" is, frankly, a bald-faced lie.
Or an accurate description.
Determining object lifetime and ownership is a challenging problem, even for experienced programmers.
Not the way we write code. I don't recall having a single "memory leak" during the development of our entire system (25,000 lines) that wasn't both the result of a minor oversight and easily fixed.
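For the record, the discipline being described ("destroy what you create") is just the symmetric pairing of a creator with a destroyer, called once each. A minimal C sketch, with hypothetical names:

    #include <stdlib.h>
    #include <string.h>

    typedef struct { char *name; } Thing;

    /* Pair every creator with a destroyer, and call them symmetrically. */
    Thing *create_thing(const char *name) {
        Thing *t = malloc(sizeof *t);
        t->name = malloc(strlen(name) + 1);
        strcpy(t->name, name);              /* the create allocates a copy... */
        return t;
    }

    void destroy_thing(Thing *t) {
        if (t == NULL) return;
        free(t->name);                      /* ...so the destroy releases it */
        free(t);
    }

    int main(void) {
        Thing *t = create_thing("example");
        /* ... use the thing ... */
        destroy_thing(t);                   /* one destroy per create */
        return 0;
    }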
There are other issues along similar lines, like the inclusion of pointers and linked lists (even though they may be called "things"), but I don't have time to delve into all of them.
As I said above, we wanted a system that would be useful to both beginners and experts; that would support the writing of everything from simple console applications to advanced wysiwyg page editors to native-code-generating compiler/linkers with the very same language and interface. So we wrapped up our linked lists as "things" for the beginners, but left enough of the details exposed for those who wanted to delve deeper.
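In conventional terms, that wrapping might look something like the following C sketch (the names are hypothetical): the beginner gets friendly verbs, while the pointer stays in plain view for anyone who wants to delve deeper.

    #include <stdlib.h>

    /* A "thing": a linked-list node with its pointer left visible. */
    typedef struct Thing {
        int value;
        struct Thing *next;                 /* the exposed detail */
    } Thing;

    /* Beginner-facing verbs hide the pointer plumbing. */
    void put_thing_into_list(Thing *t, Thing **list) {
        t->next = *list;
        *list = t;
    }

    Thing *take_thing_from_list(Thing **list) {
        Thing *t = *list;
        if (t != NULL) *list = t->next;
        return t;
    }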
For experienced programmers, you have a whole host of other issues:
At the moment, Notepad is a more function[al] editor than the one included in your program. It's interesting that on one hand you defend this terrible editor by claiming it eliminates preconceived ideas, but on the other hand defend it by saying that I'm just [not] familiar enough with your "Ctrl-Home, Ctrl-F" preconceived notion of how one should navigate an editor.
It's almost always true that "user friendly is what the user is used to." And different tools are designed to be used in different ways. As I mentioned before, our editor is designed to be used with Raskin's "Leap" paradigm rather than the ubiquitous "scrollbar" paradigm. You will, of course, hate the thing if you try to use it as you would use Notepad (or some other, traditional editor). But use it as it was intended for a while, and I think you'll see the advantages of the thing.
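For readers who haven't met Raskin's "Leap": instead of scrolling and clicking, you type a few characters and the cursor jumps straight to their next occurrence. A rough C sketch of the core move, assuming a flat text buffer and a cursor offset (an assumption about the representation, not the actual editor's internals):

    #include <string.h>

    /* Jump the cursor to the next occurrence of what the user typed;
       stay put if there is no further occurrence. */
    long leap_forward(const char *buffer, long cursor, const char *target) {
        const char *start = buffer + cursor;
        if (*start != '\0') start++;        /* step past the current position */
        const char *hit = strstr(start, target);
        return hit != NULL ? (long)(hit - buffer) : cursor;
    }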
As far as I can tell, there is no way to decouple the language from the editor.
Right. That's why it's called an integrated development environment. When the compiler finds an error, for example, it automatically takes you, in the editor, to the file and line where the error was discovered.
Even if you love your editor, preventing people from using tools how they desire is a HUGE barrier to entry.
"Barrier to entry" for whom? We're well aware that it's unwise to put new wine into old wineskins. If the advantages of our approach aren't more-or-less immediately obvious (or at least enticingly curious) to someone, that someone is most likely beyond our reach. Next person in line, step up!
Expose a command line interface for your compiler/linker/debugger and save everyone some headache.
By "everyone" I think you mean "that subset of humanity that is both heavily left-brained and experienced with command-line processing." Not going to happen. Neither is a completely "visual programming" interface going to happen. Some things are better expressed/done with words, some with pictures. The balanced interface (like the balanced brain) is what we're striving for.
The whole attitude of your documentation is a turn-off. Although I think it was probably written (at least partly) in jest...
We put a lot of humorous stuff in there, to be sure. But we're also quite serious about every point that is made.
I suspect that roughly 98% of programmers will read your page on debugging, roll their eyes, and think some variation of "you've got to be kidding". Just as I did.
No, not kidding at all. That's how we really did debug the thing -- and it's a non-trivial program. That's pretty much how programming greats like Niklaus Wirth used to debug, as well.
Whenever possible, I have the programmers who work for me work in teams of two. We set them up with two monitors connected to a single machine (both monitors displaying the same stuff) and we have one guy run the mouse while the other runs the keyboard. The mouse guy is the leader; the keyboard guy types what he is told (or what he anticipates the leader is thinking). The players switch roles from time to time as different kinds of expertise are called into play (typically one guy is more left-brained and does better with the math stuff; the other is more right-brained and does better with the interface stuff). We've found this technique exceptionally beneficial: every line of code is double-checked by two pairs of eyes before it is run; left- and right-brain biases are balanced in both the design and code; and nobody turns into a cranky and friendless introvert who prefers machines to people.
I mean that your "sales pitch" in this thread gives the impression that you can essentially just talk to the compiler, and it's able to parse your intent.
And you can, for the most part, once you code up enough "helper routines" that describe the kinds of things you want to say to the compiler. Remember, every routine you code becomes, in essence, part of the (now expanded) syntax of the language. But of course it's only a prototype, an experiment. A "proof of concept". It still needs lots of further work.
That's not the case - your language adheres to a particular grammar, just like Java, C, or anything else. You just happen to have a grammar that's less rigid and more verbose than most other languages.
All languages, including natural languages like English and Spanish, have a grammar. But our grammar and parsing are not like Java's and C's. Consider, for example, the keywords in those languages, words like "typedef" and "struct" and "void" and "volatile"; now consider our keywords: words like "a" and "the" and "of" and "in" -- articles and prepositions, for the most part. In other words, our compiler keys off the true "marker" words of the language, the words that naturally appear between the things you're talking about. Which allows the programmer to extend the syntax and grammar of the language simply by programming: every new routine not only accomplishes some end, but becomes an automatic and immediately operational template for additional sentence forms. If you must compare our language with others, try FORTH; the similarities there are much more pronounced.
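To make the "marker words" idea concrete, here is a toy C sketch that recognizes one sentence form, "put ... into ...", by splitting at the markers and treating everything between them as noun phrases. It is an illustration only, not the actual Plain English parser:

    #include <stdio.h>
    #include <string.h>

    /* Split "put <object> into <destination>" at its marker words. */
    int parse_put_into(const char *sentence, char *what, char *where) {
        const char *marker = strstr(sentence, " into ");
        if (strncmp(sentence, "put ", 4) != 0 || marker == NULL) return 0;
        size_t n = (size_t)(marker - (sentence + 4));
        memcpy(what, sentence + 4, n);
        what[n] = '\0';
        strcpy(where, marker + 6);
        return 1;
    }

    int main(void) {
        char what[64], where[64];
        if (parse_put_into("put the total into the sum", what, where))
            printf("object: \"%s\"  destination: \"%s\"\n", what, where);
        return 0;
    }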
Eh, that's [Turing complete, able to recompile itself, with reasonable convenience and efficiency] not a very high standard.
But it is. How many programmers here, do you think, have ever created such a thing? And how many popular languages can't meet that standard?
Brainfuck is a Turing complete language, and it's a toy.
Because it doesn't meet the criteria above: it's not convenient.
IIRC, C++ templates form their own Turing complete language. You can do some amazing wizardry with template metaprogramming, but anyone using it to solve complex problems should probably be smacked upside the head.
Again, failure on the "convenient" part of the standard.
Also, perhaps I'm missing something (I've never been that into the theory side of things) but shouldn't any Turing complete language be capable of self-compilation?
Capable of, yes. But someone has to actually write the compiler before it meets our standard (else it's a mere virgin). And most such projects would fail either on the "convenience" or "efficiency" requirements.
I imagine most of the languages you mentioned are built on a C or C++ backend for performance reasons, but I'm not really seeing why they couldn't self compile if you wanted them to.
Many of them could. The question is why the original developers of such languages (1) produced something that performed so badly, and (2) didn't want to use them to reproduce themselves.
Honestly, it seems like a language that was designed in response to the 80s/90s era when most people wanting to learn programming were stuck choosing between C and BASIC.
Actually, it was designed in response to the era in which it was developed (2005-2006). Programming just wasn't fun anymore. Instead of a small language that could be mastered in a day, with a small library of intuitive and useful functions for manipulating the screen, disk, mouse, keyboard, printer, and communications port, we were faced with mammoth frameworks and convoluted APIs and ill-conceived object-oriented paradigms that forced us to spend our time learning about someone else's way of doing things and searching huge files for not-quite-the-right object to do what we wanted to do. I don't know what that is, but it's not programming.
Yeah, we've moved past that point.
But too far (or not far enough!) to get a wysiwyg post editor on this forum!
Like I said, the concept is interesting, but so far I really see no advantages over modern high-level languages like Ruby or Python.
We think the "last" programming language will allow us to produce the "ultimate" in code: something like a math book: a natural language framework with snippets of specialized syntax (and even graphics) where appropriate. If we're right, Plain English is a step in that direction because it's a trivial matter to add specialized sub-compilers to handle those snippets to our system; but it's next to impossible to add Plain English processing to a language like Ruby or Python. That's the advantage, though it's a future one.
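In conventional terms, the sub-compiler idea might look like this speculative C sketch: the main pass recognizes a delimited snippet and hands it, whole, to a handler registered for that notation. The bracket delimiters and the names here are assumptions, not the actual system:

    #include <stdio.h>
    #include <string.h>

    typedef void (*SubCompiler)(const char *snippet);

    static void compile_math(const char *snippet) {
        printf("[math sub-compiler] %s\n", snippet);     /* stand-in handler */
    }

    /* When the main pass meets a bracketed snippet, hand it off whole. */
    void compile_line(const char *line) {
        const char *open = strchr(line, '[');
        const char *close = open ? strchr(open, ']') : NULL;
        char snippet[128];
        if (open && close && (size_t)(close - open - 1) < sizeof snippet) {
            size_t n = (size_t)(close - open - 1);
            memcpy(snippet, open + 1, n);
            snippet[n] = '\0';
            SubCompiler handler = compile_math;  /* in general, chosen by notation */
            handler(snippet);
        } else {
            printf("[main compiler] %s\n", line);
        }
    }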