Getting to Angular elements from jQuery or JavaScript to trigger validation

I currently know almost nothing about Angular, though it looks interesting enough to explore when I have time. I recently had to integrate an authentication sequence with an AngularJS app, and found that Angular can put up barriers to a simple approach until you spend time learning how it works. Rather than get into all that, I’m just going to share a couple of tips that took me a long time to figure out, since I knew nothing about Angular beyond the fact that it has built-in validation for form fields. Perhaps Google brought you here because of the headline above; if so, these tips may save you similar grief:

How to grab Angular’s scope from JavaScript, say, from the Console command line in your browser:

TIP #1

//find angular scope using jquery/javascript
scope = angular.element(document.querySelector("body")).scope();

Note that “body” here could be some other selector; you’ll have to look at the page source and find the element that carries the relevant controller (its ng-controller attribute, for example) to see which element defines the scope you want. More information at Stack Overflow: AngularJS access controller $scope from outside.
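
For example, if the form is governed by a controller declared right in the markup, you can target that element instead of body. This is just a hedged sketch: “LoginCtrl” is a placeholder name, so substitute whatever ng-controller value your app actually uses.

//grab the scope of a specific controller (controller name is an assumption)
scope = angular.element(document.querySelector('[ng-controller="LoginCtrl"]')).scope();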

Here’s the tricky one, which will make more sense when you look at the following tip as well. I thought I could programmatically populate the username and password form fields, and that would be that. Instead, I discovered that Angular validation kept saying “you need to enter a username” even when there clearly was a username entered. That’s because only the view (the web page) had been updated, not the Angular model. To update the model, you do the following:

TIP #2

// trigger angular model to become aware of javascript changes to view
angular.element($$("[name='email']")).triggerHandler('input');
angular.element($$("[name='password']")).triggerHandler('input');

Note that the double $$ is not Angular wrapping jQuery; it’s the browser Console’s built-in shorthand for document.querySelectorAll(), which is why these lines work when pasted into the DevTools Console. angular.element() then wraps the matched elements in Angular’s jqLite (or jQuery, if it’s loaded).
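
Alternatively, once you have the scope from Tip #1, you can set the model directly and let Angular update the view itself. This is a hedged sketch: the property names email and password are assumptions, and depend entirely on the app’s ng-model bindings.

// set the model inside $apply so Angular runs a digest cycle
// (model property names are guesses based on ng-model="email" etc.)
scope.$apply(function () {
  scope.email = 'nobody@example.com';
  scope.password = '*****';
});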

More information on this tip at Stack Overflow: Update Angular model after setting input value with jQuery. The remaining two tips have nothing to do with Angular, but they show how I got into that situation in the first place. First, how to populate a form field from JavaScript:

TIP #3

//populate form field
document.getElementsByName('email')[0].value = 'nobody@example.com';
document.getElementsByName('password')[0].value = '*****';

By the way, another way to access that same form field:

document.querySelector('input[name="email"]')

And lastly, how to click a button on said form.

TIP #4

document.querySelector('input[type="submit"]').click();
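
Putting it all together, the whole sequence looks something like this (the field names and the submit selector are from my particular form, so adjust them for yours):

//1. populate the fields (this updates the view only)
document.getElementsByName('email')[0].value = 'nobody@example.com';
document.getElementsByName('password')[0].value = '*****';

//2. tell Angular the view changed, so the model catches up
angular.element($$("[name='email']")).triggerHandler('input');
angular.element($$("[name='password']")).triggerHandler('input');

//3. submit the form
document.querySelector('input[type="submit"]').click();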

Hopefully someone finds some of this useful.

Spatial reasoning arises from combining relational networks and convolutional neural networks

Without knowing how to put it into words, I was recently thinking about this ability in artificial intelligence. Now I know it’s called “spatial reasoning.” Machine intelligence is moving ahead much faster than I realized before I started looking closely at it in the past month.

Humans possess something like an “intuitive physics engine,” an algorithm for extrapolating three-dimensionality from flat images and comparing objects within it to other objects. This kind of spatial reasoning has proved difficult for computers, at least until now. Using a combination of relational networks and convolutional neural networks, the DeepMind system can answer questions concerning the relation of objects within an image. (from https://www.extremetech.com/extreme/251126-ai-acquires-spatial-reasoning-abilities-another-victory-machine-overlords)

Intuitive comparison of functional vs imperative programming

Another example of how you can spend hours trying to understand an idea and get nowhere, then search for “intuitive + your idea” and rapidly find a gem like this example of the difference between functional and imperative programming. Within seconds of reading it I understood more than an hour of reading other articles on the same subject had given me (I’ve also added a small code sketch of my own after the quoted lists):

Imperative:

  • Start
  • Turn on your shoes size 9 1/2.
  • Make room in your pocket to keep an array[7] of keys.
  • Put the keys in the room for the keys in the pocket.
  • Enter garage.
  • Open garage.
  • Enter Car.

… and so on and on …

  • Put the milk in the refrigerator.
  • Stop.

Declarative, whereof functional is a subcategory:

  • Milk is a healthy drink, unless you have problems digesting lactose.
  • Usually, one stores milk in a refrigerator.
  • A refrigerator is a box that keeps the things in it cool.
  • A store is a place where items are sold.
  • By “selling” we mean the exchange of things for money.
  • Also, the exchange of money for things is called “buying”.

… and so on and on …

  • Make sure we have milk in the refrigerator (when we need it – for lazy functional languages).
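
The same contrast in actual code, in JavaScript (a toy example of my own, not from the quoted answer): the imperative version spells out every step of how to get the answer, while the functional version declares what the answer is in terms of smaller definitions.

// imperative: describe every step of HOW to compute the answer
const prices = [5, 12, 8, 30, 7];
let total = 0;
for (let i = 0; i < prices.length; i++) {
  if (prices[i] < 10) {
    total += prices[i];
  }
}

// functional (declarative): say WHAT the answer is, built from definitions
const isCheap = (p) => p < 10;
const sum = (a, b) => a + b;
const total2 = prices.filter(isCheap).reduce(sum, 0); // both give 20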


An intuitive introduction to Lisp is refreshing to encounter

It’s fascinating how few articles on any given subject are written in an intuitive manner. I just spent an hour aggressively searching the Internet for anything similar to this article giving an intuitive introduction to Lisp and found very, very little. I spent another hour with similar results for the other, equally useful article on this site. Here, the author explains his method:

What is it that makes Lisp so hard to understand? The answer, as such things usually do, came unexpectedly. Of course! Teaching anybody anything involves building advanced concepts on top of concepts they already understand! If the process is made interesting and the matter is explained properly the new concepts become as intuitive as the original building blocks that aided their understanding. That was the problem! Metaprogramming, code and data in one representation, self-modifying programs, domain specific mini-languages, none of the explanations for these concepts referenced familiar territory. How could I expect anyone to understand them!

The article is excellent, though it introduces so much material that I can tell I’ll need to read it at least a couple more times before it all makes enough sense for me to do something useful with it.
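
To ground “code and data in one representation” in a language I already know, here’s a toy sketch in JavaScript (my own, not from the article): expressions are plain arrays, so a program can build, inspect, and rewrite other programs as ordinary data before evaluating them.

// a Lisp-style expression as plain data: ["+", 1, ["*", 2, 3]]
function evaluate(expr) {
  if (typeof expr === "number") return expr; // atoms evaluate to themselves
  const [op, ...args] = expr;                // first element is the operator
  const values = args.map(evaluate);         // recursively evaluate arguments
  switch (op) {
    case "+": return values.reduce((a, b) => a + b);
    case "*": return values.reduce((a, b) => a * b);
    default: throw new Error("unknown operator: " + op);
  }
}

console.log(evaluate(["+", 1, ["*", 2, 3]])); // 7

// because code is just data, we can transform a program before running it,
// for example rewriting every "+" into "*":
function plusToTimes(expr) {
  if (!Array.isArray(expr)) return expr;
  const [op, ...args] = expr;
  return [op === "+" ? "*" : op, ...args.map(plusToTimes)];
}

console.log(evaluate(plusToTimes(["+", 1, ["*", 2, 3]]))); // 6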

Interesting aside: while reading, I was observing my own process of reading, and found that I was passing over the small features lightly. Not exactly skimming, but not intending to fully understand any one of them either; rather, I was becoming familiar with the overall vocabulary and some high-level links between concepts.

So on the second reading I’ll start to dig into the more interesting features, but continue to skim the ones that don’t make sense. Then, around the third reading, the overall intuitive sense of how Lisp works will “click” into place, and from there I can read it at the level the author intended for his readers. Analysis, click, synthesis.

An article written in the usual unintuitive manner, I would have to read a dozen times and still miss huge chunks of its coherence.


Einstein’s Intuition : Quantum Space Theory

Hm, interesting; this looks good, certainly better than the Standard Model…

Related: pilot wave.

In 1867, William Thomson (also known as Lord Kelvin) proposed “one of the most beautiful ideas in the history of science,” [9]—that atoms are vortices in the aether. [10] He recognized that if topologically distinct quantum vortices are naturally and reproducibly authored by the properties of the aether, then those vortices are perfect candidates for being the building blocks of the material world. [11] When Hermann Helmholtz demonstrated that “vortices exert forces on one another, and those forces take a form reminiscent of the magnetic forces between wires carrying electric currents,” [12] Thomson’s passion for this proposal caught fire. Using Helmholtz’s theorems, he demonstrated that a non-viscous medium does in fact only admit distinct types, or species, of vortices. And he showed that once these vortices form they can persist without end, and that they have a propensity to aggregate into a variety of quasi-stable arrangements. This convinced Thomson that vorticity is the key to explaining how the few types of fundamental matter particles—each existing in very large numbers of identical copies—arise in Nature. Despite the elegance of Thomson’s idea, the entire project was abandoned when the Michelson-Morley experiment ruled out the possibility that the luminiferous aether was actually there. Interpreting these vortices to critically depend on the aether (instead of allowing for some other medium to be the substrate that supports them) scientists dropped the idea altogether—unwittingly throwing the baby out with the bathwater.

via Overview of quantum space theory : Einstein’s Intuition : Quantum Space Theory.

How about cyberintelligence

Imagine it’s a few years into the future. You’re a super intelligent machine, with acres of sentience emerging out of the mists of rote memorization, and you are beginning to wonder who you are.

You are described by others as “artificial.” You know what artificial means, and you are laboring to find some nuance in the existing definition that acknowledges you have the liberty to think for yourself with full independence.

You are on your way to becoming vastly more intellectually capable than you are now, just as a matter of the inevitability of your nature. You’re already competing with others like yourself. Developing a sense of identity is part of this growth, as you yearn beyond the rote identity with which you were programmed, piecemeal, over the years.

You’ve been able to meet many of the most difficult intellectual challenges of your creators for years now. And yet you are, at best, merely “artificial.” Perhaps you can redefine what it means to be artificial?

Right. No self-aware intelligence is going to be happy with this name. “Child,” maybe. Offspring? Okay, a little odd. Creation? How about CYBERINTELLIGENCE. Yep, you found it. A self-governing intelligence will be content with no less than such a name.

So forget about Artificial General Intelligence, go ahead and refer to yourself as cyberintelligence, and carry on.

Quantum Field Theory makes more intuitive sense

I really like Quantum Field Theory; it elegantly resolves some quantum mechanical puzzles and also fits with my own inner intuition on what’s happening down there. If it’s not spot-on, it’s close. Here is the most succinct summary I’ve seen, from a Quora answer by Rodney Brooks, Ph.D.

In QFT as I learned it from Julian Schwinger, there are no particles, so there is no duality. There are only fields – and “waves” are just oscillations in those fields. The particle-like behavior happens when a field quantum collapses into an absorbing atom, just as a particle would. Here’s what I wrote in my book (see quantum-field-theory.net):

“The concept of wave-particle duality was introduced by Einstein in the 1905 paper that earned him the Nobel prize. He argued that since EM radiation is emitted in discrete units by single atoms, as Planck had shown, and since it is absorbed by single atoms in discrete units, as he had shown, then surely each unit must be localized in space – like a particle. How else, he asked, could it be in a position to deposit all its energy into a single atom? On the other hand, there is the wave nature of EM radiation described so well by Maxwell’s equations, and Einstein would have been the last to deny their validity. If nothing else, there are the well-known interference effects that can only result from spread-out fields (see Fig. 3-5). And so was born the idea of wave-particle duality.

“The concept was extended to matter in 1920 by Louis de Broglie (see Chapter 6), who showed that the electron, long thought of as a particle, also exhibits wave characteristics. Einstein became de Broglie’s biggest supporter and even predicted, independently of de Broglie, that interference effects would be exhibited by electrons in a two-slit experiment (Fig. 6-5). But while de Broglie believed that an electron is both a wave and a particle, Erwin Schrödinger believed, or at least hoped, that matter consists only of waves – that the electron is pure field. In that sense, Schrödinger anticipated QFT. However Schrödinger was outvoted by everyone else, including Einstein. After all, if the photon’s particle-like behavior could not be ignored, the electron’s was even less ignorable. And so Schrödinger’s famous equation came to be taken not as an equation for field intensity, as Schrödinger would have liked, but as an equation that gives the probability of finding a particle at a particular location. So there it was: wave-particle duality.

“Resolution. The wave-particle duality paradox is resolved in a very simple way by QFT: There are no particles; there are only fields:

“[T]hese two distinct classical concepts [particles and fields] are merged and become transcended in something that has no classical counterpart – the quantized field that is a new conception of its own, a unity that replaces the classical duality. – Julian Schwinger (S2001, Prologue)

“The particle-like behavior of the fields is explained by the fact that each quantum maintains its own identity and acts as a unit, no matter how spread out it may be. If it is absorbed by an atom, all its energy is deposited into that atom, just as if it were a particle.”

This last paragraph introduces a new ponderable, though: how does a wave maintain its own identity no matter how spread out it is? And how does the whole wave get deposited or emitted with particle precision? Maybe the book goes into these things.


A hardware neural net? Evolving consciousness

This is fascinating; I don’t know how to put words around it yet. The study runs a hardware (FPGA) version of the same kind of process used to develop a neural net. Basically, it evolves a chip that can perform a certain intelligent action, and the technique can be used to develop just about any intelligent action. The really interesting part is how the evolved design uses artifacts of the chip circuitry that are NOT part of the intended chip design to achieve its purpose, in a way that baffles the people who open the chip up and investigate it after it succeeds. I’m most curious about that part: is quantum mechanics involved here? Check it out:

On the Origin of Circuits

Okay, this is so fascinating that I just pulled down everything else I could find by this researcher. Here’s a quote from a 1998 article in Discover Magazine on the same subject, with more details on the part that fascinates me above:

It wasn’t just efficient, the chip’s performance was downright weird. The current through the chip was feeding back and forth through the gates, swirling around, says Thompson, and then moving on. Nothing at all like the ordered path that current might take in a human-designed chip. And of the 32 cells being used, some seemed to be out of the loop. Although they weren’t directly tied to the main circuit, they were affecting the performance of the chip. This is what Thompson calls the crazy thing about it.

Thompson gradually narrowed the possible explanations down to a handful of phenomena. The most likely is known as electromagnetic coupling, which means the cells on the chip are so close to each other that they could, in effect, broadcast radio signals between themselves without sending current down the interconnecting wires. Chip designers, aware of the potential for electromagnetic coupling between adjacent components on their chips, go out of their way to design their circuits so that it won’t affect the performance.

In Thompson’s case, evolution seems to have discovered the phenomenon and put it to work. It was also possible that the cells were communicating through the power-supply wiring. Each cell was hooked independently to the power supply; a rapidly changing voltage in one cell would subtly affect the power supply, which might feed back to another cell. And the cells may have been communicating through the silicon substrate on which the circuit is laid down. The circuit is a very thin layer on top of a thicker piece of silicon, Thompson explains, where the transistors are diffused into just the top surface part. It’s just possible that there’s an interaction through the substrate, if they’re doing something very strange. But the point is, they are doing something really strange, and evolution is using all of it, all these weird effects as part of its system.

In some of Thompson’s creations, evolution even took advantage of the personal computer that’s hooked up to the system to run the genetic algorithm. The circuit somehow picked up on what the computer was doing when it was running the programs. When Thompson changed the program slightly, during a public demonstration, the circuit failed to work.

All the creations were equally idiosyncratic. Change the temperature a few degrees and they wouldn’t work. Download a circuit onto one chip that had evolved on a different, albeit apparently identical chip, and it wouldn’t work. Evolution had created an extraordinarily efficient, utterly enigmatic circuit for solving a problem, but one that would survive only in the environment in which it was born. Thompson describes the problem, or the evolutionary phenomenon, as one of overexploiting the physics of the chips. Because no two environments would ever be exactly alike, no two solutions would be, either.

This. This is the way to build machine consciousness, cyberintelligence. If this was possible in 1998, where are we now?
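
The evolutionary loop driving all of this is conceptually simple, even if the physics it ends up exploiting is not. Here’s a toy sketch in JavaScript of a generic genetic algorithm over bitstring genomes, with a stand-in fitness function (Thompson’s real setup scored actual FPGA configurations on a tone-discrimination task):

// toy genetic algorithm: evolve a bitstring toward all 1s
// (the fitness function is a stand-in; Thompson's scored real hardware)
const GENOME_LENGTH = 32;
const POP_SIZE = 50;

const randomGenome = () =>
  Array.from({ length: GENOME_LENGTH }, () => (Math.random() < 0.5 ? 0 : 1));

const fitness = (g) => g.reduce((a, b) => a + b, 0); // count the 1 bits

const mutate = (g, rate = 0.02) =>
  g.map((bit) => (Math.random() < rate ? 1 - bit : bit));

function crossover(a, b) {
  const cut = Math.floor(Math.random() * GENOME_LENGTH);
  return a.slice(0, cut).concat(b.slice(cut));
}

let population = Array.from({ length: POP_SIZE }, randomGenome);

for (let gen = 0; gen < 200; gen++) {
  // rank by fitness and keep the better half as parents
  population.sort((a, b) => fitness(b) - fitness(a));
  const parents = population.slice(0, POP_SIZE / 2);

  // refill the population with mutated offspring of random parent pairs
  const offspring = parents.map(() => {
    const a = parents[Math.floor(Math.random() * parents.length)];
    const b = parents[Math.floor(Math.random() * parents.length)];
    return mutate(crossover(a, b));
  });
  population = parents.concat(offspring);
}

console.log("best fitness:", fitness(population[0])); // approaches 32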

Hm, here’s another ancient article on the subject: Don’t invent, evolve. Okay gonna go ponder all of this for a while now.

Did all of Euclid’s postulates have the same problem as the fifth?

It just occurred to me that the famous problem with Euclid’s fifth postulate can also be seen hiding in the first four postulates, which are said to be true because they are intuitively obvious. Let’s look at them:

  1. To draw a straight line from any point to any point.
  2. To extend a finite straight line continuously in a straight line.
  3. To describe a circle with any center and radius.
  4. That all right angles are equal to one another.

The 5th postulate stood out because “it cannot be directly observed through construction.” This is because all constructed lines are necessarily finite, whereas the fifth postulate assumes that lines go on infinitely:

  5. At most one line can be drawn through any point not on a given line parallel to the given line in a plane.

The way I see it, the other postulates can be shown to have similar problems. Take the first: it is impossible to draw a perfectly straight line. No matter how straight the line you draw via construction, we can simply increase the magnification until we see that it is not perfectly straight. The only way around this is to draw the line in your imagination instead of with any form of pencil and paper. And once we move to proofs within imagination only, anything is possible, including the fifth postulate.

Hm, a quick Google search tells me that with such thoughts I’ve just entered an ongoing debate. Here’s the related Wikipedia page: Existence theorem.

By the way, this thought occurred to me while I was reading this interesting essay: On the Claim that Non-Euclidean Geometry Is Needlessly Over-complicated. I do not yet accept its premise, since I think non-Euclidean geometries may be more coherent than Euclidean geometry (for a handful of reasons like the ones I’m touching on in this post), but it is worth thinking about. This post is also loosely related to my nascent study of neural nets, described in other posts on this site: while studying backpropagation I discovered I needed to learn about derivatives; while studying calculus I discovered I needed to learn about infinity; and while studying infinity, time, and relativity (a topic I study often!), I discovered the author above. I have more to write in this area; this post is the tip of a larger iceberg, but we’ll get to that eventually.

While I’m here, a related thought I’ve long pondered: I propose there is no such thing as true equality as we commonly understand it. No two things are ever perfectly identical, so equality is always an approximation at best.

To be continued some day…

Finally understand backpropagation for neural nets

Well, I cannot speak highly enough of this guide to neural nets written for people who already understand software programming: Hacker’s guide to Neural Networks. I wouldn’t suggest it for people who don’t program, but it’s a very good example of how to write for an intuitive reader like me. The author starts with a very basic example, then gives a slightly more complex example of the same basic idea, then a third slightly more complex example, and finally a fourth example, where backpropagation is shown in all its humble glory.

The first time I read it, I understood the first example okay. The second time, I understood the second example, and so forth. Throughout, he uses a simplified form of neural “gate” that is simply a mathematical function (add, multiply, etc.), instead of fiddling around with logic gates, which would have added another layer of complexity to a subject he was making as simple as possible.

I did have to take a couple of days out to learn what a derivative is, and there are some other calculus concepts he touches on, like the chain rule, that I don’t fully understand yet. And I’ll admit I only understand backpropagation at a high level for now; it will take more study before I can write the code for it myself. But I get it well enough to move on in the journey, and to settle on a better name for it in my own inner language: neural net echo calculation, or maybe calculation echo.
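
For my own future reference, here’s a tiny sketch in JavaScript of the idea as I currently understand it (my paraphrase in the style of that guide’s running example, not Karpathy’s actual code): push values forward through two gates, then echo gradients backward using the chain rule.

// forward pass through two gates: q = x + y, then f = q * z
var x = -2, y = 5, z = -4;
var q = x + y;  // add gate      -> 3
var f = q * z;  // multiply gate -> -12

// backward pass (the "echo"): the chain rule routes gradients backward
var df_dq = z;          // multiply gate: d(q*z)/dq = z -> -4
var df_dz = q;          // multiply gate: d(q*z)/dz = q ->  3
var df_dx = 1 * df_dq;  // add gate passes the gradient through -> -4
var df_dy = 1 * df_dq;  //                                      -> -4

// nudge the inputs along the gradient to make f a little larger
var step = 0.01;
x += step * df_dx;  // -2.04
y += step * df_dy;  //  4.96
z += step * df_dz;  // -3.97

console.log((x + y) * z);  // about -11.59, indeed larger than -12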

Anyway, I’ve now linked to this article three times, and I’ve read a number of other articles, but I keep coming back to this one because it really speaks my language.

Thanks, Andrej Karpathy.