
Alexa Research Paper Shows Genetic Algorithms Offer Best Solution for Neural Network Optimization

Amazon's Alexa Science researchers published a paper providing a theoretical basis for neural-network optimization. While showing that it is computationally intractable to find a perfect solution, the paper does provide a formulation, the Approximate Architecture Search Problem (a-ASP), that can be solved with genetic algorithms.

In a recent blog post describing the work, research engineer Adrian de Wynter cast the problem of choosing a neural-network architecture as an exercise in function approximation; in this formulation, the function is the "true" mapping of input data to outputs, and the approximation is the learned neural-network model. Network architectures are often chosen based on intuition or trial-and-error, but de Wynter claims this "arbitrary selection of a neural architecture is unlikely to provide the best solution." Instead, given a set of neural-network components, such as convolutional or max-pooling layers, an automated optimal architecture search would find the combination of these components that approximates the function with minimal error, and de Wynter's work provides "theoretical guarantees on the accuracy of its computations." He shows that the general architecture search problem (ASP) is intractable; that is, no algorithm can be guaranteed to solve it in polynomial time. He thus proposes a "relaxed" formulation of the problem, approximate-ASP (a-ASP), which can be solved in polynomial time using co-evolutionary genetic algorithms.

Automatic optimization of machine-learning systems is an active research area. Many of the major cloud platforms offer AutoML systems, and there are several open-source options. Most AutoML solutions address all parts of the ML pipeline, including data cleanup and hyper-parameter optimization as well as model selection. By contrast, de Wynter's research particularly focuses on the selection of the best neural-network model structure. While some researchers have tackled this problem using techniques such as Bayesian Optimization, de Wynter's paper claims genetic algorithms "perform better than others in a generalized setting."

A genetic algorithm is an optimization technique that is based on the concepts of biological evolution: "survival of the fittest." Each potential solution to a problem has a fitness score, indicating how well it solves the problem, and a genetic representation. The key idea is that a solution must be represented in a way that allows for random mutations as well as crossover from other solutions. The genetic algorithm runs for several generations, trying out various solutions, applying mutations, and keeping the fittest results. In de Wynter's formulation, the genetic algorithm searches for combinations of neural-network components, such as convolutional layers, which are drawn from a set of components that is shown to be equivalent to a Turing machine. The genetic algorithm must find a sequence of these components that produces a network which best approximates the desired mapping of input data to outputs, subject to a constraint on maximum sequence length.
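The mechanics described above can be illustrated with a toy genetic algorithm. This sketch is not de Wynter's actual algorithm: the target architecture, component set, and fitness function are all invented for illustration. Fitness here simply counts how many positions of a candidate layer sequence match an assumed "best" sequence, standing in for the error-based fitness a real architecture search would compute by training and evaluating each candidate network.

```python
import random

# Toy genetic algorithm: evolve a sequence of layer types toward a
# hypothetical target architecture. Illustrative only -- a real search
# would score candidates by training each network and measuring error.

COMPONENTS = ["conv", "maxpool", "relu", "dense"]
# Assumed "best" architecture the search is trying to discover:
TARGET = ["conv", "relu", "maxpool", "conv", "relu", "dense"]

def fitness(arch):
    """Count positions where the candidate matches the target."""
    return sum(a == t for a, t in zip(arch, TARGET))

def mutate(arch, rate=0.1):
    """Randomly replace each component with probability `rate`."""
    return [random.choice(COMPONENTS) if random.random() < rate else c
            for c in arch]

def crossover(a, b):
    """Splice two parent sequences at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    pop = [[random.choice(COMPONENTS) for _ in TARGET]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the fittest half of each generation survives unchanged, the best solution found so far is never lost, while crossover and mutation keep exploring new sequences; the population typically converges on the target within a few dozen generations. The maximum-sequence-length constraint from de Wynter's formulation corresponds here to fixing the candidate length to that of the target.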

Other research teams have applied genetic or evolutionary algorithms to the problem of optimizing deep-learning systems. Google last year open-sourced AdaNet, a TensorFlow-based framework for evolutionary-based AutoML. More recently, Uber open-sourced EvoGrad, a PyTorch library for evolutionary algorithms which treats the population as an abstract probability distribution. According to de Wynter,

[M]any researchers have come to the conclusion that co-evolutionary algorithms provide the best way to build machine learning systems. But the function-approximation framework from my paper helps provide a more secure theoretical foundation for their intuition.
