Title: Limbic System-Inspired Performance-Guaranteed Control for Nonlinear Multi-Agent Systems with Uncertainties
Source: IEEE Transactions on Neural Networks and Learning Systems, Vol. 34, No. 1
Author(s): Rubio Scola, Ignacio; Garcia Carrillo, Luis Rodolfo; Hespanha, João P.
Subjects: Uncertainty; Stability; Perturbation; Control equipment
Publisher/Edition: IEEE Computational Intelligence Society; 2023
Affiliations: Rubio Scola, Ignacio: Universidad Nacional de Rosario, Facultad de Ciencias Exactas, Ingeniería y Agrimensura, Department of Mathematics, CIFASIS (UNR-FCEIA), Argentina; CONICET, Argentina. Garcia Carrillo, Luis Rodolfo: New Mexico State University, Klipsch School of Electrical and Computer Engineering, United States. Hespanha, João P.: University of California, Center for Control, Dynamical Systems, and Computation, United States.


Abstract: We introduce a performance-guaranteed Limbic System-Inspired Control (LISIC) strategy for nonlinear multi-agent systems (MAS) with uncertain high-order dynamics and external perturbations, where each agent in the MAS incorporates a LISIC structure to support the consensus controller. This novel approach, which we call Double Integrator LISIC (DILISIC), is designed to imitate double integrator dynamics after closing the agent-specific control loop, allowing the control designer to apply consensus techniques specifically formulated for double integrator agents. The objective of each DILISIC structure is then to identify and compensate model differences between the theoretical assumptions considered when tuning the consensus protocol and the actual conditions encountered in the real-time system to be controlled. A Lyapunov analysis is provided to demonstrate the stability of the closed-loop MAS enhanced with the DILISIC. Additionally, the stabilization of a complex system via DILISIC is addressed in a synthetic scenario: the consensus control of a team of flexible single-link arms. The dynamics of these agents are of fourth order, contain uncertainties, and are subject to external perturbations. The numerical results validate the applicability of the proposed method.
Limbic System-Inspired Performance-Guaranteed Control for Nonlinear Multi-Agent Systems with Uncertainties

Ignacio Rubio Scola, Luis Rodolfo Garcia Carrillo, Member, IEEE, and João P. Hespanha, Fellow, IEEE

Index Terms—Brain-like Control Design; Biology Elements in the Loop; Nonlinear Multi-Agent Systems; Performance-Guaranteed Control; Robust Control.

I. INTRODUCTION

Coordination of autonomous and dynamic Multi-agent Systems (MAS) is challenging because the dynamics of the agents, which could be, for example, aerial, ground, and water vehicles, or even a combination of them, are usually not precisely known.
Furthermore, MAS that execute missions in unstructured/uncertain environments are often subject to perturbations and varying operational conditions [1], [2]. As robotic agents become advanced and complex, finding control solutions with guaranteed performance and low complexity becomes a challenging and relevant problem in the domain of MAS with nonlinear uncertain dynamics.

Specific problems addressed and key results of the paper

Four main challenges have been identified as crucial for effective MAS performance. These challenges are discussed next, along with the proposed solutions, which are graphically represented in Fig. 1.

I. Rubio Scola is with CIFASIS (CONICET-UNR), Department of Mathematics, FCEIA-UNR, Rosario, Argentina, email: irubio@fceia.unr.edu.ar. L. R. Garcia Carrillo is with the Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, NM, USA, email: luisillo@nmsu.edu. J. Hespanha is with the Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara, USA, email: hespanha@ece.ucsb.edu. Manuscript received xxxx.

Fig. 1. The components of the novel MAS control framework proposed in this work. LISIC directly addresses Problems 1 and 2, while DILISIC allows overcoming Problem 3. All the components have a complexity appropriate for real-time implementation, as desired in order to address Problem 4.

1) Problem 1.
Lack of knowledge of the state-dependent functions and the presence of unknown perturbations: if the nonlinear dynamics were represented by an input-affine model with bounded internal states, with no perturbations, and the nonlinear state-dependent functions were known, the control problem would be trivial, because these assumptions would automatically guarantee the global existence of a solution (due to boundedness) and the convergence of the tracking error to zero. The challenge to overcome is then the lack of knowledge of the state-dependent functions and the addition of unknown external perturbations.

Proposed solution for Problem 1: we propose to estimate the state-dependent functions using a novel learning-inspired estimation and control algorithm which is capable of guaranteeing a specific performance degree with respect to unknown external perturbations. A numerical comparison with conventional estimation methods is included to demonstrate the enhanced performance obtained when implementing the proposed novel methodology. This observed performance improvement is our principal motivation to support our work on a learning-inspired algorithm.

2) Problem 2. Inconsistency with the computational model of the limbic system: learning-inspired controllers based on the computational model of the limbic system have been proposed, see for example [3] and [4] and references therein. The majority of these methodologies are based on a modified version of the computational model of the Brain Emotional Learning system. In particular, both [3] and [4] omit the Thalamus node, and the former also contains additional bias parameters inside the orbitofrontal cortex (OFC). These and other similar changes proposed in the related literature are added to simplify the design of the controllers. These modifications, however, are not consistent with the widely accepted computational model of brain emotional learning presented in [5].
Proposed solution for Problem 2: we propose a methodology which contains no bias parameters and includes the Thalamus node. Therefore, we enforce a learning-inspired computational model that closely follows the Brain Emotional Learning computational model proposed in [5].

3) Problem 3. Consensus for nonlinear MAS: in the existing literature, robust and adaptive solutions to linear second order consensus algorithms have been addressed thoroughly. On the other hand, consensus for nonlinear MAS is still a challenging and relevant open problem.

Proposed solution for Problem 3: we propose an original approach consisting of the implementation of agent-specific learning-inspired controllers over agents with uncertain high-order nonlinear dynamics, with the objective of allowing them to imitate agents with linear second order closed-loop dynamics. The technique is further enhanced with an integral action for improving the performance with respect to two main desirable properties: (i) maintaining the agent-specific closed-loop stability during the learning process, and (ii) ensuring stability in the case of unknown external perturbations. We call this novel technique the Double Integrator Limbic System-Inspired Control (DILISIC). Ultimately, DILISIC allows us to incorporate control techniques specifically designed for MAS whose agents have linear second order dynamics, and apply them in the domain of MAS whose agents have high-order nonlinear dynamics.

4) Problem 4. Computational complexity aligned with real-time requirements: advanced solutions proposed for nonlinear MAS consensus are, in general, computationally demanding. The development of a controller with a level of complexity considered to be implementable in real time, and preferably in embedded hardware, is a relevant but challenging task.
Proposed solution for Problem 4: the original solution consisting of the combination of the DILISIC framework with a selected robust and adaptive linear consensus algorithm results in a low-complexity controller, since the DILISIC structure is composed of a single-layered architecture. Therefore, the implementation of DILISIC leads to a computational complexity whose order is dictated by the consensus algorithm selected by the designer. To demonstrate applicability, we applied the novel DILISIC framework to the seminal flocking algorithm presented in [6] and the robust flocking algorithm introduced in [7], achieving a level of complexity of order O(n), whose practical implementation is feasible in real time.

Remark 1: Each one of the four problems mentioned above represents a major challenge, and in fact, there are research works proposing solutions for each one of them. However, these solutions are tailored to address each one of the problems separately, while in reality these problems exist simultaneously in almost every real-time nonlinear MAS, as illustrated in Fig. 1. In this work we propose a computational method that addresses these problems simultaneously under a holistic approach, which is an original and novel result not available in the literature.

The rest of this manuscript is organized as follows. Section II describes the existing related work in the literature. The problem statement is then presented in Section III. The novel LISIC controller is introduced in Section IV, and our main result, i.e., the double integrator closed-loop imitation DILISIC controller, is presented in Section V. Next, the performance analysis of the proposed framework for MAS consensus control is provided in Section VI by means of numerical results. Section VII concludes the manuscript and provides current and future directions of this research.
The manuscript concludes with an Appendix, which revisits robust consensus techniques for agents with double integrator dynamics.

II. RELATED WORK

Low-complexity learning-inspired systems

Biologically-inspired solutions have allowed solving computationally-complex control engineering problems whose analytical solution is very hard or even impossible to obtain. For example, a distributed neural adaptive control design was proposed in [8] to achieve motion synchronization of a group of networked nonholonomic agents with a leader agent. Similarly, a computational model that mimics a group of parts of the mammalian brain that are known to produce emotion, namely, the amygdala, the orbitofrontal cortex (OFC), the thalamus, and the sensory input cortex, was developed in [5]. This framework, which was named by its authors the Brain Emotional Learning (BEL) model, was later used in [9] for control systems purposes, leading to the so-called BEL-Based Intelligent Controller (BELBIC).

Learning systems for estimation of nonlinear functions

Classic control methodologies may require full knowledge of the dynamics of the system to be stabilized. Reinforcement Learning (RL) recently appeared as an effective tool to deal with uncertain dynamics and external disturbances. However, as mentioned in [10], RL-inspired approaches are not always accompanied by stability proofs, see for example the recent work in [11], and generally have a complexity greater than O(n), see for example the RL algorithm in [12], which has a complexity of O(n log2(n)). BELBIC is also categorized as a model-free controller, and therefore, it does not require full knowledge of the dynamics of the system to be controlled. Furthermore, BELBIC has a single-layered architecture, leading to a computational complexity of order O(n). This complexity is relatively small compared to other existing learning-based intelligent controls, and represents an appealing characteristic for real-time implementation purposes.
A different approach widely studied in the literature is the use of Radial Basis Functions (RBF) for the estimation of nonlinear functions. The authors in [13] demonstrated that an artificial Neural Network (NN) design with one hidden layer of nodes possessing radial Gaussian input-output characteristics is capable of uniformly approximating sufficiently smooth functions on a compact set. Exploiting this property in combination with Lyapunov stability analysis, a method for using dynamic-structure Gaussian RBFNN for adaptive control of affine nonlinear systems has been presented in [14]. One of the main challenges associated with the implementation of RBFNN is that they need a large number of hidden nodes to accomplish an acceptable approximation precision. The number of hidden nodes grows exponentially with the number of input signals. In [15], the authors propose to decrease the number of inputs in order to reduce the number of hidden nodes. Engineering applications have also been solved by estimating nonlinearities for feedback control using NNs with associated Lyapunov stability proofs. In [16] a NN-based output feedback control is proposed for reference tracking of underactuated surface vessels (USVs) with input saturation and uncertainties, with a NN-based observer that estimates the velocity data of the USV. Also, in [17] an adaptive output feedback control based on NNs is proposed to stabilize flexible multi-link planar manipulators.

Implementation of BEL-based control

Implementations of BELBIC for solving complex engineering problems in real-world scenarios have been proposed, see for example [18] and [19]. In our recent previous work we proposed and implemented a BEL-inspired tracking controller for a holonomic unmanned aircraft system (UAS) in the presence of uncertain system dynamics and disturbances [20].
Furthermore, we extended this method by creating a BEL-inspired flocking controller which allowed stabilizing a MAS in a similarly challenging scenario [21], [22], [23], [24]. A closely related robust controller based on an approximation of the limbic system model has been proposed in [3], and recently also in [4], for a class of uncertain nonlinear systems. However, this kind of approximation cannot be strictly considered a control strategy based on the limbic system model, due to the multiple structural modifications made by the authors to the computational model in order to guarantee the convergence of their method. In our previous work presented in [25], we introduced the idea of a robust controller inspired by the mammalian limbic system for a class of nonlinear systems through an integral action. We further extended these results in [26], where we first introduced the idea of mimicking a virtual double integrator to support the overall MAS controller. In the present manuscript, we incorporate both ideas in a unified approach, and provide all the theoretical framework required to ensure the stability of our solutions.

Nonlinear consensus for systems affine in the control

1) First and second order systems: In [27] the authors addressed the problem of consensus of nonlinear first and second order affine-in-the-control systems with non-identical partially unknown control directions and bounded input disturbances. Similarly, in [28] the authors solved the consensus control of nonlinear MAS with uncertain input disturbance using fuzzy adaptive techniques, but assuming that the input is additive in the affine-in-control model. The problem of finite-time consensus of second-order switched nonlinear MAS, where the nonlinearities are additive to the input, was considered in [29].
2) nth order systems: In [30], the authors solved the distributed consensus tracking for multiple uncertain nth order nonlinear strict-feedback systems, but the system considered is different since it is assumed that all the state derivatives are in affine-in-control form with the input acting additively. In [31], the authors proposed an adaptive neural consensus tracking control for nonlinear nth order MAS using a finite-time filtered backstepping command. Here the additive nonlinearities are unknown, but the multiplicative ones are supposed to be known. In [32] the leader-following consensus problem is solved for a class of known Lipschitz nonlinear multi-agent systems with known dynamics and an additive uncertainty, where each agent transmits only its noisy output, at discrete instants, and independently from its neighbors. In contrast with these previous methods, we propose a more general continuous nonlinear nth order system in an affine-in-the-control form in the nth derivative, with unknown state-dependent functions and subject to additive unknown perturbations.

3) nth order systems with delays: In the work presented in [33] the authors solved the problem of consensus for nonlinear time-delay systems with unknown virtual control coefficients through an adaptive neural control. The controller, however, involves solving at each time step a definite integral of the unknown functions of the systems. This characteristic makes the implementation of this method infeasible in real-time applications involving dynamic autonomous systems. Our approach, in contrast, results in a low-complexity control strategy suitable for real-time implementation.

4) Directed topology consensus: The robust consensus tracking problem is studied in [34] for a class of heterogeneous linear MAS with known dynamics, disturbances, and directed communication topology.
In [35], the leaderless consensus problem is studied for scalar linearly parameterized MAS under directed graphs with the combination of uncertainties and the non-symmetric Laplacian matrix. The consensus of linear known time-variant MAS on directed graphs through adaptive event-triggered control is studied in [36]. Our methodology builds upon undirected topology consensus, and leaves the directed topology scenario as a potential extension.

A summary of our main contributions

We introduce a novel biologically-inspired agent-specific controller for agents with high-order nonlinear dynamics constituting a MAS under undirected topology consensus. We focus our attention on agents with a particular affine-in-the-control model where the internal states are assumed to be bounded but the nonlinear state-dependent functions are unknown. The main challenge is to guarantee stability under the lack of knowledge of the main state-dependent functions combined with the presence of external perturbations. The proposed controller makes use of a computational model structure that closely resembles the widely accepted computational model of the limbic system encountered in the human brain [5]. The main purpose of the limbic system inspired control system is to estimate the unknown state-dependent functions in order to guarantee stability and an H∞ performance index under external perturbations. The fundamental characteristic pursued with the proposed control framework is to drive the high-order nonlinear dynamics of each agent to behave like the dynamics of a double integrator. The proposed technique, which we call DILISIC, allows us to consider the MAS stabilization problem from a different perspective, and to exploit high-level control techniques developed for double integrator consensus, which most of the time are only effective in ideal scenarios or numerical examples.
Our goal is then to demonstrate that the novel DILISIC can stabilize a MAS, with a guaranteed performance in terms of consensus, trajectory tracking, and disturbance rejection, despite the fact that the agents exhibit unknown nonlinear state-dependent functions and disturbances. The DILISIC framework is combined with a robust and adaptive linear consensus algorithm, which results in a controller whose complexity is dictated by the high-level controller. Therefore, the control designer can select an appropriate high-level control strategy with low complexity, ensuring an effective implementation in real-time missions and embedded systems.

III. PROBLEM STATEMENT

Consider an agent whose dynamics are consistent with a class of nonlinear systems of order n, described by

x^(n) = f(x) + g(x)u + d(x, t)    (1)

where x = [x, ẋ, ..., x^(n−1)]^T ∈ R^n is the state vector, ẋ is the derivative of x with respect to (w.r.t.) time, x^(n−1) is the (n−1)th order derivative of x w.r.t. time, and u ∈ R is the control input. Assume the state vector x and the perturbation d(x, t) are bounded by known positive constants, ‖x‖ ≤ M_x and |d(x, t)| ≤ M_d, respectively. Assume also that g(x) > 0, and that 1/g(x) and f(x) are unknown continuous scalar functions. Assume that the desired trajectory x_d and its derivatives, up to the nth order derivative, are smooth and bounded.

Remark 2: Some research works assume directly a bound on f(x) and 1/g(x), see for example the methodologies proposed in [8], [37], and [38]. However, as discussed in Problem 1 from Section I, and from a practical point of view, it is more realistic to bound the state vector x and assume that f(x) and 1/g(x) are continuous, which implies boundedness of these functions. Our assumption and reasoning are consistent, for example, with the results presented in [39], [40], [41] and [42].
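To make the class of systems in (1) concrete, the following sketch simulates a second-order (n = 2) instance under Euler integration. The particular f, g, and d below are our own illustrative assumptions for the demo (a damped pendulum-like drift with a small bounded forcing), not functions taken from the paper.

```python
import math

def agent_step(state, u, t, dt=1e-3):
    """One Euler step of an n = 2 instance of model (1): x'' = f(x) + g(x)*u + d(x, t).
    The choices of f, g, d here are illustrative assumptions only."""
    pos, vel = state
    f = -math.sin(pos) - 0.3 * vel        # drift term f(x), treated as unknown by the method
    g = 1.0 + 0.2 * math.cos(pos) ** 2    # satisfies g(x) > 0, as required in Section III
    d = 0.1 * math.sin(5.0 * t)           # bounded perturbation, |d| <= Md = 0.1
    return (pos + dt * vel, vel + dt * (f + g * u + d))

def simulate(T=10.0, dt=1e-3):
    """Open-loop (u = 0) rollout; the damping keeps the state bounded."""
    state, t = (0.5, 0.0), 0.0
    for _ in range(int(T / dt)):
        state = agent_step(state, 0.0, t, dt)
        t += dt
    return state
```

With these choices the open-loop state stays bounded, which is consistent with the bounded-state assumption ‖x‖ ≤ M_x made above.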
We now define an auxiliary variable s depending on the system's tracking error and its derivatives as

s = e^(n−1) + Δ_{n−1} e^(n−2) + ... + Δ_1 e    (2)

with the tracking error e = x − x_d, and the terms Δ_k (k = 1, 2, ..., n−1) constants such that the roots of the polynomial λ^(n−1) + Δ_{n−1} λ^(n−2) + ... + Δ_1 = 0 have negative real part. The derivative of the auxiliary variable s is calculated as

ṡ = f(x) + g(x)u + q_a(t) + d(x, t)    (3)

with q_a = −x_d^(n) + Δ_{n−1} e^(n−1) + ... + Δ_1 ė. If the functions f(x) and g(x) were known and d(x, t) = 0, it would be possible to achieve the dynamics ṡ = −Ks + u_r with the following exact matching control law

u* = −(f(x) + q_a + Ks − u_r)/g(x),    (4)

with u_r an auxiliary input to be specified next. In the next section, we propose a methodology to overcome the lack of knowledge of the state-dependent functions and perturbations, which has the desirable characteristic of being consistent with the computational model of the limbic system in the human brain. We introduce the use of a low-complexity learning algorithm to estimate the functions f(x) and g(x) when these are unknown, and then the addition of an integral action in u_r to guarantee an H∞ performance index.

IV. MAIN CONTRIBUTION: A NOVEL LIMBIC SYSTEM-INSPIRED CONTROL (LISIC) STRATEGY

An implementation of the control law in equation (4) would require precise knowledge of the unknown functions f(x) and g(x). To overcome this challenge, we shall construct online estimates f̂(x) and ĥ(x) of the functions f(x) and h(x) := 1/g(x), respectively, that appear in the control law. In contrast with our previous work [25], by estimating 1/g(x) instead of g(x), we avoid the "division-by-zero" that would arise when the estimate of g(x) crossed zero.
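The role of the auxiliary variable s and of the exact-matching law (4) can be checked numerically. The sketch below runs the n = 2 case with d = 0, f and g assumed known, and u_r = 0 (the f and g below are our own illustrative choices), so that ṡ = −Ks and s should decay exponentially from its initial value.

```python
import math

def exact_matching_run(T=5.0, dt=1e-3, D1=2.0, K=5.0):
    """n = 2 demo of (2)-(4): s = e' + D1*e, qa = -xd'' + D1*e',
    u* = -(f + qa + K*s)/g with ur = 0. Returns the final value of s."""
    x, v, t = 1.0, 0.0, 0.0                                     # agent state x, x'
    s = 0.0
    for _ in range(int(T / dt)):
        xd, xd1, xd2 = math.sin(t), math.cos(t), -math.sin(t)   # desired x_d and derivatives
        f = -math.sin(x) - 0.3 * v                              # drift, assumed known here
        g = 1.0 + 0.2 * math.cos(x) ** 2                        # control gain, g > 0
        e, edot = x - xd, v - xd1
        s = edot + D1 * e                                       # auxiliary variable (2)
        qa = -xd2 + D1 * edot
        u = -(f + qa + K * s) / g                               # exact matching law (4)
        x, v = x + dt * v, v + dt * (f + g * u)                 # Euler step of x'' = f + g*u
        t += dt
    return s
```

Starting from s(0) = 1, the discretized closed loop drives |s| to near zero (up to Euler discretization error), which is exactly the behavior the LISIC estimator must reproduce when f and g are unknown.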
We build f̂(x) and ĥ(x) using a combination of Gaussian RBF that emulates the emotional learning structure of the mammalian limbic system originally proposed in [5]:

f̂(x) := f̂(x, V_f, W_f) = V_f^T Φ_A(s(x)) − W_f^T Φ(s(x))
ĥ(x) := ĥ(x, V_h, W_h) = V_h^T Φ_A(s(x)) − W_h^T Φ(s(x))    (5)

where the terms V_f = [V_f1, V_f2, ..., V_fp, V_fth]^T, W_f = [W_f1, W_f2, ..., W_fp]^T, V_h = [V_h1, V_h2, ..., V_hp, V_hth]^T, and W_h = [W_h1, W_h2, ..., W_hp]^T are vectors of weight parameters. Amygdala and OFC weights are represented by the V and W weights, respectively. Their interconnection in the computational model is as shown in Figure 3. The terms Φ_j are Gaussian RBF that can be represented using the following structure

Φ_j = exp(−(s − μ_j)²/σ_j²),  m = max([Φ_1, Φ_2, ..., Φ_p])    (6)

where s is the error dynamics described by equation (2), and μ_j and σ_j are the corresponding mean and smoothing factor, respectively.

Fig. 2. The limbic system in the mammalian brain (from [43]). This amygdala-frontal circuit is known to be responsible for emotion regulation. The most important parts of this system are: the OFC, the Amygdala, the Thalamus, and the Sensory cortex. Diverse artificial computational models take inspiration from this structure (e.g. [5]), adopting some of its parts and ignoring those considered not relevant for specific applications.

Fig. 3. The proposed limbic-system inspired computational model closely follows the biological structure of the limbic system in the mammalian brain. The analogy between the biological system shown in Figure 2 and the proposed computational system is highlighted by means of the following color code: OFC, Amygdala, Thalamus, Sensory cortex. The sensory input is processed in the Thalamus by means of multiple RBF, generating a set of p sensory inputs. The output is an estimation of the unknown functions described in equation (5).

The RBF are Φ = [Φ_1, Φ_2, ..., Φ_p]^T and Φ_A = [Φ^T, m]^T, where m is an input coming from the Thalamus and V_th is its corresponding weight. Let the optimal weight parameters be defined as follows

[V_f*, W_f*] = arg min_{V_f, W_f} [sup_{x̃} |V_f^T Φ_A(x̃) − W_f^T Φ(x̃) − f(x̃)|],    (7)

[V_h*, W_h*] = arg min_{V_h, W_h} [sup_{x̃} |V_h^T Φ_A(x̃) − W_h^T Φ(x̃) − 1/g(x̃)|],    (8)

which are bounded by known positive constants ‖V_f*‖ ≤ M_fv, ‖W_f*‖ ≤ M_fw, ‖V_h*‖ ≤ M_hv, and ‖W_h*‖ ≤ M_hw, and x̃ is a dummy variable. In the sequel, we denote our estimates of f and h corresponding to the optimal weights by

f̂*(x) := f̂(x, V_f*, W_f*),  ĥ*(x) := ĥ(x, V_h*, W_h*),

the approximation errors with respect to these estimates by

f_e(x) = f(x) − f̂*(x),  1/h_e(x) = 1/h(x) − 1/ĥ*(x),  ω̃ = f_e(x) + u/h_e(x),    (9)

and the weight estimation errors by

Ṽ_f = V_f* − V_f,  W̃_f = W_f* − W_f,  Ṽ_h = V_h* − V_h,  W̃_h = W_h* − W_h.    (10)

Based on the following adaptation rules from [3]

V̇_f = α_f Φ_A max(B_e^T P_e s_e, 0)
Ẇ_f = −β_f Φ B_e^T P_e s_e
V̇_h = α_h Φ_A max(B_e^T P_e s_e u_h, 0)
Ẇ_h = −β_h Φ B_e^T P_e s_e u_h

we propose the new adaptation rules in equation (11), which include a projection algorithm to guarantee boundedness of the weights V_f, W_f, V_h, and W_h. Notice that the update laws of the amygdala nodes are consistent with the basic update rules in the emotional brain model from [5]; specifically, they can only increase. Correspondingly, the OFC weights can both decrease and increase, and therefore, they can prevent inappropriate learning responses of the amygdala. In equation (11), α_f, α_h, β_f, and β_h are positive scalars and u_h = f̂(x) + q_a + Ks − u_r.
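Structurally, the estimator in (5)-(6) is a single-layer computation: a Gaussian RBF bank, a max-type thalamus shortcut, and two inner products. The sketch below is our own minimal implementation (centers, widths, and weights are assumed values for illustration), not the authors' software.

```python
import math

def phi(s, mus, sigmas):
    """Gaussian RBF layer of eq. (6): Phi_j = exp(-(s - mu_j)^2 / sigma_j^2)."""
    return [math.exp(-((s - mu) ** 2) / (sg ** 2)) for mu, sg in zip(mus, sigmas)]

def fhat(s, Vf, Wf, mus, sigmas):
    """Eq. (5): fhat = Vf' PhiA - Wf' Phi, with PhiA = [Phi; m] and the
    thalamus signal m = max(Phi). Vf carries one extra (thalamus) weight."""
    Phi = phi(s, mus, sigmas)
    m = max(Phi)                                  # thalamus shortcut path
    PhiA = Phi + [m]
    amygdala = sum(v * p for v, p in zip(Vf, PhiA))   # V-weighted amygdala path
    ofc = sum(w * p for w, p in zip(Wf, Phi))         # W-weighted OFC correction
    return amygdala - ofc
```

For example, with p = 1, a center at 0 and unit width, fhat(0, [2, 1], [0.5], [0], [1]) evaluates to 2 + 1 − 0.5 = 2.5, since Φ = [1] and m = 1. ĥ(x) has the identical structure with its own weights V_h, W_h.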
V̇_f = α_f Φ_A max(B_e^T P_e s_e, 0), if ‖V_f‖ < M_fv, or ‖V_f‖ = M_fv and α_f V_f^T Φ_A max(B_e^T P_e s_e, 0) ≤ 0;
V̇_f = α_f (Φ_A − V_f^T Φ_A V_f/‖V_f‖²) max(B_e^T P_e s_e, 0), if ‖V_f‖ = M_fv and α_f V_f^T Φ_A max(B_e^T P_e s_e, 0) > 0.

Ẇ_f = −β_f Φ B_e^T P_e s_e, if ‖W_f‖ < M_fw, or ‖W_f‖ = M_fw and β_f W_f^T Φ B_e^T P_e s_e ≥ 0;
Ẇ_f = −β_f (Φ − W_f^T Φ W_f/‖W_f‖²) B_e^T P_e s_e, if ‖W_f‖ = M_fw and β_f W_f^T Φ B_e^T P_e s_e < 0.

V̇_h = α_h Φ_A max(B_e^T P_e s_e u_h, 0), if ‖V_h‖ < M_hv, or ‖V_h‖ = M_hv and α_h V_h^T Φ_A max(B_e^T P_e s_e u_h, 0) ≤ 0;
V̇_h = α_h (Φ_A − V_h^T Φ_A V_h/‖V_h‖²) max(B_e^T P_e s_e u_h, 0), if ‖V_h‖ = M_hv and α_h V_h^T Φ_A max(B_e^T P_e s_e u_h, 0) > 0.

Ẇ_h = −β_h Φ B_e^T P_e s_e u_h, if ‖W_h‖ < M_hw, or ‖W_h‖ = M_hw and β_h W_h^T Φ B_e^T P_e s_e u_h ≥ 0;
Ẇ_h = −β_h (Φ − W_h^T Φ W_h/‖W_h‖²) B_e^T P_e s_e u_h, if ‖W_h‖ = M_hw and β_h W_h^T Φ B_e^T P_e s_e u_h < 0.    (11)

We introduce an auxiliary state ξ(t) = ∫₀ᵗ s(τ)dτ to augment the system and improve performance through an integral action,

\begin{bmatrix} \dot{s} \\ \dot{\xi} \end{bmatrix} = \underbrace{\begin{bmatrix} -K & 0 \\ 1 & 0 \end{bmatrix}}_{A_e} \begin{bmatrix} s \\ \xi \end{bmatrix} + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{B_e} u_r

then, the auxiliary input term u_r can be obtained by solving the following Riccati equation

0 = A_e^T P_e + P_e A_e − P_e B_e R⁻¹ B_e^T P_e + Q_e    (12)

u_r = −B_e^T P_e s_e / r    (13)

where s_e = [s, ξ]^T, Q_e = diag{Q, Q_I} and R = ρ²r/(2ρ² − r), with Q_e = Q_e^T > 0 and 2ρ² > r. This choice for u_r guarantees a degree of robustness of the closed-loop stability against the external perturbation d, and also against the differences between the functions f(x) and g(x) and their respective estimates f̂(x) and 1/ĥ(x). Additionally, the parameter ρ defines the H∞ performance index. Figure 4 illustrates the LISIC control strategy, which will be formally introduced in our main result, see Theorem 1.

Theorem 1 (LISIC Theorem): Consider the nonlinear system in equation (1) together with the following control law

u = −ĥ(x)(f̂(x) + q_a + Ks − u_r)    (14)

where f̂ and ĥ are given by equation (5), with adaptation laws inspired by the limbic system computational model as described in equation (11), and u_r as defined in equation (13).
Along solutions of this system, the error function s remains bounded and the following H∞ tracking performance criterion is satisfied:

∫₀ᵀ s_e^T Q_e s_e dt ≤ Ṽ_f(0)^T Ṽ_f(0)/α_f + W̃_f(0)^T W̃_f(0)/β_f + Ṽ_h(0)^T Ṽ_h(0)/α_h + W̃_h(0)^T W̃_h(0)/β_h + s_e^T(0) P_e s_e(0) + ρ² ∫₀ᵀ ω^T ω dt.    (15)

Proof: The matrix P_e appearing in equation (12) is positive definite and can be decomposed as

P_e = \begin{bmatrix} P & P_2^T \\ P_2 & P_3 \end{bmatrix}    (16)

Pre- and post-multiplying equation (12) by s_e, we obtain

2 s_e^T A_e^T P_e s_e − s_e^T P_e B_e R⁻¹ B_e^T P_e s_e + s_e^T Q_e s_e = 0
⇒ −Ks B_e^T P_e s_e + P_2 s² + P_3 ξ s − s_e^T P_e B_e B_e^T P_e s_e/r = −(s_e^T Q_e s_e + s_e^T P_e B_e B_e^T P_e s_e/ρ²)/2    (17)

Using equations (3), (5), (9), and (14), and after some algebraic manipulations, the derivative of s in closed loop is

ṡ = f(x) + g(x)u + q_a + d
  = f̂*(x) + u/ĥ*(x) + q_a + d + ω̃
  = f̃ + f̂ + (h̃ − ĥ*)(f̂ + q_a + Ks − u_r)/ĥ* + q_a + d + ω̃
  = f̃ + h̃(f̂ + q_a + Ks − u_r)/ĥ* + d − Ks + u_r + ω̃
  = f̃ + h̃ u_h/ĥ* + d − Ks + u_r + ω̃,

with f̃(x) = f̂*(x) − f̂(x), h̃(x) = ĥ*(x) − ĥ(x), and u_h = f̂ + q_a + Ks − u_r, leading to

ṡ = Ṽ_f^T Φ_A − W̃_f^T Φ + Ṽ_h^T Φ_A u_h/ĥ* − W̃_h^T Φ u_h/ĥ* − Ks + u_r + ω̃ + d    (18)

with the term u_r of the form

u_r = −B_e^T P_e s_e/r = −(P s + P_2 ξ)/r    (19)

Fig. 4. Scheme of the novel LISIC controller proposed in this work. The input is composed of two main components: the first one, shown in red, depending on a limbic system structure that estimates the functions f(x) and g(x), and the second one, shown in blue, depending on a state feedback with an integral action.

The following Lyapunov function is used to prove the result

V_x = Ṽ_f^T Ṽ_f/(2α_f) + W̃_f^T W̃_f/(2β_f) + Ṽ_h^T Ṽ_h/(2ĥ*α_h) + W̃_h^T W̃_h/(2ĥ*β_h) + s_e^T P_e s_e/2,    (20)

based on the weight errors defined in (10).
Taking derivatives with respect to time, we obtain:

V̇_x = −Ṽ_f^T V̇_f/α_f − W̃_f^T Ẇ_f/β_f − Ṽ_h^T V̇_h/(ĥ*α_h) − W̃_h^T Ẇ_h/(ĥ*β_h) + ṡ_e^T P_e s_e,    (21)

where the last term can be computed using (18):

ṡ_e^T P_e s_e = ṡ(P s + P_2 ξ) + P_2 s² + P_3 s ξ
             = ṡ B_e^T P_e s_e + P_2 s² + P_3 s ξ
             = (Ṽ_f^T Φ_A − W̃_f^T Φ + Ṽ_h^T Φ_A u_h/ĥ* − W̃_h^T Φ u_h/ĥ* − Ks + u_r + ω̃ + d) B_e^T P_e s_e + P_2 s² + P_3 s ξ.    (22)

From (21) and (22), V̇_x can be rewritten as

V̇_x = Ṽ_f^T (Φ_A B_e^T P_e s_e − V̇_f/α_f) − W̃_f^T (Φ B_e^T P_e s_e + Ẇ_f/β_f) + Ṽ_h^T (Φ_A B_e^T P_e s_e u_h − V̇_h/α_h)/ĥ* − W̃_h^T (Φ B_e^T P_e s_e u_h + Ẇ_h/β_h)/ĥ* − Ks B_e^T P_e s_e − s_e^T P_e B_e B_e^T P_e s_e/r + P_2 s² + P_3 s ξ + (ω̃ + d) B_e^T P_e s_e    (23)

As a first case, we assume that the first line of the update laws in equation (11) is active; using equation (17), it is possible to rewrite equation (23) as

V̇_x ≤ −(s_e^T Q_e s_e + s_e^T P_e B_e B_e^T P_e s_e/ρ²)/2 + Ṽ_f^T Φ_A (B_e^T P_e s_e − max(B_e^T P_e s_e, 0)) + Ṽ_h^T Φ_A (B_e^T P_e s_e u_h − max(B_e^T P_e s_e u_h, 0))/ĥ* + (ω̃ + d) B_e^T P_e s_e    (24)

For the second case, i.e., when the update laws are defined by the second line of equation (11), for each dynamic of the NN weights we obtain

V̇_x ≤ −(s_e^T Q_e s_e + s_e^T P_e B_e B_e^T P_e s_e/ρ²)/2 + Ṽ_f^T Φ_A (B_e^T P_e s_e − max(B_e^T P_e s_e, 0)) + Ṽ_h^T Φ_A (B_e^T P_e s_e u_h − max(B_e^T P_e s_e u_h, 0))/ĥ* + (ω̃ + d) B_e^T P_e s_e + Ṽ_f^T V_f (V_f^T Φ_A/‖V_f‖²) max(B_e^T P_e s_e, 0) + Ṽ_h^T V_h (V_h^T Φ_A/‖V_h‖²) max(B_e^T P_e s_e u_h, 0) − W̃_f^T W_f (W_f^T Φ/‖W_f‖²) B_e^T P_e s_e − W̃_h^T W_h (W_h^T Φ/‖W_h‖²) B_e^T P_e s_e u_h    (25)

The new term depending on V_f is analyzed, using equation (10) and the respective conditions in equation (11). Defining the scalar ζ_1 := V_f^T Φ_A max(B_e^T P_e s_e, 0)/‖V_f‖² > 0 for this case, the term becomes

Ṽ_f^T V_f ζ_1 = (V_f*^T − V_f^T) V_f ζ_1 = V_f*^T V_f ζ_1 − ‖V_f‖² ζ_1 ≤ (‖V_f*‖ − ‖V_f‖) ‖V_f‖ ζ_1.

We know that in this case ‖V_f‖ = M_fv and ‖V_f*‖ ≤ M_fv, and therefore we can conclude that

Ṽ_f^T V_f (V_f^T Φ_A/‖V_f‖²) max(B_e^T P_e s_e, 0) ≤ 0.

A similar analysis can be done for the new terms depending on V_h, W_f, and W_h; therefore, we can conclude that the projection algorithm does not modify the Lyapunov bound in equation (24).
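The boundedness enforced by the projection in (11) can also be checked numerically. The discrete-time sketch below (a forward-Euler step with gains, regressor, and drive signal assumed by us for illustration) applies the amygdala-type V-update and verifies that the weight norm never leaves the ball of radius M, up to discretization error.

```python
import math

def projected_step(V, phiA, drive, alpha, M, dt):
    """Euler step of the V-update in (11): V' = alpha*PhiA*max(drive, 0),
    with the radial component removed on the boundary ||V|| = M."""
    g = max(drive, 0.0)
    grad = [alpha * p * g for p in phiA]
    norm = math.sqrt(sum(v * v for v in V))
    if norm >= M - 1e-12 and sum(v * dv for v, dv in zip(V, grad)) > 0:
        # boundary case: subtract (V' PhiA) V / ||V||^2, leaving a tangential update
        vphi = sum(v * p for v, p in zip(V, phiA))
        grad = [alpha * (p - vphi * v / norm ** 2) * g for v, p in zip(V, phiA)]
    return [v + dt * dv for v, dv in zip(V, grad)]

def max_norm_over_run(steps=5000, dt=1e-3):
    """Drive the update from a point on the boundary and record the largest norm."""
    V, M = [0.6, 0.8], 1.0                    # start with ||V|| = M
    worst = 0.0
    for _ in range(steps):
        V = projected_step(V, [1.0, 0.0], 1.0, 1.0, M, dt)
        worst = max(worst, math.sqrt(sum(v * v for v in V)))
    return worst
```

On the boundary the modified update satisfies V·V̇ = 0, which is exactly why the Ṽ_f^T V_f term analyzed above is non-positive there.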
Considering equation (24) and using the fact that a max(b, 0) ≤ max(ab, 0) for all a, b ∈ R, we obtain

V̇_x ≤ −(s_e^T Q_e s_e + s_e^T P_e B_e B_e^T P_e s_e/ρ²)/2 + (Ṽ_f^T Φ_A + Ṽ_h^T Φ_A u_h/ĥ*)(B_e^T P_e s_e − max(B_e^T P_e s_e, 0)) + (ω̃ + d) B_e^T P_e s_e.

Defining the worst-case perturbation, in the sense of maximizing V̇_x, as

ω = ω̃ + d + |Ṽ_f^T Φ_A + Ṽ_h^T Φ_A u_h/ĥ*| sign(ω̃ + d), (26)

with a maximum value for ω² less than s_e^T Q_e s_e/ρ², the following is obtained:

V̇_x ≤ −(s_e^T Q_e s_e + s_e^T P_e B_e B_e^T P_e s_e/ρ²)/2 + ω B_e^T P_e s_e. (27)

Adding and subtracting ρ²ω²/2, the following is obtained:

V̇_x ≤ −s_e^T Q_e s_e/2 − (B_e^T P_e s_e/ρ − ρω)²/2 + ρ²ω²/2
V̇_x ≤ −s_e^T Q_e s_e/2 + ρ²ω²/2 = −s^T Q s/2 − ξ^T Q_I ξ/2 + ρ²ω²/2. (28)

By integrating equation (28) from t = 0 to t = T, the H∞ tracking performance criterion in equation (15) is attained. If ω ∈ L₂, Barbalat's Lemma [44] can be used to prove that the error function s converges asymptotically to zero.

V. A NOVEL LISIC STRATEGY FOR MAS CONSENSUS

In MAS consensus, the main objective is to design a control signal u_i for each agent i such that the collective motion of all the agents exhibits an emergent behavior arising from simple rules followed by the individuals, without any central coordination. In the framework proposed in this work, each agent i incorporates a LISIC structure to support the overall consensus controller. The objective of each LISIC_i control structure is to identify and compensate the model differences between the theoretical assumptions made when tuning the MAS controllers, see equations (41)-(43), and the actual conditions encountered in the system. Another objective of the LISIC framework is to enable the implementation of second-order MAS control techniques on MAS whose agents exhibit nth-order nonlinear dynamics, like those described by equation (1).
Even if a linear model is adopted for each agent (see, for example, the MAS dynamics in equation (37) in the Appendix), the agents are interconnected under a nonlinear MAS protocol, as equations (41)-(43) show. This leads to a nonlinear propagation of the MAS model uncertainties and external perturbations. The novel framework interfaces the LISIC structure with the MAS by implementing a double-integrator reference model that creates a virtual reference for the variable s. The proposed interconnection framework, which we call the Double Integrator LISIC (DILISIC), is shown in Fig. 5. The DILISIC system is composed of an agent in closed loop with a LISIC, imitating the desired double-integrator dynamics.

Remark 3: In the absence of model mismatches and/or perturbations, the LISIC strategy should not interfere with the nominal MAS control.

A. Double integrator closed-loop behavior

We propose to use the LISIC structure to compensate the differences between the high-order model of each agent and a nominal system described by a double integrator. By doing this, LISIC facilitates the implementation of any consensus-inspired control strategy specifically designed for second-order nonlinear agents. As a first step, consider a reference model representing the double-integrator dynamics

ẍ_d = u_DI, (29)

where the subscript (·)_DI indicates the double-integrator system that the LISIC closed loop should imitate, and we consider that u_DI ∈ C^(n−2). Next, the system output is compared with the reference model that represents the double-integrator dynamics:

e = x_d − x, (30)
x⁽ⁿ⁾ = f(x) + g(x)(u_DI + u_LISIC), (31)

where u_LISIC comes from the controller in equation (14) and u_DI is defined in equation (29). The DILISIC closed-loop system can now be rewritten as

x⁽ⁿ⁾ = f(x) + g(x)u_LISIC + g(x)u_DI − u_DI^(n−2) + u_DI^(n−2) + d(x, t). (32)

The stability proof is straightforward using Theorem 1. For the particular case of a second-order system we have

ẍ = f(x) + g(x)u_LISIC + g(x)u_DI − u_DI + u_DI. (33)

If f(x) = 0 and g(x) = 1, then the systems in equations (29) and (33) are identical. If both systems have the same initial conditions, there is no need for compensation and the LISIC controller output should be u_LISIC = 0. With the DILISIC structure imitating double-integrator agents, we can now take a MAS whose agents exhibit high-order dynamics and directly apply consensus techniques designed for double-integrator agents.

Fig. 5. DILISIC structure: a LISIC controller scheme imitating the double-integrator behavior. A nonlinear agent affected by an external perturbation d is placed in closed loop with a LISIC controller. This arrangement corrects the error s between the system output x and a double-integrator system that generates x_d from u_DI.

B. Robust Adaptive Control of MAS

Due to the incorporation of the DILISIC structure, each agent inherits a nonlinear component that can be treated as a nonlinear function. With the main objective of designing an adaptive flocking control to overcome this challenge, we revisit a classic MAS model, see equation (37) in the Appendix, now including a nonlinear additive perturbation [45]:

q̇_i = p_i,
ṗ_i = f(p_i) + u_i,   i = 1, 2, …, n̄, (34)

where q_i, p_i, and u_i represent the position, velocity, and control input of agent i, respectively, and n̄ is the number of agents. Additionally, f(p_i) is a nonlinear function that represents the error produced by the DILISIC controller in the transformation of the original agent into a double integrator. We use a distributed flocking algorithm of the form

u_i = −∇_{q_i} V(q) + Σ_{j∈N_i} (a_ij(t) + δ_ij(t))(p_j − p_i),

where q = [q_1, …, q_n̄], and ∇_{q_i} V(q) is a gradient-based term of the collective potential function V defined in (39). The second term in u_i is the velocity consensus term defined in (41).
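The velocity consensus term Σ_{j∈N_i}(a_ij + δ_ij)(p_j − p_i) simply steers each agent's velocity toward those of its neighbors. A minimal sketch (the three-agent line graph, unit weights a_ij = 1, and zero perturbation δ_ij = 0 are illustrative assumptions):

```python
import numpy as np

def velocity_consensus(i, p, neighbors, a, delta):
    """Velocity consensus term: sum_{j in N_i} (a_ij + delta_ij) * (p_j - p_i)."""
    return sum((a[(i, j)] + delta[(i, j)]) * (p[j] - p[i]) for j in neighbors[i])

# Three agents on a line graph (0 - 1 - 2), unit weights, no perturbation.
p = [np.array([1.0]), np.array([3.0]), np.array([5.0])]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
a = {(i, j): 1.0 for i in neighbors for j in neighbors[i]}
delta = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}

u1 = velocity_consensus(1, p, neighbors, a, delta)
print(u1)  # (p_0 - p_1) + (p_2 - p_1) = [-2.] + [2.] = [0.]
```

For the middle agent of this symmetric configuration the neighbor contributions cancel exactly, which is the consensus equilibrium condition for this term.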
Additionally, a_ij are the elements of the spatial adjacency matrix, δ_ij = δ_ji is the asymmetric parameter perturbation, and N_i is the neighborhood set of agent i defined in (38). In this work we adopt the flocking algorithm proposed in [6]. Additional techniques regarding MAS consensus with linear double-integrator agents are revisited in the Appendix. The following assumptions are needed, as stated in [45].

Assumption 1: There exist a constant diagonal matrix H = diag(h_1, …, h_n) and a positive value ε such that

(z − y)^T (f(z, t) − f(y, t)) − (z − y)^T H (z − y) ≤ −ε (z − y)^T (z − y), ∀z, y ∈ R^m.

Assumption 2: There exist positive constants I_ij such that

‖δ_ij‖ ≤ I_ij, ∀t ≥ 0, i ≠ j; i, j = 1, …, n̄. (35)

Assumption 3: The collective potential function V satisfies

Σ_{i=1}^{n̄} ∇_{q_i} V(q) = 0,
∇_{q_i} V(q) = ∇_{q_i − q̄} V(q − 1_n̄ ⊗ q̄), i = 1, …, n̄,

where 1_n̄ is the n̄-dimensional vector of ones, ⊗ denotes the Kronecker product of matrices, and q̄ = (1/n̄) Σ_{j=1}^{n̄} q_j. All linear and piecewise-linear functions satisfy the condition in Assumption 1.

Lemma 1 (from [45]): Suppose that Assumptions 1-3 hold and the MAS velocity network is connected. The MAS in equation (34) can reach flocking formation under the following distributed adaptive control law:

L̇_ij = −α_ij (p_j − p_i)^T (p_j − p_i), (36)

where α_ij = α_ji are positive constants, 1 ≤ i ≠ j ≤ n̄, and L is the positive-semidefinite Laplacian matrix of the undirected network. The Laplacian matrix is defined as L_ij = −a_ij for i ≠ j, and L_ii = k_i, with k_i = −Σ_{j=1, j≠i}^{n̄} L_ij; the terms a_ij are taken from [6], as stated in equation (41) in the Appendix. The next section presents numerical simulations showing the performance of the proposed distributed MAS controller.

VI.
SIMULATIONS

The performance of the proposed performance-guaranteed flocking controller for high-order nonlinear MAS, inspired by the mammalian limbic system, is validated here in a set of numerical simulations. Each agent in the MAS corresponds to a flexible single-link arm under gravity and joint friction, whose dynamics are of fourth order and are expressed by [46]

x⁽⁴⁾ = f(x) + g(x)u + d,

f(x) = g ẋ²/l − (g/(l α²)) cos(x) + 2g ẋ²/(l α) + (g x⁽²⁾/l) sin(x) − c x/(m l² α²) − 2c ẋ/(m l² α) − c x⁽²⁾/(m l²) + x⁽²⁾/α² − 2 x⁽³⁾/α,

g(x) = c/(m l² α²),

with c = 2276.3, g = 9.81, m = 2.27, l = 0.96, and α = −36.52. The challenge we address consists of the stabilization and consensus of a group of seven of these flexible single-link arms when the functions f(x) and g(x) are unknown, the MAS is affected by external perturbations, and the agents evolve in an environment with obstacles. The parameters adopted in our simulations are d(t) = 0 for 0 ≤ t < 70, d(t) = 6000 for 70 ≤ t < 75, and d(t) = 0 for t ≥ 75; x_i(0) = [x_{0,i}, 0, 0, 0]^T, with x_{0,i} equally distributed between −0.64 rad and 0.64 rad; and a sampling time of T_s = 10⁻⁴. The parameters of the sigmoidal function are a = 20, b = 50, and ε = 0.1 (see equation (40) in the Appendix). The DILISIC tuning parameters are p = 45, r = 0.0018375, ρ = 0.035, ∆_1 = 125, ∆_2 = 75, ∆_3 = 15, K = 20, Q = 0.2, and Q_I = 10, and the reference is x_d = −π sin(t)/10. The radial basis function centers µ_j are equally distributed between −45 and 45, and σ_j = 45. The weight parameters V_f, V_h, W_f, and W_h, as well as the integrator state ξ, are initialized at zero. The MAS controller is tuned with the following parameters: c_1^α = 50, c_2^α = 2c_1^α, c_1^β = 50, c_2^β = 2c_1^β, c_1^γ = 0.04, c_2^γ = 2c_1^γ, c_1^sc = 0.07, c_2^sc = 2c_1^sc, and the adaptation rate α_ij = 30. The derivatives needed in equation (32) are obtained from a first-order backward difference formula.
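The first-order backward difference used to obtain the derivatives of u_DI in equation (32) can be sketched as follows, with the paper's sampling time T_s = 10⁻⁴ (the ramp test signal is an illustrative assumption):

```python
import numpy as np

Ts = 1e-4  # sampling time from the simulation setup

def backward_diff(u, Ts):
    """First-order backward difference: du[k] ~ (u[k] - u[k-1]) / Ts."""
    du = np.zeros_like(u)
    du[1:] = (u[1:] - u[:-1]) / Ts
    du[0] = du[1]  # no past sample at k = 0; reuse the first valid estimate
    return du

t = np.arange(0.0, 1.0, Ts)
u = 3.0 * t                 # illustrative ramp: exact derivative is 3
du = backward_diff(u, Ts)
assert np.allclose(du, 3.0)
```

Applying the same difference twice yields the second derivative of u_DI needed for the fourth-order agents (n = 4); note that each differentiation amplifies measurement noise by roughly 1/T_s, which is why the smoothness requirement u_DI ∈ C^(n−2) matters.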
Numerical results for a single agent: inverted pendulum

To compare the performance of a BEL-based NN with a classical RBFNN, we first present a numerical simulation corresponding to the stabilization of an inverted pendulum, whose dynamic model is described in [3]. For this comparison, a low-order system was chosen because the classical RBFNN exhibited poor performance when stabilizing the more complex flexible single-link arm model. Both the BEL-based NN and the RBFNN are tuned with the same parameters, including the robust term u_r. Figure 6 illustrates the position and the tracking error for both controllers; notice the faster convergence of the BEL-based NN with respect to the classical RBFNN.

Fig. 6. Comparison of numerical results of a BEL-based NN with respect to a classical RBFNN with the same tuning parameters, the same initial conditions, and the robust term u_r. The BEL-based NN exhibits faster convergence with respect to the classical RBFNN.

Numerical results for a MAS: seven flexible single-link arms

The group of seven agents is tasked to follow a center of mass (CoM) reference in consensus mode. The agents evolve in an environment with obstacles and are affected by external perturbations. The numerical results in Figure 7 show the evolution of the angular position of the seven agents. At time t = 30 seconds, an obstacle appears at position x = 0.9 rad. Notice that, as soon as the obstacle appears, the separation distance between agents is adjusted and successfully maintained at the desired values. The CoM of the MAS is modified at the same time, see Figure 8, allowing the agents to maintain the desired inter-agent separation and an effective consensus. An external perturbation appears at time t = 70 seconds, simulating a uniform force along the positive x axis and affecting all the agents simultaneously. In summary, Figure 7 shows that each agent rejects the perturbation, and Figure 8 shows that the MAS can effectively follow the CoM. The agents' velocities are shown in Figure 9. Notice that these states exhibit small corrections between t = 30 seconds and t = 45 seconds, which are due to the presence of the obstacle.
Additionally, the variation observed at time t = 70 seconds is due to the perturbation. In both cases, the proposed controller effectively ensures MAS velocity consensus, according to the design requirements. For illustration purposes, the time evolution of the external perturbation and the corresponding control input of agent 1 in the MAS are shown in Figure 10. Figure 11 shows the estimations of the functions f(x) and h(x) computed by the LISIC controller for the same agent.

VII. CONCLUSIONS

This paper introduced a novel biologically inspired agent-specific controller for agents with high-order nonlinear dynamics constituting a MAS. The agents' dynamic models

Fig. 7. Positions of a 7-agent MAS (1-dimensional agents) following a sinusoidal reference and maintaining a security distance from a wall-type obstacle (black line). The obstacle appears at time t = 30 seconds, at position x = 0.9 rad. When the obstacle appears, the separation distance between agents is adjusted and successfully maintained at the desired values. The same adjustment is observed at t = 70 seconds, when an external perturbation affects the MAS.

Fig. 10. Time evolution of the perturbation and the corresponding control input generated by agent 1 in the MAS. The high control input values at the beginning of the simulation are due to the initialization of the LISIC weights. The same behavior is observed during the compensation of the external perturbation. The perturbation illustrated here affects all of the agents in the MAS from time t = 70 to t = 75.

Fig. 8. Time evolution of the CoM of the MAS formation (blue signal) w.r.t. the desired reference (black signal). The obstacle appears at time t = 30 seconds, affecting the CoM of the MAS. At time t = 70 seconds an external perturbation modifies the formation. In both cases, the proposed control strategy enables effective asymptotic tracking of the reference.

Fig. 11.
The estimation of the functions f(x) and h(x), as obtained from the LISIC structure in agent 1. The adaptation is observed during the first seconds of the simulation. Additional adaptations are needed under the effect of the obstacle, present between t = 30 seconds and t = 45 seconds, and also under the effect of the external perturbation between t = 70 seconds and t = 75 seconds.

Fig. 9. Angular velocity of the MAS. Notice the effect of the obstacle between t = 30 s and t = 45 s, and the perturbation at around t = 70 s. In both cases, the proposed controller effectively ensures MAS velocity consensus, according to the design requirements.

belong to an affine-in-the-control class, where the nonlinear state-dependent functions are unknown. Making use of a computational structure that closely resembles the limbic system of the human brain [5], the controller is able to estimate the unknown state-dependent functions, even in the presence of obstacles and external perturbations. The proposed framework, which we call DILISIC, establishes a novel control scheme capable of imitating double-integrator dynamics after closing the control loop. Then, even if the agents exhibit high-order dynamics, the control designer can directly apply consensus techniques originally formulated for double-integrator agents. Furthermore, by relying on the LISIC strategy, the individual agents are provided with robustness to external disturbances, an effect that is also achieved at the overall MAS level. The DILISIC framework is designed in such a way that, if the control designer chooses a high-level control strategy with complexity O(n), then the overall controller exhibits the same complexity; this ensures the effective implementation of the method in embedded systems and real-time missions. A Lyapunov proof is provided to demonstrate stability of the proposed strategy.
Additionally, to demonstrate the effectiveness and performance of the proposed approach, a set of numerical results is provided, consisting of the flocking control of a group of seven flexible single-link arms under gravity and joint friction, with fourth-order uncertain dynamics, operating in a scenario with obstacles and disturbances. Comparisons with similar methods are also provided to show the superior performance obtained when DILISIC is adopted. Current directions of this research explore the implementation of DILISIC for consensus of Unmanned Aircraft Systems (UASs) in 2-dimensional and 3-dimensional scenarios.

ACKNOWLEDGMENT

This work was supported by the National Scientific and Technical Research Council of Argentina (CONICET), the Army Research Office (ARO) under grant W911NF1810210, and by the National Science Foundation (NSF) under grants EPCN-1608880 and CI-1730589.

APPENDIX

A. Consensus for Agents with Double Integrator Dynamics

Assuming n̄ agents with second-order dynamics evolving in an m-dimensional space (m = 2, 3), the motion of each agent i can be described as

q̇_i = p_i,
ṗ_i = u_i,   i = 1, 2, …, n̄, (37)

where {u_i, q_i, p_i} ∈ R^m are the control input, position, and velocity of agent i, respectively. An associated dynamic graph G(υ, ε), consisting of a set of vertices υ and edges ε, is represented by υ = {1, 2, …, n̄} and ε ⊆ {(i, j) : i, j ∈ υ, j ≠ i}. Each agent is represented by a vertex, and each edge represents a communication link between a pair of agents. The neighborhood set of agent i is

N_i^α = {j ∈ υ_α : ‖q_j − q_i‖ < r, j ≠ i}, (38)

where ‖·‖ is the Euclidean norm in R^m, and the positive constant r is the range of interaction between agents i and j.
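The neighborhood set in equation (38) can be computed directly from the agent positions; a minimal sketch (the planar positions and range r = 2 are illustrative assumptions):

```python
import numpy as np

def neighborhood(i, q, r):
    """N_i = { j != i : ||q_j - q_i|| < r }, as in equation (38)."""
    return [j for j in range(len(q)) if j != i
            and np.linalg.norm(q[j] - q[i]) < r]

# Four agents in the plane; distances from agent 0 are 1.0, 3.0, and 1.5.
q = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [0.0, 1.5]])
print(neighborhood(0, q, r=2.0))  # -> [1, 3]
```

In a simulation this set is recomputed at every step, since the graph G is dynamic: agents enter and leave each other's interaction range as the flock moves.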
To describe the geometric model of the flock, i.e., the α-lattice, the following set of algebraic conditions should be solved [6]:

‖q_j − q_i‖_σ = d_α, ∀j ∈ N_i^α,

where d_α = ‖d‖_σ, the positive constant d is the distance between neighbors i and j, and ‖·‖_σ is the σ-norm, expressed by

‖z‖_σ = (√(1 + ε‖z‖²) − 1)/ε, with ε > 0.

The σ-norm is a map from R^m to R≥0 and is differentiable everywhere. From the above constraints, a smooth collective potential function can be obtained as

V(q) = (1/2) Σ_i Σ_{j≠i} ψ_α(‖q_j − q_i‖_σ), (39)

where ψ_α(z) is a smooth pairwise potential function defined as ψ_α(z) = ∫_{d_α}^z φ_α(s) ds, with

φ_α(z) = ρ_h(z/r_α) φ(z − d_α),
φ(z) = ((a + b)σ_1(z + c) + (a − b))/2, (40)
σ_1(z) = z/√(1 + z²).

Here, φ(z) is a sigmoidal function with 0 < a ≤ b and c = |a − b|/√(4ab), which guarantees that φ(0) = 0. The term ρ_h(z) is a scalar bump function that varies smoothly between 0 and 1. A possible choice for ρ_h(z) is [6]

ρ_h(z) = 1 for z ∈ [0, h); ρ_h(z) = 0.5(1 + cos(π(z − h)/(1 − h))) for z ∈ [h, 1]; ρ_h(z) = 0 otherwise.

The flocking control algorithm u_i = u_i^α + u_i^β + u_i^γ introduced in [6] allows avoiding obstacles while making all agents form an α-lattice configuration. The control algorithm has three parts: u_i^α is the interaction component between two α-agents, u_i^β is the interaction component between an α-agent and an obstacle (the β-agent), and u_i^γ is a goal component consisting of a distributed navigational feedback term. In particular,

u_i^α = c_1^α Σ_{j∈N_i^α} φ_α(‖q_j − q_i‖_σ) n_{i,j} + c_2^α Σ_{j∈N_i^α} a_ij(q)(p_j − p_i), (41)

u_i^β = c_1^β Σ_{k∈N_i^β} φ_β(‖q̂_{i,k} − q_i‖_σ) n̂_{i,k} + c_2^β Σ_{k∈N_i^β} b_{i,k}(q)(p̂_{i,k} − p_i), (42)

u_i^γ = −c_1^γ (q_i − q_r) − c_2^γ (p_i − p_r) − c_1^sc ((Σ_{i=1}^{n̄} q_i)/n̄ − q_r) − c_2^sc ((Σ_{i=1}^{n̄} p_i)/n̄ − p_r), (43)

where c_1^α, c_1^β, c_1^γ, c_1^sc, c_2^α, c_2^β, c_2^γ, and c_2^sc are positive constants. The pair (q_r, p_r) gives the coordinates of a virtual leader of the MAS flock, i.e., the γ-agent, which can be represented as {q̇_r = p_r, ṗ_r = f_r(q_r, p_r)}. The terms (Σ_{i=1}^{n̄} q_i)/n̄ and (Σ_{i=1}^{n̄} p_i)/n̄ define the coordinates of the Center of Mass (CoM) of the MAS.
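The α-lattice building blocks above (σ-norm, bump function ρ_h, sigmoid φ, and action function φ_α) translate directly into code; a minimal sketch (ε = 0.1 and a = 20, b = 50 follow the simulation setup, while h = 0.2 is an illustrative assumption):

```python
import numpy as np

def sigma_norm(z, eps=0.1):
    """sigma-norm: (sqrt(1 + eps*||z||^2) - 1) / eps; differentiable everywhere."""
    return (np.sqrt(1.0 + eps * np.dot(z, z)) - 1.0) / eps

def rho_h(z, h=0.2):
    """Bump function: 1 on [0, h), smooth cosine decay on [h, 1], 0 elsewhere."""
    if 0.0 <= z < h:
        return 1.0
    if h <= z <= 1.0:
        return 0.5 * (1.0 + np.cos(np.pi * (z - h) / (1.0 - h)))
    return 0.0

def sigma_1(z):
    return z / np.sqrt(1.0 + z * z)

def phi(z, a=20.0, b=50.0):
    """Sigmoid of equation (40); c = |a - b| / sqrt(4ab) guarantees phi(0) = 0."""
    c = abs(a - b) / np.sqrt(4.0 * a * b)
    return 0.5 * ((a + b) * sigma_1(z + c) + (a - b))

def phi_alpha(z, r_alpha, d_alpha):
    """Action function phi_alpha(z) = rho_h(z / r_alpha) * phi(z - d_alpha)."""
    return rho_h(z / r_alpha) * phi(z - d_alpha)

assert abs(phi(0.0)) < 1e-12          # phi(0) = 0, as required
assert sigma_norm(np.zeros(2)) == 0.0
assert rho_h(0.05) == 1.0 and rho_h(2.0) == 0.0
```

The assertion φ(0) = 0 verifies the choice of c, which in turn places the zero of φ_α exactly at the lattice distance d_α, so inter-agent forces vanish when the α-lattice condition ‖q_j − q_i‖_σ = d_α holds.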
The terms n_{i,j} and n̂_{i,k} are vectors defined as in [7] and [6]. The stability of the MAS flocking follows from Theorem 1 in [7]. The weights c_1^sc and c_2^sc, corresponding to the attractive force between the MAS CoM and the reference, can be set freely so that the CoM converges to the reference as quickly as possible. In [7] the authors show that the choice of c_1^sc, c_2^sc does not affect the consensus stability or the obstacle avoidance. Finally, b_{i,k}(q) and a_ij(q) are the elements of the heterogeneous adjacency matrix B(q) and the spatial adjacency matrix A(q), respectively, described as b_{i,k}(q) = ρ_h(‖q̂_{i,k} − q_i‖_σ/d_β) and a_ij(q) = ρ_h(‖q_j − q_i‖_σ/r_α) ∈ [0, 1], i ≠ j. In these equations, r_α = ‖r‖_σ, a_ii(q) = 0 for all i and q, d_β = ‖d′‖_σ, and r_β = ‖r′‖_σ. The positive constant d′ is the distance between an α-agent and obstacles. The term φ_β(z) is a repulsive action function defined as φ_β(z) = ρ_h(z/d_β)(σ_1(z − d_β) − 1). We can now define the set of β-neighbors of the i-th α-agent, in a similar way to equation (38), as

N_i^β = {k ∈ ν_β : ‖q̂_{i,k} − q_i‖ < r′},

where the positive constant r′ is the range of interaction of an α-agent with obstacles.

REFERENCES

[1] Beard, R.W., McLain, T.W., Nelson, D.B., Kingston, D., and Johanson, D. (2006). "Decentralized Cooperative Aerial Surveillance Using Fixed-Wing Miniature UAVs". Proceedings of the IEEE, Vol. 94(7), pp. 1306–1324.
[2] Garcia Carrillo, L.R. and Vamvoudakis, K.G. (2019). "Deep-Learning Tracking for Autonomous Flying Systems Under Adversarial Inputs". IEEE Transactions on Aerospace and Electronic Systems (Early Access).
[3] Baghbani, F., Akbarzadeh-T, M.R., and Sistani, M.B.N. (2018). "Stable robust adaptive radial basis emotional neurocontrol for a class of uncertain nonlinear systems". Neurocomputing, 309, 11–26.
[4] Baghbani, F., Akbarzadeh-T, M.R., Naghibi-Sistani, M.B., and Akbarzadeh, A. (2020).
"Emotional neural networks with universal approximation property for stable direct adaptive nonlinear control systems". Engineering Applications of Artificial Intelligence, Vol. 89, 103447.
[5] Moren, J. (2002). "Emotion and learning: A computational model of the amygdala". Ph.D. thesis, Lund University Cognitive Studies.
[6] Olfati-Saber, R. (2006). "Flocking for multi-agent dynamic systems: Algorithms and theory". IEEE Transactions on Automatic Control, 51(3), 401–420.
[7] La, H.M. and Sheng, W. (2009). "Flocking control of a mobile sensor network to track and observe a moving target". In IEEE ICRA 2009.
[8] Peng, Z., Wang, D., Liu, H.H.T., and Sun, G. (2013). "Neural adaptive control for leader-follower flocking of networked nonholonomic agents with unknown nonlinear dynamics". International Journal of Adaptive Control and Signal Processing, Wiley, Vol. 28(6), pp. 479–495.
[9] Lucas, C., Shahmirzadi, D., and Sheikholeslami, N. (2004). "Introducing BELBIC: Brain emotional learning based intelligent controller". Intelligent Automation & Soft Computing, 10(1), 11–21.
[10] Nian, R., Liu, J., and Huang, B. (2020). "A review on reinforcement learning: Introduction and applications in industrial process control". Computers and Chemical Engineering, Elsevier, Vol. 139, 106886.
[11] Young, Z. and La, H.M. (2020). "Consensus, cooperative learning, and flocking for multiagent predator avoidance". International Journal of Advanced Robotic Systems, SAGE, pp. 1–19.
[12] Lattimore, T., Hutter, M., and Sunehag, P. (2013). "The Sample-Complexity of General Reinforcement Learning". PMLR 28(3), pp. 28–36.
[13] Sanner, R.M. and Slotine, J.J.E. (1996). "Gaussian Networks for Direct Adaptive Control". IEEE Transactions on Neural Networks, Vol. 7(5), pp. 837–863.
[14] Fabri, S. and Kadirkamanathan, V. (1996). "Dynamic Structure Neural Networks for Stable Adaptive Control of Nonlinear Systems". IEEE Transactions on Neural Networks, Vol. 7(5), pp. 1151–1167.
[15] Liu, Q., Ge, S.S., Li, Y., Yang, M., Xu, H., and Tee, K.P. (2020).
"A Simpler Adaptive Neural Network Tracking Control of Robot Manipulators by Output Feedback". In IEEE ICCAR 2020, pp. 96–100.
[16] Park, B.S., Kwon, J.-W., and Kim, H. (2017). "Neural network-based output feedback control for reference tracking of underactuated surface vessels". Automatica, Vol. 17, pp. 353–359.
[17] Rahmani, B. and Belkheiri, M. (2018). "Adaptive neural network output feedback control for flexible multi-link robotic manipulators". International Journal of Control, Vol. 92(10), pp. 2324–2338.
[18] Subramaniam, M., Gopalraj, M., Sakthivelu, S.S., and Kandasamy, S. (2016). "BELBIC Tuned PI Controller Based Chopper Driven PMDC Motor". Circuits and Systems, 7.
[19] Wu, Q., Lin, C.-M., Fang, W., Chao, F., Yang, L., Shang, C., and Zhou, C. (2018). "Self-organizing Brain Emotional Learning Controller Network for Intelligent Control System of Mobile Robots". IEEE Access, 6.
[20] Jafari, M., Xu, H., and Garcia Carrillo, L.R. (2018). "A neurobiologically-inspired intelligent trajectory tracking control for unmanned aircraft systems with uncertain system dynamics and disturbance". Transactions of the Institute of Measurement and Control, Vol. 41(2), pp. 417–432.
[21] Jafari, M., Fehr, R., Garcia Carrillo, L.R., and Xu, H. (2017). "Brain emotional learning-based intelligent tracking control for Unmanned Aircraft Systems with uncertain system dynamics and disturbance". In IEEE ICUAS 2017, pp. 1470–1475.
[22] Jafari, M., Fehr, R., Garcia Carrillo, L.R., Quesada, E.S.E., and Xu, H. (2017). "Implementation of brain emotional learning-based intelligent controller for flocking of multi-agent systems". IFAC-PapersOnLine, Vol. 50(1), pp. 6934–6939.
[23] Jafari, M., Xu, H., and Garcia Carrillo, L.R. (2017). "Brain emotional learning-based intelligent controller for flocking of multi-agent systems". In IEEE ACC 2017.
[24] Jafari, M., Xu, H., and Garcia Carrillo, L.R.
(2020). "A biologically-inspired reinforcement learning based intelligent distributed flocking control for Multi-Agent Systems in presence of uncertain system and dynamic environment". IFAC Journal of Systems and Control, Vol. 13, 100096.
[25] Rubio Scola, I., Garcia Carrillo, L.R., and Hespanha, J.P. (2020). "Stable robust controller inspired by the mammalian limbic system for a class of nonlinear systems". In IEEE ACC 2020, pp. 842–847.
[26] Rubio Scola, I., Garcia Carrillo, L.R., Hespanha, J.P., and Lozano, R. (2020). "Performance-guaranteed consensus control inspired by the mammalian limbic system for a class of nonlinear multi-agents". IFAC-PapersOnLine, Elsevier, Vol. 53(2), pp. 9496–9501.
[27] Chen, J., Li, J., Zhang, R., and Wei, C. (2019). "Distributed fuzzy consensus of uncertain topology structure multi-agent systems with non-identical partially unknown control directions". Applied Mathematics and Computation, Vol. 362, 124581.
[28] Chen, J., Li, J., and Yuan, X. (2020). "Global fuzzy adaptive consensus control of unknown nonlinear multi-agent systems". IEEE Transactions on Fuzzy Systems, Vol. 28(3), pp. 510–522.
[29] Zou, W., Shi, P., Xiang, Z., and Shi, Y. (2019). "Finite-time consensus of second-order switched nonlinear multi-agent systems". IEEE Transactions on Neural Networks and Learning Systems, Vol. 31(5), pp. 1757–1762.
[30] Yoo, S.J. (2013). "Distributed consensus tracking for multiple uncertain nonlinear strict-feedback systems under a directed graph". IEEE Transactions on Neural Networks and Learning Systems, Vol. 24(4), pp. 666–672.
[31] Zhao, L., Yu, J., Lin, C., and Ma, Y. (2017). "Adaptive neural consensus tracking for nonlinear multi-agent systems using finite-time command filtered backstepping". IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(11), 2003–2012.
[32] Ménard, T., Ajwad, S.A., Moulay, E., Coirault, P., and Defoort, M. (2020). "Leader-following consensus for multi-agent systems with nonlinear dynamics subject to additive bounded disturbances and asynchronously sampled outputs". Automatica, Elsevier, Vol. 121, 109176.
[33] Ge, S.
S., Hong, F., and Lee, T.H. (2004). "Adaptive neural control of nonlinear time-delay systems with unknown virtual control coefficients". IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 34(1), pp. 499–516.
[34] Hong, H., Yu, W., Wen, G., and Fu, J. (2017). "Robust consensus tracking for heterogeneous linear multi-agent systems with disturbances". In IEEE 11th Asian Control Conference (ASCC).
[35] Mei, J. (2018). "Model Reference Adaptive Consensus for Uncertain Multi-agent Systems under Directed Graphs". In IEEE Conference on Decision and Control 2018, pp. 6198–6203.
[36] Li, X., Sun, Z., Tang, Y., and Karimi, H.R. (2020). "Adaptive Event-Triggered Consensus of Multi-Agent Systems on Directed Graphs". IEEE Transactions on Automatic Control, Vol. 66(4), pp. 1670–1685.
[37] Rezaee, H. and Abdollahi, F. (2017). "Stationary Consensus Control of a Class of High-Order Uncertain Nonlinear Agents With Communication Delays". IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 49(6), pp. 1285–1290.
[38] Wang, G. (2019). "Distributed control of higher-order nonlinear multi-agent systems with unknown non-identical control directions under general directed graphs". Automatica, Elsevier, Vol. 110, 108559.
[39] Meng, W., Liu, P.X., Yang, Q., and Sun, Y. (2019). "Distributed Synchronization Control of Non-affine Multi-agent Systems With Guaranteed Performance". IEEE Transactions on Neural Networks and Learning Systems, Vol. 31(5), pp. 1571–1580.
[40] Qin, J., Zhang, G., Zheng, W.X., and Kang, Y. (2019). "Neural Network-Based Adaptive Consensus Control for a Class of Non-affine Nonlinear Multi-agent Systems With Actuator Faults". IEEE Transactions on Neural Networks and Learning Systems, Vol. 30(12), pp. 3633–3644.
[41] Wang, Q., Psillakis, H.E., and Sun, C. (2019). "Adaptive Cooperative Control With Guaranteed Convergence in Time-Varying Networks of Nonlinear Dynamical Systems". IEEE Transactions on Cybernetics, Vol. 50(12), pp. 5035–5046.
[42] Dong, H. and Yang, X. (2021). "Adaptive neural finite-time control for space circumnavigation missions with uncertain input constraints". Journal of the Franklin Institute, Elsevier, Article in Press.
[43] Compare, A., Zarbo, C., Shonin, E., Van Gordon, W., and Marconi, C. (2014). "Emotional regulation and depression: A potential mediator between heart and mind". Cardiovascular Psychiatry and Neurology, 2014:324374.
[44] Khalil, H.K. (1996). "Nonlinear Systems", Second Edition. Prentice Hall.
[45] Yu, W. and Chen, G. (2010). "Robust Adaptive Flocking Control of Nonlinear Multi-agent Systems". In 2010 IEEE International Symposium on Computer-Aided Control System Design (CACSD).
[46] Cambera, J.C., Chocoteco, J.A., and Feliu, V. (2014). "Feedback Linearizing Controller for a Flexible Single-Link Arm under Gravity and Joint Friction". In ROBOT2013: First Iberian Robotics Conference, M.A. Armada, A. Sanfeliu, and M. Ferre, Eds. Springer Cham, pp. 169–184.