<\body>

<\tmdoc-title>
  Multivariate Bernstein Subdivision solver
</tmdoc-title>

We consider here the problem of computing the solutions of a polynomial system

<\equation*>
  f_1(x_1, \ldots, x_n) = 0, \quad \ldots, \quad f_s(x_1, \ldots, x_n) = 0
</equation*>

in a box B = [a_1, b_1] \times \cdots \times [a_n, b_n] \subset \mathbb{R}^n. This solver uses the representation of multivariate polynomials in the Bernstein basis, the analysis of sign variations and univariate solvers to localise the real roots of the system. The output is a set of small-enough boxes, which may contain these roots.

By a direct extension to the multivariate case, any polynomial f(x_1, \ldots, x_n) \in \mathbb{R}[x_1, \ldots, x_n] of degree d_j in the variable x_j can be decomposed as

<\equation*>
  f(x_1, \ldots, x_n) = \sum_{i_1 = 0}^{d_1} \cdots \sum_{i_n = 0}^{d_n} b_{i_1, \ldots, i_n}\, B_{d_1}^{i_1}(x_1; a_1, b_1) \cdots B_{d_n}^{i_n}(x_n; a_n, b_n),
</equation*>

where (B_{d_1}^{i_1}(x_1; a_1, b_1) \cdots B_{d_n}^{i_n}(x_n; a_n, b_n))_{0 \le i_1 \le d_1, \ldots, 0 \le i_n \le d_n} is the tensor-product Bernstein basis on the domain B = [a_1, b_1] \times \cdots \times [a_n, b_n], and b(f) = (b_{i_1, \ldots, i_n})_{0 \le i_1 \le d_1, \ldots, 0 \le i_n \le d_n} are the control coefficients of f on B. The polynomial f is represented in this basis by the n-th order tensor of its control coefficients b(f). The size of B, denoted by |B|, is |B| = \max \{ |b_i - a_i| ; i = 1, \ldots, n \}. The de Casteljau algorithm also applies in each of the directions x_j, j = 1, \ldots, n, so that we can split this representation in any of these directions.

We use the following properties to isolate the roots:

<\definition>
  For any f \in \mathbb{R}[x_1, \ldots, x_n] and any j = 1, \ldots, n, let

  <\eqnarray*>
    m_j(f; x_j) = \sum_{k = 0}^{d_j} \min_{\{ i : i_j = k \}} b_{i_1, \ldots, i_n}\, B_{d_j}^{k}(x_j; a_j, b_j),

    M_j(f; x_j) = \sum_{k = 0}^{d_j} \max_{\{ i : i_j = k \}} b_{i_1, \ldots, i_n}\, B_{d_j}^{k}(x_j; a_j, b_j).
  </eqnarray*>
</definition>

<\theorem>
  [Projection Lemma] For any u = (u_1, \ldots, u_n) \in B and any j = 1, \ldots, n, we have

  <\equation*>
    m_j(f; u_j) \le f(u) \le M_j(f; u_j).
  </equation*>
</theorem>

As a direct consequence, we obtain the following corollary:

<\corollary>
  For any root u = (u_1, \ldots, u_n) of the equation f(u) = 0 in the domain B, we have \mu_j \le u_j \le \bar{\mu}_j, where

  <\itemize>
    <item>\mu_j (resp. \bar{\mu}_j) is either a root of m_j(f; x_j) = 0 or M_j(f; x_j) = 0 in [a_j, b_j], or a_j (resp. b_j) if m_j(f; x_j) = 0 (resp. M_j(f; x_j) = 0) has no root on [a_j, b_j];

    <item>m_j \le 0 \le M_j on [\mu_j, \bar{\mu}_j].
  </itemize>
</corollary>

The solver implementation contains the following main steps.
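As an illustration of the projection bounds above, here is a small Python sketch (the function names are ours, and the actual solver is not implemented this way): the control coefficients of m_j and M_j are obtained by taking, for each index k in the j-th direction, the minimum and maximum of the control tensor over the remaining directions, and the resulting univariate Bernstein forms are evaluated with the de Casteljau algorithm.

```python
import numpy as np

def de_casteljau(coeffs, t):
    """Evaluate a univariate Bernstein form with control
    coefficients `coeffs` at a parameter t normalised to [0, 1]."""
    b = np.array(coeffs, dtype=float)
    n = len(b)
    for r in range(1, n):
        # one de Casteljau round: convex combinations of neighbours
        b[: n - r] = (1 - t) * b[: n - r] + t * b[1 : n - r + 1]
    return b[0]

def projection_bounds(tensor, j):
    """Control coefficients of m_j and M_j: the min and max of the
    control tensor over all index directions except the j-th."""
    axes = tuple(k for k in range(tensor.ndim) if k != j)
    return tensor.min(axis=axes), tensor.max(axis=axes)
```

For any u in the box, evaluating the m-coefficients at u_j lower-bounds f(u) and evaluating the M-coefficients upper-bounds it, which is exactly the statement of the Projection Lemma.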
The solver consists in

<\itemize>
  <item>applying a preconditioning step to the equations;

  <item>reducing the domain;

  <item>if the reduction ratio is too small, splitting the domain;
</itemize>

and these steps are repeated until the size of the domain is smaller than a given epsilon. We have several options for each of these steps, leading to different algorithms with different behaviours, as we will see in the next sections. Indeed, the solvers that we consider are parameterized by

<\itemize-minus>
  <item>the preconditioner: a transformation of the initial system into a system which has a better numerical behaviour;

  <item>the reduction strategy: the technique used to reduce the initial domain, when searching for the roots of the system;

  <item>the subdivision rule: the technique used to subdivide the domain, in order to simplify the forthcoming steps of the search for roots.
</itemize-minus>

Solving the system f = 0 is equivalent to solving the system M f = 0, where M is an s \times s invertible matrix. As such a transformation may increase the degree of some equations with respect to some variables, it has a cost, which might not be negligible in some cases. Moreover, if not all the variables are involved in each polynomial of the system, that is, if the system is sparse with respect to the variables, such a preconditioner may transform it into a system which is not sparse anymore. In this case, we would prefer a partial preconditioner acting on subsets of the equations sharing a subset of the variables.

The following preconditioners are currently available:

<\itemize-dot>
  <item>Global preconditioner: we minimise the distance between the equations, considered as vectors in an affine space of polynomials of a given degree.

  <item>Local (Jacobian) preconditioner: in this part we consider square systems, for which s = n. If we are ``near'' a simple solution of the system f = 0, we transform the system f locally into the system J_f(u_0)^{-1} f, where J_f(u_0) = (\partial_{x_j} f_i(u_0))_{1 \le i, j \le s} is the Jacobian matrix of f at a point u_0 of the domain B where it is invertible. A direct computation shows that locally (in a neighborhood of u_0), the level-sets of the preconditioned equations f_i, i = 1, \ldots, s, are orthogonal to the x_i-axes.
</itemize-dot>
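The local preconditioner can be sketched in a few lines of Python (a hedged illustration under our own naming, not the solver's actual data structures): since multiplying the system by a constant matrix is a linear operation, it acts directly on the Bernstein control tensors, and each preconditioned tensor is a linear combination of the original ones.

```python
import numpy as np

def jacobian_precondition(coeff_tensors, jacobian_at_u0):
    """Replace the system f by J_f(u0)^{-1} f.  `coeff_tensors` is a
    list of s equally-shaped control-coefficient arrays, one per
    equation; the same invertible matrix acts on every coefficient,
    so we contract it against the stacked tensors."""
    M = np.linalg.inv(jacobian_at_u0)
    stacked = np.stack(coeff_tensors)        # shape (s, d1+1, ..., dn+1)
    # result[i] = sum_j M[i, j] * coeff_tensors[j]
    return list(np.tensordot(M, stacked, axes=1))
```

On a system that is already linear, this reproduces the orthogonality property stated above: the preconditioned equations become (up to translation) the coordinate functions themselves.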
We can prove that the reduction based on the polynomial bounds m_j and M_j behaves like a Newton iteration near a simple root, that is, we have quadratic convergence. Here are several reduction strategies that can be considered:

<\itemize>
  <item>Convex hull reduction: a method called Interval Projected Polyhedron (or IPP) can be used to reduce the domain of search. It is based on the convex hull property of the control points. A new reduced domain is computed by intersecting the convex hull of the projected set of control points with the axis. With our notations, this amounts to considering the control polygons defining m_j(f_k; u_j) and M_j(f_k; u_j) instead of these polynomials.

  <item>Root localisation reduction: a direct improvement of the convex hull reduction consists in computing the first (resp. last) root of the polynomial m_j(f_k; u_j) (resp. M_j(f_k; u_j)) in the interval [a_j, b_j]. The current implementation of this reduction step allows us to consider the convex hull reduction as one iteration step of this reduction process. Here, we compute all the roots of the polynomials m_j(f_k; u_j) and M_j(f_k; u_j) and keep the intervals [\mu_j, \bar{\mu}_j] defined in the corollary above. The guarantee that the computed intervals contain the roots of f is achieved by controlling the rounding mode of the operations during the de Casteljau computation. This method is particularly interesting in the cases where more than one interval has to be considered. This may happen at the beginning of the search, but is less expected locally, near a single root.
</itemize>

Here are some simple rules that can be used to subdivide a domain. We will show in the next section their impact on the performance of the solver.

<\itemize>
  <item>The domain is split in half in a direction j for which |b_j - a_j| is maximal.

  <item>Instead of choosing the size of the interval as a criterion for the direction in which we split, we may choose a direction j and an equation f_k such that M_j(f_k; u_j) - m_j(f_k; u_j) (or the difference between the control coefficients) is maximal.
</itemize>
Then, a value u_j for splitting the domain in the direction x_j is chosen

<\itemize>
  <item>either where M_j(f_k; u_j) - m_j(f_k; u_j) has a local maximum,

  <item>or where

  <\equation>
    \int_{a_j}^{u_j} (M_j - m_j)(f_k; v)\, d v = \frac{1}{2} \int_{a_j}^{b_j} (M_j - m_j)(f_k; v)\, d v.
  </equation>
</itemize>

The right-hand side of this equation can easily be computed from the sum of all the control coefficients of M_j(f_k; v) - m_j(f_k; v).
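The convex hull (IPP) reduction described earlier can be sketched for a univariate Bernstein polynomial, as it is used on the projections m_j(f_k; u_j) and M_j(f_k; u_j). This is a hedged Python illustration (our own naming; in particular it does not control rounding modes, which the actual implementation needs for a guaranteed enclosure): the control points (t_k, c_k) sit at t_k = a + k (b - a)/d, and every edge of their convex hull is a segment between two control points, so scanning all opposite-sign pairs yields the endpoints of the hull's intersection with the axis.

```python
def ipp_reduce(coeffs, a, b):
    """One Interval Projected Polyhedron step for a univariate
    Bernstein polynomial on [a, b]: intersect the convex hull of
    the control points (t_k, c_k) with the axis c = 0.  Returns the
    reduced interval, or None when the hull misses the axis (no
    root in [a, b])."""
    d = len(coeffs) - 1
    ts = [a + k * (b - a) / d for k in range(d + 1)]
    # control points lying exactly on the axis
    hits = [ts[k] for k in range(d + 1) if coeffs[k] == 0]
    # every hull edge joins two control points of opposite signs,
    # so the extreme crossings over all such pairs are the hull's
    # intersection endpoints with the axis
    for k in range(d + 1):
        for l in range(k + 1, d + 1):
            if coeffs[k] * coeffs[l] < 0:
                s = coeffs[k] / (coeffs[k] - coeffs[l])
                hits.append(ts[k] + s * (ts[l] - ts[k]))
    return (min(hits), max(hits)) if hits else None
```

The quadratic pair scan is chosen for clarity; a production version would walk only the upper and lower envelopes of the control polygon.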
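The second splitting rule can be sketched as follows, in a hedged Python illustration (function names are ours, and the bisection tolerance is arbitrary). It uses the fact, implicit in the remark on the right-hand side above, that the antiderivative of a degree-d Bernstein form is the degree-(d + 1) Bernstein form whose control coefficients are the cumulative sums of the original ones scaled by (b - a)/(d + 1); in particular the total integral is that scale factor times the sum of all the control coefficients.

```python
def _eval_bernstein(coeffs, t):
    """Plain de Casteljau evaluation of a Bernstein form at t in [0, 1]."""
    b = list(coeffs)
    while len(b) > 1:
        b = [(1 - t) * b[k] + t * b[k + 1] for k in range(len(b) - 1)]
    return b[0]

def half_mass_split(coeffs, a, b, tol=1e-12):
    """Split value u in [a, b] such that the integral over [a, u] of
    the nonnegative Bernstein form with control coefficients
    `coeffs` (standing for M_j - m_j) is half of its integral over
    [a, b]."""
    d = len(coeffs) - 1
    scale = (b - a) / (d + 1)
    F = [0.0]
    for c in coeffs:
        F.append(F[-1] + scale * c)      # antiderivative coefficients
    half = 0.5 * F[-1]                   # F[-1] is the total integral
    lo, hi = a, b
    while hi - lo > tol:                 # F is monotone: bisect F(u) = half
        mid = 0.5 * (lo + hi)
        if _eval_bernstein(F, (mid - a) / (b - a)) < half:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since the integrand is nonnegative, the antiderivative is nondecreasing and the bisection is well defined.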