In standard propositional logic we have a certain number of variables (say A, B, C, D, E, and F), each of which can take one of two values (0 or 1, False or True). If we know the value of every variable (say A = 0, B = 1, C = 1, D = 0, and so on), then we know everything about the "world" and we do not need logic. However, instead of complete knowledge of the world (the value of each variable), we might only have some restrictions, like: we do not know the values of A, B and C, but we know that either A and B are both equal to 1 or C is not equal to 1: $(A \land B) \lor (\lnot C)$. On top of that we usually have some other "restrictions" (for example $\lnot A \land (\lnot D \land B)$, and so on). The goal of propositional logic is to combine those restrictions to say what the world can and cannot be.
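To make this concrete, here is a small sketch (the variable set and the helper names are mine, not part of the question) that brute-forces all assignments and keeps the "worlds" compatible with both example restrictions:

```python
from itertools import product

# Illustrative sketch: enumerate every assignment to A, B, C, D and keep
# those satisfying both restrictions from the text.
variables = ["A", "B", "C", "D"]

def restriction1(v):
    # (A and B) or (not C)
    return (v["A"] and v["B"]) or (not v["C"])

def restriction2(v):
    # (not A) and ((not D) and B)
    return (not v["A"]) and ((not v["D"]) and v["B"])

worlds = []
for bits in product([False, True], repeat=len(variables)):
    v = dict(zip(variables, bits))
    if restriction1(v) and restriction2(v):
        worlds.append(v)

# The surviving assignments are exactly the worlds the restrictions allow.
for w in worlds:
    print(w)
```

Here the two restrictions pin the world down completely: only A = 0, B = 1, C = 0, D = 0 survives.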
This picture naturally suggests a generalisation. What if our variables (A, B, C, D, and so on) can take more than two values? Say that instead of allowing them to be "black" and "white" (as before), we allow them to be "red", "green" and "blue". Is there a special logic for that? Within such a logic we could make statements like: if B is blue or green, then A is red, unless C is green (in which case A is blue or green).
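The generalised setting can be sketched the same way as the two-valued one; the constraint below is a simplified variant of the statement above (just "if B is blue or green, then A is red"), and all names are illustrative:

```python
from itertools import product

# Each variable now ranges over three colours instead of two truth values.
COLOURS = ["red", "green", "blue"]
variables = ["A", "B", "C"]

def constraint(v):
    # Simplified variant of the text's example:
    # if B is blue or green, then A must be red.
    return v["A"] == "red" if v["B"] in ("blue", "green") else True

worlds = []
for vals in product(COLOURS, repeat=len(variables)):
    v = dict(zip(variables, vals))
    if constraint(v):
        worlds.append(v)

print(len(worlds))  # 15 of the 27 assignments remain possible
```

Nothing here is beyond ordinary set-theoretic bookkeeping; the question is whether there is a *logic* (a sentence language and satisfaction relation) tailored to it.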
Is there a special logic for statements like this? Many-valued logic is the first thing that comes to mind, but as far as I understand it is about allowing the truth values themselves to take more than two values, which is not the same as the case I have described.
The first thing to note is that there is no widely accepted definition of what counts as a "logic". The closest thing to one is Gurevich's definition, which is what I will use to answer your question. I paraphrase it as follows: a logic $L$ is given by a pair of functions (Sen, Sat) satisfying the following conditions. Sen associates with every vocabulary $\sigma$ a recursive set Sen($\sigma$), whose elements are called the $L$-sentences of vocabulary $\sigma$. Sat associates with every vocabulary $\sigma$ a recursive relation Sat$_\sigma(A,\varphi)$, where $A$ is a $\sigma$-structure and $\varphi$ an $L$-sentence of vocabulary $\sigma$. We say that $A$ satisfies $\varphi$ if Sat$_\sigma(A,\varphi)$ holds.
Unpacking this, Sen is meant to give the set of all sentences of the logic, and Sat determines whether a structure satisfies a given sentence. As an example, there is an easy choice of Sen and Sat that yields propositional logic: set Sen($\tau$) to be the set of all propositional sentences over up to $k$ distinct propositions, where $\tau$ is a vocabulary of $k$ distinct propositions, and set Sat($A,\varphi$) to be the relation that verifies whether the assignment of the propositions given by $A$ satisfies $\varphi$. For every other vocabulary $\sigma$, Sen($\sigma$) is the empty set.
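A toy rendering of this Sat for propositional logic, assuming sentences are encoded as nested tuples (the encoding and the name `sat` are my own, purely for illustration):

```python
def sat(structure, phi):
    """Sat(A, phi): does the assignment `structure` (a dict from
    proposition names to booleans) satisfy the sentence `phi`?"""
    op = phi[0]
    if op == "var":
        return structure[phi[1]]
    if op == "not":
        return not sat(structure, phi[1])
    if op == "and":
        return sat(structure, phi[1]) and sat(structure, phi[2])
    if op == "or":
        return sat(structure, phi[1]) or sat(structure, phi[2])
    raise ValueError(f"unknown connective: {op}")

# (A and B) or (not C), checked against the assignment A=0, B=1, C=1:
phi = ("or", ("and", ("var", "A"), ("var", "B")), ("not", ("var", "C")))
print(sat({"A": False, "B": True, "C": True}, phi))  # False
```

Note that both the sentence set and the satisfaction check are recursive (computable), as the definition requires.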
Notice here that the binary-ness of the propositional calculus comes from the convention that propositions are nullary relations, so these vocabularies $b_k$ are merely sets of $k$ nullary relations. Thus there is no analogous version of what you are looking for under the classical definition of relations, and hence under this definition of logics. You can of course generalize the definition of vocabulary so that, instead of a vocabulary having only a set of relations and functions on its universe, it also has a set of functions whose codomain is some fixed boolean algebra; that is how you get many-valued logics.
There is really no good way to enforce what you are looking for; the best you can do is something like the following. Define the vocabulary $\sigma$ to have unary relations $Red(x), Blue(x), Green(x)$, etc., and $k$ constants $P_1, P_2, \ldots, P_k$ representing the propositions, for some $k$. Then define Sen($\sigma$) to be the boolean formulas over $\sigma$ (where the base terms have the form $R(P_i)$, with $R$ some relation in $\sigma$). For any $\sigma$-structure $A$ and any $L$-sentence $\varphi$, Sat($A,\varphi$) holds if and only if $A$ satisfies $\varphi$ in the usual way and, moreover, $A$ interprets the relations of $\sigma$ as a partition of the constants $P_i$: for any two distinct relations $R_\ell, R_j \in \sigma$ we have $R_\ell \cap R_j = \emptyset$, and for each $i$ there is some $j$ with $P_i \in R_j$. However, this no longer satisfies some of the basic tautologies of boolean logic; for example, the sentence $Red(P)\vee\neg Red(P)$ is no longer a tautology. To make it one you would have to restrict the class of structures over which the logic is considered, but that does not count as a logic of its own.
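A sketch of this modified Sat relation (all names, such as `sat_partition`, are illustrative), showing concretely why $Red(P)\vee\neg Red(P)$ fails to be a tautology over arbitrary structures:

```python
COLOURS = {"Red", "Blue", "Green"}

def is_partition(structure, constants):
    """Each constant lies in exactly one colour relation."""
    return all(
        sum(c in structure[colour] for colour in COLOURS) == 1
        for c in constants
    )

def sat(structure, phi):
    # Base terms have the form ("Red", "P1"), i.e. R(P_i).
    op = phi[0]
    if op in COLOURS:
        return phi[1] in structure[op]
    if op == "not":
        return not sat(structure, phi[1])
    if op == "and":
        return sat(structure, phi[1]) and sat(structure, phi[2])
    if op == "or":
        return sat(structure, phi[1]) or sat(structure, phi[2])
    raise ValueError(op)

def sat_partition(structure, phi, constants):
    # The modified Sat: ordinary satisfaction AND the partition condition.
    return is_partition(structure, constants) and sat(structure, phi)

# Red(P1) or not Red(P1) fails on a structure that is not a partition:
A = {"Red": set(), "Blue": set(), "Green": set()}  # P1 lies in no colour
phi = ("or", ("Red", "P1"), ("not", ("Red", "P1")))
print(sat_partition(A, phi, ["P1"]))  # False: not a tautology
```

The structure above falsifies the excluded-middle instance simply by failing the partition condition, which is exactly the defect described in the answer.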