Since what you want are the "common elements", discard the idea of thinking in terms of arrangements. By the concept I am used to, in an arrangement the order matters; in an intersection, the order is totally irrelevant.
For example, the arrangements {0,1} and {1,0} are different. Treated as sets, however, they are the same set. I do not know what the formal definition of intersection would be for arrangements, but for sets it looks like this:
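A ∩ B = { x | x ∈ A and x ∈ B }

That is, the intersection is the set of elements that belong to both A and B at the same time, with no notion of order or repetition.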
If you have no limitation whatsoever with the Java you are using, you can use Set directly and solve your problem. Something like this (Java 8 version):
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

int[] a = {1, 2, 3, 4, 5, 6, 7};
int[] b = {0, 1, 2, 3};

// creates a Set object from the array a
Set<Integer> setA = IntStream.of(a).boxed().collect(Collectors.toSet());
// creates a Set object from the array b
Set<Integer> setB = IntStream.of(b).boxed().collect(Collectors.toSet());

// computes the intersection: checks which elements of a are also in b,
// turning the result back into an array right away
int[] c = setA.stream().filter(setB::contains).mapToInt(Integer::intValue).toArray();
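For the example arrays above, the intersection has the elements 1, 2 and 3; a line like the one below (assuming java.util.Arrays is imported) lets you check it, keeping in mind that sets are unordered, so the printing order is not guaranteed:

System.out.println(Arrays.toString(c)); // something like [1, 2, 3]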
Now, this seems to me like a teacher-proposed exercise meant to make you develop all the logic behind it, using as few external implementations as possible. So, can I assume that everything is up to you? That is, that all the code comes from your own keyboard?
Well, the first step would be to find a way to deal with not knowing the size of the resulting set in advance. I propose a crazy, ugly and yet ingenious solution: we create a scratch array of the maximum possible size and operate on it. Then, since we count how many elements of the intersection were found, we create an array of the appropriate size and copy the interesting part from the scratch into the new array:
int[] a = {1, 2, 3, 4, 5, 6, 7};
int[] b = {0, 1, 2, 3};
int[] rascunho = new int[a.length < b.length ? a.length : b.length];
int tamanhoIntersecoesNoRascunho = 0;

// iterating over the elements of the array a
for (int elA : a) {
    // first, I must check whether it was already inserted in the scratch, to avoid repeating it...
    boolean contidoRascunho = false;
    for (int i = 0; i < tamanhoIntersecoesNoRascunho; i++) {
        // yes, it had already been inserted in the scratch
        if (elA == rascunho[i]) {
            contidoRascunho = true;
            break;
        }
    }
    // if it is already in the scratch, go to the next element to be processed
    if (contidoRascunho) {
        continue;
    }
    // checks whether it belongs to the other set
    for (int elB : b) {
        // the intersection was detected
        if (elA == elB) {
            rascunho[tamanhoIntersecoesNoRascunho] = elA;
            tamanhoIntersecoesNoRascunho++;
            break;
        }
    }
}

// creating the final array, of the appropriate size
int[] c = new int[tamanhoIntersecoesNoRascunho];
for (int i = 0; i < tamanhoIntersecoesNoRascunho; i++) {
    c[i] = rascunho[i];
}
This solution simply works, but (in its current incarnation) it is bad for several reasons:
- its execution time is O(n*m), n being the size of the first set and m the size of the second set; I think it gets a little worse if the intersection is large, but that would not change the asymptotic complexity
- it requires spending extra memory, even though it is plausible that the space at the end of the scratch will not be used
You can even improve the runtime by initially preprocessing the arrays to put them in ascending order. In this case, however, note that the iteration must be done differently. Read more: [1] [2]. Leaving the sorting aside, the detection would then look like this:
int[] a = {1, 2, 3, 4, 5, 6, 7};
int[] b = {0, 1, 2, 3};

// ... mysterious routine that sorts a and b ...

int i = 0;
int j = 0;
int idxRascunho = 0;
int[] rascunho = new int[a.length < b.length ? a.length : b.length];
int ultimoElementoAdicionado = 0;

while (i < a.length && j < b.length) {
    if (a[i] == b[j]) {
        int candidato = a[i];
        if (idxRascunho == 0 || ultimoElementoAdicionado != candidato) {
            rascunho[idxRascunho] = candidato;
            idxRascunho++;
            // remember the last value added, so duplicates are skipped
            ultimoElementoAdicionado = candidato;
        }
        i++;
        j++;
    } else if (a[i] < b[j]) {
        i++;
    } else { // a[i] > b[j]...
        j++;
    }
}

// creating the final array, of the appropriate size
int[] c = new int[idxRascunho];
for (i = 0; i < idxRascunho; i++) {
    c[i] = rascunho[i];
}
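As for the "mysterious routine that sorts a and b": if using the standard library only for this step is acceptable (as mentioned at the end of the answer), java.util.Arrays.sort(a) and java.util.Arrays.sort(b) already solve it. Otherwise, a hand-rolled sort works too; the sketch below uses insertion sort (the helper name ordena is just an illustration), but note that it is O(n^2), so to keep the O(n log n) claim at the end you would want the library sort or something like merge sort:

static void ordena(int[] v) {
    for (int k = 1; k < v.length; k++) {
        int atual = v[k];
        int pos = k - 1;
        // shift the larger elements one position to the right
        while (pos >= 0 && v[pos] > atual) {
            v[pos + 1] = v[pos];
            pos--;
        }
        // drop the current element into its correct place
        v[pos + 1] = atual;
    }
}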
This solution improves part of the performance, but the memory problem remains...
To solve this, how about a linked list? Assuming the code is entirely ours, we can write our own linked list. In our case, the list will be extremely simple and each node will carry only 3 pieces of information:
- the number it holds
- the pointer to the next element in the list (null means end of list)
- the size of the list so far, counting the current node
We can also add a utility method to it that transforms the list into an integer array.
So, our linked list node class can be defined like this:
class Nodo {
    final Nodo proximo;
    final int tamanho;
    final int conteudo;

    // to build a new node, I only need to be told its content and its neighbor
    Nodo(int conteudo, Nodo proximo) {
        this.conteudo = conteudo;
        this.proximo = proximo;
        this.tamanho = (proximo != null ? proximo.tamanho : 0) + 1;
    }

    static int[] toArray(Nodo nodo) {
        if (nodo == null) {
            return new int[0];
        }
        int[] vetor = new int[nodo.tamanho];
        for (Nodo it = nodo; it != null; it = it.proximo) {
            // since each node stores the size of the list that follows it + 1,
            // putting its content at position it.tamanho - 1 keeps the
            // insertion order of the nodes inside the array, so the
            // generated array will be sorted
            vetor[it.tamanho - 1] = it.conteudo;
        }
        return vetor;
    }
}
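Just to illustrate the behavior of toArray (a hypothetical snippet, the values are arbitrary): since each node records the size of the list up to itself, converting the list preserves the insertion order in the resulting array:

Nodo lista = null;
lista = new Nodo(1, lista); // list: 1
lista = new Nodo(2, lista); // list: 2 -> 1
lista = new Nodo(3, lista); // list: 3 -> 2 -> 1
int[] v = Nodo.toArray(lista); // v contains {1, 2, 3}, the insertion order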
Since the head of the linked list necessarily holds the largest element discovered in the intersection so far, we can create the following utility function to find out whether a candidate element has already been inserted into the intersection:
private static boolean candidatoEhNovidade(int candidato, Nodo ultimoNodo) {
    // if it is null, there is no element in the intersection yet
    if (ultimoNodo == null) {
        return true;
    } else {
        return candidato != ultimoNodo.conteudo;
    }
}
On top of that, we can build an abstraction in a method that maintains the list, returning the new head of the list:
private static Nodo insereSeNovidade(int candidato, Nodo ultimoNodo) {
    if (candidatoEhNovidade(candidato, ultimoNodo)) {
        return new Nodo(candidato, ultimoNodo);
    } else {
        // nothing to do, so the list stays in the same state
        return ultimoNodo;
    }
}
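Just to illustrate the expected behavior (a hypothetical snippet, the values are arbitrary):

Nodo cabeca = null;
cabeca = insereSeNovidade(1, cabeca); // list: 1
cabeca = insereSeNovidade(1, cabeca); // 1 is not new, list stays the same
cabeca = insereSeNovidade(2, cabeca); // list: 2 -> 1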
And the core of the program would be this:
int[] a = {1, 2, 3, 4, 5, 6, 7};
int[] b = {0, 1, 2, 3};

// ... mysterious routine that sorts a and b ...

int i = 0;
int j = 0;
Nodo cabecaLista = null;

while (i < a.length && j < b.length) {
    if (a[i] == b[j]) {
        cabecaLista = insereSeNovidade(a[i], cabecaLista);
        i++;
        j++;
    } else if (a[i] < b[j]) {
        i++;
    } else { // a[i] > b[j]...
        j++;
    }
}

// creating the final array, of the appropriate size
int[] c = Nodo.toArray(cabecaLista);
So, without making any use of the standard Java library (except perhaps in the sorting step), we were able to build the intersection in O(n log n) time and with memory usage asymptotically equal to the size of the intersection between the two sets.
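Putting the pieces together, here is a complete sketch of how everything could be assembled; the class name ExemploIntersecao, the main method and the use of java.util.Arrays.sort for the sorting step are my own choices for illustration, and the helper methods are condensed versions of the ones above:

import java.util.Arrays;

public class ExemploIntersecao {

    static class Nodo {
        final Nodo proximo;
        final int tamanho;
        final int conteudo;

        Nodo(int conteudo, Nodo proximo) {
            this.conteudo = conteudo;
            this.proximo = proximo;
            this.tamanho = (proximo != null ? proximo.tamanho : 0) + 1;
        }

        static int[] toArray(Nodo nodo) {
            if (nodo == null) {
                return new int[0];
            }
            int[] vetor = new int[nodo.tamanho];
            for (Nodo it = nodo; it != null; it = it.proximo) {
                vetor[it.tamanho - 1] = it.conteudo;
            }
            return vetor;
        }
    }

    private static boolean candidatoEhNovidade(int candidato, Nodo ultimoNodo) {
        return ultimoNodo == null || candidato != ultimoNodo.conteudo;
    }

    private static Nodo insereSeNovidade(int candidato, Nodo ultimoNodo) {
        return candidatoEhNovidade(candidato, ultimoNodo) ? new Nodo(candidato, ultimoNodo) : ultimoNodo;
    }

    static int[] intersecao(int[] a, int[] b) {
        // sorting step; here delegated to the standard library
        Arrays.sort(a);
        Arrays.sort(b);

        int i = 0;
        int j = 0;
        Nodo cabecaLista = null;
        while (i < a.length && j < b.length) {
            if (a[i] == b[j]) {
                cabecaLista = insereSeNovidade(a[i], cabecaLista);
                i++;
                j++;
            } else if (a[i] < b[j]) {
                i++;
            } else { // a[i] > b[j]
                j++;
            }
        }
        return Nodo.toArray(cabecaLista);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7};
        int[] b = {0, 1, 2, 3};
        // expected output: [1, 2, 3]
        System.out.println(Arrays.toString(intersecao(a, b)));
    }
}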