Latest Versions of TechStack
| Software | Used Version(s) | Current |
|---|---|---|
| Java | 8,13 | Java SE 17 (LTS) |
| Spring Boot | 2.3.12 | 2.6.3 |
| Angular | NA | 13.1.1 |
| Android Studio | NA | 4.1 |
| Lombok | v1.17 | v1.18.22 |
| Log4j 2 | 2.17.1 | 2.17.1 |
| Oracle | 12 | 19c |
| Springfox Swagger | 2.9.2 | 3.0.0 |
| Resilience4j | 1.7 | 1.7.1 |
| Jenkins | 2.324 | 2.324 |
Apache Kafka: an open-source message broker project developed by the Apache Software Foundation. It is written in Scala and is a distributed publish-subscribe messaging system.
Features of Kafka
High Throughput : supports millions of messages with modest hardware
Scalability : highly scalable distributed system with no downtime
Replication : messages are replicated across the cluster to support multiple subscribers and to rebalance consumers in case of failures
Durability : provides support for persistence of messages to disk
Stream Processing : used with real-time streaming applications like Apache Spark & Storm
Data Loss : Kafka, with proper configuration, can ensure zero data loss
Various components of Kafka:
Topic – a stream of messages belonging to the same type
Producer – that can publish messages to a topic
Brokers – a set of servers where the published messages are stored
Consumer – that subscribes to various topics and pulls data from the brokers.
Topic :
A topic is like a table, identified by its name.
A topic is split into partitions.
Topic 1 -- Partition 0, Partition 1, Partition 2.
Explain the role of the offset.
Messages contained in the partitions are assigned a unique ID number that is called the offset. The role of the offset is to uniquely identify every message within the partition.
What is a Consumer Group?
To enhance parallelism.
Consumer Groups is a concept exclusive to Kafka. Every Kafka consumer group consists of one or more consumers that jointly consume a set of subscribed topics.
You can't usefully have more consumers than partitions.
If a topic has 3 partitions, you should not have 4 consumers in one group, because the consumers in a group share the partitions. With 3 partitions and 4 consumers in the group, each of 3 consumers connects to one partition and the 4th one becomes idle and does nothing.
A consumer only has to specify the broker (bootstrap server) and the topic name to read; Kafka takes care of pulling data from the right brokers.
Messages are read in order (0, 1, 2, ...) within a partition, but in parallel across partitions.
B1 - Topic 1 - partition 0 - 0,1,2,3,4
B2 - Topic 2 - partition 1 - 0,1,2,3,4,5,6,7
Each consumer within a group reads from exclusive partitions.
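A minimal consumer sketch of the above, assuming a local broker at localhost:9092 and a topic named "orders" (both illustrative); Kafka assigns the topic's partitions across the consumers that share the group id "order-consumers":

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // broker to bootstrap from
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-consumers");            // consumer group name
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));             // Kafka assigns the partitions
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // the offset uniquely identifies the message within its partition
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}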
Brokers
| Broker 1 | Broker 2 | Broker 3 |
|---|---|---|
| Topic 1, P-0 | Topic 1, P-2 | Topic 1, P-1 |
| Topic 2, P-1 | Topic 2, P-0 | Topic 1, P-0 |
Replication factor should always be > 1, so a partition survives a broker failure.
Partitions :
What is the role of the ZooKeeper?
Kafka uses Zookeeper to store offsets of messages consumed for a specific topic and partition by a specific Consumer Group.
Is it possible to use Kafka without ZooKeeper?
No, it is not possible to bypass Zookeeper and connect directly to the Kafka server. If, for some reason, ZooKeeper is down, you cannot service any client request.
Explain the concept of Leader and Follower.
Every partition in Kafka has one server which plays the role of a Leader, and none or more servers that act as Followers. The Leader performs the task of all read and write requests for the partition, while the role of the Followers is to passively replicate the leader. In the event of the Leader failing, one of the Followers will take on the role of the Leader. This ensures load balancing of the server.
Why are replications critical in Kafka? Replication is what makes Kafka durable.
Replication ensures that published messages are not lost and can be consumed in the event of any machine error, program error or frequent software upgrades.
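A sketch of creating a topic with a replication factor greater than 1 via the Kafka AdminClient; the broker address, topic name, and partition/replica counts are illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 2: each partition gets a leader and one follower
            NewTopic topic = new NewTopic("orders", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}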
How do you define a Partitioning Key?
Within the Producer, the role of a Partitioning Key is to indicate the destination partition of the message. By default, a hashing-based Partitioner is used to determine the partition ID given the key. Alternatively, users can also use customized Partitioners.
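A producer sketch where the record key drives partition selection; the topic name and key are illustrative. Records with the same key are hashed to the same partition:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // all records with key "customer-42" land on the same partition of "orders"
            producer.send(new ProducerRecord<>("orders", "customer-42", "order created"));
        }
    }
}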


REST security
Security config class -
Basic auth
Authentication filter -> Authentication object -> not yet validated -> AuthenticationManagerBuilder ->
finds authentication providers -> like DAO or a custom authentication provider
We can pass a JWT token in the header for authentication.
The server validates that the token was generated by itself.
Session-based vs token-based security:
Tokens are stateless.
With sessions, a request can land on a node that doesn't have the previous session state; a self-contained token works on any node.
What is Bearer (you are the owner/bearer of the token) vs Basic?
Only after authentication is done do we get the JWT token, which is then used for authorization. A config sketch follows.
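A minimal basic-auth configuration sketch in the Spring Boot 2.x style used in these notes (WebSecurityConfigurerAdapter); the in-memory user is purely illustrative, and a JWT filter could be registered instead of httpBasic():

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.crypto.factory.PasswordEncoderFactories;
import org.springframework.security.crypto.password.PasswordEncoder;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        // AuthenticationManagerBuilder registers the authentication providers (here a simple in-memory one)
        PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
        auth.inMemoryAuthentication()
            .withUser("demo").password(encoder.encode("demo-password")).roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests().anyRequest().authenticated()
            .and()
            .httpBasic(); // basic auth; a JWT validation filter could be added here instead
    }
}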
JWT --OAUTH Grant Types:
implicit --Implicit Grant
authorization_code --Authorization Code Grant- This grant type flow is also called "three-legged" OAuth.
You've seen this flow anytime your app opens a browser to the resource server's login page and invites you to log in to your actual account (for example, Facebook or Twitter).
If you successfully log in, the app will receive an authorization code that it can use to negotiate an access token with the authorization server.
client_credentials --Client Credentials Grant
password --Resource Owner Password Grant
refresh_token --Use Refresh Tokens
urn:ietf:params:oauth:grant-type:device_code --Device Authorization Grant
By using the @Lazy annotation on the dependency we can resolve circular-dependency problems with constructor-based injection in Spring Boot, or avoid constructor injection and use setter-based injection instead.
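A sketch of breaking a circular constructor dependency with @Lazy; the two services are illustrative:

import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;

@Service
public class OrderService {
    private final InvoiceService invoiceService;

    // Without @Lazy, OrderService -> InvoiceService -> OrderService fails at startup with
    // constructor injection; @Lazy injects a proxy that is only resolved on first use.
    public OrderService(@Lazy InvoiceService invoiceService) {
        this.invoiceService = invoiceService;
    }
}

@Service
class InvoiceService {
    private final OrderService orderService;

    public InvoiceService(OrderService orderService) {
        this.orderService = orderService;
    }
}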
| Property | Default | Description |
|---|---|---|
| server.tomcat.max-connections | 8192 | Maximum number of connections that the server accepts and processes at any given time. Once the limit has been reached, the operating system may still accept connections based on the "acceptCount" property. |
| server.tomcat.max-http-form-post-size | 2MB | Maximum size of the form content in any HTTP POST request. |
| server.tomcat.max-swallow-size | 2MB | Maximum amount of request body to swallow. |
| server.tomcat.max-threads | 200 | Maximum amount of worker threads. |
| server.tomcat.mbeanregistry.enabled | false | Whether Tomcat's MBean Registry should be enabled. |
| server.tomcat.min-spare-threads | 10 | Minimum amount of worker threads. |
| server.tomcat.port-header | X-Forwarded-Port | Name of the HTTP header used to override the original port value. |
| server.tomcat.processor-cache | 200 | Maximum number of idle processors that will be retained in the cache and reused with a subsequent request. When set to -1 the cache will be unlimited with a theoretical maximum size equal to the maximum number of connections. |
| server.tomcat.protocol-header | (none) | Header that holds the incoming protocol, usually named "X-Forwarded-Proto". |
// find employees whose salaries are above 10000
empList.stream().filter(emp -> emp.getSalary() > 10000).forEach(System.out::println);
RestTemplate: getForEntity (returns the full ResponseEntity) vs getForObject (returns only the body object).
exchange also does the same, but lets us control the method, headers and body.
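A quick sketch of the three calls; the URL and the Employee DTO are assumptions for illustration:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class EmployeeClient {
    private final RestTemplate restTemplate = new RestTemplate();

    public void fetch() {
        String url = "http://localhost:8080/employees/{id}";

        // getForEntity: full ResponseEntity -> status code, headers and body
        ResponseEntity<Employee> entity = restTemplate.getForEntity(url, Employee.class, 1);
        System.out.println(entity.getStatusCode() + " " + entity.getBody());

        // getForObject: only the deserialized body
        Employee employee = restTemplate.getForObject(url, Employee.class, 1);
        System.out.println(employee);

        // exchange: most general form, lets us pass method, headers and a request body
        HttpHeaders headers = new HttpHeaders();
        headers.set("X-Request-Id", "demo");
        ResponseEntity<Employee> exchanged =
                restTemplate.exchange(url, HttpMethod.GET, new HttpEntity<>(headers), Employee.class, 1);
        System.out.println(exchanged.getBody());
    }

    // assumed DTO for illustration
    public static class Employee {
        public Long id;
        public String name;
        @Override public String toString() { return id + ":" + name; }
    }
}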
Right status codes for DELETE: 204 - No Content (deleted, nothing returned in the body)
200 - OK (deleted, with a response body)
For each request, a thread is blocked.
At some point the pool will run out of threads, so timeouts are needed to release them. The default Tomcat pool has 200 threads.
Ex: read timeout (not able to finish reading the response data), connection timeout (not able to get a connection).
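One way to set such timeouts when building a RestTemplate (the values are illustrative):

import java.time.Duration;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(2))  // give up if a connection cannot be obtained
                .setReadTimeout(Duration.ofSeconds(5))     // give up if the response is not read in time
                .build();
    }
}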
New functionality is rolled out with new API versions.
Versioning can be done using a request param or a path param,
also using custom headers,
via the URI in the @GetMapping,
or via media type: produces/consumes is nothing but content negotiation.
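A sketch showing these versioning options in one controller; the paths, header name and media type are illustrative:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PersonVersioningController {

    // URI / path versioning: /v1/person vs /v2/person
    @GetMapping("/v1/person")
    public String pathV1() { return "{\"name\":\"Bob Charlie\"}"; }

    @GetMapping("/v2/person")
    public String pathV2() { return "{\"firstName\":\"Bob\",\"lastName\":\"Charlie\"}"; }

    // request-param versioning: /person/param?version=1
    @GetMapping(value = "/person/param", params = "version=1")
    public String paramV1() { return "{\"name\":\"Bob Charlie\"}"; }

    // header versioning: client sends X-API-VERSION: 1
    @GetMapping(value = "/person/header", headers = "X-API-VERSION=1")
    public String headerV1() { return "{\"name\":\"Bob Charlie\"}"; }

    // media-type versioning (content negotiation via produces):
    // client sends Accept: application/vnd.company.app-v1+json
    @GetMapping(value = "/person/accept", produces = "application/vnd.company.app-v1+json")
    public String acceptV1() { return "{\"name\":\"Bob Charlie\"}"; }
}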
AOP - for logging purposes;
security setup using request filters, and interceptors for managing request and response data.
STEP 1: Create the annotation (an @interface with the desired name)
STEP 2: Create an Aspect that intercepts methods carrying the annotation (see the sketch after the annotation below)
STEP 3: Add the annotation to the target methods
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Traceable {
}
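A sketch of the matching aspect (STEP 2), assuming spring-boot-starter-aop is on the classpath and that @Traceable lives in the illustrative package com.example.aop:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TraceableAspect {

    // runs around every method annotated with @Traceable and logs its execution time
    @Around("@annotation(com.example.aop.Traceable)")
    public Object trace(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return joinPoint.proceed();
        } finally {
            System.out.println(joinPoint.getSignature() + " took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}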
public interface ProductRepository extends PagingAndSortingRepository<Product, Integer> {
List<Product> findAllByPrice(double price, Pageable pageable);
}
Conversely, we could have chosen to extend JpaRepository instead, as it extends PagingAndSortingRepository too.
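A usage sketch, assuming the repository is injected and a Product entity exists; the page size and sort field are illustrative:

import java.util.List;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;

public class ProductService {
    private final ProductRepository productRepository;

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    public List<Product> firstPageAtPrice(double price) {
        // first page (index 0) of 10 products, sorted by name
        Pageable pageable = PageRequest.of(0, 10, Sort.by("name"));
        return productRepository.findAllByPrice(price, pageable);
    }
}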
HATEOAS constraint of REST means enabling the client of the API to discover the next and previous pages based on the current page in the navigation.
We're going to use the Link HTTP header, coupled with the "next", "prev", "first" and "last" link relation types.
In the case of pagination, the event – PaginatedResultsRetrievedEvent – is fired in the controller layer. Then we'll implement discoverability with a custom listener for this event.
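A simplified sketch of adding a "next" Link header from within a request; the query parameter names are illustrative, and the full example builds all four relations via the event listener:

import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.servlet.support.ServletUriComponentsBuilder;

public class PaginationLinks {

    // Must be called inside a request; builds e.g. <http://host/products?page=2&size=10>; rel="next"
    public static ResponseEntity<Object> withNextLink(Object body, int page, int size) {
        String nextUri = ServletUriComponentsBuilder.fromCurrentRequestUri()
                .replaceQueryParam("page", page + 1)
                .replaceQueryParam("size", size)
                .build()
                .toUriString();

        HttpHeaders headers = new HttpHeaders();
        headers.add(HttpHeaders.LINK, "<" + nextUri + ">; rel=\"next\"");
        return ResponseEntity.ok().headers(headers).body(body);
    }
}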
@Api
@RequestMapping("/v1")
public interface ProfileV1Interface {
    @ApiOperation(value = "Api to Get a specific setting for a cluster", notes = "Get a specific setting for a Cluster")
    @GetMapping(value = "/cluster/{name}/settings", produces = MediaType.APPLICATION_JSON_VALUE)
    Map<String, String> getClusterSetting(@RequestParam(required = true) String clusterId, @PathVariable(required = true) String name);
}
@Autowired is Spring's own annotation. @Inject is the standard Java dependency-injection annotation (JSR-330, also used by CDI), which defines a standard for dependency injection similar to Spring's. In a Spring application, the two annotations work the same way, as Spring has decided to support some JSR-330 annotations in addition to its own.
@Controller: The @Controller is a class-level annotation. It is a specialization of @Component. It marks a class as a web request handler. It is often used to serve web pages. By default, its handler methods return a string that indicates which view to render. It is mostly used with the @RequestMapping annotation.

import java.rmi.RemoteException;
import net.webservicex.www.GetQuote;
import net.webservicex.www.GetQuoteResponse;
import org.apache.axis2.AxisFault;
import com.demo.ws.stock.quote.StockQuoteStub;

public class MainClient {
    public static void main(String[] args) throws RemoteException {
        try {
            // Axis2-generated client stub for the StockQuote web service
            StockQuoteStub stub = new StockQuoteStub();
            GetQuote gq = new GetQuote();
            gq.setSymbol("IBM");
            GetQuoteResponse resp = stub.getQuote(gq);
            System.out.println(resp.getGetQuoteResult());
        } catch (AxisFault e) {
            e.printStackTrace();
        }
    }
}
Sample Output after running this WS :

package com.krish.sorting;
import java.util.Random;
public class SortingTechniques {
// Applies selection sort technique to the given array
public static int[] doSelectionSort(int[] arr) {
for (int i = 0; i < arr.length - 1; i++) {
int index = i;
for (int j = i + 1; j < arr.length; j++) {
if (arr[j] < arr[index]) {
index = j;
}
}
int smallerNumber = arr[index];
arr[index] = arr[i];
arr[i] = smallerNumber;
}
return arr;
}
// Applies bubble sort technique to the given array
public static int[] doBubbleSort(int[] arr) {
int n = arr.length;
int k;
for (int m = n; m >= 0; m--) {
for (int i = 0; i < n - 1; i++) {
k = i + 1;
if (arr[i] > arr[k]) {
swapNumbers(i, k, arr);
}
}
}
return arr;
}
// Applies Insertion sorting Technique
public static int[] doInsertionSort(int[] arr) {
int temp;
for (int i = 1; i < arr.length; i++) {
for (int j = i; j > 0; j--) {
if (arr[j] < arr[j - 1]) {
temp = arr[j];
arr[j] = arr[j - 1];
arr[j - 1] = temp;
}
}
}
return arr;
}
// Applies Quick sort to the given array
public static void doQuickSort(int lowerIndex, int higherIndex,int[] myArray) {
int i = lowerIndex;
int j = higherIndex;
// calculate pivot number, I am taking pivot as middle index number
int pivot = myArray[lowerIndex + (higherIndex - lowerIndex) / 2];
// Divide into two arrays
while (i <= j) {
/**
* In each iteration, we will identify a number from left side which
* is greater then the pivot value, and also we will identify a
* number from right side which is less then the pivot value. Once
* the search is done, then we exchange both numbers.
*/
while (myArray[i] < pivot) {
i++;
}
while (myArray[j] > pivot) {
j--;
}
if (i <= j) {
exchangeNumbers(i, j,myArray);
// move index to next position on both sides
i++;
j--;
}
}
// call quickSort() method recursively
if (lowerIndex < j)
doQuickSort(lowerIndex, j,myArray);
if (i < higherIndex)
doQuickSort(i, higherIndex,myArray);
}
private static void exchangeNumbers(int i, int j,int[] myArray) {
int temp = myArray[i];
myArray[i] = myArray[j];
myArray[j] = temp;
}
private static void swapNumbers(int i, int k, int[] arr) {
int temp;
temp = arr[i];
arr[i] = arr[k];
arr[k] = temp;
}
public static void printArray(int[] printArray) {
for (int i : printArray) {
System.out.print(i);
System.out.print(", ");
}
}
private static int[] getRandomNumbersArray() {
Random myRandom = new Random();
int[] randomArray = { myRandom.nextInt(100), myRandom.nextInt(20),
myRandom.nextInt(600), myRandom.nextInt(60),
myRandom.nextInt(200) };
return randomArray;
}
public static void main(String[] args) {
int[] myArray = getRandomNumbersArray();
System.out.println("\nBefore Selection sort:");
printArray(myArray);
System.out.println("\nAfter Selection sort:");
printArray(doSelectionSort(myArray));
myArray = getRandomNumbersArray();
System.out.println("\nBefore Bubble sort:");
printArray(myArray);
System.out.println("\nAfter Bubble sort:");
printArray(doBubbleSort(myArray));
myArray = getRandomNumbersArray();
System.out.println("\nBefore Insertion sort:");
printArray(myArray);
System.out.println("\nAfter Insertion sort:");
printArray(doInsertionSort(myArray));
myArray = getRandomNumbersArray();
System.out.println("\nBefore Quick sort:");
printArray(myArray);
int length = myArray.length;
doQuickSort(0, length - 1,myArray);
System.out.println("\nAfter Quick sort:");
printArray(myArray);
}
}
Output :

package com.krish.queue;
import com.krish.datastructures.common.Cell;
public class QueueMain {
Cell head;
Cell tail;
public QueueMain() {
head = null;
tail = null;
}
public void enqueue(Object obj) {
Cell newCell = new Cell(obj, null);
if (head == null && tail == null)
head = newCell;
else
tail.next = newCell;
tail = newCell;
System.out.println("Enqueud element:"+obj);
printQueue();
}
public Cell front() {
return head;
}
public Cell rear() {
return tail;
}
public void dequeue() {
if (head == null && tail == null) {
System.out.println("Q is empty");
} else {
System.out.println("Dequeued element:"+head.getVal());
if (head.next == null) {
tail = null;
}
head = head.next;
}
printQueue();
}
public int getsize() {
int size = 0;
if(!(head ==null && tail == null)){
size = 1;
for(Cell n = head; n.next != null; n = n.next)
size = size+1;
return size;
}
return size;
}
public void printQueue() {
if (head == null && tail == null) {
System.out.println("Q is empty");
} else {
System.out.println("Q Elememts:");
Cell current = head;
System.out.println("Head");
while (current != null) {
System.out.println("->" + current.getVal());
current = current.next;
}
System.out.println("<--tail br=""> System.out.println("Size of the Q:"+getsize());
}
}
public static void main(String[] args) {
QueueMain queue = new QueueMain();
queue.enqueue(23);
queue.enqueue(43);
queue.enqueue(143);
queue.enqueue(321);
queue.dequeue();
queue.dequeue();
}
}
package com.krish.datastructures.common;
public class Cell {
Object val; // value in the cell
public Cell next; // the address of the next cell in the list
/**
* Constructor Cell builds a new cell
*
* @param value
* - the value inserted in the cell
* @param link
* - the cell that is chained to this new cell
*/
public Cell(Object value, Cell link) {
val = value;
next = link;
}
/** getVal returns the value held in the cell */
public Object getVal() {
return val;
}
/** getNext returns the address of the cell chained to this one */
public Cell getNext() {
return next;
}
/**
* setNext resets the address of the cell chained to this one
*
* @param link
* - the address of the Cell that is chained to this one
*/
public void setNext(Cell link) {
next = link;
}
}
package com.krish.stack;
import com.krish.datastructures.common.Cell;
public class StackUsingLinkedLists {
public Cell top;
/** Constructor Stack creates an empty stack */
public StackUsingLinkedLists() {
top = null;
}
/**
* push inserts a new element onto the stack
*
* @param ob
* - the element to be added
*/
public void push(Object ob) {
System.out.println("PUSH : Inserted the element:" + ob);
top = new Cell(ob, top);
printStack();
}
/**
* pop removes the most recently added element prints error if stack is
* empty
*/
public void pop() {
if (top == null) {
System.out.println("POP: Stack error: stack empty");
} else {
Object answer = top.getVal();
top = top.getNext();
System.out.println("POP: popped the element:" + answer);
printStack();
}
}
/**
* top returns the identity of the most recently added element
*
* @return the element
* @exception RuntimeException
* if stack is empty
*/
public Object top() {
if (top == null) {
throw new RuntimeException("Stack error: stack empty");
}
return top.getVal();
}
/**
* isEmpty states whether the stack has 0 elements.
*
* @return whether the stack has no elements
*/
public boolean isEmpty() {
return (top == null);
}
// Print the stack elements by using top and next
public void printStack() {
if (top == null) {
System.out.println("PRINT: Stack error: stack empty");
} else {
System.out.println("PRINT:These are the stack elements now!");
Cell temp = top;
while (temp != null) {
System.out.println(temp.getVal());
temp = temp.next;
}
}
}
public static void main(String[] args) {
StackUsingLinkedLists stack = new StackUsingLinkedLists();
stack.pop();
stack.push(23);
stack.push(267);
stack.push(500);
stack.pop();
stack.pop();
stack.pop();
}
}

package com.krish.stack;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class StackMain {
public static void main(String[] args) {
try{
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String size;
System.out.println("Enter stack size u want to create : ");
size = br.readLine();
System.out.println("Creating a Stack size is:"+Integer.parseInt(size));
Stack stack = new Stack(Integer.parseInt(size));
stack.printStackElements();
stack.push(100);
stack.push(200);
stack.push(300);
stack.pop();
stack.pop();
stack.pop();
}
catch(Exception e){
e.printStackTrace();
}
}
}
Stack Class:

package com.krish.stack;
public class Stack {
int[] myStack;
int top = -1;
int size;
public Stack(int size) {
this.size = size;
myStack = new int[size];
}
public void printStackElements() {
if (top >= 0) {
System.out.println("Present elements in the stack:");
for (int i = 0; i <= top; i++) {
System.out.println("Element at " + i + "position is "
+ myStack[i]);
}
} else {
System.out.println("Stack is empty");
}
}
public void push(int element) {
if (top < size - 1) {
System.out.println("Pushing " + element + " to stack now");
System.out.println("After push()");
top++;
myStack[top] = element;
printStackElements();
} else {
System.out.println("Stack overflow; element can't be pushed");
}
}
public void pop() {
if (top >= 0) {
System.out.println("Poping the top element now:" + myStack[top]);
top--;
printStackElements();
} else {
System.out.println("stack undeflow");
}
}
}
Output :

public void insertUser(String name, String email) {
Connection conn = null;
PreparedStatement stmt = null;
try {
conn = setupTheDatabaseConnectionSomehow();
stmt = conn.prepareStatement("INSERT INTO person (name, email) values (?, ?)");
stmt.setString(1, name);
stmt.setString(2, email);
stmt.executeUpdate();
}
finally {
try {
if (stmt != null) { stmt.close(); }
}
catch (Exception e) {
// log this error
}
try {
if (conn != null) { conn.close(); }
}
catch (Exception e) {
// log this error
}
}
}
Reference :
http://stackoverflow.com/questions/1812891/java-escape-string-to-prevent-sql-injection