Thursday, 11 April 2024

links for Data Structure

 


1) 𝐁𝐞𝐜𝐨𝐦𝐞 𝐌𝐚𝐬𝐭𝐞𝐫 𝐢𝐧 𝐋𝐢𝐧𝐤𝐞𝐝 𝐋𝐢𝐬𝐭: https://lnkd.in/gXQux4zj

2) 𝐀𝐥𝐥 𝐭𝐲𝐩𝐞𝐬 𝐨𝐟 𝐓𝐫𝐞𝐞 𝐓𝐫𝐚𝐯𝐞𝐫𝐬𝐚𝐥𝐬: https://lnkd.in/gKja_D5H

3) 𝐁𝐞𝐜𝐨𝐦𝐞 𝐌𝐚𝐬𝐭𝐞𝐫 𝐢𝐧 𝐑𝐞𝐜𝐮𝐫𝐬𝐢𝐨𝐧: https://lnkd.in/gQiasy8H

4) 𝐀 𝐆𝐞𝐧𝐞𝐫𝐚𝐥 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡 𝐭𝐨 𝐁𝐚𝐜𝐤𝐭𝐫𝐚𝐜𝐤𝐢𝐧𝐠 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬: https://lnkd.in/gVkQX5vA

5) 𝐈𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭 𝐒𝐭𝐫𝐢𝐧𝐠 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬 𝐏𝐚𝐭𝐭𝐞𝐫𝐧: https://lnkd.in/gkNvEi8j

6) 10-𝐥𝐢𝐧𝐞 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞 𝐭𝐡𝐚𝐭 𝐜𝐚𝐧 𝐬𝐨𝐥𝐯𝐞 𝐦𝐨𝐬𝐭 '𝐬𝐮𝐛𝐬𝐭𝐫𝐢𝐧𝐠' 𝐩𝐫𝐨𝐛𝐥𝐞𝐦𝐬: https://lnkd.in/giASrwds

7) 𝐒𝐥𝐢𝐝𝐢𝐧𝐠 𝐖𝐢𝐧𝐝𝐨𝐰 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞: https://lnkd.in/gjatQ5pK

8) 𝐓𝐰𝐨 𝐏𝐨𝐢𝐧𝐭𝐞𝐫𝐬 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬: https://lnkd.in/gBfWgHYe

9) 𝐏𝐨𝐰𝐞𝐫𝐟𝐮𝐥 𝐔𝐥𝐭𝐢𝐦𝐚𝐭𝐞 𝐁𝐢𝐧𝐚𝐫𝐲 𝐒𝐞𝐚𝐫𝐜𝐡 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞: https://lnkd.in/gKEm_qUK

10) 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞 𝐟𝐨𝐫 𝐌𝐨𝐧𝐨𝐭𝐨𝐧𝐢𝐜 𝐒𝐭𝐚𝐜𝐤 𝐏𝐫𝐨𝐛𝐥𝐞𝐦𝐬: https://lnkd.in/gdYahWVN

11) 𝐆𝐫𝐞𝐞𝐝𝐲 𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬: https://lnkd.in/gw8CgMkC

12) 𝐀𝐥𝐥 𝐓𝐲𝐩𝐞𝐬 𝐨𝐟 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬 𝐟𝐨𝐫 𝐁𝐢𝐭𝐬 𝐌𝐚𝐧𝐢𝐩𝐮𝐥𝐚𝐭𝐢𝐨𝐧𝐬: https://lnkd.in/gXzegWuU

13) 𝐆𝐫𝐚𝐩𝐡 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬: https://lnkd.in/gKE6w7Jb

14) 𝐃𝐲𝐧𝐚𝐦𝐢𝐜 𝐏𝐫𝐨𝐠𝐫𝐚𝐦𝐦𝐢𝐧𝐠 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬: https://lnkd.in/gbpRU46g

15) 14 𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬 𝐭𝐨 𝐀𝐜𝐞 𝐂𝐨𝐝𝐢𝐧𝐠 𝐈𝐧𝐭𝐞𝐫𝐯𝐢𝐞𝐰 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬: https://lnkd.in/gMZJVkFf


Join 6100+ readers of my free newsletter to master coding and system design using simple explanations and visuals: https://lnkd.in/dXtb8SwU

Friday, 26 January 2024

Elasticsearch basics

1. A node stores data as documents.

2. One cluster contains multiple nodes.

3. A document holds its data in JSON format.

4. Multiple documents are grouped together to form an index (plural: indices).


  • Replication is about maintaining real-time copies (primary and replica shards) of data within a cluster to ensure high availability, fault tolerance, and improved read performance.

  • Snapshot is about creating backups of your data and settings at a specific point in time. Snapshots are useful for disaster recovery, migration, and long-term data retention.
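As a sketch of the snapshot workflow described above: first register a filesystem repository, then take a snapshot (the repository name, snapshot name, and mount path here are assumptions for illustration):

```
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/es"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true
```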


Inverted index -> used for full-text search
keyword -> used for sorting and aggregations

"name" : {
  "type" : "text",
  "fields" : {
    "keyword" : {
      "type" : "keyword",
      "ignore_above" : 256
    }
  }
}

So this name field is stored in the inverted index and also as a doc value (up to 256 characters).

"type" : "text" -> stored in the inverted index.

The sub-field below is additionally stored as a doc value, which is what sorting and aggregations use:

"keyword" : {
  "type" : "keyword",
  "ignore_above" : 256
}
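For example, with the mapping above, a full-text query matches on the `name` field while sorting goes through the `name.keyword` sub-field (the index name `employees` is an assumption):

```
GET employees/_search
{
  "query": { "match": { "name": "sushil" } },
  "sort": [ { "name.keyword": "asc" } ]
}
```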

Logstash

Logstash is a data pipeline. It consists of three stages: inputs, filters and outputs.


input -> reads data from Kafka, a relational database, a file, or any other input source; it can also read from multiple input sources at once.

filter -> transforms and filters the data we need to process (e.g. parsing with grok).

output -> where to write the data after filtering, e.g. Elasticsearch.

Take an example: we want to read the logs of access.log from a file using Logstash.

Logstash receives one line of the log -> processes the line using a grok pattern -> then pushes it into Elasticsearch.




Having too many concurrent indexing connections may result in a high bulk queue, bad responsiveness and timeouts. For that reason, in most cases the common setup is to place Logstash between Beat instances and Elasticsearch to control the indexing rate.

For larger-scale systems, the common setup is to put a buffering message queue (Apache Kafka, RabbitMQ or Redis) between Beats and Logstash for resiliency, to avoid congestion on Logstash during event spikes.
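A minimal pipeline config for the access.log example above might look like this (the file path, index name, and Elasticsearch host are assumptions; `COMBINEDAPACHELOG` is a built-in grok pattern for Apache combined-format access logs):

```
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "access-logs"
  }
}
```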











Tuesday, 21 March 2023

mvn for Sonar

 mvn sonar:sonar -Dsonar.login=developer -Dsonar.password=developer

Sunday, 4 September 2022

Trigger in Mysql

create table employee (id INT AUTO_INCREMENT PRIMARY KEY, name varchar(255), employeeNumber varchar(255), createdDate datetime);


create table employee_audit (id INT AUTO_INCREMENT PRIMARY KEY,employee_id int,  name varchar(255), employeeNumber varchar(255), createdDate datetime,audit_created_date datetime);


Trigger:

delimiter //

create trigger employee_update
after update on employee
for each row
begin
  insert into employee_audit (employee_id, name, employeeNumber, createdDate, audit_created_date)
  values (OLD.id, OLD.name, OLD.employeeNumber, OLD.createdDate, sysdate());
end;//

delimiter ;


In the statement above, `delimiter //` changes the statement delimiter from ; to //. We need to change the delimiter because the trigger body itself contains multiple ; characters; after the change, // marks the end of the whole trigger statement.



insert into employee (name, employeeNumber, createdDate) values ('sushil', '001', sysdate());


update employee set name = 'sushil mittal' where id = 1;


-- after the update, a new entry is created in the employee_audit table containing the previous values of the record



We can also add a condition inside the trigger. For example, if we want to audit only when employeeNumber gets updated:

IF OLD.employeeNumber != NEW.employeeNumber THEN ... END IF;

(Note: MySQL references OLD/NEW without a leading colon; the :OLD/:NEW syntax is Oracle's.)



delimiter //

create trigger employee_update
after update on employee
for each row
begin
  if OLD.employeeNumber != NEW.employeeNumber then
    insert into employee_audit (employee_id, name, employeeNumber, createdDate, audit_created_date)
    values (OLD.id, OLD.name, OLD.employeeNumber, OLD.createdDate, sysdate());
  end if;
end;//

delimiter ;
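To verify the conditional trigger (assuming it is installed on a MySQL server with the tables above): an update that changes employeeNumber should add an audit row, while one that leaves it unchanged should not.

```
-- changes employeeNumber: the trigger's IF fires and an audit row is inserted
update employee set employeeNumber = '002' where id = 1;

-- employeeNumber unchanged: the trigger runs but inserts nothing
update employee set name = 'sushil m' where id = 1;

-- inspect the audit rows
select * from employee_audit;
```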

