Pranav Kumar Pandey
I am a final-year CSE student at IIT Dharwad.
Education
IIT Dharwad
Computer Science & Engineering • 2026
Interviews Shared
Pranav Kumar Pandey
Interview Experience at Amazon: AI Research Engineer
I'm thrilled to say that I was selected at Amazon. There was a résumé shortlisting round, and after my résumé was shortlisted, my interview was scheduled.

# Interview Process

There were two in-person interviews.

## 1. DSA Round

This round focused on problem-solving and core data structures. I was asked two medium-to-hard questions:

- A graph traversal question with a DP twist, which made it slightly tricky.
- A question based on 1-D dynamic programming.

## 2. ML Breadth & Depth Round

This round covered a wide range of machine learning concepts. Topics included:

- Self-attention mechanism
- Transformer architecture (encoder and decoder)
- Deep learning training process
- Classical ML algorithms

Towards the end, I was also asked conceptual questions on overfitting and underfitting, which tested conceptual clarity. Overall, I would rate the difficulty as medium to hard.

# Advice for Others

- Practice 2–3 DSA questions daily and stay consistent.
- Build a strong conceptual understanding of ML algorithms and architectures.
- Study the concepts of overfitting and underfitting in depth.
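Since the ML round probed the self-attention mechanism in depth, a minimal sketch of scaled dot-product self-attention can help make the concept concrete. This is an illustrative NumPy implementation, not the exact question asked; the shapes and weight matrices (`Wq`, `Wk`, `Wv`) are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model). Project the input into queries, keys, values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # scaled dot-product similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mixture of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                              # toy sequence of 4 tokens
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3)) # toy projection weights
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one output vector per token
```

The scaling by the square root of the key dimension keeps the dot products from saturating the softmax, which is the kind of "why" follow-up such rounds often ask.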
Projects Shared
Pranav Kumar Pandey
Detecting DeepFake Images using a Decentralized Network
Our project aims to tackle the challenges posed by deepfake technology to media authenticity by leveraging the combined power of blockchain and machine learning.

- Deepfake Detection: Uploaded media is analyzed by three backend nodes running a deep learning model to assess if it's likely a deepfake. A consensus mechanism can be used for higher reliability.
- Integrity Verification: A cryptographic hash (SHA-256) of the original media file is calculated before upload.
- Decentralized Storage: Verified authentic media is uploaded to IPFS via Pinata, ensuring content-addressable, decentralized storage.
- Immutable Record: The IPFS CID (Content Identifier) and the calculated hash are stored immutably on an Ethereum-compatible blockchain using a Solidity smart contract.
- Secure Sharing & Traceability: The smart contract manages fine-grained, item-level sharing permissions. Sharing actions are logged via events, enabling traceability of who shared what with whom.
- Client-Side Verification: When viewing media, the application fetches the content from IPFS, recalculates its hash, and verifies it against the hash stored on the blockchain, ensuring tamper-evidence.
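The integrity-verification step above can be sketched in a few lines of Python. This is a minimal illustration of the hash-and-compare idea only: the function names are hypothetical, and the real system fetches the bytes from IPFS and reads the stored hash from the smart contract rather than from local variables.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # SHA-256 digest used as the media file's integrity fingerprint
    return hashlib.sha256(data).hexdigest()

def verify_media(fetched_bytes: bytes, stored_hash: str) -> bool:
    # Recompute the hash of the fetched content and compare it to the
    # hash recorded on-chain; any mismatch indicates tampering.
    return sha256_hex(fetched_bytes) == stored_hash

media = b"example media bytes"    # stand-in for a file fetched from IPFS
onchain_hash = sha256_hex(media)  # stand-in for the hash stored on-chain
print(verify_media(media, onchain_hash))         # True: content intact
print(verify_media(media + b"x", onchain_hash))  # False: content altered
```

Because SHA-256 is collision-resistant, even a one-byte change to the media produces a completely different digest, which is what makes the on-chain record tamper-evident.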