VLSI Web

Top 100 Digital Electronics Interview Questions

By Raju Gorla · 1 January 2024 · Updated: 21 March 2026 · 32 Mins Read

I’ve pulled together 40 digital electronics interview questions spanning logic gates, combinational/sequential circuits, memory systems, and logic families. These questions show up across semiconductor companies — from interviews at Qualcomm’s logic design team to board-level firmware engineers at networking startups. The questions bridge theoretical understanding (why does a latch hold state?) with practical silicon reality (how do you size transistors for speed vs. power?).

💡 Who This Is For: Digital logic designers interviewing for ASIC/FPGA roles, verification engineers who want to deepen hardware fundamentals, and EEs transitioning from firmware or power delivery into core logic design. If you’re comfortable with truth tables and Karnaugh maps but get fuzzy on synchronizer design or timing paths, you’ll find the deeper insights here.

Table of Contents

  • Quick Navigation
  • Section 1: Logic Gates & Boolean Algebra (Q1–Q10)
    • Q1. De Morgan’s Theorem — What Is It, and Why Does It Matter in Gate Design?
    • Q2. Universal Gates — Implement AND, OR, NOT Using Only NAND Gates
    • Q3. Karnaugh Map Simplification — Reduce a 4-Variable Function
    • Q4. SOP vs. POS Forms — When Is Each Preferred?
    • Q5. XOR Properties and Applications — Why Is It Special?
    • Q6. Canonical Forms — What Are Minterms and Maxterms?
    • Q7. Don’t-Care Conditions in Logic Minimization
    • Q8. Hazards in Combinational Logic — Static and Dynamic
    • Q9. Number Systems — Convert 0xAF to Decimal, Binary, BCD
    • Q10. Two’s Complement — Represent -25 in 8-Bit, Detect Overflow
  • Section 2: Combinational Circuits (Q11–Q20)
    • Q11. Full Adder Design — Truth Table, Boolean Expressions, and Gate Implementation
    • Q12. Carry-Lookahead Adder vs. Ripple-Carry — Why CLA Is Faster
    • Q13. 4:1 Multiplexer — Structural Implementation Using 2:1 Muxes
    • Q14. Decoder vs. Demultiplexer — What’s the Difference?
    • Q15. Priority Encoder — Design an 8:3 with Enable
    • Q16. Magnitude Comparator — Design a 2-Bit Version
    • Q17. Wallace Tree Multiplier — Concept and Speed Advantage
    • Q18. Gray Code — Convert Between Gray and Binary, Why Used in Async FIFO
    • Q19. BCD Adder — What Happens When Sum > 9?
    • Q20. PLA vs. PAL vs. CPLD vs. FPGA — When to Use Each?
  • Section 3: Sequential Circuits (Q21–Q30)
    • Q21. SR, D, JK, T Flip-Flops — Truth Tables and Characteristic Equations
    • Q22. Setup Time, Hold Time, Propagation Delay, Clock-to-Q — Definitions
    • Q23. What Is Metastability? Why Violating Setup/Hold Causes It
    • Q24. Synchronous vs. Asynchronous Reset — Which Is Preferred and Why?
    • Q25. Latch vs. Flip-Flop — Key Difference (Level-Sensitive vs. Edge-Triggered)
    • Q26. Shift Register Types — SIPO, PISO, SISO, PIPO with Waveforms
    • Q27. Johnson Counter vs. Ring Counter — Comparison and Application
    • Q28. Design a Mod-6 Counter Using JK Flip-Flops
    • Q29. Moore vs. Mealy FSM — Design a Vending Machine
    • Q30. Race Condition in Asynchronous Sequential Circuits
  • Section 4: Memory, Arithmetic, Logic Families (Q31–Q40)
    • Q31. SRAM vs. DRAM — Construction and Speed Comparison
    • Q32. ROM Types — Mask ROM, PROM, EPROM, EEPROM, Flash
    • Q33. CAM (Content Addressable Memory) — How It Differs from RAM
    • Q34. Subtractor Using Adder — Two's Complement Trick
    • Q35. Overflow Detection in Signed Addition
    • Q36. CMOS vs. TTL — Key Differences
    • Q37. Propagation Delay and Critical Path — How to Calculate Worst-Case
    • Q38. Fanout and Circuit Speed — How Does It Affect Delay?
    • Q39. PLDs — How a PAL Implements Sum-of-Products
    • Q40. Synchronizer Design — Why 2 Flip-Flops, Not 1? MTBF Calculation
  • Core Concepts Cheatsheet

Quick Navigation

  • Section 1: Logic Gates & Boolean Algebra (Q1–Q10)
  • Section 2: Combinational Circuits (Q11–Q20)
  • Section 3: Sequential Circuits (Q21–Q30)
  • Section 4: Memory, Arithmetic, Logic Families (Q31–Q40)
  • Core Concepts Cheatsheet

Section 1: Logic Gates & Boolean Algebra (Q1–Q10)

Q1. De Morgan’s Theorem — What Is It, and Why Does It Matter in Gate Design?

Direct answer: De Morgan’s theorem states: NOT(A AND B) = (NOT A) OR (NOT B), and NOT(A OR B) = (NOT A) AND (NOT B). This is fundamental for converting between AND-OR and NAND-NOR logic.

In real silicon, this matters because NAND gates are often smaller and faster than AND gates (especially in CMOS, where NAND is a basic pull-down stack). De Morgan lets you rewrite a design to use NAND/NOR instead of AND/OR, saving area and improving timing. I’ve seen entire designs restructured just to reduce NAND gate count.

Truth Table Proof:
A | B | A AND B | NOT(A AND B) | NOT A | NOT B | (NOT A) OR (NOT B)
--|---|---------|--------------|-------|-------|-------------------
0 | 0 |    0    |      1       |   1   |   1   |        1
0 | 1 |    0    |      1       |   1   |   0   |        1
1 | 0 |    0    |      1       |   0   |   1   |        1
1 | 1 |    1    |      0       |   0   |   0   |        0
         (same)

Boolean Algebra:
NOT(A AND B) = (NOT A) OR (NOT B)
NOT(A OR B) = (NOT A) AND (NOT B)

Gate Design Impact:
AND-OR form:     Y = NOT(A AND B)         (AND gate + NOT gate = 2 gates)
De Morgan form:  Y = (NOT A) OR (NOT B)   (2 NOT gates + 1 OR gate = 3 gates)
In NAND logic:   Y = NAND(A, B)           (1 NAND gate, cheapest!)

📌 Note: In CMOS, a NAND gate is built from parallel PMOS transistors (pull-up) and a series NMOS stack (pull-down). An AND gate is a NAND followed by NOT. For large fan-ins, NAND is significantly cheaper. Many synthesis tools apply De Morgan automatically during optimization.
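Both identities can be checked mechanically; here is a quick exhaustive check in Python over all 0/1 input pairs:

```python
from itertools import product

# Exhaustively check both De Morgan identities over all 0/1 input pairs.
demorgan_holds = all(
    (1 - (a & b)) == ((1 - a) | (1 - b)) and   # NOT(A AND B) == NOT A OR NOT B
    (1 - (a | b)) == ((1 - a) & (1 - b))       # NOT(A OR B)  == NOT A AND NOT B
    for a, b in product([0, 1], repeat=2)
)
assert demorgan_holds
```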

Q2. Universal Gates — Implement AND, OR, NOT Using Only NAND Gates

Direct answer: All three functions can be built from NAND gates because NAND is “universal” (any combinational logic can be realized using only NAND). NOT is a NAND with both inputs tied together (or one input tied high); AND is NAND followed by NOT.

This is more than academic: historically, NAND gates were cheaper and faster, so designers often built everything from NAND cells. Modern PDKs offer both, but understanding this shows you grasp gate-level optimization.

NOT gate from NAND:
  A ----+
        |
        NAND ---- Y
        |
  A ----+
  (Both inputs same, so Y = NOT A)

AND gate from NAND:
  A ----+
        | NAND ----+
  B ----+          |
                   NAND ---- Y
                   |
                1 -+
  (NAND output inverted by NOT = AND)

OR gate from NAND (using De Morgan):
  NOT(A AND B) = (NOT A) OR (NOT B)
  So: OR = NAND( NOT(A), NOT(B) )

  A ----+
        | NAND ----+
  A ----+          |
                   | NAND ---- Y
  B ----+          |
        | NAND ----+
  B ----+
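A behavioral sketch in Python, modeling each gate as a function of 0/1 values, verifies the three constructions above:

```python
def nand(a: int, b: int) -> int:
    """2-input NAND on 0/1 values."""
    return 1 - (a & b)

def not_(a):          # NOT: tie both NAND inputs together
    return nand(a, a)

def and_(a, b):       # AND: NAND followed by a NAND-as-inverter
    return nand(nand(a, b), nand(a, b))

def or_(a, b):        # OR via De Morgan: NAND of the inverted inputs
    return nand(nand(a, a), nand(b, b))

# Check all constructions against the native operators.
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```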

Q3. Karnaugh Map Simplification — Reduce a 4-Variable Function

Direct answer: A Karnaugh map (K-map) is a visual way to group adjacent 1s and 0s, reducing Boolean expressions. Group 1s in powers of 2 (2, 4, 8, 16), each group eliminating one variable.

Here’s a 4-variable example. Suppose we have minterms {0, 1, 4, 5, 12, 13, 14, 15}:

     AB
     00  01  11  10
CD +----+----+----+----+
00 | 1  | 1  | 1  | 0  |  Minterms 0, 4, 12
   +----+----+----+----+
01 | 1  | 1  | 1  | 0  |  Minterms 1, 5, 13
   +----+----+----+----+
11 | 0  | 0  | 1  | 0  |  Minterm 15
   +----+----+----+----+
10 | 0  | 0  | 1  | 0  |  Minterm 14
   +----+----+----+----+

Group 1: Minterms 0, 1, 4, 5 (top-left 2x2 square)
  In group: A=0 (constant), C=0 (constant); B and D vary
  Simplified term: NOT(A) AND NOT(C)

Group 2: Minterms 12, 13, 14, 15 (the full AB=11 column)
  In group: A=1 (constant), B=1 (constant); C and D vary
  Simplified term: A AND B

Final expression: Y = (NOT A AND NOT C) OR (A AND B)
Without K-map: Y = Σm(0,1,4,5,12,13,14,15) — eight 4-literal minterms
               → simplified to 2 two-literal product terms
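A brute-force check (Python, with A as the most significant index bit) confirms that the two-term cover NOT(A)·NOT(C) + A·B reproduces exactly the given minterm list:

```python
minterms = {0, 1, 4, 5, 12, 13, 14, 15}

def cover(idx: int) -> int:
    # Unpack the minterm index as A B C D, with A as the MSB.
    a, b, c, d = (idx >> 3) & 1, (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
    return ((1 - a) & (1 - c)) | (a & b)   # NOT(A)·NOT(C) + A·B

# The cover is 1 exactly on the minterms, 0 everywhere else.
assert {i for i in range(16) if cover(i)} == minterms
```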

Q4. SOP vs. POS Forms — When Is Each Preferred?

Direct answer: SOP (Sum of Products) = ORs of ANDs; POS (Product of Sums) = ANDs of ORs. SOP is preferred for circuits with fewer 1s in the truth table; POS for fewer 0s.

In gate count: if your truth table has 2 ones and 14 zeros, build from the ones using SOP (short expressions). If it has 14 ones and 2 zeros, build from the zeros using POS (fewer terms). Synthesis tools apply both and pick the smaller.

Example: 2-input AND

Truth table:
A | B | Y
--|---|---
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

SOP (1 one): Y = A AND B          (1 AND gate)
POS (3 zeros): Y = (A OR B) AND (A OR NOT B) AND (NOT A OR B)   (3 OR gates + 1 AND — more complex)

→ SOP is preferred (simpler)

Example: 2-input NOR (opposite case)

Truth table:
A | B | Y
--|---|---
0 | 0 | 1
0 | 1 | 0
1 | 0 | 0
1 | 1 | 0

SOP (1 one): Y = (NOT A) AND (NOT B)   (2 NOTs + 1 AND)
POS (3 zeros): Y = NOT(A OR B)          (1 NOR gate)

→ POS is preferred (1 gate vs. 3)

Q5. XOR Properties and Applications — Why Is It Special?

Direct answer: XOR (A XOR B) is true when inputs differ. Key properties: it’s self-inverse (A XOR A = 0), associative, and commutative. Used in parity checking, comparators, and half-adders.

Practically: XOR is slower than AND/OR (requires more transistors in CMOS), but essential for arithmetic and error detection. A parity bit computed with cascaded XOR gates catches single-bit errors in transmission.

XOR Truth Table:
A | B | A XOR B
--|---|--------
0 | 0 |   0
0 | 1 |   1
1 | 0 |   1
1 | 1 |   0

Key Properties:
A XOR 0 = A           (identity)
A XOR A = 0           (self-inverse)
A XOR NOT(A) = 1      (opposite)
(A XOR B) XOR C = A XOR (B XOR C)  (associative)

Applications:
Parity generator: parity = A XOR B XOR C XOR D  (even parity)
Half adder sum: Sum = A XOR B
Comparator: out = NOT(A XOR B)  (1 if A=B)
Magnitude comparator: (A > B) uses XOR for borrow logic

Q6. Canonical Forms — What Are Minterms and Maxterms?

Direct answer: A minterm is a product (AND) of all variables in true or complemented form, with exactly one minterm true for each input combination. A maxterm is a sum (OR) of all variables. Canonical SOP uses minterms; canonical POS uses maxterms.

More than naming: canonical forms are the “complete” representation before optimization. Any truth table can be written as a sum (OR) of minterms or a product (AND) of maxterms. Simplification reduces these to fewer terms.

3-variable example: A, B, C

Minterms (all variables, AND form):
m0 = NOT(A) AND NOT(B) AND NOT(C)  (ABC=000)
m1 = NOT(A) AND NOT(B) AND C       (ABC=001)
m2 = NOT(A) AND B AND NOT(C)       (ABC=010)
...
m7 = A AND B AND C                 (ABC=111)

Maxterms (all variables, OR form):
M0 = A OR B OR C                   (ABC=000)
M1 = A OR B OR NOT(C)              (ABC=001)
M2 = A OR NOT(B) OR C              (ABC=010)
...
M7 = NOT(A) OR NOT(B) OR NOT(C)    (ABC=111)

Canonical SOP (sum of minterms):
Y = m1 + m3 + m5 + m7  = Σm(1,3,5,7)

Canonical POS (product of maxterms):
Y = M0 · M2 · M4 · M6  = ΠM(0,2,4,6)

Note: SOP minterms are indices where Y=1; POS maxterms are indices where Y=0

Q7. Don’t-Care Conditions in Logic Minimization

Direct answer: A don’t-care (X) is an input combination where the output is undefined (you don’t care). In K-maps, treat don’t-cares as 1 if grouping them reduces gate count, or 0 if it doesn’t.

Real scenario: your circuit receives binary-coded decimal (BCD) inputs. BCD uses only 0–9 (10 values), so inputs 10–15 are impossible. Mark those as don’t-cares, and simplification will leverage them to reduce gates.

BCD-to-Decimal decoder (first 4 outputs):
Input (BCD) | Out0 | Out1 | Out2 | Out3
    0000    |  1   |  0   |  0   |  0
    0001    |  0   |  1   |  0   |  0
    0010    |  0   |  0   |  1   |  0
    0011    |  0   |  0   |  0   |  1
    0100    |  0   |  0   |  0   |  0
    ...
    1010    |  X   |  X   |  X   |  X   (don't-care: impossible input)
    1011    |  X   |  X   |  X   |  X
    ...
    1111    |  X   |  X   |  X   |  X

By treating X as 1 in K-map groupings, you reduce the expression significantly.
Without don't-cares: Out0 = expensive expression
With don't-cares: Out0 = simpler expression

Q8. Hazards in Combinational Logic — Static and Dynamic

Direct answer: A hazard is a transient (glitch) where output changes unexpectedly during transitions. Static hazard: output should stay constant but momentarily flips. Dynamic hazard: output changes multiple times instead of once.

Why it matters: glitches can corrupt state in edge-sensitive circuits or trigger false clock pulses. In clock dividers or counters, a single glitch causes metastability or illegal state transitions. Hazard-free design uses overlapping groups in K-maps or adds consensus logic.

Static-1 Hazard Example:
Y = AB + NOT(A)C
When A transitions 1→0 with B=1, C=1:
  t0: A=1, B=1, C=1 → AB=1, Y=1
  t1: A→0 (transition) → AB falls, but NOT(A)C not yet high → Y glitches to 0
  t2: NOT(A)C=1 → Y returns to 1
  Expected: Y stays 1; Actual: Y=1 → 0 → 1 (glitch!)

Timing diagram:
A:    ‾‾‾|___________
B:    ‾‾‾‾‾‾‾‾‾‾‾‾‾
C:    ‾‾‾‾‾‾‾‾‾‾‾‾‾
AB:   ‾‾‾|_________  (falls when A→0)
~AC:  ___|‾‾‾‾‾‾‾‾  (rises after A→0 transitioning)
Y:    ‾‾‾|_|‾‾‾‾‾   (glitch: momentary 0)

Fix: Add consensus term AC to bridge the gap
Y_fixed = AB + NOT(A)C + BC  (BC is consensus; no glitch)

💡 Tip: Hazards are often tolerated in non-critical paths (data multiplexing) but eliminated in clock and reset logic. Static timing analysis (STA) tools can flag potential hazards, but designer judgment matters: is a 50ps glitch acceptable in this path?

Q9. Number Systems — Convert 0xAF to Decimal, Binary, BCD

Direct answer: 0xAF is hex. A=10, F=15 in decimal. 0xAF = 10×16 + 15 = 175 (decimal). In binary: 1010_1111. In BCD: 175 = “1” “7” “5” in decimal digits, encoded as 0001_0111_0101.

Interviewers ask this to check your number base fluency. Most people get binary/decimal/hex; BCD trips them up because it’s decimal digits encoded in binary (not pure binary).

0xAF conversion:

Hex to Decimal:
0xAF = A×16^1 + F×16^0 = 10×16 + 15×1 = 160 + 15 = 175

Hex to Binary:
A = 1010, F = 1111
0xAF = 1010_1111

Binary to Decimal verification:
1010_1111 = 128 + 32 + 8 + 4 + 2 + 1 = 175 ✓

Decimal to BCD:
175 in decimal = digits 1, 7, 5
BCD(1) = 0001
BCD(7) = 0111
BCD(5) = 0101
175 in BCD = 0001_0111_0101

Key difference:
Binary 175: 10101111 (efficient, uses 8 bits)
BCD 175: 0001_0111_0101 (12 bits, but each digit is 4-bit decimal)
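The same conversions take a few lines of Python (the `to_bcd` helper is ours, just for illustration):

```python
def to_bcd(n: int) -> str:
    """Encode a non-negative decimal number digit-by-digit as 4-bit BCD."""
    return "_".join(format(int(d), "04b") for d in str(n))

value = 0xAF
assert value == 175                         # hex → decimal
assert format(value, "08b") == "10101111"   # hex → binary (8 bits)
assert to_bcd(value) == "0001_0111_0101"    # decimal digits 1, 7, 5 → BCD
```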

Q10. Two’s Complement — Represent -25 in 8-Bit, Detect Overflow

Direct answer: Two’s complement: flip all bits, add 1. For -25 in 8-bit: 25 = 0001_1001, flip = 1110_0110, add 1 = 1110_0111. To detect overflow in addition: if both operands are positive and result is negative (or both negative and result is positive), overflow occurred.

This is crucial for signed arithmetic. In an 8-bit signed system, range is -128 to +127. If you add 100 + 50, result is 150, which can’t fit in 8-bit signed (overflows to negative). Overflow detection checks the carry-out of the sign bit.

25 to two's complement (-25) in 8-bit:
Positive 25: 0001_1001
Invert:      1110_0110
Add 1:       1110_0111  = -25 in 8-bit two's complement

Verify: -25 + 25 should = 0
  1110_0111 (+)
  0001_1001 (=)
  ---------
 1 0000_0000  (carry out ignored in signed; result = 0 ✓)

Overflow detection in addition:
A + B, both n-bit signed

Rule: Overflow = (A[n-1] AND B[n-1] AND NOT(Sum[n-1])) OR
                 (NOT(A[n-1]) AND NOT(B[n-1]) AND Sum[n-1])
         (If signs of A, B same but result sign differs, overflow)

Example: 100 + 50 in 8-bit signed:
  0110_0100 (100)
+ 0011_0010 (50)
  ---------
  1001_0110 (146 as unsigned, but -110 as signed!)
  Overflow: both inputs positive, result shows negative (MSB=1)
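The sign-based overflow rule can be modeled in a few lines of Python (the helper names are ours, not a standard API):

```python
def twos_complement(n: int, bits: int = 8) -> int:
    """Bit pattern of n as a `bits`-wide two's-complement value."""
    return n & ((1 << bits) - 1)

def add_with_overflow(a: int, b: int, bits: int = 8):
    """Add two signed values; flag overflow when operand signs match
    but the result's sign differs (the rule above)."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    s = (twos_complement(a, bits) + twos_complement(b, bits)) & mask
    ovf = ((a >= 0) == (b >= 0)) and ((s & sign != 0) != (a < 0))
    return s, ovf

assert twos_complement(-25) == 0b1110_0111            # -25 in 8-bit
assert add_with_overflow(100, 50) == (0b1001_0110, True)   # overflows 8-bit signed
assert add_with_overflow(-25, 25) == (0, False)            # carry-out ignored, no overflow
```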

Section 2: Combinational Circuits (Q11–Q20)

Q11. Full Adder Design — Truth Table, Boolean Expressions, and Gate Implementation

Direct answer: A full adder sums two bits plus a carry-in, producing a sum bit and carry-out. Truth table has 8 rows (3 inputs); sum and carry are expressed as Boolean functions.

The “why” for interviews: a full adder is the building block of larger adders. Understanding its derivation shows you can go from spec → truth table → Boolean → gates → layout. This is the designer’s workflow.

Full Adder Truth Table:
A | B | Cin | Sum | Cout
--|---|-----|-----|-----
0 | 0 |  0  |  0  |  0
0 | 0 |  1  |  1  |  0
0 | 1 |  0  |  1  |  0
0 | 1 |  1  |  0  |  1
1 | 0 |  0  |  1  |  0
1 | 0 |  1  |  0  |  1
1 | 1 |  0  |  0  |  1
1 | 1 |  1  |  1  |  1

Canonical SOP:
Sum = m1 + m2 + m4 + m7
    = NOT(A)·NOT(B)·Cin + NOT(A)·B·NOT(Cin) + A·NOT(B)·NOT(Cin) + A·B·Cin
    = (optimized) A XOR B XOR Cin

Cout = m3 + m5 + m6 + m7
     = NOT(A)·B·Cin + A·NOT(B)·Cin + A·B·NOT(Cin) + A·B·Cin
     = AB + ACin + BCin
     = AB + (A XOR B)·Cin

Gate-level implementation:
Sum = (A XOR B) XOR Cin
Cout = AB + (A XOR B)Cin

Circuit diagram:
Sum path:
    A ----+
          | XOR --(A XOR B)--+
    B ----+                  | XOR --- Sum
    Cin ---------------------+

Cout path:
    A ----+
          | AND --(A·B)-------------------+
    B ----+                               | OR --- Cout
    (A XOR B) ----+                       |
                  | AND --((A XOR B)·Cin)-+
    Cin ----------+
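The optimized expressions can be checked against plain integer addition for all 8 truth-table rows:

```python
from itertools import product

def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin                      # Sum = A XOR B XOR Cin
    cout = (a & b) | ((a ^ b) & cin)     # Cout = AB + (A XOR B)·Cin
    return s, cout

# The pair (Cout, Sum) must equal the 2-bit value of a + b + cin.
for a, b, cin in product([0, 1], repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
```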

Q12. Carry-Lookahead Adder vs. Ripple-Carry — Why CLA Is Faster

Direct answer: Ripple-carry: carry propagates sequentially (bit 0 → bit 1 → bit 2…), so delay is O(n). Carry-lookahead: predict carries in advance using generate/propagate signals, reducing delay to O(log n) for 4-bit groups.

Trade-off: CLA is faster but uses more gates (silicon area). For a 32-bit adder in a CPU, CLA (or prefix adders) is mandatory; for a simple 4-bit counter, ripple is fine.

Adder Type                    | Delay (worst-case)       | Gate Count                       | Use Case
------------------------------|--------------------------|----------------------------------|-------------------------
Ripple-Carry                  | O(n) — linear in bits    | Minimal (n FA blocks)            | Low-speed, area-critical
Carry-Lookahead (CLA)         | O(log n) — tree depth    | High (lookahead logic)           | High-speed CPUs, FPUs
Kogge-Stone (parallel prefix) | O(log n) — optimal depth | Very high (many parallel stages) | GHz-class datapaths

CLA Logic:
Define for each bit position:
  Generate (G_i) = A_i AND B_i       (always produces carry)
  Propagate (P_i) = A_i XOR B_i     (propagates incoming carry)

Then carry signals:
  C0 = carry_in
  C1 = G0 + P0·C0
  C2 = G1 + P1·G0 + P1·P0·C0
  C3 = G2 + P2·G1 + P2·P1·G0 + P2·P1·P0·C0
  C4 = G3 + P3·G2 + P3·P2·G1 + P3·P2·P1·G0 + P3·P2·P1·P0·C0

These are computed in parallel (logic depth ~O(log n)),
avoiding the ripple delay of waiting for each carry sequentially.

📌 Note: Modern processors use hierarchical CLA (4-bit CLA units chained with additional lookahead) or parallel-prefix adders (Brent-Kung, Kogge-Stone). Formal verification tools check that adder output matches expected arithmetic.
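A behavioral Python sketch of one 4-bit lookahead stage. The loop computes c[i+1] = G_i + P_i·c[i] sequentially for clarity; hardware flattens each carry into the parallel two-level expressions listed above:

```python
def cla_carries(a_bits, b_bits, c0=0):
    """Carries of a carry-lookahead stage from generate/propagate signals.
    Bits are given LSB-first as 0/1 lists."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate: G_i = A_i AND B_i
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate: P_i = A_i XOR B_i
    c = [c0]
    for i in range(len(a_bits)):
        c.append(g[i] | (p[i] & c[i]))            # c[i+1] = G_i + P_i·c[i]
    return c

# 4-bit check: 11 + 6 = 17, bit lists LSB-first.
a = [1, 1, 0, 1]   # 0b1011 = 11
b = [0, 1, 1, 0]   # 0b0110 = 6
c = cla_carries(a, b)
s = [ai ^ bi ^ ci for ai, bi, ci in zip(a, b, c)]
value = sum(bit << i for i, bit in enumerate(s)) + (c[4] << 4)
assert value == 17
```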

Q13. 4:1 Multiplexer — Structural Implementation Using 2:1 Muxes

Direct answer: A 4:1 mux selects 1 of 4 inputs based on 2 select lines. Build it from three 2:1 muxes: two muxes select from the four inputs (based on S0), one mux selects between the two results (based on S1).

This shows you understand hierarchy and don’t just memorize gate-level designs. Larger muxes (8:1, 16:1) scale the same way.

4:1 Mux schematic (S1 S0 = select):
    I0 ----+
           | 2:1 Mux (sel S0) --- M1 ---+
    I1 ----+                            |
                                        | 2:1 Mux (sel S1) --- Y
    I2 ----+                            |
           | 2:1 Mux (sel S0) --- M2 ---+
    I3 ----+

Functional truth table:
S1 | S0 | Output
---|----|---------
0  | 0  | I0
0  | 1  | I1
1  | 0  | I2
1  | 1  | I3

Boolean expression (canonical SOP):
Y = NOT(S1)·NOT(S0)·I0 + NOT(S1)·S0·I1 + S1·NOT(S0)·I2 + S1·S0·I3
  = (NOT(S1)·NOT(S0))·I0 + (NOT(S1)·S0)·I1 + (S1·NOT(S0))·I2 + (S1·S0)·I3

Gate-level (using 2:1 muxes):
Mux1_out = (NOT(S0)·I0) + (S0·I1)
Mux2_out = (NOT(S0)·I2) + (S0·I3)
Y = (NOT(S1)·Mux1_out) + (S1·Mux2_out)
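The structural decomposition, modeled behaviorally in Python:

```python
def mux2(a, b, s):
    """2:1 mux: output a when s=0, b when s=1."""
    return b if s else a

def mux4(i0, i1, i2, i3, s1, s0):
    """4:1 mux built structurally from three 2:1 muxes:
    two first-level muxes share S0, the final mux uses S1."""
    return mux2(mux2(i0, i1, s0), mux2(i2, i3, s0), s1)

# Each select combination routes exactly one input to the output.
inputs = ("I0", "I1", "I2", "I3")
for sel in range(4):
    s1, s0 = (sel >> 1) & 1, sel & 1
    assert mux4(*inputs, s1, s0) == inputs[sel]
```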

Q14. Decoder vs. Demultiplexer — What’s the Difference?

Direct answer: A decoder has n inputs and 2^n outputs; exactly one output is high (active) based on input combination. A demultiplexer (demux) has n select lines, 1 data input, and 2^n data outputs; the data input is routed to the selected output.

In practice, they’re almost identical in hardware (same gates), but conceptually different: decoder activates (selects), demux routes data. Decoder used for address decoding in memory; demux used for routing data streams.

Aspect   | Decoder                        | Demultiplexer
---------|--------------------------------|-------------------------------
Inputs   | n select lines                 | n select lines + 1 data line
Outputs  | 2^n single bits (one active)   | 2^n data paths (one gets data)
Use Case | Address decoding, chip select  | Data routing, I/O selection

Q15. Priority Encoder — Design an 8:3 with Enable

Direct answer: A priority encoder converts an 8-bit input into a 3-bit binary code representing the highest-priority set bit. Enable input activates the encoder; priority usually favors higher bit positions (MSB = highest).

Practical use: interrupt controllers prioritize multiple requests; only the highest is serviced. A priority encoder solves this combinationally.

8:3 Priority Encoder (MSB = highest priority):
Input (I7 I6 ... I0) | Output (Y2 Y1 Y0) | Valid
---------------------|------------------|-------
1???????             |        111        |   1
01??????             |        110        |   1
001?????             |        101        |   1
0001????             |        100        |   1
00001???             |        011        |   1
000001??             |        010        |   1
0000001?             |        001        |   1
00000001             |        000        |   1
00000000             |        xxx        |   0  (no input set)

Truth table shows: if I7=1, output is 111 (binary 7) regardless of others.
If I7=0 and I6=1, output is 110 (binary 6), etc.

Boolean (from K-map):
Y2 = I7 + I6 + I5 + I4
Y1 = I7 + I6 + NOT(I5)·NOT(I4)·(I3 + I2)
Y0 = I7 + NOT(I6)·(I5 + NOT(I4)·(I3 + NOT(I2)·I1))
Valid = I7 OR I6 OR I5 OR I4 OR I3 OR I2 OR I1 OR I0
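A behavioral (not gate-level) Python model of the same priority rule — scan from I7 down and report the first set bit:

```python
def priority_encode_8to3(i_bits):
    """8:3 priority encoder: i_bits[7] has highest priority.
    Returns (Y, valid); Y is a don't-care when valid=0."""
    for idx in range(7, -1, -1):      # MSB first = highest priority
        if i_bits[idx]:
            return idx, 1
    return 0, 0                       # no request asserted

#                          I0 I1 I2 I3 I4 I5 I6 I7
assert priority_encode_8to3([0, 1, 0, 0, 0, 0, 1, 0]) == (6, 1)  # I6 wins over I1
assert priority_encode_8to3([1, 0, 0, 0, 0, 0, 0, 0]) == (0, 1)
assert priority_encode_8to3([0] * 8) == (0, 0)
```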

Q16. Magnitude Comparator — Design a 2-Bit Version

Direct answer: A magnitude comparator outputs flags for A > B, A = B, A < B. A 2-bit version compares two 2-bit numbers, producing three output bits.

2-Bit Magnitude Comparator:
A1 A0 vs. B1 B0

Output expressions:
A>B: A1·NOT(B1) + (A1 XNOR B1)·A0·NOT(B0)
A=B: (A1 XNOR B1) AND (A0 XNOR B0)
A<B: NOT(A>B) AND NOT(A=B)

Or cascaded approach:
If A1 > B1, then A > B (regardless of A0, B0)
If A1 = B1, compare A0 vs. B0
If A1 < B1, then A < B

Q17. Wallace Tree Multiplier — Concept and Speed Advantage

Direct answer: A Wallace tree is a fast multiplication method using partial products and carry-save adders (CSA). Instead of serially adding partial products, add them in parallel, reducing delay from O(n) to O(log n).

High-end multipliers in CPUs use Wallace trees. For a 32×32 multiplier, serially accumulating the 32 partial products would take on the order of 31 adder delays; a Wallace tree compresses them in roughly 8 carry-save levels followed by a single fast final adder.

Q18. Gray Code — Convert Between Gray and Binary, Why Used in Async FIFO

Direct answer: Gray code is a binary numeral system where consecutive values differ by only 1 bit. Conversion: binary to gray is (binary) XOR (binary >> 1); gray to binary is more complex (iterative XOR).

Used in async FIFOs because when a pointer crosses clock domains, only 1 bit changes. A synchronizer (2 flops) safely samples it without glitch vulnerability. Binary pointers have multiple bits changing (e.g., 0111 → 1000), risking metastability.

Gray Code Sequence (3-bit):
Decimal | Binary | Gray Code
--------|--------|----------
0       | 000    | 000
1       | 001    | 001
2       | 010    | 011
3       | 011    | 010
4       | 100    | 110
5       | 101    | 111
6       | 110    | 101
7       | 111    | 100

Notice: each row differs from previous by exactly 1 bit in Gray column.

Binary to Gray conversion:
Gray = Binary XOR (Binary >> 1)
Example: Binary = 0101 (decimal 5)
         0101 XOR (0101 >> 1) = 0101 XOR 0010 = 0111 (Gray for 5)

Gray to Binary conversion:
Binary = Gray
For each bit position i (left to right):
  Binary[i] = Gray[i] XOR Binary[i-1]
Example: Gray = 0111 (decimal 5 in Gray)
         Binary[0] = 0 (MSB, no prior bit)
         Binary[1] = 1 XOR 0 = 1
         Binary[2] = 1 XOR 1 = 0
         Binary[3] = 1 XOR 0 = 1
         Result: Binary = 0101 ✓
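Both conversions in Python, plus a check of the single-bit-change property that makes Gray code safe for async FIFO pointers:

```python
def bin_to_gray(b: int) -> int:
    return b ^ (b >> 1)               # Gray = Binary XOR (Binary >> 1)

def gray_to_bin(g: int) -> int:
    b = 0
    while g:                          # prefix-XOR: fold shifted copies back in
        b ^= g
        g >>= 1
    return b

assert bin_to_gray(0b0101) == 0b0111  # the worked example above
assert gray_to_bin(0b0111) == 0b0101

# Adjacent codes differ in exactly one bit.
for n in range(7):
    diff = bin_to_gray(n) ^ bin_to_gray(n + 1)
    assert bin(diff).count("1") == 1
```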

Q19. BCD Adder — What Happens When Sum > 9?

Direct answer: A BCD adder adds two 4-bit BCD digits. If the sum exceeds 9 (1001), add 6 (0110) to correct. This is because BCD uses 10 valid codes (0–9); adding 6 shifts invalid codes into the next BCD digit.

Example: 5 + 6 = 11 (decimal). In BCD: 0101 + 0110 = 1011 (invalid). Add 6: 1011 + 0110 = 10001 (overflow to next digit), result is 1 (carry) and 0001 (1).

BCD Addition: 5 + 6
  5 = 0101 (BCD)
+ 6 = 0110 (BCD)
  -----------
    = 1011 (invalid BCD, > 9)

Correction: Add 6 (0110)
  1011 + 0110 = 10001

Result: Carry=1, digit=0001, which represents 11 (decimal) ✓

Another example: 8 + 7 = 15
  8 = 1000 (BCD)
+ 7 = 0111 (BCD)
  -----------
    = 1111 (invalid, =15)

Add 6:
  1111 + 0110 = 10101
  Result: Carry=1 (1 in next digit), digit=0101 (5)
  So: 15 = 1 (tens) and 5 (ones) ✓

BCD Adder logic:
Sum_binary = A + B
If Sum_binary > 1001 (9 decimal):
  Sum_BCD = Sum_binary + 0110 (add 6)
  Carry_out = 1
Else:
  Sum_BCD = Sum_binary
  Carry_out = 0
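The add-6 correction rule as a small Python model, one BCD digit at a time (with an optional carry-in, as a real digit-slice adder would have):

```python
def bcd_digit_add(a: int, b: int, cin: int = 0):
    """Add two BCD digits (0-9); correct by +6 when the raw sum exceeds 9.
    Returns (corrected digit, carry to the next BCD digit)."""
    raw = a + b + cin
    if raw > 9:
        return (raw + 6) & 0xF, 1     # add 6, keep low nibble, carry out
    return raw, 0

assert bcd_digit_add(5, 6) == (1, 1)  # 5+6 = 11 → digit 1, carry 1
assert bcd_digit_add(8, 7) == (5, 1)  # 8+7 = 15 → digit 5, carry 1
assert bcd_digit_add(3, 4) == (7, 0)  # no correction needed
```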

Q20. PLA vs. PAL vs. CPLD vs. FPGA — When to Use Each?

Direct answer: PLA (Programmable Logic Array) is fully programmable AND/OR planes; PAL (Programmable Array Logic) has fixed OR plane. CPLD (Complex PLD) = many PLAs with routing. FPGA (Field Programmable Gate Array) = LUTs (lookup tables) with routing, most flexible and largest capacity.

Technology | AND Plane     | OR Plane     | Use Case
-----------|---------------|--------------|--------------------------------------------------
PLA        | Programmable  | Programmable | Dense combinational logic (obsolete today)
PAL        | Programmable  | Fixed        | Small glue logic, discrete designs (vintage)
CPLD       | Multiple PLAs | Routed       | Mid-scale logic, board-level glue (still used)
FPGA       | LUT-based     | Routed       | Large designs, prototyping, production (dominant)

Section 3: Sequential Circuits (Q21–Q30)

Q21. SR, D, JK, T Flip-Flops — Truth Tables and Characteristic Equations

Direct answer: SR (set-reset) has two inputs; outputs go to set or reset state. D (data) captures input on clock edge. JK behaves like SR but defines the J=K=1 case as a toggle, removing SR's invalid state. T (toggle) flips or holds the output based on the toggle input. Each has a characteristic equation.

Flip-Flop | Input(s) | Characteristic Equation  | Function
----------|----------|--------------------------|------------------------------
SR        | S, R     | Q+ = S + NOT(R)·Q        | Set or Reset (S=R=1 invalid)
D         | D        | Q+ = D                   | Capture data on clock
JK        | J, K     | Q+ = J·NOT(Q) + NOT(K)·Q | Set, Reset, Hold, or Toggle
T         | T        | Q+ = T XOR Q             | Toggle if T=1, hold if T=0

SR FF truth table:
S | R | Q+ | Function
--|---|----|---------
0 | 0 | Q  | Hold
0 | 1 | 0  | Reset
1 | 0 | 1  | Set
1 | 1 | ?  | Invalid (ambiguous)

JK FF truth table:
J | K | Q+ | Function
--|---|----|---------
0 | 0 | Q  | Hold
0 | 1 | 0  | Reset
1 | 0 | 1  | Set
1 | 1 | Q̄  | Toggle

T FF truth table:
T | Q+ | Function
--|----|---------
0 | Q  | Hold
1 | Q̄  | Toggle

Q22. Setup Time, Hold Time, Propagation Delay, Clock-to-Q — Definitions

Direct answer: Setup time: minimum time input must be stable before clock edge. Hold time: minimum time input must stay stable after clock edge. Propagation delay (clk-to-Q): time from clock edge to output update.

Violate setup or hold, and you risk metastability. All three are measured from a flip-flop datasheet and are critical for synchronous design timing closure.

Timing Diagram: rising-edge D Flip-Flop

CLK:  ________|‾‾‾‾‾‾‾‾|________|‾‾‾‾‾‾‾‾|____

             ← setup →|← hold →
D:    ─── stable ─────┼─── stable ───   (may change outside this window)
                      ↑
                 rising edge

Q:    ────────────────────|← new value
                      |←──→| clk-to-Q delay

Timings (example datasheet values):
Setup time: D must be stable ≥ 0.5 ns before the rising CLK edge
Hold time:  D must remain stable ≥ 0.2 ns after the rising CLK edge
Clock-to-Q: Q reflects the captured D ≤ 1.5 ns after the rising CLK edge

Violation example:
If D changes 0.1 ns before the clock edge (less than the setup time),
metastability can occur: Q may oscillate for nanoseconds before settling.

Q23. What Is Metastability? Why Violating Setup/Hold Causes It

Direct answer: Metastability is a state where the flip-flop output is neither 0 nor 1 (both transistor pairs partially conducting). Physically, a violated setup/hold edge triggers instability in the flip-flop's cross-coupled latch, causing oscillation until one side wins.

Practically: you can't eliminate metastability, only reduce its probability. A 2-flop synchronizer in each clock domain reduces MTBF (mean time between failures) to safe levels (years at modern speeds).
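The classic exponential MTBF model makes the "add a second flop" argument concrete. A sketch in Python — here τ (resolution time constant) and T0 (metastability window) are process-dependent flip-flop constants, and every number below is made up purely for illustration:

```python
import math

def synchronizer_mtbf(t_resolve, tau, t0, f_clk, f_data):
    """Classic model: MTBF = e^(t_resolve / tau) / (T0 * f_clk * f_data).
    t_resolve is the slack the signal has to settle before it is re-sampled."""
    return math.exp(t_resolve / tau) / (t0 * f_clk * f_data)

# Illustrative constants: tau = 50 ps, T0 = 100 ps, 500 MHz clock, 10 MHz data.
one_flop  = synchronizer_mtbf(0.5e-9, 50e-12, 100e-12, 500e6, 10e6)
two_flops = synchronizer_mtbf(2.5e-9, 50e-12, 100e-12, 500e6, 10e6)  # + one full cycle to resolve

# The extra clock period in the exponent multiplies MTBF astronomically.
assert two_flops > one_flop * 1e15
```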

Q24. Synchronous vs. Asynchronous Reset — Which Is Preferred and Why?

Direct answer: Asynchronous reset immediately forces output to 0 (no clock required); synchronous reset uses clock edge. Asynchronous is faster (no clock dependency) but risks metastability. Synchronous is safer but slower. Modern practice: async assert, sync release (reset synchronizer).

Type         | Speed                     | Metastability Risk           | Use Case
-------------|---------------------------|------------------------------|-------------------------------
Asynchronous | Immediate (~ns)           | High on release (needs sync) | Global power-up reset
Synchronous  | Delayed (1 cycle + setup) | Low (clock-aligned)          | Low-power, gated-clock domains

Q25. Latch vs. Flip-Flop — Key Difference (Level-Sensitive vs. Edge-Triggered)

Direct answer: A latch is level-sensitive: output follows input while enable is high. A flip-flop is edge-triggered: output captures input only on clock edge. Latches are faster but dangerous (can oscillate); flip-flops are stable but slower.

Latch (SR latch with gating):
EN:  ---|‾‾‾‾‾‾|___|‾‾‾‾‾‾|___

D:   _|‾‾|_|‾‾‾‾‾|_|‾‾|_____

Q:   _|‾‾‾‾|_|‾‾‾‾‾‾|_|‾‾|___
        (Q follows D while EN=1)
        (transparent: real-time tracking)

Flip-Flop (D FF with clock):
CLK: _|‾|_|‾|_|‾|_|‾|_|‾|_

D:   _|‾‾|_|‾‾‾‾‾|_|‾‾|_____

Q:   _|‾‾‾‾|___|‾‾‾‾|___|‾‾__
         (Q captures D on ↑ CLK edge only)
         (edge-triggered: synchronous, no oscillation risk)

Key insight:
Latch: Q changes immediately when input changes (if EN=1)
       → faster but more complex to debug, risk of glitches
FF:    Q changes only on clock edge
       → slower but predictable, easier to verify timing

📌 Note: Modern designs avoid latches in synchronous logic (except for dynamic latches in high-speed datapath). Latches are useful in asynchronous circuits or as "level-sensitive storage" in gated-clock designs, but flip-flops dominate today.

Q26. Shift Register Types — SIPO, PISO, SISO, PIPO with Waveforms

Direct answer: SIPO (Serial-In Parallel-Out): serial input converted to parallel output. PISO (Parallel-In Serial-Out): reverse. SISO (Serial-In Serial-Out): shift chain. PIPO (Parallel-In Parallel-Out): array of flip-flops. Each has different applications.

Type | Input          | Output         | Use
-----|----------------|----------------|----------------------------------------
SIPO | 1-bit serial   | N-bit parallel | Serial-to-parallel converter (UART RX)
PISO | N-bit parallel | 1-bit serial   | Parallel-to-serial converter (UART TX)
SISO | 1-bit serial   | 1-bit serial   | Delay line, bit rotation (LFSR)
PIPO | N-bit parallel | N-bit parallel | Register (pipeline, temporary storage)

Q27. Johnson Counter vs. Ring Counter — Comparison and Application

Direct answer: Ring counter: shift a single 1 around the ring (outputs one line at a time). Johnson counter (twisted ring): shift inverted feedback, producing a different sequence. Johnson uses 2n states for n flip-flops (ring uses n states), and decoding is simpler.

Counter | Sequence (4-bit)                       | Mod-N  | Decoder
--------|----------------------------------------|--------|-------------------------------
Ring    | 1000→0100→0010→0001→1000               | 4 (n)  | None (one FF output per state)
Johnson | 0000→1000→1100→1110→1111→0111→...→0000 | 8 (2n) | One 2-input gate per state
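The two sequences are easy to generate in Python (helper names are my own):

```python
def ring_counter(n, steps):
    """A single 1 circulates through n flip-flops."""
    state = [1] + [0] * (n - 1)
    seq = []
    for _ in range(steps):
        seq.append(''.join(map(str, state)))
        state = [state[-1]] + state[:-1]      # rotate right: last FF feeds the first
    return seq

def johnson_counter(n, steps):
    """Twisted ring: the COMPLEMENT of the last FF re-enters the chain."""
    state = [0] * n
    seq = []
    for _ in range(steps):
        seq.append(''.join(map(str, state)))
        state = [1 - state[-1]] + state[:-1]  # inverted feedback doubles the cycle
    return seq

print(ring_counter(4, 4))     # ['1000', '0100', '0010', '0001']  -> mod-4
print(johnson_counter(4, 8))  # ['0000', '1000', '1100', '1110',
                              #  '1111', '0111', '0011', '0001']  -> mod-8
```

Same hardware cost (4 flip-flops), but the inverted feedback buys twice the state count.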

Q28. Design a Mod-6 Counter Using JK Flip-Flops

Direct answer: A mod-6 counter counts 0 through 5, then returns to 0. Three JK flip-flops give 8 possible states, so derive the J/K inputs from the JK excitation table such that state 5 (101) transitions back to 000; the unused states 6 and 7 become don't-cares in the K-maps.

Mod-6 Counter (counts 0,1,2,3,4,5, then reset):
State | Q2 | Q1 | Q0 | Next
------|----|----|----|-----
  0   | 0  | 0  | 0  | 1
  1   | 0  | 0  | 1  | 2
  2   | 0  | 1  | 0  | 3
  3   | 0  | 1  | 1  | 4
  4   | 1  | 0  | 0  | 5
  5   | 1  | 0  | 1  | 0 (reset)
  6   | 1  | 1  | 0  | X (unused; don't-care)
  7   | 1  | 1  | 1  | X (unused; don't-care)

JK truth table recap:
J K | Q+
----|----
0 0 | Q (hold)
0 1 | 0 (reset)
1 0 | 1 (set)
1 1 | Q̄ (toggle)

Excitation table (inputs needed for a JK FF to go from Q to Q+):
Q | Q+ | J | K
--|----|---|---
0 | 0  | 0 | X   (stay 0)
0 | 1  | 1 | X   (set)
1 | 0  | X | 1   (reset)
1 | 1  | X | 0   (stay 1)

K-map each J and K input against the current state (Q2, Q1, Q0), with states 6 and 7 as don't-cares:
J0 = 1, K0 = 1   (Q0 toggles every clock)
J1 = Q0·Q̄2       (Q1 sets on 001→010 but must stay 0 on 101→000), K1 = Q0
J2 = Q0·Q1       (Q2 sets on 011→100), K2 = Q0 (Q2 clears on 101→000)

Note the Q̄2 term in J1: with J1 = Q0 (the plain binary-counter pattern) the circuit would count mod-8, not mod-6. Also check that the unused states recover: 110→111→000, so the counter is self-correcting.
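A quick behavioral check in Python. The J/K equations coded here are my own K-map derivation for the 0–5 sequence (J0 = K0 = 1, J1 = Q0·Q̄2, K1 = Q0, J2 = Q0·Q1, K2 = Q0); the simulation confirms they produce exactly six states:

```python
# Simulate three JK flip-flops wired as a mod-6 counter.
def jk_next(q, j, k):
    """Next-state function of a single JK flip-flop."""
    if (j, k) == (0, 0): return q        # hold
    if (j, k) == (0, 1): return 0        # reset
    if (j, k) == (1, 0): return 1        # set
    return 1 - q                         # toggle

def mod6_step(q2, q1, q0):
    j0, k0 = 1, 1
    j1, k1 = q0 & (1 - q2), q0
    j2, k2 = q0 & q1, q0
    # All three flops update simultaneously from the *current* state
    return jk_next(q2, j2, k2), jk_next(q1, j1, k1), jk_next(q0, j0, k0)

state = (0, 0, 0)
seq = []
for _ in range(7):
    seq.append(state[0] * 4 + state[1] * 2 + state[2])
    state = mod6_step(*state)
print(seq)   # [0, 1, 2, 3, 4, 5, 0]
```

This kind of five-minute behavioral model is a cheap way to validate hand-derived excitation logic before committing it to RTL.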

Q29. Moore vs. Mealy FSM — Design a Vending Machine

Direct answer: Moore outputs depend on the current state only; Mealy outputs depend on state and inputs. Moore outputs are glitch-free and easy to verify but respond one clock cycle later; Mealy outputs can react in the same cycle, but because they are combinationally tied to the inputs they can glitch. The design choice depends on the application.

Example: vending machine (simplified). States: idle, coin_accepted, dispensing. Outputs: dispense, refund. In Moore, dispense happens while IN that state. In Mealy, dispense happens WHEN the trigger input arrives (potentially faster).
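A minimal sketch of the contrast (my own simplification: price = 2 coins, input event = "coin"; state and function names are hypothetical):

```python
def moore_step(state, coin):
    """Moore: output depends on state only -- dispense while IN 'dispensing'."""
    nxt = {('idle', 1): 'one_coin', ('one_coin', 1): 'dispensing',
           ('dispensing', 0): 'idle', ('dispensing', 1): 'idle'}.get((state, coin), state)
    output = 1 if nxt == 'dispensing' else 0   # follows the (registered) state
    return nxt, output

def mealy_step(state, coin):
    """Mealy: output depends on state AND input -- dispense AS the 2nd coin lands."""
    if state == 'one_coin' and coin:
        return 'idle', 1                       # output asserted with the input
    if state == 'idle' and coin:
        return 'one_coin', 0
    return state, 0

s, out = 'idle', 0
for coin in [1, 1]:
    s, out = mealy_step(s, coin)
print(out)   # 1 -- Mealy dispenses on the cycle the 2nd coin arrives
```

Running the same coin sequence through `moore_step` asserts the dispense output one transition later, in the dispensing state itself — the one-cycle latency difference the answer describes.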

Q30. Race Condition in Asynchronous Sequential Circuits

Direct answer: A race condition occurs in asynchronous sequential circuits when two or more state variables must change at the same time. Because gate delays differ, the variables actually change in an unpredictable order; if the final state depends on that order, the race is critical and the circuit may settle in an unintended state (non-critical races converge to the same state regardless of order). Synchronous design avoids races by aligning all state changes to clock edges.

Section 4: Memory, Arithmetic, Logic Families (Q31–Q40)

Q31. SRAM vs. DRAM — Construction and Speed Comparison

Direct answer: SRAM (Static RAM) uses flip-flops (cross-coupled transistors) to hold data; no refresh needed. DRAM uses a single transistor + capacitor, requiring periodic refresh. SRAM is faster but larger; DRAM is denser but slower.

Aspect       | SRAM                                  | DRAM
-------------|---------------------------------------|----------------------------------
Cell type    | 6T (cross-coupled inverters + access) | 1T1C (transistor + capacitor)
Access speed | Fast (~1 ns)                          | Slower (tens of ns, plus refresh)
Refresh      | Not needed                            | Required (typically every 64 ms)
Density      | Lower (large cell)                    | Higher (tiny cell)
Idle power   | Leakage current                       | Refresh power (capacitors leak)
Use          | CPU caches, register files            | Main memory (DDR, LPDDR)

Q32. ROM Types — Mask ROM, PROM, EPROM, EEPROM, Flash

Direct answer: Mask ROM is hardwired at fabrication (unchangeable). PROM (Programmable) is one-time writable. EPROM (Erasable PROM) is erasable by UV light. EEPROM is electrically erasable. Flash is a variant of EEPROM with faster erase (block-level).

ROM Type | Programmable | Erasable                 | Speed                 | Use
---------|--------------|--------------------------|-----------------------|---------------------------
Mask ROM | No           | No                       | Fast                  | High-volume production
PROM     | Yes (once)   | No                       | Fast                  | Prototypes
EPROM    | Yes          | UV light (whole chip)    | Fast                  | Vintage microcontrollers
EEPROM   | Yes          | Electrical (byte-level)  | Slower                | Config storage (AVR, PIC)
Flash    | Yes          | Electrical (block-level) | Fast read, slow write | NAND/NOR flash (SSDs, USB)

Q33. CAM (Content Addressable Memory) — How It Differs from RAM

Direct answer: RAM is address-addressable (you provide address, get data). CAM is content-addressable (you provide data, get address). Useful for associative lookups, routing tables, TLBs. Slower and more expensive than RAM.

Internally, CAM compares input against all entries in parallel, returning the matching address(es). Used in networking (MAC address lookups), CPUs (cache tag arrays), and real-time systems.
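A toy software model of the lookup direction (my own encoding; in silicon all comparisons happen in parallel, here Python just scans):

```python
# CAM: present data, get back the address(es) that hold it.
cam_entries = ['aa:01', 'bb:02', 'cc:03', 'bb:02']   # stored tags (MAC-style)

def cam_lookup(entries, key):
    """Return every address whose stored content matches the search key."""
    return [addr for addr, data in enumerate(entries) if data == key]

print(cam_lookup(cam_entries, 'bb:02'))   # [1, 3] -- multiple matches possible
print(cam_lookup(cam_entries, 'ff:ff'))   # []     -- miss
```

Compare with RAM, where the direction is reversed: `entries[1]` takes an address and returns the data.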

Q34. Subtractor Using Adder — Two's Complement Trick

Direct answer: A - B = A + (-B) = A + NOT(B) + 1. So reuse an adder by inverting B and setting the carry-in to 1.

Example: 10 - 3 in 4-bit
10 = 1010
3 = 0011

Two's complement of 3:
NOT(0011) = 1100
1100 + 1 = 1101 (-3 in two's complement)

10 + (-3):
  1010
+ 1101
------
 10111 (5 bits; discard the carry-out)
 = 0111 = 7 (decimal) ✓

Circuit: Use adder with:
  A = 1010 (10)
  B_inverted = NOT(0011) = 1100
  Cin = 1
  Result = A + B_inverted + Cin = 10 - 3 = 7
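The trick in runnable form (a sketch with my own helper names; Python integers stand in for n-bit buses):

```python
# Reuse an n-bit adder for subtraction: A - B = A + ~B + 1,
# with the +1 supplied through the adder's carry-in.
def add_n(a, b, cin, n=4):
    """n-bit add; returns the sum truncated to n bits (carry-out discarded)."""
    return (a + b + cin) & ((1 << n) - 1)

def sub_n(a, b, n=4):
    b_inverted = ~b & ((1 << n) - 1)     # one's complement of B
    return add_n(a, b_inverted, cin=1, n=n)

print(sub_n(10, 3))   # 7  (1010 + 1101 + 1, carry-out dropped -> 0111)
print(sub_n(3, 10))   # 9  = 1001, i.e. -7 in 4-bit two's complement
```

Note the second example: when the true result is negative, the same bit pattern is simply read as a two's-complement value.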

Q35. Overflow Detection in Signed Addition

Direct answer: In an n-bit signed system, overflow occurs when the true sum falls outside the representable range. Detect it by checking whether both operands have the same sign but the result has the opposite sign; equivalently, overflow = carry into the MSB XOR carry out of the MSB.

Boolean: Overflow = (A[n-1] AND B[n-1] AND NOT(Sum[n-1])) OR (NOT(A[n-1]) AND NOT(B[n-1]) AND Sum[n-1])
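The sign-based check as a short function (my own sketch; bit width is a parameter):

```python
def signed_add_overflow(a, b, n=4):
    """True if a + b overflows the n-bit two's-complement range."""
    msb = 1 << (n - 1)
    s = (a + b) & ((1 << n) - 1)                # n-bit truncated sum
    a_sign, b_sign, s_sign = bool(a & msb), bool(b & msb), bool(s & msb)
    # Same-sign operands producing an opposite-sign sum => overflow
    return (a_sign == b_sign) and (s_sign != a_sign)

print(signed_add_overflow(0b0111, 0b0001))   # True:  7 + 1 = 8, outside [-8, 7]
print(signed_add_overflow(0b0011, 0b0010))   # False: 3 + 2 = 5, fits
```

Mixed-sign operands can never overflow, which is why the Boolean expression above has no mixed-sign term.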

Q36. CMOS vs. TTL — Key Differences

Direct answer: CMOS (Complementary MOS) uses paired PMOS and NMOS transistors: near-zero static power, wide supply range, good noise margins. TTL (Transistor-Transistor Logic) uses bipolar transistors: significant static power and a fixed ~5 V supply. Early CMOS (4000 series) was slower than TTL, but modern CMOS is far faster; TTL survives mainly in legacy 5 V board designs.

Parameter      | CMOS                                   | TTL
---------------|----------------------------------------|----------------------------
Supply voltage | Wide (1.8 V–15 V, family-dependent)    | 5 V (fixed)
Static power   | Negligible (leakage only)              | High (mW per gate)
Noise margin   | ~40% of supply (good)                  | ~20% (lower)
Gate delay     | ns for 4000-series; ps at modern nodes | ~10 ns (74LS), ~3 ns (74F)
Fanout         | Very high (DC nearly unlimited)        | Limited (~10)
Use            | All modern digital (ASICs, FPGAs)      | Legacy 5 V discrete logic

Q37. Propagation Delay and Critical Path — How to Calculate Worst-Case

Direct answer: Propagation delay is time for a change to propagate through a gate. Critical path is the longest delay path from input to output. Worst-case delay = sum of gate delays on the slowest path.

Example: A → AND → OR → output. If AND_delay = 1 ns and OR_delay = 1.5 ns, the worst-case path delay is 2.5 ns. Account for process corners (slow, nominal, fast), voltage (min, nom, max), and temperature.
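The "sum along the slowest path" rule is a longest-path computation over the gate netlist. A toy version in Python (the gate names and delays are my own example, not from the article):

```python
import functools

gate_delay = {'AND1': 1.0, 'OR1': 1.5, 'AND2': 0.8}      # per-gate delay in ns
drives = {'AND1': ['OR1'], 'AND2': ['OR1'], 'OR1': []}    # netlist edges (driver -> loads)

@functools.lru_cache(maxsize=None)
def arrival(gate):
    """Latest arrival time (ns) at a gate's output: own delay plus the
    worst arrival among its fan-in gates (0 for primary-input gates)."""
    fanins = [g for g, loads in drives.items() if gate in loads]
    return gate_delay[gate] + (max(arrival(g) for g in fanins) if fanins else 0.0)

print(arrival('OR1'))   # 2.5 -- critical path is AND1 -> OR1
```

Static timing analyzers do essentially this (plus setup/hold checks and corner-dependent delays) over millions of gates.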

Q38. Fanout and Circuit Speed — How Does It Affect Delay?

Direct answer: Fanout is the number of inputs driven by one output. High fanout increases the capacitive load, slowing the driving gate. A common design rule is to limit fanout to roughly 4–10 loads per driver, and to buffer heavily loaded nets.

In STA (Static Timing Analysis), nets that exceed the max_fanout design rule are flagged as violations. The synthesizer inserts buffers automatically to reduce the load.

Q39. PLDs — How a PAL Implements Sum-of-Products

Direct answer: A PAL (Programmable Array Logic) has a programmable AND array followed by a fixed OR array. Inputs drive both true and complemented versions into AND matrix; AND outputs are summed by fixed OR gates.

Example: Implement Y = AB + AC + BC

PAL architecture:
Inputs: A, B, C (and complements)

AND matrix (programmable):
     A Ā B B̄ C C̄
P0:  1 0 1 0 0 0  → product AB
P1:  1 0 0 0 1 0  → product AC
P2:  0 0 1 0 1 0  → product BC
(1 = that literal's fuse is left connected into the product term)

OR matrix (fixed):
Y = P0 + P1 + P2
  = AB + AC + BC ✓

The user sets fuses in the AND matrix;
OR gates are hardwired.
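A software model of the fuse map makes the AND/OR split concrete (the encoding and function name are my own sketch):

```python
def pal_eval(fuse_rows, a, b, c):
    """fuse_rows: per product term, connections to (A, ~A, B, ~B, C, ~C); 1 = connected.
    Programmable AND plane, fixed OR plane."""
    literals = [a, 1 - a, b, 1 - b, c, 1 - c]
    terms = []
    for row in fuse_rows:
        # A product term ANDs every literal whose fuse is intact
        terms.append(int(all(lit for lit, fuse in zip(literals, row) if fuse)))
    return int(any(terms))               # fixed OR of all product terms

# Y = AB + AC + BC (the 3-input majority function)
fuses = [
    [1, 0, 1, 0, 0, 0],   # AB
    [1, 0, 0, 0, 1, 0],   # AC
    [0, 0, 1, 0, 1, 0],   # BC
]
print([pal_eval(fuses, a, b, c)
       for a in (0, 1) for b in (0, 1) for c in (0, 1)])
# [0, 0, 0, 1, 0, 1, 1, 1] -- 1 whenever at least two inputs are 1
```

Exhaustively sweeping the truth table like this is the standard sanity check for a hand-built fuse map.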

Q40. Synchronizer Design — Why 2 Flip-Flops, Not 1? MTBF Calculation

Direct answer: A single flip-flop sampling an asynchronous input can go metastable when setup/hold is violated. Cascading a second flop gives the first nearly a full clock period to resolve, raising the MTBF of a metastability-induced failure from seconds (or less) to many years: the first flop may go metastable, but it almost always settles before the second flop samples it.

MTBF calculation: MTBF = e^(t_r/τ) / (T_W × f_clk × f_data), where τ is the flop's regeneration time constant, t_r is the resolution time available before the second flop samples, T_W is the metastability window, f_clk is the sampling clock frequency, and f_data is the asynchronous input's toggle rate. Because t_r appears in the exponent, each added resolution period multiplies MTBF enormously; with typical values at 1 GHz, MTBF comfortably exceeds the product lifetime.

💡 Tip: Always use at least 2 flip-flops for CDC (clock domain crossing). 3 flops are common for ultra-reliable applications. Formal CDC tools verify synchronizer correctness automatically (Cadence CDC, Synopsys SpyGlass).
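Plugging representative numbers into the MTBF model shows why the exponential dominates (all parameter values below are illustrative assumptions, not from a datasheet):

```python
# MTBF = e^(t_r / tau) / (T_w * f_clk * f_data)
import math

tau = 20e-12        # regeneration time constant (s) -- assumed
t_r = 0.8e-9        # resolution time before the 2nd flop samples (s) -- assumed
T_w = 20e-12        # metastability window (s) -- assumed
f_clk = 1e9         # 1 GHz sampling clock
f_data = 100e6      # 100 MHz async input toggle rate

mtbf_seconds = math.exp(t_r / tau) / (T_w * f_clk * f_data)
print(f"MTBF = {mtbf_seconds / 3.15e7:.1e} years")   # thousands of years here
```

Halving t_r (say, by doubling the clock frequency) divides the exponent by two and can collapse MTBF from millennia to seconds — which is why fast clock domains sometimes need a third synchronizer flop.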

Core Concepts Cheatsheet

Concept          | Quick Definition                                   | Key Insight
-----------------|----------------------------------------------------|--------------------------------------------------------
De Morgan        | NOT(A AND B) = NOT(A) OR NOT(B)                    | Enables NAND-only (or NOR-only) logic; cuts gate count
Karnaugh Map     | 2D grid for grouping minterms/maxterms             | A group of 2^k cells eliminates k variables
Full Adder       | Sum = A XOR B XOR Cin; Cout = AB + (A XOR B)Cin    | 1-bit building block for all adders
Setup/Hold       | D stable before/after the clock edge               | Violations risk metastability; synchronize async inputs
Metastability    | FF output lingers at an invalid intermediate level | Can't be eliminated, only made exponentially unlikely
Gray Code        | Consecutive values differ by exactly 1 bit         | CDC-safe: only one bit can ever be mis-sampled
Two's Complement | Negative = NOT + 1; range [-2^(n-1), 2^(n-1)-1]    | Subtraction = addition of the negated operand
Synchronizer     | 2+ FF chain across a clock domain                  | Always use for CDC; MTBF grows exponentially with resolution time

📌 Final Note: These 40 questions represent the overlap of what you'll see in interviews and what you'll use in real silicon design. The deeper understanding comes from *designing* with these concepts: building FIFOs, FSMs, multipliers in your projects, and then asking "why did that work — or fail?" Real interview success isn't memorization; it's confidence from experience.

Raju Gorla
© 2026 VLSI Web
