How to Set Up and Use SQL Workbench to Interact with AWS RDS Databases

SQL Workbench is a powerful SQL query tool that simplifies database interactions. In this guide, I’ll walk you through setting up SQL Workbench, connecting to a MySQL database hosted on AWS RDS, and running queries.

Setting Environment Variables

To begin, make sure your system's environment variables are configured properly. SQL Workbench/J is a Java application, so a recent Java runtime must be installed and reachable, typically by setting JAVA_HOME or adding Java to your PATH.

How to Open a New Workbench and Connect to a Database

  • Click the SQL Workbench shortcut on your desktop
  • Give the connection profile a name, for example "mysql", as shown in the screenshot below
  • Since we are connecting to a MySQL database, select the MySQL driver under the Driver section
  • The first time you select the driver, you will be prompted to download the required libraries. Click Yes to proceed, then click Download driver, select an older version such as 8.0.28 from the available versions, and click OK
  • Enter the database username and password (you should have noted these while creating the database in AWS RDS)
  • Under the URL section, update the hostname and port number (you will find this information on the database's details page under the RDS service in AWS); check the screenshot below for the same information
  • Remove the name_of_database property if you are not sure what to enter there
  • Click Test to check whether the connection is successful
  • If the connection is successful, click OK
  • If you are doing this for the first time, go to Tools and click Show DB Tree; it will then show the databases
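
For reference, the JDBC URL in the URL section generally follows the format below. The hostname, port, and database name here are placeholders; substitute the endpoint and port shown on your RDS instance's details page:

```
jdbc:mysql://<your-instance>.<region>.rds.amazonaws.com:3306/<name_of_database>
```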





  • To create a new database, use the query below
  • CREATE DATABASE <name_of_database>;
  • To execute it, the shortcut is Ctrl+Enter, or click the Run button in the top-right corner
  • To switch to this database, use the query below
  • USE <name_of_database>;

                                                                              



Below are sample queries that create some dummy tables and insert data into them.

-- Create the 'dept' table
CREATE TABLE dept (
    dept_id INT PRIMARY KEY,
    dept_name VARCHAR(100),
    location VARCHAR(100)
);

-- Insert 21 records into 'dept'
INSERT INTO dept (dept_id, dept_name, location) VALUES
(1, 'HR', 'New York'),
(2, 'Finance', 'Los Angeles'),
(3, 'Engineering', 'San Francisco'),
(4, 'Sales', 'Chicago'),
(5, 'Marketing', 'Boston'),
(6, 'Support', 'Seattle'),
(7, 'IT', 'Austin'),
(8, 'R&D', 'San Diego'),
(9, 'Operations', 'Houston'),
(10, 'Legal', 'Dallas'),
(11, 'Admin', 'Denver'),
(12, 'Procurement', 'Miami'),
(13, 'Security', 'Atlanta'),
(14, 'QA', 'Phoenix'),
(15, 'Logistics', 'Detroit'),
(16, 'Design', 'Portland'),
(17, 'Analytics', 'Philadelphia'),
(18, 'Training', 'San Jose'),
(19, 'Consulting', 'Salt Lake City'),
(20, 'Recruitment', 'Las Vegas'),
(21, 'Strategy', 'Minneapolis');

-- Create the 'emp' table
CREATE TABLE emp (
    emp_id INT PRIMARY KEY,
    emp_name VARCHAR(100),
    dept_id INT,
    job_title VARCHAR(100),
    salary DECIMAL(10, 2),
    FOREIGN KEY (dept_id) REFERENCES dept(dept_id)
);

-- Insert 21 records into 'emp'
INSERT INTO emp (emp_id, emp_name, dept_id, job_title, salary) VALUES
(1, 'John Doe', 1, 'Manager', 75000.00),
(2, 'Jane Smith', 2, 'Accountant', 65000.00),
(3, 'Bob Johnson', 3, 'Engineer', 85000.00),
(4, 'Alice Brown', 4, 'Sales Representative', 55000.00),
(5, 'Tom Clark', 5, 'Marketing Specialist', 60000.00),
(6, 'Nancy White', 6, 'Support Analyst', 50000.00),
(7, 'Steve Adams', 7, 'IT Administrator', 70000.00),
(8, 'Emma Lee', 8, 'R&D Scientist', 90000.00),
(9, 'Paul King', 9, 'Operations Manager', 75000.00),
(10, 'Susan Hill', 10, 'Legal Advisor', 80000.00),
(11, 'Mary Green', 11, 'Admin Assistant', 45000.00),
(12, 'David Wright', 12, 'Procurement Officer', 55000.00),
(13, 'Chris Hall', 13, 'Security Specialist', 60000.00),
(14, 'Megan Scott', 14, 'QA Analyst', 52000.00),
(15, 'Laura Evans', 15, 'Logistics Coordinator', 58000.00),
(16, 'Kevin Harris', 16, 'Graphic Designer', 62000.00),
(17, 'Sophia Young', 17, 'Data Analyst', 75000.00),
(18, 'Michael Turner', 18, 'Trainer', 48000.00),
(19, 'Olivia Walker', 19, 'Consultant', 67000.00),
(20, 'Ethan Lewis', 20, 'Recruiter', 52000.00),
(21, 'Isabella Carter', 21, 'Strategy Specialist', 89000.00);

-- Create the 'orders' table
CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    emp_id INT,
    order_date DATE,
    order_amount DECIMAL(10, 2),
    FOREIGN KEY (emp_id) REFERENCES emp(emp_id)
);

-- Insert 21 records into 'orders'
INSERT INTO orders (order_id, emp_id, order_date, order_amount) VALUES
(1, 1, '2023-01-10', 1000.00),
(2, 2, '2023-01-15', 2000.00),
(3, 3, '2023-01-20', 1500.00),
(4, 4, '2023-02-10', 1200.00),
(5, 5, '2023-02-15', 1800.00),
(6, 6, '2023-02-20', 1400.00),
(7, 7, '2023-03-10', 1700.00),
(8, 8, '2023-03-15', 1300.00),
(9, 9, '2023-03-20', 1600.00),
(10, 10, '2023-04-10', 1900.00),
(11, 11, '2023-04-15', 1100.00),
(12, 12, '2023-04-20', 1500.00),
(13, 13, '2023-05-10', 2000.00),
(14, 14, '2023-05-15', 1250.00),
(15, 15, '2023-05-20', 1750.00),
(16, 16, '2023-06-10', 1350.00),
(17, 17, '2023-06-15', 1650.00),
(18, 18, '2023-06-20', 1550.00),
(19, 19, '2023-07-10', 1850.00),
(20, 20, '2023-07-15', 1950.00),
(21, 21, '2023-07-20', 2100.00);
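
If you want to sanity-check queries like these without a live RDS connection, here is a minimal sketch using Python's built-in sqlite3 module (an assumption for illustration only; SQL Workbench itself is not involved). It mirrors the dept and emp tables above with a couple of rows and runs a join:

```python
import sqlite3

# In-memory database standing in for the RDS MySQL instance
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Same structure as the 'dept' and 'emp' tables created above
cur.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT, location TEXT)")
cur.execute("""CREATE TABLE emp (
    emp_id INTEGER PRIMARY KEY,
    emp_name TEXT,
    dept_id INTEGER REFERENCES dept(dept_id),
    job_title TEXT,
    salary REAL)""")

# A small subset of the sample data
cur.executemany("INSERT INTO dept VALUES (?, ?, ?)",
                [(1, "HR", "New York"), (2, "Finance", "Los Angeles")])
cur.executemany("INSERT INTO emp VALUES (?, ?, ?, ?, ?)",
                [(1, "John Doe", 1, "Manager", 75000.00),
                 (2, "Jane Smith", 2, "Accountant", 65000.00)])

# Join employees to their departments, as you would in SQL Workbench
rows = cur.execute("""
    SELECT e.emp_name, d.dept_name, e.salary
    FROM emp e JOIN dept d ON e.dept_id = d.dept_id
    ORDER BY e.emp_id
""").fetchall()
print(rows)
```

The same SELECT statement works unchanged against the MySQL tables once you are connected through SQL Workbench.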

Finally, our SQL Workbench is ready to interact with AWS RDS. From here, we can use a Databricks account to create a Spark cluster and connect to this database to read and write data.
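
As a preview of that Databricks step, reading one of these tables from a Spark cluster over JDBC would look roughly like the sketch below. The hostname, credentials, and database name are placeholders, and a running SparkSession plus the MySQL JDBC driver on the cluster classpath are assumed:

```python
# Sketch only: assumes an existing SparkSession named `spark`
# and the MySQL Connector/J driver available on the classpath
df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://<your-instance>.<region>.rds.amazonaws.com:3306/<name_of_database>")
      .option("driver", "com.mysql.cj.jdbc.Driver")
      .option("dbtable", "emp")
      .option("user", "<db_username>")
      .option("password", "<db_password>")
      .load())
df.show()
```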
