
Nadeesha Vidyamali
AI Postgraduate Turning Academic Knowledge into Practical Innovation
I am an MSc Artificial Intelligence graduate from the University of South Wales with hands-on experience in machine learning, data analysis, and AI-driven software development. Proficient in Python, TensorFlow, and PyTorch, I focus on turning academic knowledge into practical, real-world AI solutions, and I am eager to apply my skills to impactful, challenging projects.
About Me
My journey into Artificial Intelligence began during my BSc in Information Technology, where I discovered a passion for how data could be leveraged to create intelligent solutions. This led me to pursue an MSc in AI, focusing on areas like deep learning, NLP, and computer vision.
I thrive on challenges that require analytical thinking and creative problem-solving. Outside of my academic pursuits, I enjoy exploring new technologies and contributing to open-source projects.
Skills
Projects

My Contributions:
- Designed a multi-agent (A2A) architecture: implemented interacting AI agents responsible for data ingestion, analysis, reasoning, and explanation generation.
- Applied explainable AI (XAI) principles: integrated rule-based reasoning to enhance transparency and interpretability of stock market trend analysis.
- Performed descriptive stock market analytics: analyzed historical stock data to identify patterns, trends, and market behaviors.
- Developed interpretable visualizations: created clear charts and dashboards to communicate analytical insights and agent-generated explanations.
- Evaluated system effectiveness: demonstrated how explainability improves trust and understanding in AI-driven financial analytics.
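To illustrate the rule-based, explainable reasoning style described above, here is a minimal sketch; the moving-average rule, window sizes, and function names are illustrative assumptions, not the project's actual code:

```python
from statistics import mean

def explain_trend(prices, short=3, long=5):
    """Rule-based trend label plus a human-readable explanation:
    compare a short- and a long-window moving average."""
    if len(prices) < long:
        raise ValueError("need at least `long` prices")
    short_ma = mean(prices[-short:])   # recent average
    long_ma = mean(prices[-long:])     # longer-horizon average
    if short_ma > long_ma:
        label = "uptrend"
    elif short_ma < long_ma:
        label = "downtrend"
    else:
        label = "sideways"
    reason = (f"{short}-day MA {short_ma:.2f} vs "
              f"{long}-day MA {long_ma:.2f} -> {label}")
    return label, reason

label, reason = explain_trend([10, 11, 12, 13, 14])
```

Attaching the `reason` string to every label is the XAI point: the agent's output can be audited by a human, not just consumed.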
Technologies Used:

My Contributions:
- Led data cleaning and preprocessing: handled missing values, standardized formats, and removed low-information columns to ensure data integrity.
- Engineered key features: computed patient age groups, length of stay, admission month/year, and billing categories to enrich analysis.
- Performed exploratory data analysis: visualized condition prevalence, billing trends, and appointment adherence across age, gender, and medical condition segments.
- Applied machine learning: one-hot encoded categorical variables, scaled numerical features, and trained a stratified random forest classifier to predict test results.
- Communicated findings: prepared comprehensive visualizations and a detailed report with recommendations for targeted preventive care and operational improvements.
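The feature-engineering step above can be sketched in a few lines; the age-group boundaries and helper names here are illustrative assumptions rather than the project's actual definitions:

```python
from datetime import date

def age_group(age):
    """Bucket a patient age into coarse cohorts for analysis."""
    bounds = [(18, "child"), (40, "young adult"), (65, "adult")]
    for upper, name in bounds:
        if age < upper:
            return name
    return "senior"

def length_of_stay(admitted, discharged):
    """Length of stay in whole days, from admission/discharge dates."""
    return (discharged - admitted).days

group = age_group(52)
stay = length_of_stay(date(2024, 3, 1), date(2024, 3, 6))
```

Derived columns like these are what make condition prevalence and billing trends comparable across patient segments.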
Technologies Used:

My Contributions:
- Data Preparation & EDA: Cleaned and standardized 569 samples with 30 numeric features, dropped irrelevant columns, balanced classes, and visualized feature distributions and pairwise relationships.
- Feature Engineering: Applied StandardScaler, encoded labels, and performed stratified train–test splits with cross-validation to preserve class proportions.
- Modeling & Tuning: Implemented and tuned four classifiers—Logistic Regression, SVM, Decision Tree, and a two-layer ANN—with systematic grid searches (C, kernel, max_depth, layer sizes) and early stopping for the ANN.
- Reporting: Created comprehensive visualizations (feature distributions, performance bar charts, ROC curves) and delivered actionable recommendations for deploying the top-performing ANN in clinical settings.
- Evaluation & Analysis: Compared models using accuracy, precision, recall, F1-score, AUC, confusion matrices, and ROC curves; identified concave points, perimeter, radius, and area as key predictive features.
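Two of the preprocessing ideas above, per-feature standardization and a class-preserving split, can be shown without any ML library; this is a simplified sketch (in practice scikit-learn's StandardScaler and stratified splitters would be used):

```python
from statistics import mean, stdev
from collections import defaultdict

def standardize(column):
    """Z-score one feature column, as StandardScaler does per feature."""
    m, s = mean(column), stdev(column)
    return [(x - m) / s for x in column]

def stratified_split(labels, test_frac=0.25):
    """Index split that keeps each class's train/test proportion equal."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train, test = [], []
    for idxs in by_class.values():
        cut = int(len(idxs) * (1 - test_frac))
        train += idxs[:cut]
        test += idxs[cut:]
    return train, test

train, test = stratified_split([0] * 8 + [1] * 4, test_frac=0.25)
```

Stratification matters here because the benign/malignant classes are imbalanced: a naive random split could leave the test set with too few malignant cases to evaluate recall reliably.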
Technologies Used:

My Contributions:
- End-to-End Pipeline: Developed a deep-learning pipeline to automatically identify five key savannah species (buffalo, elephant, giraffe, zebra, rhino) from real-world camera-trap imagery, achieving robust performance across challenging lighting and environmental conditions.
- Preprocessing & Augmentation: Applied Gaussian noise reduction, resized/normalized all images to 128×128 pixels, and generated 5× augmented variants per image (rotations, shifts, zooms, brightness, flips).
- Model Architecture & Training: Designed and trained a custom CNN in TensorFlow/Keras, implemented stratified train/validation splits, applied regularization to curb overfitting, and monitored accuracy and loss curves over 10+ epochs.
- Evaluation & Visualization: Produced ROC and confusion-matrix plots to assess per-species performance, identifying avenues for further hyperparameter tuning.
- Reporting & Insights: Compiled a comprehensive report summarizing methodology, quantitative results (60% test accuracy), and recommendations for future improvements (additional data, advanced architectures).
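The label-preserving augmentations above (flips and rotations) can be sketched on a plain 2D grid; this toy version is an assumption for illustration, since the project would use a library augmentation pipeline over real 128x128 images:

```python
def hflip(img):
    """Horizontal flip: a label-preserving augmentation."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Generate simple variants of one image; a full pipeline would
    also apply shifts, zooms, and brightness jitter."""
    return [img, hflip(img), rot90(img), hflip(rot90(img))]

img = [[1, 2],
       [3, 4]]
variants = augment(img)
```

Multiplying each training image into several variants is what lets a small camera-trap dataset support a CNN without immediate overfitting.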
Technologies Used:

My Contributions:
- Dataset Curation & Preprocessing: Collected 34k+ BSL images, applied MediaPipe key-point detection to crop and center hands, resized to 224×224, normalized pixel values, and used rotation/flip/brightness augmentations to improve robustness.
- Model Development & Fine-Tuning: Adapted a pre-trained EfficientNetB5 (ImageNet) by unfreezing its top 30 layers, added GAP → Dense(512, ReLU) → Dropout(0.5) → Softmax output, and trained with Adam (1e-4) over 10 epochs.
- Exploratory Visualization: Created histograms, scatter plots, and heatmaps to explore the dataset and communicate it effectively.
- Evaluation & Analysis: Assessed performance via accuracy (91%), precision, recall, F1-scores, confusion matrices, multi-class ROC and precision-recall curves—identifying gestures (e.g. ‘O’, ‘N’) needing further data for disambiguation.
- Deployment & UX: Packaged the model into a Gradio web interface allowing drag-and-drop image uploads, real-time landmark visualization, and on-the-fly prediction—ensuring accessibility without local installs.
- Challenges & Future Work: Mitigated background noise and lighting variability through targeted augmentations; outlined edge-device deployment (Raspberry Pi) and live-camera support to broaden real-world applicability.
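The final softmax stage and the "label plus confidence" output the web demo would display can be sketched in plain Python; the logits and class labels below are made up for illustration:

```python
from math import exp

def softmax(logits):
    """Numerically stable softmax over the class logits."""
    m = max(logits)                      # subtract max to avoid overflow
    exps = [exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, labels):
    """Top-1 prediction with its confidence score."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

sign, confidence = predict([2.0, 0.5, 0.1], ["A", "N", "O"])
```

Surfacing the confidence alongside the label is useful precisely for the confusable gestures noted above ('O' vs 'N'): low confidence flags predictions that need more training data.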
Technologies Used:

My Contributions:
- Data Preprocessing: Applied tokenization, lemmatization, cleanup (regex & stop-word filtering), and sentence segmentation to prepare raw text for downstream tasks.
- Text Summarization: Built and trained an LSTM‐based Seq2Seq encoder–decoder with attention in TensorFlow, achieving a ROUGE-1 score of 0.67 and BLEU of 0.22 for coherent story abstractions.
- Semantic Search: Generated sentence embeddings (all-MiniLM-L6-v2) stored in ChromaDB to enable fast, context-aware retrieval (most similarity scores >0.7) of relevant passages.
- Topic Modelling: Tuned a Gensim LDA model (5 topics) on TF-IDF–filtered text to uncover core themes—detective work, crime, dialogue—and visualized their distributions across the stories.
- Deployment: Packaged the pipeline into an interactive Anvil web app supporting on-the-fly summarization, search, and thematic exploration, with optimizations for inference speed (quantization, caching).
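The retrieval step of the semantic search above reduces to cosine similarity over embeddings; this is a minimal stand-in (two-dimensional toy vectors instead of MiniLM embeddings, a dict instead of ChromaDB):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, corpus, top_k=2):
    """Rank stored passages by similarity to the query embedding."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

corpus = {"crime scene": [0.9, 0.1],
          "dialogue": [0.1, 0.9],
          "deduction": [0.8, 0.3]}
top = search([1.0, 0.2], corpus, top_k=2)
```

A vector store like ChromaDB does the same ranking at scale with approximate nearest-neighbour indexing instead of the brute-force sort shown here.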
Technologies Used:
Work Experience
- Ensure the availability, reliability, and performance of databases supporting campaigns, media platforms, analytics, and internal systems.
- Monitor, tune, and optimize queries, indexes, and database structures to support high-traffic marketing and advertising workloads.
- Design, implement, and regularly test backup and recovery procedures to minimize downtime and prevent data loss.
- Implement access controls, encryption, and security best practices to protect client and agency data and ensure compliance with privacy regulations.
- Collaborate with developers, analysts, and creative technology teams to provide reliable data for reporting, dashboards, and campaign insights.
- Automate routine administrative tasks, maintain database documentation, and provide operational reports to improve efficiency and scalability.
- Analyzed existing business workflows and operational processes to identify repetitive, rule-based tasks that could be optimized through automation.
- Assisted in the design, development, and configuration of RPA bots using leading automation tools, contributing to increased efficiency and accuracy in routine operations.
- Conducted testing and troubleshooting of automation scripts to ensure smooth execution, data integrity, and compliance with business requirements.
- Created and maintained detailed process documentation, including workflow diagrams, test results, and change logs, ensuring traceability and process transparency.
- Maintain and support Hospital Information Systems (HIS), billing, laboratory, and pharmacy systems, and provide day-to-day IT support to doctors, nurses, and staff.
- Manage computers, servers, printers, network devices, Wi-Fi, and related hardware to ensure continuous hospital operations.
- Protect patient data by managing user access, performing regular data backups, and ensuring the reliability and confidentiality of the system.
- Designed and developed a tracking system for POS device issues.
- Built ER diagrams, interfaces, and core modules in NetBeans.
- Created SSIS packages for data merging, analysis, and reporting.
Education
- Relevant Coursework: Deep Learning, NLP, Computer Vision, Machine Learning Algorithms, AI Ethics
- Final Year Project: Library Management System