My Projects

Past and ongoing projects & learning

Past Projects

Over the course of 13 years in the Research & Development division of one of Japan's leading technology companies, I contributed to a wide range of initiatives. Our mission was to explore emerging technologies and develop tools that could transform our products and strengthen our position in the industry.

Projects typically progressed through multiple stages: research, feasibility confirmation, and eventual integration into commercial products. Some initiatives required years of development before reaching the final stage, reflecting the complexity and long-term vision of our work.

Below is a brief overview of the projects I participated in. Due to contractual obligations, I can share only general descriptions; I cannot disclose specific technologies or development processes, nor respond to detailed inquiries.

Key Contributions
  • Developed an on-premise data collection system that extracted equipment data, converted it into structured formats, generated tables based on a custom data model, and transmitted it to a cloud uploader system
  • Designed the data model for cloud storage integration
  • Built a data generator system to support testing of data collection and upload workflows
Outcomes
  • Delivered a cloud platform ready for testing at customer sites
  • Installed the data collection system on an internal test line for evaluation
  • Provided detailed technical specifications for the data model and software architecture
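Since the real system is confidential, the sketch below only illustrates the general shape of such a pipeline: parse raw equipment output into structured records, then generate a table that follows a fixed data model. The log format, field names, and column order here are entirely hypothetical.

```python
import csv
import io

def parse_equipment_log(raw_lines):
    """Convert raw log lines into structured records.

    Hypothetical format: 'timestamp|sensor=value|sensor=value|...'
    """
    records = []
    for line in raw_lines:
        parts = line.strip().split("|")
        record = {"timestamp": parts[0]}
        for kv in parts[1:]:
            key, value = kv.split("=", 1)
            record[key] = float(value)
        records.append(record)
    return records

def to_table(records, columns):
    """Generate a CSV table following a fixed data model (column order)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    for rec in records:
        # Missing fields are emitted as empty cells to keep the schema stable.
        writer.writerow({c: rec.get(c, "") for c in columns})
    return buf.getvalue()

raw = ["2024-01-01T00:00:00|temp=21.5|pressure=1.01"]
records = parse_equipment_log(raw)
table = to_table(records, ["timestamp", "temp", "pressure"])
```

In the real system the resulting tables were handed off to a separate uploader; here the sketch stops at table generation.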

Key Contributions
  • Built a CI/CD pipeline to automate deployment of new models
  • Automated data preparation, training, and evaluation workflows
  • Migrated the existing on-premise infrastructure to a scalable cloud environment
Outcomes
  • Delivered a functional AI model building platform supporting three types of defect-detection models
  • Secured a patent for the model evaluation and optimization infrastructure
  • Provided comprehensive technical documentation, including installation and operation manuals
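The automated prepare → train → evaluate → deploy flow can be sketched in a few lines. This is an illustration only: the "model" is a toy threshold classifier standing in for the real (confidential) defect-detection models, and the promotion rule simply compares the new score against the currently deployed one.

```python
def prepare(data):
    """Data preparation step: drop samples without a label."""
    return [d for d in data if d.get("label") is not None]

def train(samples):
    """Toy 'model': flag as defect anything at or above the
    smallest reading seen among known defects."""
    defects = [s["x"] for s in samples if s["label"] == 1]
    return {"threshold": min(defects)}

def evaluate(model, samples):
    """Fraction of samples the model classifies correctly."""
    correct = sum(
        1 for s in samples
        if (s["x"] >= model["threshold"]) == (s["label"] == 1)
    )
    return correct / len(samples)

def pipeline(data, current_score):
    """Automated flow: prepare -> train -> evaluate -> deployment decision."""
    samples = prepare(data)
    model = train(samples)
    score = evaluate(model, samples)
    deploy = score > current_score  # promote only if it beats the live model
    return model, score, deploy

data = [
    {"x": 0.1, "label": 0}, {"x": 0.2, "label": 0},
    {"x": 0.8, "label": 1}, {"x": 0.9, "label": 1},
    {"x": 0.5, "label": None},  # incomplete sample, dropped in prepare()
]
model, score, deploy = pipeline(data, current_score=0.9)
```

In the actual platform each step ran as a CI/CD stage; the single-process version above just shows the gating logic.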

Key Contributions
  • Designed the data preparation pipeline for AI model training
  • Built predictive models to estimate subsystem RUL
  • Created a visualization dashboard to monitor RUL forecasts
Outcomes
  • Delivered models achieving 80% accuracy in RUL prediction
  • Deployed a dashboard for real-time visualization of maintenance forecasts
  • Provided technical documentation for model architecture and data workflows
  • Contributed to a patent for subsystem state evaluation methodology
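One common way to estimate remaining useful life (RUL) is to fit a degradation trend and extrapolate it to a failure threshold. The sketch below shows that general idea with a plain least-squares line; it is not the patented methodology, and the health metric and threshold are hypothetical.

```python
def fit_linear(times, health):
    """Least-squares fit of health = a + b * t."""
    n = len(times)
    mt = sum(times) / n
    mh = sum(health) / n
    b = (sum((t - mt) * (h - mh) for t, h in zip(times, health))
         / sum((t - mt) ** 2 for t in times))
    a = mh - b * mt
    return a, b

def predict_rul(times, health, failure_threshold):
    """Extrapolate the degradation trend to the failure threshold."""
    a, b = fit_linear(times, health)
    if b >= 0:
        return None  # no degradation trend detected
    t_fail = (failure_threshold - a) / b
    return max(0.0, t_fail - times[-1])
```

For example, a subsystem degrading from health 100 by 2 units per cycle reaches a threshold of 80 at cycle 10, so at cycle 3 the predicted RUL is 7 cycles.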

Key Contributions
  • Designed the detection methodology for identifying precision decay
  • Authored a guide for analyzing equipment data and interpreting decay indicators
  • Set up a data extraction infrastructure at customer facilities to validate the methodology under real operating conditions
Outcomes
  • Delivered software infrastructure to implement the precision decay detection system
  • Presented a performance report confirming that the data extraction process had no adverse impact on production
  • Secured a patent for the precision decay detection methodology
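The patented detection method itself cannot be shown, but the basic shape of such a detector is a drift check: compare a rolling mean of recent measurements against a calibrated baseline and flag the first point where the deviation exceeds a tolerance. All numbers below are hypothetical.

```python
def detect_decay(measurements, baseline, tolerance, window=5):
    """Flag precision decay when the rolling mean drifts from the
    calibrated baseline by more than the allowed tolerance.

    Returns the index where decay was first confirmed, or None.
    """
    for i in range(window, len(measurements) + 1):
        window_mean = sum(measurements[i - window:i]) / window
        if abs(window_mean - baseline) > tolerance:
            return i - 1
    return None

# Stable readings followed by a gradual drift away from baseline 10.0
readings = [10.0] * 5 + [10.1, 10.2, 10.3, 10.4, 10.5]
```

Averaging over a window rather than testing single readings makes the detector robust to one-off measurement noise, at the cost of confirming decay a few samples later.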

Key Contributions
  • Designed and implemented the data processing workflow in KNIME
  • Developed a Tableau dashboard for control chart visualization
  • Created pattern recognition logic in Tableau to identify anomalies in control chart outputs
Outcomes
  • Delivered a complete SPC infrastructure for monitoring and maintaining equipment precision
  • Produced technical specifications and user manuals for deployment and training
  • Validated the system through a live demonstration using real customer data
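The pattern recognition built in Tableau followed classic control-chart rules. As a language-neutral illustration (the production logic lived in KNIME/Tableau, not Python), the sketch below implements two standard rules: a point outside the 3-sigma limits, and a run of consecutive points on one side of the center line.

```python
def control_limits(samples):
    """Center line and 3-sigma control limits from in-control data."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return mean, mean - 3 * sigma, mean + 3 * sigma

def find_anomalies(samples, center, lcl, ucl, run_length=8):
    """Two classic control-chart rules: a point outside the limits,
    or `run_length` consecutive points on one side of the center line."""
    alerts = []
    side, run = None, 0
    for i, x in enumerate(samples):
        if x < lcl or x > ucl:
            alerts.append((i, "outside_limits"))
        # Ties on the center line count toward the lower side in this sketch.
        current = "above" if x > center else "below"
        if current == side:
            run += 1
        else:
            side, run = current, 1
        if run == run_length:
            alerts.append((i, "run_on_one_side"))
    return alerts

samples = [10.5, 9.5, 14.0, 10.2, 10.3, 10.1, 10.4, 10.2, 10.5, 10.1, 10.3]
alerts = find_anomalies(samples, center=10.0, lcl=7.0, ucl=13.0)
```

A sustained run on one side of the center line signals a shift in the process mean even when no single point breaches the limits, which is exactly the kind of precision drift the SPC infrastructure was built to catch.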

Key Contributions
  • Built a server application to facilitate communication between the equipment's specialized OS and the commercial OS
  • Developed an automated API testing infrastructure to accelerate validation
  • Created startup software responsible for initializing all operational processes, with built-in debugging capabilities
  • Enhanced the file I/O API infrastructure for improved reliability
  • Designed a methodology for analyzing exceptions generated by the specialized OS APIs
Outcomes
  • Delivered a robust server application enabling OS-to-OS communication
  • Reduced API testing time from several weeks to under two hours through automation
  • Delivered startup software capable of both initiating and debugging process sequences
  • Produced detailed API specifications, software documentation, and test protocols
  • Successfully resolved over 150 reported bug cases with no recurrences
  • Authored a manual for systematic analysis of software exceptions
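The testing-time reduction came from replacing manual API validation with a table-driven runner: each case names an API call, its arguments, and the expected result or exception, so adding coverage is just adding a row. The sketch below shows the pattern against a stand-in API class; the real API surface is confidential and `FileApi` is invented for illustration.

```python
def run_api_tests(api, cases):
    """Table-driven runner: each case is (name, function, args, expected),
    where `expected` is either a return value or an exception class."""
    results = []
    for name, func_name, args, expected in cases:
        try:
            actual = getattr(api, func_name)(*args)
            passed = actual == expected
        except Exception as exc:
            passed = isinstance(expected, type) and isinstance(exc, expected)
        results.append((name, passed))
    return results

class FileApi:
    """Hypothetical stand-in for the file I/O API under test."""
    def read_size(self, path):
        if not path:
            raise ValueError("empty path")
        return len(path)

cases = [
    ("returns size", "read_size", ("data.bin",), 8),
    ("rejects empty path", "read_size", ("",), ValueError),
]
results = run_api_tests(FileApi(), cases)
```

Because expected exceptions are first-class entries in the table, error paths get the same automated coverage as success paths, which is what makes running the full suite in hours rather than weeks feasible.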

Ongoing Projects & Learning

Overview

I am creating a cloud service that helps users summarize and visualize their data. A friendly interface lets anyone quickly clean a dataset and build pivot tables and graphs from it.

Highlights
  • Serverless service using Fargate
  • Containerized application with version management using ECR
  • Session timeout control implemented with Lambda functions
  • Landing pages hosted in S3
  • Application interface implemented with Python's Dash library
  • Multilingual interface (English, Spanish, Japanese)
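The session timeout control is a good fit for a small Lambda function. The sketch below shows the general shape of such a handler, assuming the event carries the session's last-activity timestamp; the field names and the 30-minute TTL are illustrative, not the service's actual contract.

```python
import time

SESSION_TTL_SECONDS = 30 * 60  # assumed 30-minute idle timeout

def handler(event, context=None):
    """AWS-Lambda-style handler: decide whether a session has expired.

    `event` is assumed to carry the session's last-activity epoch time;
    `now` can be injected for testing, otherwise the real clock is used.
    """
    last_activity = event["last_activity"]
    now = event.get("now", time.time())
    expired = (now - last_activity) > SESSION_TTL_SECONDS
    return {"session_id": event["session_id"], "expired": expired}
```

Keeping the clock injectable makes the expiry logic trivially testable without mocking Lambda itself.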

Overview

I'm building a multilingual static site infrastructure to host my freelance portfolio and onboarding materials. This project serves as both a technical showcase and a client-facing platform.

Highlights
  • Static hosting via S3
  • CDN acceleration with CloudFront
  • DNS and domain management with Route 53
  • Multilingual layout (English, Spanish, Japanese)

Overview

I'm currently preparing for the AWS Certified AI Practitioner exam to deepen my understanding of cloud-native machine learning workflows. This certification focuses on applying AI services like Amazon SageMaker, Rekognition, and Comprehend to real-world business problems.

Why It Matters
  • Strengthens my ability to prototype and deploy AI solutions
  • Adds credibility for automation and intelligent workflow consulting
  • Aligns with client needs for scalable, cloud-based AI tools
Highlights
  • Sample ML workflows using AWS services
  • Visual guides and templates for non-technical clients

Overview

This long-term certification goal will solidify my expertise in designing and managing data infrastructure using AWS tools like Glue, Redshift, Kinesis, and Lake Formation. It complements my background in data engineering and expands my freelance offerings.

Why It Matters
  • Enables me to build robust, scalable data solutions for clients
  • Supports advanced analytics, reporting, and automation projects
  • Positions me to consult on cloud migration and data strategy
Highlights
  • End-to-end data pipeline demo using AWS
  • Portfolio-ready anonymized case studies
  • Educational content for clients on cloud data architecture