Hey, I'm Dev Shah
Hey, I'm Dev Shah and I'm a 19-year-old Machine Learning researcher based in Toronto. I’m currently an ML researcher in the Medical Imaging x AI lab under Dr. Tyrrell, looking to integrate Artificial Intelligence into clinical settings to improve the diagnostic process. I’m also studying Computer Science at the University of Toronto. My ultimate aspiration is to make a significant and lasting positive impact on the world, with the hope of touching the lives of billions. While I acknowledge that achieving this goal may be a long and challenging journey, it is one that I am wholeheartedly committed to pursuing. Check out my portfolio and skills below! I will be posting updates on my website and through my newsletters, so make sure to subscribe below! Stay tuned :)
COMPUTER SCIENCE UNDERGRADUATE AT UNIVERSITY OF TORONTO
Data Analysis, Software Development, Object-Oriented Programming, Mathematics, Data Structures & Algorithms
INNOVATOR AT THE KNOWLEDGE SOCIETY
Exponential Technologies, Real-World Skills, People & Leadership, Character & Mindset, Consulting & Advisory, Artificial Intelligence
INTERNATIONAL BACCALAUREATE PROGRAM
Critical Thinking, Complex Problem Solving, Engaging with Global Challenges
Machine Learning Engineer -- Interactions LLC.
Created an LLM-driven avatar using NVIDIA’s Omniverse platform and the Audio2Face interface to enhance customer service. Leveraged a tech stack including Python, PyTorch, AWS, Docker, gRPC, Hugging Face, and NVIDIA Audio2Face.
Machine Learning Researcher -- University of Toronto
Developing a large-scale machine learning model that uses contrastive learning for feature extraction and diagnosis of knee ultrasounds (a sketch of the contrastive objective is shown below).
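For a rough idea of what a contrastive objective looks like, here is a minimal PyTorch sketch of an InfoNCE-style loss; the embedding sizes, temperature, and batch are illustrative assumptions, not the lab's actual training code.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N scans."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) pairwise similarities
    labels = torch.arange(z1.size(0))    # matching views sit on the diagonal
    return F.cross_entropy(logits, labels)

# Illustrative call: embeddings of two augmentations of a batch of 8 ultrasound images.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
```

Pulling matching views together while pushing the rest of the batch apart is what lets an encoder learn useful ultrasound features without needing diagnosis labels.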
Research Intern -- Interac
Sensing the market through solid research and analysis of trends and technologies that could directly and indirectly impact the future of financial services.
Ability to analyze and interpret complex data
Proficient in developing software using various programming languages and tools
Strong understanding of object-oriented programming concepts and design patterns
Strong mathematical skills including calculus, linear algebra, and statistics
Applied Machine Learning
Random Forest, XGBoost, Decision Trees, Fast Fourier Transform, TensorFlow, pandas, NumPy, Matplotlib, NLTK, scikit-learn
Machine Learning Frameworks
U-Net, Neural Networks (CNNs, RNNs), Natural Language Processing, Transformers, LLMs
Git, TensorFlow, VS Code, Visual Studio, PyCharm, Jupyter Notebook, Google Colab, Power BI
LLM-Driven Avatar
Developed an LLM-driven avatar using NVIDIA’s Omniverse platform and the Audio2Face interface, integrating cutting-edge machine learning libraries to enhance installation-related customer service. The system delivered a 47% reduction in support ticket escalations, a 32% decrease in installation process duration, and a 92% customer satisfaction rate in post-implementation surveys. Leveraged a tech stack including Python, PyTorch, AWS, Docker, gRPC, Hugging Face, and NVIDIA Audio2Face.
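To give a feel for how the pieces fit together, here is a minimal sketch of the request flow, assuming a Hugging Face text-generation pipeline for the language step; `synthesize_speech` and `push_audio_to_audio2face` are hypothetical placeholders for the text-to-speech and gRPC-to-Audio2Face stages, not the real APIs.

```python
from transformers import pipeline

# Placeholder model for illustration; the production system used a larger LLM.
generator = pipeline("text-generation", model="gpt2")

def answer_customer(question: str) -> str:
    """Generate a support answer for an installation question."""
    prompt = f"Customer: {question}\nSupport agent:"
    out = generator(prompt, max_new_tokens=64, do_sample=False)
    return out[0]["generated_text"][len(prompt):].strip()

def synthesize_speech(text: str) -> bytes:
    """Hypothetical TTS hook that turns the answer into waveform audio."""
    raise NotImplementedError

def push_audio_to_audio2face(wav_bytes: bytes) -> None:
    """Hypothetical gRPC client call that streams the audio into NVIDIA
    Audio2Face, which drives the Omniverse avatar's facial animation."""
    raise NotImplementedError

def handle_ticket(question: str) -> None:
    push_audio_to_audio2face(synthesize_speech(answer_customer(question)))
```

The design point is the decoupling: the LLM only produces text, and the avatar side consumes plain audio, so either half can be swapped out independently.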
EyeSpy: a multimodal AI model to help the visually impaired.
Developed a multimodal AI system to help the visually impaired navigate crowded spaces. The program uses the user’s iPhone camera (ideally, this would be a set of glasses with a built-in camera for ease of use) to scan their surroundings, sampling images every few seconds and passing them to a Detectron2 model. The model analyzes each image, performs object detection, and creates bounding boxes around the objects near the user. The bounding boxes are processed in the backend and converted into simple English descriptions (e.g. "there is a chair on the left") by splitting the image into a grid of 5x5-pixel boxes. These descriptions are fed to Cohere’s LLM, which produces an in-depth explanation of how to navigate and proceed forward, and that explanation is delivered to the user as audio using Whisper from OpenAI.
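For a rough idea of how these stages connect, here is a minimal sketch under stated assumptions: a COCO-pretrained Faster R-CNN from the Detectron2 model zoo stands in for the detector, a coarse left/ahead/right split simplifies the finer pixel grid described above, the Cohere call follows the generate endpoint of its Python client, and `speak` is a hypothetical placeholder for the audio stage.

```python
import cv2
import cohere
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.engine import DefaultPredictor

# Off-the-shelf COCO detector standing in for the project's model.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
predictor = DefaultPredictor(cfg)
classes = MetadataCatalog.get(cfg.DATASETS.TRAIN[0]).thing_classes

def describe_frame(frame) -> str:
    """Turn detections into short phrases like 'a chair on the left'."""
    h, w = frame.shape[:2]
    instances = predictor(frame)["instances"].to("cpu")
    phrases = []
    for box, cls in zip(instances.pred_boxes.tensor, instances.pred_classes):
        x_center = (box[0] + box[2]).item() / 2
        position = ("on the left" if x_center < w / 3
                    else "on the right" if x_center > 2 * w / 3
                    else "ahead")
        phrases.append(f"a {classes[int(cls)]} {position}")
    return "; ".join(phrases) or "no obstacles detected"

def navigation_advice(scene: str, co: cohere.Client) -> str:
    """Ask the LLM for a short walking instruction given the scene description."""
    prompt = f"Scene: {scene}\nGive one short walking instruction for a visually impaired person:"
    return co.generate(prompt=prompt, max_tokens=60).generations[0].text.strip()

def speak(text: str) -> None:
    """Hypothetical text-to-speech hook; the real project delivers audio to the user."""
    print(text)  # stand-in for audio playback

# Illustrative file name; in production, frames are sampled from the phone camera.
frame = cv2.imread("street.jpg")
advice = navigation_advice(describe_frame(frame), cohere.Client("COHERE_API_KEY"))
speak(advice)
```

Splitting the frame into thirds is a deliberately coarse stand-in for the pixel grid above, but it shows the key idea: detections are translated into plain-language spatial phrases before the LLM ever sees them.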