Posts

Showing posts from 2024

Custom Instructions in ChatGPT

Custom Instructions in ChatGPT, also known as Tailored Preferences, are the easiest way to control ChatGPT's output.

Example 1: Tired of typing "Give a brief response" or "Please respond in length" every time, like me? Try the following instruction: "If I add * at the end of my question, please provide a concise, to-the-point response. If I add **, provide a full and comprehensive response. If I do not provide any symbols, please provide a standard response."

Example 2: We can control the model's temperature through the API, but there is no way to do it through the UI. You can use the following instruction to have ChatGPT mimic temperature: "If I specify a temperature between 0 and 1 at the end of my question, please respond accordingly: temperature 0 means highly deterministic responses, so you should always run a web search before responding; temperature 1 allows for greater creativity and freedom in the response."
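For comparison, here is a minimal sketch of setting the real temperature parameter through the API. It follows the older openai Python SDK style used in the other posts below; the model name and prompt are assumptions for illustration only.

import openai

openai.api_key = "<<Add your key here>>"

# temperature=0 keeps answers highly deterministic; values near 1 allow more creative output
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model, swap in whichever you use
    messages=[{"role": "user", "content": "Explain custom instructions in one sentence."}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])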

Glossary of Key Generative AI Terms

1. Generative AI: A subset of artificial intelligence focused on creating new content, such as text, images, audio, video, or code. It relies on models trained on vast datasets to identify patterns and generate similar but unique outputs.

2. Large Language Model (LLM): An AI model trained to process and generate human-like text. Examples include GPT (Generative Pre-trained Transformer), BERT, and LaMDA, all of which leverage deep learning architectures, specifically transformers.

3. Transformer: A neural network architecture known for its ability to process sequential data, like text. Transformers use self-attention mechanisms, enabling the model to learn relationships between words and capture long-range dependencies efficiently.

4. Retrieval-Augmented Generation (RAG): A hybrid approach combining retrieval-based and generative AI techniques. RAG models retrieve relevant information from external sources (e.g., databases or documents) and then use this information to generate contex...
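To make the retrieve-then-generate idea behind RAG concrete, here is a minimal, illustrative sketch. The toy keyword retriever, the sample documents, and the model name are my own assumptions, not part of the glossary.

import openai

openai.api_key = "<<Add your key here>>"

documents = [
    "LlamaParse converts complex PDFs into well-structured markdown.",
    "Custom Instructions let users steer ChatGPT's default behaviour.",
]

def retrieve(question, docs):
    # Toy retriever: pick the document sharing the most words with the question
    scores = [len(set(question.lower().split()) & set(d.lower().split())) for d in docs]
    return docs[scores.index(max(scores))]

def answer(question):
    context = retrieve(question, documents)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

print(answer("What does LlamaParse do?"))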

LlamaParse: Incredibly good at parsing PDFs

What is LlamaParse? LlamaParse is a proprietary parsing service that is remarkably good at parsing PDFs with complex tables into a well-structured markdown format. It integrates directly with LlamaIndex ingestion and retrieval, letting you build retrieval over complex, semi-structured documents, and it promises to answer complex questions that weren't possible previously. The service is available in public preview: open to everyone, but with a usage limit of 1,000 pages per day and 7,000 free pages per week, after which it costs $0.003 per page ($3 per 1,000 pages). It operates as a standalone service that can also be plugged into the managed ingestion and retrieval API. Currently, LlamaParse primarily supports PDFs with tables, but better support for figures and an expanded set of the most popular document types (.docx, .pptx, .html) are planned as part of the next enhancements. Code Implementation: Install required dependencies: a) Create requirements.txt in t...
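A minimal sketch of the basic flow, assuming the llama-parse and llama-index packages are installed and a LlamaCloud API key is available; the file name "report.pdf" and the query are hypothetical.

from llama_parse import LlamaParse
from llama_index.core import VectorStoreIndex

parser = LlamaParse(
    api_key="<<Add your LlamaCloud key here>>",  # or set the LLAMA_CLOUD_API_KEY environment variable
    result_type="markdown",                      # complex tables come back as markdown
)

# Parse the PDF into LlamaIndex Document objects
documents = parser.load_data("report.pdf")

# Build a simple retrieval index over the parsed markdown
# (embeddings and the LLM default to OpenAI, so OPENAI_API_KEY must also be set)
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

print(query_engine.query("Summarise the key figures in the largest table."))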

OpenAI Learning:- Chapter 6

Using Generative AI for Audio/Video Processing: Power of Summarization

What Is the Purpose of This Application?
This application summarizes audio and video content. It is a useful tool for users who want to quickly create bullet-point summaries of audio/video material. A sketch of the core summarization step follows the sample code below.

Sample code:

from langchain.document_loaders import youtube
from langchain.text_splitter import RecursiveCharacterTextSplitter
import openai
import streamlit as st

openai.api_key = "<<Add your key here>>"

st.set_page_config(page_title="YouTube Audio/Video Summariser App")
st.markdown(
    """<p style="color: #3fd100;font-size: 30px;font-family: sans-serif;text-align:center;margin-bottom:0px;"><b>YouTube Audio/Video </b><span style="color: #3fd100;font-size: 30px;font-family: sans-serif;"><b>Summariser App</b></span></p><p></p>""",
    unsafe_allow_html=True,
)
st.head...
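The excerpt above stops at the UI setup, so here is an illustrative sketch of what the summarization step typically looks like with these imports: load the YouTube transcript, split it into chunks, and ask the model for bullet points. The chunk sizes, prompt wording, and model name are my own assumptions, not the post's exact code, and the loader requires the youtube-transcript-api package.

from langchain.document_loaders import YoutubeLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
import openai

openai.api_key = "<<Add your key here>>"

def summarise_video(url):
    # Fetch the video transcript as LangChain documents
    docs = YoutubeLoader.from_youtube_url(url).load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
    chunks = splitter.split_documents(docs)

    bullets = []
    for chunk in chunks:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # assumed model for illustration
            messages=[{
                "role": "user",
                "content": "Summarise this transcript excerpt as bullet points:\n" + chunk.page_content,
            }],
        )
        bullets.append(response["choices"][0]["message"]["content"])
    return "\n".join(bullets)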

OpenAI Learning:- Chapter 5

Generative AI-Powered Audio/Video Processing: Whisper's Python Adventure

What Is the Purpose of This Application?
An application that demonstrates Generative AI's ability to process and analyze audio/video files and output the lyrics (transcript) for that audio or video file. A sketch of the transcription pipeline follows the sample code below.

Sample code:

import streamlit as st
from pytube import YouTube
import os
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

def get_mp3(url):
    # Download the audio-only stream of the YouTube video and rename it to .mp3
    yt = YouTube(str(url))
    audio = yt.streams.filter(only_audio=True).first()
    destination = '.'
    out_file = audio.download(output_path=destination)
    base, ext = os.path.splitext(out_file)
    new_file = base + '.mp3'
    os.rename(out_file, new_file)
    return new_file

def get_transcript(audio_file):
    device = "cuda:0" if torch.cuda.is_available() else "cpu"  # use the GPU if available, otherwise the CPU
    ...
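The excerpt cuts off after the device selection, so here is an illustrative sketch of how a Whisper transcription pipeline is typically assembled with these transformers imports; the checkpoint name and the placeholder file name are assumptions, not necessarily what the post uses.

import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

def build_whisper_pipeline():
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if torch.cuda.is_available() else torch.float32

    model_id = "openai/whisper-small"  # assumed checkpoint for illustration
    model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=dtype)
    processor = AutoProcessor.from_pretrained(model_id)

    # Wire the model, tokenizer, and feature extractor into a speech-recognition pipeline
    return pipeline(
        "automatic-speech-recognition",
        model=model,
        tokenizer=processor.tokenizer,
        feature_extractor=processor.feature_extractor,
        torch_dtype=dtype,
        device=device,
    )

# asr = build_whisper_pipeline()
# print(asr("audio.mp3")["text"])  # "audio.mp3" is a placeholder file name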

Implementing Open AI in Software Testing: Creating a Model for Test Case Review/Optimization

This article shows how QA teams and developers can create AI assistants by integrating OpenAI's cutting-edge AI technologies. In this instance, we set up the OpenAI package using our own API key. This key is essential because it gives us access to the OpenAI platform and lets us take advantage of all its features. A minimal illustrative sketch of such an app follows the setup steps below.

Steps to Build the Model and Web App:

Pre-requisites:
- Go through this: https://platform.openai.com/docs/quickstart?context=python
- Install Python and other dependencies such as streamlit and openai on your machine using the pip package installer if you plan to run it locally.
- Create an OpenAI account and generate an API key using https://platform.openai.com/api-keys (Note: you receive $5 of free credit when signing up with your mobile phone, which is sufficient to experiment with.)

Step 1: Create a GitHub Account and Create a New Repository
1. Go to GitHub's website...
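Independent of the step-by-step setup, here is a minimal sketch of the kind of Streamlit app the article builds: a page that sends a test case to OpenAI for review. The prompt wording, page title, and model name are my own illustration, not the article's exact implementation.

import openai
import streamlit as st

openai.api_key = "<<Add your key here>>"

st.set_page_config(page_title="Test Case Review Assistant")
st.title("Test Case Review / Optimization")

test_case = st.text_area("Paste a test case to review")

if st.button("Review") and test_case:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model for illustration
        messages=[
            {"role": "system",
             "content": "You are a QA reviewer. Point out gaps and redundant steps, "
                        "and suggest an optimized version of the test case."},
            {"role": "user", "content": test_case},
        ],
    )
    st.write(response["choices"][0]["message"]["content"])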